February 18, 2005

The Goal of Security

Jeroen van Gelderen once made the remark that FreeBSD has a security goal, but OpenBSD has *the* security goal.

Coming from a goals-oriented background - more about that later - I immediately understood and thought no more of it. Now I want to find out more, and ... it seems this is not a formal position of these open source operating systems, but rather an informal shared understanding. Hence Jeroen's remark was even more perceptive, and I have no easy guidelines to copy from.

The Goal of Security is quite significant. As part of this, both OSs have security officers. A security officer is often someone with quite significant power: he or she coordinates patches, but can also hold back releases and force the inclusion or exclusion of packages. (Now, that might not sound all that significant in normal project management terms, but in the Internet world of open source, where rough consensus rules and most of the work is done by people who are unpaid, it's a big deal. It's recognition that when it comes to security, rough consensus might not be the only way to do things. And it's about the only override to that rough and ready world.)

As well as the security officer, there are also policy statements on how exploits are handled, and on disclosure methods. Generally, in a Security Goal oriented project, anyone can report a security bug, and there are often easy ways to do it. If one has ever tried to report a bug through formal channels, one knows that an informal reporting channel is worth a lot.

There are also security subprojects within, such as code audits, module rewrites, and cross-divisional re-alignments. The weight given to the security goal underscores to all the disparate subprojects that this particular one might be directing requirements their way. This is another big change in the Open Source world - the database people don't listen to the GUI people, and the network stack people don't listen to the email people ... but they all listen to the security people in a Security Goal project.

Expressing a goal is thus an extraordinarily useful thing. It brings tortuous mailing list arguments back to a grounded context. It identifies what we are all striving to achieve, and if we can't demonstrate how our argument moves towards the goal, then we aren't helping.

Yet, goals are almost a dirty word. They were great management-speak back in the 80s and 90s, along with various other "must do" things like vision statements. They came, and they went; I haven't heard anyone talking about a goal for years. One could say this means they didn't work, but I'd say that's unfair - goals are just hard to understand until you've lived them.

Rather than debug the failure of management-speak, let's go back to where goals came from: the military, which is where I get my goal-oriented perspective. There, for fighting soldiers, they are called objectives, and they are taught to everyone from corporal up. Why the military? I guess because in the heat of battle it's easy to forget why you are there, and stressing something solid and basic has real value when the bullets are whizzing overhead.

The military has its own set of verbs. Soldiers redefine "to capture", "to destroy", "to kill" ... so they can stress the objectively measurable event in their own language. When orders are given to troops, the mission - another word for today's goal - is the most important statement, and the leader reads it out twice. The leader reads the mission out twice! After the plans are given, soldiers are asked: "what's the mission?"

If they didn't do these things, soldiers would go out and do the wrong thing. Trust me on this: it's hard enough to direct soldiers with the help of solid goals, and it is impossible without them; many a military campaign has foundered on poor goals. Which leads us to their definition of the objective:

The objective is the one thing that if you have achieved it, you have done what you should have, and if you have not done it, then anything else that you might have done was in vain.

It's a tricky thing to define, and I'm going to skip that for now. But at some point it settles into the mind, and one can be quite incisive. As long as the objective is set, it is possible to measure any proposal against it, and thus we ground everything in the reality of our goal.

But to do this, the goal has to be stated. It has to be taught to soldiers who didn't finish grade school but can be taught to fire a rifle, and equally, it has to be stressed to people in complex projects built from opposing perspectives. When people from the crypto, legal, governance, computer science, finance, accounting, retail, charity, economics, government and who knows what other disciplines crowd into one contentious applications sphere such as the Internet, it feels to me like I didn't finish grade school, and the bullets are whizzing overhead.

But I know how to fire a rifle. I know how to do my bit, so ... what's the goal? What on this hellish earth is going to convince me to lift my head up and risk those bullets?

When it comes to security, I think it suffices for an OpenBSD-like project to state that Security is the Goal. Or, in the case of a more general-purpose OS like FreeBSD, that Security is a Goal, and that other Goals exist which we deliberately and explicitly need to juggle - with the important caveat that Security is the only Goal that can override the others.

It needs to be stated and it needs to be shared. Also needed are what we call Limitations on the Goal. These are important secondary statements that qualify the Goal. By way of example, here are some good ones:

  • As delivered, out of the box. This means we are concentrating on a deliverable that is secure on install, not one that needs tightening up by an expert; even more importantly, we may choose to make certain things harder where doing so makes them more secure 'out of the box'.
  • For the average user. Which means my Mom. Not the security expert, not the Internet techie, and not the corporate IT department.
Before you cry foul at those limitations, ponder them for a bit. I picked them because they happen to be the unstated Limitations on an unstated security goal for a particular project (Mozilla). Your project might reverse those limitations or pick others. But also note how they have a defining quality that locks your mind into a conflict.

When you feel that you are in conflict with the goal and its limitations, that's when the goal is doing its job. Face the conflict - either bring your thoughts into alignment with the goal, or ask why the goal is as it is!

To summarise this long rant, I'd encourage anyone who has an interest in a security project to start thinking about explicitly setting the goal or goals. Making that work means stating it, initially. But it is in the follow-through that the reward is found:

  • state your limitations
  • appoint your security officer or security director
  • design the exploit handling procedure
  • work out the disclosure policy
  • get used to thinking in terms of "how does this help us meet the goal?"
  • start the background projects needed to meet the goal in the future
  • develop an open understanding of where you fall short of the goal!

And above all, share and document this process. When you've done this, you'll be able to establish the credibility of the project and the people in a very simple, objective and measurable statement:

Our Goal is Security.

Posted by iang at February 18, 2005 12:44 PM

Comments

I've been meaning to write a little blurb on the similarities between 5PM governance and good software security.

If you look at qmail (which has gone years without an exploit) and the "qmail security guarantee" page:

http://cr.yp.to/qmail/guarantee.html

it strikes me that point (4) is similar to 5PM for governance.

To quote: "Move separate functions into mutually untrusting programs."

If only people who write daemons that talk TCP/IP would read and implement things like this, the number of break-ins would drop like a stone.

It took openssh being cracked before they rushed around and implemented privilege separation. Why not take it a step further, like DJB has?

See point (4) in the qmail security guarantee.

Anyone planning to implement a service accessible over the internet has to think like this.
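
To make point (4) concrete, here is a minimal sketch of the pattern in POSIX C: a privileged parent forks a mutually untrusting worker, which drops to an unprivileged identity before touching any untrusted input, and the two sides talk only over a pipe. The UID/GID values are hypothetical placeholders, and the program has to start with privilege in order to shed it.

    /* Privilege-separation sketch (illustrative only). */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define UNPRIV_UID 32767  /* hypothetical unprivileged account */
    #define UNPRIV_GID 32767

    int main(void)
    {
        int fds[2];
        pid_t pid;

        if (pipe(fds) == -1) { perror("pipe"); return 1; }
        pid = fork();
        if (pid == -1) { perror("fork"); return 1; }

        if (pid == 0) {              /* child: the untrusting worker */
            close(fds[0]);
            /* Drop group first, then user; after this a compromise
             * of the parsing code yields only an unprivileged process. */
            if (setgid(UNPRIV_GID) == -1 || setuid(UNPRIV_UID) == -1) {
                perror("drop privileges");
                _exit(1);
            }
            /* ... parse untrusted network input here ... */
            write(fds[1], "parsed-ok\n", 10);
            _exit(0);
        }

        close(fds[1]);               /* parent: privileged, never sees raw input */
        char buf[64];
        ssize_t n = read(fds[0], buf, sizeof buf - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("worker said: %s", buf);
        }
        waitpid(pid, NULL, 0);
        return 0;
    }

The point is that the code most exposed to attack runs with nothing worth stealing, while the privileged side only ever sees the narrow result protocol.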

Why not do that for userland programs as well - each user should have several UIDs available to them - the web browser should run under a separate UID, for instance, and the JVM another. Any Java or browser exploit would then not be able to touch any of your data. How about writing code that picks unused UIDs, much like programs can pick unprivileged ports above 1024? Every instance of the web browser you launch can run as another UID.
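
And here is a rough sketch of that in POSIX C. The UID range scanned and the browser binary name are assumptions, the GID is lazily reused from the UID, and it needs to start with enough privilege to call setuid():

    /* Launch a browser under a throwaway, currently-unassigned UID. */
    #include <pwd.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Scan an arbitrary "ephemeral" range for a UID with no passwd
     * entry - analogous to grabbing an unused port above 1024. */
    static uid_t pick_unused_uid(void)
    {
        uid_t uid;
        for (uid = 60000; uid < 65000; uid++)
            if (getpwuid(uid) == NULL)
                return uid;
        return (uid_t)-1;
    }

    int main(void)
    {
        uid_t uid = pick_unused_uid();
        if (uid == (uid_t)-1) {
            fprintf(stderr, "no free UID found\n");
            return 1;
        }
        /* Drop to the throwaway identity, then exec the browser:
         * an exploit now owns only this one disposable UID. */
        if (setgid((gid_t)uid) == -1 || setuid(uid) == -1) {
            perror("drop to throwaway UID");
            return 1;
        }
        execlp("mozilla", "mozilla", (char *)NULL);  /* hypothetical browser */
        perror("execlp");
        return 1;
    }

Each launch picks a fresh UID, so two browser instances can't even touch each other's files.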

If you are stuck running what is available and are planning a web server application... separate the web server from the database. Put the IDS logger on yet another box. Put a squid in front of the web server... the valuable stuff - i.e., your data - should be the furthest from the outside network... and please use stack smashing protection and avoid bad libraries that don't implement sanity checks...

When you are using a web browser and want to check out dubious sites, run it as a separate user with an ephemeral /tmp as home. You can ssh back to your own box and the DISPLAY gets set automagically.

This is what brings security, not running around pretending that everyone writes perfect code.

Cheers!

Posted by: Venkat Manakkal at February 18, 2005 02:02 PM

Nice page, that. Sounds like Dan Bernstein is an old-timer like myself.

That's quite perceptive, to relate DJB's #4 to 5PM's separation of concerns! It is the same thing, effectively, except 5PM takes its lead from accounting and governance. We do exactly the same thing in our payment systems, with sometimes half a dozen different processes/daemons between the net and the backend.

Patrick, check out #7 :-)

Posted by: Iang at February 18, 2005 02:40 PM

> Patrick, check out #7 :-)

Good one, yes!

(I had previously discussed the use of malloc with Ian, and in particular mentioned the special allocator I use in Fexl.)

I'm an old-timer too!

Posted by: Patrick Chkoreff at February 18, 2005 10:54 PM

The trouble is that if security is the goal, you don't want an OS, you want a backhoe and a concrete mixer. That way, you can dig a deep hole, drop your computer system down it, and backfill the hole with concrete. The resulting computer will have no useful functionality, but since functionality is not a goal, it doesn't matter: it is indisputably secure, so you have done what you should have done.

Posted by: John Cowan at March 5, 2013 06:58 AM