A long article by Dan Bricklin entitled "Learning From Accidents and a Terrorist Attack" reviews Charles Perrow's book on Normal Accident Theory, "Normal Accidents". Normal Accident Theory holds that systems which are tightly coupled and interactively complex are prone to weird and wonderful failures. Sounds like financial cryptography to me! The article is long, and only recommended for die-hard software engineers who write code that matters, so I'll copy only the extracts on missile early warning failures below. For the rest, go to the source.
One example he gives of independent, redundant systems giving operators extra information is the early warning system for incoming missiles in North America at the time (the early 1980s). He describes the false alarms, several every day, most of which are dismissed quickly. When an alarm comes in to a command center, a telephone conference is started with duty officers at other command centers. If it looks serious (as it does every few days), higher-level officials are added to the conference call. If it still looks real, a third-level conference is started that includes the president of the U.S. (which had never happened up to that time). The false alarms usually come from weather or birds that look like a missile launch to satellites or other sensors. By checking with sensors that use independent technology or inputs, such as radar, the operators can see the lack of confirmation. They also look at intelligence on what the Soviets are doing (though the Soviets may be reacting to similar false alarms themselves, or to their surveillance of the U.S.).
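The pattern described above, escalate only when independent technologies confirm each other, is simple enough to sketch in code. The fragment below is my own illustration; the sensor names, data shapes and escalation levels are invented and come from neither Perrow's book nor any real warning centre.

    # Minimal sketch of the cross-checking idea: an alert escalates only when
    # independent sensor technologies agree. All names and levels here are
    # illustrative, not taken from the article or any real warning system.

    from dataclasses import dataclass

    @dataclass
    class Reading:
        sensor_type: str    # e.g. "satellite", "radar" -- independent technologies
        launch_seen: bool

    def independent_confirmations(readings):
        """Count how many distinct sensor technologies report a launch."""
        return len({r.sensor_type for r in readings if r.launch_seen})

    def escalation_level(readings):
        """0 = dismiss, 1 = duty-officer conference, 2 = add senior officials."""
        confirmations = independent_confirmations(readings)
        if confirmations == 0:
            return 0        # nothing seen: no conference
        if confirmations == 1:
            return 1        # one technology only: likely weather, birds, a stray tape
        return 2            # independent technologies agree: push it up the chain

    # A satellite return with no radar confirmation stays at the routine level;
    # satellite plus radar together escalate.
    print(escalation_level([Reading("satellite", True), Reading("radar", False)]))   # 1
    print(escalation_level([Reading("satellite", True), Reading("radar", True)]))    # 2

The design point is that a single technology's false positive (birds, weather, a stray training tape) never clears the confirmation bar by itself.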
In one false alarm, in November 1979, many of the monitors reported what looked exactly like a massive Soviet attack. While it was being checked out, ten tactical fighters were sent aloft and U.S. missiles were put on low-level alert. It turned out that a training tape on an auxiliary system had found its way into the real system. The alarm was suspected of being false within two minutes, but was only certified false after six (for a submarine-launched attack, preparing a counter-strike takes ten minutes). In another false alarm, test messages had a bit stuck in the 1 position due to a hardware failure, indicating 2 missiles instead of zero. There was no loopback to help detect the error.
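The stuck-bit failure is exactly the case a loopback is meant to catch: read back what was just sent and compare it with what was intended. Here is a minimal sketch of that check; the message encoding and fault model are invented for illustration and are not from the real system.

    # Illustrative sketch of a loopback (read-back-and-compare) check that would
    # catch a stuck bit in a test message. The message encoding and the fault
    # model are invented here; nothing is taken from the actual warning system.

    STUCK_BIT = 0x02    # hardware fault: this bit always reads back as 1

    def faulty_channel(word: int) -> int:
        """Simulate the broken hardware path: one bit is stuck at 1."""
        return word | STUCK_BIT

    def send_with_loopback(word: int) -> int:
        received = faulty_channel(word)
        if received != word:    # read back and compare with what was sent
            raise IOError(f"loopback mismatch: sent {word:#06x}, read {received:#06x}")
        return received

    # A test message meaning "zero missiles" reads back as "two missiles";
    # with the loopback check the fault is flagged instead of being believed.
    try:
        send_with_loopback(0x0000)
    except IOError as err:
        print(err)    # loopback mismatch: sent 0x0000, read 0x0002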
Posted by iang at September 17, 2004 06:10 AM | TrackBack

> the Soviets may be reacting to similar false alarms
> themselves or to their surveillance of the U.S.
Yes they do: I once read an article about how Putin himself and high-level military advisers were intently watching the progress of a rocket that had been launched off the polar coast of Norway. The launch had not been announced, the rocket's signature was close to that of a military missile and, most ominously, its trajectory fell entirely within a cone over which the Russians had limited monitoring abilities due to the failure of one of their antiquated satellites. In the end it was shown to be a scientific payload from a small Nordic research organization that had not cleared the launch through the appropriate channels.
I am not sure what this has to do with cryptography, though...
-- O.L.
Posted by: Olivier at September 17, 2004 11:20 AM

Excellent example!
The above discussion has *nothing* to do with cryptography; this is the software engineering layer: how we build financial cryptography systems with dollops of software engineering practice. In the whole FC concept, one could say that the point is that it isn't really about cryptography... but that's hotly disputed by some, including RAH, who invented the term!
Posted by: Iang at September 17, 2004 11:23 AM

> though the Soviets may be reacting to similar false alarms themselves
In 1983, Soviet Lt. Col. Stanislav Petrov chose to ignore alarms of incoming US missiles and spared us a great deal of pain. http://www.worldcitizens.org/forgottenhero.html
Posted by: user at September 22, 2004 06:34 AM