September 03, 2009

How to avoid Yesterday's Unknowns - Algorithm Agility in Protocols

Following yesterday's post, here's a present-day example of thinking about unknowns -- yesterday's, today's and tomorrow's. Currently the experts in crypto and protocol circles are championing "algorithm agility". Why? Because SHA1 is under a cloud, and MD5 is all but drowned. They should be replaced!

MD5 is yesterday's unknown. I vaguely recall that MD5 was "not best" back in 1995, and the trick was to put in SHA1 (and not SHA0). So is SHA1: this blog has frequently reported on SHA1 running into stormy weather, ever since the original Wang et al papers. I even placed it into a conceptual framework of Pareto security. We've known this since mid-2004, which makes it very much yesterday's news.

Unfortunately, a lot of groups did not heed the warnings, and are still running systems based either loosely or completely on SHA1, MD5 or worse. And now the researchers have the bit between their academic teeth, and are attacking the CAs and their certs.

The good news is that people are now working to replace SHA1. The bad news is that they are looking around for someone (else) to blame. And one of the easy targets is the protocol itself, which is "insufficiently agile". Developers can do something about this: they can add "algorithm agility" and then the problem will go away, so the theory goes.

But another story can be seen in the SSL and OpenPGP communities. Both of these places spent inordinate amounts of time on algorithm agility in the past: RSA, DSA, bit strength, hashes within signatures, AES, DES, Blowfish, ... these are all negotiable, replaceable, variable _within the protocol_ at some level or other. In fact, during the 1990s, a great and glorious war was fought against patents, governments, businesses, individuals and even bits&bytes. For various reasons, each of these battlegrounds became a siren call for more agility.
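
To make that concrete, here is a minimal sketch -- purely illustrative, not the actual SSL/TLS or OpenPGP wire format -- of what per-slot agility tends to look like: every contested algorithm gets its own negotiable identifier inside the message.

    # Illustrative sketch only: not the actual SSL/TLS or OpenPGP encoding.
    # Each algorithm slot from the 1990s wars is independently negotiable.
    from dataclasses import dataclass

    # Hypothetical identifier registries, one per contested battleground.
    PUBKEY_ALGS = {1: "RSA", 2: "DSA"}
    CIPHER_ALGS = {1: "3DES", 2: "Blowfish", 3: "AES-128"}
    HASH_ALGS   = {1: "MD5", 2: "SHA-1"}   # the slot nobody expected to need replacing

    @dataclass
    class AgileHeader:
        pubkey_alg: int   # negotiable signature / key algorithm
        cipher_alg: int   # negotiable bulk cipher
        hash_alg: int     # nominally negotiable hash

        def describe(self) -> str:
            return (f"{PUBKEY_ALGS[self.pubkey_alg]} / "
                    f"{CIPHER_ALGS[self.cipher_alg]} / "
                    f"{HASH_ALGS[self.hash_alg]}")

    print(AgileHeader(pubkey_alg=1, cipher_alg=3, hash_alg=2).describe())
    # -> RSA / AES-128 / SHA-1

The shape is the point: each battleground of the last war got its own negotiable byte, and the hash, never in dispute at the time, was the slot nobody finished making replaceable.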

But it seems that both camps forgot to make the hash function fully agile and replaceable. (Disclosure: I was part of the OpenPGP group, so I have more of a view on that camp. There were occasional muted discussions on this, but in general, the issue was always deferred. There was a mild sense of urgency, knowledge of the gap, but finishing the document was always more important than changing the protocol. I still don't disagree with that sense of priorities.)

Hash replacement was forgotten because all of the effort was spent on fighting the last war. Agility wasn't pursued in its generality, because it was messy, complex and introduced more than its share of problems; instead, only the known threats from the last war were made agile. And one threat was forgotten, of course: the one that was never in dispute.

Instead of realising the trap, and re-thinking the approach, the cries of algorithm agility are getting louder. In 2019 we'll be here again, with complete and perfect hash agility (NIST-approved no doubt) and what will happen? Something else will break, and it'll be something that we didn't make agile. The cycle will start again.

Instead, cut the Gordian knot, and go with The One:

"There is one cipher suite, and it is numbered Number 1."

When you see a cloud, don't buy an umbrella; replace the climate. With #2. Completely. This way, you and your users will benefit from all of the potential for a redesign of the entire protocol, not just the headline things from the last war that catch up to you today.
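
Here is what that looks like in a minimal sketch (an illustration only, not any deployed protocol): the protocol version is the cipher suite, each version fixes every algorithm at once, and the upgrade path is to define #2 completely and migrate.

    # Illustrative sketch of "there is one cipher suite, and it is numbered 1".
    # The version number *is* the suite; nothing inside it is negotiable.
    # Algorithm names here are assumptions for illustration only.
    from typing import NamedTuple

    class Suite(NamedTuple):
        pubkey: str
        cipher: str
        hash: str

    SUITES = {
        1: Suite(pubkey="RSA-2048", cipher="AES-128-CBC", hash="SHA-1"),
        # When the climate changes, define #2 completely and roll forward:
        2: Suite(pubkey="RSA-2048", cipher="AES-256-GCM", hash="SHA-256"),
    }

    def negotiate(offered_versions: list) -> Suite:
        # The only negotiation left is "which complete protocol do we both speak?"
        version = max(v for v in offered_versions if v in SUITES)
        return SUITES[version]

    print(negotiate([1, 2]))   # both peers upgraded -> suite #2, completely
    print(negotiate([1]))      # old peer            -> suite #1, completely

The one remaining negotiation is over the unit you replace every cycle anyway, so the "agility" lives at the level of the whole protocol rather than in each algorithm slot.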

Most well-designed systems will last at least 5 years, which gives room for a general planning cycle for replacing the lot. That -- once you understand the strategy -- is a tractable design problem, because in modern software product development, most products are replaced completely every few years. A protocol need be no different.

So, once every cycle, you might spend a year cleaning up from yesterday's war. And then we get back to thinking about tomorrow's war for several years.

Posted by iang at September 3, 2009 04:21 PM | TrackBack
Comments

Ian,

This problem of agility is one which has an underlying cause: lack of foresight.

NIST, amongst others, is addressing "the how", not "the why", of the issue.

That is, they are specifying particular ways to do a particular job (AES etc.) but not a suitable framework within which it operates.

One of the issues with this is that the "how" method gets too tightly bound within specific applications, which gives rise to the problem of inflexible legacy systems.

It might be OK to talk about a five-year cycle for software-only applications, but not for infrastructure hardware systems, where twenty-five to fifty-year life cycles are more likely.

Although it sounds superficially like just an API, the framework needs to be more in-depth than that, partly due to the nature of "embedded systems".

It needs to give consideration to "the why" of what people do at all levels of application.
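
As a minimal sketch of the kind of framework being described here (names and structure are assumptions, purely illustrative): the embedded application codes against the "why" -- authenticate a control message -- while the "how" lives behind a replaceable, versioned module.

    # Purely illustrative sketch: a replaceable crypto framework for an
    # embedded controller. The firmware sees only the abstract interface,
    # so the concrete suite can be swapped in the field decades later.
    # All names here are assumptions, not a standard.
    from abc import ABC, abstractmethod
    import hashlib
    import hmac

    class CryptoSuite(ABC):
        """The 'why': authenticate control messages. Not the 'how'."""
        version: int

        @abstractmethod
        def authenticate(self, key: bytes, message: bytes) -> bytes: ...

        @abstractmethod
        def verify(self, key: bytes, message: bytes, tag: bytes) -> bool: ...

    class SuiteV1(CryptoSuite):
        """Today's 'how', shipped as the controller's replaceable module."""
        version = 1

        def authenticate(self, key: bytes, message: bytes) -> bytes:
            return hmac.new(key, message, hashlib.sha256).digest()

        def verify(self, key: bytes, message: bytes, tag: bytes) -> bool:
            return hmac.compare_digest(self.authenticate(key, message), tag)

    def handle_demand_command(suite: CryptoSuite, key: bytes,
                              message: bytes, tag: bytes) -> None:
        # The firmware depends only on the framework interface; replacing
        # SuiteV1 with a SuiteV2 module does not touch this code path.
        if not suite.verify(key, message, tag):
            raise ValueError("unauthenticated command rejected")
        # ... act on the demand-management command ...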

Unfortunately it is not going to be an easy task, but the sooner it starts, the less pain there will be in the long term.

As an example, consider the US DOD Internet Protocol versus the ISO OSI X model that originated in Europe. The OSI X protocols were at the time generally considered so over-engineered and over-specified as to be impractical to implement, and it was certainly true that the resources required were in many cases just not available.

The US took a pragmatic approach with DOD IP, which now predominates.

However IP has had very significant growing pains, backward compatibility issues are a nightmare, and security was not even an afterthought.

In practice what is now happening is that "the why" that was envisioned in the European OSI framework is retrospectively being built into the US DOD IP bit by bit.

However, this bolting-on of what is now needed is causing inefficiency, incompatibility and ad hoc protocols that are ill thought out within the greater context they have to work in, and are frequently found to be deficient to the point of being broken in that context.

Although from a "market perspective" obsolescence is a given, an assumption of a short 18-month life cycle (for software and FMCG) is not valid for all markets (infrastructure for power, water, comms etc.).

Environmental considerations are going to push more and more long-life-cycle systems into the world, as is legislation for national security of infrastructure and the environment.

The current "hamster wheel of pain" of patching, upgrading and throw-away technology is not sustainable, even in the SOHO PC and low-cost server market.

But unless the appropriate frameworks are put in place, the hamster will, for practical purposes, run indefinitely without getting anywhere.

This will put significant limitations on what can and will be achieved, due to resources being wasted on the inordinate update process.

As an example, think about your home heating and air conditioning being controlled by the utility supplier to manage demand against resources and capability.

Some jurisdictions are looking at making this a legal requirement.

Now consider what could happen if the equipment is designed "optimally" for today's market and with little thought given to the security context.

Two things are going to happen.

The first is that utility suppliers will opt not to upgrade long-term fundamental supply infrastructure, and will instead increasingly limit supply against increasing demand (the cheapest option for them). As time progresses the supply network will become more and more brittle.

At some point something will happen: an operator will make a mistake, a natural event will occur, or somebody with deliberate intent will make changes. A cascade failure will occur, and if you are lucky, all that will happen is a few hours' blackout while the fault is found and the utility network restored.

But what happens if it is a person of ill intent who wants to deliberately bring down the utility network?

The chances are that, if they know what they are doing, they can do damage so fundamental to the utility network that it could take months to repair or replace (see Peter Gutmann's paper on what happened when a major electricity supply was lost).

And unless the security is changed in all those heating and air conditioning units the same thing will happen again and again.

Now think of the cost and work involved with making the security changes...

If, however, the appropriate framework were in place before the controllers were built, then although the unit cost would be fractionally higher (by a cent or two), the changes could be made as and when required at minimal cost.

And if built in correctly as an overall system, it would be more flexible and, importantly, more resilient, thus significantly reducing future risk and therefore cost.

Posted by: Clive Robinson at September 9, 2009 05:26 AM