Comments: Threatwatch: taking money & code from "interested parties" (OpenBSD + FBI = backdoors)

writes:

The OCF was a target for side channel key leaking mechanisms, as was pf (the stateful inspection packet filter), in addition to the gigabit Ethernet driver stack for the OpenBSD operating system; NETSEC donated engineers and equipment to all of those projects, including the first revision of the OCF hardware acceleration framework based on the HiFN line of crypto accelerators.

The project involved was the GSA Technical Support Center, a circa 1999 joint research and development project between the FBI and the NSA; the technologies we developed were Multi Level Security controls for case collaboration between the NSA and the FBI due to the Posse Comitatus Act, although in reality those controls were only there for show, as the intended facility did in fact host both the FBI and the NSA in the same building.

We were tasked with proposing various methods used to reverse engineer smart card technologies, including Piranha techniques for stripping organic materials from smart cards and other embedded systems used for key material storage, so that the gates could be analyzed with Scanning Electron and Scanning Tunneling Microscopy. We also developed proposals for distributed brute force key cracking systems used for DES/3DES cryptanalysis, in addition to other methods for side channel leaking and covert backdoors in firmware-based systems. Some of these projects were spun off into other sub-projects, JTAG analysis components, etc. I left NETSEC in 2000 to start another venture, as I had some fairly significant concerns with many aspects of these projects. I was also the lead architect for the site-to-site VPN project developed for the Executive Office for United States Attorneys, a statically keyed VPN system used at 235+ US Attorney locations, which later proved to have been backdoored by the FBI so that they could (potentially) recover grand jury information from various US Attorney sites across the United States and abroad.

Posted by "csoonline has more details" at December 15, 2010 04:41 AM

Ian,

As I have said in the past, the three big attack areas that the likes of the NSA, GCHQ, et al. are going to be looking at are:

1. Protocol failings.
2. Side channel failings (timing etc.).
3. Known plaintext attacks.

Protocol failings are the biggie if you want to backdoor systems, because they are system independent (as are all frameworks), and there is also a significant advantage in being "first to market", as you can then exploit the edge effects by leveraging "interoperability". That is, everybody else has to get their system to work with yours.

We know from experience that protocols and frameworks are difficult; obvious examples are WEP and SSL. The failings are often extremely difficult to spot, sometimes not; it depends on the field of endeavour the person looking at them comes from, and an innate sense of "hinkiness" when doing "walk throughs".

The classic class of protocol failing that is probably best known is "fall back". When two systems negotiate to communicate, they start with two lists (one at either end) of supported features etc. They negotiate their way down the lists until they find a feature compatible at both ends that they can use. This process is usually totally transparent to the user, who does not see the end result and thus does not realise they are working with the lowest common denominator, which in terms of crypto might be 40-bit DES. Forcing the lowest common denominator is a simple filtering trick for a man-in-the-middle attack, and used sparingly the targets will be totally oblivious to it.
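
A minimal sketch of the trick, in Python; the cipher names and list shapes are invented for illustration and belong to no real protocol:

    # Minimal sketch of fallback negotiation and a man-in-the-middle
    # downgrade. All names are hypothetical, not any real wire format.

    # Each side's preference list, strongest first.
    CLIENT_OFFER = ["AES-256", "AES-128", "3DES", "DES-40"]
    SERVER_OFFER = ["AES-256", "AES-128", "3DES", "DES-40"]

    def negotiate(client_offer, server_offer):
        """Pick the first (strongest) cipher both ends support."""
        for cipher in client_offer:
            if cipher in server_offer:
                return cipher
        raise RuntimeError("no common cipher")

    def mitm_filter(offer, allowed={"DES-40"}):
        """A man in the middle silently strips everything but the
        weakest option from the offer as it passes through."""
        return [c for c in offer if c in allowed]

    # Honest negotiation picks the strongest common cipher...
    print(negotiate(CLIENT_OFFER, SERVER_OFFER))               # AES-256
    # ...but with the offer filtered in transit, both ends "agree" on
    # the lowest common denominator, and neither user sees a thing.
    print(negotiate(mitm_filter(CLIENT_OFFER), SERVER_OFFER))  # DES-40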

This raises the real issue with protocol failings: they effectively live as long as the protocol does, for a number of reasons, one being backwards compatibility, the other being market inertia. That is, even if a protocol failing is found, most people go "yeah, so what" and carry on with their existing system, often kidding themselves that as their system runs version 2.x etc. they are safe, but forgetting that their software also falls back to version 1.0 for backwards compatibility with systems that have not been, or cannot be, updated to version 2.x or above.

We are about to see this serious problem arise with "Smart Meters" and "Medical Implants". The people designing the communications systems for these are simple techies who want robust behaviour and invariably don't understand security in any real sense (this is what happened with WEP). However, these systems are embedded and will have lifetimes of 25+ years... Thus once developed, market inertia will keep them in place for at least a 25+ year time frame...

But the biggest success you can have in backdooring a system is to get something mandated in "the one ring that binds them all" that, whilst being theoretically good, has subtle issues with practical implementation.

AES is a classic example of this failing, though I'm in no way claiming it's deliberate, just that it shows what can happen when not all the fields of endeavour come to the table. As such it is the same as, but the opposite of, what happened with WEP: this time it was the theory boys at the table and the techies elsewhere.

During the NIST-organised AES selection process only the theoretical side was considered; they did not talk about potential implementation attacks even though they were known about. One such issue was "loop unrolling": to get high throughput on CPUs you want to minimise things that cause CPU pipeline stalls, thus you want to minimise branches and avoid jumps and subroutine calls.
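
For readers who haven't met the term, a toy sketch of what unrolling means (in Python for brevity; the table and two-round body are stand-ins, not real AES):

    # Toy illustration of loop unrolling. SBOX is a stand-in identity
    # table, not the real AES S-box.
    SBOX = list(range(256))

    def rounds_rolled(state, n=10):
        # One branch and counter update per round.
        for _ in range(n):
            state = [SBOX[b] for b in state]
        return state

    def rounds_unrolled(state):
        # The same work with the loop flattened out: no per-round
        # branch, which is exactly what benchmark-driven tuning
        # rewarded. The data-dependent table lookups that remain are
        # what leak, as discussed below.
        state = [SBOX[b] for b in state]
        state = [SBOX[b] for b in state]
        # ... repeated for each remaining round ...
        return state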

Now, as AES was an open competition, all the candidate code was made available for review, and performance on certain CPUs was considered to be more important than on others.

The result was that the code candidates became a secondary competition and were tweaked in every possible way to get the best figures or "most efficient utilisation" of the CPU.

As these were considered to be "the best of the best" and open to be used by everyone, they were dropped "as is" into many, many designs, either directly or through off-the-shelf code libraries.

However, what most people do not appear to realise, and the likes of the NSA know all too well, is that unless a design is very, very carefully done, "efficiency is the enemy of practical security".

That is, the more efficient you make a design, the more information it hemorrhages through side channels, especially on high efficiency CPU designs under multitasking OSes.

The result is fertile ground for effective and very practical side channel attacks via such things as timing.

Thus side channels are the result of well over forty years of design trends driven by the marketing specmanship of "our system is faster or more efficient than the competitor's". You only have to look at the Intel-v-AMD fight for desktop dominance to see what it has done: CPUs that are not only multiply pipelined but have multiple cache levels. Each and every one of these efficiency optimisations opens up timing side channels that are easily visible on the wire at significant distance from the system. Further, if you have even unprivileged access to the system (i.e. you can ping its network stack) you can induce the timing faults to your significant advantage, such that you can get the AES key, or even the plain text, to be revealed.
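
To show the class of attack rather than AES cache timing itself, here is a deliberately simplified Python sketch: a byte-wise comparison that exits early, which is the same shape of leak, with the per-byte work exaggerated so no statistics are needed. A real attack makes the same measurement over the network with many samples:

    # Toy timing side channel: an early-exit comparison leaks, through
    # timing alone, how many leading bytes of a guess are correct.
    import time

    SECRET = b"k3y!"

    def leaky_compare(guess, secret=SECRET):
        # Early exit: runtime grows with the correct-prefix length.
        for g, s in zip(guess, secret):
            if g != s:
                return False
            time.sleep(0.0005)   # stand-in for per-byte work, so the
                                 # effect is visible without averaging
        return len(guess) == len(secret)

    def recover_key(length=len(SECRET)):
        known = b""
        for _ in range(length):
            timings = {}
            for b in range(256):
                guess = known + bytes([b]) + b"\x00" * (length - len(known) - 1)
                t0 = time.perf_counter()
                leaky_compare(guess)
                timings[b] = time.perf_counter() - t0
            # The candidate that took longest got one byte further in.
            known += bytes([max(timings, key=timings.get)])
        return known

    print(recover_key())  # b'k3y!' recovered from timing alone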

The NSA are acutely aware of this but don't talk about it (they have a dual role of both protecting US classified material and reading every other nation's classified material).

You can, however, see it in the way they classify certain equipment, such as their AES-based Inline Media Encryptor (IME). If you read the specs, it says it is cleared for quite high levels of secrecy, but... only for data at rest.

That is, whilst in operation it is not allowed to be used on systems where data may be in transit; or, put more simply, you are only allowed to use it on isolated systems, ones not connected to any networks and with the appropriate EmSec (TEMPEST) isolation...

Another issue with side channels is that the further down the stack you implement encryption, the more effective they are. So the whole premise of IPSec is flawed; IPSec is of itself a backdoor into any system it is currently used on.

Thus, if you want any kind of security, you can't be doing encryption or decryption, or even processing the plain text, through the CPU in an ordered manner, or repeatedly, on a system connected to a network or visible in any way to eavesdropping...

Traditionally that is what an "air gapped" system provides. However, people forget what the "air gap" is really all about, and so they implement a good old-fashioned "sneaker net" instead, where they close the air gap via storage media. This failing was well known back in the early 1980s, as virus code spread even though there were no local area networks or other electronic communications involved. Forgetting this little fact is what I was banging on about several years ago when I showed a working method for infecting machines, such as those used to take people's votes, via "fire and forget" malware... but nobody wanted to listen. Then Stuxnet came along to prove the point...

Then there is the issue of known plaintext attacks. Certain very popular software (MS Word) embeds vast amounts of known plaintext at known positions in files. Bob Morris (father of the Morris worm's author), at one time the NSA's senior scientist, specifically alluded in his retirement speech to known plaintext as a serious and significant weakness in security. The academic and open communities appear to have lost interest in researching this aspect of security, which is a shame, because it really is the bread and butter of the working cryptanalyst and is fundamental to many practical attacks in many surprising ways.
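
A small sketch of why that matters, assuming only a stream-style cipher and a file format with fixed opening bytes (here RTF's "{\rtf1", one of the Word-era formats):

    # Known plaintext at known positions: a stream cipher XORs
    # plaintext with keystream, so ciphertext XOR known-plaintext
    # hands the attacker the keystream at those positions for free.
    import os

    def stream_xor(data, keystream):
        return bytes(d ^ k for d, k in zip(data, keystream))

    KNOWN_HEADER = b"{\\rtf1 "     # fixed bytes every such file starts with
    keystream = os.urandom(64)     # stand-in for a real cipher's keystream

    document = KNOWN_HEADER + b"the secret body"
    ciphertext = stream_xor(document, keystream)

    # Attacker: no key, just the ciphertext and the format convention.
    recovered_ks = stream_xor(ciphertext[:len(KNOWN_HEADER)], KNOWN_HEADER)
    assert recovered_ks == keystream[:len(KNOWN_HEADER)]
    # Any other message enciphered under the same keystream now loses
    # its first len(KNOWN_HEADER) bytes to the attacker.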

Back in WWII, known plaintext in the form of the structure of messages (standard openings) and salutations was used to provide "probables" that significantly reduced the workload of "brute force" attacks. As the system they used made use of large card indexes of "artful facts", it became known as the "British Museum" method.

The catalogues of card indexes of "artful facts" were built up over time from early, very occasional "breaks" in high security level (Enigma / Fish / Purple) traffic, from traffic analysis, and from low level information gleaned from systems like dockyard ciphers, weather codes, general issue information that got sent in multiple ciphers, and even publicly posted information (diplomatic lists etc.). It is difficult to overstate just how important this was in facilitating the reading of traffic and the building up of an intelligence analysis. So much so that in many cases the traffic analysis alone was sufficient to provide an accurate picture of enemy intentions without actually having to decrypt the messages. Thus sometimes it was possible to know what the enemy were going to do before the enemy's frontline forces did...

Some sixty to seventy years later it is difficult to say just what advancements have been made in this area, but it is well known that the likes of the NSA, GCHQ, et al. record and index everything; they have a voracious appetite for data and employ the best of the maths grads to analyse it. This has significant implications for "storage media", as data files get repeatedly re-written anew on hard drives with only tiny variations, often under different keys etc. Often hard drive encryption is little better than stream encryption, thus known plaintext repeatedly stored allows messages in depth, which we know is fatal to the security of stream ciphers.
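
A sketch of the "in depth" failure, with an invented pair of messages: XOR the two ciphertexts and the keystream cancels entirely, leaving the XOR of the plaintexts for a "probable" to unravel:

    # Two messages "in depth": enciphered under the same keystream.
    import os

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    keystream = os.urandom(32)
    p1 = b"ATTACK AT DAWN ON THE LEFT"
    p2 = b"HOLD FAST ON THE RIGHT WING"
    c1, c2 = xor(p1, keystream), xor(p2, keystream)

    depth = xor(c1, c2)          # keystream cancels: equals p1 XOR p2
    assert depth == xor(p1, p2)

    crib = b"ATTACK AT "         # a WWII-style "probable"
    # Dragging the crib through the depth reveals the other
    # plaintext's bytes at the same positions.
    print(xor(depth[:len(crib)], crib))   # b'HOLD FAST '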

Getting back to the issue of whether OpenBSD has been "backdoored", the answer is almost certainly yes: it has "efficient IPSec" on it, IPSec has questions hanging over its protocols, and we know that the "efficient" implementation had significant side channels.

The apparent real question is whether it was intentional, or just happened and was then exploited. But even this is a "chicken and egg" question of marginal philosophical interest, and simply enables various parties to make claims and counter claims.

When a new analysis is carried out, side channels and protocol failings are going to be found, plain and simple, because those writing the code did not know of the potential attack vectors when making their code "efficient".

The real question boils down to "what are we going to do in the new reality": that is, are we going to learn from past mistakes, and if so, how are we going to deal with legacy systems and market inertia?

As you know, for some time I have been saying that the likes of NIST need to step up to the plate on mandatory "frameworks" and not just play around with low level protocol competitions. Frameworks that, if properly designed, allow parts with failings to be easily and rapidly swapped out with minimal impact.

And importantly, these frameworks need to be designed in such a way that entire protocols can just as easily be swapped out, because we know that, with the best will in the world, there are always going to be failings that come to light as we make assumptions about the unknown, and there are always a lot of unknowns in human progress.
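
One shape such a framework could take, sketched in Python with entirely hypothetical names: every protected object carries the identifier of the algorithm that made it, and algorithms live in a registry, so retiring a broken one is a registry change plus re-protection of old data, not a redesign:

    # Sketch of algorithm agility via a registry (names hypothetical).
    import hashlib

    REGISTRY = {}

    def register(alg_id, fn):
        REGISTRY[alg_id] = fn

    def protect(alg_id, data):
        # Tag the output with the algorithm used, so it can be
        # re-protected later when that algorithm is retired.
        return alg_id, REGISTRY[alg_id](data)

    register("hash/sha256", lambda d: hashlib.sha256(d).hexdigest())
    register("hash/sha1", lambda d: hashlib.sha1(d).hexdigest())  # legacy

    alg, digest = protect("hash/sha256", b"message")

    # Swapping out a failed algorithm is one line, not a redesign:
    del REGISTRY["hash/sha1"]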

If we don't do it, then we will show that we have not learned from our mistakes, and we will have legacy systems with built-in, irremovable flaws made anew over and over again, which will end up hanging around the necks of our children and grandchildren via the likes of Smart Meters and Medical Implants and other embedded infrastructure systems yet to be thought of.

We know from the likes of CCTV that static technology is failing technology, as criminals will out-smart or out-evolve it fairly rapidly. We are seeing the same in all areas of financial security. The only way to beat this is by "agility", and this means all systems, especially long life embedded systems, have to be designed from the outset to be agile.

And agility means dumping the blind race to the bottom of "efficiency" for specmanship and replacing it with the "efficiency" of flexibility, which can give security, albeit at the loss of a few CPU cycles.

Sadly this will not happen until systems designers start learning the simple truths that manufacturers had to face up to in the last century, truths that started with the Whitworth thread for nuts and bolts and developed into full scale quality control.

And it needs to be mandated, because there is no real "asymmetric distance cost metric" with intangible goods such as software. Thus its maintenance has got into an evolutionary cul-de-sac and is being kept there by short sighted "efficiency" drives for short term cost savings that unfortunately cost big in the long term.

And as you know, I keep saying that security is a quality process and needs to be built in before day zero on any project; hopefully some people will take it on board...

Posted by Clive Robinson at December 19, 2010 09:17 AM