Threatwatch: taking money & code from "interested parties" (OpenBSD + FBI = backdoors)
The following email is circulating amongst crypto-plumber communities. I have no idea whether it is accurate or not. It was sent to Theo de Raadt, a mover & shaker over at the security-leading OpenBSD group. He also doesn't know...
Offered here in the spirit of documenting the potential threats to the ITSec world.
From: Gregory Perry <Gregory.Perry@Go.something.example.tv>
To: "firstname.lastname@example.org" <email@example.com>
Subject: OpenBSD Crypto Framework
Long time no talk. If you will recall, a while back I was the CTO at NETSEC and arranged funding and donations for the OpenBSD Crypto Framework. At that same time I also did some consulting for the FBI, for their GSA Technical Support Center, which was a cryptologic reverse engineering project aimed at backdooring and implementing key escrow mechanisms for smart card and other hardware-based computing technologies.
My NDA with the FBI has recently expired, and I wanted to make you aware of the fact that the FBI implemented a number of backdoors and side channel key leaking mechanisms into the OCF, for the express purpose of monitoring the site to site VPN encryption system implemented by EOUSA, the parent organization to the FBI. Jason Wright and several other developers were responsible for those backdoors, and you would be well advised to review any and all code commits by Wright as well as the other developers he worked with originating from NETSEC.
This is also probably the reason why you lost your DARPA funding, they more than likely caught wind of the fact that those backdoors were present and didn't want to create any derivative products based upon the same.
This is also why several inside FBI folks have been recently advocating the use of OpenBSD for VPN and firewalling implementations in virtualized environments, for example Scott Lowe is a well respected author in virtualization circles who also happens to be on the FBI payroll, and who has also recently published several tutorials for the use of OpenBSD VMs in enterprise VMware vSphere deployments.
Chief Executive Officer
"VMware Training Products & Services"
540-........ x111 (local)
866-........ x111 (toll free)
Posted by iang at December 14, 2010 10:22 PM
As I have said in the past, the three big attack areas that the likes of the NSA, GCHQ, et al are going to be looking at are:
1. Protocol failings.
2. Side channel failings (timing etc).
3. Known plain text attacks.
Protocol failings are the biggie if you want to backdoor systems, because they are system independent (as are all frameworks), and there is also a significant advantage in being "first to market" as you can then exploit the edge effects by leveraging "interoperability". That is, everybody else has to get their system to work with yours.
We know from experience that protocols and frameworks are difficult; obvious examples are WEP and SSL. The failings are often extremely difficult to spot, sometimes not; it depends on the field of endeavour the person looking at them comes from, and on an innate sense of "hinkyness" when doing "walk throughs".
The classic class of protocol failing, and probably the best known, is "fall back". When two systems negotiate to communicate, they start with two lists (one at either end) of supported features etc. They negotiate their way down the lists until they find a feature compatible at both ends that they can use. This process is usually totally transparent to the user, who doesn't see the end result, and thus doesn't realise they are working with the lowest common denominator, which in terms of crypto might be 40-bit DES. Forcing the lowest common denominator is a simple filtering trick for a Man in the Middle attack, and used sparingly the targets will be totally oblivious to it.
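The fall-back trick above can be sketched in a few lines. The cipher names and the single-round negotiation below are hypothetical illustrations, not any real protocol's handshake:

```python
# Hypothetical sketch of "fall back" negotiation: each end holds an ordered
# list of supported ciphers and the pair settle on the first match.
PREFERENCES = ["aes256", "aes128", "3des", "des40-export"]  # strongest first

def negotiate(offered, supported):
    """Return the first cipher in the offer that the peer also supports."""
    for alg in offered:
        if alg in supported:
            return alg
    raise RuntimeError("no common cipher")

# Honest handshake: both sides agree on the strongest option.
print(negotiate(PREFERENCES, PREFERENCES))       # -> aes256

# A man in the middle who filters the offer down to its weakest entry forces
# the lowest common denominator; both endpoints just see a normal,
# successful negotiation.
stripped = [a for a in PREFERENCES if a == "des40-export"]
print(negotiate(stripped, PREFERENCES))          # -> des40-export
```

The usual defence is to authenticate the offered lists themselves, so that a stripped offer fails the handshake instead of quietly succeeding.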
This raises the real issue with protocol failings: they effectively live as long as the protocol does, for a number of reasons, one being backwards compatibility, the other being market inertia. That is, even if a protocol failing is found, most people go "yeah, so what" and carry on with their existing system, often kidding themselves that because they have version 2.x etc they are safe, but forgetting their software also falls back to version 1.0 for backwards compatibility with systems that have not or cannot be updated to version 2.x or above.
We are about to see this serious problem arise with "Smart Meters" and "Medical Implants". The people designing the communications systems for these are simple techies who want robust behaviour and invariably don't understand security in any real sense (this is what happened with WEP). However these systems are embedded and will have lifetimes of 25+ years... Thus once deployed, market inertia will keep them in place for at least a 25+ year time frame...
But the biggest success you can have in backdooring a system is to get something mandated in "the one ring that binds them all" that, whilst being theoretically good, has subtle issues with practical implementation.
AES is a classic example of this failing, though I'm in no way claiming it's deliberate, just that it shows what can happen when not all the fields of endeavour come to the table. As such it's the same as, but opposite to, what happened with WEP: this time it was the theory boys at the table and the techies elsewhere.
During the NIST organised AES selection process only the theoretical side was considered; they did not talk about potential implementation attacks even though they were known about. One such issue was "loop unrolling": to get high throughput on CPUs you want to minimise things that cause CPU pipeline stalls, thus you want to minimise the size of branches and not jump or make sub calls.
Now as AES was an open competition, all the candidate code was made available for review, and performance on certain CPUs was considered to be more important than on others.
The result was that the code candidates became a secondary competition and were tweaked in every possible way to get the best figures or "most efficient utilisation" of the CPU.
As these were considered to be "the best of the best" and open to be used by everyone, they were dropped "as is" into many, many designs, either directly or through off-the-shelf code libraries.
However, what most people do not appear to realise, and the likes of the NSA know all too well, is that unless very carefully designed, "efficiency is the enemy of practical security".
That is, the more efficient you make a design, the more information it haemorrhages through side channels, especially on high efficiency CPU designs running multitasking OSs.
The result is fertile ground for effective and very practical side channel attacks via such things as timing.
Thus side channels are the result of well over forty years of design trends driven by the marketing specmanship of "our system is faster or more efficient than the competitors'". You only have to look at the Intel-v-AMD fight for desktop dominance to see what it has done: CPUs that are not only multiply pipelined but have multiple cache levels. Each and every one of these efficiency optimisations opens up timing side channels that are easily visible on the wire at a significant distance from the system. Further, if you have even unprivileged access to the system (i.e. can ping its network stack) you can induce the timing faults to your significant advantage, such that you can get the AES key revealed, or even the plain text.
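A minimal illustration of the "efficiency is the enemy of security" point, using the classic early-exit comparison rather than AES itself (the cache-timing attacks on AES table lookups follow the same principle but need far more machinery to demonstrate); the tag value here is made up:

```python
import hmac

SECRET_TAG = b"sixteen byte tag"   # illustrative secret an attacker guesses at

def fast_compare(a: bytes, b: bytes) -> bool:
    # "Efficient": bails out at the first mismatching byte, so the running
    # time encodes how many leading bytes of the guess were correct...
    # which lets an attacker recover the secret one byte at a time.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def safe_compare(a: bytes, b: bytes) -> bool:
    # Deliberately "inefficient": hmac.compare_digest examines every byte
    # regardless of mismatches, closing the timing channel at the cost of
    # a few CPU cycles.
    return hmac.compare_digest(a, b)
```

Both functions return the same answers; the difference lives entirely in the side channel, which is exactly why such bugs survive functional testing.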
The NSA are acutely aware of this but don't talk about it (they have a dual role of both protecting US classified material and also reading every other nation's classified material).
You can however see it in the way they classify certain equipment, such as their AES based Inline Media Encryptor (IME). If you read the specs it says it is cleared for quite high levels of secrecy but... only for data at rest.
That is, whilst in operation it is not allowed to be used on systems where data may be in transit. Put more simply, you are only allowed to use it on isolated systems, that is, ones not connected to any networks and with the appropriate EmSec (TEMPEST) isolation...
Another issue with side channels is that the further down the stack you implement the crypto, the more effective they are. So the whole premise of IPSec is flawed; IPSec is of itself a backdoor into any system it is currently used on.
Thus if you want any kind of security you can't be doing encryption or decryption, or even processing the plain text through the CPU in an ordered manner or repeatedly, on a system connected to a network or visible in any way to eavesdropping...
Traditionally that is what an "air gapped" system provides. However people forget what the "air gap" is really all about, and thus implement a good old fashioned "sneaker net" instead, where they close the air gap via storage media. This failing was well known back in the early 1980s, as virus code spread even though there were no local area networks or other electronic communications involved. Forgetting this little fact is what I was banging on about several years ago when I showed a working method for infecting machines, such as those used to take people's votes, via "fire and forget" malware... but nobody wanted to listen; then Stuxnet came along to prove the point...
Then there is the issue of known plain text attacks. Certain very popular software (MS Word) embeds vast amounts of known plain text at known positions in files. Bob Morris (father of the Morris worm writer, and at one time the NSA's senior scientist) in his retirement speech specifically alluded to known plain text as a serious and significant weakness in security. The academic and open communities appear to have lost interest in researching this aspect of security, which is a shame because it really is the bread and butter of your working cryptanalyst and is fundamental to many practical attacks in many surprising ways.
Back in WWII, known plaintext in the form of the structure of messages (standard openings) and salutations was used to provide "probables" that significantly reduced the workload of "brute force" attacks. As the system they used made use of large card indexes of "artful facts", it became known as the "British Museum" method.
The catalogues of card indexes of "artful facts" were built up over time from early, very occasional "breaks" in high security level (Enigma / Fish / Purple) traffic, from traffic analysis, and from low level information gleaned from systems like dockyard ciphers, weather codes, general issue information sent in multiple ciphers, and even publicly posted information (diplomatic lists etc). It is difficult to overstate just how important this was in facilitating the reading of traffic and the building up of an intelligence analysis. So much so that in many cases the traffic analysis alone was sufficient to provide an accurate picture of enemy intentions without actually having to decrypt the messages. Thus sometimes it was possible to know what the enemy were going to do before the enemy frontline forces did...
Some sixty to seventy years later it is difficult to say just what advancements have been made in this area, but it is well known that the likes of the NSA, GCHQ, et al record and index everything; they have a voracious appetite for data and employ the best of the maths grads to analyse it. This has significant implications for storage media, as data files get repeatedly re-written anew on hard drives with only tiny variations, often under different keys etc. Often hard drive encryption is little better than stream encryption, so known plain text repeatedly stored puts messages "in depth", which we know is fatal to the security of stream ciphers.
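The "messages in depth" failure is easy to demonstrate: when the same keystream encrypts two messages, XORing the ciphertexts cancels the key entirely, and a known-plaintext "probable" for one message reads out the other. The messages and the stereotyped header below are invented for illustration:

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(32)                 # reused for both messages: fatal
p1 = b"FROM: STATION ALPHA  BEGIN MSG"     # stereotyped opening = known text
p2 = b"ATTACK AT DAWN ON THE EAST PIER"
c1, c2 = xor(p1, keystream), xor(p2, keystream)

# The attacker never sees the keystream, but c1 XOR c2 == p1 XOR p2:
# the key has dropped out of the equation.
depth = xor(c1, c2)

# A guessed "probable" for the standard header of message 1 now reads out
# the corresponding bytes of message 2 directly.
crib = b"FROM: STATION ALPHA"
print(xor(depth[:len(crib)], crib))        # -> b'ATTACK AT DAWN ON T'
```

This is exactly the card-index method in miniature: the "artful fact" is the stereotyped header, and the work is bookkeeping, not cryptanalysis.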
Getting back to the issue of whether OpenBSD has been "backdoored", the answer is almost certainly yes: it has "efficient IPSec" on it, IPSec has questions hanging over its protocols, and we know that the "efficient" implementation had significant side channels.
The apparent real question is whether it was intentional, or just happened and was then exploited. But even this is a "chicken and egg" question of marginal philosophical interest, and simply enables various parties to make claims and counter claims.
When a new analysis is carried out, side channels and protocol failings are going to be found, plain and simple, because those writing the code did not know of the potential attack vectors when making their code "efficient".
The real question boils down to "what are we going to do in the new reality". That is, are we going to learn from past mistakes, and if so, how are we going to deal with legacy systems and market inertia?
As you know, for some time I have been saying that the likes of NIST need to step up to the plate on mandatory "frameworks" and not just play around with low level protocol competitions: frameworks that, if properly designed, allow parts with failings to be easily and rapidly swapped out with minimal impact.
And importantly these frameworks need to be designed in such a way that entire protocols can just as easily be swapped out. Because, with the best will in the world, there are always going to be failings that come to light as we make assumptions about the unknown, and there are always a lot of unknowns in human progress.
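As a toy sketch of what such a framework buys you (the registry and names here are hypothetical, not a proposal for NIST): callers ask for a primitive by name, policy lives in one place, and retiring a broken algorithm is a one-entry change rather than a hunt through every caller:

```python
import hashlib
from typing import Callable, Dict

# Hypothetical pluggable registry: the framework, not the caller, binds
# names to implementations, so a failed primitive can be swapped out.
HASHES: Dict[str, Callable[[bytes], bytes]] = {
    "sha256":   lambda m: hashlib.sha256(m).digest(),
    "sha3-256": lambda m: hashlib.sha3_256(m).digest(),
}

DEFAULT_HASH = "sha256"   # the only line to touch if the default ever falls

def digest(msg: bytes, alg: str = DEFAULT_HASH) -> bytes:
    try:
        return HASHES[alg](msg)
    except KeyError:
        # Legacy callers of a withdrawn algorithm fail loudly instead of
        # silently falling back to it.
        raise ValueError(f"algorithm {alg!r} is not available")
```

The same indirection at the protocol layer is what lets a failing be "easily and rapidly swapped out with minimal impact", rather than being frozen in by every piece of code that hard-wired the primitive.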
If we don't do it then we will show that we have not learned from our mistakes, and we will have legacy systems with built-in and unreplaceable flaws made anew over and over again, which will end up hanging around the necks of our children and grandchildren via the likes of Smart Meters and Medical Implants and other embedded infrastructure systems yet to be thought of.
We know from the likes of CCTV that static technology is failing technology, as criminals will outsmart or out-evolve it fairly rapidly. We are seeing the same in all areas of financial security. The only way to beat this is "agility", and this means all systems, especially long life embedded systems, have to be designed from the outset to be agile.
And agility means dumping the blind race to the bottom of "efficiency" for specmanship, and replacing it with the "efficiency" of flexibility that can give security, albeit at the loss of a few CPU cycles.
Sadly this will not happen until systems designers start learning the simple truths that manufacturers had to face up to in the last century, truths that started with the Whitworth thread for nuts and bolts and developed into full scale quality control.
And it needs to be mandated, because there is no real "asymmetric distance cost metric" with intangible goods such as software. Thus its maintenance has got into an evolutionary cul-de-sac and is being kept there by short sighted "efficiency" drives for short term cost savings that unfortunately cost big in the long term.
And as you know, I keep saying that security is a quality process and needs to be built in before day zero on any project; hopefully some people will take it on board...