RSA's Coviello describes the new threat environment:
"Organisations are defending themselves with the information security equivalent of the Maginot Line as their adversaries easily outflank perimeter defences," Coviello added. "People are the new perimeter contending with zero-day malware delivered through spear-phishing attacks that are invisible to traditional perimeter-based security defences such as antivirus and intrusion detection systems." ®
The recent spate of attacks does not tell us that the defences are weak - that is something we've known for some time. For example, "The Maginot Web" said it back on 20th April, 2003. Yawn. Taher Elgamal, the guy who did the crypto in SSL at Netscape back in 1994, puts it this way:
How about those certificate authority breaches against Comodo and the one that wiped out DigiNotar?
It's a combination of PKI and trust models and all that kind of stuff. If there is a business in the world that I can go to and get a digital certificate that says my name is Tim Greene then that business is broken, because I'm not Tim Greene, but I've got a certificate that says this is my name. This is a broken process in the sense that we allowed a business that is broken to get into a trusted circle. The reality is there will always be crooks, somebody will always want to make money in the wrong way. It will continue to happen until the end of time.
Is there a better way than certificate authorities?
The fact that browsers were designed with built-in root keys is unfortunate. That is the wrong thing, but it's very difficult to change that. We should have separated who is trusted from the vendor. ...
What the recent rash of exploits signals is that the attackers are now lined up and deployed against our weak defences:
Coviello said one of the ironies of the attack was that it validated trends in the market that had prompted RSA to buy network forensics and threat analysis firm NetWitness just before the attack.
This is another unfortunate hypothesis in the market for silver bullets: that we need real attacks to tell us real security news. OK, now we've got it. Message heard, loud and clear. So, what to do? Coviello goes on:
Security programs need to evolve to be risk-based and agile rather than "conventional" reactive security, he argued. "The existing perimeter is not enough, which is why we bought NetWitness. The NetWitness technology allowed us to determine damage and carry out remediation very quickly," Coviello said.
The existing perimeter was an old idea - one static defence, and the attacker would walk up, hit it with his head, and go elsewhere in confusion. Build it strong, went the logic, and give the attacker a big headache! But the people in charge at the time were steeped in the school of cryptology and computer science, and consequently lacked the essential visibility over the human aspects of security to understand how limiting this concept was, and how the attacker was blessed with sight and the ability to walk around.
Risk management throws out the old binary approach completely. As a methodology, it arrives just in time. But to a large extent, the marketplace hasn't moved. Like deer in headlights, the big institutions watch the trucks approach, looking at each other for a solution.
Which is what makes these comments by RSA and Taher Elgamal significant. More than others, these people built the old SSL infrastructure. When the people who built it call game over, it's time to pay attention.
Posted by iang at October 13, 2011 10:31 AM
We were working on high-availability and no-single-point-of-failure and also doing cluster scaleup (for both commercial and numerical-intensive use) ... originally it was called HA/6000, but I coined "HA/CMP" as a marketing term to capture both concepts. Part of the effort was with major RDBMS vendors ... including Oracle. This is an old reference to the Jan92 meeting in Ellison's conference room
http://www.garlic.com/~lynn/95.html#13
Two of the people in the meeting later leave and join a small client/server startup responsible for something called the "commerce server". Approx. a month after the Jan92 meeting, the cluster scaleup part is transferred (and announced as a supercomputer for numerical-intensive work only) and we were told we couldn't work on anything with more than four processors. This motivates us to leave.
Sometime later, we are brought in as consultants to the small client/server startup because they want to do payment transactions on their server; the startup had also invented this technology they called "SSL" that they wanted to use; the result is now frequently called "electronic commerce". As part of the deployment we do audits and walk-throughs of these new businesses selling SSL domain name certificates. misc. past posts
http://www.garlic.com/~lynn/subpubkey.html#sslcert
The browser Certificate Authority operational model wasn't a "no single point of failure" model ... it was an "any point of failure" model ... the probability of a failure increases as the number of CAs increases (a systemic risk scenario). Also there were some security assumptions about the browser/server deployments ... which were almost immediately violated ... voiding some amount of the security model. It wasn't long before I coined the term "comfort certificates" in an attempt to differentiate the feeling of security from real security.
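To put rough numbers on that "any point of failure" observation (the per-CA compromise probability and the CA counts below are invented purely for illustration), the exposure in a weakest-link trust model grows quickly with the number of trusted roots:

```python
# Illustration only: probability that at least one trusted CA fails in a given
# period, assuming (hypothetically) independent CAs with the same small per-CA
# failure probability. All numbers are invented for the sketch.
def p_any_failure(per_ca_probability: float, number_of_cas: int) -> float:
    """'Any point of failure': one bad CA anywhere breaks the model."""
    return 1 - (1 - per_ca_probability) ** number_of_cas

for n in (1, 10, 100, 650):   # 650 is an assumed order of magnitude for browser-trusted CAs
    print(n, round(p_any_failure(0.005, n), 3))
# With p=0.005 per CA: 1 CA -> 0.005, 100 CAs -> ~0.39, 650 CAs -> ~0.96
```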
Posted by: Lynn Wheeler at October 13, 2011 08:46 AM
The SSL domain name Certification Authority model has somebody applying for a digital certificate and providing identification information. The CA then must contact the authoritative agency responsible for domain names to cross-check the application-supplied identification information with what's on file with the domain name authoritative agency (time-consuming, error-prone, and expensive).
The CA industry has somewhat backed pieces of DNSSEC as eliminating some number of authoritative-agency vulnerabilities (i.e. the CA industry's trust root is the authoritative agency responsible for the domain name information) ... like domain name take-over. Part of that is that a domain name registrant, at the same time, also registers a public key. Then all future communication between the domain name owner and the domain name infrastructure is digitally signed, and can be verified with the on-file public key. It turns out that the CA industry could also require the SSL digital certificate application to be digitally signed ... and then they could replace the time-consuming, error-prone and expensive identification process with a much simpler, more reliable, and less expensive authentication process by doing real-time retrieval of the on-file public key at the domain name infrastructure.
However, this creates something of a "catch-22" for the SSL domain name CA industry, since if they can do real-time retrieval of the on-file public key (for authentication), then others might also start doing something similar ... eliminating the need for SSL domain name digital certificates. misc. past posts mentioning the catch-22 for the SSL domain name CA industry
http://www.garlic.com/~lynn/subpubkey.html#catch22
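A minimal sketch of that authentication-instead-of-identification idea, using Ed25519 signatures via the Python cryptography package; the in-memory registry dict and the function names are hypothetical stand-ins for the domain name infrastructure, not anything the CA industry actually runs:

```python
# Sketch: a domain registrant files a public key at registration time; later a
# CA (or anyone else) verifies a digitally signed request against the on-file
# key instead of doing manual identification. "registry" stands in for the
# domain name infrastructure.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

registry = {}  # hypothetical stand-in for the domain name infrastructure

def register_domain(name: str, public_key: ed25519.Ed25519PublicKey) -> None:
    registry[name] = public_key          # public key filed with the registration

def verify_application(name: str, application: bytes, signature: bytes) -> bool:
    on_file_key = registry[name]         # real-time retrieval of the on-file key
    try:
        on_file_key.verify(signature, application)
        return True
    except InvalidSignature:
        return False

# Registrant side
owner_key = ed25519.Ed25519PrivateKey.generate()
register_domain("example.com", owner_key.public_key())

# Later: a certificate application signed with the registrant's key
application = b"SSL certificate application for example.com"
signature = owner_key.sign(application)
print(verify_application("example.com", application, signature))   # True
```

Which is also what makes the catch-22 concrete: once a verifier can fetch and use the on-file key directly, the digital certificate itself adds nothing.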
I had worked on HSP in the late 80s ... that included being able to do a reliable transaction in a minimum of three exchanges ... compared to a minimum of seven exchanges for TCP. In the mid-90s, as HTTP servers were starting to scale up, they were finding that 99% of the processor was being dedicated to running the TCP FINWAIT list (most TCP implementations had been designed assuming long-lasting reliable connections, and had never anticipated being used for something like HTTP). I proposed piggy-backing the return of a public key on the domain name to ip-address response. Then a light-weight SSL connection could be done in an HSP 3-packet exchange ... with the client generating a symmetric packet key, encrypting the packet, and encrypting the packet key with the server public key. In effect, light-weight SSL could be done in the same number of packets as a non-encrypted, non-SSL operation. misc. past posts mentioning HSP
http://www.garlic.com/~lynn/subnetwork.html#xtphsp
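A sketch of the payload handling behind that light-weight SSL idea (the packet framing is omitted, and the RSA-OAEP and AES-GCM choices are my assumptions for the sketch, not what HSP specified): the client generates a symmetric packet key, encrypts the request with it, and wraps the packet key under the server public key that would have arrived with the domain name to ip-address response.

```python
# Sketch of the hybrid-encryption idea: one symmetric packet key per request,
# wrapped under the server's public key that (hypothetically) rode along in the
# domain-name-to-ip-address response.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Server key pair; the public half is what would ride along in the DNS reply.
server_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
server_public = server_private.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Client side: generate a symmetric packet key, encrypt the request, wrap the key.
packet_key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
request = b"GET / HTTP/1.0"
ciphertext = AESGCM(packet_key).encrypt(nonce, request, None)
wrapped_key = server_public.encrypt(packet_key, oaep)

# Server side: unwrap the packet key, then decrypt the request.
recovered_key = server_private.decrypt(wrapped_key, oaep)
print(AESGCM(recovered_key).decrypt(nonce, ciphertext, None))   # b"GET / HTTP/1.0"
```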
Disclaimer: Somewhat in the wake of having done "electronic commerce", in the mid-90s we were invited to participate in the X9A10 financial standard working group, which had been given the requirement to preserve the integrity of the financial infrastructure for *ALL* retail payments.
The result was the X9.59 financial transaction standard, which allowed for digitally signed transactions but didn't require digital certificates to be appended to the transactions. Some references:
http://www.garlic.com/~lynn/x959.html#x959
Part of the issue was that even an abbreviated (relying-party-only) digital certificate (i.e. the information was registered with the relying-party, the relying-party would issue a digital certificate for use only by the relying-party) was 100 times larger than the typical payment transaction payload size. misc. past posts mentioning the enormous payload size bloat that would come from appending digital certificates to payment transactions
http://www.garlic.com/~lynn/subpubkey.html#bloat
The issue was that since the relying party already had the information (contained in the digital certificate), it was redundant and superfluous to append the same information (in the form of a digital certificate) to every payment transaction (besides representing enormous payload bloat).
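To make the payload-bloat arithmetic concrete (the byte counts below are illustrative assumptions, not measured figures):

```python
# Illustrative arithmetic only; byte counts are assumptions, not measurements.
transaction_payload = 60            # assumed order of magnitude of an authorization message
relying_party_only_cert = 6_000     # assumed size of a stripped-down certificate
bloat = relying_party_only_cert / transaction_payload
print(f"appending the certificate multiplies the payload ~{bloat:.0f}x")   # ~100x
```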
The largest use of SSL in the world is this thing called "electronic commerce", for hiding transaction details. One of the characteristics of X9.59 transactions was that it was no longer necessary to hide the transactions ... which eliminated the major use of SSL in the world. Not needing to hide the information also eliminates threats from skimming, eavesdropping, and data breaches. It doesn't eliminate such activity, but the major motivation is crooks using the harvested information for fraudulent transactions. X9.59 eliminated the ability of crooks to use such harvested information for fraudulent transactions ... and therefore eliminated the attractiveness to crooks of performing such operations.
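A toy sketch of why harvested X9.59-style transaction data is of little value to a crook (the field layout and the replay check are my simplification for illustration, not the standard's actual format): the transaction travels in the clear, but any altered or reused copy fails verification at the relying party.

```python
# Toy sketch: a cleartext transaction plus a digital signature verified against
# the public key already on file with the relying party. Field layout and the
# replay check are simplifications for illustration only.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

account_key = ed25519.Ed25519PrivateKey.generate()
on_file_public_key = account_key.public_key()      # registered with the relying party
seen_transactions = set()                          # relying party's replay check

def accept(transaction: bytes, signature: bytes) -> bool:
    if transaction in seen_transactions:           # reuse of harvested data
        return False
    try:
        on_file_public_key.verify(signature, transaction)
    except InvalidSignature:                       # altered amount, payee, etc.
        return False
    seen_transactions.add(transaction)
    return True

txn = b"txn=0001;payee=merchant-42;amount=19.95"
sig = account_key.sign(txn)
print(accept(txn, sig))                                    # True: first use
print(accept(txn, sig))                                    # False: replayed
print(accept(b"txn=0002;payee=crook;amount=999.99", sig))  # False: forged
```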
I guess everyone has forgotten the grand plan of RSA (circa 1994) (Jim Bidzos's) to monetize the entire internet via CAs (one of their more retarded ideas!). (Tollbooths on the internet.)
I thought this to be SO significant and threatening that ONE of my personal buyoffs for working on SKIP and the CA structure for SunLabs as an external consultant was to insist that the source code for the CA function be released in open-source form (which eventually happened).
SSL was truly a joke in the early days as far as security went (still is), and Ian Goldberg set about proving that in short order.
And don't even get me started on the so-called SSL infrastructure, whose demise has amazingly taken so many years.
As a LOT of us know, the entire structure has been a house of cards and a confidence game rigged against companies and consumers alike.
I for one am VERY happy that the DigiNotar breach took place and that all the CA compromises are happening, and subsequently a collapse of the whole SSL/CA industry (hmm, time for a short? (way too late)).
Maybe now we can get onto actual financial crypto instead of SSL/CA craptastic window dressing.
Sheesh
gwen
Security "best practices" are The Maginot Line mentality. The Maginot Line was easily flanked by Germany's maneuver units.
The best defense is good offense. So how does that translate to cyber security?
Posted by: Ken at November 5, 2011 03:34 AM