Scuttlebutt is circulating in the anti-virus world about the new class of trojan about to emerge. Details - facts - are scanty, and several exaggerations seem to have fallen flat on their faces already. Having said that, a general pattern is emerging, and it looks like a significant advance in the threat state of the Internet. Here's what I've picked up so far.
The new class of trojan evidences an ability to deconstruct and reconstruct the browser in the phisher's image. That is, it is a sort of meccano kit for phishers, which allows the construction of a new phisher-friendly browser. The meccano trojan operates on the Microsoft Windows operating system and allegedly is coded up for both IE and Firefox, so Firefox has crossed GP.
I say above "evidences an ability" because the view is that the kit is not quite there yet, but it's near enough to be "just around the corner." There is substantial uncertainty about this, because the details are being shared hush-hush. My feeling from the scuttlebutt is that the first meccano trojans will roll into Windows machines within a month or so, in what could be considered an alpha test of the concept. Within 6 months, that should shake out and the meccano concept will be well tried and tested. But that's just a feeling based on a secret stacked up on a prediction.
If you prefer to think in classical PKI terms, this means that the MITM is now inserted into the browser. Hence, this attack was later named the Man in the Browser, or MITB (note inserted 2009). PKI is simply bypassed, in a way that PKI itself cannot defend against. Deep security readers will recall that one hard-wired assumption of PKI was that the threat was on the wire and the node was safe; we've now reached the point where that assumption is broken in theory and in practice.
Let's summarise. First the facts: A new trojan class is capable of taking over the browser on Windows platforms. It covers both IE and Firefox. It renders all security within the browser theoretically breached.
Second, the anti-facts. Everything written above is unconfirmed, so these are not facts at all! Next, the notion that suddenly the browser disappears into a puff of smoke and the user is left naked and unprotected just doesn't make sense - to me at least. There still remain substantial economic defences against any given attack - just because one component is broken doesn't mean the system starts handing out your cash left and right. There are also things that users - both individuals and corporates - can do to protect themselves, and there are many, many things that sites can do to change the economics.
What then makes me believe that this is substantial without waiting for the unobtainable facts? Three things. Firstly, this is predicted in concept if not in detail. Any security researcher worth their salt has written off the nodal security model as far as Windows goes - I wrote about the fatal conceit of the threat reversal some time ago, and to many that was good logic but oh-so-ho-hum. Next, it is only the timeframe that we are arguing about, and we are about due for this. Note how last week's news predicts the browser attack:
"MetaFisher uses HTML injection techniques to phish information from victims after they've logged into a targeted bank account, said Dunham, which lets attackers steal legitimate TAN numbers (one-time PINs used by some banks overseas) and passwords without having to draw them onto phony sites."
Finally, those that are vulnerable and have seen more of the story are taking it seriously. Banks in one place that I know have already formulated their response and are moving to put it in place. In this case, it is a banking sector that is not particularly vulnerable anyway, and their solution will work - which already tells you what corner of the world it is. For the financial cryptographers, the solution is simply moving more towards the model Ricardo and x9.59 pioneered, cf. Anne & Lynn, Gary & myself.
So where does that leave us? The fundamental statement would seem to be that the Windows platform can no longer be considered secure, not for any security that you might actually need. That day has arrived. Beyond that there is a huge amount of analysis needed to say more, far more than I can do in one post. I'll stop with these broad questions, recognising that asking the question is easier than answering it.
Far too much for one day. I'll leave you with Dan Kozen's fine bull, which symbolised my 2006 prediction of more government intervention, and today stands in for the running of the other more successful bulls.
Thankfully, the regulators appear to be showing restraint, in that they are signalling that the problem belongs to the banks. The central banks have confirmed their intention to put risk sharing in place for online banking: the user will be on the hook for something like the first 150-250 of the fraud. After that, the bank picks up the rest. This is critical - both the bank and the user must engage in the security protocol, and any attempt to do otherwise is living in a state of sin, to paraphrase John von Neumann.
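To make the risk-sharing arithmetic concrete, here is a minimal sketch in Python; the 150-250 figure is quoted from above, and the single 200 threshold below is purely an illustrative assumption, not any regulator's actual number.

    # deductible-style risk sharing: user covers the first slice, bank the rest
    def split_fraud_loss(fraud_amount, user_deductible=200.0):
        """Return (user_share, bank_share) for one fraud incident."""
        user_share = min(fraud_amount, user_deductible)
        bank_share = max(0.0, fraud_amount - user_deductible)
        return user_share, bank_share

    for amount in (80.0, 200.0, 5000.0):
        user, bank = split_fraud_loss(amount)
        print("fraud %8.2f: user pays %6.2f, bank pays %8.2f" % (amount, user, bank))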
Let's hope the regulators hold the line on that one, and prove at least one of my predictions dead wrong.
Posted by iang at March 25, 2006 10:33 AM
The nodes (data at rest) have always been the most at risk. part of this is that data-at-rest tends to have a much larger collection of aggregated data, tending to result in a much higher return for the crooks' effort. Furthermore ... long before the internet and continuing right thru the internet period ... the majority of fraud has always involved insiders ... again primarily an end-node related issue ... not a data-in-transit issue.
at best, most PKI efforts for data-in-transit were aimed at not introducing any incremental risk with the introduction of the internet ... as opposed to really addressing any of the primary threats and vulnerabilities. however, the internet not only provided some incremental threat against data-in-transit ... it also allowed for some additional threats/attacks against end-nodes. however, the various possible internet threats against end-nodes ... may represent as much an obfuscation of identifying the actual compromise (insiders), as any real threat in itself.
However, some number of the end-node infrastructures originally evolved in a disconnected, non-threat environment ... and as a result did not have inherent designed-in countermeasures for operating in a high threat/adversary (internet) environment.
somewhat related discussion
http://www.garlic.com/~lynn/2006e.html#44 Does the Data Protection Act of 2005 Make Sense
Isn't there a risk of some banks deciding that the solution is to bypass the browsers and mandate use of their own proprietary client programs? Inevitably this ends up meaning that only one (or maybe two) commercial operating systems are supported, effectively rewarding Microsoft for their poor security model with a reinforced monopoly.
If I remember correctly, this was the situation in Australia when online banking was introduced (at least with the Commonwealth bank). I don't know if things have improved since I left, as being presented with a system that could not be used without running windows made me lose interest fairly early on...
Posted by: Digbyt at March 26, 2006 09:01 AM
Back in the old pre-phishing days there wasn't much of a well understood ("validated") threat model, and some banks did conclude that a special client was better than the browser, because they thought the browser's security model was weak. And they were right in the narrow sense of phishing.
But now the threat has moved into the user's node itself. This was always the underlying problem with phishing - if it wasn't dealt with quickly enough then the phishers would develop the infrastructure to take on the main prize, being the Windows machine.
Now the trojans are a primary threat, perhaps to become the primary threat. It is only a matter of time before the attackers decide to customise to deal with a particular bank's client. That's pretty obvious really, so if any bank were to do that on a windows client, they'd get what they deserve. Of course this all depends on the rate of viral infection of the Windows platform, but that is cold comfort so far.
Posted by: Iang at March 26, 2006 09:18 AM
There was an online banking conference circa 1995 where the banks talked about moving to the internet. pre-internet, a bank supporting network dial-in typically had its own "bank" of (dial-in) modems and claimed to require 50-60 different versions of software for different kinds of PCs with different software versions and modem hardware (along with a well-staffed online banking trouble call center).
the move to internet banking ... eliminated almost all of their trouble calls, all their own software development and support operation and the big call center ... effectively moving that to an ISP. The ISP amortized the call-in connection across all the stuff that a user might do online, and the internet paradigm standardized the end-user connectivity software, using the same software across all online operations. this represented something like a 95-percent-plus reduction in costs to a financial institution for supporting online operations.
there were some bank factions that had vested interests in preserving the roll-your-own, dedicated operational paradigm, but the internet-based operation cost savings were enormous (which was at least partially because of standardizing and amortizing online operational costs across everything that an end-user did online)
A big issue with many of the consumer end-nodes has been that the underlying platforms originally evolved in a non-hostile environment with few built-in defences. Furthermore a large body of applications (like games) evolved where it was common to take over (and compromise) the whole system as part of normal operation.
As a result, some of these platforms now have quite diametrically opposing requirements ... attempting to apply a defensive layer as an afterthought (i've frequently used the analogy of aftermarket auto seatbelts as a safety measure from the 60s) to deal with potentially extremely hostile internet operation while still preserving the ability for various applications to compromise the system (as part of normal operation). The 60s aftermarket seatbelts were before the complete make-over to having some amount of fundamental built-in product safety.
Posted by: Lynn Wheeler at March 26, 2006 09:54 AM
part of this, going back to at least the early 90s, is that crooks would physically install compromises on end nodes (atm machines, pos/point-of-sale terminals) to skim/collect information on possibly tens of thousands of accounts.
this internet scenario involves installing a trojan on an end-node ... possibly getting only a couple accounts per node ... however, the trojan can be installed electronically ... w/o having to physically visit each node. basically the same skim/harvest static information vulnerabilities have just been extended to a much larger number of end-node collection points.
previously mentioned was that the skimming threat came into existence at least as early as the early 70s, with the introduction of magstripe cards and electronic transactions.
however, one of the earliest magstripe vulnerabilities is somebody using the algorithm that checks for a correctly formed account number ... to generate an account number and create a counterfeit magstripe with that account number. an early countermeasure for this threat was the cvv genre ... basically a hash of the magstripe that is encrypted with the bank's secret key. the first part of each account number is the bank BIN ... which is used for indexing a table for electronically routing the transaction. The same network table can have the bank's secret key ... so that the CVV value from the magstripe can be validated (an early version of a digital signature ... but using a secret key instead of private/public keys).
While the cvv process was a countermeasure to algorithmic generated account numbers and magstripes ... it was subject to skimming vulnerabilities ... i.e. just copy/skim the complete magstripe and reproduce the magstripe information on a counterfeit card ... basically a form of replay attack.
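As a rough illustration of the mechanism and its weakness, here is a simplified sketch; it is not the real CVV algorithm (which is DES-based over PAN, expiry and service code), and the keyed hash, BIN lookup and names below are assumptions for illustration only.

    import hmac, hashlib

    BANK_KEYS = {"412345": b"example-bank-secret-key"}   # BIN -> issuer secret key

    def compute_cvv(pan, expiry, bank_key):
        # keyed hash over the magstripe data, truncated to a short check value
        return hmac.new(bank_key, (pan + expiry).encode(), hashlib.sha256).hexdigest()[:3]

    def validate_track(pan, expiry, cvv):
        bank_key = BANK_KEYS[pan[:6]]                    # route/lookup by BIN
        return hmac.compare_digest(compute_cvv(pan, expiry, bank_key), cvv)

    # the weakness: the check value is static, so a full copy of the track
    # (pan, expiry, cvv) replayed onto a counterfeit card still validates;
    # the countermeasure only defeats made-up account numbers.
    pan, expiry = "4123450000000000", "0607"
    skimmed = (pan, expiry, compute_cvv(pan, expiry, BANK_KEYS["412345"]))
    print(validate_track(*skimmed))                      # True - the replay succeeds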
the "yes card" compromise mentioned recently
http://www.garlic.com/~lynn/aadsm22.htm#20 FraudWatch - Chip&Pin, a new tenner (USD10)
http://www.garlic.com/~lynn/aadsm22.htm#23 FraudWatch - Chip&Pin, a new tenner (USD10)
was basically a variation on cvv scenario. the magstripe information was incorporated into a digital certificate ... with the certification authority's digital signature. the chip would present the digital certificate as evidence of being a valid chip. the pos terminal would validate the (certification authority's) digital signature on the digital certificate. proving a valid digital signature on a digital certificate was equivalent to proving a valid chip.
Basically, this generation of chip cards could be skimmed using the same technology that skimmed magstripes. Validating the digital signature on the digital certificate was equivalent to validating the cvv on the magstripe ... in both cases, that validation was treated as equivalent to proving a valid card (the process for the digital signature was more complex than any cvv ... but the difficulty of making an electronic copy was essentially identical)
The "yes card" label came because the POS terminals were programed that once it validated the digital certificate, they would accept the chip's statements. The (counterfeit) "yes cards" would say YES, the entered pin is correct; YES, the transaction is within the credit limit, YES, do an offline transaction (don't trouble the backend, online system with the transaction).
The authentication was subsequently enhanced so that newer cards would negotiate "dynamic" authentication (instead of simple "static" authentication). the POS terminal could send a random challenge ... which the chip digitally signs with its private key. So now the terminal verifies the certification authority's digital signature on the digital certificate ... and then uses the public key in the digital certificate to validate the digital signature on the random data.
this use of dynamic data (a digital signature on random data) is a countermeasure to skimming static data ... and effectively to various forms of replay attacks.
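For contrast, a minimal sketch of the dynamic challenge/response step, using Ed25519 from the Python 'cryptography' package purely for brevity; real card schemes use RSA-based dynamic data authentication, and the certificate chain is collapsed here into a single trusted public key.

    import os
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    card_private_key = Ed25519PrivateKey.generate()      # never leaves the chip
    card_public_key = card_private_key.public_key()      # carried in the card's cert

    challenge = os.urandom(16)                           # fresh per transaction
    signature = card_private_key.sign(challenge)         # computed inside the chip

    try:
        card_public_key.verify(signature, challenge)
        print("card authenticated: fresh signature over the terminal's challenge")
    except InvalidSignature:
        print("decline")
    # a signature skimmed from an earlier transaction fails, because the next
    # terminal's challenge is different random data.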
One of the scenarios looked at by the x9a10 working group (as part of the x9.59 financial standards work) was whether there is a mitm-attack against a scenario where authentication is performed separately from the actual transaction.
One possible scenario has a valid lost/stolen card paired with an electronic mitm ... the initial authentication is transparently passed to the lost/stolen card ... and then all subsequent communication is handled by the mitm ... as per the "yes card" scenario. A far-out scenario has the lost/stolen card connecting to some internet communication unit (built by the bad guys). the mitm units have wireless internet communication, and the challenge/digital-signature exchange is actually carried over the internet.
given a possible mitm-attack and the "yes card" scenario ... it isn't even necessary for a criminal organization to obtain a large number of lost/stolen cards.
misc. past posts mentioning various mitm-attacks
http://www.garlic.com/~lynn/subpubkey.html#mitmattack
the similarity here is that static data authentication continues to be in wide-spread use, enabling replay attacks and skimming/harvesting/phishing operations against a wide variety of end-nodes.
Posted by: Lynn Wheeler at March 26, 2006 07:09 PM
ref:
http://www.garlic.com/~lynn/aadsm22.htm#28 Meccano Trojans coming to a desktop near you
discovery by financial industry needing to design-in security from the start
http://www.securitypipeline.com/183702555
i was made aware of the necessity of doing designed-in security from the start as an undergraduate in the late 60s ... nearly 40 years ago.
it wasn't until several years later that i was made aware that a lot of stuff i was doing as an undergraduate was being used in places like this
http://www.nsa.gov/selinux/list-archive/0409/8362.cfm
lots of archived posts ... many from alt.folklore.computers about early days of cp67 in the 60s
http://www.garlic.com/~lynn/subtopic.html#fairshare
http://www.garlic.com/~lynn/subtopic.html#wsclock
and various other posts about the science center from the 60s and 70s
http://www.garlic.com/~lynn/subtopic.html#545tech
> discovery by financial industry needing to
> design-in security from the start
> http://www.securitypipeline.com/183702555
>
> i was made aware of the necessity of doing
> designed-in security from the start as an undergraduate
> in the late 60s ... nearly 40 years ago.
For fun and giggles, I say exactly the opposite in my series of rants known as GP (click on the link below).
I think both views are valid; it all comes down to one's starting assumptions.
Posted by: iang - GP link at March 28, 2006 10:46 AM
well emv thought they could effectively do a modern version of cvv using a chip card ... and got the "yes card" (even though it was well known before emv even started that static data skimming exploits had been around for over ten years).
so not only do you have to design-in security ... but you also have to understand the threat models. emv seemed to have started with a bunch of chip engineers that had never been exposed to infrastructure threats before (purely focused on chip-centric operation, without any understanding of terminal, network, backroom, etc. threats)
so one alternative ... is to go in with absolutely no security ... assuming you have people that have never dealt with threats before ... and have them learn threats on the job.
* Ian G.:
>> I say above "evidences an ability" because the view is that the kit is
>> not quite there yet, but it's near enough to be "just around the
>> corner."
I can assure you that the technology is real. However, unlike newcomers like Verisign/iDefense, those who actually work on mitigation share their findings only with banks and law enforcement, and not the general public. As a result, the attackers don't know why things don't work for them as expected, which has tactical advantages.
Posted by: (anon) at March 28, 2006 03:32 PM
anon writes
> I can assure you that the technology is real. However,
> unlike newcomers like Verisign/iDefense, those who
> actually work on mitigation share their findings only
> with banks and law enforcement, and not the general
> public. As a result, the attackers don't know why things
> don't work for them as expected, which has tactical
> advantages.
there are actually two sides. one is the technologists and defenders not divulging information to the attackers. the other has been the technologists and the attackers not divulging information to the public.
the previously referenced "yes card"
http://www.garlic.com/~lynn/aadsm22.htm#20 FraudWatch - Chip&Pin
had the attackers thoroughly understanding all the skimming attacks (since possibly the late 90s) ... but the information didn't appear to be readily available to the technologists and defenders attempting to understand the possible threat models. one might then be tempted to conclude that the chip-card solution recreated a more modern version of static cvv as a countermeasure to counterfeit cards built from scratch (as opposed to the long-time threat of counterfeit cards built from skimmed information).
there is the scenario of controlling public information to place attackers at a disadvantage. there has also frequently been the case that the attackers have all the information and the lack of public information places the defenders at a disadvantage. the comment in the "yes card" summary was that the information had been widely available to attackers for a number of years (although apparently not to the general public or to defenders designing countermeasures).
one possible way of characterizing the scenario is that the cvv and other static data solutions were countermeasures to attacks on cards. the problem was that the skimming and "yes card" attacks were using skimming to attack the terminals and infrastructure (using valid skimmed data for replay attacks against the terminals and infrastructure).
the requirement given in the mid-90s to the x9a10 standards working group for the x9.59 standard was to preserve the integrity of the financial infrastructure for ALL retail payments ... and a major recognized threat was skimming and data breaches (i.e. the "yes card" reference claims that the original chip&pin specification work was going on in the same period as the original x9.59 standards work).
part of the threat analysis was that account number use was grossly overloaded ... and therefore created significant vulnerabilities. The account number was required in a large number of places and business processes for the operation of payments (it has to be openly divulged and generally available).
At the same time, knowledge of the account number was frequently sufficient for performing a transaction ... static data (vulnerable to skimming and data breach threats), shared-secret, "something you know" authentication. from the 3-factor authentication model
http://www.garlic.com/~lynn/subpubkey.html#3factor
and shared-secret "something you know"
http://www.garlic.com/~lynn/subpubkey.html#secret
http://www.garlic.com/~lynn/subpubkey.html#harvest
the x9.59 standards process was to completely separate and differentiate the account number from any authentication role. the account numbers could be openly divulged and made generally available but had no authentication role (a business rule that account numbers used for x9.59 transactions could not be used for transactions that didn't have separate authentication). in the x9.59 standards process much of the current security breaches and data breaches become a moot issue since the information can't be used for fraudulent transactions (effectively replay attacks).
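A minimal sketch of that separation, with the same caveat: the message layout and names below are illustrative assumptions, not the actual x9.59 format. The account number is freely known, but a transaction is only honoured with a digital signature that skimmed or breached data cannot supply.

    import json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    account_key = Ed25519PrivateKey.generate()             # registered with the bank
    on_file_public_key = account_key.public_key()

    def sign_transaction(private_key, account, amount, payee):
        body = json.dumps({"account": account, "amount": amount, "payee": payee},
                          sort_keys=True).encode()
        return {"body": body, "signature": private_key.sign(body)}

    def bank_accepts(txn):
        try:
            on_file_public_key.verify(txn["signature"], txn["body"])
            return True
        except InvalidSignature:
            return False     # knowing the account number alone is not enough

    good = sign_transaction(account_key, "123-456789", 50, "grocer")
    print(bank_accepts(good))                               # True
    forged = {"body": good["body"], "signature": b"\x00" * 64}
    print(bank_accepts(forged))                              # False - breached data is useless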
the skimming, harvesting, and security/data breach scenarios involved replay attacks against the terminals and infrastructure (although they may possibly involve counterfeit cards built from skimmed data). this is totally distinct from other types of efforts that are designed to be countermeasures against attacks on valid cards. the issue was that if you could trivially skim information to be used in attacking the terminals and other parts of the infrastructure ... it was possible to totally bypass having to attack the cards themselves.
a similar situation may currently be playing out in various legislative bodies. with lost/stolen cards and online transaction infrastructure, you typically notice that the card has been lost/stolen, report the information, and the account number can be "turned off", possibly even before any fraud has actually occurred. in the skimming, harvesting, and security/data breach scenarios the individuals won't realize it has happened (until they start noticing fraudulent charges and/or when all the money is gone).
the cal. state legislature passed a bill that required individual notifications when breaches had occurred. that gave the individual a small additional advantage to report earlier than they would if they had to wait until all the money was gone. several other states then passed similar laws and/or are considering such laws. Work has also begun at the federal level. however, there have been some stories in the press about there being significant lobbying at the federal level that the federal notification bill would pre-empt any state legislation and significantly reduce the situations where notification was actually required (one might be tempted to draw parallels with the jokes about the "CAN-SPAM" legislation; bills having the exact opposite effect of what might be inferred from their title).
The excuse of controlling information available to attackers ... seems to have also resulted in some number of situations where the information has been readily available to the attackers (in some cases for many years) while being withheld from everyone else (at least some number of people developing threat models for use in designing countermeasures ... need a lot of information about existing attacks).
Note ... as mentioned, with respect to breach notification ... part of the x9.59 standards effort from the mid-90s was to eliminate a large number of the breach scenarios as representing an actual security threat and vulnerability. the skimming, harvesting, and breach scenarios (for replay attacks) are further aggravated by the fact that long-term data has shown that the majority of such fraud has involved insiders (having legitimate access to the information).
Posted by: Lynn Wheeler at April 1, 2006 11:10 AM
If this threat meme gains currency, do FC readers feel it will drive more interest in use of a VM sandbox for conducting online txns?
Sure, there are compromise opportunities in that VM, but do readers feel it sufficiently "reduces the attack surface"?
cf. http://www.vmware.com/vmtn/appliances/browserapp.html
Posted by: Matt Powers at April 6, 2006 08:18 PM
i started work on virtual machines nearly 40 years ago as an undergraduate in the late 60s. a lot of stuff that i did was picked up and shipped in the standard product ... which turned out to have a number of security minded customers. trivial reference to the period:
http://www.nsa.gov/selinux/list-archive/0409/8362.cfm
I also mentioned the above reference earlier in this same thread:
http://www.garlic.com/~lynn/aadsm22.htm#32 Meccano Trojans coming to a desktop near you
I've joked in recent years that as an undergraduate, I was getting requests (sometimes w/o being told the source) to implement one thing or another ... that turned out to be security related ... for some things that haven't bubbled up to the top of current lists of security concerns (in many cases, security had severely regressed ... things that we had thought were done and fixed over 30 years ago keep reappearing).
misc. other collected posts about virtual machine activity from the period
http://www.garlic.com/~lynn/subtopic.html#fairshare
http://www.garlic.com/~lynn/subtopic.html#wsclock
http://www.garlic.com/~lynn/subtopic.html#545tech
some of the virtual machine security benefits were because of strong partitioning ... any compromises tended to have their scope significantly limited.
another benefit was that the virtual machine kernel ... providing the strong partitioning ... was (at least at the start) small and compact ... and it was relatively straightforward to do a detailed audit. the interface and API semantics were also concise and the API implementation audit was also straightforward.
another thing that has come back into style is free software. it used to be that pretty much all software was free ... but in large part because of various gov. litigation ... "unbundling" (another term for charging for software) was announced on june 23rd, 1969. This was primarily for application software, the gov. being told that the kernel software still had to be free since it was part of the correct operation of the hardware.
Later in the 70s, I was given the opportunity to be the guinea pig for kernel priced software. This transition was somewhat the result of the appearance of clone mainframe manufacturers. They were shipping mainframe processors and telling their customers to just order the "free" kernel. Much of the resource manager software I had done as an undergraduate had been dropped in the morphing from 360s to 370s. My "new" resource manager (the 30th anniversary of its official product announce is coming up in a couple weeks) was going to be the guinea pig for kernel priced software, and I was rewarded with getting to spend a lot of time with the business planners and lawyers working out policy for pricing kernel software. misc. collected posts mentioning unbundling.
http://www.garlic.com/~lynn/subtopic.html#unbundle