I've mentioned several times being called in as consultants to a small client/server startup that wanted to do payment transactions on their server; they had also invented this technology called "SSL" that they wanted to use, and the result is now frequently called "electronic commerce".
Somewhat as a result, in the mid-90s, we were invited to participate in the x9a10 financial standard working group, which had been given the requirement to preserve the integrity of the financial infrastructure for *ALL* retail payments. We did some amount of detailed, end-to-end, threat & vulnerability studies of all the different kinds of retail payments (skimming, eavesdropping, data breaches, point-of-sale, internet, unattended, face-to-face, etc).
The result was the x9.59 financial transaction standard. It did nothing to address/prevent skimming, eavesdropping, data breaches and the other kinds of threats where the attacker uses the harvested information to perform fraudulent financial transactions. What it did do was make the information useless to the crooks. Some past references:
http://www.garlic.com/~lynn/x959.html#x959
Now the major use of "SSL" in the world today has been hiding transaction information (during transmission) as a countermeasure to attackers harvesting the information for fraudulent financial transactions. x9.59 doesn't stop or prevent the harvesting ... it just eliminates the usefulness of the information to the crooks (eliminates their ability to use the information for fraudulent financial transactions) ... and therefore eliminates the need to hide transaction information, and also eliminates that use of SSL.
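the idea can be sketched in a few lines of toy code (this is NOT the actual x9.59 message format or key management; the field names and flow are invented purely to illustrate why a harvested account number becomes useless when every transaction must carry the account holder's digital signature):

```python
# Toy illustration of authenticated transactions (NOT the x9.59 format):
# every payment order is signed with the account holder's private key,
# so an eavesdropper who harvests the account number -- or even a whole
# signed transaction -- cannot author a new fraudulent transaction.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # stays with the account holder
public_key = private_key.public_key()        # registered with the bank

transaction = b"account=1234; amount=10.00; payee=merchant-42; nonce=0001"
signature = private_key.sign(transaction)

# The bank verifies the signature before executing the transaction.
public_key.verify(signature, transaction)    # ok: raises no exception

# A crook who harvested the account number (and the old signature)
# tries to spend from the account:
forged = b"account=1234; amount=9999.00; payee=crook; nonce=0002"
try:
    public_key.verify(signature, forged)     # harvested signature won't match
except InvalidSignature:
    print("forged transaction rejected")
```

the nonce in the sketch stands in for whatever replay protection the real standard uses; the point is only that knowing the account number no longer lets the crook transact.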
Posted by Lynn Wheeler at January 29, 2012 10:05 AM

Getting to understand some Business Continuity Management (BCM) helped me personally a lot, since Business Impact Analysis (BIA) and Risk = Threat * Impact are a must there.
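As a minimal sketch of that formula in BIA terms (the threat likelihoods and impact figures below are made up purely for illustration):

```python
# Minimal sketch of Risk = Threat * Impact from a Business Impact
# Analysis. All likelihoods and impact figures are illustrative only.
threats = {
    # name: (annual likelihood, impact in currency units)
    "data breach": (0.05, 2_000_000),
    "office fire": (0.01,   500_000),
    "ddos outage": (0.20,   100_000),
}

for name, (likelihood, impact) in threats.items():
    risk = likelihood * impact   # annualized loss expectancy
    print(f"{name}: risk = {risk:,.0f} per year")
```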
Posted by Sami Hanninen at January 29, 2012 12:22 PM

for the fun of it ... overlap between continuous availability, electronic commerce, and supercomputers. we were doing this high-availability (environmental threats, failures, attacks, threats & vulnerability, everything possible) cluster scaleup effort ... which we called ha/cmp. some past posts
http://www.garlic.com/~lynn/subtopic.html#hacmp
this included inventing the terms disaster survivability and geographic survivability when we were out marketing ... some past posts on continuous availability
http://www.garlic.com/~lynn/submain.html#available
old post mentioning jan92 meeting in Ellison's conference room on cluster scaleup
http://www.garlic.com/~lynn/95.html#13
now two of the people in the meeting later leave and show up at a small client/server startup, responsible for something called commerce server (and want to do payment transactions on the server).
right at the end of jan92, the cluster scaleup part is transferred and we are told we can't work on anything with more than four processors ... possibly within hrs of the last email in this collection of cluster scaleup email
http://www.garlic.com/~lynn/lhwemail.html#medusa
and a couple of weeks later the cluster scaleup is announced as a supercomputer ... press item 17Feb92
http://www.garlic.com/~lynn/2001n.html#6000clusters1
some more press later that spring
http://www.garlic.com/~lynn/2001n.html#6000clusters2
this was a major motivation for deciding to leave
later we consult with the two people from ellison's conference room meeting (now responsible for "commerce server"). part of the electronic commerce work was something called the payment gateway ... it handled financial transactions between webservers and payment networks ... some past posts
http://www.garlic.com/~lynn/subnetwork.html#gateway
the gateway was replicated for no-single-point-of-failure, with multiple connections into different parts of the internet backbone ... as well as multiple layers of hardening against lots of kinds of attacks. One of the scenarios was that hardening required casting lots of things in concrete as a countermeasure to various kinds of attacks (involving corruption of service), while at the same time being able to change and adapt with agility when there were various kinds of failures.
@Iang,
One of the problems I've repeatedly seen with threat modeling is that people take it too personally. That is, their bias is towards what frightens them for whatever reason, not what actually might cause them harm. We see this in driving-v-flying, where people would rather drive 1000Km than fly, because they feel that they cannot be blown out of the sky by terrorists. But a look at the normalised risk of driving-v-flying would suggest they have no real perception of the actual probabilities of injury or death involved.
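A quick back-of-the-envelope normalisation makes the point (the per-km fatality rates below are rough, illustrative orders of magnitude only, not authoritative statistics):

```python
# Back-of-the-envelope risk normalisation for a 1000 km trip.
# The per-km fatality rates are rough, illustrative orders of
# magnitude only -- NOT authoritative statistics.
FATALITIES_PER_BILLION_KM = {
    "driving": 3.0,    # illustrative figure
    "flying":  0.05,   # illustrative figure
}

trip_km = 1000
for mode, rate in FATALITIES_PER_BILLION_KM.items():
    p_death = rate * trip_km / 1e9
    print(f"{mode}: ~{p_death:.2e} probability of death for the trip")

# Normalised per km travelled, driving comes out orders of magnitude
# riskier than flying, regardless of which mode *feels* scarier.
```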
Another problem I've often seen is being too specific about individual threats, as opposed to the class of threat they fall into. One result is that no action is taken, because each individual threat is below some threshold; but if the individual threats are amalgamated into a class of threat, the threshold for action may be easily surpassed.
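As a minimal sketch of that amalgamation effect (the threat names, risk scores and threshold are all invented for illustration):

```python
# Sketch: individually each threat sits below the action threshold,
# but amalgamated into a class they clearly surpass it. All names,
# scores and the threshold are invented for illustration.
ACTION_THRESHOLD = 10.0

threat_classes = {
    "credential theft": {
        "phishing": 4.0,
        "keylogger": 3.5,
        "shoulder surfing": 1.0,
        "password reuse": 4.5,
    },
}

for cls, members in threat_classes.items():
    each_below = all(score < ACTION_THRESHOLD for score in members.values())
    class_score = sum(members.values())
    print(f"{cls}: every individual threat below threshold = {each_below}")
    print(f"{cls}: amalgamated class score = {class_score} "
          f"(act: {class_score >= ACTION_THRESHOLD})")
```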
Another aspect of using classes of risk rather than individual risks is "getting mitigation for free". If you look at any business, there is a reasonable risk of fire, but for most businesses the risk of a bomb is very, very small. However, the mitigation for both is almost identical ("inform the authorities and evacuate"), thus a tiny change in the procedure covers both eventualities. As you mentioned 9/11, it shows this actually happening for real. One business very regularly carried out the "fire drill" and practiced the evacuation. Few if any businesses had the threat of "aircraft strike" in their assessments, and this business did not either. However, when an "aircraft strike" did happen, the well-practiced fire drill evacuation was followed, and the survival rate for that company was a lot higher than for similarly placed businesses that did not do regular fire drill practice.
The use of classes of threat can also be a good indicator of when your mitigation strategy is wrong. As mentioned by one of the other commenters, sometimes you have to have a serious re-think and thus a complete change of direction.
But that aside, when it comes to ICT security there is a very real difference between threats to tangible physical objects and threats to intangible information objects. And all too often you see the wrong analysis being used to assess the risk. That is, underlying axioms (assumptions) that only hold in the physical world get applied without question to the information world.
The most obvious example of this is the difference in theft. You usually know when a physical object has been stolen because you no longer have it in your possession. Not so with an information object: when it is subject to theft, what is actually stolen is a copy of the information. You still have the original form of the information, so you remain blissfully ignorant of the theft until some other event acts as an indicator that a theft might have occurred.
The next most obvious is locality or distance. For the theft of tangible objects you have to be local to the object to steal it. With intangible objects you can be on the other side of the globe, because you co-opt the victim's systems into acting as the local thief.
Then there are "force multipliers". In the physical world you are constrained by physical limitations, one of which is how much physical work you can do. The physical-world solution to this problem is to use energy to build machines that use more energy to do greater levels of work. Thus the limit on what an individual can do in the physical world is constrained by their access to energy and to the manufacture of force-multiplying tools. In the intangible world, force multipliers are just information; the cost of duplicating them is minimal, and from an attacker's point of view virtually zero, because they don't pay for the energy or systems involved in the duplication or operation of the force multiplier.
Failure to recognise the axioms derived from the physical world, and their corresponding limitations, when applied to the information world will lead to the failure of any measures that rely on the limitations imposed by those axioms.
Posted by Clive Robinson at January 30, 2012 02:18 AM

How could we meaningfully model individual users' risks when we develop standard protocols or COTS software? These will be used in a variety of contexts with a broad range of risk profiles. I think threat modeling makes sense in this context, be it only as a set of documented design assumptions that the individual user could match against a risk profile. If done properly (I just hypothesize that it could be done properly), a threat model may even constitute a risk model, only at a higher level of abstraction, specifying the "average" or "typical" risks that prevail in the user population. If we interpret the relationship between threats and risks this way, the crucial question becomes how much variance there is in the risk profiles and how we can find out how close to or how far from the threat model an individual user is.
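One toy way to make that last question concrete (the risk dimensions, the 0-10 scores and the choice of Euclidean distance are all invented purely for illustration):

```python
# Toy sketch: measure how far an individual user's risk profile sits
# from the "average" profile a threat model assumes. The dimensions,
# 0-10 scores and Euclidean distance metric are invented purely for
# illustration.
import math

# Risk profile the threat model was designed around.
threat_model = {"phishing": 5, "device theft": 3, "nation-state": 1}

# Two hypothetical users of the same COTS software.
home_user  = {"phishing": 6, "device theft": 4, "nation-state": 0}
journalist = {"phishing": 7, "device theft": 6, "nation-state": 9}

def distance(profile_a, profile_b):
    """Euclidean distance between two risk profiles over shared dimensions."""
    return math.sqrt(sum((profile_a[d] - profile_b[d]) ** 2 for d in profile_a))

print("home user :", distance(threat_model, home_user))    # close to the model
print("journalist:", distance(threat_model, journalist))   # far from the model
```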
Posted by Sven Türpe at February 3, 2012 11:34 AM

"I reject your reality and substitute my own!" :)
http://newschoolsecurity.com/2012/02/on-threat-modeling/
Posted by Adam at February 6, 2012 11:15 AM