May 31, 2005

IFCA's Discussion Maillist for Financial Cryptography

Hello and welcome to all on the IFCA discussion list.

As Ray posted this last weekend, he has invited me to take a more active role in the list. Although termed 'moderator', I would like to think of it as more of an encouraging role than a moderating one.

To that end, I've added the IFCA discussion list to the notifications list on the Financial Cryptography blog that I keep at http://financialcryptography.com/ . This means that any blog posts will be forwarded to the list, one of which you will just have seen. On a long-term average, that's about one per day.

For those who follow the blog - this means we now have a more conventional forum on which to discuss the posts. We've all recognised that the blog's comment mechanism is less dynamic than a mail list, as we don't get to see other people's posts. Hopefully the IFCA discussion list will rectify that lack.

If you wish to subscribe or unsubscribe from the IFCA discussion list, here are the vital statistics as copied from every post:

> fc-discuss mailing list
> fc-discuss@ifca.ai
> http://mail.ifca.ai/mailman/listinfo/fc-discuss

Ray has installed the familiar MailMan interface which allows easy follow-your-nose administration. If you are subscribed to both you will see two copies of each post, so let me know if you want to unsubscribe from the blog list (which is manually administered only).

Posted by iang at 10:33 AM | Comments (0) | TrackBack

May 27, 2005

Loss Expectancy in NPV calculations

Mr WiKiD does some hypothetical calculations to show that some security solutions could cost more than they benefit, since everything you add to your systems increases your chances of loss.

Is that right? Yes, any defence comes with its own weaknesses. Further, defences combine to be stronger in some places and weaker in others.

Building security systems is like tuning speaker systems (or exhaust mufflers on cars). Every time we change something, the sound waves move around: some strengths combine to give us better bass at some frequencies, but worse bass at others. If we are happy to accept the combination of waves at some frequencies and suppress the others through, say, heavy damping, then that's fine. If not, we end up with a woefully unbalanced sound.

Security is mostly done like that, but without worrying about where the weaknesses combine. That's probably because the set of ears listening to security isn't ours, it's the attackers'.

So, getting back to WiKiD's assumptions, he added the Average Annual Loss Expectancy (AALE) into his NPV (Net Present Value) calculations, and because this particular security tool had a relatively high breach rate, the project went strongly negative.

Bummer. But he homes in on an interesting thought - we need much more information on what the Loss Expectancy is for every security tool. I muse on that hypothetically in a draft paper, and having written up the results of such issues, I can see an immediate problem: how are we going to list and measure the Loss Expectancy product by product, or even category by category?
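The effect is easy to sketch in a few lines. The figures below are hypothetical placeholders of my own, not WiKiD's actual numbers: a project whose NPV looks healthy until an annual loss expectancy is subtracted from each year's benefit.

```python
def npv(rate, cashflows):
    """Net present value of a series of annual cashflows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

rate = 0.10
# Hypothetical security project: $50k up front, $25k/year gross benefit.
gross = [-50_000, 25_000, 25_000, 25_000]

# Annual Loss Expectancy = breach probability x cost per breach.
# Here we assume a 20% annual breach rate at $80k per breach.
ale = 0.20 * 80_000
net = [gross[0]] + [cf - ale for cf in gross[1:]]

print(npv(rate, gross) > 0)   # healthy without loss expectancy
print(npv(rate, net) < 0)     # goes negative once ALE is counted
```

The same three-line `npv` function works for any security tool; the hard part, as ever, is getting a defensible number for the breach rate.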

Posted by iang at 10:03 PM | Comments (2) | TrackBack

America asks "Why us?"

Adam points to something I've been stating for a year or more now: why is the current security crisis happening in the USA and not elsewhere? So asks a nervous troll on Bruce Schneier's blog:

This isn't intended as a troll, but why is it that we never hear about this sort of problem in Europe? Is it because we simply don't hear about overseas breaches, or do the European consumer and personal privacy laws seem to be working? How radical a rethink of American business practices would be required if we _really_ did own our personal data....

Posted by: Anonymous at May 24, 2005 09:41 AM

Go figure. If you need to ask the question, then you're half way to the answer - start questioning the crap that you are sold as security, rights to easy credit, and all the other nonsense things that those who sell thrust down consumers' throats. Stop accepting stuff just because it sounds good or because the guy has a good reputation - why on earth does any sane person think that a social security number is likely to protect a credit system?

Adam also points to an article in which Richard Clarke, one-time White House counter-terrorism and cyber-security chief, points out that we should plan for failure. Yes indeed. Standard systems practice! Another thing you shouldn't accept is that you've just been offered a totally secure product.

With most of the nation's critical infrastructure owned by private companies, part of the onus falls to companies' C-level executives to be more proactive about security. "The first thing that corporate boards and C-level officials have to accept is that they will be hacked, and that they are not trying to create the perfect system, because nobody has a perfect system," he says.

In the end, hackers or cyberterrorists wanting to infiltrate any system badly enough will get in, says Clarke. So businesses must accept this and design their systems for failure. This is the only sure way to stay running in a crisis. It comes down to basic risk management and business continuity practices.

"Organizations have to architect their system to be failure-tolerant and that means compartmentalizing the system so it doesn't all go down... and they have to design it in a way that it's easy to bring back up," he says.

Relying too heavily on perimeter security and too little on additional host-based security will fall short, says Clarke. Organizations, both public and private, need to be much more proactive in protecting their networks from internal and external threats. "They spend a lot of money thinking that they can create a bullet-proof shield," he says. "So they have these very robust perimeters and nothing on the inside."

It's a long article, the rest is full of leadership/proactive/blah blah and can be skipped.

And Stefan rounds out today's grumbles with one about one more security product in a long list of them. In this case, IBM has (as Stefan claims) a simplistic approach - hash it all before sharing it. Nah, won't work, and won't scale.

Even IBM's notion of salting the hashes won't work, as the salt becomes critical data. And once that happens, ask what happens to your database if you were to reinstall and lose the salt? Believe me, it ain't pretty!
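The failure mode is easy to demonstrate. Here's a sketch of the general salted-hash idea, my own illustration with a hypothetical record ID and an HMAC as the keyed hash - not IBM's actual scheme:

```python
import hashlib
import hmac
import os

def pseudonym(record_id: bytes, salt: bytes) -> str:
    # Keyed hash of the identifier; without the salt, the mapping from
    # real IDs to pseudonyms cannot be reproduced.
    return hmac.new(salt, record_id, hashlib.sha256).hexdigest()

salt = os.urandom(16)
token = pseudonym(b"customer-42", salt)

# Same salt: the same customer always maps to the same token, so
# records can still be matched across the shared databases.
assert pseudonym(b"customer-42", salt) == token

# Reinstall and lose the salt: a fresh salt yields unrelated tokens,
# orphaning every pseudonym already stored. It ain't pretty.
assert pseudonym(b"customer-42", os.urandom(16)) != token
```

Which is the point: the salt is now a key, and keys need backup, escrow, rotation - all the critical-data management the hashing was supposed to avoid.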

Posted by iang at 02:49 PM | Comments (0) | TrackBack

May 25, 2005

The Crypto Wars are On/Off/On/Off...

The Brits have let out a cheer and declared the Crypto Wars over. And "we won", they say, in a press release from the Foundation for Information Policy Research, according to a post on politech:

The Crypto Wars Are Over!

The "crypto wars" are finally over - and we've won!

On 25th May 2005, Part I of the Electronic Communications Act 2000 will be torn out of the statute book and shredded, finally removing the risk of the UK Government taking powers to seize encryption keys.

I don't think that's an accurate assessment. In fact I think it's dead wrong. Read on for today's news....

Over in the US there are worrying signs of more regulations coming to re-criminalise mathematics. Already, there are moves to require foreign students to be "licensed to operate" any sensitive equipment. ISPs can now be served with spy&gag orders, sans court approval, and nobody has ever debunked the conflicts of interest that now bedevil the certificate authorities in positions of root power.

Today's news, from CACert, is that mere possession of PGP is to be taken as intent to commit a crime: "What has to be a huge blow for anyone with PGP or virtually any other encryption program on their computer (in fact most computers these days come with cryptographic programs pre-installed). A man found guilty on child pornography related charges was also found to have PGP software on his system, and a court ruled that this was admissible as intent to commit and/or hide crimes in his case. This has huge ramifications if you are found guilty of a crime and then they find any cryptography software installed on your computer."

I hoped that was wrong. Nope, it's true. And it was the Appeals Court that said it!

The general message is clear: if you think the war's over, that's because you're fighting the last war. Turn around and meet your new enemy.


PS: Then comes this new twist from Jim:

Now hackers can hold your files hostage...

By Ted Bridis

Washington - Computer users already anxious about viruses and identity
theft have a new reason to worry: hackers have found a way to lock up the
electronic documents on your computer and then demand $200 (about R1 200)
over the Internet to get them back.

Security researchers at the San Diego-based Websense uncovered the unusual
extortion plot when a corporate customer they would not identify fell
victim to the infection, which encrypted files that included documents,
photographs and spreadsheets.

A ransom note left behind included an e-mail address, and the attacker
using the address later demanded $200 for the digital keys to unlock the
files.

"This is equivalent to someone coming into your home, putting your
valuables in a safe and not telling you the combination," said Oliver
Friedrichs, a security manager for Symantec Corporation.

....


Here's the FIPR press release, or at least most of the copy I received.


Press release - Foundation for Information Policy research

Release time: 00.01, 25th May 2005


The Crypto Wars Are Over!


The "crypto wars" are finally over - and we've won!

On 25th May 2005, Part I of the Electronic Communications Act 2000
will be torn out of the statute book and shredded, finally removing
the risk of the UK Government taking powers to seize encryption keys.

The crypto wars started in the 1970s when the US government started
treating cryptographic algorithms and software as munitions and
interfering with university research in cryptography. In the early
1990s, the Clinton administration tried to get industry to adopt the
Clipper chip - an encryption chip for which the government had a
back-door key. When this failed, they tried to introduce key escrow -
a policy that all encryption systems should leave a spare key with a
`trusted third party' that would hand the key over to the FBI on
demand. They tried to crack down on encryption products that did not
contain key escrow. When software developer Phil Zimmermann developed
PGP, a free mass-market encryption product for emails and files, the
US government even started to prosecute him, because someone had
exported his software from the USA without government permission.

In its dying days, John Major's Conservative Government proposed
draconian controls in the UK too. Any provider of encryption services
would have to be licensed and encryption keys would have to be placed
in escrow just in case the Government wanted to read your email. New
Labour opposed crypto controls in opposition, which got them a lot of
support from the IT and civil liberties communities. They changed
their minds, though, after they came to power in May 1997 and the US
government lobbied them.

However, encryption was rapidly becoming an important technology for
commercial use of the Internet - and the new industry was deeply
opposed to any bureaucracy which prevented them from innovating and
imposed unnecessary costs. So was the banking industry, which worried
about threats to payment systems from corrupt officials. In 1998, the
Foundation for Information Policy Research was established by
cryptographers, lawyers, academics and civil liberty groups, with
industry support, and helped campaign for digital freedoms.

In the autumn of 1999, Tony Blair finally conceded that controls would
be counterproductive. But the intelligence agencies remained nervous
about his decision, and in the May 2000 Electronic Communications Act
the Home Office left in a vestigial power to create a registration
regime for encryption services. That power was subject to a five year
"sunset clause", whose clock finally runs out on 25th May 2005.

Ross Anderson, chair of the Foundation for Information Policy Research
(FIPR) and a key campaigner against government control of encryption
commented, "We told government at the time that there was no real
conflict between privacy and security. On the encryption issue, time
has proved us right. The same applies to many other issues too - so
long as lawmakers take the trouble to understand a technology before
they regulate it."

Phil Zimmermann, a FIPR Advisory Council member and the man whose role
in developing PGP was crucial to winning the crypto wars in the USA
commented, "It's nice to see the last remnant of the crypto wars
in Great Britain finally laid to rest, and I feel good about our win.
Now we must focus on the other erosions of privacy in the post-9/11
world."

Notes to Editors:

1. The Foundation for Information Policy Research
is an independent body that studies the
interaction between information technology and society. Its goal is to
identify technical developments with significant social impact,
commission and undertake research into public policy alternatives, and
promote public understanding and dialogue between technologists and
policy-makers in the UK and Europe.

2. The late Professor Roger Needham, who was a founder and trustee of
FIPR, as well as being Pro-Vice-Chancellor of Cambridge University, a
lifelong Labour party member and, for the last five years of his life,
Managing Director of Microsoft Research Europe, once said: `Our enemy
is not the government of the day - our enemy is ignorance. If
ignorance and government happen to be co-located, then we'd better do
something about it.'

3. The Electronic Communications Act 2000 received Royal Assent on
the 25th May 2000. Part I provides for the Secretary of State to create
a Register of Cryptography Support Services. s16(4) reads: "If no order
for bringing Part I of this Act into force has been made under
subsection (2) by the end of the period of five years beginning with the
day on which this Act is passed, that Part shall, by virtue of this
subsection, be repealed at the end of that period."

4. The crypto wars ended in the USA when Al Gore, the most outspoken
advocate of key escrow, was found by the US Supreme Court to have lost
the presidential election of 2000.

5. The last battle in the crypto wars to be fought on UK soil was
in the House of Lords over the Export Control Act 2002. In this bill,
Tony Blair's government took powers to license the export of intangibles
such as software, where previously the law had only enabled them to
criminalise the unlicensed export of physical goods such as guns. This
caused resistance from the IT industry, and also raised the prospect
that scientific communications would become subject to licensing. FIPR
organised a coalition of Conservative, Liberal and crossbench peers to
insert a research exemption (section 8) into the Act, and an Open
General Export License was created for developers of crypto software.

6. Phil Zimmermann is arriving in London on the 25th May to take par
...

Posted by iang at 01:37 PM | Comments (2) | TrackBack

May 21, 2005

To live in interesting times - open Identity systems

As the technical community is starting to realise the dangers of the political move to strong but unprotected ID schemes, there is renewed interest in open Internet-friendly designs to fill the real needs that people have. I've written elsewhere about CACert's evolving identity network, and here's another that just popped up: OpenId. Eagle eyed FCers will have noticed that Dani Nagy's paper in FC++ on generating RSA keys from passphrases speaks directly to this issue of identity (and I'm holding back from nagging him about the demo .... almost...). Further, and linked more closely again, there is renewed interest in things like the difficult problem of privacy blogging. Oh, and Stefan says it's not as easy as just using keys.

In a sense this is feeling a little like the 90s. Then, the enemy was the US government as it tried to close down the Pandora's box of crypto, and the net with it. That war sparked bursts of innovation, epitomized by the hard, lonely fight of PGP and the corporate coattails rise of SSL's bid for PKI dominance.

President Clinton signed that war into the history books in January 2000 when he gave permission for Americans to export free and open source crypto software **. Now the battle lines are being sketched out again with last month's signing into law of the REAL ID national identity card in the United States. Other Anglo holdouts (UK, Canada, Australia, NZ, ...) will follow in time, and that would put most of the OECD under one form of hard identity token or another.

The net effect for the net is likely bad. The tokens being suggested in the US will not be protected, as the Europeans know how to do; one only needs to look at phishing and the confusion called the social security number to see that. Which predicts that the ID schemes will actually be only mildly useful to Internet ventures, and probably dangerous, given the distance and the propensity of identity thieves (a.k.a. fraudulent impersonators).

Yet something will be needed to stand in the place of the identity cards. It isn't good enough to point out the flaws in a system; one must have something one can claim - hopefully honestly - is better. So I feel the scene is set for a lot of experimentation in the field of strong authentication and identity systems. Hopefully, more along the lines of Stefan's privacy-preserving notions and Dave Birch's frequent writings, and less along the lines of corporate juggernauts like Passport and Liberty Alliance. Those schemes look more like camouflaged hoovers for data mining than servants of you, the user.

We may yet get to live again in interesting times, to use the Chinese proverb.

Postscript **: But not closed source, not paid for, not hardware and not collaboration or teaching! Gilmore and others report signs that the United States Government is perhaps again renewing its War on Mathematics by clamping down on foreign students in Universities.

Posted by iang at 09:07 AM | Comments (3) | TrackBack

May 18, 2005

$850 million email had Perfect Forward Secrecy

For those who consider the goal to be Perfect Forward Secrecy (or PFS as the acronymoids have it), think about the $850 million punitive damages awarded by a jury in Florida against Morgan Stanley and to the former owner of Sunbeam.

"The billionaire investor started the trial with an advantage after the judge in the case punished Morgan Stanley for failing to turn over e-mails related to the 1998 Coleman deal."

"Judge Elizabeth Maass ordered jurors to assume Morgan Stanley helped Sunbeam inflate its earnings. To recover damages, Perelman only had to prove that he relied on misstatements by Morgan Stanley or other parties to the transaction about Sunbeam's finances."

"Maass also allowed Perelman's lawyers to make reference to the missing e-mails in the punitive damage phase of the case. Jurors examined whether any bad conduct by Morgan Stanley merited a punishment in addition to the compensation they gave Perelman to offset his losses."

This is the core problem with PFS, of course - the promise that your emails or IMs won't be provable in the future. It matters not that someone eavesdropping can't prove anything cryptographically, because that simply doesn't matter in the scheme of things. The node is where the threat is, and you are thousands of times more likely to come to blows with your partner than with any other party.

Your partner has the emails. So whatever you said, no matter how secret, gets plonked in front of you in the court case, and you've got a choice: rely on PFS and lie about the emails, or say "it's a fair cop, I said that!" Unless this issue is addressed, PFS doesn't really affect the vast majority of us.
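For the crypto side of the argument, a toy Diffie-Hellman sketch (parameters of my own choosing, far too small to be secure, and not anyone's production code) shows exactly what forward secrecy does and does not buy:

```python
import secrets

# Toy Diffie-Hellman over the Mersenne prime 2**127 - 1. Illustrative
# only: a real system would use a vetted group of proper size.
P = 2**127 - 1
G = 3

def run_session():
    # Each session draws fresh *ephemeral* exponents and discards them
    # afterwards; that disposal is the whole of the PFS promise.
    a = secrets.randbelow(P - 2) + 1          # Alice's ephemeral secret
    b = secrets.randbelow(P - 2) + 1          # Bob's ephemeral secret
    A, B = pow(G, a, P), pow(G, b, P)         # public values on the wire
    key_alice, key_bob = pow(B, a, P), pow(A, b, P)
    assert key_alice == key_bob               # both ends derive the same key
    return key_alice

k1 = run_session()
k2 = run_session()
# An eavesdropper who records the traffic and later seizes long-term
# keys still can't recover k1 or k2. But your partner kept the
# *plaintext* - no key agreement scheme protects you from the other
# end of the conversation.
print(k1 != k2)   # fresh key material every session
```

The property is real, but it only ever defended against the wire, never against the node - which is exactly where the Morgan Stanley emails came from.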

Unfortunately addressing second-party-copies is very hard. I had mused that it would be possible to mark emails as "Without Prejudice" which is a legal term signalling that these conversations were to be kept out of court. Nicholas Bohm set me straight there when he explained that this only applied *after* you have entered dispute, by which time you are taking all care anyway.

Alternatively it may be possible to enter into what amounts to a contract with your companion to agree to keep these conversations private and ex-judicial. That might work, but it has two big, powerful limitations: it doesn't affect other parties seizing them, and it doesn't apply to criminal cases.

Still there may be some merit in that and it will be interesting to experiment with once I get back into IM coding mode. Meanwhile, I'm wondering just what we have to do to convince Morgan Stanley to get into a $850 million tussle with us ... nice work if you can get it.

Posted by iang at 07:40 PM | Comments (1) | TrackBack

May 16, 2005

SSL for FC - not such good news

Some random notes on the adventure of securing FC with SSL. It seems that SSL still remains difficult to use. I've since found out that I was somewhat confused when I thought we could use one Apache, one IP# and share that across multiple SSL protected sites.

Quick brief of the problem: the SSL tunnel is set up and secured _before_ the HTTP headers tell the server which site is required. So there is no way to direct the SSL tunnel to be secured with a particular certificate, such as the one for FC as opposed to the one for enhyper. That's a fundamental issue. It means that the cert used is fixed per IP#/port, and it contravenes Iang's 1st law, 2nd hypothesis of good security protocols, to wit, "a good security protocol is divided into two halves, the first of which says to the second 'use this key.'" What arrogance!

I thought this was fixed in TLSv1 (which hereafter I'll take as including SSLv3). Yes, in this way: TLSv1 has a new extension in the client hello message that permits the browser to hint at which site it wants. No, in this way: neither Mozilla browsers nor Apache web servers implement the new extension! Worse, there is no agreement in the SSL and HTTPS sector on how to do this.
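The server's dilemma can be modelled in a few lines. This is a toy dispatch table, not real TLS code; the hostnames and cert labels are made up for illustration:

```python
# Certificates the server holds, one per administrative site.
certs = {
    "financialcryptography.com": "cert-for-fc",
    "enhyper.com": "cert-for-enhyper",
}

def pick_cert_pre_extension(ip, port, default="cert-for-fc"):
    # Without the hello extension, the handshake completes before any
    # HTTP Host: header arrives, so each IP#/port pair is stuck
    # presenting one fixed certificate, whichever site was wanted.
    return default

def pick_cert_with_extension(server_name):
    # With the client hello naming the site up front, the server can
    # choose the matching certificate per virtual host.
    return certs[server_name]

print(pick_cert_pre_extension("203.0.113.7", 443))   # cert-for-fc, always
print(pick_cert_with_extension("enhyper.com"))       # cert-for-enhyper
```

The first half of the handshake saying "use this key" to the second half is exactly what the extension supplies - and exactly what the deployed browsers and servers don't yet speak.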

Sheesh!

If you, like me, happen to think that more SSL will be useful in dealing with phishing (because Alice can use the cert/CA as a handle to her favourite bank), then you can vote early and often for these two bugs. (You will need an account on Mozilla's bugzilla system, which I use mostly for an occasional vote.)

If on the other hand you feel that phishing is ok, perhaps because the security system really wasn't meant to protect people from doing stupid things, or because money and fools should be parted, or because it hasn't happened to you yet, or for any of a myriad of other reasons, then ... don't vote (If I can figure out the same thing for Apache I'll post it.)

It seems that the desire to use SSL widely is shared by a brave few (other) fools over at CACert, who have set up a Virtual Hosts task force! Good for them; we will no longer have to worry about reaching old age in loneliness, muttering to ourselves that security wasn't meant to be this hard. CACert are experimenting with how to do this, but so far the work-around seems to be to issue a cert that includes all the names of all the sites. Which means there is now no certificate-based independence between administrative sites - definitely a kludge.

While on the topic of SSL & security, CACert's blog reports on a meeting between Comodo and the Mozilla Foundation to mount a common front against phishing. "Some CAs and browser manufacturers" have been invited. More power to them, more talk is needed, spread the message. Start with:

"We have a problem with phishing."

and carry on to:

"It's our problem."

Gervase has done a white paper for the CA/browser summit called "Improving Authentication On The Internet." Good stuff.

Posted by iang at 02:34 PM | Comments (2) | TrackBack

May 13, 2005

Penny-eating worms, and how crypto should be

Research on a potential SSH worm is reported by Bruce Schneier - this is academic work in advance of a threat, looking at the potential for its appearance in the future. Unusual! There is now optional protection available against the threat, so it will be interesting to see whether it deploys as and when the threat arises.

Ping writes to say that he "recently started a weblog on usable security at http://usablesecurity.com/":

"I haven't seen any other blogs on the topic, so it seemed like a good idea to get one going. I invite you to read and comment on the entries there, and i hope you will find them interesting."

Great stuff. HCI/security is at the core of phishing. Which also brings us to an article about how the KDE people and the usability people eventually came to see eye to eye and learn respect for each other's strangenesses:

"When trying to set up a mail account with an OpenPGP key in KMail, you have to make settings in three different configuration modules. Users have problems understanding that. This is not a technical issue, because once the user discovers how it works he can set everything up. But to make the developers understand that users might have a problem with the workflow, you have to explain the context of usage and the way common users think."

Which brings me to something I've been meaning to shout about - when I (finally) got KDE 3.4 compiled and running, and started using KMail (Thunderbird has too large a footprint), the GPG encryption feature just started working!

I'll say that again: encryption using my GPG Keys JUST STARTED WORKING!!! Outstanding! That's how encryption should be - and I don't know how they did it, but read the article for some clues.

It's not perfect - for example, the default is some harebrained attachment scheme that nobody I know can read, so I have to remember each time to select "Inline (deprecated)", which is of course how it should be sent out for cross-platform independence. But it sure beats vi&cut&paste.

Posted by iang at 03:14 PM | Comments (0) | TrackBack

May 12, 2005

Microsoft Rumours Lacking Strong Digital Signature

I've just been reminded of Stefan's post that Microsoft are looking at blinded signatures. To add to that, I've heard related rumours. Firstly, that they are planning to introduce some sort of generic blinding signature technology in the (northern) summer browser release ("Longhorn"). That is, your IE will be capable of signing for you when you visit your bank.

Now, anyone who's followed the travails and architectural twists and turns of the Ricardian contract - with its very complete and proven signature concepts - will appreciate that this is a hard problem. You don't just grab a cert, open a document, slap a sig on it and send it off. Doing any sort of affirming signature - one where the user or company is taking on a commitment or a contract - is a serious multi-disciplinary undertaking that really challenges our FC neurons to the full. Add that to Microsoft's penchant for throwing any old tech into a box, putting shiny paint on it and calling it innovation, and I fear the worst.

We shall see. Secondly, buried in a totally different area (discipline?), there is yet another set of rumours in interesting counterpoint. It appears that Microsoft is under the bright lights of the Attorneys General of the US of A over spyware, malware, and matters general in security. (Maybe phishing, I wasn't able to tie that one down.) And this time, they have the goods - it appears that not only is Microsoft shipping insecure software, which we all knew anyway, but they are deliberately leaving back doors in for future marketing purposes, and have been caught in the act.

Well, you know how these rumours are - everyone loves to poke fun at the big guy. So probably best to write this lot down as scurrilous and woefully off the mark. Or so I hope. I fail to see how Microsoft are ever going to win back the confidence of the public if they ship signing tech with an OS full of backdoors. What do we want to sign today?

Addendum: El Reg pointed at this amusing blog where Microsoft conveniently forgets which platforms out there are being hacked.

Posted by iang at 06:59 PM | Comments (1) | TrackBack

Pareto-Secure

To round off our first foray into peer-review, FC++ presents a paper of my own observations on a term dear to our hearts, if not our heads. Can an economics framework explain what we mean by security?

Pareto-Secure

What do people mean when they say something is secure?

Shamir's 1st law says absolute security does not exist, yet the popular press and the security buying process are inundated with secure products. For some of these products there may be merit in the term, but for many it is more debatable. Such differences of meaning and applicability suggest low efficiency in the market for security, as well as a blackspot on security's claim to be a robust science.

One way to define 'secure' is to apply the economics theory and terminology of Pareto efficiency. This simple structure gives an easy way to categorise and choose among alternatives, and identifies when an optimum has been reached. We suggest that this meaning may already be in widespread usage, intuitively, among security practitioners and the popular press.
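To make the idea concrete, here is a minimal sketch of Pareto dominance over hypothetical product scores. The names and numbers are invented for illustration; the paper's actual framework may differ:

```python
def dominates(a, b):
    """a Pareto-dominates b if it is at least as good on every attribute
    and strictly better on at least one (higher scores are better)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_frontier(products):
    """The products (name -> attribute scores) not dominated by any other."""
    return {
        name for name, score in products.items()
        if not any(dominates(other, score)
                   for o_name, other in products.items() if o_name != name)
    }

# Hypothetical scores: (confidentiality, usability, cost-effectiveness).
products = {
    "A": (3, 1, 2),
    "B": (2, 3, 2),
    "C": (2, 1, 1),   # dominated by A, so not on the frontier
}
print(sorted(pareto_frontier(products)))   # ['A', 'B']
```

A and B each beat the other on some attribute, so neither dominates; only products like C, worse across the board, can be struck off - which is the sense in which a frontier product might honestly be called 'secure'.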

Full Paper

As always, comments welcome. For all FC++ discussions, we are interested in where you think these papers might be better published.

Posted by iang at 01:30 AM | Comments (2) | TrackBack

May 11, 2005

FUDWatch - VoIP success attracts the security parasites

VoIP has been an unmitigated success, once Vonage and Skype sorted out the basic business models that their predecessors (remember SpeakFreely, PGPFone?) did not get right. And everyone loves a story of connivery and hacker attacks. Now the security industry is ramping up to create Fear, Uncertainty and Doubt - better known as FUD - in the VoIP industry.

"As VoIP is rolled out en masse, we're going to see an increased number of subscribers and also an increased number of attackers," says David Endler, chairman of the VoIP Security Alliance (VOIPSA), a recently formed industry group studying VoIP security.

Consider FUD as the enabler for a security tax. We can't of course collect the revenues we deserve unless the public fully understands what dangers they are in, surely? Right? Consider these past disastrous precedents:

The VoIP experience could parallel the Wi-Fi example. As Wi-Fi began to gain momentum a few years ago, an increasing number of vulnerabilities came to light. For example, unencrypted wireless traffic could be captured and scanned for passwords, and wireless freeloaders took advantage of many early networks that would let anyone sign in. At first, Endler says, only the technological elite could take advantage of security holes like these. But before long the "script kiddies"--those who lack the skills to discover and exploit vulnerabilities on their own, but who use the tools and scripts created by skilled hackers--joined in.

Kick me for being asleep, but that's another FUD case right there. The Wi-Fi evolution has been one of overwhelming benefit to the world, and even in the face of all these so-called vulnerabilities, actual bona fide verified cases of losses are as rare as hen's teeth.

In all seriousness, we've been through a decade of Internet security now and it's well past time to grow up and treat security as an adult subject. What's the threat? Someone's listening to your phone call? Big deal, get an encrypted product like Skype. Someone's DOSing your router? Wait until they go away.

There just doesn't seem to be an economic model for these threats. We know that phishing and spam are fundamentally economic. We know that cracking boxes was originally for fun and education, and is now in support of phishing and spam. Exceptions to the economic motive in hacking and cracking aside, there has to be a motive, and listening in to random people's phone calls just doesn't cut it.

Addendum. Ohmygawd, it gets worse: Congress is asked to protect the Internet for VoIP. Hopefully they are busy that day. Hmm, looks like the wire services have censored it. Try this PDF.

Posted by iang at 08:30 AM | Comments (3) | TrackBack

May 10, 2005

Games, P2P and currency ...

It's not often we see Finance layer thoughts out there. Here's a fascinating snippet:

Take "Second Life," the virtual world created by Linden Labs. Rather than offer a traditional game environment like "EverQuest," it provides a growing world in which inhabitants can build their own homes, create their own "in-game" games, run businesses or do pretty much anything else that strikes their fancy.

"Second Life" has 28,000 people online today, and some inhabitants are already making more than $100,000 a year in real-world money by selling digital wares constructed inside the world or running full-fledged role-playing games.

"Second Life" is built on a distributed model, in which numerous servers are connected together, each one representing about 16 acres of land in the digital world. Those patches of digital space are seamlessly connected together to create the world as experienced by visitors.

Today, all of those servers are run by Linden Labs, but the world was built to ultimately support a peer-to-peer model, where players might add their own 16-acre plot into the world from their own computer, said Linden Labs' chief executive officer, Philip Rosedale. For security reasons--including the fact that a real currency is traded inside the world--the company hasn't taken that step yet, however.

...
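The grid-of-servers model described above (square patches of land stitched into one seamless world) can be sketched in a few lines. A 256 m x 256 m square is about 16.2 acres, which matches the "16 acres" figure; everything below is purely illustrative and not Linden Labs' actual design:

```python
# Sketch: mapping a world coordinate to the server ("region") that owns it,
# assuming square regions of roughly 256 m a side (~16 acres). All names
# here are hypothetical, for illustration only.

REGION_SIZE_M = 256  # 256 m x 256 m ~= 16.2 acres per server

def region_for(x_m: float, y_m: float) -> tuple[int, int]:
    """Return the (column, row) grid index of the region containing a point."""
    return (int(x_m // REGION_SIZE_M), int(y_m // REGION_SIZE_M))

def neighbours(col: int, row: int) -> list[tuple[int, int]]:
    """The up-to-eight adjacent regions a client would also need to talk to
    as an avatar approaches a region border, to keep the world seamless."""
    return [(col + dc, row + dr)
            for dc in (-1, 0, 1) for dr in (-1, 0, 1)
            if (dc, dr) != (0, 0)]

# A point 300 m east and 100 m north of the origin lies in region (1, 0):
assert region_for(300, 100) == (1, 0)
assert len(neighbours(1, 0)) == 8
```

The interesting part for a peer-to-peer version is that each grid cell is self-contained: handing a cell to a player-run server changes who operates it, not how it is addressed, which is presumably why the security of the in-world currency is the sticking point rather than the architecture itself.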

Posted by iang at 11:41 AM | Comments (1) | TrackBack

May 06, 2005

Damaged Pennies

Netcraft publishes the top phishing hosters - and puts Inktomi in pole position. Think class-action, damages, lack of due care, billion dollar losses ... we need more of this naming and shaming.

Rumours abound that Microsoft is about to be dragged into the security mess. I had thought (and wrote occasionally to that effect) that the way phishing would move forward would be by class-action suits against software suppliers, following the lead of Lopez v. Bank of America ([1], [2]). But there is another route: that of regulators deciding to lean on the suppliers. Right now, that latter path is being explored in smoke-filled rooms, or at least that's what those who smoke say.

These two routes aren't exactly in competition; they are more symbiotic. Class-action suits often follow on from regulatory settlements, so much so that the evidence one side discovers is used by the other side to advance. In this way they work as a team.

Over on CACert, Duane alerts me to a blog that they run, and an emotional cry for help by an auditor from Coopers and Lybrand (now PricewaterhouseCoopers). Echoing my own observations, the briefly named 'Gary' points out that CACert's checking procedures are as good as or better than the others'. He also breaks ranks and wags the finger at immoral and probably illegal practices by auditors in security work. I am not surprised, having heard stories that would make your faith in public auditors of security practices wilt and expire forever more. Basically, any *private* audit can be purchased, and it costs double to write it yourself. Trust only what is published, and even then cast a skeptical eyebrow skywards.

Speaking of Microsoft and car wrecks to come, this factoid suggests that "25 car models run Microsoft software" ... unfortunately or perhaps luckily there is no reference.

In more damages of the other kind, RIAA File-Sharing Lawsuits Top 10,000 People Sued. In the Threats department: One-Third Of Companies Monitoring Email. Also, a nice discussion of fraud by fraudsters: a newspaper interviews two fraudsters behind bars for what we now know as identity theft. Good background material on why they do it and how easy it is.

AOL in Britain reports that one in 20 Britons say they have been phished. I find these sorts of surveys somewhat "phishy": if one in 20 of the population had really been phished, we'd have rioting in the streets, with politicians and phishers alike strung from lampposts. But it's important to keep an eye on these datapoints, as we want to know whether phishing's status as a primarily "american disease" is likely to go global.

And in closing, a somewhat meandering article that links Sarbanes-Oxley, IT and security products. It asks:

"But here is the fundamental question - has there ever been a pervasive and material financial fraud which has resulted directly or indirectly from a failure of an IT security control? Would IT controls have prevented or detected the frauds at Enron, WorldCom, Tyco, and the like?"

The author might be a closet financial cryptographer.


And, if you got this far, it is only fair to warn you that you've now lost 10 points of your IQ level. (Sorry, no URL for the following ...)

It's the technology, stupid
By Michael Horsnell in London
April 23, 2005

The regular use of text messages and emails can lower the IQ more than twice as much as smoking marijuana. Psychologists have found that tapping away on a mobile phone or computer keypad or checking them for electronic messages temporarily knocks up to 10 points off the user's IQ.

This rate of decline in intelligence compares unfavourably with the four-point drop in IQ associated with smoking marijuana, according to British researchers, who have labelled the fleeting phenomenon of enhanced stupidity as "infomania".

Research on sleep deprivation suggests that the IQ drop caused by electronic obsession is also equivalent to a wakeful night.

The study, commissioned by technology company Hewlett Packard, concludes that infomania is mainly a problem for adult workers, especially men.

The noticeable drop in IQ is attributed to the constant distraction of "always on" technology, when employees should be concentrating on what they are paid to do. They lose concentration as their minds remain fixed in an almost permanent state of readiness to react to technology instead of focusing on the task at hand.

The brain also finds it hard to cope with juggling lots of tasks at once, reducing its overall effectiveness, the study has found. And while modern technology can have huge benefits, excessive use can be damaging not only to a person's mind, but also their social life.

Eighty volunteers took part in clinical trials on IQ deterioration and 1100 adults were interviewed.

Sixty-two per cent of people polled admitted that they were addicted to checking their email and text messages so assiduously that they scrutinised work-related ones even when at home or on holiday. Half said they always responded immediately to an email and 21 per cent would interrupt a meeting to do so.

Furthermore, infomania is having a negative effect on work colleagues, increasing stress and dissenting feelings. Nine out of 10 polled thought that colleagues who answered emails or messages during a face-to-face meeting were extremely rude. Yet one in three Britons believes that it is not only acceptable to do so, but actually diligent and efficient.

The effects on IQ were studied by Glenn Wilson, a University of London psychologist, as part of the research project.

"This is a very real and widespread phenomenon," he said. "We have found that infomania, if unchecked, will damage a worker's performance by reducing their mental sharpness."

The report suggests that firms that give employees gadgets and devices to help them keep in touch should also produce guidelines on use. These "best-practice tips" include turning devices off in meetings and using "dead time", such as travelling time, to read messages and check emails.

David Smith, commercial manager of Hewlett Packard, said: "The research suggests that we are in danger of being caught up in a 24-hour, always-on society.

"This is more worrying when you consider the potential impairment on performance and concentration for workers, and the consequent impact on businesses.

"Always-on technology has proven productivity benefits, but people need to use it responsibly. We know that technology makes us more effective, but we also know that misuse of technology can be counter-productive."

From The Times of London in The Australian

Posted by iang at 08:02 AM | Comments (2) | TrackBack

May 04, 2005

Lies, Uncertainty and Job Interviews

I was recently chatting to an HR ("human resources") person who complained that "the banks are having trouble getting good people." This struck me as odd, as I've seen plenty of evidence that they happily reject good people (and I'm not just talking about my own experiences). Having mused on it, I think one of the problems is that the HR process is riddled with lying. Check out what the IHT reports on Cleo reporting on how to lie to get a job.

This would be funny, except it's not. Anecdotes I've heard indicate that the rate of lying in job interviews and on CVs is higher than it should be. I won't suggest my numbers ... partly because it is unscientific research, and partly because that will just give people an excuse to disbelieve. Baxter, the guy quoted in the article, does say numbers: "I would say 10 percent to 15 percent have issues that require attention," and that 3 percent to 4 percent had "serious discrepancies" like falsely reporting university attendance. That's just the ones he picked up on.

Why is lying so prevalent? And another question: has it always been this way? I have a rosy perception that it wasn't like this when I was young. Is that down to my own circumstances? Or is it instead my own naivete?

Here's what I have picked up over time. Firstly, all cultures lie. People who say they don't lie are lying. (Try teaching that one to your children.) In fact, one line of research academics have pursued over the last decade or two is to map out how different cultures develop shared but buried understandings of when it is ok to lie.

In the anglo culture this is sometimes called the white lie. As examples of the white lie, it is ok for the husband to lie about his wife's weight, or that gawd-awful dress that makes her look like a matron, if she lies about how he's good enough in bed.

Cultures differ. In Spain, for example, it is ok to lie about an appointment. This is because it is necessary, indeed obligatory, to insist that you take someone for a drink or offer them a meal; the acceptable and polite way out of this is to say you have another appointment already, so you are apparently trapped into breaking one commitment for another. Another aspect of Spanish politeness is that asking for directions or help is fraught with helpful lies.

In America it is ok to lie if it is about some marketing issue, which includes oneself. That is, the listener shouldn't be so stupid as to believe marketing, and if they do, then it's morally right to part them from their money. One non-american put it like this: "Americans lie when it comes to admitting shortcomings or weaknesses. They will rarely admit that they do not know; they will come up with an answer, no matter what."

Americans are always marketing themselves, as distinct from the Spanish, who are afraid of disappointing you. To Americans, a question must be answered, no matter what. A product must be marketed, and if there isn't one on the table, then put yourself there. America as a country now has an endemic problem with lying: it is at the point where government is assumed to be lying, because its job is to sell the program, and that requires marketing to the people, right? Here's a Wired article where the government is knowingly lying about some security stuff it is trying to sell, even though everyone knows it is lying. The gravity of this might not be apparent until one considers that America, again unique amongst its peers, has a perverse dependency on honesty.

Apparently it remains impolite in all societies to suggest that someone is lying. Which of course suggests the obvious strategy: lie, and dare people to call you on it. Here's another one: security people lie when they say something is secure. Aside from the basic problem that security is a relative term, not an absolute (so the statement makes no sense), most security people do not carry out the proper analysis to ground any statement about security; a short cut is taken, and we hope it works out. And that nobody notices before we've changed jobs.

(Which diverts us briefly back to financial cryptography. In our art we make people part of the process: issuers as parties to contracts, escrow partners as protectors of value, and techies as operators of systems are all in a position to lie. What leads them to lie, and what we can do to make it very hard to lie, are things we have to understand in order to protect value. We could just assume honesty, like other systems do, but that's just naive. That's why you read a lot of postings here about various new and interesting frauds: how and why people lie to commit fraud is part of the governance layer; it is part of the job.)

For my own culture, I cannot answer how they/we lie; believe me as you will! I'm interested in hearing how you all perceive other cultures' lies; my suspicion is that only the outsider can work it out. I really only became interested in lying when I hit my third culture. The discovery of new forms of lying comes by contrast and comparison, then, and sometimes by the awful sinking feeling you get when some totally new experience catches you out completely.

For the British, it is ok to "make something up" if you don't actually know how you are going to do it anyway. So, for example, a promise about some future event ("I'll pick you up in my car next Tuesday") is a fine thing to say even if it never happens. Even if you would not keep it anyway, it is ok to say it, because it is outside the time horizon of reliability.

Back to the market for employment, which Spence identified as peculiarly inefficient. In Britain, lying in job interviews is called "blagging" and is quite acceptable. Indeed, in some cases an agent who puts a candidate forward will instruct the candidate to lie. The dividing line seems to be as thin as whether they can get away with it, for example on a CV or in a written test.

Which brings me around to the original question of why exactly lying is so much a part of the job process. I think it comes down to a failure of HR in general and a failure of requirements in particular. The experiences I have heard of show an obsessive tendency for employers in some cultures to look for perfection in candidates. This means that candidates are rejected when some answer isn't to their liking; this can be a wide range of perceptual judgments, such as "would not fit in", or it can be simply a narrow failure to answer a particular question. No matter: it seems that if you do not get everything right, you are 'not good enough'. So lie on your CV, or on your written test, or blag your way through the question, because any failure means you are dropped, whereas a successful lie gets you through.

This desire for perfection is pervasive. In fact, it's positively correlated with the amount of effort put in by employers, as those that conduct many interviews commonly give every interviewer the ability to say no! This of course sends the wrong signal: if you don't know something, there is no point in admitting it, and you are better off "blagging" your way through it so as to get to the next interview. And now we see why this is a failure of HR: if lying is rewarded, your company will end up full of liars. And the harder you try in your HR process, the more you are ensuring that only the better quality of liar will get through!

What is the underlying failure here? To an engineer this is an easy one to explain: uncertainty is what we do, and the employer should learn to appreciate it, not run from it. Seeking perfection is perverse; it means we are likely to reject fresh approaches and end up stifled in group-think, assuming we manage to avoid the liars. It also means that where the interviewer is limited in some way, those very limitations are imposed on the candidate, and this gives us a feedback cycle similar to the one Spence pointed at in his seminal "Job Market Signaling" paper - except that this time, even though the characteristic reaches equilibrium, neither employee nor employer will recognise the signal.

Most people will be offended by this, because implicit in today's essay is that you lie, or that your company is full of liars. Consider it more then as rejecting diversity, if looking for a politer label. (Or, more simply, assume that I'm lying and you can ignore everything written here. For those who are curious on that point, we'll leave it to the reader to decide where the lies are herein.)

Regardless of any particular lies, either here or in your next interview, it should be as much a part of the employment process to discover and revel in uncertainty as any other quality, and any process that tries to avoid it is doomed. Why perfection always results in disaster, and why uncertainty is the foundation of survival, will have to wait for another day.


Addendum: It seems I was right: 25% of CVs are Fiction!

Posted by iang at 12:25 PM | Comments (6) | TrackBack

May 01, 2005

HCI/security - start with Kerckhoffs' 6 principles

It seems that interest in the nexus of HCI (human computer interface) and security continues to grow (added ec fc1 fc2). For my money, I'd say we should start with Kerckhoffs' 6 principles. Unfortunately, we have only the original paper in French, so we can only guess at how he derived his 6 principles.

Are there any French crypto readers out there who could have a go at translating this? Kerckhoffs was a Dutchman, and perhaps this means we need to find Dutch cryptographers who can understand all his nuances... Nudge, nudge...

(Ideally the way to start this, I suspect, is to open up a translation in a Wiki. Then, people can debate the various interpretations over an evolving document. Just a guess - but are there any infosec wikis out there?)

Posted by iang at 06:46 AM | Comments (0) | TrackBack