Jim points at a good story surveying the slow rise of payment mechanisms and revenue models - Subscriptions! Donations! - in the market for online comics. It's in places like these that the business models are set for main street.
Just a week or two before the SHA-1 news from Xiaoyun Wang, Yiqun Lisa Yin, and Hongbo Yu, I wrote a paper on what it means to be secure. In a flash of inspiration, I hit on applying the methodology of Pareto-efficiency to security.
In brief, I define a Pareto-secure improvement to be one where a change made to a component results in a measurable and useful improvement in security results, at no commensurate loss of security elsewhere. And a component is Pareto-secure if there is no further improvement to be made. I further go on to say that a component is Pareto-complete if it is Pareto-secure under all reasonable assumptions and in all reasonable systems. See the draft for more details!
SHA-1 used to be Pareto-complete. I fear this treasured status was cast in doubt when Wang's team of champions squared up against it at Crypto2004 last year; the damaging note, 'Collision Search Attacks on SHA1,' of a week or two back has dealt a further blow.
Here's the before and after. I wrote this before the new information, but I haven't felt the need to change it yet.
| | Before Crypto 2004: Pareto-secure (signatures) | Before Crypto 2004: Pareto-complete | After Crypto 2004: Pareto-secure (signatures) | After Crypto 2004: Pareto-complete |
|---|---|---|---|---|
| MD5 | ? | No | No | No |
| SHA0 | Yes | ? | No | No |
| SHA1 | Yes | Yes | ? | No |
| SHA256 | Yes | Yes | Yes | ? |
To be Pareto-complete, the algorithm must be good for all uses. To be Pareto-secure, the algorithm must be good - offer no Pareto-improvement - for a given application and set of assumptions. For the claim above, the application I considered was the digital signature, in the sense of, for example, DSA.
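The idea borrows directly from economics, and can be checked mechanically. Here is a minimal illustration (my own sketch, not from the draft paper; the property names and bit-strength scores are illustrative assumptions, not measurements):

```python
# Hypothetical sketch of a Pareto-improvement test over named security
# properties. Higher score = more secure; scores below are illustrative.

def pareto_improvement(old, new):
    """True if `new` improves at least one security property of `old`
    without worsening any other."""
    assert old.keys() == new.keys()
    no_worse = all(new[k] >= old[k] for k in old)
    strictly_better = any(new[k] > old[k] for k in old)
    return no_worse and strictly_better

# Assumed scores for hash functions in a signature application.
sha1   = {"collision_resistance": 69,  "preimage_resistance": 160}
sha256 = {"collision_resistance": 128, "preimage_resistance": 256}

print(pareto_improvement(sha1, sha256))  # True: better on one axis, worse on none
print(pareto_improvement(sha256, sha1))  # False
```

A component would then be Pareto-secure when no candidate replacement passes this test under the given assumptions.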
My counter to Peter Wayner's article kicked up a bit of a storm, regrettably carried on in the hallowed but private halls of IFCA's internal mail forums. (Yes, I did ask when they'll put up a forum for members and others to discuss matters FC.) One thing that was curious was the volume of response; finally it twigged: the conference starts on Monday!
Well, I'm not there, I'm sitting in the snow. You're not there either if you're reading this. Here's my list of top picks for those talks that might have made the trip worthwhile.
But before I get onto that, I have to report: shock! horror! Every paper is online! Thank you organisers, thank you papers committee, thank you all! We can now read these papers, and fame and fortune for the authors is assured:
Protecting Secret Data from Insider Attacks
David Dagon and Wenke Lee and Richard Lipton (Georgia Institute of Technology)
Trust and Swindling on the Internet
Bezalel Gavish (Southern Methodist University)
Risk Assurance for Hedge Funds using Zero Knowledge Proofs
Michael Szydlo
Probabilistic Escrow of Financial Transactions with Cumulative Threshold Disclosure
Stanislaw Jarecki (UC Irvine) and Vitaly Shmatikov (UT Austin)
A User-Friendly Approach to Human Authentication of Messages
Jeff King and Andre dos Santos (Georgia Institute of Technology)
Panel: Phishing - Organizer: Steve Myers
Tentative panelists: Drew Dean, Stuart Stubblebine, Richard Clayton, Steve Myers, Mike Szydlo
ChoicePoint (Roundups from Adam: today, 25th, 24th) is to receive help from Bank of America, which has just revealed that "a small number of computer data tapes were lost during shipment to a backup data center. The missing tapes contained U.S. federal government charge card program customer and account information."
This might seem like a silly way to run the privacy of a nation, but there is more to this than meets the eye. I've been writing a draft paper on security, and one relevant observation is that we have to get over this finger pointing: the incentive for companies like ChoicePoint to hide and fudge their security is driven by the bad exposure, not by incentives relating directly to the data.
A paper by Schechter and Smith at FC2003 raised the possibility that if companies openly share their threat and breach data, they can reduce the overall risk [1]. They show this from the point of view of the attacker, who sees his costs rise because of the reduction in openings.
Yet I see their suggestion more in terms of game theory, or prisoner's dilemma results: to stand up and reveal weaknesses raises the possibility of punishment. The industry as a whole has no great understanding of the risks and threats, and the major cost it has to deal with is adverse exposure. Hence finger pointing becomes the norm, and avoiding the blame becomes the commercial imperative. (I develop this much more in the paper.)
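The prisoner's dilemma framing can be made concrete with a toy payoff table. These numbers are entirely made up, purely to show why hiding breach data dominates for each firm even though mutual disclosure is better for the industry:

```python
# Toy prisoner's-dilemma payoffs for two firms deciding whether to share
# breach data. Payoffs are illustrative assumptions (higher = better for
# the deciding firm), keyed as (my_choice, other_firm_choice).
PAYOFF = {
    ("disclose", "disclose"): 3,  # shared data lowers everyone's risk
    ("disclose", "hide"):     0,  # I take the bad press, you learn for free
    ("hide",     "disclose"): 4,  # you take the bad press, I learn for free
    ("hide",     "hide"):     1,  # no sharing, industry stays ignorant
}

def best_response(other_choice):
    """The choice that maximises my payoff given the other firm's choice."""
    return max(("disclose", "hide"),
               key=lambda mine: PAYOFF[(mine, other_choice)])

# Hiding dominates regardless of what the other firm does...
print(best_response("disclose"))  # hide
print(best_response("hide"))      # hide
# ...even though mutual disclosure (3 each) beats mutual hiding (1 each).
```

This is the standard dilemma shape: the individually rational strategy produces the collectively worse outcome, which is why the finger pointing persists.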
In order to share the information, and raise the knowledge of what's important and what's not, we may have to get over the finger pointing. That may mean we have to go through several ChoicePoints, if only so that it can become routine and not scandalous. Bank of America is thus timely and expected; although I don't think anyone else is likely to see it that way.
[1] Stuart E. Schechter and Michael D. Smith, "How Much Security is Enough to Stop a Thief?", Financial Cryptography 2003 LNCS Springer-Verlag.
Twan points at an article on a software teams boot camp that takes a highly military approach. Worth reading if you touch software; there are a number of insights in there.
Another article he points at is also worth reading: it's about the Shuttle team. Although this might sound inspiring, it's not as relevant to the real world of software as all that: we don't have a single unchanging machine, we don't have the understanding required to create a 2,500-page spec for a 6,366-line change, we don't care if it crashes, and we don't have the money to pay for any of that. Still, some of the things in the article resonate, so it is good to read it and isolate what is applicable to your world and what's not.
Triage is one thing, security is another. Last week's ground-shifting news was widely ignored in the press. Scanning a bunch of links, the closest I found to any acknowledgement of what Microsoft announced is this:
In announcing the plan, Gates acknowledged something that many outside the company had been arguing for some time--that the browser itself has become a security risk. "Browsing is definitely a point of vulnerability," Gates said.
Yet there was no discussion of what that actually meant. Still, to his credit, author Steven Musil admitted he didn't follow what Microsoft were up to. The rest of the media speculated on compatibility, Firefox as a competitor, and Microsoft's pay-me-don't-pay-me plans for anti-virus services, which I guess is easier to understand as there are competitors who can explain how they're not scared.
So what does this mean? Microsoft has zero, zip, nada credibility in security.
...earlier this week the chairman of the World's Most Important Software Company looked an auditorium full of IT security professionals in the eye and solemnly assured them that "security is the most important thing we're doing." And this time he really means it.
That, of course, is the problem: IT pros have heard this from Bill Gates and Microsoft many times before ...
Whatever they say is not only discounted, it's even reversed in the minds of the press. Even when they get one right, it is assumed there must have been another reason! The above article goes on to say:
Indeed, it's no accident that Microsoft is mounting another security PR blitz now, for the company is trying to reverse the steady loss of IE's browser market share to Mozilla's Firefox 1.0.
Microsoft is now the proud owner of a negative reputation in security, which leads to the following strategy: actions, not words. Every word said from now until the problem is solved will just generate wheel spinning for no productivity, at a minimum (and notwithstanding Gartner's need to sell those same words on). The only way Microsoft can change their reputation for insecurity is to actually change their product to be secure. And then be patient.
Microsoft should shut up and do some security. Which isn't entirely impossible. If it is a browser v. browser question, it is not as if the competition has an insurmountable lead in security. Yes, Firefox has a reputation for security, but showing that objectively is difficult: their brand is indistinguishable from "hasn't got a large enough market share to be worth attacking as yet."
"This is a work in progress," Wilcox says. "The best thing for Microsoft to do is simply not talk about what it's going to do with the browser."
IEEE Security & Privacy magazine has a special on _Economics of Information Security_ this month. Best bet is to simply read the editor's intro.
There are two on the economics of disclosure, a theme touched upon recently:
Two I've selected for later reading are:
This is because they speak to a current theme - how to model information in attacks.
Gary North lashes out with some choice snippets: "According to ChoicePoint, there was no announcement because law authorities prohibited it." OK, so maybe they wanted to set up a sting. They're the good guys, and they're in control, right?
Well, Gary goes on to say: "Here is what is arguably the largest data base company on earth. It can't say what has or has not been breached. It is now four months after the breach took place."
Lots of other fun stuff. It's a bit early to tell - Gary also grumbled oddly that there has been no coverage - but this has the potential of becoming a major forest fire. Right now, the brush is burning brightly, the critters are restless, and the trees are shivering in anticipation.
Maybe it was the RSA security conference, but the Choicepoint heist seems to have touched a nerve. Adam pointed here, where it says that " investigators notified the company of the breach in October, but ChoicePoint did not send out the consumer warnings until last week." And now, someone sent me the below, apparently sent out on Reuters (no URL).
SAN FRANCISCO The ChoicePoint data theft case is shaping up to be a full-blown scandal with as many as a half million people nationwide potentially vulnerable to identity theft. Earlier this week word emerged that scammers had illegally obtained detailed files on 35-thousand Californians by posing as legitimate customers of ChoicePoint. Now attorneys general from 19 states demanded that ChoicePoint warn any victims in their states as well.
And security experts are calling for more federal oversight of a lightly regulated industry that gathers and sells personal data about nearly every adult American.
The Georgia-based company maintains databases that hold 19 billion pieces of information, including Social Security numbers, credit and medical histories and motor vehicle registrations.
Gee, if the AGs get that demand, it'll save them the cost of passing a law!
Addendum: Gary North commented.
Identity fits squarely in the Rights layer, as it establishes a way to get access to assets and resources. There are other ways, all methods having their pros and cons. The problem - the cons - with identity is that while it may well be fine for humans, it is simply a hopeless, intractable way to deal with computers and networks. To make matters worse, it is the only method that non-technical people understand, so we have a dichotomy between those who understand it ... and those who really understand it. Or, at least, between those who use it and those who implement it.
All this notwithstanding, the identity project of national governments is rolling forward like a juggernaut, and rather than politicise this forum, I've tried to keep away from it. That project is much too much like "you're either with us or against us" and technical issues are swept aside in such debates. I fear that's a line in the sand, though, and the tide is rolling in.
Last December, The Economist weighed in against the juggernaut, and copies are circulating around the net. As a reputable listing of the dangers of one Identity project, this one is worth preserving. If I find a reputable listing for the benefits of the Identity project, I'll do the same.
Friday February 18th 2005
Border controls
New-look passports
Feb 17th 2005
From The Economist print edition
High-tech passports are not working
IN OLDEN days (before the first world war, that is) the traveller simply
pulled his boots on and went. The idea that he might need a piece of paper
to prove to foreigners who he was would not have crossed his mind. Alas,
things have changed. In the name of security (spies then, terrorists now),
travellers have to put up with all sorts of inconvenience when they cross
borders. The purpose of that inconvenience is to prove that the passport's
bearer is who he says he is.
The original technology for doing this was photography. It proved adequate
for many years. But apparently it is no longer enough. At America's
insistence, passports are about to get their biggest overhaul since they
were introduced. They are to be fitted with computer chips that have been
loaded with digital photographs of the bearer (so that the process of
comparing the face on the passport with the face on the person can be
automated), digitised fingerprints and even scans of the bearer's irises,
which are as unique to people as their fingerprints.
A sensible precaution in a dangerous world, perhaps. But there is cause for
concern. For one thing, the data on these chips will be readable remotely,
without the bearer knowing. And-again at America's insistence-those data
will not be encrypted, so anybody with a suitable reader, be they official,
commercial, criminal or terrorist, will be able to check a passport holder's
details. To make matters worse, biometric technology-as systems capable of
recognising fingerprints, irises and faces are known-is still less than
reliable, and so when it is supposed to work, at airports for example, it
may not. Finally, its introduction has been terribly rushed, risking further
mishaps. The United States wants the thing to start running by October, at
least in those countries for whose nationals it does not demand visas.
Your non-papers, please
In theory, the technology is straightforward. In 2003, the International
Civil Aviation Organisation (ICAO), a UN agency, issued technical
specifications for passports to contain a paper-thin integrated
circuit-basically, a tiny computer. This computer has no internal power
supply, but when a specially designed reader sends out a radio signal, a
tiny antenna draws power from the wave and uses it to wake the computer up.
The computer then broadcasts back the data that are stored in it.
The idea, therefore, is similar to that of the radio-frequency
identification (RFID) tags that are coming into use by retailers, to
identify their stock, and mass-transit systems, to charge their passengers.
Dig deeper, though, and problems start to surface. One is interoperability.
In mass-transit RFID cards, the chips and readers are designed and sold as a
package, and even in the case of retailing they are carefully designed to be
interoperable. In the case of passports, they will merely be designed to a
vague common standard. Each country will pick its own manufacturers, in the
hope that its chips will be readable by other people's machines, and vice
versa.
That may not happen in practice. In a trial conducted in December at
Baltimore International Airport, three of the passport readers could manage
to read the chips accurately only 58%, 43% and 31% of the time, according to
confidential figures reported in Card Technology magazine, which covers the
chip-embedded card industry. (An official at America's Department of
Homeland Security confirmed that "there were problems".)
A second difficulty is the reliability of biometric technology.
Facial-recognition systems work only if the photograph is taken with proper
lighting and an especially bland expression on the face. Even then, the
error rate for facial-recognition software has proved to be as high as 10%
in tests. If that were translated into reality, one person in ten would need
to be pulled aside for extra screening. Fingerprint and iris-recognition
technology have significant error rates, too. So, despite the belief that
biometrics will make crossing a border more efficient and secure, it could
well have the opposite effect, as false alarms become the norm.
The third, and scariest problem, however, is one that is deliberately built
into the technology, rather than being an accident of its present
inefficiency. This is the remote-readability of the chip, combined with the
lack of encryption of the data held on it. Passport chips are deliberately
designed for clandestine remote reading. The ICAO specification refers quite
openly to the idea of a "walk-through" inspection with the person concerned
"possibly being unaware of the operation". The lack of encryption is also
deliberate-both to promote international interoperability and to encourage
airlines, hotels and banks to join in. Big Brother, then, really will be
watching you. And others, too, may be tempted to set up clandestine
"walk-through inspections where the person is possibly unaware of the
operation". Criminals will have a useful tool for identity theft. Terrorists
will be able to know the nationality of those they attack.
Belatedly, the authorities have recognised this problem, and are trying to
do something about it. The irony is that this involves eliminating the
remote readability that was envisaged to be such a crucial feature of the
system in the first place.
One approach is to imprison the chip in a Faraday cage. This is a
contraption for blocking radio waves which is named after one of the
19th-century pioneers of electrical technology. It consists of a box made of
closely spaced metal bars. In practice, an aluminium sheath would be woven
into the cover of the passport. This would stop energy from the reader
reaching the chip while the passport is closed.
Another approach, which has just been endorsed by the European Union, is an
electronic lock on the chip. The passport would then have to be swiped
through a special reader in order to unlock the chip so that it could be
read. How the European approach will interoperate with other countries'
passport controls still needs to be worked out. Those countries may need
special equipment or software to read an EU passport, which undermines the
ideal of a global, interoperable standard.
Sceptics might suggest that these last-minute countermeasures call into
doubt the reason for a radio-chip device in the first place. Frank Moss, of
America's State Department, disagrees. As he puts it, "I don't think it
questions the standard. I think what it does is it requires us to come up
with measures that mitigate the risks." However, a number of executives at
the firms who are trying to build the devices appear to disagree. They
acknowledge the difficulties caused by choosing radio-frequency chips
instead of a system where direct contact must be made with the reader. But
as one of them, who preferred not to be named, put it: "We simply supply all
the technology-the choice is not up to us. If it's good enough for the US,
it's good enough for us."
Whether it actually is good enough for the United States, or for any other
country, remains to be seen. So far, only Belgium has met America's
deadline. It introduced passports based on the new technology in November.
However, hints from the American government suggest that the October
deadline may be allowed to slip again (it has already been put back once)
since the Americans themselves will not be ready by then. It is awkward to
hold foreigners to higher standards than you impose on yourself. Perhaps it
is time to go back to the drawing board.
----------------
Related Items
From The Economist
Biometrics Dec 4th 2003
http://www.economist.com/science/displayStory.cfm?Story_id=2266290
Websites
America's State Department has information on the machine-readable passport requirement <http://www.state.gov/r/pa/ei/rls/36114.htm>. The Enhanced Border Security Act <http://thomas.loc.gov/cgi-bin/query/z?c107:H.R.3525.ENR:> set the timetable for the introduction of the passports
<http://europa.eu.int/idabc/en/document/3669/194>. The EU has information on its own plans to introduce machine-readable passports. Ari Juels <http://www.rsasecurity.com/rsalabs/node.asp?id=2029> is a security expert at RSA laboratories.
----------------
Copyright © The Economist Newspaper Limited 2005. All rights reserved.
http://www.economist.com/science/displaystory.cfm?story_id=3666171
The note on the SHA1 attack from the team from Shandong - Xiaoyun Wang, Yiqun Lisa Yin, Hongbo Yu - is now available in PDF. First, it is a summary, not the real paper, so the attack is not outlined. Examples are given of collisions in _reduced-round SHA1_, which is not the real SHA1. However, they established their credibility at Crypto 2004 by turning around attacks overnight on new challenges. Essential text, sans numbers, below...
Collision Search Attacks on SHA1
Xiaoyun Wang, Yiqun Lisa Yin, Hongbo Yu
February 13, 2005
1 Introduction
In this note, we summarize the results of our new collision search attacks on SHA1. Technical details will be provided in a forthcoming paper.
We have developed a set of new techniques that are very effective for searching collisions in SHA1. Our analysis shows that collisions of SHA1 can be found with complexity less than 2^69 hash operations. This is the first attack on the full 80-step SHA1 with complexity less than the 2^80 theoretical bound. Based on our estimates, we expect that real collisions of SHA1 reduced to 70 steps can be found using today's supercomputers.
In the past few years, there have been significant research advances in the analysis of hash functions. The techniques developed in the early work provide an important foundation for our new attacks on SHA1. In particular, our analysis is built upon the original differential attack on SHA0, the near collision attack on SHA0, the multi-block collision techniques, as well as the message modification techniques used in the collision search attack on MD5. Breaking SHA1 would not be possible without these powerful analytical techniques.
Our attacks naturally apply to SHA0 and all reduced variants of SHA1. For SHA0, the attack is so effective that we were able to find real collisions of the full SHA0 with less than 2^39 hash operations. We also implemented the attack on SHA1 reduced to 58 steps and found collisions with less than 2^33 hash operations. Two collision examples are given in this note.
2 A collision example for SHA0
<skip some numbers>
Table 1: A collision of the full 80-step SHA0. The two messages that collide are (M0, M1) and (M0, M'1). Note that padding rules were not applied to the messages.
3 A collision example for 58-step SHA1
<skip some numbers>
Table 2: A collision of SHA1 reduced to 58 steps. The two messages that collide are M0 and M'0. Note that padding rules were not applied to the messages.
The last footnote generated some controversy, which is now settled: padding is irrelevant. A quick summary of our knowledge is that the Wang, Yin, Yu attack can reduce the strength of SHA-1 from 80 bits to 69 bits. This still falls short of a practical attack, and leaves SHA-1 stronger than MD5 (only 64-bit strength), but SHA-1 is now firmly on the "watch" list. To use my suggested lingo, it is no longer Pareto-complete, so any further use would have to be justified within the context of the application.
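To put the reduction in perspective, a back-of-the-envelope comparison of the work factors mentioned above (my arithmetic, not the paper's):

```python
# Work factors, in hash operations, for collisions on the hashes discussed.
birthday_bound = 2**80   # generic birthday attack on a 160-bit hash
wyy_attack     = 2**69   # claimed complexity of the Wang/Yin/Yu search
md5_strength   = 2**64   # approximate strength attributed to MD5 above

speedup = birthday_bound // wyy_attack
print(speedup)  # 2048: the new attack is 2^11 times cheaper than brute force

# SHA-1 at 69 bits still costs 2^5 = 32 times more than MD5 at 64 bits.
print(wyy_attack // md5_strength)  # 32
```

A factor of 2^11 is a big dent in the safety margin, but 2^69 operations remains far beyond casual reach, which is why the note counts as a "watch" event rather than a practical break.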
Gervase Markham has written "a plan for scams," a series of steps for different module owners to start defending. First up, the browser, and the list will be fairly agreeable to FCers: Make everything SSL, create a history of access by SSL, notify when on a new site! I like the addition of a heuristics bar (note that Thunderbird already does this).
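The "notify when on a new site" step in Gervase's list is simple enough to prototype. A minimal sketch (the class name and hostnames are my own invented illustrations; a real browser would persist the history and likely key on the certificate as well as the hostname):

```python
# Sketch of the "notify when on a new SSL site" heuristic: remember the
# hostnames previously visited over SSL, and flag any hostname not seen
# before so the browser can show a notice to the user.

class SslSiteHistory:
    def __init__(self):
        self.seen = set()

    def visit(self, hostname):
        """Record a visit; return True if this SSL site is new."""
        is_new = hostname not in self.seen
        self.seen.add(hostname)
        return is_new

history = SslSiteHistory()
print(history.visit("www.mybank.example"))   # True: first visit, notify user
print(history.visit("www.mybank.example"))   # False: known site, stay quiet
print(history.visit("www.rnybank.example"))  # True: the lookalike is new, so it gets flagged
```

The phishing payoff is in the last line: a spoofed lookalike domain is, by construction, not in the history, so the notice fires exactly when it is needed.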
Meanwhile, Mozilla Foundation has decided to pull IDNs - the international domain names that were victimised by the Shmoo exploit. How they reached this decision wasn't clear, as it was taken on insider's lists, and minutes aren't released (I was informed). But Gervase announced the decision on his blog and the security group, and the responses ran hot.
I don't care about IDNs - that's just me - but apparently some do. Axel points to Paul Hoffman, an author of IDN, who pointed out that he had balanced solutions for IDN spoofing. Like him, I'm more interested in the process, and I'm also thinking of the big security risks to come, and the meta-risks. IDN is a storm in a teacup, as it poses no real risk beyond what we already have (and no, the digits 0 and 1 in domains have not been turned off).
Referring this back to Frank Hecker's essay on the foundation of a disclosure policy does not help, because the disclosure was already done in this case. But at the end he talks about how disclosure arguments fell into three classes:
Literacy: “What are the words?” Numeracy: “What are the numbers?” Ecolacy: “And then what?”
"To that end [Frank suggests] to those studying the “economics of disclosure” that we also have to study the “politics of disclosure” and the “ecology of disclosure” as well."
Food for thought! On a final note, a new development has occurred in certs: a CA in Europe has issued certs with the critical bit set. What this means is that software lacking the code to deal with that part of the cert is meant to reject it. And Mozilla's crypto module follows the letter of the RFC in this.
IE and Opera do not, it seems (see #17 in bugzilla), and I'd have to say they have good arguments for rejecting the RFC and not the cert. Too long to go into tonight, but think of the crit ("critical bit") as an option on a future attack. Also, think of the game play that can go on. We shall see; coincidentally, this leads straight back into phishing, because it is asking the browser to ... display stuff about the cert to the user!
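The rule Mozilla follows - a certificate extension marked critical must cause rejection if the implementation does not recognise it - can be modelled in a few lines. A sketch, with a made-up "recognised" set (the two OIDs are keyUsage and basicConstraints):

```python
# Sketch of the X.509 critical-extension rule: if an extension is flagged
# critical and the verifier does not recognise its OID, the certificate
# must be rejected. The RECOGNISED set is an illustrative assumption.

RECOGNISED = {"2.5.29.15", "2.5.29.19"}  # keyUsage, basicConstraints

def accept_cert(extensions):
    """extensions: list of (oid, critical) pairs. Returns True if the
    cert passes the strict rule Mozilla applies; browsers that ignore
    unknown critical extensions would accept more than this does."""
    for oid, critical in extensions:
        if critical and oid not in RECOGNISED:
            return False  # unknown critical extension: reject
    return True

print(accept_cert([("2.5.29.15", True)]))   # True: known critical extension
print(accept_cert([("1.2.3.4.5", True)]))   # False: unknown and critical
print(accept_cert([("1.2.3.4.5", False)]))  # True: unknown but non-critical
```

The "option on a future attack" reading follows from the middle case: whoever controls what counts as recognised, and who ignores the rule, controls which certs quietly slip through.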
What stuff? In this case, the value of the liability in Euros. Now, you can't get more FC than that - it drags in about 6 layers of our stack, which leaves me with a real problem allocating a category to this post!
The Linux community just set up a new way to report security bugs. In the debate, it transpires that Linus Torvalds laid down a firm position of "no delay on fix disclosures." Having had a look at some security systems lately, I'm inclined to agree. It may be that delays make sense to let vendors catch up. That's the reason most quoted, and it makes entire logical sense. But, that's not what happens.
The process then gets hijacked for other agendas, and security gets obscured. Thinking about it, I'm now viewing the notion of security being discussed behind closed doors with some suspicion; it's just not clear how security-by-obscure-committee divides its time between "ignoring the squeaky wheels" and "creating space to fix the bugs." I've sent messages on important security issues to several closed security groups in the last 6 months, and universally they've been ignored.
So, zero days disclosure is my current benchmark. Don't like it? Use a closed product. Especially considering that 90% of the actual implementations out there never get patched in any useful time anyway...
To paraphrase Adi, "[security] is bypassed not attacked."
Unlike most media articles, this one has body, in the form of a nice timeline showing the rise and evolution of a threat. For that alone it is worth it! (Read the full story)
The following timeline gives a sense of the stepped-up pace of mobile phone attacks:
Spring 2004: A trojanized game called "Mosquitos" secretly sends messages to expensive toll numbers at the user's expense.
15 June 2004: The Cabir worm replicates over-the-air via Bluetooth connections.
16 June 2004: Cabir.B, a variation on the Cabir worm, is discovered. Cabir.B, which began spreading in the wild in autumn 2004, continues to spread today. To date, it has been detected in China, India, Turkey, the Philippines, and Finland.
19 November 2004: The Skulls.A trojan replaces icons on the phone with skull images, making the phone almost useless.
29 November 2004: Skulls.B is discovered.
9 December 2004: Cabir.C is discovered.
9 December 2004: Cabir.D is discovered.
9 December 2004: Cabir.E is discovered.
21 December 2004: Skulls.C is discovered.
21 December 2004: Cabir.F is discovered.
21 December 2004: Cabir.G is discovered.
21 December 2004: The METAL Gear.a trojan encourages users to download and install it by masquerading as the popular mobile phone game Metal Gear Solid.
The most recent in this new wave of exploits, the trojan METAL Gear.a, targets mobile devices using the Symbian operating system. When run, it installs Skulls and Cabir variants and tries to disable antivirus and file-browsing products installed on the device - thus making the device extremely difficult for the user to repair. In addition, METAL Gear.a also makes a file called SEXXXY.sis available to any Bluetooth phones that happen to be within range; if the user of a nearby phone accepts that file, it will disable that phone's application selection button.
http://www.identitytheft911.com/education/articles/art20041223mobile.htm
Skype's success has caused people to start looking at the security angle. One easy claim is that because it is not open source, then it's not secure. Well, maybe. What one can say is that the open source advantage to security is absent, and there is no countervailing factor to balance that lack. We have trust, anecdotal snippets, and a little analysis, but how much is that worth? And how long will it last?
fm spotted these comments and attributed them to Sean D. Sollé:
Shame Skype's not open source really. Actually, it's pretty worrying that Skype's *not* open source.
Given that you're happily letting a random app send and receive whatever it likes over an encrypted link to random machines all over the interweb, the best place for it is on a dedicated machine in a DMZ.
Trouble is, you'd then have to VNC into it, or split the UI from the telephony bits, but then you're back to needing the source again.
Say what you like about Plain Old Telephones, but at least you know they're not going to run through your house emptying the contents of your bookcases through the letterbox out onto the pavement.
Sean.
Stefan Brands postulates that the efforts of Passport/Liberty Alliance are leading to a convergence of thought between those who have been warning about privacy all these years and those who want to build identity systems that share data across different organisations. Probably this is a good thing, in that only the failures of these systems can lead these institutions to understand that people won't support them unless they also deliver benefits with lowered risks to themselves.
Should the 'privacy nuts' just stand back and let them make mistakes? I don't know about that; I'd say the privacy community would be better off building their own systems.
Meanwhile, Stefan's musings on the "7 laws of identity" postulated by Kim Cameron of the Passport project lead him to propose some design principles of his own. First up:
"The technical architecture of an identity system should minimize the changes it causes to the legacy trust landscape among all system participants."
Sounds good to me. On two counts, it has a much better survival probability. One, it's a principle, not a law: laws don't just suddenly appear in blog entries, they are founded in much more than that. Two, it says "should" and so recognises its own frailty; there are cases where these things can be wrong.
For the other 9 design principles, we have to wait until Stefan writes them down!
Yesterday, the US House of Representatives approved the national identity card.
This was first created in December 2004's Intelligence bill, loosely called the Patriot II act because it snuck in provisions like this without the Representatives knowing it. The deal is basically a no-option offer to the states: either you issue all your state citizens with nationally approved cards, or all federal employees are instructed to reject access. As 'public' transport (including flying) now falls under federal rules, that means no travel, so it's a pretty big stick.
If this doesn't collapse, then America has a national identity card. That means that Australia, Canada and the UK will follow suit. Other OECD countries - those in the Napoleonic code areas - already have them, and poorer countries will follow if and when they can afford them.
This means that financial cryptography applications need to start assuming the existence of these things. How this changes the makeup of financial cryptography and financial applications is quite complex, especially in the Rights and Governance layers. Some good, some bad, but it all requires thought.
http://tinyurl.com/4futv
Over on the mozilla-crypto group, discussions circulated as to how to fix the Shmoo bug. And phishing, of course. My suggestion has been to show the CA's logo, as that has to change if a false cert is used (and what CA wants to be caught issuing the false cert itself?). I've convinced very few of this, although TrustBar does include this alongside its own great idea, and it looks great! Then comes this stunning revelation from Bob Relyea:
"These same arguments played around and around at Netscape (back when it mattered to Verisign) about whether or not to include the signer's brand. In the end it was UI realestate (or the lack thereof given to security) argument that won the day. In the arena where realestate was less of an issue, but security was, the signer's logo and name *WERE* included (remember the 'Grant' dialogs for signed apps). They still contain those logos today."
Putting the CA logo on the web page was *always* part of the security model. So, that's why it always felt so right and so obvious. But why then have so many people argued against it? Is it the real estate - that never-ending battle to get as much on the screen as possible? Is it the status quo? Is it the triviality of pictures? The dread of marketing?
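For the curious, the raw material for such a display is already in the certificate. Here's a minimal sketch (in Python, purely illustrative) of pulling the issuing CA's name out of a server certificate, which a browser could then map to a stored logo. The example certificate data below is made up; the parsing follows the dict shape returned by Python's ssl module.

```python
import socket
import ssl

def issuer_org(cert):
    """Extract the issuing CA's organization name from a parsed cert
    (the dict shape returned by ssl.SSLSocket.getpeercert())."""
    # 'issuer' is a tuple of RDNs, each an inner tuple of (name, value) pairs.
    issuer = dict(rdn[0] for rdn in cert["issuer"])
    return issuer.get("organizationName", issuer.get("commonName"))

def issuing_ca(host, port=443):
    """Fetch host's certificate over TLS and return its issuer's name."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return issuer_org(tls.getpeercert())

# A made-up issuer field, in getpeercert()'s format:
example_cert = {
    "issuer": (
        (("countryName", "US"),),
        (("organizationName", "Example CA Inc"),),
        (("commonName", "Example CA Root"),),
    )
}
print(issuer_org(example_cert))   # -> Example CA Inc
```

A browser could key a table of CA logos on exactly that string, so a phisher's cert would surface a different, unfamiliar brand.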
Which brings to mind experiences I've had chatting with people.
Not something I normally do, and to be honest, not something many of the readers of FC do, it seems. Techies have a very limited view of what people are and do, which is why the technical world has not yet grasped the importance of IM. It's still considered a sort of adjunct cousin to email. And even the whole p2p thing is considered fashionable for its technical interest, not for its social ramifications.
Here's what I've discovered in talking to people. Text is boring. Smileys are interesting. Pictures that move are better, and ones that talk and jump and say silly things are best of all! Out in userland, they love to have wonderful fun jumping little conversations with lots of silly smileys. Think of it as ring tones in a smiley ...
(Oh, and one other thing. It seems that users don't care much for real estate. They are quite happy to load up all these plugins and use them all ... to the point where the actual web page might be left with a third of the screen! This would drive me nuts, I'm one of those that turns off all the tool bars so I can get extreme vertical distance.)
Which leaves me somewhat confused. I know nothing about this world - I recognise that. But I know enough to recognise that here is one more very good idea. Think back to TrustBar. It allows the user to select their own logo to be associated with a secure site. Great idea - it dominated all the ideas I and others had thought of (visit counts, pet names, fingerprints) because of how it reaches out to that world - the world that is being hit by phishing. But over on the cap-talk list, Marc Stiegler goes one further:
"While I agree that an icon-only system might be unsatisfactory, there's more than one way to skin this cat. One of the ideas I have been inspired to think about (Ken Kahn got me started thinking about this, I don't remember how), was to assign faces to the identities. You could start with faces like pirates, scientists, bankers, priests, etc., and edit them with mustaches, freckles, sunglasses. For creating representations of trust relationships, it would be really entertaining, and perhaps instructive, to experiment with such mechanisms to allow users to create such icons, which could be very expressive."
Fantastic idea. If that doesn't kill phishing, it deserves to! If you aren't confused enough by now, you should re-examine your techie roots... and read this closing piece on how we can use a computer to deliver mental abuse. That's something I've always wanted!
Rude software causes emotional trauma
By Will Knight
Published Monday 7th February 2005 13:03 GMT
Scientists at California University in Los Angeles (UCLA) have discovered computers can cause heartache simply by ignoring the user. When simulating a game of playground catch with an unsuspecting student, boffins showed that if the software fails to throw the ball to the poor student, he is left reeling from a psychological blow as painful as any punch from a break-time bully.
Matthew Lieberman, one of the experiment's authors and an assistant professor of psychology at UCLA explains that the subject thinks he is playing with two other students sitting at screens in another room, but really the other figures are computer generated. "It's really the most boring game you can imagine, except at one point one of the two computer people stop throwing the ball to the real player," he said.
The scientists used functional magnetic resonance imaging (fMRI) to monitor brain activity during a ball-tossing game designed to provoke feelings of social exclusion. Initially the virtual ball is thrown to the participating student but after a short while the computer players lob the ball only between themselves. When ignored, the area of the brain associated with pain lights up as if the student had been physically hurt. Being the class pariah is psychologically damaging and has roots deep in our evolutionary past. "Going back 50,000 years, social distance from a group could lead to death and it still does for most infant mammals," Lieberman said.
The fact that this pain was caused by computers ignoring the user suggests interface designers and software vendors must work especially hard to keep their customers happy, and it's not surprising that failing and buggy software is so frustrating. If software can cause the same emotional disturbance as physical pain, it won't be long before law suits are flying through the courts for abuse sustained at the hands of shoddy programming.
© Copyright 2005 The Register
http://go.theregister.com/news/2005/02/07/rude_software/
Dan Kaminsky spotted this apparent repudiation of a "non-repudiable" digital signature. Even more interesting, it was over a Sarbanes-Oxley filing and certification, leading to the question of why the SEC thought it was OK for Penthouse founder Guccione to file a digitally signed document, and then claim his secretary did it? Is this like a get-out-of-trouble card for a Sarbanes-Oxley filing, or are we going to see digsig-fine-inflation until people stop using them?
Here's the article. You'll have to go to the site yourself to get the pictures. I'll put the URL at the end, just to make sure.
Penthouse's Guccione Settles with SEC
A $1 million dollar accounting problem caused big trouble for the former chief of the bare-all magazine.
Stephen Taub, CFO.com January 26, 2005
The Securities and Exchange Commission settled charges against Penthouse magazine's founder, Robert Guccione, who, without admitting or denying guilt, consented to the SEC's findings. The charge was filed against Penthouse, now known as PHSL Worldwide Inc., in the U.S. District Court for the Southern District of New York.
The commission also charged Penthouse International Inc. and two individuals formerly associated with the company with accounting fraud, reporting violations, and violations of the Sarbanes-Oxley Act certification rules.
According to the SEC’s complaint, in the quarter ended March 31, Penthouse improperly included as revenue $1 million received as an up-front payment in connection with a five-year Web site management agreement.
SEC officials asserted that the payment should not have been recognized in that quarter because the agreement was not actually signed until the following quarter. Further under generally accepted accounting principles, the $1 million payment should have been recognized as deferred revenue and amortized into income over the five-year life of the agreement.
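To make the numbers concrete, here's a quick sketch of the straight-line treatment the SEC describes: the $1 million is parked as deferred revenue and recognised in equal slices over the contract's twenty quarters.

```python
# Illustrative only: straight-line amortization of an up-front payment
# over the life of a five-year agreement, per the GAAP treatment above.
up_front_payment = 1_000_000          # dollars, received up front
contract_years = 5
quarters = contract_years * 4         # 20 quarterly reporting periods

# Equal revenue recognized each quarter; the rest stays deferred.
revenue_per_quarter = up_front_payment / quarters

deferred = up_front_payment
schedule = []
for q in range(1, quarters + 1):
    deferred -= revenue_per_quarter
    schedule.append((q, revenue_per_quarter, deferred))

print(f"Recognized per quarter: ${revenue_per_quarter:,.0f}")   # $50,000
print(f"Deferred balance after Q1: ${schedule[0][2]:,.0f}")     # $950,000
```

So only $50,000 of the $1 million belonged in that quarter's revenue, not the whole payment.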
By including the $1 million payment, Penthouse boosted its reported revenue by about 9 percent, to $12.72 million, and changed a quarterly net loss of $167,000 to a purported net profit of $828,000.
The SEC also asserts that the company's 10-Q bore an unauthorized electronic signature of Guccione -- who was Penthouse's principal executive officer and principal financial officer at the time. The signature indicated that Guccione had reviewed and signed the filing and the accompanying Sarbanes-Oxley certification. “This representation was false,” the SEC stated in its complaint.
http://www.cfo.com/article.cfm/3597911/c_3597966?f=home_todayinfinance
What is believed to be the first suit against a bank for online banking fraud has been launched in Florida. Because of the thorny issues involved, this could become a class action suit, and could set precedents for who's liable for online banking fraud in the US.
Mozilla Foundation is running a project to develop a policy for adding new Certificate Authorities to FireFox, Thunderbird and the like. This is so that more organisations can sign off on more certificates, so more sites can use SSL and you can be more secure in browsing. Frank Hecker, leading the project, has announced his "near ready" draft, intimating it will be slapped in front of the board any day now.
It transpires that this whole area is a bit of a mess, with browser manufacturers having inherited a legacy root list from Netscape, and modified it through a series of ad hoc methods that suit no-one. Microsoft - being the 363kg gorilla on the block - hands the whole lot over to a thing called WebTrust, which is a cartel of accountancy firms that audit CAs and charge for the privilege. Perfectly reasonable, and perfectly expensive; it's no wonder there are so few SSL sites in existence.
Netscape, the original and much missed by some, tried charging the CA to get added to its browser, but frankly that wasn't the answer. The whole practice of offering signed certificates is fraught with legal difficulties, and while the system wasn't under any form of attack, this lack of focus brought up all sorts of crazy notions like trying to make money in this business. Now, with phishing running rampant and certs sadly misdirected against some other enemy, getting out of selling CA additions is a Good Thing (tm). Ask your class action attorney if you are unsure on this point.
To their credit, Mozilla Foundation recognised this confusion and absence of grounding in security and rationality. The draft policy has evolved into a balanced process requiring an external review of the CA's operating practices. That review could be a WebTrust audit, some technical equivalent, or an independent review by an agreed expert third party.
Now in progress for a year, MF has added dozens of new CAs, and is proceeding on a path to add smaller non-commercial CAs like CACert. This is one of a bevy of non-"classical" CAs that use the power of the net to create their relationship networks. This is the great white hope for browsing - the chance of unleashing the power of SSL to the small business and small community operators.
To get there from where they were was quite an achievement, and other browser manufacturers would do well to follow suit. Crypto was never meant to be as difficult as the browser makes it. Go Mozilla!
The letter to ICANN on Verisign's conflict of interest has been joined by several additional letters agreeing, and as yet no demurrals. I'm looking forward to the response, as governance of the net is very important, and it's key that we get this conflict of interest thing before it gets us.
NTK, which sportingly referred to the antichrist of SSL sites, also pointed at an article on Verisign's strategy. This analysis has it aiming to be the infrastructure behind ... well, everything. Can't say I blame them for trying, given the money involved. (Another sighting.)
Meanwhile, phishing is starting to enter the technical Internet community's open consciousness as 'a problem'. Verisign like everyone else is powerless to protect the market they invented, but there are good debates happening over on the Mozilla crypto group(s) about how to deal with it all. My goal: get Verisign's brand plastered all over Firefox, and then Verisign will make damn sure never to issue a dodgy cert to a phisher.
Few agree with me, although Amir and Ahmad have pretty much proven the case in their research. Perhaps not so curiously, CAs do agree, and Verisign tried a couple of years ago to ask for this (some press release which I no longer have). All CAs benefit from the branding approach because it allows them to do some marketing in an otherwise dysfunctional market. You can't market what can't be seen and there ain't no point in securing what ain't worth marketing...
Finally, to round out the recurring flushes of schizophrenia derived from defending the CAs' role as the trust vectors of our Internet, it transpires that our friends at Verisign have dropped the 'Trust' from their site. No longer does the logo say "The Value of Trust (tm)."
I think this is a good thing, and I'm not referring to the truly horrible HTML. Trust was a term that led people to think that by using a cert (from them, or anyone else) they had secured their trustworthy transactions. No chance! If security was that easy, we'd all be doing it by now, and "out phishing" would mean something else.
Adam and I have written to ICANN on the VeriSign conflict of interest. ICANN - the Internet numbers and names authority - are in the throes of awarding the top level domain (TLD) of .net to an operator. Currently VeriSign holds this contract, but we are concerned about their conflict of interest with their NetDiscovery service which facilitates intercepts for law enforcement.
Effectively, as a certificate authority (CA), they could be asked to issue false certificates in your name and eavesdrop on your communications. (More on this.) All legally of course, as per court order or subpoena, but the issue arises that they are now serving two masters - the company on whom the order is served, and you the user.
Not only is that a conflict of interest, but it is a complete breach in the spirit of the SSL's signed certificate security architecture. As each CA is meant to be trusted - by you - this means they need to avoid such conflicts.
Personally, I can't see any way out of this one. Either VeriSign gives up the certificate authority and TLD business, or its NetDiscovery business, or it's the end of any use of the word trust in the trusted third party concept.
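One of the ideas floated earlier alongside TrustBar (fingerprints) at least lets a user detect a swapped certificate no matter which CA signed it: remember the fingerprint of the cert a site presented before, and flag any change. A minimal sketch, with a made-up byte string standing in for the real DER-encoded certificate:

```python
import hashlib

def fingerprint(der_bytes):
    """SHA-1 fingerprint of a DER-encoded certificate, in the usual
    colon-separated hex form browsers display."""
    digest = hashlib.sha1(der_bytes).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

def check_pin(der_bytes, pinned):
    """True if the presented cert matches the pinned fingerprint."""
    return fingerprint(der_bytes) == pinned

# Hypothetical stand-in for the server certificate's DER bytes:
fake_cert = b"-- DER bytes of the server certificate --"
pin = fingerprint(fake_cert)                 # recorded on first visit
print(check_pin(fake_cert, pin))             # -> True
print(check_pin(b"substituted cert", pin))   # -> False
```

It doesn't resolve the conflict of interest, but it does mean a court-ordered false cert no longer passes silently just because a trusted root signed it.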
I'd encourage you all to dive over to the ICANN site and file comments. VeriSign runs the domains, and issues half the net's secure certificates. It's also angling to be the net's intercept service. Enough is enough, let's spread these critical governance roles around a bit.
I know some have been bandying around these figures of phishing success, but I simply discounted them as ridiculous. Yet a new survey by Cyota in New York has said the same thing: "almost 5% [of banking account holders] admitted to responding and divulging information."
We're in the wrong business. Here are the other key results:
Key results found in the survey include:
50% of accountholders have received at least one phishing email, compared to less than 25% back in April - representing 100% growth in just six months.
44% of online bankers utilize the same password for multiple online banking services - a password obtained by the fraudsters can be used at a number of banks.
37% of online bankers use their online banking password at other, less secure sites - these sites are typically less protected and this poses a security risk for banks.
79% of accountholders check for the little lock on the bottom of a secure web page, however less than 40% actually click on the lock to view the security certificate. Cyota's Anti-Fraud Command Center (AFCC) confirms: lock image can be and is easily spoofed by fraudsters.
70% of accountholders are less likely to respond to an email from their bank, and more than half are less likely to sign-up or continue to use their bank's online services due to phishing.
Nah... I cannot believe that "79% of accountholders check for the little lock..." And, as for "less than 40% actually click on the lock" ... well, I guess that includes 4%.