Mike Bond, an EMV researcher from the Cambridge crypto labs and now Security Director at Cryptomathic, is giving the Keynote Address at FC. As it strongly rhymes with many of my rantings (GP, Pareto-secure, the hacker yin-yang relationship ...) here is the abstract in full. The other Invited Talk, by Dawn Jutla, also resonates, with talk of end-to-end security and how Kerckhoffs' 6th says the user is the first requirement.
(Keynote - Mike Bond) Leaving Room for the Bad Guys
When designing a crypto protocol, or building a large security architecture, no competent designer ignores considering the bad guy, and anticipating his plans. But often we designers find ourselves striving to build totally secure systems and protocols -- in effect writing the bad guys entirely out of the equation. In a large system, when you exclude the bad guys, they soon muscle their way in elsewhere, and maybe in a new and worse way over which you may have much less control. A crypto protocol with no known weaknesses may be a strong tool, but when it does break, it will break in an unpredictable way.
This talk explores the hypothesis that it is safer and better for designers to give the bad guys their cut, but to keep it small, and keep in control. It may not just be our systems but also our protocol building blocks that should be designed to make room for the bad guy to take his cut. The talk is illustrated with examples of very successful systems with known weaknesses, drawn primarily from the European EMV payment system, and banking security in general. We also discuss a few "too secure" systems that end up failing in worse ways as a result.
(Invited Talk — Dawn Jutla) Usable SPACE: Security, Privacy, and Context for the Mobile User
Users breach the security of data within many financial applications daily as human and/or business expediency to access and use information wins over corporate security policy guidelines. Recognizing that changing user context often requires different security mechanisms, we discuss end-to-end solutions combining several security and context mechanisms for relevant security control and information presentation in various mobile user situations. We illustrate key concepts using Dimitri Kanevsky's (IBM Research) early 2000s patented inventions for voice security and classification.
Curiously, these talks are the most encouraging in a long time. Does this signify a shift in IFCA focus away from academic crypto to practical security?
The rest of the programme I pass on in "title-only-peer-review-mode" so you can scan and click for anything that grabs attention.
Programme in title-only-peer-review-mode:
Vulnerabilities in First-Generation RFID-enabled Credit Cards
Conditional E-Cash
A Privacy-Protecting Multi-Coupon Scheme with Stronger Protection against Splitting
(Panel) RFID - yes or no?
A Model of Onion Routing with Provable Anonymity
K-Anonymous Multi-party Secret Handshakes
Using a Personal Device to Strengthen Password Authentication from an Untrusted Computer
Scalable Authenticated Tree Based Group Key Exchange for Ad-Hoc Groups
On Authentication with HMAC and Non-Random Properties
Hidden Identity-Based Signatures
Space-Efficient Private Search
Cryptographic Securities Exchanges
Improved multi-party contract signing
Informant: Detecting Sybils Using Incentives
Dynamic Virtual Credit Card Numbers
The unbearable lightness of PIN cracking
(Panel) Virtual Economies - Threats and Risks, Moderator
The Motorola Personal Digital Right Manager
Certificate Revocation using Fine Grained Certificate Space Partitioning
An Efficient Aggregate Shuffle Argument Scheme
Posted by iang at January 8, 2007 08:49 AM
i would contend that a lot of the vulnerabilities raised by the "yes card" exploit ... misc. past posts
http://www.garlic.com/~lynn/subintegrity.html#yescard
stem from a design that was looking at a very narrow set of threats ... specifically lost/stolen card threats ... and concentrating on countermeasures for attacks on the cards. the "yes card" exploits effectively turn out to be attacks on the terminals, not attacks on the cards.
the basis for the "yes" in the "yes card" reference comes somewhat from the standpoint that, once it was assumed that the card had been secured, a system could be deployed where the terminal could rely on business rules implemented in the card.
the issue was that skimming-type attacks (typically a terminal attack) ... recent reference
http://www.garlic.com/~lynn/aadsm26.htm#20
predated the work on the infrastructure that gave rise to "yes cards". a (terminal) skimming attack could then be used to produce a counterfeit "yes card" ... and letting the terminal rely on the "judgement" of the card for offline transactions severely aggravated the risk exposure (somewhat blindly assuming that there wouldn't be counterfeit cards). It wasn't even necessary to skim the PIN ... since the terminal would ask the (potentially counterfeit) card whether the entered PIN was correct, and of course, a counterfeit "yes card" would always answer "YES".
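to make that failure mode concrete, here is a toy sketch of the offline flow (the class and function names are hypothetical, and this is not the actual EMV message exchange): the terminal delegates PIN verification to the card, so a counterfeit "yes card" that answers YES to every query gets any entered PIN accepted.

    # toy illustration of the "yes card" failure mode -- not the real EMV flow;
    # all names here are hypothetical

    class GenuineCard:
        def __init__(self, pin):
            self._pin = pin

        def verify_pin(self, entered_pin):
            # a real card compares the entered PIN against its stored value
            return entered_pin == self._pin

    class YesCard:
        # counterfeit card built from skimmed data: it stores no PIN at all,
        # it simply answers YES to every verification request
        def verify_pin(self, entered_pin):
            return True

    def offline_terminal_approves(card, entered_pin):
        # offline transaction: the terminal relies entirely on the card's
        # "judgement" of the PIN and never contacts the issuer
        return card.verify_pin(entered_pin)

    print(offline_terminal_approves(GenuineCard("4321"), "0000"))  # False: wrong PIN rejected
    print(offline_terminal_approves(YesCard(), "0000"))            # True: any PIN "verified"

the point being that the card's answer is only as trustworthy as the assumption that counterfeit cards cannot exist.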
So, another maxim would be to include a full end-to-end analysis of all the components when designing high-integrity infrastructure.
However, the guideline that there is always the possibility that something has been overlooked (the bad guys are so ingenious), either purposefully or not ... would promote designing security in depth, with multiple layers/levels of countermeasures (even when you may have otherwise believed all bases have been covered).
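as a rough sketch of what that layering might look like (again hypothetical names and limits, not the real EMV risk-management rules, and reusing the card objects from the sketch above): the terminal still asks the card about the PIN, but treats that answer as one input among several; purely offline approvals are capped by amount and count, and everything else must also pass an independent online issuer check, so a counterfeit "yes card" defeats only one layer.

    # toy sketch of layered countermeasures (hypothetical names, not real EMV):
    # no single component's answer is sufficient to approve a transaction

    OFFLINE_FLOOR_LIMIT = 20   # cap on purely offline approvals
    VELOCITY_LIMIT = 3         # max consecutive offline approvals before forcing online

    def issuer_authorises_online(card_number, amount):
        # stub for the online issuer check (account status, issuer-side PIN or
        # cryptogram verification, fraud scoring); a real deployment asks the issuer
        return False

    def terminal_decision(card, entered_pin, amount, offline_count, card_number):
        # layer 1: the card's own PIN verdict -- useful, but never sufficient
        if not card.verify_pin(entered_pin):
            return "DECLINE"
        # layer 2: terminal risk rules limit what a bad card answer can buy
        if amount <= OFFLINE_FLOOR_LIMIT and offline_count < VELOCITY_LIMIT:
            return "APPROVE_OFFLINE"
        # layer 3: everything else goes online, where the issuer re-checks
        # independently of whatever the card claimed
        if issuer_authorises_online(card_number, amount):
            return "APPROVE_ONLINE"
        return "DECLINE"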
http://www.ddj.com/dept/security/196802528 has this comment that echoes the above:
[Bruce Schneier's] latest work is on brain heuristics and perceptions of security, and he'll be doing a presentation on that topic at the RSA Conference next month. "I'm looking at the differences between the feeling and reality of security," he says. "I want to talk about why our perceptions of risk don't match reality, and there's a lot of brain science that can help explain this."
Posted by: DDJ interview with Bruce Schneier at January 12, 2007 02:57 PM