August 20, 2005

Notes on security defences

Adam points to a great idea by EFF and Tor:

Tor is a decentralized network of computers on the Internet that increases privacy in Web browsing, instant messaging, and other applications. We estimate there are some 50,000 Tor users currently, routing their traffic through about 250 volunteer Tor servers on five continents. However, Tor's current user interface approach — running as a service in the background — does a poor job of communicating network status and security levels to the user.

The Tor project, affiliated with the Electronic Frontier Foundation, is running a UI contest to develop a vision of how Tor can work in a user's everyday anonymous browsing experience. Some of the challenges include how to make alerts and error conditions visible on screen; how to let the user configure Tor to use or avoid certain routes or nodes; how to learn about the current state of a Tor connection, including which servers it uses; and how to find out whether (and which) applications are using Tor safely.

This is a great idea!

User interfaces are one of our biggest challenges in security, alongside the challenge of shifting mental models from no-risk cryptography across to opportunistic cryptography. We've all seen how incomplete UIs can drain a project's lifeblood, and we all should know by now just how expensive it is to create a nice one. I wish them well. Of the judges, Adam was part of the Freedom project, which was a commercial forerunner of Tor (I guess), and is a long time privacy / security hacker. Ping runs Usable Security - the nexus between UIs and secure coding. (The others I do not know.)

In other good stuff, Brad Templeton was spotted by FM carrying on the good fight against the evils of no-risk cryptography (1, 2, 3). Unfortunately there is no blog to point at, but the debate echoes what was posted here on Skype many moons back.

The mistakes that the defender, SS, makes are routine:

1. Confused threat models, in the normal fashion: because we can identify one person on the planet who needs ultimate, top-notch security, we assert that all people need it. (C.f., airbags and human rights workers. Pah.)

2. Confused user-needs model, again normal. The needs of a corporation selling to fee-paying clients are completely disjoint from the needs of the Internet user. The absence of concern for the Internet's security at the systemic level, displaced by the need to sell product, is the mess we have to clean up today. E.g., "There is an abundance of encryption in use today where it's needed." should be read as "Cryptography should be everywhere we can sell it, and is dangerous elsewhere."

3. Ye Olde Mythe that security-by-obscurity results in a false sense of security. In practice, security-by-sales has led to a much greater, and thus falser, false sense of security, and is part and parcel of the massive security mess we face now.

4. Because opportunistic cryptography so challenges the world view of old-timers and commercial security providers, they have a lot of trouble being scientific about it, and often feel they are being attacked personally. This leaves them unable to contribute to the debate about phishing, malware and the like, as they are still blocked on selling the very model that brought about these ills. Every time they get close to discovering how these frauds come about, they have to reject the logic that points at the solution as the problem.

Breaking through this barrier, getting people to think scientifically and to use cryptography to our benefit, has proven to be the hardest thing! But the meme might be starting to take hold. Scanning Ping's blog, it seems he has been pushing the idea of a competition for some time:

The best I’ve been able to come up with so far is this:
Hold yearly competitive user studies in which teams compete to design secure user interfaces and develop attacks against other teams’ user interfaces. Evaluate the submissions by testing on a large group of users.

And this logic is in direct response to the discord between what users will use and what security practitioners will secure. (And note that Brad Templeton, the Jedi noted above, is apparently chair of the EFF, which is running this competition.)

Ping asks: is there any other way? Yes, I believe so. If you are mild of heart, or not firmly relaxed, stop reading now.

<ControversyAlert>

The ideal security solution is built with no security at all. Only later, after the model has been proven economically sustainable, should security be added.

And then, only slowly, and only in ways that attempt not to change or limit what the user can do. The only reason to take usability away from the users is if the system is performing so badly that it is in danger of collapse.

</ControversyAlert>

This model is at the core of the successful security systems we generally talk about. Credit cards worked this way, adding security hacks little by little as fraud rates rose. Indeed, the few times something really, truly secure was added, it either flopped (smart cards were rejected) or backfired (SSL led to phishing).

To switch sides and maintain the same point :-) just-in-time cryptography is how the web was secured! First HTTP was designed, and then a bandaid was added. (Well, actually TLS is more of a plaster cast where a bandaid should have been used, but the point remains valid, and bidirectional.)
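To make the bandaid concrete, here is a minimal sketch of the opportunistic upgrade in Python, modelled on the STARTTLS dance the mail world uses. The host and addresses are whatever the caller supplies, and this is an illustration of the shape of the logic, not anyone's production code: start in the clear, encrypt when the other side offers it, keep working when it doesn't.

    import smtplib
    import ssl

    def send_opportunistically(host, sender, recipient, message):
        # Start with no security at all -- the service must work first.
        smtp = smtplib.SMTP(host, 25, timeout=10)
        smtp.ehlo()
        # If the server advertises STARTTLS, upgrade the channel.
        # If it doesn't, carry on in the clear rather than refuse to work.
        if smtp.has_extn("starttls"):
            # Accept any certificate: unauthenticated encryption still
            # defeats the passive eavesdropper, the common threat.
            ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
            ctx.check_hostname = False
            ctx.verify_mode = ssl.CERT_NONE
            smtp.starttls(context=ctx)
            smtp.ehlo()  # re-greet over the now-encrypted channel
        smtp.sendmail(sender, [recipient], message)
        smtp.quit()

Note the deliberate heresy in the middle: the no-risk school would insist on a CA-blessed certificate or nothing, and "nothing" is exactly what most traffic then gets.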

Same with SSH - first Telnet proved the model, then it was replaced by SSH. PGP came along later than email and secured it after email was proven. WEP is adequate in strength for wireless but is too hard to turn on, so it has failed to impress. Perversely, the reason it is too hard to turn on is that it was built too securely.
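SSH's replacement trick deserves a name: trust-on-first-use. Accept the key the first time you see it, pin it, and only interrupt the user if it later changes. A minimal sketch of that pinning logic in Python (standard library only; the pin directory and the names are my own invention for illustration, not any real SSH implementation):

    import hashlib
    import os

    PIN_DIR = os.path.expanduser("~/.tofu-pins")  # hypothetical pin store

    class KeyChanged(Exception):
        """Raised when a previously pinned host key no longer matches."""

    def check_host_key(host, key_bytes):
        """Trust-on-first-use: pin an unknown key, alarm only on change."""
        os.makedirs(PIN_DIR, exist_ok=True)
        pin_file = os.path.join(PIN_DIR, host)
        fingerprint = hashlib.sha256(key_bytes).hexdigest()
        if not os.path.exists(pin_file):
            # First contact: no CA, no pre-shared secret. Accept the key
            # opportunistically and remember it for next time.
            with open(pin_file, "w") as f:
                f.write(fingerprint)
            return "accepted on first use"
        with open(pin_file) as f:
            pinned = f.read().strip()
        if pinned != fingerprint:
            # The one event worth dragging the user away from their work.
            raise KeyChanged("host key for %s changed; possible interception" % host)
        return "match"

No certificates, no key ceremony, nothing to configure before the first connection works: most of the protection, and none of the set-up cost that sank WEP.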

Digital cash never succeeded - and I suggest it is in part because it was designed to be secure from the ground up. In contrast, look at PayPal and the gold currencies. Basically, they are account money with that SSL-coloured plaster cast slapped on. (They succeeded, and note that the gold currencies were also the first to treat and defeat phishing on a mass scale.)

So, over to you! We already know this is an outrageous claim. The question is: why is JIT crypto so close to reality? How real is it? Why is it that everything we've practiced as a discipline has failed, or only worked by accident?

(See also Security Usability with Peter Gutmann.)

Posted by iang at August 20, 2005 10:00 AM