October 19, 2009

Denial of Service is the greatest bug of most security systems

I've had a rather troubling rash of blog comment failures recently. Not on FC, which seems to be ok ("to me"), but everywhere else. At about four failures in the last couple of days, I'm starting to get annoyed. I like to think that my time in writing comments for other blogs is valuable, and sometimes I think for many minutes about the best way to bring a point home.

But more than half the time, my comment is rejected. The problem is, on the one hand, overly sophisticated comment boxes that rely on exotica like javascript and SSO through some place or other ... and on the other hand, spam.

These things have destroyed the credibility of the blog world. If you recall, there was a time when people used blogs for _conversations_. Now, most blogs are self-serving promotion tools. Trackbacks are dead, so the conversational reward is gone, and comments are slow. You have to be dedicated to follow a blog and leave a comment there, or stupid enough to think your comment matters, to keep fighting the bl**dy javascript box.

The one case where I know clearly "it's not just me" is John Robb's blog. This was a *fantastic* blog where there was great conversation, until a year or two back. It went from dozens to a couple in one hit by turning on whatever flavour of the month was available in the blog system. I've not been able to comment there since, and I'm not alone.

This is denial of service. To all of us. And this denial of service is the greatest evidence of the failure of Internet security. Yet it is easy, at least theoretically easy, to avoid. Here, it is avoided by the simplest of tricks: maybe one spam per month comes my way, but if I got spam like others get spam, I'd stop doing the blog. Again, denial of service.

Over on CAcert.org's blog they recently implemented client certs. I'm not 100% convinced that this will eliminate comment spam, but I'm 99.9% convinced. And it is easy to use, and it also (more or less) eliminates that terrible thing called access control, which was delivering another denial of service: the people who could write weren't trusted to write, because the access control system said they had to be access-controlled. Gone, all gone.

According to the blog post on it:

The CAcert-Blog is now fully X509 enabled. From never visited the site before and using a named certificate you can, with one click (log in), register for the site and have author status ready to write your own contribution.

Sounds like a good idea, right? So why don't most people do this? Because they can't. Mostly they can't because they do not have a client certificate. And if they don't have one, there isn't any point in the site owner asking for it. Chicken & egg?

But actually there is another reason why people don't have a client certificate: it is because of all sorts of mumbo jumbo brought up by the SSL / PKIX people, chief amongst which is a claim that we need to know who you are before we can entrust you with a client certificate ... which I will now show to be a fallacy. The reason client certificates work is this:

If you only have a WoT unnamed certificate you can write your article and it will be spam controlled by the PR people (aka editors).

If you had a contributor account and haven’t posted anything yet you have been downgraded to a subscriber (no comment or write a post access) with all the other spammers. The good news is once you log in with a certificate you get upgraded to the correct status just as if you’d registered.

We don't actually need to know who you are. We only need to know that you are not a spammer, and that you are going to write a good article for us. These two are more or less equivalent, if you think about it; they are a logical parallel to the CAPTCHA or Turing test. And we can prove this easily, economically and efficiently: write an article, and you're in.

Or, in certificate terms, we don't need to know who you are, we only need to know you are the same person as last time, when you were good.
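That "same person as last time" test is simple enough to sketch. The names and roles below are illustrative only (this is not CAcert's code): identity is reduced to the fingerprint of whatever certificate the commenter presents, and no name inside the cert is ever examined.

```python
import hashlib

# Illustrative sketch of a "last-time-good-time" test -- not CAcert's code.
def fingerprint(cert_der):
    """Hash of the certificate bytes: our entire notion of 'who'."""
    return hashlib.sha256(cert_der).hexdigest()

def role_for(cert_der, good_fingerprints):
    if cert_der is None:
        return "subscriber"   # no cert: lumped in with the spammers
    if fingerprint(cert_der) in good_fingerprints:
        return "author"       # good last time, trusted this time
    return "moderated"        # unknown cert: editors vet the first post

# A commenter who wrote a good article last time is recognised by key alone.
good = {fingerprint(b"cert-seen-last-time")}
assert role_for(b"cert-seen-last-time", good) == "author"
assert role_for(b"never-seen-before", good) == "moderated"
assert role_for(None, good) == "subscriber"
```

Note that nothing here needs a CA or a name: the set of good fingerprints is built by the site itself, one good article at a time.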

This works. It is an undeniable benefit:

There is no password authentication any more. The time taken to make sure both behaved reliably was not possible in the time the admins had available.

That's two more pluses right there: no admin de-spamming time lost to us and general society (when there were about 290 in the wordpress click-delete queue) and we get rid of those bl**dy passwords, so another denial of service killed.

Why isn't this more available? The problem comes down to an inherent belief that the above doesn't work. Which is of course complete nonsense. Two weeks later: zero comment spam. And I know this will carry on being reliable, because the time taken to get a zero-name client certificate (free, it's just your time involved!) is well in excess of the trick required to comment on this blog.

No matter the *results*: because of the belief that "last-time-good-time" tests are not valuable, the feature of using client certs is not effectively available in the browser. What I speak of here is so simple to code up that it can actually be triggered from any website (which is how CAs get certificates into your browser in the first place: some simple code that causes your browser to do it all). It is basically the creation of a certificate key pair within the browser, with no name in it. Commonly called the self-signed certificate or SSC, these things can be put into the browser in about 5 seconds, automatically, on startup, on absence, or whenever. If you recall that aphorism:

There is only one mode, and it is secure.

Contrast that to SSL, and we can see what went wrong: there is an *option* of using a client cert, which is a completely insane choice. The choice of making the client certificate optional within SSL is a decision not only to allow insecurity in the mode, but also to promote insecurity, by practically eliminating the use of client certs (see the chicken & egg problem).
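The optionality is visible in any TLS server API. A minimal sketch in Python's standard ssl module (the policy knob only, not a working deployment; file loading is omitted):

```python
import ssl

# The default server context leaves the client certificate optional
# (CERT_NONE): the insecure mode complained about above.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
assert context.verify_mode == ssl.CERT_NONE

# "There is only one mode, and it is secure": make the handshake fail
# unless the client presents a certificate.
context.verify_mode = ssl.CERT_REQUIRED
# A real server would also call context.load_cert_chain(...) with its
# own key and cert, and load_verify_locations(...) with the certs it
# is prepared to accept from clients.
```

One attribute flip is the entire difference between the two modes; the insecure one is the default.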

And this is where SSL and the PKIX deliver their greatest harm. It denies simple cryptographic security to a wide audience, in order to deliver ... something else, which it turns out isn't as secure as hoped because everyone selects the wrong option. The denial of service attack is dominating, it's at the level of 99% and beyond: how many blogs do you know that have trouble with comments? How many use SSL at all?

So next time someone asks you why these effing passwords are causing so much grief in your support department, ask them why they haven't implemented client certs. Or why the spam problem is draining your life and destroying your social network. Client certs solve that problem.

SSL security is like Bismarck's sausages: "making laws is like making sausages, you don't want to watch them being made." The difference is, at least Bismarck got a sausage!

Footnote: you're probably going to argue that SSCs will be adopted by the spammers' brigade once there is widespread use of this trick. Think for a minute before you post that comment; the answer is right there in front of your nose! You are also probably going to mention all the other limitations of the solution. Think for another minute and consider this claim: almost all of the real limitations exist because the solution isn't much used. Again, chicken & egg, see "usage". Or maybe you'll argue that we don't need it now we have OpenID. That's specious, because we don't actually have OpenID as yet (some few do, not all), and also, the presence of one technology rarely argues against another being needed; only marketing argues like that.

Posted by iang at October 19, 2009 10:47 AM | TrackBack
Comments

OpenID requires as much effort as client certs, if not more: client certs are set up once and then sent many times.

OpenID has proven less than optimal and some sites are dropping it because it was more trouble than it was worth.

A lot of people claimed spammers would overcome greylisting once it became widespread; it never did become widespread, so it's still very effective against spam.

I doubt SSCs will become mainstream, because most people won't understand them: not just users, but server admins as well, especially in light of how many web server admins don't even bother installing optional SSL certs to protect people's passwords from being sniffed. SSCs would of course solve both issues.

Posted by: Anonymous at October 19, 2009 05:07 AM

Iang,

This is a long post 8( but it's a complex issue to get to grips with so my apologies in advance.

In short, the problem you have identified is not being a dog but being a good dog or a bad dog ("because on the internet nobody knows you're a dog" 8).

However the flip side is not being "cast" as either not just by the blog owner but by "unknown others"...

Thus on-line people desire for various reasons to be both "A Non Y Mouse" and "trusted".

This is because the context of comments does not always come across at the time or after the fact, even in the off-line Face to Face (F2F) world.

And likewise occasionally in the Individual to Individual (I2I) relationships of friends and spouses etc.

In off-line (F2F) settings the context is usually clear and any mixup (gaffe) is usually recognised and resolved at the time, and often fairly soon forgiven or forgotten.

However the advent of recording, by writing and later by sound, has meant that a social witticism made by a professional person can come back to haunt them (especially via certain types of people with agendas).

It is possibly why Politicos never say anything these days due to fear of being quoted out of context at a later time, and as a result come across as superficial and out of touch (then again... ;)

As if the risk of inadvertent or covert recording was not enough, we now have the Internet... And the Internet "never forgets", and has a nasty habit of reminding you at "inconvenient times", sometimes in a show-stopping way.

Therefore in the on-line world you need to consider context and the role you have within it.

Generally (unlike some of those in the public eye) due to our F2F existence we don't think consciously about context and roles.

To see why, just think and try and list how many roles you have in your F2F existence (friend, parent, spouse, colleague, boss, writer, etc) and how they break down into groups of people, individuals, and individual roles with individuals.

If you are "normal" (or fairly so... ;) you have a handful of real friends ranging through to several hundred acquaintances at various levels in your social life. And likewise you may have similar numbers in your work or professional life. Oh and then there's family, best not to forget them (though sometimes... ;)

As humans we have millennia of experience of F2F sub-conscious context and role setting, which we still get wrong, and a myriad of unofficial protocols to partially deal with it. And we still make gaffes, some of us more than others (think of all the high-functioning autistics that populate the techie world).

Most of us have learnt to deal with roles and context from those around us and the protocols we have developed relate to them and often do not travel beyond the fringes of our off-line F2F networks.

And "context and roles" is but one of many problems in the on-line world (basic authentication being an as yet unresolved issue).

You are correct that people do not understand certificates (this includes security professionals as well) because they are more than dual use, and have many and varied non-technical properties that are not even yet known let alone understood.

For instance do you know why you need a Pub Key Cert and a Pub signing Cert and why they should be different unrelated pub-pri pairs?

(The answer to why you need a "key" pair and a "signing" pair can be found by examining the UK RIPA and other legislation. You can be forced at any time to hand over your encryption keys, because the legislation equates them to a "physical" device for which a warrant can be served. A signing key, by contrast, is an authenticator of consent used to validate an "instrument", and to that you legally cannot be coerced.)

And further why you should need an unrelated set for every different "role" you have within every on-line "context"?

(Simple answer is traceability and association).

Therefore you would need at least one set of certificates for each and every blog site you used.

Why not just one set per site?

People may want to be two or more different personas on the same site for "good" (professional, social, friend, etc) not "bad" (sockpuppet, flamer etc) reasons.

We have the layer "0" technology to do these things, but as yet not the understanding of what the complexities really are and thus how many layers there are or the what and how of them or what protocols are needed.

Which makes the current tools "stabs in the dark" and therefore somewhat inflexible at present.

Technology issues aside, there is the management of the context and roles.

In the off-line F2F world they are implicit in all sorts of almost subliminal cues the subconscious has familiarity with.

There is no real equivalent for the non F2F on-line world, at best we have emoticons or smiley faces such as ";)" that I have used above to show I'm gently teasing.

But these are not universally understood, let alone recognised or used. So we have no protocols to prevent or correct "gaffes" at present in the non F2F world, and as I noted, the Internet "does not forget".

Worse, you do not know who has been party to whatever you have said, or whether they were aware of the context and the role you were playing within it.

Then of course there are those that for whatever reason are seeking out what "you" might or might not have said, for their own reasons. Not only is the Internet not ephemeral, it is relatively easily (for its size) searchable, fairly rapidly and at little or no cost to the searcher.

This all comes before the (by comparison minor) problem of "authentication".

Then there are the problems of "traceability" and being "A Non Y Mouse".

This requires not only the myriad of certificates, BUT your extremely careful use of them to make sure you don't inadvertently make a "gaff" the bones of which may come back to rattle noisily in the cupboard (or closet) in your as yet unknown future...

Let's face it, we have not even solved the (again simple by comparison) technical issues of "key management".

Most people I know only have two bank accounts, one for saving the other (current / cheque) for day-to-day finances. And in the main, they cannot manage either effectively (and don't banks just love this... 8(

I can't see even the best of us maintaining many on-line persona properly.

And it is this issue of certificates not making you feel anonymous that will probably stop them being used at the end of the day.

Posted by: Clive Robinson at October 25, 2009 01:09 AM

Iang,

Sorry for the above multiple post.

The mobile phone I post from informed me that "it could not get the page" on posting...

Regards,

Clive.

Posted by: Clive Robinson at October 25, 2009 01:23 AM

Clive, on behalf of the entire internet, I apologise for our terrible record of reliability engineering. We know how to build reliable systems, but for some reason we got distracted by the notion of reliable protocols, and forgot about the end-to-end system. In this case, your phoned-in comment has to go through about a half-dozen reliable protocols, which when summed-together in pathetic serialisation are a mess.

If you give me a DARPA contract, I'll fix it ;-)

Posted by: Iang (on reliability) at October 25, 2009 08:08 AM

Clive, thanks for your response. It's probably worth a blog post on its own.

You are right that this is a very difficult topic. My grand sweeping post did indeed sweep a lot of detail under the carpet. Including "how to use certs safely," which I grant is a tricky issue.

I think one of the huge problems with PKI is it only works if people use it the way it was designed, and the way it was designed is beneficial to the sellers of the product, but harmful to the consumer.

You can see this dilemma more clearly with Qualified Certificates, which make signatures that can be equivalent to a human signature. These are fundamentally harmful to a human being because they can sign away grandma's house (insert here smartcards, chip&pin, MITB, 100 FC posts, etc etc).

In essence the pen and hand combination is totally safe to me because I am in total control. Whereas the smart card is totally dangerous to me the moment I insert it into a slot; at that point I have to put my total trust into some hardware/software/mushware.

In such a scenario, no user would want to lose that control without being given a big benefit. So QCs only work where the system provides them for free, and provides the investment to make them work for a lot of places. And preferably picks up the losses when the mushware takes control. Tall order.

On the other hand, what I suggest with SSCs works in a limited context: really we want what you suggest, for the browser to create an SSC pair for every website, whitelisted. That way the tracking is dealt with (and with a little bit of advancement, we can see that the whitelisting work will also handle the multiple-personas-one-site scenario just by expanding the tuple to {site,key,persona} ).
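That tuple is simple enough to sketch. The data structure below is illustrative only (it is not any browser's actual store), but it shows how expanding the whitelist key from {site, key} to {site, key, persona} handles multiple personas without linking them:

```python
# Illustrative sketch of a per-site SSC whitelist, keyed on the
# {site, key, persona} tuple discussed above -- not a real browser API.
whitelist = set()

def remember(site, key_fingerprint, persona):
    whitelist.add((site, key_fingerprint, persona))

def recognised(site, key_fingerprint, persona):
    return (site, key_fingerprint, persona) in whitelist

remember("blog.example.org", "a1b2c3", "professional")
assert recognised("blog.example.org", "a1b2c3", "professional")
# A second persona on the same site would be a distinct keypair and a
# distinct tuple, so it is not linkable to the first:
assert not recognised("blog.example.org", "a1b2c3", "friend")
```

Tracking across sites is dealt with the same way: each site sees a different key, so no two tuples share a fingerprint.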

Posted by: Iang (on preservation of self) at October 25, 2009 08:20 AM

@ iang,

Your reply reminded me of something you will probably have sympathy with...

You said,

"In essence the pen and hand combination is totally safe to me because I am in total control. Whereas the smart card is totally dangerous to me the moment I insert it into a slot..."

It's not just smart cards, it's also "transaction authorisation tokens".

People talk about using SMSs (zero security there), or software on a mobile phone that uses the phone's camera to read a two-dimensional bar code or colour equivalent (see Chronos technology). Or, as IBM have it, plugging the token into the USB port...

Essentially, they all have two faults in common:

1, They are mutable
2, They have a high bandwidth channel to the untrusted PC.

Together it means the token can be as insecure as the PC, which kind of defeats the purpose of a trusted token being used as a reliable side channel.

Apart from this, most of the systems that do two-way transaction authentication work very much the way I proposed back in 2000, after identifying a very clear need for it (why has it taken 9 years...).

My concern is that now that mobile phones are (almost) as good for web browsing as PCs, malware writers will target these devices and use that "high bandwidth" channel in a covert way to get around the token...

We are already seeing "target specific" malware that recognises when you log into a specific bank and alters values in statements etc. to cover the sums withdrawn from your account.

Which was a possibility I had identified back in '98. So how long before these mobile phone and USB based tokens get hit? Five years or less is my guess.

Posted by: Clive Robinson at October 26, 2009 07:32 AM