The DDOS (distributed denial of service) attack is now a mainstream threat. It has always been a threat, but in one sense or another it has been possible to pass it off. Either it happens to "those guys" and we aren't affected, or the nuisance of some hacker launching a DOS against our ISP is dealt with by waiting it out.
Or we do what the security folks do, which is to say we can't do anything about it, so we won't. Ignoring it is the security standard. I've been guilty of it myself, on both sides of the argument: Do X because it would help with DOS, to which the response is, forget it, it won't stop the DOS.
But DOS has changed. First it became DDOS - distributed denial of service. And over the last year or two its application has become so well institutionalised that it is now a frequent extortion tool.
Authorize.com, a processor of credit cards, suffered a sustained extortion attack last week. The company has taken a blow as merchants have deserted in droves. Of course, they have to: they need their payments flowing to keep their own businesses alive. This signals a shift of Internet attacks into a systemic phase - an attack on a payment processor is an attack on all its customers as well.
Hunting around for some experience, Gordon of KatzGlobal.com gave this list of motives for DDOS:
Great list, Gordon! He reports that extortion attacks are 1 in 100 of the normal, and I'm not sure whether to be happier or more worried.
So what to do about DDOS? Well, the first thing that has to be addressed is the security mantra of "we can't do anything about it, so we'll ignore it." I think there are a few things to do.
Firstly, change the objective of security efforts from "stop it" to "not make it worse." That is, a security protocol, when employed, should work as well as the insecure alternative when under DOS conditions. And thus, the user of the security protocol should not feel the need to drop the security and go for the insecure alternative.
Perhaps better put as: a security protocol should be DOS-neutral.
Connection oriented security protocols have this drawback - SSL and SSH both add delay to the opening of their secure connection from client to server. Packet-oriented or request-response protocols should not, if they are capable of launching with all the crypto included in one packet. For example, OpenPGP mail sends one mail message, which is exactly the same as if it was a cleartext mail. (It's not even necessarily bigger, as OpenPGP mails are compressed.) OpenPGP is thus DOS-neutral.
This makes sense, as connection-oriented protocols are bad for security and reliability. About the best thing you can do if you are stuck with a connection-oriented architecture is to turn it immediately into a message-based architecture, so as to minimise the cost. And, from a DOS point of view, it seems that this would bring about a more level playing field.
A second thing to look at is to be DNS-neutral. This means not being slavishly dependent on DNS to convert one's domain names (like www.financialcryptography.com) into the IP number (like 62.49.250.18). Old timers will point out that this still leaves one open to an IP-number-based attack, but that's just the escapism we are trying to avoid. Let's close up the holes and look at what's left.
Finally, I suspect there is merit in make-work strategies to further level the playing field. Think of hashcash. Here, a client performs a terribly boring and long calculation to find a hash with a prescribed pattern - say, a run of leading zero bits. Because secure message digests look random, the only way to find such a hash is brute-force trial, so producing one is lots of work, while checking one is cheap.
So how do we use this? Well, one way to flood a service is to send in lots of bogus requests. Each of these requests has to be processed fully, and if you are familiar with the way servers are evolving - more crypto, more database requests, that sort of thing - you'll appreciate the power each request consumes. Flooding is easy because generating a junk request is far cheaper than serving a real one.
The solution then may be to put guards further out, and insist the client matches the server's load, or exceeds it. Under DOS conditions, the guards far out on the net look at packets and return errors if the work factor is not sufficient. The error returned can include the current to-the-minute standard. A valid client with one honest request can spend a minute calculating. A DDOS attack will then have to do much the same - each minute, using the new work factor parameters.
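To make the make-work idea concrete, here is a toy sketch of hashcash-style proof-of-work. The SHA-256 choice, the leading-zero-bits target, and the challenge/nonce format are all illustrative assumptions, not a description of any deployed guard:

```python
import hashlib
import itertools

def leading_zero_bits(digest: bytes) -> int:
    """Count the number of leading zero bits in a digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits

def mint(challenge: str, difficulty: int) -> int:
    # Client side: brute-force a nonce. On average this costs
    # ~2**difficulty hash computations - the "minute of calculating".
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce

def verify(challenge: str, nonce: int, difficulty: int) -> bool:
    # Guard side: a single hash, regardless of difficulty. The guard
    # can raise `difficulty` minute by minute while under attack.
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return leading_zero_bits(digest) >= difficulty
```

The asymmetry is the point: the client pays exponentially in the difficulty parameter, while the guard's check stays constant-time, so cheap junk requests stop being cheap.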
Will it work? Who knows - until we try it. The reason I think of this is that it is the way many of our systems are already structured - a busy tight center, and a lot of smaller lazy guards out front. Think Akamai, think smart firewalls, think SSL boxes.
The essence, though, is to start thinking about it. It's time to change the mantra - DOS should be considered in the security equation, if only to the standard of neutrality. Just because we can't stop DOS, it is no longer acceptable to ignore it.
Posted by iang at September 28, 2004 06:38 AM | TrackBack

Thinking about how to get paid to think about it. The money around the next crazy security idea will go to those that control the hype. Poorly constructed models hoisted upon large user bases seem to be the norm, and as such provide a unique situation for monopolistic adventures. The total solution will appear and dominate the security market. DRM will be enhanced and hailed as the great answer. Attacks are the mother of security solutions firms and patented crypto regimes. If we spend more to get less, those that receive will feed our hype-hungry minds with the stuff that lets us sleep at night - who cares if we wake up. Sleep, peaceful sleep, while the hype crews work hand in hand with the hackers building fortunes; perhaps there is a level of cooperation that needs to be explored?

Airport security was too hard, so now security experts work on hacking scenarios - kind of like being an allergist: your patients never get better, but they never get worse. If we could only get Islamic terrorists to behead a couple of hackers online, the world would be a perfect place. Since the providers of security cannot protect us from either, who cares. I wonder why hackers fail to attack pro-Islamic-terrorist sites - maybe they're working together? Is it possible to take a threat seriously? Perhaps a clear link between Eastern European hackers and Islamic terror needs to be shown? Maybe a simple solution: sites designed for commerce and those designed for other stuff need to be separated in the world of DNS? Maybe the hackers know this, and an isolated group of Midget Clown Islamic Hackers, AKA The Big Shoes of the Jihad, will develop a method to attack only commercial sites. Then what - the world is doomed, run by a small sect of extreme Midget Clown Hackers dedicated to an Islamic state. This is a Bozo no-no; we must fight the Clowns of Islam and save humanity.
I suggest we create a Super Hero that always helps, at least in make-believe land where I live. Let's make believe that real students of network security study scenarios to improve the overall conditions of the users against those that would corrupt the intent. Now these pretend students actually find an answer or a solution that works - what then? Well, the pretend industry tries to steal the idea, or launches its own make-believe solution and discredits the student concept. Again we have a clown scenario: these are the money clown attacks centered around Microsoft, the training camp for the Money Clown Terrorists, known as the big shoes of anywhere they go.
Posted by: Jimbo at September 28, 2004 09:19 AM

FYI, there are a number of moves about at ISP level to throttle flooding activity at the sources (e.g. if a given ISP detects massive flooding, be it worm-related or DDOS, they communicate with the peers from whom the connections originate and get them to drop the bandwidth available to those clients).
Most firewalls nowadays can also do SYN throttling; services like Akamai are also pretty decent at stopping the results. A lot of the sites affected (online gambling, essentially) often operate in legal gray areas and are afraid to do anything about it for fear of reprisals - these are the extortion attacks. However, very often the affected parties' technical architecture is just not particularly well designed. DDOS is not the boogeyman it once was.
-John
Posted by: John M at September 28, 2004 09:31 AM

Hello Ian,
np...
Thanks for the mention...
I think attacks are much easier to beat with just a few adjustments in the way IPs and DNS work.
One can beat 99% of attacks just by flipping IPs, or by switching to waiting IPs by changing DNS nameservers.
I am not sure about this totally, but from what I can get a feel for, it seems that any significant attack is not just an on/off situation. When servers come under attack, the DDOS builds its power and ramps up to full steam, and once set in motion cannot easily be wound down again. If the entire bot network gets used, then they have no resources left to follow the victim to a new location.
I'm not 100% sure it works like this, but that is what it feels like many times on the receiving end of one.
The bottom line is: if we change the IP out from under the attack and block the attacked IP, then in our experience the attackers have not been able to follow the change.
The cost of using instant DNS has just improved. Today we switched two sites to a new server; one site took 1 hour and the other took 1 minute. This is a huge improvement over the 12-24 hours it used to take for .com and .net domains, and may soon render separate DNS services pointless, as the web hosting provider can do the same thing as long as the registrar supports the speed.
So time is improving and tools still need to be developed.
To me, what needs to happen is that we should be able to flip IPs and flip nameservers instantly, anywhere in the world.
I mean push a few buttons and drop in a new subnet across a server with less than 30 seconds of downtime.
Another thing we need to do is make IPs a commodity. Every hosting account should be on a static IP, because shared IPs are a failed notion. I understand why we have shared IPs, but it's time to move past that now.
For example: if I have 300 websites all sharing one IP and site number 243 gets a DDOS, you lose all 300 sites immediately.
If all 300 sites each have one static IP and site 243 gets a DDOS, you block site 243's IP on port 80. Guess what? Even if you have to block port 80 across the entire server because of a shared-IP problem, the sites with SSL certs and static IPs continue running over the SSL port 443, or whatever port it's set for.
This tells me another product in the security realm would be to make use of other ports: strip that feature out of the certificate, enable sites to automatically run on a different port, and then expand that so you can assign port usage as easily as you might assign an IP, thus upping the ante and your options.
From there you could create a bot to back up and restore large numbers of sites across continents, manage IPs and nameservers, and watch for attacks at the same time. If an attack were to start, the bot goes to work doing a ddoshuffle for the victim site.
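The flip-and-block behaviour described above can be sketched in a few lines. Everything here is hypothetical - the IPs, the threshold, and the stubbed-out registrar and router hooks - it only illustrates the decision logic, not a real tool:

```python
from collections import deque

class DuckAndRun:
    """Toy 'duck and run' monitor: when traffic to the live IP exceeds a
    threshold within a sliding window, flip to a standby IP and record
    the old IP as blocked. In real life the flip would update an A
    record at the registrar and null-route the old IP at the border."""

    def __init__(self, live_ip, standby_ips, threshold, window=1.0):
        self.live_ip = live_ip
        self.standby = deque(standby_ips)
        self.threshold = threshold     # requests per window before we flip
        self.window = window           # sliding window length in seconds
        self.hits = deque()            # timestamps of recent requests
        self.blocked = set()

    def on_request(self, now):
        self.hits.append(now)
        # Drop timestamps that have aged out of the window.
        while self.hits and self.hits[0] < now - self.window:
            self.hits.popleft()
        if len(self.hits) > self.threshold and self.standby:
            self.flip()

    def flip(self):
        self.blocked.add(self.live_ip)      # blackhole the attacked IP
        self.live_ip = self.standby.popleft()
        self.hits.clear()
```

As Gordon notes, the whole scheme only works if the flip is faster than the attacker's ability to re-resolve and follow - which is also exactly the weakness Hal Finney raises further down.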
There are almost no tools out there to visualize attacks and search for victim sites, either. This is an interesting area of computer science, no doubt, but I think you need to live in the muck to do something about it, or else it has little meaning.
My opinion is that you cannot move the fence further out and block an attack. Blocking attacks causes loopback 'noise' problems, and this in turn runs up the bandwidth until it pops. It does not matter where this bandwidth sits in relation to the server, because it is theoretically possible to DDOS the entire Internet and saturate all the bandwidth available. I believe there is a recorded case of the entire Internet wobbling because of this.
So my opinion is that duck and run is a strategy worth investigating. The single point of error in the duck and run strategy is that it is too slow at this time.
Slowness directly translates to -$. The faster your tools can duck and run and re-establish in a safe space the less it will cost to be secure.
When this has reached the zero point in time, we will no longer have DDOS attacks, as evasion security will then be a commodity, seamless in the network. But we need to get onto IPv6, or else find another way to solve this IP shortage issue. IPs are too damn expensive right now.
IPs should be free and unlimited.
-- Best regards, -- Gordon Hayes mailto:info@katzglobal.com
Posted by: Gordon Hayes at September 28, 2004 11:02 AM

Ian,
It strikes me that DDOS is a lot like spam.
The email address I use for this list uses Greylisting.
http://projects.puremagic.com/greylisting/
Greylisting (at the present time) eliminates essentially all spam painlessly. Basically it does this by using the SMTP protocol in a way it was not originally intended, exploiting the fact that spammers' sending engines do not fully implement SMTP (doing so would increase their costs considerably). Whenever a message is received from an unknown source, the SMTP server replies: "resend later". The spammers never do, but legitimate mail servers do, and the mail is let through on the second attempt. In spite of my address being exposed on this and the e-gold list for years, I get virtually no spam. On the downside, there is sometimes an initial delay from a new correspondent, and there are a few commercial businesses (like E*Trade!) that use spammer-like bulk mailers which also fail Greylisting.
So, my thought is that perhaps there is a way to use the HTTP protocol in a way that accomplishes something similar to Greylisting for DDOS.
For instance, what if the server, instead of directly fulfilling an HTTP request, just sent a redirection header with a 1-second delay in it? The DDOS might well not follow the redirection at all (and the server would be saved the time it took to fulfil the HTTP request). And if it did follow it, that would cost the attacker time.
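That HTTP-level analogue might look something like the sketch below. The status codes, header values, and per-client-IP key are assumptions for illustration - this is the shape of the idea, not a tested mitigation:

```python
import time

# Maps a client key (here, the source IP) to its first-request time.
first_seen = {}

def handle(client_ip, path, now=None, delay=1.0):
    """Toy 'HTTP greylisting': answer a client's first request with a
    redirect back to the same path, and serve real content only once
    the client has waited out the delay and followed the redirect."""
    now = time.time() if now is None else now
    first = first_seen.setdefault(client_ip, now)
    if now - first < delay:
        # Cheap for us to emit; a flood script that never follows
        # redirects never costs us a full page render.
        return 302, {"Location": path, "Retry-After": "1"}, b""
    return 200, {}, b"real content"
```

As with SMTP greylisting, the bet is that attack tools don't implement the protocol's retry semantics - which, as Hal Finney points out below, holds only as long as the trick stays unpopular.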
Best,
CCS
I don't see how evasion can work in the long run; if your customers can find you, your attackers can do so as well. If it works today it is only because it is so seldom used that attackers have not bothered to enhance their tools to track a moving target. But I don't see any reason the attacker can't write his code to do a DNS lookup every few minutes to see if you have gone to a new address, and direct the attack there.
Likewise, greylisting is another measure whose success depends largely on its unpopularity. If this practice became widespread, spammers could adapt their software to perform the resend when requested. The initial rejection uses few of the spammer's resources and so can be handled very quickly in a preliminary phase, then the messages can be sent subsequently and they will be accepted.
Posted by: Hal Finney at September 28, 2004 06:36 PM