September 28, 2004

Microsoft's dilemma - they finally changed the mantra!

Over in the Alice-in-Wonderland world of Microsoft security, people are reacting in shock and horror at the decision not to upgrade (read: make secure) versions of Internet Explorer older than the current one.

Can this be? Has Microsoft lost its collective marbles? The analysts are spinning on this one.

Yes, and no - wake up, guys! This is as obvious as it gets. When Bill Gates sent around his security memo and turned the company's focus to security, we all knew this was an impossible task. Now, two years later, the impossible task has been realised - in the sense that now they know, not that they've done the impossible!

How do they deal with it? Well, there is no way that Microsoft can just turn around and release "secure" patches for all its software. That's not how security works, and that's not how big projects work.

This is a rewrite. Operating systems are not built in a day, nor a year. And security is done from the ground up, from the beginning, if it's to be any good.

To fix the OS for security, where before there was the kiss of marketing, is fundamentally a rewrite. Can it be done? Unlikely. Microsoft's internal challenge is to do as much as possible as quickly as possible and not blow their market away in the process. Something has to give, the task is just too big.

What has given is the older releases. Makes sense! If Microsoft has to concentrate its forces, it has to put the older stuff out to pasture. Even if it means that the older users now find themselves loading up Opera, Firefox, Konqueror and others...

I believe this to be a conscious decision on the part of Microsoft. In military terms, this is a strategic withdrawal. They are giving up territory that they can't defend. The surprising thing is that it has taken this long to be revealed, even to them. Maybe Bill Gates' vaunted ability to spin the company on a dime was a one-off.

Posted by iang at 05:55 PM | Comments (0) | TrackBack

The DDOS dilemma - change the mantra

The DDOS (distributed denial of service) attack is now a mainstream threat. It's always been a threat, but in some sense or other it's been possible to pass it off. Either it happens to "those guys" and we aren't affected. Or the nuisance factor of some hacker launching a DOS on our ISP is dealt with by waiting.

Or we do what the security folks do, which is to say we can't do anything about it, so we won't. Ignoring it is the security standard. I've been guilty of it myself, on both sides of the argument: Do X because it would help with DOS, to which the response is, forget it, it won't stop the DOS.

But DOS has changed. First to DDOS - distributed denial of service. And now, over the last year or two, DDOS has become so well institutionalised that it is a frequent extortion tool.

Authorize.com, a processor of credit cards, suffered a sustained extortion attack last week. The company has taken a blow as merchants have deserted in droves. Of course, they have to, because they need their payments to keep their own businesses alive. This signals a shift in Internet attacks to a systemic phase - an attack on a payment processor is an attack on all its customers as well.

Hunting around for some experience, Gordon of KatzGlobal.com gave this list of motives for DDOS:

  • #1. Revenge Attack: A site may have ripped someone off, and in revenge they ddos the site or hire it out. You can actually hire this service like a website hitman - $50 to kill your site, sort of thing, from most Eastern bloc countries. Did I just coin something? The Webhit.
  • #2. Field Reduction: Say I no longer like Robert and he no longer likes me and we get into a pissing match and we ddos each other... Or say we were jerks and didn't want any competition at all. We could ddos all of our competitors into oblivion. This type of thing happens more than you might think.
  • #3. Enacting God's Will: These are the DDOSers from GOD. They are righteous, and they ddos sites that do not align with their views on the world.
  • #4. HYIPer DDOSers: If anything looks scammy about a HYIP (high yield investment programme) site, there is a particular DDOS team running around out there who ddosses HYIPs damn near on a daily basis. This falls into categories 3 above and 5 below.
  • #5. Extortionists: Hackers or wannabes, usually from Eastern European countries, trying to make a living because their government couldn't give a crap what they do to US citizens in general.

Great list, Gordon! He reports that extortion attacks run at about 1 in 100 of the normal sort, and I'm not sure whether to be happier or more worried.

So what to do about DDOS? Well, the first thing that has to be addressed is the security mantra of "we can't do anything about it, so we'll ignore it." I think there are a few things to do.

Firstly, change the objective of security efforts from "stop it" to "not make it worse." That is, a security protocol, when employed, should work as well as the insecure alternative when under DOS conditions. And thus, the user of the security protocol should not feel the need to drop the security and go for the insecure alternative.

Perhaps better put as: a security protocol should be DOS-neutral.

Connection-oriented security protocols have this drawback - SSL and SSH both add delay to the opening of their secure connection from client to server. Packet-oriented or request-response protocols should not, if they are capable of launching with all the crypto included in one packet. For example, OpenPGP mail sends one mail message, which is exactly the same as if it were a cleartext mail. (It's not even necessarily bigger, as OpenPGP mails are compressed.) OpenPGP is thus DOS-neutral.

This makes sense, as connection-oriented protocols are bad for security and reliability. About the best thing you can do if you are stuck with a connection-oriented architecture is to turn it immediately into a message-based architecture, so as to minimise the cost. And, from a DOS pov, it seems that this would bring about a more level playing field.

A second thing to look at is to be DNS-neutral. This means not being slavishly dependent on DNS to convert one's domain names (like www.financialcryptography.com) into the IP number (like 62.49.250.18). Old-timers will point out that this still leaves one open to an IP-number-based attack, but that's just the escapism we are trying to avoid. Let's close up the holes and look at what's left.
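
As a sketch of what DNS-neutral might look like in code - a minimal Python fragment, where the cache file name and the fall-back policy are my own invention - the idea is simply to remember the last answer DNS gave and fall back to it when resolution fails:

    # Minimal sketch of a DNS-neutral lookup: prefer live DNS, but fall
    # back to a cached last-known-good IP when resolution fails.
    import json, socket

    CACHE_FILE = "known_good_ips.json"   # invented name, for illustration

    def load_cache():
        try:
            with open(CACHE_FILE) as f:
                return json.load(f)
        except (IOError, ValueError):
            return {}

    def resolve(hostname):
        cache = load_cache()
        try:
            ip = socket.gethostbyname(hostname)
            cache[hostname] = ip                 # remember the good answer
            with open(CACHE_FILE, "w") as f:
                json.dump(cache, f)
            return ip
        except socket.gaierror:
            # DNS is down, or lying; use the last answer we trusted
            if hostname in cache:
                return cache[hostname]
            raise

The old-timers' objection still applies - the cached IP can itself be attacked - but at least a DNS outage alone no longer takes you off the air.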

Finally, I suspect there is merit in make-work strategies in order to further level the playing field. Think of hashcash. Here, a client does some terribly boring and long calculation to find a hash matching a required pattern - say, a run of leading zero bits, a partial collision. Because secure message digests are generally random in their appearance, finding one that matches the pattern takes lots of work.

So how do we use this? Well, one way to flood a service is to send in lots of bogus requests. Each of these requests has to be processed fully, and if you are familiar with the way servers are moving, you'll understand the power needed for each request. More crypto, more database requests, that sort of thing. It's easy to flood them with junk requests, which are far cheaper to generate than real ones.

The solution then may be to put guards further out, and insist that the client's work matches the server's, or exceeds it. Under DOS conditions, the guards far out on the net look at packets and return errors if the work factor is not sufficient. The error returned can include the current to-the-minute standard. A valid client with one honest request can spend a minute calculating. A DDOS attack will then have to do much the same - each minute, using the new work factor parameters.
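
To make that concrete, here is a rough sketch of a hashcash-style guard - the work factor and the stamp format are invented for illustration, not taken from any standard:

    # Sketch of a hashcash-style work-factor check. The guard publishes
    # WORK_FACTOR; clients grind until their stamp hashes with that many
    # leading zero bits; the guard verifies with a single cheap hash.
    import hashlib, itertools

    WORK_FACTOR = 20   # leading zero bits required; raise under attack

    def zero_bits(digest):
        n = int.from_bytes(digest, "big")
        return len(digest) * 8 - n.bit_length()

    def mint(request):
        """Client side: expensive search for a qualifying counter."""
        for counter in itertools.count():
            stamp = ("%s:%d" % (request, counter)).encode()
            if zero_bits(hashlib.sha256(stamp).digest()) >= WORK_FACTOR:
                return counter

    def check(request, counter):
        """Guard side: one hash confirms the client did the work."""
        stamp = ("%s:%d" % (request, counter)).encode()
        return zero_bits(hashlib.sha256(stamp).digest()) >= WORK_FACTOR

An attacker can still pay the price, but now a million junk requests cost a million minted stamps, which is the level playing field we were after.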

Will it work? Who knows - until we try it. The reason I think of this is that it is the way many of our systems are already structured - a busy tight center, and a lot of smaller lazy guards out front. Think Akamai, think smart firewalls, think SSL boxes.

The essence though is to start thinking about it. It's time to change the mantra - DOS should be considered in the security equation, if only to the standard of neutrality. Just because we can't stop DOS doesn't make it acceptable to ignore it any longer.

Posted by iang at 06:38 AM | Comments (5) | TrackBack

September 26, 2004

Eavesdropping threats: Listening to chat

In the IHT, Hal Varian reports on an analysis of 1.5 million messages from online trading chat rooms [1]. Two researchers, Werner Antweiler and Murray Frank, used their statistical tools and were able to glean a few clues and predictions for market trades [2].

Not terrifically much, it seems, but that's to be expected if one follows the efficient market hypothesis. However, it's not that far off a scam or two that's been going on. Allegedly, I hear, rumour has it, and all normal caveats....

The place to look is not the public chat rooms, which anyone will tell you are mostly noise, but the private places. The chat community of choice is the traders in the large banks. To get access to their chat, which is much more focused on big and real trading, you either have to be an insider, or very cunning.

The cunning way to do it is to attract the traders to get into your own chatroom and encourage them to think about trading. This can be done by setting up a fantasy trading system. In these systems, you post trades that are nominal or virtual, and encourage traders to get involved and play. It's a bit like fantasy football, if you've seen that. As traders do well on the predictions, they get rewarded, with value that is "outside" their normal salary system in some sense or other (extra points if the rewards can be delivered tax free!).

So the traders are encouraged to reveal the insider secrets of their employer banks in order to score big in the game. Is it that simple? Yup. Traders really don't care about the secrets, but they sure as hell care about scoring big in the games ... and perversely, the banks encourage this behaviour as they go hunting for analysts who've proven themselves in the game world!

Then, of course, the system operator takes all the chat and all the trades and pumps it into their black boxes. Out spews the predictions, the operator passes it on to another sister company who arbitrages the banks, and they make out like bandits.

Here's how the insider does it. Being on the inside, and being able to tap directly into the networks, one can collect all the trader and analyst chat in real time, and pump it into an analytics engine. Then out comes all the predictions ... so far so good.

But, if the insider does all this, they need some infrastructure, and it's hard to do that without getting noticed. Not to mention that if they were able to make a trade, they'd be betting against their own bank!

So one could be excused for thinking this was unlikely. Until the advent of the mutual funds scandal, that is, where we discovered all sorts of egregious behaviour, including mutual funds trading against their own parent manager banks [3]. In that scandal, many mutual funds were operated by major banks or other large institutions. Some of them, it seems, were simply listening to all the insider info, analysing it, and betting against their own organisation.

What's going on here? Well, the mistake of the outsider is two-fold: firstly, to assume that the bank operates with one goal in mind, and secondly to assume that if they bet against their own trades, they are stealing their own money.

It turns out that all the trading they are arbitraging is trades of clients. For which fees are earned, in a competitive framework. So as long as the client doesn't know, nobody cares. And, it turns out that what with Chinese walls and results-oriented management, there is no problem whatsoever with one department betting against another; some banks even do it as a matter of policy.

So, what we have is a fairly aggressive framework where managers of mutual funds are sucking their own trading floors for profits. Where it gets egregious is if they are doing it via insider information - chat, for example. And if they are doing it in such a fashion that the insiders getting paid off keep the scam a secret: encouraging the traders to reveal more and perform worse than they otherwise would.

Of course, we don't know where or if this precise scam is going on. But, all the building blocks are in place: the means, the motive, the weapon. It would be a shock to me if it wasn't a done deal.



Commentary: Internet stock talk may have statistical meaning

Hal R. Varian NYT Friday, September 24, 2004

Online messages presage active trading

Talk is cheap, particularly on the Internet. Stock message boards are a case in point. Every day, participants post tens of thousands of tips about which way various stocks are heading. Is any of this worth reading?

Recently, two financial economists from the University of British Columbia, Werner Antweiler and Murray Frank, examined the message board phenomenon in a paper titled "Is All That Talk Just Noise? The Information Content of Internet Stock Message Boards," published in the June 2004 issue of The Journal of Finance.

They collected more than 1.5 million messages from two online boards, Yahoo Finance and Raging Bull, and analyzed them using methods of computational linguistics and econometrics.

The computational linguistics techniques allowed them to classify messages with respect to whether they advocated buying, holding, or selling the stock in question. Most of the messages were short and direct, allowing the algorithms to do a pretty good job of classification.

Antweiler and Frank then merged the estimated buy, sell or hold signals into one "bullishness measure," which was a slightly modified version of the ratio of buy to sell recommendations.
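
The article doesn't reproduce the formula. One plausible form - and I stress this is my reconstruction, not necessarily the paper's exact definition - is a log-ratio with add-one smoothing, so a day with no messages on one side stays finite:

    import math

    def bullishness(buys, sells):
        # Add-one smoothing keeps the ratio finite when one side is zero;
        # a plausible reading of the "slightly modified" buy/sell ratio.
        return math.log((1 + buys) / (1 + sells))

    print(bullishness(30, 10))   # about 1.04: a bullish day
    print(bullishness(0, 0))     # 0.0: no signal either way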

During the period they examined in their article, January to December 2000, the Internet bubble dissipated. Bullish messages, however, continued to proliferate.

But did the message volume, timing, and sentiment forecast anything useful? The three most interesting features of a stock on a given day are its return, its volume and its volatility.

The authors found that the characteristics of messages helped predict volume and volatility. Perhaps more surprisingly, they also found that the number of messages on one day helped predict stock returns the next day. The degree of predictability, however, was weak and reversed itself the next trading day.

The bullish sentiment of messages was positively associated with contemporaneous returns, but had no predictive power for future returns. Traders post bullish messages about a given stock on days when its price goes up, but it is hard to determine which way the causation runs. Since postings do not predict future returns, it may be that the returns cause the postings.

The story was different with respect to volatility. It appeared that the more messages posted about a stock one day, the higher its volatility was the next day.

Trading volume was also correlated with messages. However, the apparent causation here was somewhat subtle. Message posting appeared to cause volume when the researchers examined daily data. But when they looked at the market at 15-minute intervals, trading volume seemed to cause messages.

One hypothesis consistent with these observations about volume and messages was that people might tend to post messages shortly after buying a stock. Even though there are two sides to every trade, the seller has little incentive to brag about the dog he just unloaded, while the buyer has a strong incentive to recommend that others buy the stock he has just purchased.

The authors conclude that the talk on message boards is not just noise. Though the predictive power for returns is too small to be meaningful, message board activity does seem to help predict volatility and volume.

Varian is a professor of business, economics and information management at the University of California at Berkeley.


[1] Hal R. Varian, "Internet stock talk may have statistical meaning," The New York Times, 24 September 2004.
[2] Werner Antweiler and Murray Frank, "Is All That Talk Just Noise? The Information Content of Internet Stock Message Boards," published in the June 2004 issue of The Journal of Finance.
[3] James Nesfield and Ian Grigg, Mutual Funds and Financial Flaws, testimony to the U.S. Senate's finance subcommittee, 27th January 2004.

Posted by iang at 01:54 PM | Comments (1) | TrackBack

September 22, 2004

WebTrust: "It's about not causing popups..."

WebTrust is the organisation that sets auditing standards for certificate authorities. Its motto, "It's a matter of trust," is of course the marketing message they want you to absorb, and subject to skepticism. How deliciously ironic, then, that when you go to their site and click on Contact, you get redirected to another domain that uses the wrong certificate!

http://www.cpawebtrust.org/ is the immoral re-user of WebTrust's certificate. One presumes that the second domain belongs to the same organisation (the American Institute of Certified Public Accountants, AICPA), but the information in whois doesn't really clear that up, due to conflicts and bad names.

What have WebTrust discovered? That certificates are messy, and are thus costly. This little cert mess is going to cost them a few thousand to sort out, in admin time, sign-off, etc etc. Luckily, they know how to do this, because they're in the business of auditing CAs, but they might also stop to consider that this cost is being asked of millions of small businesses, and this might be why certificate use is so low.

Posted by iang at 07:04 AM | Comments (0) | TrackBack

September 21, 2004

To Kill an Avatar

"To Kill an Avatar", an article by Dan Hunter and F. Gregory Lastowka, explores how online gaming has coped with unexpected behaviour by members - fraud, murder, rape, verbal abuse, hit & run driving all get a mention.

The gaming world bears some relationship to the world of ecommerce, in that the participants are protected from the consequences of their acts. In ecommerce, anonymous fraudsters seek to steal your identity data by tricking you into entering their counterfeit site. In the game world, a character who raped two others in public said he was just experimenting in a game.

One solution for the gaming world is the end-user licence, but enforcing it by direct policing runs into the same problem in the gaming world as it does in digital cash issuance: it's too expensive. The cost of support calls is why we do things as we do, not because we're nice guys or bad guys. If the revenue generated by each average user is $10 over a year, that doesn't leave much for any support, however noble the Issuer, and however aggrieved the user.

I've predicted that dispute resolution will become an outsourced role in the currency world, and in the game world they've taken it further: people have banded together and deputised posses to hunt down the miscreants, execute them and confiscate their property.

Could a digital payments world outsource dispute resolution to the point where dodgy merchants lose their assets under sanction from their peers? Possibly. I'd personally reserve execution for spammers, but there is tantalising merit in confiscation of property. In this sense, it would mirror physical world commerce, where bonds and reserves are posted.

There have been some efforts at associations that seek to bind their members to good behaviour. Predictably, these efforts, like the GDCA, stalled when the perpetrators got too caught up in selling their own economic model, rather than meeting the needs of their members. Like the online gaming world, it seems that every new effort is caught in the net of its own rules, so experimentation moves forward a currency at a time, a world at a time.

Posted by iang at 04:32 AM | Comments (0) | TrackBack

September 18, 2004

The Node is the Threat

Slowly, inch by inch, by hack and by phish, the Internet security community is coming to realise that the Node is the Threat, and the Wire is incidental. The military-inspired security textbooks of the 80s that led to 90s security practice never had much relevance for the net, simply because in military and national security places, the nodes were well guarded. It was the antennas of the NSA and the GRU that those guys were focused on.

Whether you agree with the following judgement or not (and one judge did not), it highlights how the node is where the action is [1]. If you secure the wire and ignore the node, then you've only done a tiny part of the job. Probably still worth doing, but don't hold any illusions as to the overall security benefit, and don't sell any of those illusions to the customer.

[1] See also the Wired article on E-Mail Snooping Ruled Permissible and the court record.


ISPs Can Check E-Mail

By Roy Mark July 1, 2004

Backed by the clear, though perhaps out-of-date, language of the Wiretap Act, e-mail providers have the right to read and copy the inbound e-mail of their clients, a federal appeals court ruled Wednesday. The decision sparked howls of protest from privacy advocates, despite immediate assurances from some of the nation's largest ISPs that they would never engage in such practices.

The U.S. Court of Appeals for the 1st Circuit in Boston voted 2-1 Wednesday to uphold the dismissal of a 2001 indictment against Bradford C. Councilman, the vice president of now-defunct Interloc Inc., a rare and out-of-print books site.

The U.S. district attorney for Massachusetts charged Councilman, who maintained an e-mail service for his clients, with illegal wiretapping for making copies of e-mail sent to his clients from Amazon.com. The district attorney claimed Councilman copied the e-mail to gain a competitive advantage.

But Councilman's attorneys argued his actions were within the legal limits, because he did not intercept the messages in transit. E-mail is routed through servers, where it is stored, often only momentarily. The U.S. District Court for Massachusetts ultimately ruled that counted as storage and dismissed the indictment.

Yesterday's decision echoed that ruling in voting to uphold the indictment dismissal, saying e-mail does not enjoy the same eavesdropping protections as telephone conversations, because it is stored on servers before being routed to recipients.

Samantha Martin of the Massachusetts district attorney's office said, "We are still in the process of reviewing that decision and weighing our options on what steps to take next."

The two-judge majority deferred to the language of the Wiretap Act, noting it prohibits the unauthorized interception and storage of "wire communications," but only makes reference to the interception of "electronic communications."

"The Wiretap Act's purpose was, and continues to be, to protect the privacy of communications. We believe that the language of the statute makes clear that Congress meant to give lesser protection to electronic communications than wire and oral communications," Appeals Court Judge Juan R. Torruella wrote. "Moreover, at this juncture, much of the protection may have been eviscerated by the realities of modern technology."

"We observe, as most courts have, that the language may be out of step with the technological realities of computer crimes," Torruella wrote. "However, it is not the province of this court to graft meaning onto the statute where Congress has spoken plainly."

In his dissenting opinion, Appeals Court Judge Kermit V. Lipez said there is no distinction between the electronic transmission and storage of e-mail.

"All digital transmissions must be stored in RAM or on hard drives while they are being processed by computers during transmission," Lipez wrote. "Every computer that forwards the packets that comprise an e-mail message must store those packets in memory while it reads their addresses, and every digital switch that makes up the telecommunications network through which the packets travel between computers must also store the packets while they are being routed across the network."

Lipez concluded: "Since this type of storage is a fundamental part of the transmission process, attempting to separate all storage from transmission makes no sense."

America Online, EarthLink and Yahoo, three of the country's largest ISPs, all have privacy policies prohibiting them from reading customer e-mail, except in the case of a court order.

"Consistent with Yahoo's terms of service and privacy policy, we do not monitor user communications," said Yahoo spokeswoman Mary Osako said.

Also jumping to the ISP defense was EarthLink spokeswoman Carla Shaw.

"EarthLink has a long history of protecting customer privacy, and that protection extends to e-mail," she said. "We don't retain copies of customer e-mail, and we don't read customer e-mail."

But this does little to alleviate the concerns of folks like Kevin Bankston, an attorney for the online privacy group Electronic Frontier Foundation.

"By interpreting the Wiretap Act's protections very narrowly," Bankston said in a statement, "this court has effectively given Internet communications providers free reign to invade the privacy of their users for any reason and at any time."

Bankston added that the law "has failed to adapt to the realities of Internet communications and must be updated to protect online privacy."

Posted by iang at 08:00 AM | Comments (1) | TrackBack

September 17, 2004

Normal Accident Theory

A long article by Dan Bricklin entitled "Learning From Accidents and a Terrorist Attack" reviews a book about Normal Accident Theory (book entitled "Normal Accidents" by Charles Perrow). Normal Accident Theory encapsulates the design of systems that are tightly coupled and thus highly prone to weird and wonderful failures. Sounds like financial cryptography to me! The article is long, and only recommended for die-hard software engineers who write code that matters, so I'll copy only the extracts on missile early warning failures, below. For the rest, go to the source.

More examples from "Normal Accidents"

An example he gives of independent redundant systems providing operators much information was of the early warning systems for incoming missiles in North America at the time (early 1980's). He describes the false alarms, several every day, most of which are dismissed quickly. When an alarm comes in to a command center, a telephone conference is started with duty officers at other command centers. If it looks serious (as it does every few days), higher level officials are added to the conference call. If it still looks real, then a third level conference is started, including the president of the U.S. (which hadn't happened so far at the time). The false alarms are usually from weather or birds that look to satellites or other sensors like a missile launch. By checking with other sensors that use independent technology or inputs, such as radar, they can see the lack of confirmation. They also look to intelligence of what the Soviets are doing (though the Soviets may be reacting to similar false alarms themselves or to their surveillance of the U.S.).
In one false alarm in November of 1979, many of the monitors reported what looked exactly like a massive Soviet attack. While they were checking it out, ten tactical fighters were sent aloft and U.S. missiles were put on low-level alert. It turned out that a training tape on an auxiliary system found its way into the real system. The alarm was suspected of being false in two minutes, but was certified false after six (preparing a counter strike takes ten minutes for a submarine-launched attack). In another false alarm, test messages had a bit stuck in the 1 position due to a hardware failure, indicating 2 missiles instead of zero. There was no loopback to help detect the error.
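
That last failure is worth dwelling on: a loopback check need be nothing fancy. Read back what was actually sent and compare, something like this sketch (the channel interface is invented for illustration):

    def send_checked(channel, message):
        channel.write(message)
        echoed = channel.read_back()   # independently read what went out
        if echoed != message:
            # a stuck bit shows up here, not as two phantom missiles
            raise IOError("loopback mismatch: hardware fault")
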
Posted by iang at 06:10 AM | Comments (3) | TrackBack

September 16, 2004

CPUs are now a duopoly market

Reports coming out suggest that AMD has outsold Intel in the desktop market [1].

So it's official: CPUs are now in a two-player game. This is only good news for us CPU users; for the last 20 years we've watched and suffered the rise and rise of Intel's 8086 instruction set and their CPUs. It was mostly luck, and marketing: Intel, like Microsoft, got picked by IBM for the PC, and thus was handed the franchise deal of the century. The story goes that Motorola wouldn't lie about their availability date, and their MC68000 was a month too late.

Then, the task became for Intel to hold onto the PC franchise, something any ordinary company can do.

20 years later, Intel's luck ran out. They decided to try a new instruction set for their 64 bit adventure (named Itanium), thinking they could carry the day. It must have been new people at the helm, not seasoned computing vets, as anyone with any experience could tell you that trying a new instruction set in an established market was a dead loss. The path from 1983 to here is littered with the bodies: PowerPC, Sparc, Alpha, ...

In the high end (which is now thought of simply but briefly as the 64 bit end) Itanium is outsold 10 to 1 by AMD's 64 bit ... which just so happens to be the same instruction set as its 32 bit 8086 line offering [2]. By leaving compatibility aside, Intel left themselves wide open to AMD, and to the latter's credit, they took their 32 balls and ran twice as fast.

Sometime about 6 months back Intel realised their mistake and started the posturing to rewind the incompatibility. (Hence the leaks of 64 bit compatibility CPUs.)

But by then it was way too late. Notice how in the above articles, AMD is keeping mum, as is Intel. For the latter, it doesn't want the stock market to realise the rather humungous news that Intel has lost the crown jewels. For the former, there are equally good reasons: AMD's shareholders already know the news, so there's no point in telling them. But, the more it keeps the big shift out of the media, and lets Intel paper up the disaster, the slower Intel is in responding. Only when the company is forced to admit its mistake from top to bottom will it properly turn around and deal with the upstart.

In other words, Intel has no Bill Gates to do the spin-on-a-sixpence trick. So AMD is not trying too hard to let everyone know, and getting on with the real business of selling CPUs against an internally-conflicted competitor. They've cracked the equal position, and now they have a chance of cracking leadership.

Keep quiet and watch [3]!

[1] AMD desktops outsell Intel desktops 54% to 45%
[2] AMD Opteron outsold Intel Itanium by 10X
[3] You can buy this sort of education at expensive B-schools, or get it here for free: AMD Adds Athlon to its Fanless Chips

Posted by iang at 04:06 AM | Comments (1) | TrackBack

September 15, 2004

Paypal fines arbitrageurs

Paypal, the low-value credit card merchant processor masquerading as a digital currency, moved to bring its merchant base further into line with a new policy: fines for those who sell naughty stuff [1] [2]. Which, of course, is defined as the stuff that American regulators are vulnerable to, reflecting the pressure from competitive institutions duly forwarded to the upstart.

This time, it includes a new addition: cross-border pharmaceuticals that bust the US-FDA franchise. Paypal is the new bellwether of creative destruction, although strangely, no complaints by the RIAA as yet.

[1] PayPal to impose fines for breaking bans
[2] PayPal to Fine Gambling, Porn Sites

Posted by iang at 06:13 PM | Comments (3) | TrackBack

September 06, 2004

CPUs going dual core

In the news this week is AMD's display of dual-core CPUs - chips with two processors on them, promised to be available in the middle of next year. Intel are expected to show the same thing next week [1]. AMD showed Opterons with dual cores, and Intel are expected to show Itaniums (which puts Intel behind, as Itaniums haven't really succeeded). Given the hesitation evident in discussions of clock speed improvements, this may signal a shift towards a symmetric future [2].

What this means for the FCer is that we need to assume a future of many smaller processors rather than one big hulking fast one. Which means our state machines need to be more naturally spread across multiple instances. Already covered if you use a threading architecture, or a DB or disk-based state machine architecture, but a nuisance if your state machines are all managed out of one single Unix process.

This isn't as ludicrous as it seems - unless one is actually running on multiprocessor machines already, there are good complexity and performance reasons to steer clear of complicated threading methods.
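
As a minimal sketch of the message-based shape (Python, purely for brevity): each worker process owns its state machines outright, and everything crosses the process boundary as a message, so more cores just means more workers.

    from multiprocessing import Process, Queue

    def worker(inbox, outbox):
        state = {}                          # this worker owns its machines
        for txn_id, event in iter(inbox.get, None):   # None = stop signal
            state[txn_id] = event           # toy state transition
            outbox.put((txn_id, "ack"))

    if __name__ == "__main__":
        inbox, outbox = Queue(), Queue()
        p = Process(target=worker, args=(inbox, outbox))
        p.start()
        inbox.put(("txn-1", "payment-received"))
        print(outbox.get())                 # ('txn-1', 'ack')
        inbox.put(None)
        p.join()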

[1] http://www.eweek.com/article2/0,1759,1642846,00.asp
[2] http://www.informationweek.com/story/showArticle.jhtml?articleID=46800190&tid=5978

Posted by iang at 05:12 AM | Comments (0) | TrackBack

September 05, 2004

Financial Cryptography v. The Enterprise

People often say, you should be using XXX. Where XXX includes, for today's discussion, J2EE and various other alphabet-soup systems, also known hilariously as "solutions". (I kid you not, there is a Java Alphabet Soup!) I've been working on one area of this jungle for the last many months, for a website frontend to payments systems - mostly because when it comes to websites, simple solutions don't cut it any more.

I came across this one post by Cameron Purdy which struck me as a very clear example of why FC cannot use stuff like J2EE application servers for the backends [1]. The problem is simple. What happens when it crashes? As everything of importance is about transactional thinking, the wise FCer (and there are many out there) builds only for one circumstance: the crash.

Why do we care so much? It's straight economics. Every transaction has the potential to go horribly wrong. Yet, almost all transactions will earn about a penny if they go right. This means that the only economic equation of import in this world is: how many support calls per 1000 transactions, and how much revenue per 1000 transactions? If the answer to the first question is anything different to zero, worry. The worst part of the worry is that it will all seem ok until well after you think you are successful... You won't see it coming, and neither did a half dozen examples I can think of right now!

So every transaction has to be perfect and flawless. To do that, you have to be able to answer the question, what happens if it crashes? And the answer has to be, restart and it carries on. You lose time, but you never have to manually bury a transaction.
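
The discipline can be sketched in a few lines - file name and record layout invented for illustration: record the intent durably before doing the work, make every step idempotent, and on restart replay the log.

    import json, os

    LOG = "txn.log"

    def log_intent(txn):
        # Durable *before* the work: a crash at any later point leaves
        # enough on disk to finish the transaction or discard it.
        with open(LOG, "a") as f:
            f.write(json.dumps(txn) + "\n")
            f.flush()
            os.fsync(f.fileno())

    def recover(apply):
        # On restart, replay everything; 'apply' must be idempotent so
        # that re-running an already-completed step changes nothing.
        if os.path.exists(LOG):
            with open(LOG) as f:
                for line in f:
                    apply(json.loads(line))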

And here's the rub: as soon as you pull in one of these enterprise tools (see below for a definition of enterprise!) you can't answer the question. For JBoss, the open source application server under question below, it's "maybe." For all those big-selling solutions like Oracle, IBM, SAP etc etc, the answer is: "Oh, yes, we can do that" - and how they do it is what Cameron's post describes. Very briefly: it's still a maybe, and it's more expensive [2].

Assuming that you can do all that, and you really do know the issues to address, and you pick the right solution, then the cost to take a fully capable transaction system, and show that it is "right" is probably about the same cost as writing the darn thing from scratch, and making it right. The only difference is that the first solution is more expensive.

That's all IMHO - but I've been there and done that about a half-dozen times now, and I know others who've made the same soup out of the same staple ingredients. Unfortunately, the whole software marketplace has no time for anything but an expensive tin of jumbled letters, as you can't sell something that the customer does himself.

[1] What is the recovery log for in a J2EE engine? Cameron Purdy answers: Recovery Log.
[2] Scroll down a bit to message 136589 if you're unsure of this. Ignore the brand and keep the question firmly in mind.


Recovery log

Posted By: Cameron Purdy on September 01, 2004 @ 08:58 AM in response to Message #136449

Does it have a recovery log now? The ability to survive a crash is important.

??? I thought recovery logs were a feature of DB engines. I've never seen that in a J2EE application server. Did you mean a JTA transactions recovery service, used to automatically switch the started JTA transactions to another server in the cluster in case of failure?

It is for transactions that contain more than one transactional resource, or "Resource Manager" (RM) in OTS parlance. For example, let's say I have MQ Series and an Oracle database, and one of my transactions processes a message from MQ Series and does updates to Oracle. If I fail to commit, then I don't want my partial changes being made to Oracle, which is what a simple JDBC transaction does for me. Even more importantly, if I fail to commit, then I want someone else to get the message from MQ Series, as if I had never seen it.

This is called "recoverable two-phase commit with multiple resource managers," and implies that a recoverable transaction manager log the various steps that it goes through when it commits a transaction. In this case, the "transaction" is a virtual construct with two "branches" - one going to an Oracle transaction and one going to MQ Series. Let's consider what can happen:

1) Something screws up and we roll back the transaction. In this case both branches are rolled back, and everyone is happy. Since we never tried to commit (i.e., we never "prepared" the transactions), each of the branches would know to automatically roll back if a certain time period passed without notification. This is called "assumed rollback before prepare."

2) Nothing screws up and we commit the transaction. This commit in turn prepares both branches, and if both succeed, then it does a commit on both branches. Everyone is happy.

3) The problem areas exist only once the first branch is requested to prepare, until all branches have been either committed or rolled back. For example, if both branches are prepared, but then the server calling "prepare" and "commit" dies before it gets both "commit" commands out. In this case, the transaction is left hanging and has to be either manually "rolled forward" or "rolled back," or the transaction log needs to be recovered.

This "transaction log" thingie is basically a record of all the potential problem points (prepares, commits, post-prepare rollbacks) that the server managing the "virtual transaction" has encountered. The problem is that when the server restarts and reads its log, it doesn't have the old JDBC connection object that it was using to manage the Oracle transaction, and it doesn't have whatever JMS (etc.) objects that it was using to manage MQ Series. So now it has to somehow contact Oracle and MQ Series and figure out "what's up." The way that it keeps track of the transactions that it no longer has "references" to is to create identifiers for the transactions and each branch of the transaction. These are "transaction IDs", or "XIDs" since "X" is often used to abreviate "transaction." These XIDs are logged in a special kind of file (called a transaction log) that is supposed to be safely flushed to disk at each stage (note that I gloss over the details because entire books are written on this tiny subject) so that when the server comes back up, it is sure to be able to find out everything important that happened before it died.

Now, getting back to the JBoss question, it used to have a non-recoverable implementation, meaning that if the server died during 2PC processing, those transactions would not be recoverable by JBoss. I haven't looked lately, so it could have been fixed already .. there are several open source projects that they could have glued in to resolve it with minimal effort. (JBoss is 90% "other projects" anyway .. which is one of the benefits of being able to make new open source stuff out of existing open source stuff.)

As for whether it is an important feature: it is largely irrelevant to most "J2EE" applications, since they have exactly one transactional resource -- the database. However, "enterprise" applications often have more than one RM, and in fact that is what qualifies the applications as being "enterprise" -- the fact that they have to glue together lots of ugly crap from previous "enterprise" applications that in turn glued together ugly crap from previous generations and so-on. (For some reason, some people think that "enterprise applications" are just apps that have lots of users. If that were the case, Yahoo! would be an "enterprise application." ;-)

The funny thing about this particular article is that it's written by a guy who gets paid to sell you open source solutions, so he's writing an article that says it's OK for you to pay him to sell you on JBoss. That doesn't mean that he's right or wrong, it just means that the article is about as objective as a marketing user story .. but at least it was submitted to TSS by the US Chamber of Commerce. ;-)

To answer the question, is JBoss ready for enterprise deployment .. that's a tough one. I know what Bill Burke would say, and if you had Bill Burke working for you full time, then you would probably be comfortable using JBoss for some enterprise deployments. I think that the best way to ascertain the applicability of JBoss (or any product) to a particular problem is to find users who were braver than you and already tried it to solve a similar problem. Further, it's more than just asking "Does it work?" but also finding out "How does it react when things go wrong?" For example, without a recovery log, JBoss will work fine with 2PC transactions .. until a JBoss server crashes. How does it react when you reboot the server? How do you deal with heuristically mixed transactional outcomes? Is it a manual process? Do you know how to resolve the problem? How much does it cost to get support from JBoss to answer the questions?

JBoss is fine for 90% of "J2EE" applications. In fact, it's probably overkill for the 75% of those that should just use Caucho Resin. ;-) The question remains, is it fine for "enterprise deployments." I'm not convinced, but it's only a matter of time until it (or Apache Geronimo) gets there.

Peace,

Cameron Purdy
Tangosol, Inc.
Coherence: Shared Memories for J2EE Clusters

Posted by iang at 07:04 AM | Comments (4) | TrackBack

September 03, 2004

DNS spoofing - spoke too soon?

Just the other day, in discussing VeriSign's conflict of interest, I noted the absence of actual theft-inspired attacks on DNS. I spoke too soon - The Register now reports that the German eBay site was captured via DNS spoofing.

What makes this unusual is that DNS spoofing is not really a useful attack for professional thieves. The reason for this is cost: attacking the DNS roots and causing domains to switch across is technically easy, but it also brings the wrath of many BOFHs down on the heads of the thieves. This doesn't mean they'll be caught but it sure raises the odds.

In contrast, if a mail header is spoofed, who's gonna care? The user is too busy being a victim, and the bank is too busy dealing with support calls and trying to skip out on liability. The spam mail could have come from anywhere, and in many cases did. It's just not reasonable for the victims to go after the spoofers in this case.

It will be interesting to see who it is. One thing could be read from this attack - phishers are getting more brazen. Whether that means they are increasingly secure in their crime or whether the field is being crowded out by wannabe crooks remains to be seen.


Addendum 20040918: The Register reports that the eBay domain hijacker was arrested and admitted to doing the DNS spoof. Reason:

"The 19 year-old says he didn't intend to do any harm and that it was 'just for fun'. He didn't believe the ploy was possible.

So, back to the status quo we go, and DNS spoofing is not a theft-inspired attack. In celebration of the false alert to a potential change in the threat model, I've added a '?' to the title of this blog entry.

Posted by iang at 01:15 PM | Comments (0) | TrackBack

Sarbanes-Oxley - what the insiders already know

Sarbanes-Oxley is the act that lays down the law in financial reporting. It's causing a huge shakeup in compliance. On the face of it, better rules and more penalties should be good, but that's not the case here. Unfortunately, the original scams that brought about Sarbanes-Oxley, and its Basel-II cousin, were based on complexity - hiding stolen money in plain sight. The more complex things get, the more scope there is to hide one's stolen millions.

Insiders already know this. The worry-warts have pointed it out, and been ignored. Others are silently waiting, rubbing their hands in glee at the prospects to be opened up.

Here's another twist. In an article on how complexity and penalties will lead to more cover-ups and more rot, Paul Murphy points out that there's now an easy way to get the CFO fired - simply futz with the server and push the results around. The hapless CFO has only two choices: cover up, or fall on his sword.

Of course, this is possible, unless one is using the strong accounting techniques of financial cryptography ... so if you do find yourself employing this rapid promotion strategy, make sure you fix it before it's done to you!



INDUSTRY ANALYSIS:
Sarbanes-Oxley: More Cause Than Cure?

By Paul Murphy LinuxInsider 07/29/04 6:00 AM PT

From a social perspective, legal consequences tend to be associated with being caught, not with committing the action and Sarbanes-Oxley may therefore "incent" more cover-ups than compliance. From a technical perspective, little can be done without fully integrating production and reporting -- something that can't be done in any practical way with Wintel's client-server architecture.

At a working lunch last week I had the misfortune of being seated next to some guy from Boston whining about the misery and risk introduced into his life by Sarbanes-Oxley. I kept wanting to ask him what he thought his job was as a CFO, since all Sarbanes-Oxley really does is establish a basis for legal penalties against financial executives who dishonor the job description by failing to understand, apply and maintain adequate internal financial controls.

I didn't. In the end I told him he could always get his CIO fired rather than take the heat himself because I've never seen a company in which the CFO didn't outrank the CIO. Now, in reality, that doesn't have anything to do with the central issues raised by Sarbanes-Oxley but the idea certainly seemed to cheer him up.

Sarbanes-Oxley provides the classic legislative response to a perceived abuse: legally defining responsibilities and setting forth penalties for failures to meet them. In doing that, however, it fails to address the underlying issue, which isn't why a few people lied, cheated and stole, but why a much larger number of people let them get away with it for so long.

Remember, few of what we now clearly see as abuses were secret: Enron's CFO won major financial management awards for what he was doing, most Wall Street players used personal IPO allocations to buy customer executives, and dozens of analysts wrote about the obvious mismatch between real revenues and the financial statements underlying market valuations at companies like Global Crossing and MCI/WorldCom.

Wider Context

Look at this in the wider context of overall financial market management and this becomes a chicken and egg type question. It's clear that the financial market failed to self-correct with the majority of the people involved closing both eyes to abuses while deriding or ignoring those who tried to uphold previously normal standards of personal and professional integrity.

But what made that mob response possible? Were financial market systems failures induced for personal gain, or did the players involved slide down the slippery slope to corruption because the checks and balances built into the system failed? How was it possible for some brokers to brag to literally hundreds of their colleagues about their actions without having those colleagues drum them out of the business?

My personal opinion is that a fish rots from the head down. In this case, that the Clintons' sleazy example in the White House combined with easy money from the dot dummies to create an atmosphere of greed and accommodation in which it became easy for otherwise responsible people to rationalize their own abdication of professional responsibilities in favor of personal advantage.

Bottom Line

Whether that's true or not, the bottom line on Sarbanes-Oxley is that it doesn't address the major public market abuses but is likely to have some serious, although counter-intuitive, consequences.

In establishing penalties ranging from fines to jail time and the public humiliation of the perp walk, Sarbanes-Oxley creates both incentives to cover up failures and opportunities for those with axes to grind, people to hurt, or shares to short.

The cover-up side of this is obvious. Imagine a CFO, popular with the other executives and the board, who discovers that the financial statements have been substantially misstated for some time. In this situation the threat posed both to the individual and the organization by Sarbanes-Oxley could easily tip a decision toward covering up, either through the intentional continuation of the erroneous reporting or through some longer run corrective process.

The incentives to attack have to be coupled with opportunities to mean anything. That's less obvious, but I admit I enjoyed my lunch rather more after imagining how little access to my tormentor's financial server would really be needed to send him all undeservedly to jail.

The key enabler here, besides inside access of the kind you get by infiltration, is the separation of financial reporting from production transactions. In his case, the financial statements are drawn from a data mart that gets its input at second hand from a bunch of divisional financial systems.

Faking business transactions is difficult and risky because there are lots of real-world correlates and you have to fake or modify a lot of them to have a material impact. That's not true, however, where the financial statements are drawn from a data warehouse disconnected from the actual transactions underlying the numbers.

Installing a Stored Procedure

In this situation, the external referents are difficult to track and all an attacker has to do is install a stored procedure that transfers small amounts from one of the imaginary accounts -- say, goodwill amortization -- to another every time one of the bulk updates runs.

Over time, this will have an effect like that of the butterfly flapping its wings in China to cause storms in California, slowly and invisibly undermining the integrity of the financial reports.

Eventually, of course, some external event will trigger an investigation. Then he's toast, and no amount of pointing at internal controls and auditors, public or otherwise, will make any difference. The system will have been turned on itself with the books balancing perfectly and all checks checking, even while the published profit and loss numbers have been getting "wronger" by the quarter.

Once that's discovered, the company's executive will face a choice -- cover-up or mea culpa -- and either way Sarbanes-Oxley's threat of legal process will be the biggest scarecrow on the playing field.

Integrity Guarantees

From a social perspective, legal consequences tend to be associated with being caught, not with committing the action. Sarbanes-Oxley might therefore "incent" more cover-ups than compliance. From a technical perspective, little can be done without fully integrating production and reporting -- something that can't be done in any practical way with Wintel's client-server architecture.

I'm really looking forward to the case law on this. After all, if a porn user can't be held responsible because Wintel's vulnerabilities mean that anyone could have put the incriminating materials on his PC, shouldn't a CFO with bad numbers have access to the same defense?

More interestingly, what happens when a prosecutor with a sense of irony puts some Microsoft (Nasdaq: MSFT) experts on the stand to testify against a CFO (or porn user) who tries this defense but doesn't have Wintel installed?

All joking aside, however, the real bottom line on Sarbanes-Oxley might well turn out to be that it weakens rather than strengthens integrity guarantees in public accounting by tilting judgment decisions toward cover-ups in the short term and may threaten Microsoft's client-server architecture in the long term.

Paul Murphy, a LinuxInsider columnist, wrote and published The Unix Guide to Defenestration. Murphy is a 20-year veteran of the IT consulting industry, specializing in Unix and Unix-related management issues.

Posted by iang at 05:34 AM | Comments (2) | TrackBack

September 01, 2004

VeriSign's conflict of interest creates new threat

There's a big debate going on in the US and Canada about who is going to pay for Internet wire tapping. In case you hadn't been keeping up, Internet wire-tapping *is* coming. The inevitability of it is underscored by the last-ditch efforts of the ISPs to refer to older Supreme Court rulings that the cost should be picked up by those requiring the wire tap. I.e., it's established in US law that the cops should pay for each wiretap [1].

I got twigged to a new issue by an article [2] that said:

"To make wiretapping possible, Internet phone companies have to buy equipment and software as well as hire technicians, or contract with VeriSign or one of its competitors. The costs could run into the millions of dollars, depending on the size of the Internet phone company and the number of government requests."

What caught me by surprise was the mention of Verisign. So I looked, and it seems they *are indeed* in the business of subpoena compliance [3]. I know most won't believe me, given their public image as a trusted ecommerce player, so here's the full page:

NetDiscovery Service for CALEA Compliance

Complete Lawful Intercept Service

VeriSign's NetDiscovery service provides telecom network operators, cable operators, and Internet service providers with a streamlined service to help meet requirements for assisting government agencies with lawful interception and subpoena requests for subscriber records. NetDiscovery is the premier turnkey service for provisioning, access, delivery, and collection of call information from operators to law enforcement agencies (LEAs).

Reduce Operating Expenses

Compliance also requires companies to maintain extensive records and respond to government requests for information. The NetDiscovery service converts content into required formats and delivers the data directly to LEA facilities. Streamlined administrative services handle the provisioning of lawful interception services and manage system upgrades.

One Connection to LEAs

Compliance may require substantial capital investment in network elements and security to support multiple intercepts and numerous law enforcement agencies (LEAs). One connection to VeriSign provides provisioning, access, and delivery of call information from carriers to LEAs.

Industry Expertise for Continued Compliance

VeriSign works with government agencies and LEAs to stay up-to-date with applicable requirements. NetDiscovery customers benefit from quick implementation and consistent compliance through a single provider.

CALEA is the name of the bill that mandates law enforcement agency (LEA) access to telcos - each access should carry a cost. The cops don't want to pay for it, and neither do the suppliers. Not to mention, nobody really wants to do this. So in steps VeriSign with a managed service to handle wiretaps, eavesdropping, and other compliance tasks as directed under subpoena. On first blush, very convenient!

Here's where the reality meter goes into overdrive. VeriSign is also the company that sells about half of the net's SSL certificates for "secure ecommerce" [4]. These SSL certificates are what presumptively protect connections between consumers and merchants. It is claimed that a certificate that is signed by a certificate authority (CA) can protect against the man-in-the-middle (MITM) attack and also domain name spoofing. In security reality, this is arguable - they haven't done much of a job against phishing so far, and their protection against some other MITMs is somewhere between academic and theoretical [5].

A further irony is that VeriSign also runs the domain name system for the .com and the .net domains. So, indeed, they do have a hand in the business of domain name spoofing; the trivial ease of mounting this attack has in many ways influenced the net's security architecture by raising domain spoofing to something that has to be protected against [6]. But so far nothing much serious has come of that [7].

But getting back to the topic of the MITM protection afforded by those expensive VeriSign certificates. The point here is that, on the one hand, VeriSign is offering protection from snooping, and on the other hand, is offering to facilitate the process of snooping.

The fox guarding the chicken coop?

Nobody can deny the synergies that come from the engineering aspects of such a mix: we engineers have to know how to attack a system in order to defend it. This is partly the origin of the term "hacker," being one who cracks into machines ... so he can learn to defend them.

But there are no such synergies in governance, nor I fear in marketing. Can you say "conflict of interest?" What is one to make of a company that on the one hand offers you a "trustworthy" protection against attack, and on the other hand offers a service to a most likely attacker [8]?

Marketing types, SSL security apologists and other friends of VeriSign will all leap to its defence here and say that no such thing is possible. Or, even if it were, that there are safeguards. Hold on to that thought for a moment, and let's walk through it.

How to MITM the CA-signed Cert, in one easy lesson

Discussions on the cryptography list recently brought up the rather stunning observation that a Certificate Authority (CA) can always issue a forged certificate, and there is no way to stop it. Most attack models on the CA had assumed an external threat; few considered the insider threat. And fair enough - why would the CA want to issue a bogus cert?

In fact, the whole point of the PKI exercise was that the CA is trusted. All of the assumptions within secure browsing point to the need for a trusted third party to intermediate between the two participants (consumer and merchant), so the CA was by definition designed to be that trusted party.

Until we get to VeriSign's compliance division, that is. Here, VeriSign's role is to facilitate the "provisioning of lawful interception services" for its customers, ISPs amongst them [9]. Such services might be invoked by a subpoena to listen to the traffic of some poor Alice, even if said traffic is encrypted.

Now, we know that VeriSign can issue a certificate for any one of its customers. So if Alice is protected by a VeriSign cert, it is an easy technical matter for VeriSign, pursuant to a subpoena or other court order, to issue a new cert that allows it to man-in-the-middle the naive and trusting Alice [10].
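To make the mechanics concrete, here is a minimal sketch, using the Python cryptography library, of what any holder of a CA signing key could do. Every name in it (alice-webmail.example, the key variables) is hypothetical; this illustrates the capability, not anyone's actual process.

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    # The CA's signing key - the insider already holds this.
    ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Trusted Root CA")])

    # A fresh keypair for the interception box - not Alice's real key.
    mitm_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    now = datetime.datetime.utcnow()
    forged = (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "alice-webmail.example")]))
        .issuer_name(ca_name)               # issued under the trusted CA's own name
        .public_key(mitm_key.public_key())  # the eavesdropper's key, not Alice's
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .sign(ca_key, hashes.SHA256())      # chains straight to the trusted root
    )
    # A browser that trusts the CA's root accepts this cert for Alice's site;
    # the wiretap proxy terminates SSL with it and reads the plaintext.

Nothing on the wire distinguishes this cert from the one Alice bought.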

It gets better, or worse, depending on your point of view. Due to a bug in the PKI (the X.509-based public key infrastructure that manages keys for SSL), all CAs are equally trusted. That is, there is no firewall between one certificate authority and another, so VeriSign can issue a cert to MITM *any* other CA-issued cert, and every browser will accept it without saying boo [11].
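A sketch of the browser's acceptance logic shows why. In pseudo-Python (verifies_signature_of is a hypothetical helper), the check asks only whether the chain ends at *some* root in the trust store and whether the leaf names the site being visited; no root is tied to any particular domain.

    def browser_accepts(cert_chain, hostname, trust_store):
        # Simplified model of SSL cert validation: leaf first, root last.
        leaf, root = cert_chain[0], cert_chain[-1]

        # 1. Any root in the store will do - no firewall between CAs.
        if root not in trust_store:
            return False

        # 2. Each cert must be signed by its parent in the chain.
        for child, parent in zip(cert_chain, cert_chain[1:]):
            if not parent.verifies_signature_of(child):  # hypothetical helper
                return False

        # 3. The leaf merely has to name the site being visited.
        return leaf.common_name == hostname

Because step 1 accepts any of the built-in roots, a cert minted under VeriSign's root passes for a site whose real cert came from any other CA.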

Technically, VeriSign has the skills, it has the root certificate, and now it is in the right place. MITM never got any easier [12]. Conceivably, under orders from the court, VeriSign would now be willing to conduct an MITM against its own customers and its own certs, in every place where it has a contract for LEA compliance.

Governance? What Governance?

All that remains is the question of whether VeriSign would do such a thing. The answer is almost certainly yes. Normally, one would say that the user's contract, the code of practice, and the WebTrust audit would prevent such a thing. After all, that was the point of all the governance and contracts and signing laws that VeriSign wrote back in the mid-90s - to make the CA into a trusted third party.

But a court order trumps all that. Judges strike down contract clauses, and in the English common law and the UCC, which is presumably what governs VeriSign, a judge can strike out clauses in the law or even strike down an entire law.

Further, the normal way to protect against overzealous insiders or conflicts of interest is to split the parties: one company issues the certs, and another breaches them. Clearly, the first company works for its clients and has a vested interest in protecting them. Such a CA will go to the judge and argue against a cert being breached, if it wants to keep selling its wares [13].

Yet, in VeriSign's case, it's also the agent for the ISP / telco - and they are the ones who get it in the neck. They are paying a darn sight more money to VeriSign to make this subpoena thing go away than Alice ever paid for her cert. So it comes down to "big ISP compliance contract" versus "one tiny little cert for a dirtbag who's probably a terrorist."

The subpoena wins every time, well assisted by economics. If the company is so ordered, it will comply, because it is its stated goal and mission to comply, and it is paid more to comply than not to comply.

All that's left, then, is to trust in the fairness of the American judicial system. Surely such a fight of conscience would be publicly aired in the courts? Nope. All parties except the victim are agreed on the need to keep the interception secret. VeriSign is protected in its conflict of interest by the judge's order of silence on the parties. And if you've been following the news about PATRIOT 1 and 2, National Security Letters, watchlists, no-fly lists, suspension of habeas corpus, the Plame affair, the JTTF's political investigations and all the rest, you'll agree there isn't much hope there.

What are we to do about it?

So, what's VeriSign doing issuing certs? What's it doing claiming that users can trust it? And, more apropos, do we care?

It's pretty clear that all three of the functions mentioned today - selling certs, running the DNS roots, and lawful intercept compliance - are real functions in the Internet marketplace. They will continue, regardless of our personal distaste. It's just as clear that a world of Internet wire-tapping is a reality.

The real conflict of interest here is in a seller of certs also being a prime contractor for easy breaches of those certs. As it's the same company, and as both functions are free market functions, this is strictly an issue for the market to resolve. If conflict of interest means anything to you, and you require your certs to be issued by a party you can trust, then buy from a supplier that doesn't also work with LEAs under contract.

At least then, when the subpoena hits, your cert signer will be working for you, and you alone, and may help by fighting the subpoena. That's what is meant by "conflict of interest."

I certainly wouldn't recommend that we cry for the government to fix this. If you look at the history of these players, you can make a pretty fair case that government intervention is what got us here in the first place. So, no rulings from the Department of Commerce or the FCC, please, no antitrust lawsuits, and definitely no Star Chamber hearings!

Yet, there are things that can be done. One thing falls under the rubric of regulation: ICANN controls the top level domain names, including .net and .com which are currently contracted to VeriSign. At least, ICANN claims titular control, and it fights against VeriSign, the Department of Commerce, various other big players, and a squillion lobbyists in exercising that control [14].

It would seem that if conflict of interest counts for anything, removing the root server contracts from VeriSign would indicate displeasure at such a breach of confidence. Technically, this makes sense: since when did we expect DNS to be anything but a straightforward service to convert domain names into numbers? The notion that the company now has a vested interest in DNS spoofing raises a can of worms that I suspect even ICANN didn't expect. Being paid to spoof doesn't seem like it would be on the list of suitable synergies for a manager of root servers.

Alternatively, VeriSign could voluntarily divest one or the other of the snooping / anti-snooping businesses. The anti-snooping business would then be a natural candidate to run the DNS roots, reflecting its alignment of interests with its clients.


Addendum: 2nd February 2005. Adam Shostack and Ian Grigg have written to ICANN to stress the dangers of conflict of interest in the selection of the new .net TLD.

[1] This only makes sense. If the cops didn't pay, they'd have no brake on their activity, and they would abuse the privilege extended by the law and the courts.

[2] Ken Belson, Wiretapping on the Net: Who pays? New York Times, http://www.iht.com/articles/535224.htm

[3] VeriSign's pages on CALEA Compliance and also Regulatory Compliance.

[4] Check the great statistics over at SecuritySpace.com.

[5] In brief, I know of these MITMs: phishing, click-thru-syndrome, CA-substitution. The last has never been exploited, to my knowledge, as most attacks bypass certificates, and attack the secure browsing system at the browser without presenting an SSL certificate.

[6] D. Atkins, R. Austein, "Threat Analysis of the Domain Name System (DNS)", RFC 3833.

[7] There was the famous demonstration by some guy trying to get into the DNS business.

[8] Most likely? 'fraid so. The MITM is extraordinarily rare - so rare that it is unmeasurable and, to all intents and purposes, not a practical threat. But, as we shall see, this raises the prospect of a real threat.

[9] VeriSign, op cit.

[10] I'm skipping here the details of who Alice is, etc., as they are not relevant. For the sake of the exercise, consider a secure web mail interface that is hosted in another country.

[11] Is the all-CAs-are-equal bug written up anywhere?

[12] There is an important point which I'm skipping here, that the MITM is way too hard under ordinary Internet circumstances to be a threat. For more on that, see Who's afraid of Mallory Wolf?.

[13] This is what is happening in the cases of RIAA versus the ISPs.

[14] Just this week: VeriSign to fight on after ICANN suit dismissed
U.S. Federal District Court Dismisses VeriSign's Anti-Trust Claim Against ICANN with Prejudice and the Ruling from the Court.
Today: VeriSign suing ICANN again

Posted by iang at 06:20 AM | Comments (5) | TrackBack