June 27, 2006

on Leadership - tech teams and the RTFM factor

Leadership is not measured by titles, or by pay, or by department sizes. Nor is it measured by columns of ink in a Sunday rag. Quite the reverse, really -- when some journo starts talking about leadership, it's generally a proxy for the company's stock price, the PR budget, and the journalist's ignorance or lack of inspiration.

To get to today's point we have to build up a picture of structure - what it means to be part of a team. Let's talk first about the people who are team members -- the vast majority -- even though we are writing to leaders today, whatever they are.

Consider Alice, a new, junior and very inexperienced programmer. She attempts to establish territory in an area, her first area of deep knowledge. This is a natural desire, as it gives her a foothold from which she can start to contribute.

In a complex engineering field like the Internet, we have lots of different territories: systems programmers (where I hail from), applications programmers, graphical programmers, graphical artists, ... etc etc. Generally, a programmer dives into one of these areas and tries to make it their profession - their specialty. But in doing that, they inevitably find that this comes at the cost of the other areas.

I for example know lots about systems programming, but nothing about graphical programming. Indeed, it's worse than that, because within systems programming there are dozens of areas (crypto, protocols, device drivers, OS, client-server, database, accounting, languages, real-time, capabilities, ...), and I imagine the same is true of graphical programming, if I ever bothered to ask.

How then does our new programmer thrive and learn in her chosen specialty? Traditionally she does it by herself. Alice learns. Swots, reads, surfs, codes up some stuff, etc etc.

What she doesn't do is ask someone to teach her. Bob, for example, who may be the world's expert in GUI programming, may be sitting next to her with all the knowledge she needs. But if Alice asks "how do I create a widget?" she'll be told *RTFM* -- read the f&&king manual. The best Alice can do is talk to Dave, who happens to be at her level, and in her specialty. Dave also doesn't know how to create a widget, but at least they can enjoy their conversation about it.

The reason for this is perverse: in order for Alice to deserve that help from Bob, she has to prove that she is capable of using and appreciating the information. There's no point in passing on knowledge to someone who isn't capable of using it! But the only way for Alice to show she can acquire that knowledge is to do it -- to acquire the knowledge.

This chicken-and-egg situation explains why computing people are so apparently antagonistic to each other when they arrive from different levels of knowledge. By asking for something you should be able to get yourself, you are revealing yourself as incapable of figuring it out. In contrast, when you know, and someone asks you some small informational question, you are bound by the arcane unwritten rules of the league of extraordinary programmers to tell that person to RTFM. It's an unfortunate truth, an economic hard reality, that self-learning is more effective than teaching in the computing world.

So we have a communication issue up and down the levels. Communication between peers at the same level is good, and I guess it has to be otherwise we'd all die of loneliness. But now couple this situation to the above - the diversity of specialties. (We are getting closer to the point now.)

When I come in with all the knowledge in some specialty (today I should be working on datagram security protocols) and talk to some other expert, Carol, in some other field (say, GUIs for email clients), I have a problem. My brain is steeped, literally soaked, in RTFM juice. So when Carol asks how to do a cryptoblahblah operation, my kneejerk reaction is to tell her to RTFM. And, even if I can see that she is in the wrong specialty, and she can't possibly be expected to know or acquire this information, *and* I manage to quell the quivering knees by tying myself in knots of politeness, I still can't help her by explaining it to her, because ... I don't know how!

As a specialist I've spent my entire life reading, learning, coding, doing great things, but all of this has been done by myself. I've never had to teach anything to anyone, and so I simply don't know how to do it. Carol of course is exactly the same, except she's more colourful at it.

So now we see how complex heterogeneous teams have a real problem that homogeneous teams do not have -- and hence a real need for leadership. The more diverse the people are, the more that they lack a common language.

Imagine it like silos -- language frequently used in the financial world. In an OS project like Linux, the teams are all quite homogeneous. All the people in the kernel team are kernel hackers, pretty much, and the Unix kernel was famously designed to be comprehensible by one talented person (the late great Professor John Lions made the point that 10k lines of code, which was what the v6 kernel was, could just about fit in a person's brain).

But over in the heterogeneous world of say Firefox, we likely have lots of silos: graphics, layout, protocol, systems interfaces, plugins, crypto, certs, human languages, computer languages ... and these are all completely distinct specialties. Probably every area alone exceeds John Lions' 10k brain limit.

Now look at a leader's problem. You have a lot of people who know a lot of stuff, are convinced that they know a lot of good stuff, yet they know very little of each other's areas, and what's worse, they lack the ability or skills to communicate. Even if they wanted to, they cannot. A nightmare!

A leader in this field then needs exceptional communication skills, to make up for the lack within the team. She has to bring together two or more experts who are both desperately trying to yell "RTFM!" at each other as loudly as possible. Unto the breach she goes, and like as not, they'll both turn and attack her, because she reveals that she doesn't know enough about *either* specialty.

So it is small wonder that these teams are hard, and that there are very few leaders. Who'd want the job? It's the worst of all jobs: you get RTFM'd by all your team members, when they are not RTFMing each other, and because you don't get seen as a specialist, you can't even stand on your own turf and defend that. It's lose-lose-lose all the way, and in a world where the star is the loudest, the brightest, the most colourful, you also end up getting the lowest pay.

It's worse than that even, in the open source world, as we haven't even got the facade of professionalism and pay to smooth over the gruesome truth. But we've got to the essential point - leadership in the heterogeneous team situation is hugely about communication. Massively. And, we are not talking "good communication skills," we are talking exceptional, gifted skills. These are lion-tamer grade, capable of making hungry carnivores purr with pleasure, quelling riots with a look, redirecting warring armies into cooperative ventures.

So, you ask, how then do we acquire these skills? RTFM! The only problem is, the manual for these skills hasn't been written. I am (ahem) writing it and will no doubt get around to it some day, but as you know, I am congenitally impaired when it comes to explaining these things. I've never had to, you see.

Posted by iang at 05:01 PM | Comments (5) | TrackBack

It's official! SSH whips HTTPS butt! (in small minor test of no import....)

Finally, some figures! We've known for a decade that the SSH model consumes all in its path. What we haven't known is the relative quantities. Seen somewhere on the net, this week's report breaks down encrypted traffic: 3.42% in SSH form, 1.11% in HTTPS form, by volume. By number of packets, it is 3.51% and 1.67% respectively.

Protocol     % Volume   Volume    % Packets   Packets
SSH            3.42%    17.45T      3.51%     20.98G
HTTPS          1.11%    5.677T      1.67%     10.00G
IPsec ESP      0.14%    0.697T      0.21%     1.211G
IPsec AH       0.01%    0.054G      0.01%     0.089G
IPsec IKE      0.00%    0.001G      0.00%     0.006G

Approximately a three-times domination, which is our standard benchmark for a good whipping in military terms. Although this is not a pitched battle of like armies contesting the same space (like the VPN bloodletting to come), it is important to establish that SSH usage is significant, non-trivial, and exceeds HTTPS on all measures.
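A quick sanity check on the "three times" figure, using the percentages quoted above (a throwaway sketch, nothing more):

```python
# Encrypted traffic shares from the report, as quoted above.
ssh_volume, https_volume = 3.42, 1.11      # percent of traffic, by volume
ssh_packets, https_packets = 3.51, 1.67    # percent of traffic, by packets

print(f"SSH/HTTPS by volume:  {ssh_volume / https_volume:.2f}x")
print(f"SSH/HTTPS by packets: {ssh_packets / https_packets:.2f}x")
```

By volume the domination is a shade over 3x; by packets it is closer to 2x, so "approximately three times" holds strictly on the volume measure, but SSH comes out ahead either way.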

IPsec barely twitched the needle, and others weren't reported. Curiously, the amount of HTTPS is way up compared to HTTP: about 7-8%. I would have expected much less; perhaps the many silent but resilient readers of FC have more impact than previously thought.

There's one monster catch: this is "Internet2", a separately funded research space, possibly as relevant to the real net as candy prices are to the economy. Also, no mention of Skype. Use of Rsync and FTP slightly exceeds that of all encrypted traffic. Hmmm.... people still use Rsync. What is wrong here? I have not come across an rsync user since ... since ... Any clues?

Still it's a number. Any number is good for an argument.

Posted by iang at 04:05 PM | Comments (5) | TrackBack

June 26, 2006

Sealand - more pictures

Fearghas points to more graphical evidence of risk practices from the Lifeboat station in Harwich.

For those bemused by this attention -- a brief rundown of "Roughs Tower," a.k.a. Sealand. Some 5 or so years ago, some colourful libertarian / cypherpunk elements started a "data haven" called HavenCo running in this "claimed country". Rumour has it that after the ISP crashed and burnt, all the servers were moved up the Thames estuary to London.

Not wishing to enter into the discussion of whether this place was *MORE* risky or less risky ... Chandler asks:

What kind of country doesn't have a fire department? One that doesn't plan on having a fire, as evidenced by the fact that Sealand/HavenCo didn't have fire insurance.

Well, as they were a separate jurisdiction, they probably hadn't got around to (ahem) legislating an insurance act? Or were you thinking of popping into the local office in Harwich and picking up a home owner's policy?

:) More pictures on site.

Posted by iang at 07:14 PM | Comments (1) | TrackBack

on Leadership - roles around the May Pole

Some blogs over in the Mozilla camp are talking about leadership. The thrust of the debate seems to be that it would be nice if we could add a dash of it into open source communities. Sure would, but other than it being nice if it was Christmas every day, what is this really about?

Part of the problem with leadership revolves around what it means. I'll declare my colours up front - if you said that leadership was simply a journo-term for something we don't understand, I wouldn't argue strongly with you. There are massive amounts of confusion about it, and we should start by stating that what we don't know about leadership is ... almost everything. Here's what John Kotter, perhaps the world's leading academic on leadership, says:

People are put into leadership positions and ‘magically’ expected to know how to be a leader. Sometimes certifications, such as an MBA or CA are assumed to infer leadership ability. While there are certainly valid assets to these qualifications, they don’t teach you how to be a leader. You can’t learn that in a classroom. You can only learn that in the trenches, emulating others, trying things, and making mistakes as you go. Unfortunately, many of the role models are autocrats, and there is rarely a regular, formal measurement system for leadership. So leaders are left on their own to create their own self-perceptions of their effectiveness.

Kotter wrote two highly influential articles called What do leaders do? and What do managers do? You should look them up in HBR and read them, for many reasons. Here's an example from FactoryJoe (right before he called me egalitarian?!) on what people do:

Here’s what’s strange about it: throughout the meeting (I can’t be sure but…) I did feel like I was sitting in the role of facilitator — not exactly the leader, but close enough. I mean, that’s a pretty common role to play, right? Most meetings need a leader of sorts, right?

Facilitator, definitely, that's a key role. But leader? Of a meeting? No, not quite; a facilitator is a totally normal and necessary role in a meeting. Here's why:

When groups come together and get into the conquering of some grand plan, many roles emerge. A facilitator is one, but there are others. Some research has it that there are 8 or 9 roles, and curiously the most effective teams have 4 people, where each person plays 2 roles! That research (Belbin) does not identify leadership per se, but it does identify a role that might traditionally be associated with it: the Chair. This is the person who rules and overrules, keeps the agenda on track, etc.

However, that Chair is not the leader -- a good leader writes themselves out of the forefront role if they can, and will if there is a better person to play that role. In fact, some other research has indicated that a good leader is the person who doesn't do what the others are good at doing, but instead picks out the roles that are missing. So they may play the facilitator today, but in tomorrow's meeting they may be acting as the architect, whereas yesterday they were pushing the tea trolley.

Which leads us to another observation - there can be many leaders, and leadership is not a monopoly.


There might be a manager responsible for a department, but that doesn't mean that his department can't be full of leaders. And what do these leaders do, when not being boss of a department, or being bossed by a bossy boss? Not lead? Well, of course not -- they lead by shifting to other roles and other capabilities, by identifying what's missing, working out how to supply what's missing, and juggling the factors to make that happen (where the factors include oneself). It's a bit like that complicated dance around the May Pole: everything keeps shifting, in a merry dance, but that's because the dancers know that the end result is important, not the position they play.

( So, for example, check out the Mitchell posts. In brief but rude terms - 1. some decision lacks are killing us. 2. here's what we are limited to. 3. here's what we should be doing ... each one successively pushing closer to what those decisions should be based on. Not because Mitchell likes to write, but because there is a lack of foundation in what to do. Or, when Frank says that he believes he can list the things that make leadership work, the interesting thing is not whether he's right or whether I'm wrong in saying it can't be done ... rather it is the fact that it is needed, so someone is doing it! )

Tomorrow, I'll post on communications, if someone reminds me.

Posted by iang at 07:12 PM | Comments (4) | TrackBack

How many people are turned away by the FC certificate?

Peter Gutmann asks:

Do you have any figures on how many security people your self-signed certificate is turning away? I'd be interested in knowing whether the indistinguishable-from-placebo effect of SSL certs also extends to a site used mainly by security people.

I have no idea! Anybody?

Posted by iang at 04:48 AM | Comments (9) | TrackBack

June 25, 2006

FC++3 - Concepts against Man-in-the-Browser Attacks

This emerging threat has sent a wave of fear through the banks. Different strategies have been formulated and discussed in depth, and just this month the first roll-outs have been seen in Germany and Austria. This information cries out for release, as there are probably 10,000 other banks out there that would otherwise each have to go through and do the work again.

Philipp Gühring has collected the current best understanding together in a position paper entitled "Concepts against Man-in-the-Browser Attacks."

Abstract. A new threat is emerging that attacks browsers by means of trojan horses. This new breed of trojan horses can modify transactions on-the-fly, as they are formed in browsers, and still display the user's intended transaction to her. Structurally, they are a man-in-the-middle attack between the user and the security mechanisms of the browser. Distinct from Phishing attacks, which rely upon similar but fraudulent websites, these new attacks cannot be detected by the user at all, as they use real services, the user is correctly logged in as normal, and there is no difference to be seen.

The WYSIWYG concept of the browser is successfully broken. No advanced authentication method (PIN, TAN, iTAN, Client certificates, Secure-ID, SmartCards, Class3 Readers, OTP, ...) can defend against these attacks, because the attacks are working on the transaction level, not on the authentication level. PKI and other security measures are simply bypassed, and are therefore rendered obsolete.

If you are not aware of these emerging threats, you need to be. You can either get it from sellers of private information or you can get it from open source information sharing circles like FC++ !

Posted by iang at 12:43 PM | Comments (8) | TrackBack

FC++3 - The Market for Silver Bullets

In this paper I dip into the esoteric theory of insufficient markets, as pioneered by Nobel Laureate Michael Spence, to discover why security is so difficult. The results are worse than expected - I label the market as one of silver bullets. Yes, there are things that can be done, but they aren't the things that people have been suggesting.

This paper is a bit tough - it is for the serious student of econ & security. Far from being the pragmatic "fix this now" demands of Philipp Gühring and the "rewrite it all" diagnosis of Mark Miller, it offers a framework of why we need this information out there in the public sphere.

What is security?

As an economic `good', security is now recognised as being one for which our knowledge is poor. As with safety goods, events of utility tend to be destructive, yet unlike safety goods, the performance of the good is very hard to test. The roles of participants are complicated by the inclusion of aggressive attackers, and buyers and sellers that interchange.

We hypothesize that security is a good with insufficient information, and reject the assumption that security fits in the market for goods with asymmetric information. Security can be viewed as a market where neither buyer nor seller has sufficient information to be able to make a rational buying decision. These characteristics lead to the rise of a market in silver bullets, as participants herd in search of best practices - a common set of goods that arises more to reduce the costs of externalities than to achieve benefits in security itself.

Does it really show that the security market is one of silver bullets, and best practices are bad, not good? You be the judge! That's what we do in FC++, put you in the peer-review critic's seat.

Posted by iang at 11:53 AM | Comments (1) | TrackBack

June 24, 2006

Sealand burnt out - aid sent by neighbour UK - security guard airlifted

All things come to pass. Sealand, the erstwhile independent country in the Thames estuary and home of the HavenCo ISP for arbitrage businesses, burns out. Not to the waterline, but early reports have it as destroyed by a generator fire.

Sealand had one security guard on site. It looks like they have paid the price for high aspirations and low extinguishers.

The UK, being the nearest neighbouring country, immediately sent in disaster relief. Chances are they will probably stay to help the country back on its feet. And stay, and stay...

Late breaking news: "Michael Bates told the Evening Star the family would not give up its ownership of the former war-time fort and wanted to carry on running it."

Princely bow to JPM's wife who told him what's current and interesting.

Posted by iang at 04:41 PM | Comments (1) | TrackBack

Identity 7, watchlist error rate, $300 to get off the watchlist

I love this article, it's cracker-jack full of interesting stuff about a crime family who have industrialised identity document production in the US.

The dominant forgery-and-distribution network in the United States is allegedly controlled by the Castorena family, U.S. Immigration and Customs Enforcement officials say. Its members emigrated from Mexico in the late 1980s and have used their printing skills and business acumen to capture a big piece of the booming industry.

Nice colour, there. Actually the entire article is full of colour, well worth reading. We'll just do the dry facts here:

Federal authorities said that calculating the financial scope of document forgery is virtually impossible but that illicit profits easily amount to millions of dollars, if not billions. One investigation of CFO operations in Los Angeles alone resulted in *the seizure of 3 million documents with a street value of more than $20 million.*

"We've hit them pretty hard, but have we shut down the entire operation? I don't think we can say that yet," said Scott A. Weber, chief of the agency's Identity and Benefit Fraud Unit. "We know there are many different cells out there, and they are still providing documents."

Ouch. $20 million divided by 3 million documents is about $7 a document. Identity 7, here we come.

Illegal immigrants are often given packages of phony documents as part of a $2,000 smuggling fee. Others can easily make contact with vendors who operate on street corners or at flea markets in immigrant communities in virtually every city. .. . A typical transaction includes key papers such as a Social Security card, a driver's license and a "green card" granting immigrants permanent U.S. residency. Fees range from $75 to $300, depending on quality.

Identity is a throw-in for a $2000 package tour sold out of Mexico. Say no more. Obviously, these numbers are all screwed up as there is a big difference between $75 and $7. But, consider. Even at $300, it would be more cost-effective for the average American business traveller to travel on false documentation than to do the following:

Currently, individuals who want to clear their names have to submit several notarized copies of their identification. Then, if they're lucky, TSA might check their information against details in the classified database, add them to a cleared list and provide them with a letter attesting to their status.

More than 28,000 individuals had filed the paperwork by October 2005, the latest figures available, according to TSA spokeswoman Amy Kudwa. She says the system works. "We work rigorously to resolve delays caused by misidentifications," Kudwa says.
...
The TSA's lists are only a subset of the larger, unified terrorist watch list, which consists of 250,000 people associated with terrorists, and an additional database of 150,000 less-detailed records, according to a recent media briefing by Terrorist Screening Center director Donna Bucella. The unified list is used by border officials, embassies issuing visas and state and local law enforcement agents during traffic stops.

This programme is of interest because its identity keystone drives other programmes. We are looking at a 7% error rate as a minimum, which should come as no surprise - of course, there are unlikely to be more than 100 people on the list who really qualify as "terrorists who are likely to do some damage on a plane", so if the error rate is anything less than 99% then we should probably be stopping the planes right now. About the best we can conclude is that the strategy of stopping terrorists by identifying them doesn't seem worth emulating in financial cryptography.
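For what it's worth, the 7% figure can be reproduced from the numbers quoted above -- assuming (my assumption, not stated in the article) that the 28,000 people who filed paperwork are set against the combined 400,000 watchlist entries:

```python
# Figures quoted in the article.
filed_to_clear_name = 28_000     # people who had filed TSA paperwork by Oct 2005
unified_watchlist = 250_000      # people on the unified terrorist watch list
less_detailed = 150_000          # additional, less-detailed records

error_rate = filed_to_clear_name / (unified_watchlist + less_detailed)
print(f"minimum error rate: {error_rate:.0%}")   # 7%
```

And it is a minimum, since it counts only the misidentified people motivated enough to file notarized paperwork.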

And Darren points out the statistical unwisdom of relying on such programmes:

Suppose that NSA's system is really, really, really good, really, really good, with an accuracy rate of .90, and a misidentification rate of .00001, which means that only 3,000 innocent people are misidentified as terrorists. With these suppositions, then the probability that people are terrorists given that NSA's system of surveillance identifies them as terrorists is only p=0.2308, which is far from one and well below flipping a coin. NSA's domestic monitoring of everyone's email and phone calls is useless for finding terrorists.
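Darren's p=0.2308 can be reconstructed with Bayes' rule. He doesn't state the number of actual terrorists, so the 1,000 below is my back-solved assumption; the 300 million innocents follows from his 3,000 misidentifications at a 0.00001 rate:

```python
# Assumed population figures (back-solved from Darren's numbers, not given by him).
terrorists = 1_000               # assumption: makes his p come out at 0.2308
innocents = 300_000_000          # 3,000 misidentified / 0.00001 rate
detection_rate = 0.90            # his "accuracy rate of .90"
false_positive_rate = 0.00001    # his misidentification rate

true_positives = detection_rate * terrorists          # flagged terrorists
false_positives = false_positive_rate * innocents     # flagged innocents

# P(terrorist | flagged), by Bayes' rule:
p = true_positives / (true_positives + false_positives)
print(f"P(terrorist | flagged) = {p:.4f}")   # 0.2308
```

The base-rate problem is visible at a glance: even a tiny false-positive rate over 300 million innocents swamps the handful of true positives.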

Sure. But the NSA are not using the databases to find terrorists. Instead, when other leads come in, they look to see what they have in their databases -- to add to the lead they already have. Simple. With this strategy, clearly, the more data, the more databases, the better this works.

But, again, it doesn't seem a strategy that we'd emulate in FC.

Posted by iang at 12:29 PM | Comments (2) | TrackBack

June 23, 2006

The Fed knows - more evidence that the Fed is managing the washback

I proposed a hypothesis on US debt levels a few weeks back: "US debt has dramatically expanded not (only) because of Bush administration but (also) because of the buyback of US currency as it shifts its status from 'absolute reserve' to 'leading reserve' ."

I asked around and didn't get any confirmation on the hypothesis of the managed washing of the US currency! Here's an indication that others are also spotting it. Anne Streiber writes:

There is evidence that the US is attempting to manage the decline by purchasing its own debt. As Asian purchasing of US paper declined last month, the slack was taken up by Caribbean and UK banks that would not normally have the liquidity to make such purchases. Therefore, they are acting for a third party, and the only party that would buy dollars when a loss in value is inevitable is the US Treasury.

Curiously, we've known about the USD washback to the US for years, in the sense of an expectation. And we've known that US debt levels have been soaring for a long time. But it took the confirmatory evidence from central banks around the world, post facto, before we were confident enough in our understanding to take the next step - join the two as causal.

Which means, the Fed does know a lot more than we think. We are years behind.

Posted by iang at 06:41 AM | Comments (2) | TrackBack

June 19, 2006

White Helicopter - Is eavesdropping a "Clear and Present Danger" - the definition of a validated threat?

We have often discussed how threats arise and impact security models. The hugely big question is whether to include this threat or this other threat? I think there is a metaphor to address part of this question - whether a threat is a clear and present danger. Let me meander in that general direction before I try and define it.

We cannot include all threats, as to some extent everything is a threat - a chance of stubbing one's toe, a harsh word from your spouse, a neighbour looking over the fence, the theft of your notebook. Removal of all threats would result in death of the soul, and can probably only be accomplished by death of the body.

So we must choose - which threats to protect against and which to accept. We give it the exotic title of risk management, but a more common definition is real life: we can only live by choosing to accept the greater body of the minor threats to us, and minimising the dangerous ones.

How we choose which threats to address is based on many factors. Some are easy - we can defend against them for free. Others are so cheap that we don't notice, or they come with substantial other benefits. So, to preserve our modesty, we wear clothes - and that keeps us warm, so we get the security for free. Except in summer, when humans are exposed to interesting social games between the threat model of modesty and the heat of the sun, which raises for some the deep question of whether nudity is the threat, or the modesty. But then winter comes again and the argument is shelved for another year.

Other threats are not so easy nor so endearing to discuss. These are the ones that run slap bang into costs. The canonical case in financial cryptography is the MITM - the man-in-the-middle attack -- and its defence in the SSL protocol. I think I have shown, in compelling albeit long-winded terms, that this threat was not valid and not worth protecting against in the application sometimes known as ecommerce. The defence raised many costs, one amongst them being a heightened risk of MITM in another form - phishing. See many rants on that elsewhere.

One of the things that came out of that long research into SSL, and its failure to prevent the very harm it was intended to protect against, is the concept that a threat should be validated. I only had a hazy idea that this meant we should be able to prove its danger to us, clearly enough for us to protect against it.

Now, in addressing the emerging threat of eavesdropping, the question arises whether it is validated? We can see it, we can feel it - it is in the papers and the blogs and in the "denials." But is it validated?

I think not. Until we know how much of it there is, we don't know what the risk of it happening to us is, and therefore we do not know how much to spend protecting against it. We simply do not know -- yet -- how much the eavesdropping is going to cost us, either individually or as a society. So we are not informed enough to make economic decisions.

But wait! I've laid out a case that the danger of eavesdropping is right there in front of us -- how can we possibly ignore it? Let's look a little further.

The possibility of eavesdropping by national agencies has always been there. I first heard of Echelon in the early 80s -- so far back in time that I can't recall where or when it was mentioned. But I also knew -- or discovered at that time -- that it was ineffective. That is, it did not achieve the dream of the technologists at the UKUSA agencies. (For the answer to why, you probably need to resort to computer science and the emergence of datamining.)

So we know that eavesdropping has always been there, in Internet time. It is present. But also, we know that traditionally, societies did not permit the eavesdroppers to share that information. History is replete with examples where the spooks were not permitted to pass valuable local intelligence to the authorities, and no doubt they all have stories about how they know who the killer was in this or that unsolved murder case.

So we know that however effective the eavesdropping was, it wasn't dangerous to us because it was so tightly constrained that it would never be passed into general society. That was the quid pro quo, the deal with the devil.

And indeed, that is what has changed -- the eavesdropping information is now being shared across a wide group of agencies. It's only a step away from being commercially shared, once you can pick and choose which agency to pervert. So it is now dangerous to society - to you, me and everyone - because there is always someone with money to pay for data that we are trying to keep private.

But we lack clarity. As a community of Internet engineers, we still do not know how much this danger is going to cost us. I simply do not know whether to drop everything I'm doing and start working on cryptoplumbing again, or whether, for the most part, someone can still hide in the noise levels of the net. We lack clarity, or clearness, in our threat.

Out of which thoughts comes a general definition for a validated threat: is it a clear and present danger?

  • It is Clear if I can measure it and risk-analyse it, so as to present a cost model to drive our economic choices.
  • It is Present if I can show it to exist, actively, today.
  • It is a Danger if it can do our beneficiaries harm.

Eavesdropping is Present and Dangerous. It is not yet Clear, so we are now challenged as a community to measure it. Once we can inform ourselves of the clarity of the threat, we can declare it to be a validated threat - a clear and present danger. We're not there yet, but at least I can propose a definition on how to get there!

Posted by iang at 05:56 PM | Comments (0) | TrackBack

Black Helicopter #2 (ThreatWatch) - It's official - Internet Eavesdropping is now a present danger!

A group of American cryptographers and Internet engineers have criticised the FCC for issuing an order that amounts to a wiretap instruction for all VoIP providers.

For many people, Voice over Internet Protocol (VoIP) looks like a nimble way of using a computer to make phone calls. Download the software, pick an identifier and then wherever there is an Internet connection, you can make a phone call. From this perspective, it makes perfect sense that anything that can be done with a telephone, including the graceful accommodation of wiretapping, should be able to be done readily with VoIP as well.

The FCC has issued an order for all ``interconnected'' and all broadband access VoIP services to comply with Communications Assistance for Law Enforcement Act (CALEA) --- without specific regulations on what compliance would mean. The FBI has suggested that CALEA should apply to all forms of VoIP, regardless of the technology involved in the VoIP implementation.

In brief the crypto community's complaint is that it is very difficult to implement such enforced access, and to do so may introduce risks. I certainly agree with the claim of risks, as any system that has confused requirements becomes brittle. But I wouldn't bet on a company not coming out with a solution to these issues, if the right way to place the money was found. I've previously pointed out that Skype left in a Centralised Vulnerability Party (CVP, sometimes called a TTP), and last week we were reminded of the PGP Inc blunder by strange and bewildering news over in Mozilla's camp.

So where are we? The NSA has opened up the ability to pen-trace all US phones, more or less. Anyone who believes this is as far as it goes must be disconnected from the net. The EFF's suit alleges special boxes that split out the backbone fibre and suck it down to Maryland in real time. The FBI has got the FCC to order all the VoIP suppliers into line. Mighty Skype has been brought to heel by the mighty dollar, so it's only a phone call away.

Over in other countries - where are they, again? - there is some evidence that police in European countries have routine access to all cellphone records. There is other evidence that the EU may already have provided the same call records to the US (but not the other way around, how peculiar of those otherwise charming Europeans) in much the same way as last week the EU were found to be illegally passing private data on air travellers. To bring this into perspective, China of course leads the *public* battle for most prominent and open eavesdropper with their Cisco Specials, but one wonders whether they would be actually somewhat embarrassed if their capabilities were audited and compared?

If you are a citizen of any country, it seems, you need not feel proud. What can we conclude?

  1. Eavesdropping has now moved to a real threat for at least email and VoIP, in some sense or other.
  2. Can we say that it is a validated threat? No, I think not. We have not measured the frequency and cost levels so we have no actuarial picture. We know it is present, but we don't know how big it is. I'll write more on this shortly.
  3. The *who* that is doing it is no longer the secure, secret world of the spooks who aren't interested in you. The who now includes the various other agencies, and they *are* interested in you.
  4. Which means we are already in a world of widespread sharing across a wide range of government agencies. (As if sharing intel has not been a headline since 9/11 !)
  5. It is only one step from open commercial access. Albeit almost certainly illegal, there isn't likely to be anything you can do about illegally shared data, because it is the very agents of the law who are responsible for the breach; they will utter the defence of "national security" to you, and quote the price to your attacker.
  6. An assault on crypto can't be that far off. The crypto wars are either already here again, or so close we can smell them.
  7. We are not arguing here, today, whether this is a good thing for the mission to keep us safe from terrorists, or a bad thing. Which is just as well, because it appears that when they are given the guy's head on a plate, the law enforcement officers still prefer to send out for takeaway.

My prediction #1 for 2006 that government will charge into cyberspace in a big way is pretty much confirmed at this stage. Obviously this was happening all along, so it was going to come out. How important is this to you, the individual? Here's an answer: quite important. And here's some evidence:

What is Political Intelligence? Political intelligence is information collected by the government about individuals and groups. Files secured under the Freedom of Information Act disclose that government officials have long been interested in all forms of data. Information gathered by government agents ranges from the most personal data about sexual liaisons and preferences to estimates of the strength of groups opposing U.S. policies. Over the years, groups and individuals have developed various ways of limiting the collection of information and preventing such intelligence gathering from harming their work.

It has now become routine for political activists -- those exercising their rights under democracy -- to be investigated by the FBI. In what is a throwback to the days of J. Edgar Hoover, these activists now routinely advise their own people on how to lawfully defend themselves.

Hence the pamphlet above. There are two reasons for gathering information on 'sexual liaisons and preferences.' Firstly, blackmail or extortion. Once an investigator has secret information on someone, the investigator can blackmail -- threaten to reveal that information -- in order to extort the victim into turning on someone else. Secondly, there may be some act that is against the law somewhere, which gives a really easy weapon against the person. Actually, they are both the same reason.

If there is anyone on the planet who thinks that such information shouldn't be protected, then I personally choose not to be persuaded by that person's logic ("I've got nothing to hide"), and I believe that we now have a danger. It's not only from the harvesting by the various authorities:

Peter G, 41, asked for a divorce from his wife of six years, Lori G, 38, in March 2001. ... Lori G filed a counterclaim alleging the following: <snip...> and wiretapping. The wiretapping charges are what make this unfortunate case relevant to Police Blotter. ... But Peter admitted to "wiretapping" Lori's computer.

The description is general: Peter used an unspecified monitoring device to track his wife's computer transactions and record her e-mails. Lori was granted $7,500 on the wiretapping claim. ...

This is hardly the first time computer monitoring claims have surfaced in marital spats. As previously reported by CNET News.com, a Florida court ruled last year that a wife who installed spyware on her husband's computer to secretly record evidence of an extramarital affair violated state law.

Some hints on how to deal with that danger. Skype is probably good for the short term in talking to your loved one while he still loves you, notwithstanding their CVP, as attacking that involves an expensive, active, aggressive act which incurs a risk for the attacker. However, try to agree to keep the message history off - you have to trust each other on this, as your node and your partner's node remain in greater danger. Email remains poor because of the rather horrible integration of crypto into standard clients - so use Skype or other protected chat tools.

Oh, and buy a Mac laptop. Although we do expect Macs to come under increased attention as they garner more market share, there is still a benefit in being part of a smaller population, and the Mac OS is based on Unix and BSD, which has approximately 30 years of attention to security. Windows has approximately 3 years, and that makes a big difference.

(Disclosure: I do not own a Mac myself, but I do sometimes use one. I hate the GUI, and the MacMini keyboards are trash.)

Posted by iang at 01:20 PM | Comments (1) | TrackBack

June 12, 2006

Naked Payments IV - let's all go naked

Dave Birch says let's all get it off:

I've got a very simple, and absolutely foolproof, plan to reduce payment card fraud (much in the news recently) to zero. It's based on ... So here goes:

Change the law. Have the government pass a bill that says that, as from 1st January 2011, it won't be against the law to use someone else's payment card. Result: on 1st January 2011, card fraud falls to zero because there won't be any such thing as card fraud.

This has two benefits, both of which greatly increase the net welfare.

Firstly, it would stimulate competition between payment card companies to provide cards that could not be used by anyone other than the rightful owner.

OK, logical, coherent and a definite brain tease. Much of the underlying reason that naked payments waft comfortably around inside the network is that the inside network is built of corporations that rely on the crime of misusing a payment, naked or otherwise. With such strong criminal punishments in place, they can push the naked and vulnerable payments around.

Before you discount the idea totally, consider this: it is already in operation to some extent. In the open governance payments world, there is no effective "law" operating that makes it "illegal" to use some account or other. Rather, the providers live in what we might term the "open governance" regime, and there, they use a balance of techniques to defend themselves and their customers. Those techniques refer often to contract laws, but try not to rely on criminal laws.

Does it work? I think so. Costs are lower; most such systems operate at under 1% transaction fees, whereas the regulated competitors operate at around 2-5%. P2P fraud seems lower, but unfortunately nobody talks about the fraud rates that much (and in this way, the open governance world mirrors the regulated world), so it is difficult to know for sure. Successful attacks appear lower than with regulated US/UK systems, although not lower than mainland Europe. Possibly this is a reflection of the lack of anyone backstopping them, and the frequency of unsuccessful attacks giving lots of practice.

One thing's for sure - the open governance providers would be quite happy to get rid of that law, as they don't expect to benefit from it anyway.

Probably a useful area to research - although I get the feeling that nobody in the regulated world wants to honour the alternative with admission, and the same scorn exists in the governed world, so a researcher would have to be careful not to give the game away.

Posted by iang at 03:20 PM | Comments (8) | TrackBack

June 11, 2006

USD shift in reserve currency status confirmed - call it 10% per year

Below are some figures about how the USD is now losing some of its power as world currency. Note that this has been expected for many years now, but obviously if you are one of the CBs that wants to shift reserves, you want to do it without stating it. So we've had to sit here on the prediction for some time, biting our fingernails.

On Thursday, June 8, Russia became the latest in the list of countries that shifted a part of its Central Bank reserves from the dollar. Sergei Ignatyev, chairman of the Central Bank, said that only 50 percent of its reserves are now held in dollars, with 40 percent in euros and the rest in pounds sterling. Earlier it was believed that just 25-30 percent of Russia's reserves were held in euros, with virtually all the rest held in dollars.

Let's do the maths, so as to explain why this is significant. If we take the shift as from 60% to 50%, allowing euros to rise from 30% to 40%, then we see a relative shift in USD demand of about 17% -- call it 20%. Spread it over 2 years, and we can guess at a shift of 10% per year in the total international currency use of USD.
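The napkin arithmetic runs like this, using the figures from the paragraph above (the 60% prior share is the assumption stated there, not a reported number):

```python
# Napkin maths for the reserve shift described above.
old_usd_share = 0.60   # assumed prior USD share of reserves
new_usd_share = 0.50   # reported USD share after the shift

# Relative fall in USD demand: 10 points off a 60% base.
relative_shift = (old_usd_share - new_usd_share) / old_usd_share
print(f"relative shift in USD demand: {relative_shift:.0%}")  # ~17%, call it 20%

# Spread the rounded 20% over two years.
per_year = 0.20 / 2
print(f"per-year shift: {per_year:.0%}")  # 10% per year
```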

If all countries are doing this - and there are good game theory, trade and geopolitical reasons to suspect this - then we see a massive washing around the world of some 10% of the USD during the space of a year. This will go on until we reach a new stability, a level which is anyone's guess at the moment.

What then happens to the "value?" Obviously, the music stops at some point. Now, my macroecon is a bit rusty, but here's what I think happens. Most of the money bounces around the world, and supply then exceeds demand, so the price starts dropping. As there is a clear need to totally get rid of a substantial lump of it, this goes on until that is "got rid of." But how do we get rid of currency? Who takes it back these days?

The mechanism for this apparent paradox is US assets. As the USD price goes down, US assets start to look cheaper and cheaper. So more and more of the value finds itself coming out of the international washing machine and into the US markets. Stocks (shares, companies) and real estate. IP catalogues. Money market investment. Anything available that will be for sale in USD will be purchased.

Now, the sellers of these things will either be foreigners (in which case nothing changes, the music is still playing) or they will be US persons, in which case, they are happy to hold dollars. Demand for dollars is always firm in the US, by definition.

So the music stops when that above value lands in the US. The foreign dollars are exchanged for US assets. A great sell-off, in other words.

But wait - what happens to the dollars then? Well, there are now too many of them in the US. Now we see why the US economy is continuing to boom. The dollars are coming back home, and *effective inflation* is running at the amount calculated above.

(Well, it's a bit worse than that. 2/3 of the dollar is outside the US. So a 10% shift from outside to inside means a doubling effect on local dollars. Yup, there is in these assumptions a 20% increase in the number of dollars washing back to the US every year, but bear in mind these are napkin numbers.)
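The parenthetical doubling effect follows directly from the two-thirds/one-third split; a sketch with the same napkin numbers:

```python
# Why a 10% shift of external dollars roughly doubles as an effect on
# local dollars: the external stock is twice the internal stock.
external = 2 / 3   # share of USD stock held outside the US (napkin figure)
internal = 1 / 3   # share held inside the US

wash_back = 0.10 * external           # 10% of the external stock comes home
increase_in_local = wash_back / internal
print(f"increase in domestic dollars: {increase_in_local:.0%}")  # 20% per year
```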

What does this mean? Likely that the housing bubble will not burst, or not burst so aggressively. Likely that businesses will find plenty of cash for loans, so they'll be running on infinite credit. The stock market is still pointed up! But prices will be shifting against companies and individuals in a fairly significant jolt of inflation, and what's more, the Fed won't be able to curtail it as it normally does.

To stop it, the Fed would have to soak up that liquidity. How's it going to do that? Issue more bonds? Hmmm, there's a thought. Is the massive debt increase over the last 6 years really nothing to do with the administration, but it is all the flip side of soaking up the wash back? Any real macroeconomists in the house? Can we do some napkin numbers on how much additional debt has been issued and how much currency is washing in? (Ed: Confirmation?)

Full article:

Russia Shifts Part of Its Forex Reserves from Dollars to Euros

Created: 09.06.2006 11:02 MSK (GMT +3), Updated: 16:06 MSK MosNews

On Thursday, June 8, Russia became the latest in the list of countries that shifted a part of its Central Bank reserves from the dollar. Sergei Ignatyev, chairman of the Central Bank, said that only 50 percent of its reserves are now held in dollars, with 40 percent in euros and the rest in pounds sterling. Earlier it was believed that just 25-30 percent of Russia's reserves were held in euros, with virtually all the rest held in dollars.

Russia's gold and foreign currency reserves have grown rapidly over the last few years in tandem with high oil and gas prices. As MosNews has reported earlier, Russia currently has the world's fourth-largest reserves, after China, Japan and Taiwan, and it looks to overcome Taiwan by the end of the year, with reserves growing by $5-6 billion monthly.

The Russian Central Bank's move ties in with increasing signs that Middle Eastern oil exporters are also looking to diversify their reserves out of the dollar. "This is a bearish development for the dollar," Chris Turner, head of currency research at ING Financial Markets, told the British Financial Times. "It reminds us that global surpluses are accumulating to the oil exporters, and Russia is telling us that an increasingly lower proportion of these reserves will be held in dollars. This suggests there is a trend shift away from the dollar."

Clyde Wardle, senior Emerging Market Currency strategist at HSBC, told the paper: "We have heard talk that Middle Eastern countries are doing a similar thing and even some Asian countries have indicated their desire to do so."

Moscow's move was unsurprising. Russia's $71.5billion Stabilization fund, which accumulates windfall oil revenues, is due to be converted from rubles to 45 percent dollars, 45 percent euros and 10 percent sterling. The day-to-day movements of the ruble are monitored against a basket of 0.6 dollars and 0.4 euros. About 39 percent of Russia's goods imports came from the eurozone in 2005, against just 4 percent from the US.

The statement plays into a perception that central banks, which together hold $4.25 trillion of reserves, are increasingly channeling fresh reserves away from the dollar to reduce potential losses if the dollar was to fall sharply.

Copyright © 2004 MOSNEWS.COM
http://www.mosnews.com/money/2006/06/09/dollarshift.shtml

Posted by iang at 01:29 PM | Comments (4) | TrackBack

June 10, 2006

Naked Payments I - New ISO standard for payments security - the Emperor's new clothes?

[Anne & Lynn Wheeler do a guest post to introduce a new metaphor! Editorial note: I've done some minor editorial interpretations for the not so eclectic audience.]

From the ISO, a new standard aims to ensure the security of financial transactions on the Internet:

ISO 21188:2006, 'Public Key Infrastructure for financial services - practices and policy framework', offers a set of guidelines to assist risk managers, business managers and analysts, technical designers and implementers and operational management and auditors in the financial services industry.

My two bits [writes Lynn], in light of the recent British Chip&Pin vulnerability thread, is to consider another metaphor for viewing session authentication paradigms: they tend to leave the transaction naked and vulnerable.

In the early 1990s, we had worked on the original payment gateway for what came to be called e-commerce 1, 2 (as a slight aside, we also assert it could be considered the first SOA implementation 3 - Token-ring vs Ethernet - 10 years later).

To some extent, the transaction vulnerability analysis for the x9.59 work done in the mid-90s was based on analysis of, and experience with, that original payment gateway, as it was implemented on the session-oriented paradigm 4.

This newer work resulted in something that very few other protocols did -- defined end-to-end transactions with strong authentication. Many of the other protocols would leave the transaction naked and vulnerable at various steps in the processing. For example, session-oriented protocols would leave the entire transaction naked and vulnerable. In other words, the bytes that represent the transaction would not have a complete end-to-end strong authentication related to exactly that transaction, and therefore leave it naked and vulnerable for at least some part of the processing.

This then implies that the complete end-to-end business process has to be heavily armored and secured, and even minor chinks in the business armoring would then result in exposing the naked transaction to the potential for attacks and fraud.

If outsider attacks aren't enough, naked transactions are also extremely vulnerable to insider attacks. Nominally, transactions will be involved in a large number of different business processes, exposing them to insider attacks at every step. End-to-end transactions including strong authentication armors the actual transaction, and thus avoids leaving the transaction naked and vulnerable as it travels along a vast array of processing steps.
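The distinction can be sketched in a few lines: a session protocol protects the pipe, while an end-to-end design authenticates the transaction bytes themselves, so the protection travels with them through every processing step. This is a toy illustration only, with a shared-key HMAC standing in for the digital signature that x9.59 actually specifies; the key and message here are invented:

```python
import hmac, hashlib

# Toy sketch: the authenticator is computed over the transaction bytes
# themselves, so it stays attached through every hop -- unlike session
# encryption, which ends when the session ends.
payer_key = b"payer-secret-key"   # illustrative key material
transaction = b"pay 100.00 from acct A to acct B, ref 42"

mac = hmac.new(payer_key, transaction, hashlib.sha256).hexdigest()
wire = (transaction, mac)          # what travels end to end

# Any later step holding the key can detect tampering:
received_txn, received_mac = wire
expected = hmac.new(payer_key, received_txn, hashlib.sha256).hexdigest()
print(hmac.compare_digest(received_mac, expected))  # True: transaction intact
```

A real end-to-end design would use a public-key signature so that intermediaries can verify without holding the payer's secret; the point here is only that the authentication is bound to the transaction, not to a session.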

The naked transaction paradigm also contributes to the observation that something like seventy percent of fraud in such environments involves insiders. End-to-end transactions with strong authentication (armoring the actual transaction) then also alleviate the need for enormous amounts of total business process armoring. As we find it necessary to protect naked and vulnerable transactions, we inevitably find that absolutely no chinks in the armor can be allowed, resulting in expensive implications for the business processing - the people and procedures employed.

The x9a10 working group (for what became the x9.59 financial standard) was given the requirement to preserve the integrity of the financial infrastructure for all retail payments. This meant not only having countermeasures to things like replay attacks (static data that could be easily skimmed and resent), but also having end-to-end transaction strong authentication (eliminating the vulnerabilities associated with having naked and vulnerable transactions at various points in the infrastructure).

The x9.59 financial standard for all retail payments then called for armoring and protecting all retail transactions. This then implied the business rule that account numbers used in x9.59 transactions could not be used in transactions that didn't have end-to-end transaction strong authentication. This eliminated the problem with knowledge leakage; if the account number leaked, it no longer represents a vulnerability. I.e. an account number revealed naked was no longer vulnerable to fraudulent transactions 5.

Part of the wider theme on security proportional to risk is that if the individual transactions are not armored then it can be extraordinarily expensive to provide absolutely perfect total infrastructure armoring to protect naked and vulnerable transactions. Session-hiding cryptography especially is not able to absolutely guarantee that naked, vulnerable transactions have 100% coverage and/or be protected from all possible attacks and exploits (including insider attacks) 6.

There was yet another issue with some of the payment-oriented protocols in the mid-90s looking at providing end-to-end strong authentication based on digital signature paradigm. This was the mistaken belief in appending digital certificates as part of the implementation. Typical payment transactions are on the order of 60-80 bytes, and the various payment protocols from the period then appended digital certificates and achieved a payload bloat of 4k to 12k bytes (or a payload bloat of one hundred times). It was difficult to justify an enormous end-to-end payload bloat of one hundred times for a redundant and superfluous digital certificate, so the protocols tended to strip the digital certificate off altogether, leaving the transaction naked and vulnerable during subsequent processing.

My response to this was to demonstrate that it is possible to compress the appended digital certificates to zero bytes, opening the way for x9.59 transactions with end-to-end strong authentication based on digital signatures. Rather than viewing x9.59 as using certificate-less digital signatures for end-to-end transaction strong authentication, it can be considered that an x9.59 transaction appends a compressed zero-byte digital certificate to address the severe payload bloat problem 7.
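The bloat figures quoted above work out as follows (the 70-byte and 7k-byte midpoints are illustrative picks from the stated ranges):

```python
# Payload bloat from appending a digital certificate to a payment message.
txn_bytes = 70          # typical payment transaction, 60-80 bytes
cert_bytes = 7 * 1024   # appended certificate material, 4k-12k bytes

bloat = (txn_bytes + cert_bytes) / txn_bytes
print(f"payload bloat: roughly {bloat:.0f}x")  # on the order of one hundred times
```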

To return briefly to Britain's Chip&Pin woes, consider that the issue of SDA (static data authentication vulnerable to replay attacks) or DDA (countermeasure to replay attacks) is somewhat independent of using a session-oriented implementation. That is, the design still leaves transactions naked and vulnerable at various points in the infrastructure.

Posted by iang at 03:49 PM | Comments (0) | TrackBack

June 04, 2006

CryptoKids, education or propaganda, ECC, speed or agenda capture?

The NSA has a newish site for kids at http://www.nsa.gov/kids/ with a Flash download and a bunch of cartoon characters. It might be fun for kids interested in crypto. Of course it is imbued with current political policies or moralities of the Bush era, and there is a slamming over on Prison Planet.

I think it's quite mild, really. Educating kids is relatively benign as long as they don't cross the line into propaganda. What is of more worry is the continued policy of organised and paid-for propaganda by western governments through all sorts of channels, domestic and foreign. This in my view is unacceptable. In a democratic nation, the people decide such questions and vote. In a dictatorship, the dictator decides and imposes by means of control of the media.

While we are on the subject, Philipp asked me why everyone keeps asking for 16k keys. Well, other than just being perverse in the normal crypto blah blah sense, there turns out to be a reason. I'll leave it to you to decide whether this is a good reason or not.

I discovered this browsing over on Mozo's site in pursuit of something or other. Mozilla are planning to introduce SNI - the trick needed to do SSL virtual hosting - in some near future, as are Microsoft and Opera. But also mentioned was that Mozilla are introducing elliptic curve cryptography, at least into their crypto suite 'NSS'.

ECC is an emerging cryptographic standard which can be used instead of the RSA algorithm. It uses smaller keys than RSA, which means it can be faster than RSA for the same level of cryptographic strength. The US Government is moving away from the RSA cryptosystem, and onto ECC, by the year 2010. See this page from the NSA for more information.

So jumping over to the always engaging NSA's pages on ECC:

... The following table gives the key sizes recommended by the National Institute of Standards and Technology to protect keys used in conventional encryption algorithms like the (DES) and (AES) together with the key sizes for RSA, Diffie-Hellman and elliptic curves that are needed to provide equivalent security.
  Symmetric Key Size (bits)   RSA and Diffie-Hellman Key Size (bits)   Elliptic Curve Key Size (bits)
         80                                  1024                                 160
        112                                  2048                                 224
        128                                  3072                                 256
        192                                  7680                                 384
        256                                 15360                                 521

Table 1: NIST Recommended Key Sizes

To use RSA or Diffie-Hellman to protect 128-bit AES keys one should use 3072-bit parameters: three times the size in use throughout the Internet today. The equivalent key size for elliptic curves is only 256 bits. One can see that as symmetric key sizes increase the required key sizes for RSA and Diffie-Hellman increase at a much faster rate than the required key sizes for elliptic curve cryptosystems. Hence, elliptic curve systems offer more security per bit increase in key size than either RSA or Diffie-Hellman public key systems.

And, if you wish to use AES 256, then the NIST suggested length for RSA is 15360, or 16k in round numbers. The NSA also points out that the equivalent strengths in that area are computationally more expensive, perhaps 20 times as much.
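The table's equivalences can be captured as a simple lookup; the figures are NIST's, taken from the table above:

```python
# NIST equivalent key sizes: symmetric bits -> (RSA/DH bits, ECC bits).
nist_equiv = {
    80:  (1024,  160),
    112: (2048,  224),
    128: (3072,  256),
    192: (7680,  384),
    256: (15360, 521),
}

for sym, (rsa, ecc) in nist_equiv.items():
    print(f"{sym}-bit symmetric: RSA/DH {rsa:>5} bits, ECC {ecc} bits")

# The "16k in round numbers" figure for protecting AES-256 with RSA:
print(nist_equiv[256][0])  # 15360
```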

Does all this matter? Not as much as one would think. Firstly, for financial cryptography, we are not so fussed about the NSA's ability to attack and crack our codes. So the Suite B standard is not so relevant, although it is an interesting sign post to what the NSA thinks is Pareto-secure (or more likely Pareto-complete) according to their calculations.

For protecting both classified and unclassified National Security information, the National Security Agency has decided to move to elliptic curve based public key cryptography. Where appropriate, NSA plans to use the elliptic curves over finite fields with large prime moduli (256, 384, and 521 bits) published by NIST.

And, we'd better not be worried about that, because when the NSA starts cracking the financial codes and sharing that data, all bets in modern democracy are off. The definition of a fascist state is that you are allowed to own stuff, but the government controls that ownership via total control of the financial apparatus. In financial cryptography, we're quite happy to deal with the 128 bit strength of the smaller AES, and 4k RSA keys or less, and rely on warnings about what's reasonable behaviour. It's called risk management.

Further, machines are fast and getting faster. Only at the margin is there an issue, and most big sites offload the crypto to hardware anyway, which perforce limits the crypto sizes to what the hardware can handle (notice how the NSA even agrees that we are still mucking around at 1k keys for the most part).

Literally, if you are worried about key sizes, you are worried about the wrong thing (completely, utterly). So it is important to understand that even though the browsers (IE7 as well, not sure about others) are moving to add ECC, and this involves sexy mathematics and we get to share beers and tall stories with the spooks, this development has nothing to do with us. Society, the Internet, the world at large. It is a strictly USG / NSA issue. In fact:

Despite the many advantages of elliptic curves and despite the adoption of elliptic curves by many users, many vendors and academics view the intellectual property environment surrounding elliptic curves as a major roadblock to their implementation and use. Various aspects of elliptic curve cryptography have been patented by a variety of people and companies around the world. Notably the Canadian company, Certicom Inc. holds over 130 patents related to elliptic curves and public key cryptography in general.

As a way of clearing the way for the implementation of elliptic curves to protect US and allied government information, the National Security Agency purchased from Certicom a license that covers all of their intellectual property in a restricted field of use. The license would be limited to implementations that were for national security uses and certified under FIPS 140-2 or were approved by NSA. ... NSA's license includes a right to sublicense these 26 patents to vendors building products within the restricted field of use. Certicom also retained a right to license vendors both within the field of use and under other terms that they may negotiate with vendors.

Commercial vendors may receive a license from NSA provided their products fit within the field of use of NSA's license. Alternatively, commercial vendors may contact Certicom for a license for the same 26 patents. Certicom is planning on developing and selling software toolkits that implement elliptic curve cryptography in the field of use. With the toolkit a vendor will also receive a license from Certicom to sell the technology licensed by NSA in the general commercial marketplace. Vendors wishing to implement elliptic curves outside the scope of the NSA license will need to work with Certicom if they wish to be licensed.

The NSA is being quite proper and is disclosing it in full. If you didn't follow, here it is: you can't use this stuff without a licence. The NSA has one for USG stuff. You don't.

The RSA algorithm and the related DH family now go head-to-head with a patented and licensed alternative. As a curious twist in fate, this time RSA and friends are on the other side. We fought this battle in the 90s, as the RSA patent was used as a lever to extract rents - that's the point of the patent - but also to roll out agendas and architectures that ultimately failed and ultimately cost society a huge amount of money. (Latest estimate for America is $2.7 bn per year and the UK is up to UKP800 mn. Thanks guys!)

The way I see it, there is no point in anyone using elliptic curve crypto. It could even be dangerous to you to do this - if it results in agendas being slipped in via licensing clauses that weaken your operations (as happened last time). I can't even see the point of the NSA doing it - they are going to have to pay through the nose to get people to touch this stuff - but one supposes they want this for on-the-margin hardware devices that have no bearing on the commercial hard reality of economics.

Indeed, somewhere it said that the Mozo code was donated by Sun. One hopes that these guys aren't trying too hard to foist another agenda nightmare on the net, as we still haven't unwound the last one.

Posted by iang at 12:54 PM | Comments (6) | TrackBack

Courts as Franchises - the origins of private law as peer-to-peer government

Over on Enumerated, Nick Szabo posts twice on the framework of the courts in anglo-norman history. He makes the surprising claim that up until the 19th century, the tradition was one of private courts. Franchises were established from the royal prerogative, and once granted as charters, were generally inviolate. I.e., courts were property.

There were dozens of standard jurisdictional franchises. For example, "infangthief" enabled the franchise owner to hang any thief caught red-handed in the franchise territory, whereas "outfangthief" enabled the owner to chase the thief down outside the franchise territory, catch him red-handed, and then hang him. "Gallows" enabled the owner to try and punish any capital crime, and there were a variety of jurisdictions corresponding to several classes of lesser offenses. "View of frankpledge" allowed the owner to control a local militia to enforce the law. "The sheriff's pleas" allowed the owner to hear any case that would normally be heard in a county court. There were also franchises that allowed the collection of various tolls and taxes.
A corporation was also a franchise, and corporations often held, as appurtenances, jurisdictional franchises. The City of London was and is a corporate franchise. In the Counties Palatine the entire government was privately held, and most of the American Colonies were corporate franchises that held practically all jurisdiction in their territory, sometimes subject to reservations (such as the common law rights of English subjects and the right of the king to collect customs reserved in the American charters). The colonies could in turn grant franchises to local lords (as with the Courts Baron and Courts Leet in early Maryland) and municipalities. American constitutions are largely descended from such charters.

Consider the contrast with the European hierarchical view. Not property but master-servant dominated, as it were. And, some time in the 19th century the European hierarchical view won:

The Anglo-Norman legal idea of jurisdiction as property and peer-to-peer government clashed with ideas derived from the Roman Empire, via the text of Justinian's legal code and its elaboration in European universities, of sovereignty and totalitarian rule via a master-servant or delegation hierarchy. By the 20th century the Roman idea of hierarchical jurisdiction had largely won, especially in political science where government is often defined on neo-Roman terms as "sovereign" and "a monopoly of force." Our experience with totalitarianism of the 19th and 20th centuries, inspired and enabled by the Roman-derived procedural law and accompanying political structure (and including Napoleon, the Czars, the Kaisers, Communist despots, the Fascists, and the National Socialists), as well as the rise of vast and often oppressive bureaucracies in the "democratic" countries, should cause us to reconsider our commitment to government via master-servant (in modern terms, employer-employee) hierarchy, which is much better suited to military organization than to legal organization.

Why is that? Nick doesn't answer, but the correlation with the various wars is curious. In my own research into Free Banking I came to the conclusion that it was stronger than any other form, yet it was not strong enough to survive all-out war - and specifically the desires of the government and populace to enact extraordinary war powers. Which led to the annoying game theory result that central banking was stronger, as it could always pay the nation into total war. If we follow the same line of causality, Nick suggests that the hierarchical government is stronger because it can control the nation into total war. And, if we assume that any nation with these two will dominate, this explains why Free Banking and Franchise Law both fell in the end; and fell later in Britain.

Posted by iang at 09:52 AM | Comments (5) | TrackBack

June 02, 2006

ThreatWatch - the war on our own fears

Several articles on scary, ooo, so scary cyberwar scenarios. We will see a steady stream of this nonsense as the terrorist-watchers, china-watchers and every-other-bogeyman-watchers all combine in a war on our own fears.

A hyperventilating special from the US DHS:

According to cyber-security experts, the terror attacks of 11 September and 7 July could be seen as mere staging posts compared to the havoc and devastation that might be unleashed if terrorists turn their focus from the physical to the digital world.

Scott Borg, the director and chief economist of the US Cyber Consequences Unit (CCU), a Department of Homeland Security advisory group, believes that attacks on computer networks are poised to escalate to full-scale disasters that could bring down companies and kill people. He warns that intelligence "chatter" increasingly points to possible criminal or terrorist plans to destroy physical infrastructure, such as power grids. Al-Qa'ida, he stresses, is becoming capable of carrying out such attacks.

Summarised over on risks, the US DOD also partakes liberally of the oxygen tank:

From the nation that enjoys U.S. Most Favored Nation trade status, and a permanent member of the WTO...

China is stepping up its information warfare and computer network attack capabilities, according to a Department of Defense (DoD) report released last week. The Chinese People's Liberation Army (PLA) is developing information warfare reserve and militia units and has begun incorporating them into broader exercises and training. Also, China is developing the ability to launch preemptive attacks against enemy computer networks in a crisis, according to the document, ...

The tendency for public officials to try and scare the public into more funding is never-ending. The positive feedback loop is stunningly safe for them - if there is a cyber attack, they are proved right. If there isn't a cyber attack, it's just about to happen and they'll be proven right. And every one of our enemies is ... Huff, puff, huff!

Is D.C. ready for terrorist attack? Two unrelated traffic accidents within an hour of each other yesterday in Northeast shut down two major highways during the busy morning commute, causing massive gridlock and seemingly endless delays -- but also providing an ominous warning: What if it had been a terrorist attack?

About the best we can do is patiently point out that what they are talking about doesn't happen because in most cases it is evidently uneconomic. When the economic attack develops, we'll deal with it. Londoners walked home, that one day, and those that were a bit late that morning like me stayed home. The next day everyone went back to work, the same way as before. The next week, nobody noticed. It's not a particularly economic attack.

Some things you deal with by preparation. But other things you just have to let happen, because the attack goes around your preparations. By definition. Can anybody guess what would have happened if instead of a couple of traffic accidents, it was a couple of bombs? All of the greater Washington area would probably enter gridlock, because the authorities would hand it to the terrorists.

Posted by iang at 05:27 PM | Comments (1) | TrackBack

June 01, 2006

Dodgy practices - and how to defend against them with Audits

One of the things that we as society do to protect us against dodgy practices is to employ specialists to prepare considered reports. Often known as audits, these reports solve a particular economic problem for us - it is too expensive for all or even any of us to gain the knowledge, travel to the site and review all the requirements. So we elect a specialist who can do it for all of us, thus saving our scarce resources.

(I stress this economic equation so that those familiar with open governance can compare & contrast.)

The downside of the auditing approach is that we are now beholden to the quality of the audit. How do we ensure that the process is good? In principle, we let institutions such as auditors' associations provide principles, standards of quality, and ethics. In practice, we hope they are followed, but practice often lags principle because of the dramatic costs of audits, and the less than transparent information provided. In effect, the problem is shifted from the company to the audit, resulting in what amounts to two points of failure.

For CAs, a widely used standard is the WebTrust criteria, which have been written by bodies in the US and Canada. This backs into various other documents and standards of those bodies, but tries to narrow down the essentials for CA practice and policy.

It is perhaps interesting to view WebTrust against yesterday's news of VeriSign being sued for alleged misrepresentation of security levels. Here is one snippet from their "WebTrust Program for Certification Authorities:"

Client/Engagement Acceptance
The practitioner [auditor] should not accept an engagement where the awarding of a WebTrust seal would be misleading.
The WebTrust seal implies that the entity is a reputable site that has reasonable disclosures and controls in a broad range of areas. Accordingly, the practitioner would avoid accepting a WebTrust engagement when the entity’s disclosures outside the scope of the engagement are known by the practitioner to be misleading, when there are known major problems with controls not directly affecting the scope of the engagement, or when the entity is a known violator of laws or regulations.

(My emphasis.) Some have found WebTrust controversial because it is (allegedly) permissive of any procedures, as long as you document them. By way of example, it has been commented (alleged) that you can get a WebTrust for spying on your customers, as long as that is what you say you do in your CPS.

(Whether that is true or not I have no idea.)

But, above, it says there in black and white that misleading disclosures are not acceptable. What's more, it states it broadly - even including disclosures _outside the scope of engagement_ which is likely to be a problem for a company as large and spread as VeriSign, as there are a lot of other areas they get into.

If the court case does find that there is misleading selling going on, there are going to be some questions to be asked. Indeed, maybe the auditor should be asking those questions now, or even in the past. Here's some of the questions that spring to my mind.

Should an auditor necessarily be aware of those sorts of disclosures? WebTrust says that the practitioner has to get into the business model of the CA, which necessarily involves understanding the pricing and product models to at least some extent. So it's a definite maybe. I know in my work, I would need to know the different pricing arrangements to form an assessment, but that's just me. Also, as the two products are apparently discriminated in terms of higher security, that would appear to be something the auditor would look at, as the public is relying on the security if nothing else.

Should this affect the current situation? WebTrust is determined on this issue - it is supposed to be a more or less continuous engagement, with updates no more than every 12 months. Further:

During the period between updates, the CA undertakes to inform the practitioner of any significant changes in its business policies, practices, processes, and controls, particularly if such changes might affect the CA’s ability to continue meeting the WebTrust Principles and Criteria for Certification Authorities, or the manner in which they are met. Such changes may trigger the need for an assurance update or, in some cases, removal of the seal until an update examination by the practitioner can be made. If the practitioner becomes aware of such a change in circumstances, he or she determines whether the seal needs to be removed until an update examination is completed and the updated auditor’s report is issued.

In an actual engagement, an auditor can instruct the CA to pull the switch. "Now!" It is unlikely that the auditor will go that far, nor are they likely to pull the Seal. No auditor ever wields the stick that is written into the arrangements; they find other ways to deal with the issue, because any auditor that ever actually did that would not get another engagement. (This of course is part of the reality of auditing, something that is more evident from the Arthur Andersen experience.)

What's more there are plenty of arguments in favour of the practice. E.g., airline seats are quite happily sold at different prices, and the old game with computers is to sell the same machine for different prices but with internal switches set at different speeds. (In the really good old days, you could pay for an engineer to come out and turn the speed doubling switch.) So there are at least some bona fide arguments why this practice is beneficial, leaving aside the fair representation question.

What then should the Auditor do? Initiate a re-engagement? That is one plausible action. Another is simply to drop the client, quietly or otherwise. A company that has to search for a new Auditor is sufficiently warned not to do it so many times that it runs out of potential suppliers. Alternatively, the auditor could simply put pressure on the company to sort out its misleading practices and also to settle any case forthwith.

What should relying parties do? Well, purchasers will think twice, but they do anyway, and this is only likely to add to the rumble of discontent with CAs, not change anything.

What about browser manufacturers? I think that depends on the Auditor's actions. Software manufacturers have already passed much of their reliance (but not their liability) onto the auditor by specifying the WebTrust. So they will now need to sit back and wait for the auditor to do the job, which is what the auditor wanted in the first place.

As long as something is done. If the auditor does nothing, or, equally unconvincingly, does not inform the relying parties of any actions taken, a browser manufacturer might now wonder if the auditor can be relied upon for the very role for which they were selected. "Just exactly what would cause the auditor to respond, if not misleading practices towards the users, the very users that we the software suppliers serve and protect?"

Or, a software supplier might have to re-evaluate its own wider sense of duty of care. As the browsers sit in the front row of any liability for the use of SSL (by dint of embedding and hiding the certs and CAs), we could speculate how this case could spread wider. Could this be pinned on the software suppliers? Again, I can't quite see it, although if there is a rash of cases, including to do with phishing, then it is more plausible as the traditional relationship is not exactly an arms-length one.

Finally, what about the buyout of GeoTrust? Well, GeoTrust owners are likely to want to grab their cash and run faster rather than slower, and such events might conceivably destroy the buyout - although I can't see why it would have a material effect. More importantly, if the anti-trust grumbling took root, the law suit might have more of an impact.

Of course, such musings are definitely above our paygrade. But it is certainly one to watch.

Posted by iang at 05:21 AM | Comments (0) | TrackBack