May 07, 2006

Reliable Connections Are Not

Someone blogged a draft essay of mine on the limits of reliability in connections. As it is now wafting through the blog world, I need to take remedial action and publish it. It's more or less there, but I reserve the right to tune it some :)

When is a reliable connection not a reliable connection?

The answer is - when you really truly need it to be reliable. Connections provided by standard software are reliable up to a point. If you don't care, they are reliable enough. If you do care, they aren't as reliable as you want.

In testing the reliability of a connection, we all point to TCP/IP's contract and how it uses the theory of networking to guarantee delivery. As we shall see, this is a fallacy of extension, and for applications that truly need reliability - let's call that reliability engineering - the guarantee is not adequate.

And it is specifically not adequate for financial cryptography. For this reason, most reliable systems are written to use datagrams, in one sense or another. Capabilities systems are generally done this way, and all cash systems end up this way, eventually. Oddly enough, HTTP was designed around datagrams, and one of the great design flaws in later implementations was the persistent mashing of its clean request-response paradigm onto connections.

To summarise the case for connections being reliable and guaranteed, recall that TCP has resends and checksums. It has a window that monitors packet reception, reorders the packets, and delivers a stream guaranteed to be correctly ordered, with no repeats, where what you got is what was sent.

Sounds pretty good. Unfortunately, we cannot rely on it. Let's document the reasons why the promise falls short. Whether this affects you in practice depends, as we said at the outset, on whether you really truly need reliability.
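
Before the detailed reasons, here is a minimal sketch (Python; the peer address is just an example, any reachable TCP service will do) of what the guarantee actually covers from where the application sits:

```python
# A minimal sketch of the limit of TCP's guarantee, as seen from the
# application. The address below is an assumption; substitute any peer.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("example.com", 80))

# sendall() returning normally means only that the bytes were copied
# into the local kernel's buffer. TCP's resends, checksums and windows
# now work to deliver them, but no success report ever comes back up.
sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")

# If the peer crashes here, bytes in flight are silently lost; a later
# write fails (EPIPE/ECONNRESET), but the application cannot tell which
# bytes arrived. Truly reliable systems add their own acknowledgments.
reply = sock.recv(4096)    # the only real evidence of delivery is an
print(len(reply))          # application-level response like this one
sock.close()
```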

Read the rest...

Posted by iang at May 7, 2006 07:53 AM
Comments

Well, this will give someone who purchased application software that claims security a nightmare. But they are the users, and they will never read this. Of course, the more money they paid, the more secure their lines of communication are - right? I thought so.

Posted by: Jimbo at May 7, 2006 08:38 AM

we had done a high-speed backbone in the mid-80s and had to address lots of these issues.
http://www.garlic.com/~lynn/subnetwork.html#hsdt

in the late 80s & early 90s we were on the XTP (protocol) technical advisory board ... which had a specification that addressed a number of these issues. one of the participants was from the naval surface warfare center (nswc) ... they were looking at using it for shipboard fire control systems ... and were assuming an extremely hostile environment with possibly significant failure scenarios.

it was also billed as high-speed operation ... including reliable datagram delivery.

one of the comparisons done at the time was that TCP/IP requires a minimum 7-packet exchange. there had been work on VMTP (RFC1045) for a "reliable" protocol with a minimum 5-packet exchange; XTP was specifying a reliable protocol with a minimum 3-packet exchange.

misc. past posts mentioning XTP and/or its proposal in ANSI/ISO as high-speed protocol.
http://www.garlic.com/~lynn/subnetwork.html#xtphsp

one of the things that plagued webservers in the early days was that most tcp/ip implementations had assumed relatively long-running sessions. as a result, termination processing involved scanning linear lists (FINWAIT). one of the first mismatches between HTTP (a datagram protocol) implemented on top of TCP (a session protocol) was when some number of webservers found the CPU spending 95% of its time running through FINWAIT processing (the high rate of extremely short sessions resulted in extremely long FINWAIT lists).

when we were asked to consult with this small client/server startup that wanted to do payment transactions on the server
http://www.garlic.com/~lynn/aadsm5.htm#asrn2
http://www.garlic.com/~lynn/aadsm5.htm#asrn3

which is now commonly referred to as e-commerce. a lot of people assumed that you could take the message format definitions from the payment industry's telco circuit-based operation and map them into a tcp/ip packet network. the messages flowed ... but there was none of the traditional telco industry provisioning that had come to be implicitly assumed. an early test had a problem and a trouble call was logged with the payment transaction trouble desk. the trouble desk nominally expected to do first-level problem determination within five minutes elapsed time. three hours later, they were still not able to resolve what was going on and closed it as NTF (no trouble found). another issue was that their SSL technology was dependent on being used in a very specific sequence of processes (which is frequently not observed these days).

out of that we spent several weeks doing a detailed failure mode analysis and coming up with a bunch of documentation and processes as compensating procedures for mapping a telco provisioned circuit operation into a tcp/ip network.

for some of this we were able to draw on earlier experience, having previously done an extremely detailed vulnerability analysis of the tcp/ip protocol as part of turning out our ha/cmp product
http://www.garlic.com/~lynn/subtopic.html#hacmp

where vulnerability included full gamete of operational type failures as well as possibility of active attacks. recent post mentioning another aspect of this
http://www.garlic.com/~lynn/2006e.html#11 Caller ID "spoofing"

Posted by: Lynn Wheeler at May 7, 2006 11:07 AM

The TCP checksum issue is actually a bit worse (there are two 16-bit checksums, one covering the IP header and one covering the TCP header and payload, plus a pseudo-header): it's not a CRC or anything similar, but really just a ones'-complement sum. This means that a pair of offsetting bit errors spaced an integer multiple of 16 bits apart (as can be caused by a broken SRAM in a router, for example) goes by undetected.
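
A small sketch (Python; the byte values are invented purely for the demonstration) of why that is:

```python
# A sketch of the RFC 1071 "Internet checksum": a ones'-complement sum
# of 16-bit words with end-around carry, not a CRC. Offsetting errors
# a multiple of 16 bits apart change the data but not the sum.
def inet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                          # pad odd-length data
    s = sum(int.from_bytes(data[i:i + 2], "big")
            for i in range(0, len(data), 2))
    while s >> 16:                               # fold end-around carries
        s = (s & 0xFFFF) + (s >> 16)
    return ~s & 0xFFFF

good = bytes.fromhex("12340000" "02340000")
bad = bytearray(good)
bad[0] ^= 0x10    # one word: a bit flips 1 -> 0 (0x1234 becomes 0x0234)
bad[4] ^= 0x10    # 32 bits later: same bit flips 0 -> 1 (0x0234 -> 0x1234)

assert bytes(bad) != good
assert inet_checksum(bytes(bad)) == inet_checksum(good)  # undetected!
```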

Posted by: Florian Weimer at May 7, 2006 02:33 PM

@Lynn:

"full gamete of failures" sounds rather icky.

I hope you meant to type 'gamut' :^)

@florian:

Had that happen with NFS traversing an AGS+, back in the day. Did I mention that SunOS shipping with UDP checksumming off by default is evil? :^)

Posted by: Chris Walsh at May 8, 2006 12:51 PM

part of doing ha/cmp product
http://www.garlic.com/~lynn/subtopic.html#hacmp

we had a bunch of stuff that did ip-address take-over when a server/router failed (we also had stuff to do MAC-address take-over) for fail-over.

part of this was dependent on ARP table time-outs of the ip-address->MAC-address mapping (so a different MAC-address could come in for the same ip-address).

turns out that it wasn't working very well. there was a bug in the bsd tahoe/reno implementation that was in use by a large number of different platforms (so any client platform with network code built on the bsd tahoe/reno code base had the problem).

the ip-address to mac-address routine saved the last response from calling the ARP routine. if the next request was for the same ip-address, the code would use the saved response ... rather than (re-)calling the ARP routine. this saved response had no time-out (unlike the table entries managed by the ARP routine), and the ARP flush command had no effect on the saved value either.

a large number of client configurations had heavily asymmetric traffic ... nearly all client traffic going through/to a single server or router (so the next request was nearly always for the same ip-address).

so an administrative work-around was to send a packet from a "different" ip-address to the list of all clients. this would force each client to do a lookup on a different ip-address (resetting the single saved value and forcing a call to the ARP code).
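
a sketch of the pattern (hypothetical python, not the actual bsd tahoe/reno code) for anybody who wants to see the failure and the work-around in one place:

```python
# hypothetical sketch, not the actual bsd tahoe/reno code: a one-entry
# cache in front of the ARP routine, saved with no time-out, so an
# ip-address take-over by a new MAC-address is never noticed.
import time

ARP_TIMEOUT = 20 * 60
arp_table = {                       # the real ARP table: these do time out
    "10.0.0.1": ("aa:bb:cc:dd:ee:01", time.time()),
    "10.0.0.2": ("aa:bb:cc:dd:ee:02", time.time()),
}

last_ip, last_mac = None, None      # the buggy single saved response

def arp_routine(ip):
    """stand-in for real ARP resolution; honors entry time-outs."""
    mac, stamp = arp_table[ip]
    if time.time() - stamp > ARP_TIMEOUT:
        raise KeyError(ip)          # expired: would re-ARP on the wire
    return mac

def resolve(ip):
    global last_ip, last_mac
    if ip == last_ip:               # bug: served forever -- no time-out,
        return last_mac             # and an ARP flush never touches it
    last_ip, last_mac = ip, arp_routine(ip)
    return last_mac

print(resolve("10.0.0.1"))          # cached from here on, even after the
                                    # ip-address moves to a new MAC-address
print(resolve("10.0.0.2"))          # the work-around: a lookup for any
                                    # *different* ip-address resets the cache
```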

Posted by: Lynn Wheeler at May 8, 2006 02:40 PM

Just re-read your note on that and thought of two pieces of data that you might find of interest - one very old, one quite recent (to me).

1. Just because there are standards doesn't mean that anyone implements them. Consider urgent mode AKA Out of Band data in TCP. This has been standardized since day 1. The standards themselves are a bit vague about the details, but you can look at Stevens and get a good exposition. The only problem is ... real-world TCP basically doesn't implement the standards. In an attempt to use OOB data in a fairly obvious way (as a keep-alive "under" an existing protocol), we've found bugs in several TCP stacks, both on different Unix systems and on Windows. These aren't simple implementation bugs - they are part of the basic design of the stacks, and non-trivial to fix. (For example: You can set the Urgent Pointer on any write call, and there is only one Urgent pointer per TCP segment sent. But there is no direct connection between writes and segments, and typically the layers of the stack that form segments have no access to information about what writes were done. So what happens if two writes with Urgent pointers set get mashed into the same segment?)
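
To make the thinness of the interface concrete, a minimal sketch (Python over the BSD socket API, on localhost; the behavior flagged in the final comments is exactly the platform-dependent part):

```python
# A minimal sketch of urgent-mode ("OOB") data through the BSD socket
# API, via Python. Everything runs on localhost on an ephemeral port.
import socket, time

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)

cli = socket.socket()
cli.connect(srv.getsockname())
conn, _ = srv.accept()

cli.send(b"in-band bytes ")
cli.send(b"A", socket.MSG_OOB)   # write #1 with the Urgent Pointer set
cli.send(b"B", socket.MSG_OOB)   # write #2: if both land in one segment,
                                 # only one Urgent Pointer can describe them
time.sleep(0.2)                  # let everything arrive

# Typically returns b"B" only; whether "A" is lost, or quietly demoted
# into the in-band stream, differs between stacks -- exactly the
# vagueness described above.
print(conn.recv(1, socket.MSG_OOB))
print(conn.recv(64))             # the ordinary byte stream
```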

Recently, we've discovered that things are actually much worse "out there". Many common firewalls - the Cisco PIX series is one - are configured "out of the box" to simply turn off the Urgent Pointer Valid flag. So not only is it true that "inspection isn't far from re-writing" - it's here, in most fielded firewalls. (Note that turning this bit off can corrupt data.)

Granted, "no one uses OOB" - a self-fulfilling prophecy: It doesn't work, so no one relies on it, so no one complains that it doesn't work, so it isn't fixed. Who knows what other bits of TCP don't really work as standardized?

2. Stuff deep in the details isn't as transparent as you think. Many, many years ago, in the early days of DECnet, you would transfer large files over the DEC internal network and, at the end, get a "DAP CRC checksum failure". DAP was the remote file access protocol, which ran a 32-bit CRC over the entire session and checked it on close. To the cognoscenti, this was almost always the result of a known hardware problem: Many of the machines on the DEC network in those days were VAX 11/780's, which connected to Ethernets using cards that plugged into Unibus adapters. The Unibus had no check bits in the hardware and was known to drop bits when power supplies were marginal. And ... the early Ethernet cards were power hogs. DECnet used hop-by-hop checksums, relying on the hardware checksum for links, like Ethernet, that provided it - but once the data got off the card onto the Unibus, it was "in play". That's why the end-to-end DAP checksum was there - and it ended up serving as a good hardware diagnostic! (If you knew about this stuff, when you saw a DAP CRC failure, you traced the route your data had followed and constructed a few test connections to isolate the failing system. Many DEC system managers got these mysterious email messages out of the blue, telling them to check the voltage margins on their system's Unibus adapters!)

IP does end-to-end checksums, but these days, who can tell what "end-to-end" means? Proxies, firewalls, forwarders of various sorts - they all need to regenerate checksums, which means the data is unprotected by a checksum for some portion of its flight. Because of this, any attempt to calculate the probability of accepting bad data as good by looking at properties of the IP checksum is meaningless.
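
A toy illustration of that hole (Python; the "middlebox" bit-flip and the payload are invented for the example):

```python
# Toy sketch: once a proxy/firewall regenerates the checksum, corruption
# that happened inside the box leaves with a perfectly valid checksum.
def inet_checksum(data: bytes) -> int:
    s = sum(int.from_bytes(data[i:i + 2], "big")
            for i in range(0, len(data), 2))
    while s >> 16:
        s = (s & 0xFFFF) + (s >> 16)
    return ~s & 0xFFFF

original = b"transfer $10"                  # even-length toy payload
ck_sender = inet_checksum(original)         # the sender's evidence

mangled = bytearray(original)
mangled[10] ^= 0x02                         # a bit drops inside the box:
                                            # "$10" silently becomes "$30"
ck_proxy = inet_checksum(bytes(mangled))    # ...and a fresh checksum is
                                            # computed on the way out

# The receiver only ever sees (mangled, ck_proxy), and the check passes;
# ck_sender, the checksum that covered the real data, never made it.
assert inet_checksum(bytes(mangled)) == ck_proxy
assert ck_sender != ck_proxy
```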

-- Jerry

Posted by: Jerrold Leichter at July 10, 2006 11:09 AM
