I am gathering here unorganized facts and claims to support the idea that network switches should buffer data and indeed send upstream desist signals. Such signals must, of course, cross the interface to the network. This runs contrary to the thrust of current optical router designs.

Wires and fibers have definite bandwidth. Computer-generated communication demand is usually highly fractal and seldom has a natural bandwidth requirement. This is a very hard problem that most network designers try to ignore. It is common to hear, at the beginning of a talk, the admission that demand is fractal, and then to hear no more on the subject. Often one end or the other of a data flow is constricted by some narrow-band bottleneck, which gives the designers an adequate reason to assign a bandwidth to the data flow. Be sure, however, that some hardware engineer is making rapid progress in eliminating that bottleneck! The problem will get worse.
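To make “fractal” a little more concrete, here is a small sketch, assuming the usual model of such traffic: a handful of on/off sources whose burst and idle times are heavy-tailed (Pareto). The parameters, the names, and the Pareto model itself are my illustration, not anything measured.

```python
import random

def pareto(alpha, xmin):
    # Heavy-tailed duration in milliseconds; 1 - random() avoids a zero draw.
    return xmin / ((1.0 - random.random()) ** (1.0 / alpha))

def bursts(total_ms, alpha=1.5):
    """Yield (start_ms, duration_ms) for one on/off source sending at line rate."""
    t = 0.0
    while t < total_ms:
        on = pareto(alpha, 1.0)
        yield t, min(on, total_ms - t)
        t += on + pareto(alpha, 1.0)          # burst, then heavy-tailed silence

def aggregate(n_sources=30, total_ms=10_000.0, bin_ms=10.0, rate_mbps=100.0):
    """Offered load per time bin (megabits) summed over n on/off sources."""
    bins = [0.0] * int(total_ms / bin_ms)
    for _ in range(n_sources):
        for start, dur in bursts(total_ms):
            b = int(start / bin_ms)
            if b < len(bins):
                # Credit the whole burst to its starting bin -- coarse, but
                # enough to show that the peak-to-mean ratio stays large.
                bins[b] += rate_mbps * dur / 1000.0
    return bins

if __name__ == "__main__":
    load = aggregate()
    print(f"mean {sum(load)/len(load):.1f} Mb per 10 ms bin, peak {max(load):.1f}")
```

Aggregating more sources smooths telephone-style traffic nicely; with heavy tails the peaks persist, which is the point.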

Telephone engineers designed many of the early network protocols. ATM tries to ascribe a bandwidth to most data flows. Voice was presumed to be an early large ATM customer, so this made some sense. It was almost never right for computer traffic, however.

Much money is now being invested in developing totally optical networks with the faith that most of the data arriving at a switch will find a profitable egress from the switch in the few nanoseconds that the data can loiter in glass. I am highly skeptical!
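Some back-of-the-envelope arithmetic, with numbers of my own choosing, shows how little time that is. The only purely optical buffer is a length of fiber, and holding even one packet as light in flight takes a surprising amount of it:

```python
# Back-of-the-envelope only; the packet size and bit rates are my assumptions.
C_FIBER = 2.0e8   # light in fiber travels at roughly 2e8 m/s, about 0.2 m per ns

def delay_line_metres(packet_bytes, link_gbps, packets=1):
    """Metres of fiber needed to hold `packets` packets purely as light in flight."""
    packet_seconds = packet_bytes * 8 / (link_gbps * 1e9)
    return C_FIBER * packet_seconds * packets

for gbps in (10, 40, 100):
    m = delay_line_metres(1500, gbps)
    print(f"{gbps:>3} Gb/s: one 1500-byte packet occupies ~{m:.0f} m of fiber")
# ~240 m at 10 Gb/s, ~60 m at 40 Gb/s, ~24 m at 100 Gb/s -- and a queue of a
# thousand packets would need kilometres of delay line per port.
```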

Not only is the demand fractally distributed in time (bursty), it is almost never foreseen. If there were even a few milliseconds of warning it might be possible to plan ahead and select a route where collisions would be less likely. In most current networks the strategy is to discard data when some outgoing link is oversubscribed. There is a “hot potato” routing scheme that tries to forward a packet out over the best link but does in any case send it out over some link. Since there is as much output bandwidth as input bandwidth, this is always possible. There are rumors that this is unreasonably effective. Perhaps someone understands why.
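As I understand hot potato routing, it amounts to something like the sketch below: every packet admitted in a time slot prefers its best output link, and if that link is taken it is deflected onto some other free link rather than discarded. The slot-at-a-time framing and the names are mine, not a description of any particular router.

```python
def assign_outputs(packets, links):
    """
    packets: list of (packet_id, preferred_link); at most len(links) packets
             arrive per slot, since output bandwidth >= input bandwidth.
    links:   list of output link names.
    Returns {packet_id: link} with every packet assigned and none discarded.
    """
    free = set(links)
    assignment = {}
    deflected = []
    for pid, want in packets:
        if want in free:
            free.remove(want)
            assignment[pid] = want          # best link was available
        else:
            deflected.append(pid)           # will take whatever is left
    for pid in deflected:
        assignment[pid] = free.pop()        # hot potato: send it *somewhere*
    return assignment

print(assign_outputs([("a", "east"), ("b", "east"), ("c", "west")],
                     ["east", "west", "north"]))
# {'a': 'east', 'c': 'west', 'b': 'north'} -- packet 'b' is deflected, not dropped.
```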

Tymnet was designed to conserve long-link capacity, which was expensive. Most Tymnet traffic was directed towards teletypes and the like, which provided a convenient bottleneck from the perspective of the network designer. Yet we did not want to transmit data and then retransmit it because of congestion-induced loss. We buffered data in the nodes and sent upstream signals to stop the flow. These signals caused the timeshared host to cease scheduling the program that was producing the flow. The same thing happens with a telnet session, but there it is part of the telnet protocol and not the network protocol. I do not see a trivial way to translate this idea to GHz networks, but I think it is possible.
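A rough sketch of what that Tymnet-style backpressure looks like in a node; the water marks and the names of the control messages are my invention, not Tymnet's actual parameters.

```python
from collections import deque

class Node:
    HIGH_WATER = 32   # packets; assumed values, not Tymnet's real thresholds
    LOW_WATER = 8

    def __init__(self, name, send_upstream):
        self.name = name
        self.buffer = deque()
        self.stopped = False
        self.send_upstream = send_upstream   # callable that delivers control messages

    def receive(self, packet):
        self.buffer.append(packet)           # store, never discard
        if len(self.buffer) >= self.HIGH_WATER and not self.stopped:
            self.stopped = True
            self.send_upstream("DESIST")     # ask the sender to pause the flow

    def forward_one(self, downstream):
        if self.buffer:
            downstream(self.buffer.popleft())
        if len(self.buffer) <= self.LOW_WATER and self.stopped:
            self.stopped = False
            self.send_upstream("RESUME")     # the flow may be scheduled again
```

At the edge of the network the DESIST would, as it did then, translate into descheduling the program producing the flow; the hard part, presumably, is getting the signal upstream fast enough at GHz speeds.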

Slow start is the promulgated standard for avoiding the sending of packets that are liable to be discarded before reaching their destination.
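For reference, the shape of slow start (as standardized for TCP; see RFC 5681) is roughly this: the congestion window starts at a segment or so and doubles each round trip until it reaches a threshold or a loss occurs, after which growth turns linear. A minimal sketch, simplified from the real per-ACK behavior:

```python
def slow_start(cwnd=1, ssthresh=64, rtts=10):
    """Yield the congestion window (in segments) at the start of each RTT."""
    for _ in range(rtts):
        yield cwnd
        if cwnd < ssthresh:
            cwnd = min(cwnd * 2, ssthresh)   # exponential growth phase
        else:
            cwnd += 1                        # congestion avoidance: linear growth

print(list(slow_start()))   # [1, 2, 4, 8, 16, 32, 64, 65, 66, 67]
```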

Much installed fiber is still dark today. (P.S. That was in 2004. I hear that this is no longer so in 2011.) To me that means that if we work very hard we might come up with a scheme soon enough to use the fiber efficiently once it is mostly lit up. When fiber bandwidth is again dear and link queueing arises, it is better to store packets than to discard them, if only to reduce worst-case latency, that is to say jitter.