There are two prominent network traffic parameters: data size and delay tolerance.
tolerance \ size | 100 bytes                  | 1 GB
-----------------+----------------------------+------------------------------
50 ms            | gaming signals, buy orders | teleconferencing
1000 sec         | e-mail                     | movie to be watched tomorrow
One size does not fit all.
See other demands here.
Another variation in traffic is its value to the end user (code for ‘willingness to pay’).
The value of teleconferencing will vary by orders of magnitude.
This is of course an opportunity for the would-be monopolist, but I hope that this architecture will enable competition among carriers to limit high prices to situations where congestion could not have been foreseen in time to provide capacity.
In such cases high prices are the best solution.
(pardon the ideology)
While it is good to find solutions that scale well (the Unix malloc works well for both 10 bytes and a gigabyte), sometimes one must admit that tactics differ under the hood; malloc does not process 10 bytes as it does 1 GB.
We are discussing here a part of DSR that is under the hood.
Large, delay-tolerant files can fly standby, with less impact on other network users and concomitantly lower fares.
Small, quick messages can pay more per byte, be economical of the physical capacity they use, and still be affordable to the casual user.
The Circuit
Most of the DSR literature has concentrated on datagrams, which are packets with self-contained routing information.
Occasionally we describe circuits, which carry data flows over a pre-established route; the constituent packets need no routing information.
Successive packets form a ‘flow’ thru the network that the network needs to be slightly aware of.
Packets, in such a flow (circuit) across some link, are distinguished from other flows on that link by a ‘channel number’.
The channel number for the same packet on the next link will likely be different.
Channel numbers are allocated collaboratively by the two nodes at the ends of the link.
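Here is a toy Python sketch of how the two ends of a link might collaborate on channel numbers; the even/odd split of the number space is my assumption, not part of the design:

    # One end of a link runs LinkChannels(0), the other LinkChannels(1),
    # so the two allocators can never collide on a channel number.
    class LinkChannels:
        def __init__(self, parity):
            self.parity = parity      # 0 at one end of the link, 1 at the other
            self.next = parity        # next candidate channel number
            self.in_use = set()

        def allocate(self):
            # pick a fresh channel number for a new circuit on this link
            while self.next in self.in_use:
                self.next += 2
            n = self.next
            self.in_use.add(n)
            self.next += 2
            return n

        def release(self, n):
            # return the channel number to the pool when the circuit ends
            self.in_use.discard(n)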
In some places I have suggested that portions of a flow could be automatically channelized without changing the network semantics proper.
I think that this is wrong if we are to support flow management with backpressure ideas, which seem to me to be the only way to avoid the discrepancy between ‘thruput’ and ‘goodput’ that plagues the Internet.
The data originator is the first agent to know that a large amount of data is to be moved; having the network discover the flow empirically is perhaps a bad idea, since the benefits begin too late and the nodes do too much difficult speculative work.
Tymnet used the word ‘backpressure’ to describe its technique of slowing the source of a character stream when the source could produce data faster than some link in the net, or the ultimate recipient could accept it.
Here is the most accurate description of the Tymnet technique that I have.
Here is some flow control lore.
What is a node to do when:
- it has received a packet and the outgoing link is overbooked?
- it is the ultimate recipient and it is not ready to act on the request?
Communication node software must operate as packets emerge from a fiber.
When a gigabyte file is to transit the net thru some node, that node may have to explicitly allocate buffers, CPU time and fiber time for the transfer, or at least establish a rate of serving that flow.
This amounts to planning ahead for seconds or minutes.
Such planning need not be done at the node for it is not especially time critical.
Borrowing Tymnet jargon, we say that a ‘needle’ precedes the new circuit; it carries normal destination path information but is somehow distinguished as heralding a new circuit.
In place of a variable-length return path, the needle has a channel number allocated by the previous node to identify future packets from this circuit.
That number is used to identify the circuit on the link by which the packet just arrived.
This node will, according to the destination path, allocate a channel number for the outgoing link, overwrite the channel number in the packet, deduct some toll, and forward the packet.
Subsequent packets on this new circuit, each with its own money field and payload size, enter with the first channel number and leave with the second, with a money field diminished by a toll.
Packets can also flow in the opposite direction without steering information.
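Here is a Python sketch of one node's part in this; the field names, the flat toll, and the consume-one-hop-per-node treatment of the destination path are my guesses at details left open above:

    from dataclasses import dataclass

    @dataclass
    class Packet:
        chan: int          # channel number, overwritten at every hop
        money: int         # money field, diminished by each toll
        size: int = 0      # payload size
        path: list = None  # destination path; carried only by the needle

    class Node:
        def __init__(self, toll):
            self.toll = toll
            self.table = {}      # (in_link, in_chan) -> (out_link, out_chan)
            self.next_chan = {}  # stand-in for the collaborative allocator

        def alloc_chan(self, out_link):
            n = self.next_chan.get(out_link, 0)
            self.next_chan[out_link] = n + 1
            return n

        def on_needle(self, in_link, pkt):
            out_link = pkt.path.pop(0)   # choose the exit link from the path
            out_chan = self.alloc_chan(out_link)
            self.table[(in_link, pkt.chan)] = (out_link, out_chan)
            pkt.chan = out_chan          # overwrite the channel number
            pkt.money -= self.toll       # deduct some toll
            return out_link, pkt         # forward on the chosen link

        def on_packet(self, in_link, pkt):
            # later packets carry no routing information, only a channel number
            out_link, out_chan = self.table[(in_link, pkt.chan)]
            pkt.chan = out_chan
            pkt.money -= self.toll
            return out_link, pkt

A needle such as Packet(chan=0, money=100, path=["east", "far-east"]) would thread the circuit; every later packet needs only its channel number, money field, and payload.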
There were no Tymnet patents and any Cisco patents have expired.
It seems feasible for a node to do its part in a protocol that does not drop channelized packets that flow thru circuits.
Counter-flow packets must influence upstream packet progress in the net and at the source.
To flesh out this protocol I first consider a simple scheme of reserving fixed buffers for the duration of the circuit.
(Tymnet was more dynamic.)
The needle would nominate a buffer size for each link.
This size is normally related to the latencies of the adjacent links.
When buffers are emptied, the node will issue a counter-flow invitation with a count indicating how many are free.
Such invitations could be batched in frequent link messages citing channel numbers.
Channelized packets will always find a buffer waiting for them.
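Stated as credits, the scheme might look like the following sketch, where one invitation stands for one reserved buffer (my framing, not an established protocol):

    class CircuitCredits:
        # the sender's view of one circuit's reserved buffers on one link
        def __init__(self, buffers):
            self.credits = buffers    # buffers the needle reserved downstream

        def can_send(self):
            return self.credits > 0

        def on_send(self):
            # sending without a credit could force a drop downstream
            assert self.credits > 0, "no free buffer downstream"
            self.credits -= 1

        def on_invitation(self, count):
            # the downstream node emptied `count` buffers; invitations for
            # many circuits can be batched into one frequent link message
            self.credits += count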
Let's work out the circuit thruput as a function of buffer count.
Assumptions:
- 5 short links at each end and a 1000 km (5 msec) link in the middle.
- All link bandwidths are 10Gb/s.
- Packets carry payloads of 2^16 bytes = 2^19 bits.
(A packet passes a point in the fiber in 52 μs.
There is room for 96 such packets in the long fiber!)
Shorter packets are important but not in this analysis.
- We begin with empty buffers and no packets in the fibers.
(There is no need for the sender to wait for acknowledgement of a complete circuit to begin sending, but some node may be unable or unwilling to build its part of the circuit.)
- No ‘cut-thru’: A packet must finish entering before it begins to exit.
From buffer to buffer, over a short link, the time is 52 μs, since a packet must be fully received before it can be forwarded.
Then there is queuing for the CPU, followed by queuing for the exit link; for a lightly loaded system this adds about 8 μs per node.
Eleven such transmissions (ten short links plus the long one) and ten nodes' queuing come to about 660 μs.
Adding the 5 ms of the long fiber, the circuit latency is roughly 5.7 ms.
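A few lines of Python to check this arithmetic:

    RATE = 10e9                 # link bandwidth, bits per second
    PACKET_BITS = 2**19         # payload of 2^16 bytes
    LONG_FIBER = 5e-3           # 1000 km of fiber, one way, in seconds
    QUEUING = 8e-6              # CPU plus exit-link queuing per node

    packet_time = PACKET_BITS / RATE       # ~52 microseconds
    print(LONG_FIBER / packet_time)        # ~96 packets fit in the long fiber
    # eleven store-and-forward transmissions (no cut-thru), ten nodes'
    # queuing, and the long fiber's propagation delay:
    print(11 * packet_time + 10 * QUEUING + LONG_FIBER)   # ~0.0057 s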
A new upstream invitation can be issued only after a downstream acknowledgment has arrived.
There are two buffers allocated to every packet in a link (fiber).
There are two buffers allocated to every ack in a link, but one of those may be shared with its packet, which is already in the next link.
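As a back-of-envelope answer to the thruput question, here is a sketch; the formula, in which a buffer becomes reusable only after its packet has crossed the fiber and the counter-flow invitation has returned, is my reading of the scheme:

    def thruput(buffers, rate=10e9, packet_bits=2**19, one_way=5e-3):
        # one buffer cycle: packet forward across the fiber, invitation back
        cycle = 2 * one_way
        return min(rate, buffers * packet_bits / cycle)

    for b in (1, 96, 192):
        print(b, thruput(b) / 1e9, "Gb/s")
    # 192 buffers, about two per packet in flight, suffice to fill
    # the 10 Gb/s link, matching the two-buffers-per-packet rule above.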
Here is some simulation code that leads me to think that leasing buffers at the ends of an expensive long link is tantamount to reserving capacity on that link, if only because your data will be the only data available to use the link when it is free.
Other node strategies can interfere but perhaps those should be designed with this idea in mind.
This plan leads naturally to bandwidth futures which is a subject of boundless complexity in the real world.
I will try to find a bone-simple convention, perhaps with hooks for more complex contracts.
Here is a scheme for fairly continuous price information that aids a class of planning.
The two buffers at either end of the long link must be simultaneously allocated for backwards error control.
With Reverse Peristalsis one can over-allocate circuit buffers and still keep the no-drop guarantee, but it is not trivial.
I think that the nodes must wager on outcomes here.
From the perspective of the expensive link, one might wonder what the purpose of halting a flow is, despite the inability of the far end to use the data.
Is energy saved by refraining from putting the bits in the fiber?
No, but other standby data that is not doomed may be ready to use the link, albeit at a lesser price.
This economic analysis should be unified with other agoric aspects of DSR!
There is a spectrum of planning by user agents foreseen here.
The occasional small e-mail can move without worrying much about costs or timing (unless you are sending e-mail to Mars).
Tomorrow's movie can and should expend a few cents of agent work finding a suitable time and route on which to receive it.
The latter may reserve capacity in the form of buffers and perhaps other goods.
The former will not.