TCP’s congestion control (the “slow start” scheme) is end-to-end, which means that retransmissions further congest the net just when it is already congested. It also means that congested links deep in the net are unable to exploit brief dips in the load. They must limit their flow (by discarding packets) so that peak demand can be accommodated, which leaves significant capacity unused just when capacity is dear. When the peaks are not accommodated, extra traffic is generated in the form of retransmissions, at least on the neighboring links, which may themselves be congested.

Commonly the flow’s most restricted link is the last before the destination. If I load a web page while I am also fetching a large file, the node feeding my last link will likely discard packets of the file transfer, at least if the last link was already full. The TCP congestion algorithm at the sending end will presumably back off for a while when it learns of the lost packets. This squanders the capacity of my link for that duration, and the dead loss cannot be made up. As best I can tell the Stream Control Transmission Protocol has these same problems, while providing attractive new properties beyond TCP. The problem is lower in the protocol stack; indeed it may be in the very layering of the protocol stack.
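Here is a minimal sketch, not a faithful TCP model, of the last-link effect described above: one AIMD-style sender shares a final link with a brief competing burst, and after the burst causes drops the sender backs off, leaving the link partly idle while the window rebuilds. All the numbers (capacity, queue size, burst timing) are invented for illustration.

```python
# Toy discrete-time model of a bottleneck last link fed by an AIMD sender.
# Illustrative only; the constants below are made up.

LINK_CAPACITY = 10            # packets the last link can deliver per tick
QUEUE_LIMIT   = 20            # buffer at the node feeding the last link
BURST_TICKS   = range(50, 55) # ticks during which a web-page burst arrives
BURST_SIZE    = 30            # extra packets per tick during the burst

def simulate(ticks=200):
    cwnd = 1.0       # file transfer's window, in packets per tick
    queue = 0        # packets waiting at the node feeding the last link
    idle = 0         # capacity the last link could not use (the "dead loss")
    delivered = 0
    for t in range(ticks):
        # file sender offers cwnd packets; the web burst adds more briefly
        offered = int(cwnd) + (BURST_SIZE if t in BURST_TICKS else 0)
        # the node enqueues what fits and discards the rest
        accepted = min(offered, QUEUE_LIMIT - queue)
        dropped = offered - accepted
        queue += accepted
        # on loss the sender halves its window; otherwise it probes upward
        if dropped > 0:
            cwnd = max(1.0, cwnd / 2)
        else:
            cwnd += 1.0
        # the last link drains the queue
        sent = min(queue, LINK_CAPACITY)
        queue -= sent
        delivered += sent
        idle += LINK_CAPACITY - sent   # unused capacity this tick
    return delivered, idle

if __name__ == "__main__":
    delivered, idle = simulate()
    print(f"delivered={delivered} packets, "
          f"unused last-link capacity={idle} packets")
```

Running it shows the idle count growing in the ticks just after the burst, when the queue has drained but the sender is still rebuilding its window.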
I do not know the implementation status of RFC 3168 (Explicit Congestion Notification) or of the subsequent fixes to thwart cheaters.
Networks with link-by-link back-pressure, such as Tymnet, become more efficient under load, rather than less.
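Below is a minimal sketch of link-by-link back-pressure on a chain of nodes, loosely in the spirit of Tymnet; the topology, buffer sizes, and rates are invented. Each node forwards only when the next hop has buffer room, so a slow last hop pushes back all the way to the source instead of discarding packets, and queued traffic is delivered the moment the congestion lifts.

```python
# Toy hop-by-hop back-pressure model: refusal propagates upstream,
# nothing is dropped, nothing needs retransmission.  Illustrative only.
from collections import deque

def simulate(ticks=100, hops=4, buf_size=4, source_rate=2):
    queues = [deque() for _ in range(hops)]
    generated = refused = delivered = 0
    for t in range(ticks):
        # the last hop is slow (congested) for the first half, then recovers
        sink_rate = 1 if t < ticks // 2 else 3
        # the destination drains the final queue
        for _ in range(sink_rate):
            if queues[-1]:
                queues[-1].popleft()
                delivered += 1
        # forward hop by hop, downstream first, only into available space
        for i in range(hops - 2, -1, -1):
            if queues[i] and len(queues[i + 1]) < buf_size:
                queues[i + 1].append(queues[i].popleft())
        # the source is simply refused when the first buffer is full
        for _ in range(source_rate):
            if len(queues[0]) < buf_size:
                queues[0].append(t)
                generated += 1
            else:
                refused += 1
    return generated, refused, delivered

if __name__ == "__main__":
    g, r, d = simulate()
    print(f"generated={g} refused_at_source={r} delivered={d} dropped=0")
```

While the last hop is slow the source is throttled at once, and when the sink speeds up the packets already queued near the destination go out immediately, which is the sense in which such a network can exploit brief dips in the load.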
Also the incentives are wrong: a sender that ignores congestion signals gains bandwidth at the expense of those that comply.
Vague Back-pressure proposal
Fractal Transmission Demand
The proposed Bundle Protocol addresses some of these issues.
Larry Roberts’s idea
Bufferbloat