The returner was invented to give a program a cheap way to guard against blocking when invoking a key that is supposed to be a resume key. A server of diverse interests must guard against returning to a key that is not a resume key. The returner makes this cheap: if the supplied key to return to is not a resume key, the return message is discarded and the server becomes available again.
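The discard behavior can be sketched in a few lines. This is an illustrative model only, with invented names, not KeyKOS code; a real resume key is a kernel object, consumed on use.

```python
class ResumeKey:
    """Stands in for a resume key: one-shot, delivers a reply to a waiting client."""
    def __init__(self):
        self.delivered = None
        self.used = False

    def deliver(self, msg):
        if self.used:          # resume keys are consumed by their first use
            return False
        self.used = True
        self.delivered = msg
        return True

def returner(key, msg):
    """Return msg via key if it is a live resume key; otherwise silently
    discard it. Either way the caller (the server) never blocks."""
    if isinstance(key, ResumeKey):
        key.deliver(msg)
    # not a resume key: message discarded, server immediately available

good = ResumeKey()
returner(good, "reply")      # delivered to the client
returner(object(), "reply")  # bogus key: discarded, no blocking
```

The point of the sketch is the last line: handing the returner a non-resume key costs nothing and leaves the server free.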
The resume key directly implements Landin's continuation concept; it supports tail-call optimization and delivers the other benefits of continuations.
Combined with the mutex logic of the domain (only one thread in a domain at a time), these benefits automatically yield more multiprogramming: when a subroutine passes its continuation (resume key) to the next routine, it becomes available to begin work on its next input. Extended, this idea leads to massive multiprogramming for many stream-oriented applications.
We experimented with several mutex designs, but the mutex implicit in any domain seemed necessary and served most mutex needs.
Join was a late invention that made many simple sorts of multiprogramming easy. The PL/I language has a “TASK” option on the CALL statement, and the JOIN object serves those semantics with minimal fuss.
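Those semantics amount to fork-then-collect. A hedged analogy using Python threads (names invented; this is not the KeyKOS or PL/I interface): the caller forks the work, continues with other business, and later joins to collect the result.

```python
import threading

class Join:
    """Holds a forked computation's result until the forker asks for it."""
    def __init__(self):
        self._done = threading.Event()
        self._result = None

    def post(self, value):
        self._result = value
        self._done.set()

    def join(self):
        self._done.wait()       # block only here, only if not yet finished
        return self._result

def call_task(fn, *args):
    """Analogue of PL/I's CALL fn(...) TASK: run fn concurrently and
    hand back a Join object for collecting the result."""
    j = Join()
    threading.Thread(target=lambda: j.post(fn(*args))).start()
    return j

j = call_task(sum, range(10))
# ... the caller proceeds with other work here ...
result = j.join()
print(result)  # -> 45
```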
Coroutines are as simple as the call-return pattern with this design.
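Python's generators model this symmetry directly: each `send`/`yield` pair resumes the other party, implicitly handing over a resume key, which is all the call-return pattern requires. An illustrative sketch, not KeyKOS code:

```python
def consumer():
    """Each yield 'returns' to the producer with the running total,
    then waits to be resumed with the next item."""
    total = 0
    while True:
        item = yield total
        total += item

c = consumer()
next(c)  # prime: consumer runs to its first yield and waits to be resumed
totals = [c.send(x) for x in (1, 2, 3)]
print(totals)  # -> [1, 3, 6]
```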
A return key never blocks, so a server of a diverse crowd need not fear getting stuck. If a client disappears, the return key to that client becomes DK(0); returning to DK(0) succeeds but the return message is lost, just as if the client's destruction had happened just after the return.
Some object, a server, has a charter to serve clients of wide provenance, where its behavior toward one client depends on its behavior toward others. The server must survive as it processes a transaction; allocators of fungible things, such as space, are an example.

In a simple thread scheme with a stack per thread, there is a conundrum about what to do when a thread's resources are recalled due to apparent looping: the server may be running at the instant faith in the thread is lost. Other events, such as normal scheduling, must likewise avoid putting a shared server on a long CPU ready queue. I suspect that Unix puts such known servers in the kernel, where the stack is not shared.

Keykos supports external servers. Some design process produces a cap-savvy hierarchy that ends up giving servers better meters than any of their clients. This design also handles space banks that know what space has been sold and can reclaim all of it, even when the bank returned space to a client that died while the allocation proceeded. Unlike Unix, this logic may be done recursively, which is necessary for virtual strategies.
More here