Most computing environments provide some form of efficient stack. This may take the form of a convention, which a program is free to ignore, that a particular register locates the top of the stack and that ample room is available there for more temporary storage. Code collected together from several code-producing technologies may agree on little beyond how to convey the stack location upon calls and returns. Here are some notes on the history of the stack.
Attempts have been made to treat an interrupt as a routine call that does not arise from the current instruction stream. This assumes, however, that the interrupted program has left the stack pointer referring to a place where writes are possible. Some Unixes crashed when a user program put an invalid address in the stack pointer; otherwise useful programs did this while ignoring stack conventions.
Natural languages subsume some stack mechanism. A description of how to do something may well specify doing something else. This seems to be how we progress towards a goal—with a collection of unfinished sub-goals.
Supplanting the concepts of stack, call, and return with CPS supports the old patterns and several others as well, such as co-routines, which were at odds with the old stack foundations. Exception handling is also expressible in CPS. For most programs it is necessary to accommodate the conventional stack for its great efficiency as well as its familiarity. The task is thus to harness the stack and yet allow the freedom and flexibility of CPS.
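As a minimal sketch, not from the original and with all names invented for illustration, here is how call/return and exception handling can both be expressed in CPS, using Python closures as continuations:

```python
# Continuation-passing style: each function receives an explicit
# continuation `k` (the rest of the computation) instead of returning.

def add_cps(a, b, k):
    k(a + b)

def square_cps(x, k):
    k(x * x)

# Ordinary call/return becomes chained continuations: compute (2 + 3)^2
# and hand the result to whatever continuation the caller supplies.
def compute(k):
    add_cps(2, 3, lambda s: square_cps(s, k))

# Exception handling in CPS: pass a second continuation for the error
# path instead of unwinding a stack.
def div_cps(a, b, ok, err):
    if b == 0:
        err("divide by zero")
    else:
        ok(a // b)
```

Because control is carried in the continuation arguments rather than in a return address on the stack, the same style accommodates co-routines and other transfers that the stack discipline forbids.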
Rick Mascitti proposed that a copy of the stack was a suitable reification of a continuation. I was shocked but eventually accepted the idea. This was for Scheme, where a continuation may be invoked more than once. I am not convinced that invoking continuations more than once is of much use, and indeed one must often write extra code to ensure that this does not happen.
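That extra code can be small. Here is a hedged sketch, with the wrapper name invented for this note, of enforcing one-shot behavior on a continuation represented as a Python callable:

```python
# A one-shot guard: wrap a continuation so that a second invocation is
# detected rather than silently resuming a computation twice.

def one_shot(k):
    called = False
    def guarded(value):
        nonlocal called
        if called:
            raise RuntimeError("continuation invoked more than once")
        called = True
        return k(value)
    return guarded
```

A guarded continuation behaves normally on its first call and raises on any later call, which is the discipline a copied-stack implementation can exploit.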
The paper “Representing Control in the Presence of One-Shot Continuations” covers this idea and more very well.