Some memory technologies, especially core, had destructive reads. Externally there were separate memory read and write operations, while internally there were separate read and write half-cycles, called ‘cycles’ in this note, but the connection between external operations and internal cycles was not the obvious one. A read cycle left zeros in the medium even as it captured the previous data from the medium. A write cycle was thus required to restore the information in memory for conventional RAM semantics, but the write cycle, by the nature of the magnetic core logic, was actually an OR-to-memory. That was suitable because at that point the memory word had been set to all zeros. This restoring write is called the regeneration cycle, and meanwhile the information resides in the memory register, or ‘regen register’. A write operation required a read cycle, to put the necessary zeros in the cores, and then the regen cycle to put the ones there. Here we describe some aspects of these mechanisms more closely for the common 3D core.
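
To make the two half-cycles concrete, here is a small sketch in C of the scheme just described. The names, the word size, and the single shared regen register are inventions for illustration; no particular machine is implied.

    #include <stdint.h>

    #define WORDS 4096
    typedef uint16_t word_t;        /* a hypothetical 16-bit core memory */

    static word_t core[WORDS];      /* the cores themselves */
    static word_t regen;            /* the memory register, or 'regen register' */

    /* Read cycle: destructive.  The previous contents end up in the regen
       register and the addressed cores are left all zeros. */
    static void read_cycle(unsigned addr) {
        regen = core[addr];
        core[addr] = 0;
    }

    /* Write cycle: by the nature of the core logic this is an OR into
       whatever the cores already hold (normally all zeros after a read). */
    static void write_cycle(unsigned addr) {
        core[addr] |= regen;
    }

    /* Conventional RAM semantics, each built from the two half-cycles. */
    static word_t mem_read(unsigned addr) {
        read_cycle(addr);           /* capture the data, leave zeros */
        write_cycle(addr);          /* regeneration: put the ones back */
        return regen;
    }

    static void mem_write(unsigned addr, word_t data) {
        read_cycle(addr);           /* only to put the necessary zeros there */
        regen = data;
        write_cycle(addr);          /* put the ones there */
    }

With these primitives, storing 0 (item 4 of the list below) is a bare read_cycle, and OR to memory (item 1) loads the regen register and performs a bare write_cycle.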

Here are some situations where one can profitably exploit the more primitive cycles for unconventional RAM semantics:

  1. If the desired operation is “OR to memory” then it suffices to omit the read cycle.
  2. If the operation is to add to memory then the addition can be done between the read cycle and the write cycle. (Ditto many other to-memory ops; see the sketch after this list.)
  3. If the operation is to shift memory by a small number of words then that many words must first be cleared at the destination, and then one read cycle per source word and one write cycle per destination word suffice for the rest. This leaves 0’s in the vacated locations, which may be OK.
  4. The common operation to store 0 is merely a bare read cycle.
  5. Sometimes a compiler can use the less expensive load-and-leave-zero op such as on the last use of a temporary. Such an op could free a cache line too.
  6. Similarly code such as x = (… x …) can fetch and store x with abbreviated commands.
  7. Support a user-mode instruction to swap memory with a register.
  8. User-mode lock instructions, such as compare-and-swap or test-and-set. (See the sketch after this list.)
  9. Read a word and turn a bit on in that word. Some machines used this when fetching a page table entry to mark the entry ‘used’. With care and careful documentation the page map can write the dirty bit back into the entry with a bare write cycle, since the kernel is on its honor not to change that entry while it might be cached in the TLB.
  10. See the application to cache design below.
Most of these come under the term “read-modify-write”.
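
Several of these are just read-modify-write spelled out with the half-cycles. Continuing the sketch above, with the same invented names and no particular instruction set in mind, items 2, 7, 8, and the dirty-bit half of item 9 might look like this:

    /* Item 2: add to memory.  The addition happens while the word sits in
       the regen register, between the read cycle and the write cycle. */
    static void add_to_memory(unsigned addr, word_t x) {
        read_cycle(addr);           /* old value in regen, cores now zero */
        regen += x;                 /* any other to-memory op fits here */
        write_cycle(addr);
    }

    /* Item 7: swap memory with a register. */
    static word_t swap(unsigned addr, word_t x) {
        read_cycle(addr);
        word_t old = regen;
        regen = x;
        write_cycle(addr);
        return old;
    }

    /* Item 8: test-and-set, again a plain read-modify-write. */
    static int test_and_set(unsigned addr) {
        read_cycle(addr);
        int was_set = (regen != 0);
        regen = 1;
        write_cycle(addr);
        return was_set;
    }

    /* Item 9, dirty bit: a bare write cycle ORs the bit into the entry
       without disturbing the rest, provided the entry has not moved. */
    static void set_dirty(unsigned pte_addr, word_t dirty_bit) {
        regen = dirty_bit;
        write_cycle(pte_addr);
    }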

Modern DRAMs (and F-RAMs) do much the same. Reading a row of a DRAM bank is highly analogous to the read cycle for core, or for Williams tubes, for that matter. The bits are ruined in their home locations, for their charge is used to drive a signal down the bit lines to the sense amps and thence to a register that keeps the only copy of the data until the end of the read operation. The other half of the cycle is called ‘precharge’ for reasons I can’t explain; ‘recharge’ would describe the action better, as it puts the charge back into the capacitors which are the home locations of the memory bits.

The same tricks would work for DRAM but the modern cache generally gets in the way. Perhaps split cycle tricks work in the cache proper, for some cache designs. ECC ruins tricks 1 & 2.
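
For trick 1, at least, the difficulty is easy to see: the check bits are a function of the whole word, which a bare write cycle never sees. A toy demonstration, with a single even-parity bit standing in for a real SECDED code and all names invented:

    #include <stdio.h>
    #include <stdint.h>

    /* One even-parity bit stands in for a full SECDED code. */
    static unsigned parity(uint32_t w) {
        unsigned p = 0;
        while (w) { p ^= 1u; w &= w - 1u; }    /* count set bits mod 2 */
        return p;
    }

    int main(void) {
        uint32_t mask  = 0x0Fu;                /* the bits to OR into memory */
        uint32_t old_a = 0xF0u, old_b = 0xE0u; /* two possible old contents */
        /* The check bit that must end up in memory depends on the old
           contents, which a bare write cycle never reads: */
        printf("parity(old_a|mask)=%u  parity(old_b|mask)=%u\n",
               parity(old_a | mask), parity(old_b | mask));
        return 0;
    }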

Here is an idea that I have not seen attempted. If the RAM has a cache in front of it, then do only the read cycle upon loading the cache line and perform only the write cycle when the cache line is retired. Each transfer between RAM and cache thus requires only half of a RAM cycle. This might require some engineering of the RAM cells, in addition to the RAM control, to ensure that the state between a read cycle and a write cycle remains suitably stable over longer periods of time.
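
A rough sketch of the scheme, continuing the earlier illustration with a single invented cache line: filling the line costs only the read cycle, and the RAM location then holds zeros until the line is retired, so every retirement, dirty or not, must perform the write cycle.

    /* A one-line 'cache' in front of the split-cycle RAM sketched above.
       While the line is held here the RAM location holds only zeros, so
       the cache is the sole copy of the data.  Names are invented. */
    static struct {
        int      valid;
        unsigned addr;
        word_t   data;
    } line;

    static void retire_line(void) {
        if (line.valid) {
            regen = line.data;
            write_cycle(line.addr); /* the only write half-cycle needed */
            line.valid = 0;
        }
    }

    static void fill_line(unsigned addr) {
        retire_line();              /* dirty or clean, the old line must go back */
        read_cycle(addr);           /* the only read half-cycle needed */
        line.valid = 1;
        line.addr  = addr;
        line.data  = regen;
    }

    static word_t cached_read(unsigned addr) {
        if (!line.valid || line.addr != addr)
            fill_line(addr);
        return line.data;
    }

    static void cached_write(unsigned addr, word_t x) {
        if (!line.valid || line.addr != addr)
            fill_line(addr);
        line.data = x;              /* the RAM is touched only at retirement */
    }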

I recount here how the CDC Star 100 used split cycle in its memory map.

The CDC 6600 “Exchange Jump” operation was a split-cycle core operation which stored old user program state while loading new user program state at the same locations.