The first Fortran restricted subscripts to linear combinations of integer program variables, called ‘index variables’ here. The coefficients were required to be numerals, and thus known at compile time. An additive constant was allowed. Up to three subscripts were possible. Given that the location in memory was a linear combination of the subscript values, it followed that the address of the array element was itself a linear combination of the index variables. The compiler (even the first) could and did compute the coefficients of this implicit linear combination. That computation involved only adds, subtracts and multiplies by known constants. In the first Fortran it was up to the program logic to ensure that the subscript values were in range. If they were, the computation of the element’s memory address was sure to produce an address within the storage allocated to that array, even though some of the intermediate calculations might overflow. Those calculations were all done modulo a power of two greater than the memory size. I suppose that this also worked on ones’ complement machines, where the modulus was one less than a power of two. Some machines distinguish between signed and unsigned multiplies; the high-order half of the product depends on which, but the low-order half does not.
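A minimal sketch of the idea, using a hypothetical array declared A(10,20,30) and a made-up word size of 4: stored column-major, the address of A(I,J,K) is an affine combination of the index variables whose coefficients the compiler can fold at compile time, using only adds, subtracts and constant multiplies.

```c
/* Hypothetical example: Fortran array A(10,20,30), column-major,
   4-byte elements.  The address of A(I,J,K) is the affine form
   base + 4*((i-1) + 10*(j-1) + 200*(k-1); every coefficient
   (4, 40, 800 after folding) is known at compile time. */
unsigned long element_address(unsigned long base, long i, long j, long k)
{
    return base + 4ul * ((unsigned long)(i - 1)
                       + 10ul  * (unsigned long)(j - 1)
                       + 200ul * (unsigned long)(k - 1));
}
```

The point is that once the declaration fixes the bounds, the whole subscript computation collapses to one affine expression per element reference.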

Subscripts in loops often involve an index variable that is incremented each iteration. The increment is known at compile time, so the compiler can compute the difference between array element addresses on successive iterations and compile an add of that constant in place of a multiply on each iteration. This is strength reduction, and the first Fortran did it.
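A sketch of the transformation, with hypothetical function names and a made-up offset expression 8*i + 40: because the index steps by 1, consecutive offsets differ by the constant 8, so the per-iteration multiply can be replaced by an add.

```c
#include <stddef.h>

/* Naive form: a multiply on every iteration. */
void offsets_naive(size_t n, size_t out[])
{
    for (size_t i = 0; i < n; i++)
        out[i] = 8 * i + 40;
}

/* Strength-reduced form: the compiler initializes the offset to its
   value at i == 0 and replaces the multiply with a constant add. */
void offsets_reduced(size_t n, size_t out[])
{
    size_t off = 40;
    for (size_t i = 0; i < n; i++) {
        out[i] = off;
        off += 8;               /* add replaces multiply */
    }
}
```

Both functions produce the same sequence of offsets; only the cost per iteration differs.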

Later languages and later Fortrans relaxed the rule that the coefficients had to be known at compile time, but kept the rule that the coefficients of the index variables had to be constant during the loop. In such cases the compiler emitted instructions to perform at run time the calculation that earlier compilers did at compile time. The principal circumstance where the compiler could do this was in languages where passing an array by reference meant passing the subscript ranges, strides and virtual origin; that information was collectively called ‘the dope vector’.
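A sketch of a dope vector for a two-dimensional array, with hypothetical struct and field names: the callee receives strides and the virtual origin (the address that the element with all-zero subscripts would have, whether or not it is in range), and the affine address computation moves to run time.

```c
#include <stddef.h>

/* Hypothetical dope vector for a 2-D array of doubles.  A real one
   would also carry the subscript ranges for bounds checking. */
struct dope2 {
    char      *virtual_origin;  /* address of "element (0,0)" */
    ptrdiff_t  stride1;         /* bytes between consecutive i */
    ptrdiff_t  stride2;         /* bytes between consecutive j */
};

/* Run-time affine address computation from the dope vector. */
double *elem(const struct dope2 *d, long i, long j)
{
    return (double *)(d->virtual_origin
                      + i * d->stride1 + j * d->stride2);
}
```

The coefficients (the strides) are loop-invariant run-time values rather than compile-time numerals, so strength reduction still applies.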

Later, other expressions that were ‘affine’ in this sense got the benefit of these optimizations as well, even when they were not subscripts.

All of these calculations, at compile time and at run time, were simplified by the fact that the coefficients and constants could be computed modulo the size of the address space, and the answer would be correct despite intermediate values that were correct only ‘modulo the address size’; i.e. overflows could be ignored. Machines such as Stretch and the 1401 could not use these tricks for lack of suitable arithmetic operations.
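A sketch of the modular property, using 32-bit arithmetic as a stand-in for the machine’s address size: intermediate products may wrap, but the low-order half of a product is the same whether the multiply is signed or unsigned, so the final address comes out right as long as it is in range.

```c
#include <stdint.h>

/* Low 32 bits of an unsigned product; wraps modulo 2^32. */
uint32_t low_product_unsigned(uint32_t a, uint32_t b)
{
    return a * b;
}

/* Low 32 bits of a signed product; widened to 64 bits first to
   avoid signed-overflow pitfalls in C, then truncated. */
uint32_t low_product_signed(int32_t a, int32_t b)
{
    return (uint32_t)((int64_t)a * (int64_t)b);
}
```

Interpreting the same bit pattern as signed or unsigned changes the high-order half of the product but never the low-order half, which is all the address calculation keeps.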