### Eigenvalues and such

All of the following is presented nicely in a small book by von Neumann & Richtmyer which I cannot now track down.

Fourier discovered that nearly arbitrary functions can be represented as sums of sines and cosines of various wavelengths. He further noted that when a rod’s temperature varied as the sine of the distance along the rod, the magnitude of this variation decayed exponentially with time, with a factor in the exponent that depended on the wavelength of the sine. Noting further that a sum of solutions is itself a solution, he had found a powerful way to compute the future heat distribution along a rod from the current distribution.

Why sines and cosines? They alone are the functions f that solve ∂²f/∂x² = λf for particular values of λ.
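A quick numerical sketch of this fact: sin(kx) satisfies the equation above with λ = −k². The value of k, the step h, and the sample points below are arbitrary illustrative choices.

```python
import math

# Check numerically that f(x) = sin(k x) satisfies f'' = lam * f
# with lam = -k**2, using a centered second difference for f''.
k = 3.0
h = 1e-4  # step for the centered second difference

def f(x):
    return math.sin(k * x)

for x in (0.3, 1.1, 2.5):
    second_diff = (f(x - h) - 2.0 * f(x) + f(x + h)) / h**2
    ratio = second_diff / f(x)      # should be close to -k**2
    assert abs(ratio - (-k * k)) < 1e-3
```

The same ratio appears at every sample point, which is exactly the eigenfunction property: differentiating twice merely rescales the function.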

More generally, shapes other than rods have their own functions, in place of sines and cosines, in which solutions to the heat equation can be expressed. These functions are called eigenfunctions of the diffusion operator. Each eigenfunction f has an associated eigenvalue λ, connected by the formula above. Thinking about these functions can be useful even when we do not bother to discover their nature in detail.

Abstract vectors provide a powerful tool in this analysis. We bundle the many temperatures, one per zone, as the components of a vector in a vector space with as many dimensions as there are zones. The concept of eigenfunction then becomes that of the eigenvector.

If ϕ is the temperature at some time and some place then we can write
∂tϕ = ∇²ϕ
to express roughly that it is getting warmer here to the extent that it is warmer nearby on the average. In flat 2D space this might read
∂tϕ = ∂²ϕ/∂x² + ∂²ϕ/∂y².

We can write this as ∂tϕ = Aϕ, taking A to be a linear operator. If ϕ were a real number instead of an abstract vector and A were a real number instead of an operator, then we would solve this equation by writing
ϕ(t) = e^(tA)ϕ(0).
If we expand the indicated exponential in a Taylor series we see powers of A. If we take these powers to mean operator composition or matrix multiplication, and the plus signs to mean vector addition, then the expression on the right is meaningful. It also provides a solution to the differential (or difference) equation.
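A minimal sketch of this, with a small second-difference matrix standing in for A and the Taylor series for e^(tA) summed directly. The matrix size, the 30-term cutoff, and all names are my choices, not the author's.

```python
# Apply exp(t*A) to a vector by summing the Taylor series
# v + tA v + (tA)^2 v / 2! + ...

def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def expA_times(A, v, t, terms=30):
    """Sum the Taylor series for exp(t*A) applied to v."""
    result = list(v)
    term = list(v)
    for n in range(1, terms):
        term = [t / n * x for x in mat_vec(A, term)]   # (tA)^n v / n!
        result = [r + x for r, x in zip(result, term)]
    return result

# The second-difference (discrete diffusion) operator on four zones,
# with the temperature held at zero just outside both ends.
A = [[-2, 1, 0, 0],
     [1, -2, 1, 0],
     [0, 1, -2, 1],
     [0, 0, 1, -2]]
phi0 = [0.0, 1.0, 1.0, 0.0]

phi = expA_times(A, phi0, t=0.5)

# Verify that the series solves the equation: d(phi)/dt = A phi,
# checking the time derivative with a centered difference.
h = 1e-5
lhs = [(p - m) / (2 * h) for p, m in
       zip(expA_times(A, phi0, 0.5 + h), expA_times(A, phi0, 0.5 - h))]
rhs = mat_vec(A, phi)
assert all(abs(a - b) < 1e-6 for a, b in zip(lhs, rhs))
```

The final check confirms the claim in the text: the series, read with matrix powers and vector sums, really does satisfy ∂tϕ = Aϕ.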

The above paragraph holds whether ϕ is a vector with a component per zone or a member of the Hilbert space of temperature distribution functions. We proceed here with only the former, finite dimensional, case.

There is another coordinate system in our abstract space in which A is a diagonal matrix. All of this magic stems from the spectral theorem, which guarantees such a coordinate system for a symmetric operator like ours; working through its proof reveals why this math works. It is possible, tedious, and generally unnecessary to find this coordinate system explicitly. Along the diagonal of this matrix will be many scalar values called eigenvalues, the j’th of which corresponds to the j’th basis vector. When A operates on just that vector it merely multiplies it by the eigenvalue. (This is covered a little more fully in the above note on vectors.)
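The eigenvector property can be checked concretely for the discrete diffusion operator; this sketch uses the grid size N and mode number k as illustrative choices.

```python
import math

# For the second-difference operator A on a grid of N+1 points with
# phi = 0 held at both ends, the vector v_j = sin(pi*k*j/N) is an
# eigenvector: applying A merely rescales it.
N, k = 100, 6          # illustrative grid size and mode number
v = [math.sin(math.pi * k * j / N) for j in range(N + 1)]

# Apply A at the interior points: (A v)_j = v_{j-1} - 2 v_j + v_{j+1}.
Av = [v[j - 1] - 2.0 * v[j] + v[j + 1] for j in range(1, N)]

# The corresponding eigenvalue, from the identity
# sin(a-b) + sin(a+b) = 2 cos(b) sin(a).
lam = 2.0 * math.cos(math.pi * k / N) - 2.0
assert all(abs(Av[j - 1] - lam * v[j]) < 1e-12 for j in range(1, N))
```

Note that lam is negative: applying the diffusion operator to this shape shrinks it, which is the decay Fourier observed.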

What has this to do with accuracy and stability? We can leave some of the abstract vector space behind, noting only that it led us to the following concrete analysis. Suppose that the heat distribution is indeed an eigenfunction of the diffusion operator. In the problem and notation introduced here, we take as initial data the values ϕ⁰ⱼ = sin(2π·3·j/100). (We will adjust the 3 later.) Note that, 3 being an integer, this formula provides the proper boundary conditions for j = 0 and 100. We also take Δx = eⱼ = fⱼ = 1 to simplify things. With simple trigonometric identities we compute for the explicit method
[ϕ¹ⱼ − ϕ⁰ⱼ]/Δt = (ϕ⁰ⱼ₋₁ − ϕ⁰ⱼ) + (ϕ⁰ⱼ₊₁ − ϕ⁰ⱼ) = −2(1 − cos(2π·3/100))ϕ⁰ⱼ, so
ϕ¹ⱼ = (1 − 2Δt(1 − cos(2π·3/100)))ϕ⁰ⱼ or
ϕⁿⱼ = (1 − 2Δt(1 − cos(2π·3/100)))ⁿϕ⁰ⱼ.

We see from this equation that the shape (the direction of the vector) is unchanged but that it is multiplied by some factor at each step. This simple program does the above specific calculation.
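The program itself is not reproduced here; the following sketch performs the same specific calculation. The time step and step count are my arbitrary (stable) choices.

```python
import math

# Explicit diffusion on 100 zones with initial data sin(2*pi*3*j/100),
# Δx = 1, and the temperature pinned to 0 at j = 0 and j = 100.
N, k, dt, steps = 100, 3, 0.1, 50
phi = [math.sin(2 * math.pi * k * j / N) for j in range(N + 1)]

for n in range(steps):
    new = [0.0] * (N + 1)            # endpoints stay 0 (boundary condition)
    for j in range(1, N):
        new[j] = phi[j] + dt * (phi[j - 1] - 2 * phi[j] + phi[j + 1])
    phi = new

# The shape is unchanged; the amplitude is multiplied per step by
growth = 1 - 2 * dt * (1 - math.cos(2 * math.pi * k / N))
expected = [growth ** steps * math.sin(2 * math.pi * k * j / N)
            for j in range(N + 1)]
assert all(abs(a - b) < 1e-9 for a, b in zip(phi, expected))
```

The final assertion confirms the analysis: after n steps the numerical solution equals the initial sine multiplied by the n'th power of the single scalar factor.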

In the limit of small Δt we have
ϕⱼ(t) = ϕⱼ(0)e^(−2(1 − cos(2π·3/100))t).

### Implicit Grid Methods

These techniques apply whenever we have divided space into zones with definite neighbors and have expressed the time derivatives of the various physical quantities (dynamic variables) in terms of these same quantities and their spatial derivatives. The division of space into zones need not be fixed, but we will speak here of time periods during which it is fixed. The zones need not be regular and we do not require that here. The zones may be moving, however, for the location of the zones is considered here as a dynamic variable. This encompasses hydrodynamics if we take velocity as a dynamic variable. Indeed it encompasses all of the 19th century partial differential equations of physics.

We take the various dynamic variables of the various zones as components of a vector ϕ and write
∂tϕ = f(ϕ)
where f is some nonlinear function from our space to our space. We consider the partial derivatives ∂ⱼf of f with respect to component j.
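The text stops short of the implicit step itself. The following sketch shows the role those partial derivatives play in one backward Euler step, solved by Newton's method; the two-component f is an arbitrary illustration and all names are mine, not the author's.

```python
# One implicit (backward Euler) step for d(phi)/dt = f(phi):
# solve phi = phi_old + dt*f(phi) by Newton iteration, using the
# Jacobian J_ij = ∂f_i/∂phi_j of f.

def f(phi):
    x, y = phi
    return [y - x ** 3, -x]

def jacobian(phi):
    x, y = phi
    return [[-3 * x ** 2, 1.0],
            [-1.0, 0.0]]

def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

def implicit_step(phi_old, dt, iters=20):
    """Solve phi = phi_old + dt*f(phi) for phi by Newton's method."""
    phi = list(phi_old)                       # initial guess
    for _ in range(iters):
        residual = [p - po - dt * fi
                    for p, po, fi in zip(phi, phi_old, f(phi))]
        J = jacobian(phi)
        # Newton matrix for the residual: I - dt*J
        M = [[(1.0 if i == j else 0.0) - dt * J[i][j] for j in range(2)]
             for i in range(2)]
        delta = solve2(M, residual)
        phi = [p - d for p, d in zip(phi, delta)]
    return phi

phi1 = implicit_step([1.0, 0.0], dt=0.1)
# phi1 satisfies the implicit equation phi1 = phi0 + dt*f(phi1):
res = [p - po - 0.1 * fi for p, po, fi in zip(phi1, [1.0, 0.0], f(phi1))]
assert all(abs(r) < 1e-12 for r in res)
```

For a real grid the linear solve would be over all zones at once (a large sparse system) rather than 2×2, but the structure of the step is the same: the Jacobian of f is what makes the implicit equation solvable.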