Some Linear Operator Theory

Here are some strategic principles from operator theory in plain English. They may be expressed more concisely in mathematical notation, but my head stores something more like the English prose. Other people's heads work differently. Here is a very nice history of the development of abstract vectors, and here is my own vector introduction.

A square matrix describes a linear transformation in a vector space but only when a basis has been specified in that space. The same transformation expressed in another basis is described by a different matrix.
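To make that concrete, here is a small numpy sketch; the particular matrix and the second basis are my own, chosen only for illustration. One transformation shows up as two different matrices, one per basis, and the two descriptions agree once coordinates are translated.

    import numpy as np

    # A linear transformation of the plane, written as a matrix in the standard basis.
    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])

    # Columns of P are a second basis, expressed in standard coordinates.
    P = np.array([[1.0,  1.0],
                  [1.0, -1.0]])

    # The same transformation, written as a matrix in the new basis.
    B = np.linalg.inv(P) @ A @ P

    # A vector with coordinates c in the new basis has coordinates P @ c in the standard basis.
    c = np.array([5.0, -2.0])
    v = P @ c

    # Transforming in either description gives the same vector, once we translate back.
    assert np.allclose(P @ (B @ c), A @ v)
    print(A)
    print(B)   # a different matrix, the same transformation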

It is necessary in all of the following to think of the transformation under consideration as being more real than the matrix. Sometimes we will work with a matrix while reasoning about the transformation, even when we cannot identify the basis. Still it is necessary to regard the transformation as real (albeit unknown) while the matrix is ephemeral (albeit known). Other times we will consider a transformation and a variety of matrices that describe that transformation, each in a different basis.

For many transformations there is a basis each of whose elements is left unturned, merely stretched, by the transformation. For the Cartesian plane a transformation that sends (x, y) to (2x, 3y) stretches vectors on the coordinate axes but does not turn them. It turns all other vectors. Given a transformation that is specified somehow, a central problem is to find such a basis for it. Here is a more complex example.

The elements of such a basis are called the eigenvectors of the transformation, and the amount of stretching is the corresponding eigenvalue.
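The (2x, 3y) example above, checked in numpy: the coordinate axis vectors are the eigenvectors, with eigenvalues 2 and 3, and any other direction is turned.

    import numpy as np

    # The transformation that sends (x, y) to (2x, 3y), in the standard basis.
    A = np.array([[2.0, 0.0],
                  [0.0, 3.0]])

    e1 = np.array([1.0, 0.0])
    e2 = np.array([0.0, 1.0])

    # The coordinate axis vectors are merely stretched, not turned:
    assert np.allclose(A @ e1, 2 * e1)   # eigenvalue 2
    assert np.allclose(A @ e2, 3 * e2)   # eigenvalue 3

    # Any other direction is turned: (1, 1) goes to (2, 3), which points elsewhere.
    v = np.array([1.0, 1.0])
    print(A @ v)   # [2. 3.], not a multiple of [1. 1.]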

If the field of the matrices and the vector space is the real numbers, then there will be many transformations (such as a rotation of the plane) without eigenvectors. When the field is the complex numbers, or any algebraically closed field, every transformation of a finite dimensional space has at least one eigenvector.
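A numpy check of the rotation case: a quarter-turn of the plane merely turns every nonzero real vector, but numpy, working over the complex numbers, still reports the eigenvalues i and -i.

    import numpy as np

    # A quarter-turn rotation of the plane.
    R = np.array([[0.0, -1.0],
                  [1.0,  0.0]])

    # Over the reals there is no vector that R merely stretches: every nonzero vector is turned.
    # Over the complex numbers the eigenvalues exist; here they are i and -i.
    print(np.linalg.eigvals(R))   # [0.+1.j  0.-1.j]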

Given a transformation in the form of a matrix, it may be required to find a new basis whose elements are the transformation’s eigenvectors. The answer must express the eigenvectors in the original basis. This is called finding the eigenvectors of the original matrix.
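A numpy sketch of that problem, with an arbitrary illustrative matrix of my own: np.linalg.eig returns the eigenvalues along with the eigenvectors written, as columns, in the original basis.

    import numpy as np

    # A transformation given only as a matrix in some original basis.
    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])

    # numpy returns the eigenvalues and, as the columns of V,
    # the eigenvectors expressed in that same original basis.
    w, V = np.linalg.eig(A)

    for k in range(len(w)):
        v = V[:, k]
        # Each column is stretched by its eigenvalue and not turned.
        assert np.allclose(A @ v, w[k] * v)

    print(w)   # eigenvalues
    print(V)   # eigenvectors, in the original basis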

Eigenvectors of a symmetric (more generally, normal) transformation that belong to distinct eigenvalues are orthogonal. The identity transformation can make do with any orthogonal basis as a set of eigenvectors; in this case the eigenvectors are chosen orthogonal just to make the theorems pretty. More generally, whenever a transformation has several eigenvectors with a common eigenvalue, those vectors span a subspace and an orthogonal basis is chosen there. The transformation turns no vector in that subspace.
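A numpy check of the symmetric case, again with an illustrative matrix of my own; np.linalg.eigh, which is meant for symmetric and Hermitian matrices, returns an orthonormal set of eigenvectors, and the identity is happy with that same set.

    import numpy as np

    # A symmetric matrix; its eigenvectors can be chosen mutually orthogonal.
    S = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    w, V = np.linalg.eigh(S)                  # eigh is for symmetric/Hermitian matrices
    assert np.allclose(V.T @ V, np.eye(2))    # columns are orthonormal

    # The identity transformation stretches every vector by 1, so any
    # orthonormal basis serves as its set of eigenvectors.
    I = np.eye(2)
    assert np.allclose(I @ V, V)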

When a transformation is expressed as a matrix and the basis is the set of eigenvectors, then the matrix is diagonal. Diagonal matrices obviously commute with one another.
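A numpy sketch of both claims, reusing an arbitrary matrix of my own: collecting the eigenvectors as the columns of a change-of-basis matrix turns the original matrix diagonal, and two diagonal matrices commute.

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])

    # Columns of V are A's eigenvectors; using them as the basis
    # turns A into a diagonal matrix.
    w, V = np.linalg.eig(A)
    D = np.linalg.inv(V) @ A @ V
    assert np.allclose(D, np.diag(w))

    # Two diagonal matrices commute.
    D2 = np.diag([7.0, -1.0])
    assert np.allclose(np.diag(w) @ D2, D2 @ np.diag(w))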

Here are some useful connections between eigenvalues and matrix properties.

See “eigenvalues” herein for their connection to programs.