Unlike vector spaces, which find and unify common structure in many preexisting fields, algebras over vector spaces are more a matter of creating new structures. (This is a historical claim, not a mathematical one.)
Algebras over Vector Spaces
Several useful mathematical structures arise when some multiplication-like operation is introduced on a vector space.
We use juxtaposition of vector expressions here to denote this multiplication.
Multiplication is required to be bilinear, which means that for fixed x, both xy and yx are linear functions of y; equivalently, for all vectors x, y, z and all scalars α:
x(y + z) = xy + xz
(x + y)z = xz + yz
(αx)y = α(xy) = x(αy)
Generally this multiplication is neither commutative nor associative.
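The bilinearity axioms can be checked mechanically for any concrete product. As a sanity check, here is a minimal Python sketch (the function names are mine) using Gibbs's cross product on R³, which the text later notes is a multiplication with no unit; it is bilinear but neither commutative nor associative:

```python
def cross(x, y):
    # Gibbs's cross product on R^3: one concrete bilinear multiplication.
    return (x[1]*y[2] - x[2]*y[1],
            x[2]*y[0] - x[0]*y[2],
            x[0]*y[1] - x[1]*y[0])

def add(x, y):
    return tuple(a + b for a, b in zip(x, y))

def scale(a, x):
    return tuple(a * c for c in x)

x, y, z, alpha = (1, 2, 3), (4, 5, 6), (7, 8, 9), 2.5
# x(y + z) = xy + xz
assert cross(x, add(y, z)) == add(cross(x, y), cross(x, z))
# (x + y)z = xz + yz
assert cross(add(x, y), z) == add(cross(x, z), cross(y, z))
# (αx)y = α(xy) = x(αy)
assert cross(scale(alpha, x), y) == scale(alpha, cross(x, y)) \
       == cross(x, scale(alpha, y))
# and indeed this multiplication is neither commutative nor associative:
assert cross(x, y) != cross(y, x)
assert cross(cross(x, y), z) != cross(x, cross(y, z))
```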
An associative algebra is merely an algebra over a vector space with an associative multiplication.
Sometimes the last equation α(xy) = x(αy) is replaced by α(xy) = x(α*y) where α* is an image of α under some particular field automorphism.
Some people use ‘algebra’ only for associative algebras and use ‘non-associative’ for the larger class that we call algebras. Some include ‘module over a ring’ in the concept of algebra, but we limit algebras to a vector space over a field.
For particular algebras, additional definitional equations are given, each required to hold for all values of the field and vector variables.
Sometimes the product is required to lie in the same vector space, but for some constructions the product generally lies in some new, expanded vector space over the same field. The larger space includes the original space as a linear subspace. Division algebras, Clifford algebras and Grassmann algebras are examples. Often the definitional equations assert that a product is an element of the field. It seems to me best to interpret this as requiring that the expanded vector space include a one-dimensional subspace that is not a subspace of the original vector space and whose elements serve the role of scalars in the space. Normally a vector space does not ‘include’ its field; in typical vector spaces, the scalar 1 is not a vector. 1 is a member of the algebras that I know, except when Gibbs’s cross product forms the multiplication over the 3D space.
These spaces are commonly defined by baldly asserting that the product exists, without discussion of which space it lies in. Some new definitional equations are asserted. Only gradually does it emerge that there must be more dimensions than in the original space. It is presumed that two values from the algebra are distinct unless the definitional equations require them to be equal. A Clifford or Grassmann algebra over an n-dimensional space results in a 2^n dimensional space when the dust settles. The student may justifiably wonder whether such structures exist: are the axioms consistent? Typically the definition of the new algebra does not try to deny yet larger models, yet the developments typically consider only those elements forced by the new definitional equations. Here we prove that they do exist by explicitly constructing them.
To prove these axioms consistent we construct an explicit model: a new vector space over the same field, built in stages. Select a basis K for the original space. Stage 0 is the original vector space. If a stage of dimension N has basis elements {ei} then we include in the subsequent stage basis elements {e'i} and {e'ij} for 0 ≤ i, j < N, where each e'ij = e'ie'j. Each stage has a set of basis elements including those of the previous stage, plus an element for each pair of elements of the previous stage, serving as the product of the corresponding elements. The new stage thus provides products of all pairs of elements of the previous stage. The definitional equations may induce linear dependencies among these new basis elements, and enough of the new basis elements are sacrificed to achieve the independence required of a basis set. For instance, in the Grassmann algebra we would delete either e'12 or e'21. (The definitional equations for Grassmann are x(yz) = (xy)z and xy + yx = 0.)
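To make this concrete, here is a minimal Python sketch of the model the staged construction eventually yields for a Grassmann algebra (the names grassmann_basis and wedge are mine, not standard). After the dependencies xy + yx = 0 are imposed, each surviving basis element corresponds to a sorted tuple of original basis indices, and the product of two basis elements is 0 or ± another basis element:

```python
from itertools import combinations

def grassmann_basis(n):
    # The 2^n surviving basis elements of the Grassmann algebra over an
    # n-dimensional space: each is a sorted tuple of original basis indices.
    return [tup for k in range(n + 1) for tup in combinations(range(n), k)]

def wedge(s, t):
    # Product of basis elements e_s and e_t under x(yz) = (xy)z and
    # xy + yx = 0.  Returns (sign, basis_tuple), or None when the product
    # is 0 (a repeated index, since xy + yx = 0 forces xx = 0).
    if set(s) & set(t):
        return None
    # Sign is (-1)^(number of transpositions needed to sort s + t).
    sign = 1
    merged = list(s)
    for i in t:
        if sum(1 for j in merged if j > i) % 2:
            sign = -sign
        merged.append(i)
    return sign, tuple(sorted(merged))

# e'12 survives; e'21 is the sacrificed dependent element -e'12:
assert wedge((0,), (1,)) == (1, (0, 1))
assert wedge((1,), (0,)) == (-1, (0, 1))
assert len(grassmann_basis(3)) == 8   # 2^3 dimensions, as claimed
```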
Classically these definitional equations often include an equation that seems to require a product of two vectors to be a scalar, or field element. For this case we include a basis element called “1” with the property that for all basis elements x, 1x = x = x1. At this point the scalar 0 and vector 0 are conflated and the multiplication of a scalar and vector is subsumed in multiplication of vectors. This is sleight of hand to those (like me) who feel they must understand the domain and range of functions. This notational fraud can be seen, with some work, to be harmless.
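A familiar instance of this device is the construction of the complex numbers: adjoin the basis element “1” to a one-dimensional real space with basis {e}, and impose the definitional equation ee = −1 (that is, −1 times the adjoined unit). A minimal Python sketch, with my own name cmul for the product:

```python
# Coordinates are over the basis {1, e}: x = (a, b) means a*1 + b*e,
# with the definitional equations 1*1 = 1, 1*e = e*1 = e, e*e = -1.
def cmul(x, y):
    a, b = x
    c, d = y
    # (a + be)(c + de) = (ac - bd) + (ad + bc)e
    return (a*c - b*d, a*d + b*c)

one = (1, 0)
e = (0, 1)
assert cmul(e, e) == (-1, 0)             # ee is "the scalar -1": -1 times the unit
assert cmul(one, e) == e == cmul(e, one) # the adjoined 1 is a two-sided unit
```

The point of the sketch is that the “product equal to a scalar” never leaves the vector space: −1 here names the vector (−1, 0), a member of the adjoined one-dimensional subspace, which is why the notational fraud is harmless.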
The ultimate model is the vector space with a possibly infinite set of basis elements, but where each vector is a linear combination of some finite subset of these elements. The definitional equations extinguish production of new basis elements at some finite stage for the useful algebras that I am aware of.
See the particular Clifford Algebras.
In an algebra, a left unit is an element u such that ux = x for all x.
Consider V2 where e1 = <1, 0> and e2 = <0, 1>, with the multiplication <a, b><c, d> = <ac+bc, ad+bd>. Note that <1, 0><c, d> = <c, d> and <0, 1><c, d> = <c, d>, and thus e1 and e2 are distinct left units. If there were both a left unit L and a right unit R, then LR = L = R, and then there could not be two left units, for then L' = L'R = R = LR = L. Many algebras of interest have a two-sided unit u. The scalar multiples of this unit, or of any left unit, form a subspace isomorphic to the ground field of the space. In an algebra with a unique unit u it is common practice to dispense with the field and use its isomorphic image within the algebra. Under this convention one may write “3” in place of “3u” and 1 in place of u. No typographic convention for field values remains. Multiples of the unit are called “reals” in most cases where the reals form the ground field of the algebra. This is all usually done silently, leaving the impression that the vector space has somehow swallowed its field whole. This is not so convenient for non-associative algebras, such as the octonions, but there seems to be no need there for the scalar multiplication from the vector space, since (xy)z = x(yz) whenever x is a unit or a scalar multiple thereof.
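The two-left-units example can be checked mechanically; a minimal Python sketch (the function name mul is mine):

```python
# The multiplication <a, b><c, d> = <ac+bc, ad+bd> on V2.
def mul(x, y):
    a, b = x
    c, d = y
    return (a*c + b*c, a*d + b*d)

e1, e2 = (1, 0), (0, 1)
for u in (e1, e2):
    for v in [(3, 4), (-1, 2), (0, 0)]:
        assert mul(u, v) == v      # both e1 and e2 act as left units
assert mul((3, 4), e1) != (3, 4)   # but e1 is not a right unit
```

Here <3, 4><1, 0> = <7, 0>, so no right unit exists, consistent with the argument above that a right unit would force the two left units to coincide.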
Such algebras are normally described in the style often used to describe complex numbers, where i is merely posited as another, previously overlooked number that was there all along. This invites all of the misgivings of those suspicious of imaginary numbers when they were new. These misgivings were ultimately overcome when it was seen to be possible to construct the complex numbers, and not merely assume them into existence. From this perspective they were not always there. The ultimate in this progression is Hilbert’s dictum:
Mathematical existence is the mere absence of contradiction.