Diagonalizable
In linear algebra, a square matrix A is called diagonalizable if it is similar to a diagonal matrix, i.e. if there exists an invertible matrix P such that $P^{-1}AP$ is a diagonal matrix. If V is a finite-dimensional vector space, then a linear map T : V → V is called diagonalizable if there exists a basis of V with respect to which T is represented by a diagonal matrix.
Diagonalization is the process of finding a corresponding diagonal matrix for a diagonalizable matrix or linear map.
Diagonalizable matrices and maps are of interest because diagonal matrices are especially easy to handle: their eigenvalues and eigenvectors are known and one can raise a diagonal matrix to a power by simply raising the diagonal entries to that same power.
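In symbols: if $D = \operatorname{diag}(d_1, \ldots, d_n)$ and $k$ is a positive integer, then
\[ D^k = \operatorname{diag}(d_1^k, \ldots, d_n^k). \]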
The fundamental fact about diagonalizable maps and matrices is expressed by the following:
- An n-by-n matrix A over the field F is diagonalizable if and only if the sum of the dimensions of its eigenspaces is equal to n, which is the case if and only if there exists a basis of $F^n$ consisting of eigenvectors of A. If such a basis has been found, one can form the matrix P having these basis vectors as columns, and $P^{-1}AP$ will be a diagonal matrix. The diagonal entries of this matrix are the eigenvalues of A. (This construction is sketched numerically after this list.)
- A linear map T : V → V is diagonalizable if and only if the sum of the dimensions of its eigenspaces is equal to dim(V), which is the case if and only if there exists a basis of V consisting of eigenvectors of T. With respect to such a basis, T will be represented by a diagonal matrix. The diagonal entries of this matrix are the eigenvalues of T.
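As a rough numerical illustration of the first point (the matrix A below is an arbitrary example chosen here, and NumPy's floating-point eigenvectors stand in for an exact eigenbasis):

```python
import numpy as np

# An arbitrary 3-by-3 example with three distinct eigenvalues (2, 3 and 5).
A = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 5.0]])

# Columns of P are (numerical) eigenvectors of A; here they form a basis of R^3.
eigenvalues, P = np.linalg.eig(A)

# Since P is invertible, P^{-1} A P is (numerically) diagonal,
# with the eigenvalues of A on the diagonal.
D = np.linalg.inv(P) @ A @ P
print(np.round(D, 10))   # a diagonal matrix
print(eigenvalues)       # the same values as the diagonal of D
```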
Another characterization: A matrix or linear map is diagonalizable over the field F if and only if its minimal polynomial is a product of distinct linear factors over F.
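As an illustration of this criterion (the two matrices below are chosen here for the example; SymPy is used for the computation, and for these 2-by-2 matrices the minimal polynomial coincides with the characteristic polynomial):

```python
import sympy as sp

x = sp.symbols('x')

# Minimal polynomial (x - 1)**2: a repeated factor, so J is not diagonalizable.
J = sp.Matrix([[1, 1],
               [0, 1]])

# Minimal polynomial (x - 1)*(x - 2): distinct linear factors, so B is diagonalizable.
B = sp.Matrix([[1, 1],
               [0, 2]])

print(sp.factor(J.charpoly(x).as_expr()))   # the repeated factor (x - 1)**2
print(sp.factor(B.charpoly(x).as_expr()))   # distinct factors (x - 1) and (x - 2)

print(J.is_diagonalizable())   # False
print(B.is_diagonalizable())   # True
```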
The following sufficient (but not necessary) condition is often useful.
- An n-by-n matrix A is diagonalizable over the field F if it has n distinct eigenvalues in F, i.e. if its characteristic polynomial has n distinct roots in F (a numerical check of this condition is sketched after this list).
- A linear map T : V → V with n = dim(V) is diagonalizable if it has n distinct eigenvalues, i.e. if its characteristic polynomial has n distinct roots in F.
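A quick numerical check of this sufficient condition (illustrative only; the matrix is arbitrary, and exact arithmetic would be needed for a rigorous test):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

eigenvalues = np.linalg.eigvals(A)

# If the n eigenvalues are pairwise distinct (checked here up to a tolerance),
# the sufficient condition applies and A is diagonalizable.
distinct = all(abs(eigenvalues[i] - eigenvalues[j]) > 1e-9
               for i in range(len(eigenvalues))
               for j in range(i + 1, len(eigenvalues)))
print(eigenvalues, distinct)
```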
Here is an example of a diagonalizable matrix:
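\[ A = \begin{bmatrix} 5 & 4 & 3 \\ 0 & 0 & 2 \\ 0 & 0 & -2 \end{bmatrix} \]
(the off-diagonal entries above are illustrative; any upper triangular matrix with diagonal entries 5, 0 and -2 serves equally well).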
Since the matrix is triangular (specifically upper triangular), the eigenvalues are 5, 0, and -2. Since A is a 3-by-3 matrix with 3 real, distinct eigenvalues, A is diagonalizable over R.
As a rule of thumb, over C almost every matrix is diagonalizable. More precisely: the set of complex n-by-n matrices that are not diagonalizable over C, considered as a subset of $C^{n \times n}$, is a null set with respect to the Lebesgue measure. The same is not true over R; as n increases, it becomes less and less likely that a randomly selected real matrix is diagonalizable over R.
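This tendency can be illustrated with a small random experiment (a sketch using NumPy and Gaussian entries; since the eigenvalues of such a random matrix are distinct with probability 1, diagonalizability over R essentially comes down to all eigenvalues being real):

```python
import numpy as np

rng = np.random.default_rng(0)

def fraction_real_spectrum(n, trials=2000):
    """Estimate how often a random real n-by-n Gaussian matrix has an all-real spectrum."""
    hits = 0
    for _ in range(trials):
        eigenvalues = np.linalg.eigvals(rng.standard_normal((n, n)))
        if np.all(np.abs(eigenvalues.imag) < 1e-9):
            hits += 1
    return hits / trials

# The estimated fraction shrinks as n grows.
for n in (2, 4, 8):
    print(n, fraction_real_spectrum(n))
```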
An application
Diagonalization can be used to compute the powers of a matrix A efficiently, provided the matrix is diagonalizable. Suppose we have found that
\[ P^{-1}AP = D \]
is a diagonal matrix. Then, since $A = PDP^{-1}$,
\[ A^k = \left(PDP^{-1}\right)^k = PD^kP^{-1}, \]
and the latter is easy to calculate since it only involves the powers of a diagonal matrix.
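In floating-point arithmetic this computation can be sketched as follows (the matrix A and the power k below are illustrative choices, not fixed by the text):

```python
import numpy as np

# An illustrative diagonalizable matrix.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Eigendecomposition: columns of P are eigenvectors, evals the eigenvalues,
# so that A = P @ np.diag(evals) @ inv(P) when A is diagonalizable.
evals, P = np.linalg.eig(A)

k = 10
# Powers of the diagonal factor are just entrywise powers of the eigenvalues.
Dk = np.diag(evals**k)
A_k = P @ Dk @ np.linalg.inv(P)

# Compare against the direct computation.
print(np.allclose(A_k, np.linalg.matrix_power(A, k)))   # True
```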
For example, consider the following matrix:
\[ M = \begin{bmatrix} a & b - a \\ 0 & b \end{bmatrix}. \]
Calculating the various powers of M reveals a surprising pattern:
\[ M^2 = \begin{bmatrix} a^2 & b^2 - a^2 \\ 0 & b^2 \end{bmatrix}, \qquad M^3 = \begin{bmatrix} a^3 & b^3 - a^3 \\ 0 & b^3 \end{bmatrix}, \qquad M^4 = \begin{bmatrix} a^4 & b^4 - a^4 \\ 0 & b^4 \end{bmatrix}, \qquad \ldots \]
The above phenomenon can be explained by diagonalizing M. To accomplish this, we need a basis of $R^2$ consisting of eigenvectors of M. One such eigenvector basis is given by
\[ u = \begin{bmatrix} 1 \\ 0 \end{bmatrix} = e_1, \qquad v = \begin{bmatrix} 1 \\ 1 \end{bmatrix} = e_1 + e_2, \]
where $e_i$ denotes the standard basis of $R^n$.
The reverse change of basis is given by
\[ e_1 = u, \qquad e_2 = v - u. \]
Straightforward calculations show that
\[ Mu = au, \qquad Mv = bv. \]
Thus, a and b are the eigenvalues corresponding to u and v, respectively.
By linearity of matrix multiplication, we have that
\[ M^n u = a^n u, \qquad M^n v = b^n v. \]
Switching back to the standard basis, we have
\[ M^n e_1 = M^n u = a^n u = a^n e_1, \]
\[ M^n e_2 = M^n (v - u) = b^n v - a^n u = (b^n - a^n) e_1 + b^n e_2. \]
The preceding relations, expressed in matrix form, are
\[ M^n = \begin{bmatrix} a^n & b^n - a^n \\ 0 & b^n \end{bmatrix}, \]
thereby explaining the above phenomenon.
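The same derivation can be reproduced symbolically (a sketch using SymPy; the change-of-basis matrix P has the eigenvectors u and v from above as its columns):

```python
import sympy as sp

a, b, n = sp.symbols('a b n')

M = sp.Matrix([[a, b - a],
               [0, b]])

# Columns of P are the eigenvectors u = (1, 0) and v = (1, 1).
P = sp.Matrix([[1, 1],
               [0, 1]])

# P^{-1} M P is the diagonal matrix diag(a, b).
D = P.inv() * M * P
print(D)    # Matrix([[a, 0], [0, b]])

# D^n is obtained by raising the diagonal entries to the n-th power,
# and M^n = P D^n P^{-1} reproduces the pattern observed above.
Dn = sp.diag(a**n, b**n)
Mn = P * Dn * P.inv()
print(Mn)   # i.e. [[a**n, b**n - a**n], [0, b**n]]
```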