# 4: Eigenvalues and Eigenvectors - Mathematics


## Characteristic polynomial

How do we actually find eigenvalues and eigenvectors? Let us consider a general square matrix \(A \in \mathbb{R}^{n \times n}\) with eigenvectors \(\mathbf{v} \in \mathbb{R}^n\) and eigenvalues \(\lambda \in \mathbb{R}\) such that:

\(A \mathbf{v} = \lambda \mathbf{v}\)

After subtracting the right-hand side:

\((A - \lambda I)\mathbf{v} = \mathbf{0}\)

Therefore, we are solving a homogeneous system of linear equations, but we want to find non-trivial solutions \(\mathbf{v} \neq \mathbf{0}\). Recall from the section on null spaces that a homogeneous system has non-zero solutions iff the matrix of the system is singular, i.e.

\(\det(A - \lambda I) = 0.\)

This is a polynomial of degree \(n\) with roots \(\lambda_1, \lambda_2, \dots, \lambda_k\), \(k \leq n\). This polynomial is termed the characteristic polynomial of \(A\), and its roots are the eigenvalues. The eigenvectors are then found by plugging each eigenvalue back into \((A - \lambda I)\mathbf{v} = \mathbf{0}\) and solving that system.
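Numerically, the same pipeline (characteristic polynomial, then its roots) can be sketched with NumPy; the matrix below is an arbitrary illustration, not one from the text:

```python
import numpy as np

# An arbitrary 2x2 example matrix (not from the text).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# np.poly returns the coefficients of det(lambda*I - A),
# the characteristic polynomial, highest power first.
coeffs = np.poly(A)             # lambda^2 - 7*lambda + 10
eigenvalues = np.roots(coeffs)  # roots of the characteristic polynomial

# For each eigenvalue, a non-trivial solution of (A - lambda*I) v = 0
# exists precisely because A - lambda*I is singular.
for lam in eigenvalues:
    assert abs(np.linalg.det(A - lam * np.eye(2))) < 1e-9
```

In practice one would call `np.linalg.eig` directly; going through the polynomial mirrors the hand derivation above.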

### Example

Let us find the eigenvalues and eigenvectors of the following matrix \(A \in \mathbb{R}^{3 \times 3}\):

\(A = \begin{pmatrix} 2 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 2 \end{pmatrix}\)

The characteristic polynomial is:

\(\det(A - \lambda I) = (2 - \lambda)\left[(2 - \lambda)^2 - 2\right] = 0\)

The roots of this polynomial, which are the eigenvalues, are \(\lambda_{1,2,3} = 2,\ 2 \pm \sqrt{2}\). Now, to find the eigenvectors, we need to plug these values into \((A - \lambda I)\mathbf{v} = \mathbf{0}\). For \(\lambda_1 = 2\):

\((A - 2I)\mathbf{v} = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}\)

where \(x_1\), \(x_2\) and \(x_3\) are the entries of the eigenvector \(\mathbf{v}\). The solution may be obvious to some, but let us calculate it by solving this system of linear equations. We write it as an augmented matrix and reduce it to RREF by swapping the 1st and 2nd rows and subtracting the 1st row (the 2nd after swapping) from the last row:

As expected, there is no unique solution, because we required earlier that \((A - \lambda I)\) be singular. Therefore, we can parameterise the first equation as \(x_1 = -x_3\) in terms of the free variable \(x_3 = t\), \(t \in \mathbb{R}\). We read from the second equation that \(x_2 = 0\). The solution set is then \(\{ (-t, 0, t)^T : t \in \mathbb{R} \}\). If we let \(t = 1\), then the eigenvector \(\mathbf{v}_1\) corresponding to the eigenvalue \(\lambda_1 = 2\) is \(\mathbf{v}_1 = (-1, 0, 1)^T\). We may do this because we only care about the direction of the eigenvector and can scale it arbitrarily.

We leave it to readers to convince themselves that the other two eigenvectors are \((1, \sqrt{2}, 1)^T\) and \((1, -\sqrt{2}, 1)^T\).
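The worked example is consistent with the symmetric tridiagonal matrix used below (spelled out here as an assumption, so that the snippet is self-contained); NumPy confirms the eigenvalues and the direction of the eigenvector for \(\lambda = 2\):

```python
import numpy as np

# Matrix consistent with the worked example above (an assumption made
# explicit here): eigenvalues 2 and 2 +/- sqrt(2).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)  # eigenvectors sit in the columns

# The eigenvalues should be 2 - sqrt(2), 2, and 2 + sqrt(2).
expected = np.array([2.0 - np.sqrt(2.0), 2.0, 2.0 + np.sqrt(2.0)])
assert np.allclose(np.sort(eigenvalues), expected)

# Every eigenpair satisfies A v = lambda v.
for lam, vec in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ vec, lam * vec)

# The eigenvector for lambda = 2 is parallel to (-1, 0, 1)^T
# (cross product of parallel vectors vanishes).
idx = np.argmin(np.abs(eigenvalues - 2.0))
v = eigenvectors[:, idx]
assert np.allclose(np.cross(v, [-1.0, 0.0, 1.0]), 0.0)
```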

### Example: Algebraic and geometric multiplicity

Consider the matrix

\(A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix}\)

The characteristic equation \(\det(A - \lambda I) = 0\) reads \((\lambda - 1)(\lambda - 1)(\lambda + 1) = (\lambda - 1)^2(\lambda + 1) = 0\).

We see that the eigenvalues are \(\lambda_1 = 1\) and \(\lambda_2 = -1\), where \(\lambda_1\) is repeated twice. We therefore say that the algebraic multiplicity, the number of times an eigenvalue is repeated as a root of the characteristic polynomial, is 2 for \(\lambda_1\) and 1 for \(\lambda_2\).

Let us now find the eigenvectors corresponding to these eigenvalues. For \(\lambda_1 = 1\):

\((A - I)\mathbf{v} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \mathbf{0}\)

The only constraint on our eigenvector is that \(x_3 = 0\), whereas there are no constraints on \(x_1\) and \(x_2\): they can be whatever we want. In cases like this, we still try to find as many linearly independent eigenvectors as possible, although that number does not have to equal the algebraic multiplicity of the eigenvalue. In our case, we can easily define two linearly independent vectors by choosing \(x_1 = 1, x_2 = 0\) for one vector and \(x_1 = 0, x_2 = 1\) for the other. Therefore, we managed to get two linearly independent eigenvectors corresponding to the same eigenvalue:

\(\mathbf{v}_1 = (1, 0, 0)^T, \quad \mathbf{v}_2 = (0, 1, 0)^T\)

The number of linearly independent eigenvectors corresponding to an eigenvalue \(\lambda\) is called the geometric multiplicity of that eigenvalue. The algebraic multiplicity of \(\lambda\) is greater than or equal to its geometric multiplicity. An eigenvalue whose algebraic multiplicity exceeds its geometric multiplicity is called, rather harshly, a defective eigenvalue.

Now consider the non-repeated eigenvalue \(\lambda_2 = -1\):

We have \(x_1 = 0, x_2 = 0\), and there is no constraint on \(x_3\), so now \(x_3\) can be any number we want. For simplicity we choose it to be 1. The eigenvector is then simply \(\mathbf{v}_3 = (0, 0, 1)^T\).
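Both multiplicities can be checked numerically, assuming a matrix consistent with this example (a diagonal matrix with entries 1, 1, -1); the geometric multiplicity is the dimension of the null space of \(A - \lambda I\):

```python
import numpy as np

# Matrix consistent with the example above (an assumption): eigenvalue 1
# with algebraic multiplicity 2, eigenvalue -1 with multiplicity 1.
A = np.diag([1.0, 1.0, -1.0])

eigenvalues = np.linalg.eigvals(A)
algebraic_mult_1 = int(np.sum(np.isclose(eigenvalues, 1.0)))

# Geometric multiplicity = dim null(A - lambda*I) = n - rank(A - lambda*I).
n = A.shape[0]
geometric_mult_1 = n - np.linalg.matrix_rank(A - 1.0 * np.eye(n))

assert algebraic_mult_1 == 2
assert geometric_mult_1 == 2  # equal, so lambda = 1 is not defective
```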

## Problem no. 3

### Solution:

In this problem we are given two eigenvectors of the 2×2 matrix mentioned in the problem. The given matrix is an upper triangular matrix, if you observe!

And we know from the properties of such matrices that the eigenvalues of an upper or lower triangular matrix are its principal diagonal elements.

Therefore, the eigenvalues of the given matrix are λ1 = 1 and λ2 = 2.

Now, substituting the matrix A, the first eigenvector (X1), and λ1 into the equation

A · X1 = λ1 · X1,

after carrying out the matrix multiplication, we can calculate the value of a:

1 + 2a = 1, which gives a = 0.

Similarly, substituting the matrix A, the second eigenvector (X2), and λ2 into the equation

A · X2 = λ2 · X2,

after carrying out the matrix multiplication, we can calculate the value of b.
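The triangular-matrix property used above is easy to spot-check; the matrix here is a hypothetical example, not the one from the problem:

```python
import numpy as np

# Hypothetical upper triangular matrix (the problem's actual matrix is
# not reproduced in the text); the property holds for any triangular matrix.
A = np.array([[1.0, 2.0],
              [0.0, 2.0]])

# Eigenvalues of a triangular matrix are its principal diagonal elements.
eigenvalues = np.sort(np.linalg.eigvals(A))
assert np.allclose(eigenvalues, np.sort(np.diag(A)))
```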

Here is the most important definition in this text.

##### Definition

The German prefix “eigen” roughly translates to “self” or “own”. An eigenvector of \(A\) is a vector that is taken to a multiple of itself by the matrix transformation \(x \mapsto Ax\), which perhaps explains the terminology. On the other hand, “eigen” is often translated as “characteristic”; we may think of an eigenvector as describing an intrinsic, or characteristic, property of \(A\).

Eigenvalues and eigenvectors are defined only for square matrices.

Eigenvectors are by definition nonzero. Eigenvalues may be equal to zero.

We do not consider the zero vector to be an eigenvector: since \(A\mathbf{0} = \mathbf{0} = \lambda\mathbf{0}\) for every scalar \(\lambda\), the associated eigenvalue would be undefined.

If someone hands you a matrix \(A\) and a vector \(v\), it is easy to check whether \(v\) is an eigenvector of \(A\): simply multiply \(v\) by \(A\) and see whether \(Av\) is a scalar multiple of \(v\). On the other hand, given just the matrix \(A\), it is not obvious at all how to find the eigenvectors. We will learn how to do this in Section 6.2.
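The "easy to check" direction can be sketched as a small helper; the matrix used below is a hypothetical example:

```python
import numpy as np

def is_eigenvector(A, v, tol=1e-9):
    """Check whether v is an eigenvector of A: v must be nonzero and
    Av must be a scalar multiple of v."""
    v = np.asarray(v, dtype=float)
    if np.allclose(v, 0.0):
        return False  # the zero vector is never an eigenvector
    Av = A @ v
    # Av is a multiple of v iff stacking v and Av gives a rank <= 1 matrix.
    return np.linalg.matrix_rank(np.vstack([v, Av]), tol=tol) <= 1

# Hypothetical example matrix, not one from the text.
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
assert is_eigenvector(A, [1.0, 0.0])      # A v = 2 v
assert not is_eigenvector(A, [1.0, 1.0])  # A v = (2, 3) is not a multiple of v
```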

##### Example (An eigenvector and its eigenvalue)

An eigenvector of \(A\) is a vector \(v\) such that \(v\) and \(Av\) are collinear with the origin, that is, they lie on the same line through the origin. In this case \(Av = \lambda v\), and the eigenvalue \(\lambda\) is the scaling factor.

For matrices that arise as the standard matrix of a linear transformation, it is often best to draw a picture, then find the eigenvectors and eigenvalues geometrically by studying which vectors are not moved off of their line. For a transformation that is defined geometrically, it is not even necessary to compute its matrix to find the eigenvectors and eigenvalues.

## Mathematics Prelims

Now since \(P\) is a change of basis matrix, each of its columns gives the coordinates of a basis vector of some basis. Let’s call that basis \(B\) and let \(b_1\) through \(b_n\) be the elements of that basis. Now, if we take the equation \(A = PDP^{-1}\), with \(D\) diagonal, and multiply by \(P\) on the right, notice that \(AP = PD\).

That is, the \(j\)-th column of \(AP\) is equal to the \(j\)-th column of \(PD\), which is just \(\lambda_j\) (the \(j\)-th diagonal entry of \(D\)) times the \(j\)-th column of \(P\). Since each column of \(AP\) is just \(A\) applied to the corresponding column of \(P\), though, we have \(Ab_j = \lambda_j b_j\).

This means that when we plug the \(j\)-th column of \(P\) into the linear transformation represented by \(A\), we get back a multiple of that column. Calling the linear transformation \(T\), we have that \(T(b_j) = \lambda_j b_j\).

Vectors such as \(b_j\), whose image under \(T\) is just a multiple of the vector, are called eigenvectors of \(T\). That multiple, the \(\lambda_j\) above, is called an eigenvalue of \(T\). These eigenvectors and eigenvalues are associated with a particular linear transformation, so when we talk about the eigenvectors and eigenvalues of a matrix, we really mean the eigenvectors and eigenvalues of the transformation represented by that matrix. Notice that this means that eigenvalues are independent of the chosen basis: since similar matrices represent the same transformation, just with respect to different bases, similar matrices have the same eigenvalues.

We assumed that \(A\) was similar to a diagonal matrix above, but this isn’t always true. If \(A\) is similar to a diagonal matrix, say \(D = P^{-1}AP\), then as we’ve just shown, the columns of \(P\) are eigenvectors of \(A\). Since these form the columns of a non-singular matrix, the eigenvectors of \(A\) form a basis for the vector space. Conversely, if the eigenvectors of \(A\) form a basis, we can take those basis vectors as the columns of \(P\).

So a matrix is diagonalizable (similar to a diagonal matrix) if and only if its eigenvectors form a basis for the vector space.
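This characterisation can be checked numerically: if the eigenvector matrix \(P\) is non-singular, then \(P^{-1}AP\) is diagonal. A sketch with a hypothetical symmetric (hence diagonalizable) matrix:

```python
import numpy as np

# Hypothetical diagonalizable matrix (symmetric, so it always is).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, P = np.linalg.eig(A)  # columns of P are eigenvectors

# The eigenvectors form a basis iff P is non-singular; then
# D = P^{-1} A P is diagonal, i.e. A is diagonalizable.
assert np.linalg.matrix_rank(P) == A.shape[0]
D = np.linalg.inv(P) @ A @ P
assert np.allclose(D, np.diag(eigenvalues))
```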

a) Scalar multiplication: if we multiply a column of a matrix by k the determinant is multiplied by k.

b) Vector addition: if one column is written as a sum of two vectors, the determinant equals the sum of the two corresponding determinants (the determinant is linear in each column separately, not over matrix addition).

c) If the vectors are linearly dependent, the determinant is equal to zero.

d) The determinant of an identity matrix is equal to one.

e) If we swap two columns/rows, the sign of the determinant changes.

f) If A is a square matrix, then det(A^T) = det(A).

g) If A and B are square n × n matrices, then det(AB) = det(A) · det(B).
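Properties a), e), f) and g) can be spot-checked numerically on random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# f) det(A^T) = det(A)
assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))

# g) det(AB) = det(A) * det(B)
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))

# e) swapping two rows flips the sign of the determinant
A_swapped = A[[1, 0, 2], :]
assert np.isclose(np.linalg.det(A_swapped), -np.linalg.det(A))

# a) scaling one column by k scales the determinant by k
k = 5.0
A_scaled = A.copy()
A_scaled[:, 0] *= k
assert np.isclose(np.linalg.det(A_scaled), k * np.linalg.det(A))
```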

## Eigenvalue equations in linear algebra

First of all, let us review eigenvalue equations in linear algebra. Assume that we have a (square) matrix \(A\) with dimensions \(n \times n\), and that \(v\) is a column vector in \(n\) dimensions. The corresponding eigenvalue equation will be of the form \(Av = \lambda v\), with \(\lambda\) being a scalar number (real or complex, depending on the type of vector space). We can express the previous equation in terms of its components, assuming as usual some specific choice of basis, by using the rules of matrix multiplication: \(\sum_j A_{ij} v_j = \lambda v_i\). The scalar \(\lambda\) is known as the eigenvalue of the equation, while the vector \(v\) is known as the associated eigenvector.

The key feature of such equations is that applying the matrix \(A\) to the vector \(v\) returns the original vector up to an overall rescaling, \(Av = \lambda v\). In general there will be multiple solutions to the eigenvalue equation, each one characterised by a specific eigenvalue and eigenvector. Note that in some cases one has degenerate solutions, whereby two or more of the eigenvalues coincide.

In order to determine the eigenvalues of the matrix \(A\), we need to evaluate the solutions of the so-called characteristic equation of the matrix, defined as \(\det(A - \lambda I) = 0\), where \(I\) is the identity matrix of dimensions \(n \times n\), and \(\det\) is the determinant.

This relation follows from the eigenvalue equation written in terms of components, \((A - \lambda I)v = 0\). Therefore the eigenvalue condition can be written as a set of \(n\) coupled linear equations which only admit non-trivial solutions if the determinant of the matrix \(A - \lambda I\) vanishes (the so-called Cramer condition), thus leading to the characteristic equation.

Once we have solved the characteristic equation, we end up with \(n\) eigenvalues \(\lambda_i\), \(i = 1, \dots, n\), not necessarily all distinct.

We can then determine the corresponding eigenvector \(v_i\) by solving the corresponding system of linear equations \((A - \lambda_i I)v_i = 0\).

Let us remind ourselves that in two dimensions the determinant of a matrix is evaluated as \(\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc\), while the corresponding expression for a matrix belonging to a vector space in three dimensions is given in terms of the previous expression via the cofactor (Laplace) expansion along a row or column.

Let us illustrate how to compute eigenvalues and eigenvectors by considering a two-dimensional vector space. The characteristic equation associated with the matrix under consideration is a quadratic equation, which we know how to solve exactly, and its two roots are the eigenvalues.

Next we can determine the associated eigenvectors \(v_1\) and \(v_2\). For the first one, the equation that needs to be solved yields a condition fixing the ratio of the components: an important property of eigenvalue equations is that the eigenvectors are only fixed up to an overall normalisation. This should be clear from the definition: if a vector \(v\) satisfies \(Av = \lambda v\), then the vector \(cv\), with \(c\) some constant, will also satisfy the same equation. We then find the eigenvector associated with the first eigenvalue, and indeed one can check that it satisfies the eigenvalue equation, as we wanted to demonstrate. As an exercise, you can try to obtain the expression of the eigenvector corresponding to the second eigenvalue.
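The normalisation freedom is easy to verify numerically; the matrix here is a hypothetical example:

```python
import numpy as np

# If A v = lam v, then any rescaling c*v satisfies the same equation.
# Hypothetical example: the reflection matrix swapping the two axes.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
lam, v = 1.0, np.array([1.0, 1.0])  # A v = v

assert np.allclose(A @ v, lam * v)
for c in (2.0, -3.5, 0.1):
    assert np.allclose(A @ (c * v), lam * (c * v))
```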

## Example

Say we have to find the eigenvalues and eigenvectors of a matrix G.

First we will obtain the characteristic equation from matrix G,

then we expand the determinant to form an equation in terms of lambda.

Finally we will find the values of lambda (the eigenvalues) by solving the equation.

We have the eigenvalues; now we have to find the eigenvectors. Starting with lambda = 5:

After performing matrix multiplication we get

The ratio of x11 to x12 is 1 : (-1), so the first eigenvector of matrix G is

Similarly, we can find the eigenvector of matrix G when lambda = (-1),

and the second eigenvector of matrix G is
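The text does not reproduce G itself, so the sketch below uses a hypothetical stand-in chosen to match what is stated above: eigenvalues 5 and -1, with the eigenvector for 5 having component ratio 1 : (-1):

```python
import numpy as np

# Hypothetical stand-in for G (the original matrix is not reproduced in
# the text): this choice has eigenvalues 5 and -1, with eigenvector
# (1, -1)^T for lambda = 5, matching the ratio worked out above.
G = np.array([[ 2.0, -3.0],
              [-3.0,  2.0]])

eigenvalues = np.sort(np.linalg.eigvals(G))
assert np.allclose(eigenvalues, [-1.0, 5.0])

v1 = np.array([1.0, -1.0])
assert np.allclose(G @ v1, 5.0 * v1)   # eigenpair for lambda = 5

v2 = np.array([1.0, 1.0])
assert np.allclose(G @ v2, -1.0 * v2)  # eigenpair for lambda = -1
```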

## Finding eigenvectors

After you have found the eigenvalues, you are now ready to find the eigenvector (or eigenvectors) for each eigenvalue.

To find the eigenvector (or eigenvectors) associated with a given eigenvalue, solve for ##\vec x## in the matrix equation ##(A - \lambda I)\vec x = \vec 0##. This action must be performed for each eigenvalue.

Example 2: Find the eigenvectors for the matrix ##A = \begin{bmatrix} 1 & 3 \\ -1 & 5 \end{bmatrix}.##

(This is the same matrix as in Example 1.)

#### Work for ##\lambda = 4##

To find an eigenvector associated with ##\lambda = 4##, we are going to solve the matrix equation ##(A - 4I)\vec x = \vec 0## for ##\vec x##. Rather than write the matrix equation out as a system of equations, I’m going to take a shortcut, and use row reduction on the matrix ##A - 4I.## After row reduction, I’ll write the system of equations that is represented by the reduced matrix.

In the work shown here, I’m assuming that you are able to solve a system of equations in matrix form, using row operations to get an equivalent matrix in reduced row-echelon form. Using row operations, we find that ##A - 4I## is equivalent to ##\begin{bmatrix} 1 & -1 \\ 0 & 0 \end{bmatrix}.##

The last matrix represents the single equation ##x_1 - x_2 = 0## (the second row gives ##0 = 0##).

We can write this as ##\vec x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = x_2\begin{bmatrix} 1 \\ 1 \end{bmatrix}##, where ##x_2## is a parameter.

An eigenvector for ##\lambda = 4## is ##\begin{bmatrix} 1 \\ 1 \end{bmatrix}.##

This is not the only possible eigenvector for ##\lambda = 4##: any scalar multiple (except the zero multiple) will also be an eigenvector.

As a check, satisfy yourself that ##\begin{bmatrix} 1 & 3 \\ -1 & 5 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} = 4\begin{bmatrix} 1 \\ 1 \end{bmatrix}##, thus showing that ##A\vec x = \lambda \vec x## for our eigenvalue/eigenvector pair.

#### Work for ##\lambda = 2##

Using row operations to get ##A - 2I## in reduced row-echelon form, we find that it is equivalent to ##\begin{bmatrix} 1 & -3 \\ 0 & 0 \end{bmatrix}.##

This matrix represents the single equation ##x_1 - 3x_2 = 0##.

We can write this as ##\vec x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = x_2\begin{bmatrix} 3 \\ 1 \end{bmatrix}##, where ##x_2## is a parameter.

An eigenvector for ##\lambda = 2## is ##\begin{bmatrix} 3 \\ 1 \end{bmatrix}.##

As a check, satisfy yourself that ##\begin{bmatrix} 1 & 3 \\ -1 & 5 \end{bmatrix} \begin{bmatrix} 3 \\ 1 \end{bmatrix} = 2\begin{bmatrix} 3 \\ 1 \end{bmatrix}##.
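Both checks for Example 2 can be automated; this snippet verifies ##A\vec x = \lambda \vec x## for the two eigenpairs found above:

```python
import numpy as np

# The matrix from Example 2, with the eigenpairs found above.
A = np.array([[ 1.0, 3.0],
              [-1.0, 5.0]])

pairs = [(4.0, np.array([1.0, 1.0])),
         (2.0, np.array([3.0, 1.0]))]

for lam, v in pairs:
    assert np.allclose(A @ v, lam * v)  # A v = lambda v
```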

For the final example, we’ll look at a 3 x 3 matrix.

Example 3: Find the eigenvalues and eigenvectors for the matrix ##A = \begin{bmatrix} 1 & 0 & -4 \\ 0 & 5 & 4 \\ -4 & 4 & 3 \end{bmatrix}.##

Because this example deals with a 3 x 3 matrix instead of the 2 x 2 matrices of the previous examples, the work is considerably longer. The solution I provide won’t show the level of detail of the previous examples. I leave it to readers of this article to flesh out the details I have omitted.

(Part A – Finding the eigenvalues)

Set ##|A - \lambda I|## to 0 and solve for ##\lambda##.

##Rightarrow egin 1 – lambda & 0 & -4 0 & 5 – lambda & 4 -4 & 4 & 3 – lambda end = 0##

##\Rightarrow -\lambda^3 + 9\lambda^2 + 9\lambda - 81 = 0##

##\Rightarrow (\lambda - 9)(\lambda^2 - 9) = 0##

∴ The eigenvalues are ##\lambda = 9##, ##\lambda = 3##, and ##\lambda = -3.##

I’ve skipped a lot of steps above, so you should convince yourself, by expanding the determinant and factoring the resulting third-degree polynomial, that the values shown are the correct ones.
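One way to convince yourself is to let NumPy expand the determinant; `np.poly` returns the coefficients of ##\det(\lambda I - A)##, which for a 3 x 3 matrix differs from ##\det(A - \lambda I)## only by an overall sign, so the roots agree:

```python
import numpy as np

# The matrix from Example 3.
A = np.array([[ 1.0, 0.0, -4.0],
              [ 0.0, 5.0,  4.0],
              [-4.0, 4.0,  3.0]])

# Coefficients of det(lambda*I - A), highest power first:
# lambda^3 - 9*lambda^2 - 9*lambda + 81.
coeffs = np.poly(A)
assert np.allclose(coeffs, [1.0, -9.0, -9.0, 81.0])

# The roots are the eigenvalues 9, 3, and -3.
assert np.allclose(np.sort(np.roots(coeffs)), [-3.0, 3.0, 9.0])
```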

(Part B – Finding the eigenvectors)

I’ll show an outline of the work for ##\lambda = 9##, but will just show the results for the other two eigenvalues, ##\lambda = 3## and ##\lambda = -3##.

#### Work for ##\lambda = 9##

The matrix ##A - 9I## is row equivalent to ##\begin{bmatrix} 2 & 0 & 1 \\ 0 & 1 & -1 \\ 2 & -2 & 3 \end{bmatrix}.##

Using row operations to put this matrix in reduced row-echelon form, we arrive at this fully reduced matrix: ##\begin{bmatrix} 1 & 0 & \frac 1 2 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{bmatrix}.##

This matrix represents the following system of equations: ##x_1 = -\frac 1 2 x_3##, ##x_2 = x_3##.

We can write this system in vector form, as

##vec = egin x_1 x_2 x_3 end = x_3egin -frac 1 2 1 1end##, where ##x_3## is a parameter.

An eigenvector for ##\lambda = 9## is ##\begin{bmatrix} -\frac 1 2 \\ 1 \\ 1 \end{bmatrix}.##

Any nonzero multiple of this eigenvector is also an eigenvector, so we could just as well have chosen ##\begin{bmatrix} -1 \\ 2 \\ 2 \end{bmatrix}## for the eigenvector.

As before, you should always check your work, by verifying that ##\begin{bmatrix} 1 & 0 & -4 \\ 0 & 5 & 4 \\ -4 & 4 & 3 \end{bmatrix} \begin{bmatrix} -1 \\ 2 \\ 2 \end{bmatrix} = 9 \begin{bmatrix} -1 \\ 2 \\ 2 \end{bmatrix}.##

#### Results for ##\lambda = 3## and ##\lambda = -3##

Using the same procedure as above, I find that an eigenvector for ##\lambda = 3## is ##\begin{bmatrix} -2 \\ -2 \\ 1 \end{bmatrix}##, and that an eigenvector for ##\lambda = -3## is ##\begin{bmatrix} 1 \\ -\frac 1 2 \\ 1 \end{bmatrix}.## If you wish to avoid fractions, it’s convenient to choose ##\begin{bmatrix} 2 \\ -1 \\ 2 \end{bmatrix}## as an eigenvector for ##\lambda = -3.##

#### Summary for Example 3

For the matrix of this example, the eigenvalues are ##\lambda = 9##, ##\lambda = 3##, and ##\lambda = -3.## In the same order, a set of eigenvectors for these eigenvalues is ##\left\{ \begin{bmatrix} -1 \\ 2 \\ 2 \end{bmatrix}, \begin{bmatrix} -2 \\ -2 \\ 1 \end{bmatrix}, \begin{bmatrix} 2 \\ -1 \\ 2 \end{bmatrix} \right\}.##
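As a final check, the three eigenpairs from the summary can be verified in one loop:

```python
import numpy as np

A = np.array([[ 1.0, 0.0, -4.0],
              [ 0.0, 5.0,  4.0],
              [-4.0, 4.0,  3.0]])

# The eigenvalue/eigenvector pairs from the summary.
pairs = [( 9.0, np.array([-1.0,  2.0, 2.0])),
         ( 3.0, np.array([-2.0, -2.0, 1.0])),
         (-3.0, np.array([ 2.0, -1.0, 2.0]))]

for lam, v in pairs:
    assert np.allclose(A @ v, lam * v)  # A v = lambda v
```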
