8: The Eigenvalue Problem


Thumbnail: Mona Lisa with shear, eigenvector, and grid. Image used with permission (Public domain; TreyGreer62).


Eigenvalue problem associated with nonhomogeneous integro-differential operators

In this paper, we establish the existence of two positive constants \(\lambda_0\) and \(\lambda_1\) with \(\lambda_0 \leqslant \lambda_1\), such that any \(\lambda \in [\lambda_1, \infty)\) is an eigenvalue, while any \(\lambda \in (0, \lambda_0)\) is not an eigenvalue, for a Kirchhoff type problem driven by nonlocal operators of elliptic type in a fractional Orlicz-Sobolev space, with Dirichlet boundary conditions.



8: The Eigenvalue Problem

As we did in the previous section we need to again note that we are only going to give a brief look at the topic of eigenvalues and eigenfunctions for boundary value problems. There are quite a few ideas that we’ll not be looking at here. The intent of this section is simply to give you an idea of the subject and to do enough work to allow us to solve some basic partial differential equations in the next chapter.

Now, before we start talking about the actual subject of this section let’s recall a topic from Linear Algebra that we briefly discussed previously in these notes. For a given square matrix, \(A\), if we could find values of \(\lambda\) for which we could find nonzero solutions, i.e. \(\vec x \ne \vec 0\), to,

\[A\vec x = \lambda \vec x\]

then we called \(\lambda\) an eigenvalue of \(A\) and \(\vec x\) was its corresponding eigenvector.

It’s important to recall here that in order for \(\lambda\) to be an eigenvalue then we had to be able to find nonzero solutions to the equation.
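As a small illustration (a Python sketch, not part of the original notes), NumPy's eig exhibits exactly this definition:

    # For this A, lambda = 3 is an eigenvalue with eigenvector (1, 1),
    # since A x = 3 x for the nonzero vector x = (1, 1).
    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    lam, X = np.linalg.eig(A)
    print(lam)                              # eigenvalues, here 3 and 1
    print(A @ X[:, 0] - lam[0] * X[:, 0])   # ~0: A x = lambda x holds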

So, just what does this have to do with boundary value problems? Well go back to the previous section and take a look at Example 7 and Example 8. In those two examples we solved homogeneous (and that’s important!) BVP’s in the form,

\[y'' + \lambda y = 0 \hspace{0.25in} y(0) = 0 \hspace{0.25in} y(2\pi) = 0\]

In Example 7 we had \(\lambda = 4\) and we found nontrivial (i.e. nonzero) solutions to the BVP. In Example 8 we used \(\lambda = 3\) and the only solution was the trivial solution (i.e. \(y(x) = 0\)). So, this homogeneous BVP (recall this also means the boundary conditions are zero) seems to exhibit similar behavior to the behavior in the matrix equation above. There are values of \(\lambda\) that will give nontrivial solutions to this BVP and values of \(\lambda\) that will only admit the trivial solution.

So, for those values of \(\lambda\) that give nontrivial solutions we’ll call \(\lambda\) an eigenvalue for the BVP and the nontrivial solutions will be called eigenfunctions for the BVP corresponding to the given eigenvalue.

We now know that for the homogeneous BVP above \(\lambda = 4\) is an eigenvalue (with eigenfunctions \(y(x) = \sin(2x)\)) and that \(\lambda = 3\) is not an eigenvalue.

Eventually we’ll try to determine if there are any other eigenvalues for this BVP, however before we do that let’s comment briefly on why it is so important for the BVP to be homogeneous in this discussion. In Example 2 and Example 3 of the previous section we solved the homogeneous differential equation

\[y'' + 4y = 0\]

with two different nonhomogeneous boundary conditions in the form,

\[y(0) = a \hspace{0.25in} y(2\pi) = b\]

In these two examples we saw that by simply changing the value of \(a\) and/or \(b\) we were able to get either nontrivial solutions or to force no solution at all. In the discussion of eigenvalues/eigenfunctions we need solutions to exist and the only way to assure this behavior is to require that the boundary conditions also be homogeneous. In other words, we need for the BVP to be homogeneous.

There is one final topic that we need to discuss before we move into the topic of eigenvalues and eigenfunctions and this is more of a notational issue that will help us with some of the work that we’ll need to do.

Let’s suppose that we have a second order differential equation and its characteristic polynomial has two real, distinct roots and that they are in the form

\[r_1 = \alpha \hspace{0.25in} r_2 = -\alpha\]

Then we know that the solution is,

\[y(x) = c_1 e^{\alpha x} + c_2 e^{-\alpha x}\]

While there is nothing wrong with this solution let’s do a little rewriting of this. We’ll start by splitting up the terms as follows,

\[y(x) = \frac{c_1}{2} e^{\alpha x} + \frac{c_1}{2} e^{\alpha x} + \frac{c_2}{2} e^{-\alpha x} + \frac{c_2}{2} e^{-\alpha x}\]

Now we’ll add/subtract the following terms (note we’re “mixing” the \(c_i\) and \(\pm\,\alpha\) up in the new terms) to get,

\[y(x) = \frac{c_1}{2} e^{\alpha x} + \frac{c_1}{2} e^{-\alpha x} - \frac{c_1}{2} e^{-\alpha x} + \frac{c_1}{2} e^{\alpha x} + \frac{c_2}{2} e^{-\alpha x} + \frac{c_2}{2} e^{\alpha x} - \frac{c_2}{2} e^{\alpha x} + \frac{c_2}{2} e^{-\alpha x}\]

Next, rearrange terms around a little,

\[y(x) = \frac{c_1}{2}\left(e^{\alpha x} + e^{-\alpha x}\right) + \frac{c_2}{2}\left(e^{\alpha x} + e^{-\alpha x}\right) + \frac{c_1}{2}\left(e^{\alpha x} - e^{-\alpha x}\right) - \frac{c_2}{2}\left(e^{\alpha x} - e^{-\alpha x}\right)\]

Finally, the quantities in parenthesis factor and we’ll move the location of the fraction as well. Doing this, as well as renaming the new constants we get,

\[y(x) = \left(c_1 + c_2\right)\frac{e^{\alpha x} + e^{-\alpha x}}{2} + \left(c_1 - c_2\right)\frac{e^{\alpha x} - e^{-\alpha x}}{2} = \bar c_1\,\frac{e^{\alpha x} + e^{-\alpha x}}{2} + \bar c_2\,\frac{e^{\alpha x} - e^{-\alpha x}}{2}\]

All this work probably seems very mysterious and unnecessary. However there really was a reason for it. In fact, you may have already seen the reason, at least in part. The two “new” functions that we have in our solution are in fact two of the hyperbolic functions. In particular,

\[\cosh(x) = \frac{e^x + e^{-x}}{2} \hspace{0.25in} \sinh(x) = \frac{e^x - e^{-x}}{2}\]

So, another way to write the solution to a second order differential equation whose characteristic polynomial has two real, distinct roots in the form \(r_1 = \alpha,\ r_2 = -\alpha\) is,

\[y(x) = c_1 \cosh(\alpha x) + c_2 \sinh(\alpha x)\]
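As a quick sanity check (again a sketch, not part of the original notes), SymPy confirms that the hyperbolic form is just the exponential form with renamed constants:

    # Verify that c1*e^(a*x) + c2*e^(-a*x) equals
    # (c1 + c2)*cosh(a*x) + (c1 - c2)*sinh(a*x).
    import sympy as sp

    x, a, c1, c2 = sp.symbols('x a c1 c2')
    exp_form = c1*sp.exp(a*x) + c2*sp.exp(-a*x)
    hyp_form = (c1 + c2)*sp.cosh(a*x) + (c1 - c2)*sp.sinh(a*x)
    print(sp.simplify(exp_form - hyp_form))   # prints 0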

Having the solution in this form for some (actually most) of the problems we’ll be looking at will make our life a lot easier. The hyperbolic functions have some very nice properties that we can (and will) take advantage of.

First, since we’ll be needing them later on, the derivatives are,

\[\frac{d}{dx}\big(\cosh(x)\big) = \sinh(x) \hspace{0.25in} \frac{d}{dx}\big(\sinh(x)\big) = \cosh(x)\]

Next let’s take a quick look at the graphs of these functions.

Note that \(\cosh(0) = 1\) and \(\sinh(0) = 0\). Because we’ll often be working with boundary conditions at \(x = 0\) these will be useful evaluations.

Next, and possibly more importantly, let’s notice that \(\cosh(x) > 0\) for all \(x\) and so the hyperbolic cosine will never be zero. Likewise, we can see that \(\sinh(x) = 0\) only if \(x = 0\). We will be using both of these facts in some of our work so we shouldn’t forget them.

Okay, now that we’ve got all that out of the way let’s work an example to see how we go about finding eigenvalues/eigenfunctions for a BVP.

Example 1 Solve the following BVP.

\[y'' + \lambda y = 0 \hspace{0.25in} y(0) = 0 \hspace{0.25in} y(2\pi) = 0\]

We started off this section looking at this BVP and we already know one eigenvalue (\(\lambda = 4\)) and we know one value of \(\lambda\) that is not an eigenvalue (\(\lambda = 3\)). As we go through the work here we need to remember that we will get an eigenvalue for a particular value of \(\lambda\) if we get non-trivial solutions of the BVP for that particular value of \(\lambda\).

In order to know that we’ve found all the eigenvalues we can’t just start randomly trying values of \(\lambda\) to see if we get non-trivial solutions or not. Luckily there is a way to do this that’s not too bad and will give us all the eigenvalues/eigenfunctions. We are going to have to do some cases however. The three cases that we will need to look at are: \(\lambda > 0\), \(\lambda = 0\), and \(\lambda < 0\). Each of these cases gives a specific form of the solution to the BVP to which we can then apply the boundary conditions to see if we’ll get non-trivial solutions or not. So, let’s get started on the cases.

\(\underline{\lambda > 0}\)
In this case the characteristic polynomial we get from the differential equation is,

\[r^2 + \lambda = 0 \hspace{0.25in} \Rightarrow \hspace{0.25in} r_{1,2} = \pm\sqrt{-\lambda}\]

In this case since we know that \(\lambda > 0\) these roots are complex and we can write them instead as,

\[r_{1,2} = \pm\sqrt{\lambda}\,i\]

The general solution to the differential equation is then,

\[y(x) = c_1 \cos\big(\sqrt{\lambda}\,x\big) + c_2 \sin\big(\sqrt{\lambda}\,x\big)\]

Applying the first boundary condition gives us,

\[0 = y(0) = c_1\]

So, taking this into account and applying the second boundary condition we get,

\[0 = y(2\pi) = c_2 \sin\big(2\pi\sqrt{\lambda}\big)\]

This means that we have to have one of the following,

\[c_2 = 0 \hspace{0.25in} \text{or} \hspace{0.25in} \sin\big(2\pi\sqrt{\lambda}\big) = 0\]

However, recall that we want non-trivial solutions and if we have the first possibility we will get the trivial solution for all values of \(\lambda > 0\). Therefore, let’s assume that \(c_2 \ne 0\). This means that we have,

\[\sin\big(2\pi\sqrt{\lambda}\big) = 0 \hspace{0.25in} \Rightarrow \hspace{0.25in} 2\pi\sqrt{\lambda} = n\pi \hspace{0.25in} n = 1,2,3,\ldots\]

In other words, taking advantage of the fact that we know where sine is zero we can arrive at the second equation. Also note that because we are assuming that \(\lambda > 0\) we know that \(2\pi\sqrt{\lambda} > 0\) and so \(n\) can only be a positive integer for this case.

Now all we have to do is solve this for \(\lambda\) and we’ll have all the positive eigenvalues for this BVP.

The positive eigenvalues are then,

\[\lambda_n = \left(\frac{n}{2}\right)^2 = \frac{n^2}{4} \hspace{0.25in} n = 1,2,3,\ldots\]

and the eigenfunctions that correspond to these eigenvalues are,

\[y_n(x) = \sin\left(\frac{n\,x}{2}\right) \hspace{0.25in} n = 1,2,3,\ldots\]

Note that we subscripted an \(n\) on the eigenvalues and eigenfunctions to denote the fact that there is one for each of the given values of \(n\). Also note that we dropped the \(c_2\) on the eigenfunctions. For eigenfunctions we are only interested in the function itself and not the constant in front of it and so we generally drop that.

Let’s now move into the second case.

\(\underline{\lambda = 0}\)
In this case the BVP becomes,

\[y'' = 0 \hspace{0.25in} y(0) = 0 \hspace{0.25in} y(2\pi) = 0\]

and integrating the differential equation a couple of times gives us the general solution,

\[y(x) = c_1 + c_2 x\]

Applying the first boundary condition gives,

\[0 = y(0) = c_1\]

Applying the second boundary condition as well as the results of the first boundary condition gives,

\[0 = y(2\pi) = 2\pi c_2\]

Here, unlike the first case, we don’t have a choice on how to make this zero. This will only be zero if \(c_2 = 0\).

Therefore, for this BVP (and that’s important), if we have \(\lambda = 0\) the only solution is the trivial solution and so \(\lambda = 0\) cannot be an eigenvalue for this BVP.

Now let’s look at the final case.

\(\underline{\lambda < 0}\)
In this case the characteristic equation and its roots are the same as in the first case. So, we know that,

\[r_{1,2} = \pm\sqrt{-\lambda}\]

However, because we are assuming \(\lambda < 0\) here these are now two real distinct roots and so using our work above for these kinds of real, distinct roots we know that the general solution will be,

\[y(x) = c_1 \cosh\big(\sqrt{-\lambda}\,x\big) + c_2 \sinh\big(\sqrt{-\lambda}\,x\big)\]

Note that we could have used the exponential form of the solution here, but our work will be significantly easier if we use the hyperbolic form of the solution here.

Now, applying the first boundary condition gives,

\[0 = y(0) = c_1 \cosh(0) + c_2 \sinh(0) = c_1(1) + c_2(0) = c_1 \hspace{0.25in} \Rightarrow \hspace{0.25in} c_1 = 0\]

Applying the second boundary condition gives,

\[0 = y(2\pi) = c_2 \sinh\big(2\pi\sqrt{-\lambda}\big)\]

Because we are assuming \(\lambda < 0\) we know that \(2\pi\sqrt{-\lambda} \ne 0\) and so we also know that \(\sinh\big(2\pi\sqrt{-\lambda}\big) \ne 0\). Therefore, much like the second case, we must have \(c_2 = 0\).

So, for this BVP (again that’s important), if we have \(\lambda < 0\) we only get the trivial solution and so there are no negative eigenvalues.

In summary then we will have the following eigenvalues/eigenfunctions for this BVP.

\[\lambda_n = \frac{n^2}{4} \hspace{0.25in} y_n(x) = \sin\left(\frac{n\,x}{2}\right) \hspace{0.25in} n = 1,2,3,\ldots\]
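A quick numerical cross-check of this summary (a Python sketch, not part of the original notes; the discretization is our own choice):

    # Discretize y'' + lambda*y = 0 on [0, 2*pi] with y(0) = y(2*pi) = 0 by
    # central finite differences and compare the smallest eigenvalues of
    # -D2 against n^2/4.
    import numpy as np

    N = 400
    h = 2*np.pi / N
    main = -2.0*np.ones(N - 1)              # interior nodes only (Dirichlet BCs)
    off = np.ones(N - 2)
    D2 = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2

    lam = np.sort(np.linalg.eigvalsh(-D2))
    print(lam[:4])                          # approx [0.25, 1.0, 2.25, 4.0]
    print([n**2/4 for n in (1, 2, 3, 4)])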

Let’s take a look at another example with slightly different boundary conditions.

Example 2 Solve the following BVP.

\[y'' + \lambda y = 0 \hspace{0.25in} y'(0) = 0 \hspace{0.25in} y'(2\pi) = 0\]

Here we are going to work with derivative boundary conditions. The work is pretty much identical to the previous example however so we won’t put in quite as much detail here. We’ll need to go through all three cases just as the previous example so let’s get started on that.

\(\underline{\lambda > 0}\)
The general solution to the differential equation is identical to the previous example and so we have,

\[y(x) = c_1 \cos\big(\sqrt{\lambda}\,x\big) + c_2 \sin\big(\sqrt{\lambda}\,x\big)\]

Applying the first boundary condition gives us,

\[0 = y'(0) = \sqrt{\lambda}\,c_2 \hspace{0.25in} \Rightarrow \hspace{0.25in} c_2 = 0\]

Recall that we are assuming that \(\lambda > 0\) here and so this will only be zero if \(c_2 = 0\). Now, the second boundary condition gives us,

\[0 = y'(2\pi) = -\sqrt{\lambda}\,c_1 \sin\big(2\pi\sqrt{\lambda}\big)\]

Recall that we don’t want trivial solutions and that \(\lambda > 0\) so we will only get non-trivial solutions if we require that,

\[\sin\big(2\pi\sqrt{\lambda}\big) = 0 \hspace{0.25in} \Rightarrow \hspace{0.25in} 2\pi\sqrt{\lambda} = n\pi \hspace{0.25in} n = 1,2,3,\ldots\]

Solving for \(\lambda\) we see that we get exactly the same positive eigenvalues for this BVP that we got in the previous example,

\[\lambda_n = \left(\frac{n}{2}\right)^2 = \frac{n^2}{4} \hspace{0.25in} n = 1,2,3,\ldots\]

The eigenfunctions that correspond to these eigenvalues however are,

\[y_n(x) = \cos\left(\frac{n\,x}{2}\right) \hspace{0.25in} n = 1,2,3,\ldots\]
So, for this BVP we get cosines for eigenfunctions corresponding to positive eigenvalues.

\(\underline{\lambda = 0}\)
The general solution is,

\[y(x) = c_1 + c_2 x\]

Applying the first boundary condition gives,

\[0 = y'(0) = c_2\]

Using this the general solution is then,

\[y(x) = c_1\]

and note that this will trivially satisfy the second boundary condition,

\[y'(2\pi) = 0\]

Therefore, unlike the first example, \(\lambda = 0\) is an eigenvalue for this BVP and the eigenfunction corresponding to this eigenvalue is,

\[y(x) = 1\]
Again, note that we dropped the arbitrary constant for the eigenfunctions.

Finally let’s take care of the third case.

\(\underline{\lambda < 0}\)
The general solution here is,

\[y(x) = c_1 \cosh\big(\sqrt{-\lambda}\,x\big) + c_2 \sinh\big(\sqrt{-\lambda}\,x\big)\]

Applying the first boundary condition gives,

\[0 = y'(0) = \sqrt{-\lambda}\,c_1 \sinh(0) + \sqrt{-\lambda}\,c_2 \cosh(0) = \sqrt{-\lambda}\,c_2 \hspace{0.25in} \Rightarrow \hspace{0.25in} c_2 = 0\]

Applying the second boundary condition gives,

\[0 = y'(2\pi) = \sqrt{-\lambda}\,c_1 \sinh\big(2\pi\sqrt{-\lambda}\big)\]

As with the previous example we again know that \(2\pi\sqrt{-\lambda} \ne 0\) and so \(\sinh\big(2\pi\sqrt{-\lambda}\big) \ne 0\). Therefore, we must have \(c_1 = 0\).

So, for this BVP we again have no negative eigenvalues.

In summary then we will have the following eigenvalues/eigenfunctions for this BVP.

\[\lambda_n = \frac{n^2}{4} \hspace{0.25in} y_n(x) = \cos\left(\frac{n\,x}{2}\right) \hspace{0.25in} n = 1,2,3,\ldots\]
\[\lambda_0 = 0 \hspace{0.25in} y_0(x) = 1\]

Notice as well that we can actually combine these if we allow the list of \(n\)’s for the first one to start at zero instead of one. This will often not happen, but when it does we’ll take advantage of it. So the “official” list of eigenvalues/eigenfunctions for this BVP is,

\[\lambda_n = \frac{n^2}{4} \hspace{0.25in} y_n(x) = \cos\left(\frac{n\,x}{2}\right) \hspace{0.25in} n = 0,1,2,3,\ldots\]

So, in the previous two examples we saw that we generally need to consider different cases for \(\lambda\) as different values will often lead to different general solutions. Do not get too locked into the cases we did here. We will mostly be solving this particular differential equation and so it will be tempting to assume that these are always the cases that we’ll be looking at, but there are BVP’s that will require other/different cases.

Also, as we saw in the two examples sometimes one or more of the cases will not yield any eigenvalues. This will often happen, but again we shouldn’t read anything into the fact that we didn’t have negative eigenvalues for either of these two BVP’s. There are BVP’s that will have negative eigenvalues.

Let’s take a look at another example with a very different set of boundary conditions. These are not the traditional boundary conditions that we’ve been looking at to this point, but we’ll see in the next chapter how these can arise from certain physical problems.

So, in this example we aren’t actually going to specify the solution or its derivative at the boundaries. Instead we’ll simply specify that the solution must be the same at the two boundaries and the derivative of the solution must also be the same at the two boundaries. Also, this type of boundary condition will typically be on an interval of the form [-L,L] instead of [0,L] as we’ve been working on to this point.

As mentioned above these kind of boundary conditions arise very naturally in certain physical problems and we’ll see that in the next chapter.

Example 3 Solve the following BVP.

\[y'' + \lambda y = 0 \hspace{0.25in} y(-\pi) = y(\pi) \hspace{0.25in} y'(-\pi) = y'(\pi)\]

As with the previous two examples we still have the standard three cases to look at.

\(\underline{\lambda > 0}\)
The general solution for this case is,

\[y(x) = c_1 \cos\big(\sqrt{\lambda}\,x\big) + c_2 \sin\big(\sqrt{\lambda}\,x\big)\]

Applying the first boundary condition and using the fact that cosine is an even function (i.e. \(\cos(-x) = \cos(x)\)) and that sine is an odd function (i.e. \(\sin(-x) = -\sin(x)\)) gives us,

\[\begin{aligned} c_1 \cos\big(-\pi\sqrt{\lambda}\big) + c_2 \sin\big(-\pi\sqrt{\lambda}\big) &= c_1 \cos\big(\pi\sqrt{\lambda}\big) + c_2 \sin\big(\pi\sqrt{\lambda}\big) \\ c_1 \cos\big(\pi\sqrt{\lambda}\big) - c_2 \sin\big(\pi\sqrt{\lambda}\big) &= c_1 \cos\big(\pi\sqrt{\lambda}\big) + c_2 \sin\big(\pi\sqrt{\lambda}\big) \\ 2 c_2 \sin\big(\pi\sqrt{\lambda}\big) &= 0 \end{aligned}\]
This time, unlike the previous two examples this doesn’t really tell us anything. We could have \(\sin\big(\pi\sqrt{\lambda}\big) = 0\) but it is also completely possible, at this point in the problem anyway, for us to have \(c_2 = 0\) as well.

So, let’s go ahead and apply the second boundary condition and see if we get anything out of that.

\[\begin{aligned} -\sqrt{\lambda}\,c_1 \sin\big(-\pi\sqrt{\lambda}\big) + \sqrt{\lambda}\,c_2 \cos\big(-\pi\sqrt{\lambda}\big) &= -\sqrt{\lambda}\,c_1 \sin\big(\pi\sqrt{\lambda}\big) + \sqrt{\lambda}\,c_2 \cos\big(\pi\sqrt{\lambda}\big) \\ \sqrt{\lambda}\,c_1 \sin\big(\pi\sqrt{\lambda}\big) + \sqrt{\lambda}\,c_2 \cos\big(\pi\sqrt{\lambda}\big) &= -\sqrt{\lambda}\,c_1 \sin\big(\pi\sqrt{\lambda}\big) + \sqrt{\lambda}\,c_2 \cos\big(\pi\sqrt{\lambda}\big) \\ \sqrt{\lambda}\,c_1 \sin\big(\pi\sqrt{\lambda}\big) &= -\sqrt{\lambda}\,c_1 \sin\big(\pi\sqrt{\lambda}\big) \\ 2\sqrt{\lambda}\,c_1 \sin\big(\pi\sqrt{\lambda}\big) &= 0 \end{aligned}\]

So, we get something very similar to what we got after applying the first boundary condition. Since we are assuming that \(\lambda > 0\) this tells us that either \(\sin\big(\pi\sqrt{\lambda}\big) = 0\) or \(c_1 = 0\).

Note however that if \(\sin\big(\pi\sqrt{\lambda}\big) \ne 0\) then we will have to have \(c_1 = c_2 = 0\) and we’ll get the trivial solution. We therefore need to require that \(\sin\big(\pi\sqrt{\lambda}\big) = 0\) and so just as we’ve done for the previous two examples we can now get the eigenvalues,

\[\pi\sqrt{\lambda} = n\pi \hspace{0.25in} \Rightarrow \hspace{0.25in} \lambda_n = n^2 \hspace{0.25in} n = 1,2,3,\ldots\]

Recalling that \(\lambda > 0\) we can see that we do need to start the list of possible \(n\)’s at one instead of zero.

So, we now know the eigenvalues for this case, but what about the eigenfunctions? The solution for a given eigenvalue is,

\[y(x) = c_1 \cos(n\,x) + c_2 \sin(n\,x)\]

and we’ve got no reason to believe that either of the two constants are zero or non-zero for that matter. In cases like these we get two sets of eigenfunctions, one corresponding to each constant. The two sets of eigenfunctions for this case are,

\[y_n(x) = \cos(n\,x) \hspace{0.25in} y_n(x) = \sin(n\,x) \hspace{0.25in} n = 1,2,3,\ldots\]

\(\underline{\lambda = 0}\)
The general solution is,

\[y(x) = c_1 + c_2 x\]

Applying the first boundary condition gives,

\[c_1 - \pi c_2 = c_1 + \pi c_2 \hspace{0.25in} \Rightarrow \hspace{0.25in} c_2 = 0\]

Using this the general solution is then,

\[y(x) = c_1\]

and note that this will trivially satisfy the second boundary condition just as we saw in the second example above. Therefore, we again have \(\lambda = 0\) as an eigenvalue for this BVP and the eigenfunction corresponding to this eigenvalue is,

\[y(x) = 1\]
Finally let’s take care of the third case.

\(\underline{\lambda < 0}\)
The general solution here is,

\[y(x) = c_1 \cosh\big(\sqrt{-\lambda}\,x\big) + c_2 \sinh\big(\sqrt{-\lambda}\,x\big)\]

Applying the first boundary condition and using the fact that hyperbolic cosine is even and hyperbolic sine is odd gives,

\[c_1 \cosh\big(\pi\sqrt{-\lambda}\big) - c_2 \sinh\big(\pi\sqrt{-\lambda}\big) = c_1 \cosh\big(\pi\sqrt{-\lambda}\big) + c_2 \sinh\big(\pi\sqrt{-\lambda}\big) \hspace{0.25in} \Rightarrow \hspace{0.25in} 2 c_2 \sinh\big(\pi\sqrt{-\lambda}\big) = 0\]

Now, in this case we are assuming that \(\lambda < 0\) and so we know that \(\pi\sqrt{-\lambda} \ne 0\) which in turn tells us that \(\sinh\big(\pi\sqrt{-\lambda}\big) \ne 0\). We therefore must have \(c_2 = 0\).

Let’s now apply the second boundary condition to get,

\[-\sqrt{-\lambda}\,c_1 \sinh\big(\pi\sqrt{-\lambda}\big) = \sqrt{-\lambda}\,c_1 \sinh\big(\pi\sqrt{-\lambda}\big) \hspace{0.25in} \Rightarrow \hspace{0.25in} 2\sqrt{-\lambda}\,c_1 \sinh\big(\pi\sqrt{-\lambda}\big) = 0\]

By our assumption on \(\lambda\) we again have no choice here but to have \(c_1 = 0\).

Therefore, in this case the only solution is the trivial solution and so, for this BVP we again have no negative eigenvalues.

In summary then we will have the following eigenvalues/eigenfunctions for this BVP.

\[\begin{aligned} \lambda_n &= n^2 & y_n(x) &= \sin(n\,x) & n &= 1,2,3,\ldots \\ \lambda_n &= n^2 & y_n(x) &= \cos(n\,x) & n &= 1,2,3,\ldots \\ \lambda_0 &= 0 & y_0(x) &= 1 \end{aligned}\]

Note that we’ve acknowledged that for \(\lambda > 0\) we had two sets of eigenfunctions by listing them each separately. Also, we can again combine the last two into one set of eigenvalues and eigenfunctions. Doing so gives the following set of eigenvalues and eigenfunctions.

\[\begin{aligned} \lambda_n &= n^2 & y_n(x) &= \sin(n\,x) & n &= 1,2,3,\ldots \\ \lambda_n &= n^2 & y_n(x) &= \cos(n\,x) & n &= 0,1,2,3,\ldots \end{aligned}\]

Once again, we’ve got an example with no negative eigenvalues. We can’t stress enough that this is more a function of the differential equation we’re working with than anything and there will be examples in which we may get negative eigenvalues.

Now, to this point we’ve only worked with one differential equation so let’s work an example with a different differential equation just to make sure that we don’t get too locked into this one differential equation.

Before working this example let’s note that we will still be working the vast majority of our examples with the one differential equation we’ve been using to this point. We’re working with this other differential equation just to make sure that we don’t get too locked into using one single differential equation.

Example 4 Solve the following BVP.

\[x^2 y'' + 3x\,y' + \lambda\,y = 0 \hspace{0.25in} y(1) = 0 \hspace{0.25in} y(2) = 0\]

This is an Euler differential equation and so we know that we’ll need to find the roots of the following quadratic.

\[r(r - 1) + 3r + \lambda = r^2 + 2r + \lambda = 0\]

The roots of this quadratic are,

\[r_{1,2} = \frac{-2 \pm \sqrt{4 - 4\lambda}}{2} = -1 \pm \sqrt{1 - \lambda}\]

Now, we are going to again have some cases to work with here, however they won’t be the same as the previous examples. The solution will depend on whether or not the roots are real distinct, double or complex and these cases will depend upon the sign/value of \(1 - \lambda\). So, let’s go through the cases.

\(\underline{1 - \lambda < 0,\ \lambda > 1}\)
In this case the roots will be complex and we’ll need to write them as follows in order to write down the solution.

\[r_{1,2} = -1 \pm \sqrt{1 - \lambda} = -1 \pm i\sqrt{\lambda - 1}\]

By writing the roots in this fashion we know that \(\lambda - 1 > 0\) and so \(\sqrt{\lambda - 1}\) is now a real number, which we need in order to write the following solution,

\[y(x) = c_1 x^{-1} \cos\big(\sqrt{\lambda - 1}\,\ln x\big) + c_2 x^{-1} \sin\big(\sqrt{\lambda - 1}\,\ln x\big)\]

Applying the first boundary condition gives us,

\[0 = y(1) = c_1 \cos(0) + c_2 \sin(0) = c_1 \hspace{0.25in} \Rightarrow \hspace{0.25in} c_1 = 0\]

The second boundary condition gives us,

\[0 = y(2) = \frac{c_2}{2} \sin\big(\sqrt{\lambda - 1}\,\ln 2\big)\]

In order to avoid the trivial solution for this case we’ll require,

\[\sin\big(\sqrt{\lambda - 1}\,\ln 2\big) = 0 \hspace{0.25in} \Rightarrow \hspace{0.25in} \sqrt{\lambda - 1}\,\ln 2 = n\pi \hspace{0.25in} n = 1,2,3,\ldots\]

This is a much more complicated condition than we’ve seen to this point, but other than that we do the same thing. So, solving for \(\lambda\) gives us the following set of eigenvalues for this case.

\[\lambda_n = 1 + \left(\frac{n\,\pi}{\ln 2}\right)^2 \hspace{0.25in} n = 1,2,3,\ldots\]
Note that we need to start the list of \(n\)’s off at one and not zero to make sure that we have \(\lambda > 1\) as we’re assuming for this case.

The eigenfunctions that correspond to these eigenvalues are,

\[y_n(x) = \frac{1}{x} \sin\left(\frac{n\,\pi \ln x}{\ln 2}\right) \hspace{0.25in} n = 1,2,3,\ldots\]
\(\underline{1 - \lambda = 0,\ \lambda = 1}\)
In this case we get a double root of \(r_{1,2} = -1\) and so the solution is,

\[y(x) = c_1 x^{-1} + c_2 x^{-1} \ln x\]

Applying the first boundary condition gives,

\[0 = y(1) = c_1\]

The second boundary condition gives,

\[0 = y(2) = \frac{c_2}{2} \ln 2 \hspace{0.25in} \Rightarrow \hspace{0.25in} c_2 = 0\]

We therefore have only the trivial solution for this case and so \(\lambda = 1\) is not an eigenvalue.

Let’s now take care of the third (and final) case.

\(\underline{1 - \lambda > 0,\ \lambda < 1}\)
This case will have two real distinct roots and the solution is,

\[y(x) = c_1 x^{-1 + \sqrt{1 - \lambda}} + c_2 x^{-1 - \sqrt{1 - \lambda}}\]

Applying the first boundary condition gives,

\[0 = y(1) = c_1 + c_2 \hspace{0.25in} \Rightarrow \hspace{0.25in} c_2 = -c_1\]

Using this our solution becomes,

\[y(x) = c_1\left(x^{-1 + \sqrt{1 - \lambda}} - x^{-1 - \sqrt{1 - \lambda}}\right)\]

Applying the second boundary condition gives,

\[0 = y(2) = c_1\left(2^{-1 + \sqrt{1 - \lambda}} - 2^{-1 - \sqrt{1 - \lambda}}\right)\]

Now, because we know that \(\lambda \ne 1\) for this case the exponents on the two terms in the parenthesis are not the same and so the term in the parenthesis is not zero. This means that we can only have,

\[c_1 = 0\]
and so in this case we only have the trivial solution and there are no eigenvalues for which \(\lambda < 1\).

The only eigenvalues for this BVP then come from the first case.
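As a symbolic spot check (a Python sketch assuming the Euler BVP as written above), SymPy verifies that each first-case eigenpair satisfies both the differential equation and the boundary conditions:

    # Plug y_n(x) = sin(n*pi*ln(x)/ln 2)/x with lambda_n = 1 + (n*pi/ln 2)^2
    # into x^2 y'' + 3x y' + lambda y and check that the residual vanishes.
    import sympy as sp

    x = sp.symbols('x', positive=True)
    n = sp.symbols('n', integer=True, positive=True)

    lam = 1 + (n*sp.pi/sp.log(2))**2
    y = sp.sin(n*sp.pi*sp.log(x)/sp.log(2)) / x

    residual = x**2*sp.diff(y, x, 2) + 3*x*sp.diff(y, x) + lam*y
    print(sp.simplify(residual))          # 0
    print(y.subs(x, 1), y.subs(x, 2))     # both boundary values are 0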

So, we’ve now worked an example using a differential equation other than the “standard” one we’ve been using to this point. As we saw in the work however, the basic process was pretty much the same. We determined that there were a number of cases (three here, but it won’t always be three) that gave different solutions. We examined each case to determine if non-trivial solutions were possible and if so found the eigenvalues and eigenfunctions corresponding to that case.

We need to work one last example in this section before we leave this section for some new topics. The four examples that we’ve worked to this point were all fairly simple (with simple being relative of course…), however we don’t want to leave without acknowledging that many eigenvalue/eigenfunction problems are not so easy.

In many examples it is not even possible to get a complete list of all possible eigenvalues for a BVP. Often the equations that we need to solve to get the eigenvalues are difficult if not impossible to solve exactly. So, let’s take a look at one example like this to see what kinds of things can be done to at least get an idea of what the eigenvalues look like in these kinds of cases.

Example 5 Solve the following BVP.

\[y'' + \lambda y = 0 \hspace{0.25in} y(0) = 0 \hspace{0.25in} y'(1) + y(1) = 0\]

The boundary conditions for this BVP are fairly different from those that we’ve worked with to this point. However, the basic process is the same. So let’s start off with the first case.

\(\underline{\lambda > 0}\)
The general solution to the differential equation is identical to the first few examples and so we have,

\[y(x) = c_1 \cos\big(\sqrt{\lambda}\,x\big) + c_2 \sin\big(\sqrt{\lambda}\,x\big)\]

Applying the first boundary condition gives us,

\[0 = y(0) = c_1 \hspace{0.25in} \Rightarrow \hspace{0.25in} c_1 = 0\]

The second boundary condition gives us,

\[0 = y'(1) + y(1) = c_2\left(\sqrt{\lambda}\cos\big(\sqrt{\lambda}\big) + \sin\big(\sqrt{\lambda}\big)\right)\]

So, if we let \(c_2 = 0\) we’ll get the trivial solution and so in order to satisfy this boundary condition we’ll need to require instead that,

\[\begin{aligned} 0 &= \sin\big(\sqrt{\lambda}\big) + \sqrt{\lambda}\cos\big(\sqrt{\lambda}\big) \\ \sin\big(\sqrt{\lambda}\big) &= -\sqrt{\lambda}\cos\big(\sqrt{\lambda}\big) \\ \tan\big(\sqrt{\lambda}\big) &= -\sqrt{\lambda} \end{aligned}\]

Now, this equation has solutions, but we’ll need to use some numerical techniques in order to get them. In order to see what’s going on here let’s graph \(\tan\big(\sqrt{\lambda}\big)\) and \(-\sqrt{\lambda}\) on the same graph. Here is that graph, and note that the horizontal axis really is values of \(\sqrt{\lambda}\), as that will make things a little easier to see and relate to values that we’re familiar with.

So, eigenvalues for this case will occur where the two curves intersect. We’ve shown the first five on the graph and again what is showing on the graph is really the square root of the actual eigenvalue as we’ve noted.

The interesting thing to note here is that the farther out on the graph the closer the eigenvalues come to the asymptotes of tangent and so we’ll take advantage of that and say that for large enough \(n\) we can approximate the eigenvalues with the (very well known) locations of the asymptotes of tangent.

How large the value of \(n\) is before we start using the approximation will depend on how much accuracy we want, but since we know the location of the asymptotes and since the accuracy of the approximation increases as \(n\) increases, it will be easy enough to check for a given accuracy.

For the purposes of this example we found the first five numerically and then we’ll use the approximation of the remaining eigenvalues. Here are those values/approximations.

The number in parenthesis after the first five is the approximate value of the asymptote. As we can see they are a little off, but by the time we get to \(n = 5\) the error in the approximation is 0.9862%. So, less than 1% error by the time we get to \(n = 5\), and it will only get better for larger values of \(n\).

The eigenfunctions for this case are,

\[y_n(x) = \sin\big(\sqrt{\lambda_n}\,x\big) \hspace{0.25in} n = 1,2,3,\ldots\]

where the values of \(\lambda_n\) are given above.
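Numerically, the intersections can be found by bracketing one root on each branch of tangent; the following Python sketch (not part of the original notes) reproduces the procedure just described:

    # Find the first five eigenvalues from tan(s) = -s, where s = sqrt(lambda).
    # Each root lies just above an asymptote s = (2n-1)*pi/2, which is why the
    # asymptote locations approximate the eigenvalues for large n.
    import numpy as np
    from scipy.optimize import brentq

    f = lambda s: np.tan(s) + s

    roots = []
    for n in range(1, 6):
        a = (2*n - 1)*np.pi/2 + 1e-6    # just past the asymptote, f(a) < 0
        b = n*np.pi                     # tan is 0 here, so f(b) = n*pi > 0
        roots.append(brentq(f, a, b))

    lam = np.array(roots)**2
    print(lam)                                            # first five eigenvalues
    print([((2*n - 1)*np.pi/2)**2 for n in range(1, 6)])  # asymptote approximations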

So, now that all that work is out of the way let’s take a look at the second case.

\(\underline{\lambda = 0}\)
The general solution is,

\[y(x) = c_1 + c_2 x\]

Applying the first boundary condition gives,

\[0 = y(0) = c_1\]

Using this the general solution is then,

\[y(x) = c_2 x\]

Applying the second boundary condition to this gives,

\[0 = y'(1) + y(1) = c_2 + c_2 = 2 c_2 \hspace{0.25in} \Rightarrow \hspace{0.25in} c_2 = 0\]

Therefore, for this case we get only the trivial solution and so \(\lambda = 0\) is not an eigenvalue. Note however that had the second boundary condition been \(y'(1) - y(1) = 0\) then \(\lambda = 0\) would have been an eigenvalue (with eigenfunction \(y(x) = x\)) and so again we need to be careful about reading too much into our work here.

Finally let’s take care of the third case.

\(\underline{\lambda < 0}\)
The general solution here is,

\[y(x) = c_1 \cosh\big(\sqrt{-\lambda}\,x\big) + c_2 \sinh\big(\sqrt{-\lambda}\,x\big)\]

Applying the first boundary condition gives,

\[0 = y(0) = c_1 \cosh(0) + c_2 \sinh(0) = c_1 \hspace{0.25in} \Rightarrow \hspace{0.25in} c_1 = 0\]

Using this the general solution becomes,

\[y(x) = c_2 \sinh\big(\sqrt{-\lambda}\,x\big)\]

Applying the second boundary condition to this gives,

\[0 = y'(1) + y(1) = c_2\left(\sqrt{-\lambda}\cosh\big(\sqrt{-\lambda}\big) + \sinh\big(\sqrt{-\lambda}\big)\right)\]

Now, by assumption we know that \(\lambda < 0\) and so \(\sqrt{-\lambda} > 0\). This in turn tells us that \(\sinh\big(\sqrt{-\lambda}\big) > 0\) and we know that \(\cosh(x) > 0\) for all \(x\). Therefore,

\[\sqrt{-\lambda}\cosh\big(\sqrt{-\lambda}\big) + \sinh\big(\sqrt{-\lambda}\big) > 0\]

and so we must have \(c_2 = 0\) and once again in this third case we get the trivial solution and so this BVP will have no negative eigenvalues.

In summary the only eigenvalues for this BVP come from assuming that \(\lambda > 0\) and they are given above.

So, we’ve worked several eigenvalue/eigenfunctions examples in this section. Before leaving this section we do need to note once again that there are a vast variety of different problems that we can work here and we’ve really only shown a bare handful of examples and so please do not walk away from this section believing that we’ve shown you everything.

The whole purpose of this section is to prepare us for the types of problems that we’ll be seeing in the next chapter. Also, in the next chapter we will again be restricting ourselves down to some pretty basic and simple problems in order to illustrate one of the more common methods for solving partial differential equations.


8: The Eigenvalue Problem

Find the general solution of the given system of equations. Also draw a direction field and plot a few of the trajectories. In each of these problems, the coefficient matrix has a zero eigenvalue. As a result, the pattern of trajectories is different from those in the examples in the text.

To solve this differential equation, we first need to find the eigenvalues and eigenvectors of the matrix

In MATLAB, this is a simple one-line command, once the matrix has been entered: [V,D] = eig(A)

The matrices V and D are the matrices that diagonalize the matrix A. That is, the columns of V are the eigenvectors of A that correspond to the eigenvalues of A found in the corresponding column of the diagonal matrix D. Recall that we can scale the eigenvectors by any value we wish. Thus, for the solution below, I chose to scale the eigenvectors to [3; -1] and [-2; 1] to make the formula prettier (but no different). That is, we get eigenvectors v1 = [3; -1] and v2 = [-2; 1], with corresponding eigenvalues 1 and 0.

Using formula (25) in section 7.5 of the text, we find the general solution of the given differential equation is:

x(t) = c1 [3; -1] e^t + c2 [-2; 1]

or equivalently, as parametric functions,

x(t) = 3 c1 e^t - 2 c2
y(t) = -c1 e^t + c2

The MATLAB module dirfield2d.m will plot the direction field of a 2x2 linear system x' = Ax. It also provides the option of plotting individual trajectories on the direction field, when given a user-specified initial point. Here we will run 'dirfield2d' for the matrix A.

Note: The 'divide by zero' warnings are legitimate (and many more of them occur than are shown here), but we can ignore them. We'll get that warning anywhere the slope line becomes vertical or undefined. For this example, that happens along the line of dots we see in the center of the graph, where the arrows abruptly change direction.

If we answer 'y' instead of 'n' to the last question in dirfield2d, we are prompted to enter the coordinates of an initial condition that determines the integral curve (trajectory) on the direction field that corresponds to the particular solution to the initial value problem.

Here is the phase plot that arises when we plot the 6 trajectories with initial conditions (x(0), y(0)) as shown below:

(0, -5), (0, -3), (0, -1),
(0, 1), (0, 3), (0, 5).

This looks nothing like the examples in the text. In fact, it looks like nothing more interesting than a set of parallel lines. (Which is in itself pretty interesting.) Why? Which line is it and how does it relate to the matrix A? What does that line of dots mean?

Although the solutions x(t) and y(t) both involve exponential functions, there is a simple relationship between them: eliminating the exponential from the two components gives y = -x/3 + C, where the constant C depends on the initial condition.

Each of these lines is in the direction of the eigenvector v1. However, these lines have a different slope than the line of dots down the center of the graph, where the derivatives have no length. Let's look at what happens to solutions that intersect that line of dots. That line is y = -x/2, and it's plotted on the graph in aqua after the 'dirfield2d' module is finished.

The solutions stop abruptly when they cross the aqua line. Why? Although there is a linear relationship between x and y, they are still defined by exponential functions, and one of the fundamental properties of the exponential function is that it is always positive. Let's consider the case when (x0, y0) = (0, 0.5), which produces the uppermost red line shown on the graph above. The parametric equations for this line are

x(t) = 3e^t - 3
y(t) = -e^t + 1.5

Thus, for any value of t, we have x(t) > -3 and y(t) < 1.5. The points where these solutions start and stop lie along the line y = -x/2, which is the line through the eigenvector [2; -1] corresponding to the eigenvalue of 0. The derivatives along this line all have value 0, because x' = Ax = 0 for x along this eigenvector line.
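A hedged Python reconstruction of this experiment: the matrix A below is inferred from the eigenpairs read off above (eigenvalue 1 with [3; -1] and eigenvalue 0 with [-2; 1]), since the original listing shows only results.

    # Direction field and the six trajectories for x' = Ax with a zero eigenvalue.
    import numpy as np
    import matplotlib.pyplot as plt

    A = np.array([[3.0, 6.0],
                  [-1.0, -2.0]])   # inferred: A*[3,-1] = [3,-1] and A*[2,-1] = 0

    X, Y = np.meshgrid(np.linspace(-6, 6, 25), np.linspace(-6, 6, 25))
    plt.quiver(X, Y, A[0, 0]*X + A[0, 1]*Y, A[1, 0]*X + A[1, 1]*Y)

    t = np.linspace(-3, 1, 200)
    for y0 in (-5, -3, -1, 1, 3, 5):
        # Expand the initial point (0, y0) in the eigenvector basis {[3,-1], [-2,1]}.
        c1, c2 = np.linalg.solve(np.array([[3.0, -2.0], [-1.0, 1.0]]), [0.0, y0])
        plt.plot(3*c1*np.exp(t) - 2*c2, -c1*np.exp(t) + c2, 'r')

    plt.plot([-6, 6], [3, -3], 'c')   # the zero-eigenvalue line y = -x/2
    plt.show()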


8.2. Implementation

This demo is implemented in a single Python file, demo_eigenvalue.py , which contains both the variational forms and the solver.

The eigensolver functionality in DOLFIN relies on the library SLEPc which in turn relies on the linear algebra library PETSc. Therefore, both PETSc and SLEPc are required for this demo. We can test whether PETSc and SLEPc are available, and exit if not, as follows:

First, we need to construct the matrix \(A\). This will be done in three steps: defining the mesh and the function space associated with it; constructing the variational form; and defining the matrix and assembling the form into it. The code is shown below:

Note that we (in this example) first define the matrix A as a PETScMatrix and then assemble the form into it. This is an easy way to ensure that the matrix has the right type.

In order to solve the eigenproblem, we need to define an eigensolver. To solve a standard eigenvalue problem, the eigensolver is initialized with a single argument, namely the matrix A .

Now, we are ready to solve the eigenproblem by calling the solve method of the eigensolver. Note that eigenvalue problems tend to be computationally intensive and may hence take a while.

The result is kept by the eigensolver, but can fortunately be extracted. Here, we have computed all eigenvalues, and they will be sorted by largest magnitude. We can extract the real and complex parts ( r and c ) of the largest eigenvalue and the real and complex parts of the corresponding eigenvector ( rx and cx ) by asking for the first eigenpair as follows:

Finally, we want to examine the results. The eigenvalue can easily be printed. But the real part of the eigenvector is probably most easily visualized by constructing the corresponding eigenfunction. This can be done by creating a Function in the function space V and associating the eigenvector rx with it. Then the eigenfunction can be manipulated as any other Function, and in particular plotted:
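Putting the steps above together, here is a condensed sketch of the demo (legacy DOLFIN API; the concrete mesh and form are stand-in assumptions, not necessarily what demo_eigenvalue.py uses):

    from dolfin import *

    # Exit gracefully if PETSc or SLEPc is unavailable.
    if not has_linear_algebra_backend("PETSc") or not has_slepc():
        exit()

    # Mesh and function space (the unit square here is an assumption).
    mesh = UnitSquareMesh(32, 32)
    V = FunctionSpace(mesh, "Lagrange", 1)

    # Variational form of the operator, assembled into a PETScMatrix so that
    # the matrix has the right type for SLEPc.
    u, v = TrialFunction(V), TestFunction(V)
    a = dot(grad(u), grad(v))*dx
    A = PETScMatrix()
    assemble(a, tensor=A)

    # Define the eigensolver for the standard problem A x = lambda x and solve.
    eigensolver = SLEPcEigenSolver(A)
    eigensolver.solve()

    # Extract the first (largest-magnitude) eigenpair.
    r, c, rx, cx = eigensolver.get_eigenpair(0)
    print("Largest eigenvalue:", r)

    # Visualize the real part of the eigenvector as an eigenfunction.
    u_eig = Function(V)
    u_eig.vector()[:] = rx
    plot(u_eig)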


8: The Eigenvalue Problem

Kreyszig TA Notes
Chapter 8
Linear Algebra: Matrix Eigenvalue Problems
Written by: Kim Min-do

Throughout Chapter 8, unless stated otherwise, a matrix is understood to have all real entries, and likewise a vector is understood to have all real entries.

8.1 The Matrix Eigenvalue Problem. Determining Eigenvalues and Eigenvectors
If a matrix \(A\) satisfies \(Ax = \lambda x\) for a scalar \(\lambda\) and a vector \(x \ne 0\), then \(\lambda\) is called an eigenvalue of \(A\), and \(x\) is called an eigenvector of \(A\) corresponding to the eigenvalue \(\lambda\). The set of eigenvalues of \(A\) is called the spectrum of \(A\), and the largest of the absolute values of the eigenvalues of \(A\) is called the spectral radius of \(A\).

Eigenvalues carry a great deal of information about a matrix, so problems involving matrices are often solved by way of eigenvalues. The method for systems of ODEs covered in Chapter 4 also makes use of the eigenvalues of a matrix.

To find the eigenvalues of \(A\), we must find the scalars \(\lambda\) for which the linear system \((A - \lambda I)x = 0\) has a solution with \(x \ne 0\). This is equivalent to \(\det(A - \lambda I) = 0\), so the eigenvalues can be obtained by solving this polynomial equation of degree \(n\) in \(\lambda\).

Here \(\det(A - \lambda I) = 0\) is called the characteristic equation of \(A\), and \(\det(A - \lambda I)\) is called the characteristic polynomial of \(A\).

Once an eigenvalue \(\lambda\) of \(A\) has been found, the eigenvectors of \(A\) corresponding to \(\lambda\) are the nonzero solutions \(x\) of the linear system \((A - \lambda I)x = 0\), so they can be computed using the Gauss elimination covered in Section 7.3.

In general, to use Gauss elimination we must apply elementary row operations to the augmented matrix, because in a linear system \(Ax = b\) the right-hand side \(b\) also changes under elementary row operations.

However, the zero vector is unchanged by elementary row operations, so in this case it suffices to apply the elementary row operations to the coefficient matrix alone. In these TA notes, we will apply elementary row operations only to the coefficient matrix when computing eigenvectors.

Problem 8.1.1 Find all eigenvalues of the following matrices, the eigenvectors corresponding to each eigenvalue, and the spectral radius.
(a).
(b).
(b).
sol) (a). The characteristic equation is as follows.

Its roots are the eigenvalues, and comparing their absolute values gives the spectral radius.

Substituting the first eigenvalue into \((A - \lambda I)x = 0\) and solving gives the corresponding eigenvector.

Substituting the second eigenvalue and solving in the same way gives its corresponding eigenvector.

(b). The characteristic equation is as follows.

Its roots are the eigenvalues, and comparing their absolute values gives the spectral radius.

For the first eigenvalue, the second equation of the system \((A - \lambda I)x = 0\) expresses one component in terms of the other; substituting this into the first equation yields the corresponding eigenvector, as follows.

For the second eigenvalue, solving in the same way yields the corresponding eigenvector, as follows.

Even when every entry of a matrix is real, its eigenvalues, and the entries of the corresponding eigenvectors, may be complex: the characteristic equation can have complex roots, in which case the eigenvalues of the matrix are complex.

The absolute value of a complex number \(a + bi\) is defined by \(|a + bi| = \sqrt{a^2 + b^2}\). The spectral radius is then the largest of these absolute values of the eigenvalues.

An eigenvector corresponding to a complex eigenvalue can likewise have complex entries.

Problem 8.1.2 Answer the following.
(a). For a degree-\(n\) polynomial with real coefficients \(q(\lambda) = a_n\lambda^n + \cdots + a_1\lambda + a_0\) and a matrix \(A\), define a new matrix \(q(A)\) as follows.

\[q(A) = a_n A^n + \cdots + a_1 A + a_0 I\]

If \(\lambda\) is an eigenvalue of \(A\), prove that \(q(\lambda)\) is an eigenvalue of \(q(A)\), and prove that if \(q(A) = O\) then \(q(\lambda) = 0\).
(b). Compute the indicated matrix for the given \(A\).
(c). Prove that a matrix \(A\) is nonsingular if and only if it does not have \(0\) as an eigenvalue, and prove that if \(\lambda\) is an eigenvalue of a nonsingular matrix \(A\), then \(1/\lambda\) is an eigenvalue of \(A^{-1}\).
sol) (a). Let \(x\) be an eigenvector corresponding to the eigenvalue \(\lambda\) of \(A\). Then \(Ax = \lambda x\), so for every natural number \(k\) we obtain the following.

\[A^k x = \lambda^k x \hspace{0.25in} \Rightarrow \hspace{0.25in} q(A)x = q(\lambda)x\]

Therefore \(q(\lambda)\) is an eigenvalue of \(q(A)\). If \(q(A) = O\), then by the identity above \(q(\lambda)x = q(A)x = 0\) and \(x \ne 0\), so \(q(\lambda) = 0\). ■

(b). First we find the eigenvalues of \(A\). The characteristic equation is as follows.

An eigenvector corresponding to the first eigenvalue can then be chosen.

An eigenvector corresponding to the second eigenvalue can likewise be chosen.

(c). \(A\) having \(0\) as an eigenvalue is equivalent to \(\det(A - 0I) = \det(A) = 0\). Therefore \(A\) is nonsingular if and only if it does not have \(0\) as an eigenvalue.

Let \(x\) be an eigenvector corresponding to an eigenvalue \(\lambda\) of a nonsingular matrix \(A\). Then \(Ax = \lambda x\) and \(\lambda \ne 0\), so multiplying the equation on the left by \(A^{-1}\) gives \(A^{-1}x = \lambda^{-1}x\). Therefore \(1/\lambda\) is an eigenvalue of \(A^{-1}\). ■
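The same workflow can be checked numerically; the following Python sketch uses an arbitrary stand-in matrix, since the matrices of the problems above are not reproduced here:

    # Eigenvalues are the roots of det(A - lambda*I) = 0; the spectral radius
    # is the largest absolute value among them.
    import numpy as np

    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])     # stand-in matrix

    lam, X = np.linalg.eig(A)
    print(lam)                       # eigenvalues, here -1 and -2
    print(X)                         # eigenvectors (columns)
    print(max(abs(lam)))             # spectral radius: 2.0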


Eigenvalue Theory

Consider a dynamic system. In general, the equations of motion can be expressed as a function of the system mass, stiffness, damping and applied loads:

[M]{u''} + [B]{u'} + [K]{u} = {F}

[M] = global mass matrix
[B] = global damping matrix
[K] = global stiffness matrix
{F} = vector of applied loads

Eigenvalues or natural frequencies are found when there is no damping or applied loads. The equations of motion for free vibration can then be written as:

[M]{u''} + [K]{u} = 0

Assume a sinusoidal vibration, where the displacement can be described by:

{u} = {φ} sin(ωt)

Then replace the {u} term with the above and consider that, for a sinusoidal variation, the acceleration is the second derivative of the displacement:

{u''} = -ω² {φ} sin(ωt)

Thus the equation of motion becomes:

-ω² [M]{φ} sin(ωt) + [K]{φ} sin(ωt) = 0

Since sin(ωt) is not zero for all time, the equation can be rearranged to the form of a general eigenvalue problem. Inventor Nastran determines the natural frequency by solving the eigenvalue problem:

([K] - λ[M]){φ} = 0

[K] = global linear stiffness matrix
[M] = global mass matrix

λ = the eigenvalue for each mode that yields the natural frequency: λ = ω²

{φ} = the eigenvector for each mode that represents the natural mode shape

The eigenvalue is related to the system’s natural frequency:

ω = √λ = the circular frequency (radians per second)

f = ω / (2π) = the cyclic frequency (Hertz)

One solution is trivial ({φ} = 0), but the other solutions for {φ} are interesting. λ is called the eigenvalue, and each λ is accompanied by a unique {φ} called the eigenvector.

In solving the above eigenvalue problem, there are as many eigenvalues and corresponding eigenvectors as there are unconstrained degrees of freedom. Often, however, only the lowest natural frequency is of practical interest. This frequency will always be the first mode extracted.

The solution of the eigenvalue problem is difficult and a number of different approaches have been developed over the years. Currently the Lanczos approach is favored as it is fast, accurate and robust. Inventor Nastran also offers the Subspace method. This can be used in those rare cases where Lanczos fails. SUBSPACE is selected using the Nastran directive EXTRACTMETHOD=SUBSPACE in the Parameters dialog box under Program Control Directives (select the Advanced Settings checkbox first). For more details, see the Parameters topic of the User's Guide. The default AUTO setting for this parameter uses Lanczos in most circumstances, but changes to Subspace for some small problems. Note that Inventor Nastran does not recognize the EIGR card available in other Nastrans to use other extraction methods.

Also, while the λ found is the exact eigenvalue, the eigenvectors are arbitrarily scaled. That is, there is no unique magnitude to the vectors. They simply represent a shape. By default, Inventor Nastran performs a mass scaling on the vectors. This is done by calculating the generalized mass of the model from the equation:

m = {φ}^T [M] {φ}

All of the terms of the vector are then divided by it. This results in a seemingly arbitrary scaling of the vectors, but it has important mathematical properties that can be exploited elsewhere. In addition to mass scaling, Nastran also has max scaling available, where the largest value in the vector will be 1.0. This allows small vectors to be examined manually.

A property of eigenvectors is that they are orthogonal through the mass matrix. This means that one eigenvector multiplied through [M] by a different eigenvector produces zero, while an eigenvector multiplied through [M] by itself produces the generalized mass (1.0 after mass scaling): {φi}^T [M] {φj} = 0 for i ≠ j, and {φi}^T [M] {φi} = 1.

This is another property that is exploited in dynamics solutions.
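As a minimal sketch of the same computation outside Nastran (the 2-DOF system below is an illustrative assumption), SciPy's symmetric generalized eigensolver exhibits both the frequency extraction and the mass orthogonality just described:

    # Solve ([K] - lambda*[M]){phi} = 0 for an assumed 2-DOF spring-mass system
    # and convert the eigenvalues to natural frequencies.
    import numpy as np
    from scipy.linalg import eigh

    K = np.array([[2000.0, -1000.0],
                  [-1000.0, 1000.0]])   # stiffness, N/m (illustrative)
    M = np.array([[2.0, 0.0],
                  [0.0, 1.0]])          # mass, kg (illustrative)

    lam, phi = eigh(K, M)               # generalized symmetric eigenproblem

    omega = np.sqrt(lam)                # circular frequencies, rad/s (lambda = omega^2)
    print(omega / (2.0*np.pi))          # cyclic frequencies f, Hz

    # eigh returns mass-scaled modes: phi^T M phi = I, so distinct modes are
    # orthogonal through [M] and each generalized mass is 1.0.
    print(phi.T @ M @ phi)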


Both eigenvalues are real and negative. Thus, according to Table 9.1.1 in Section 9.1 of the text, the origin is a node and it is asymptotically stable.

The general solution to this system is

The MATLAB code below will plot 25 curves (not necessarily distinct) for values of c1 and c2 between -2 and 2.
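The MATLAB listing itself does not appear in this copy; a minimal Python sketch in the same spirit, assuming for illustration a system with eigenvalues -1 and -3 and eigenvectors [1; 1] and [1; -1]:

    # Plot 25 trajectories x(t) = c1*v1*e^(-t) + c2*v2*e^(-3t) for
    # c1, c2 in {-2, -1, 0, 1, 2}; all are drawn inward toward the origin.
    import numpy as np
    import matplotlib.pyplot as plt

    t = np.linspace(0.0, 4.0, 200)
    for c1 in range(-2, 3):
        for c2 in range(-2, 3):
            x1 = c1*np.exp(-t) + c2*np.exp(-3*t)   # v1 = [1, 1], v2 = [1, -1]
            x2 = c1*np.exp(-t) - c2*np.exp(-3*t)
            plt.plot(x1, x2, 'b')

    plt.show()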

Because the origin is an asymptotically stable node, these trajectories are traversed inward, toward the origin. We can also plot a few representative graphs of x1 versus t.


8: The Eigenvalue Problem

Find eigenvalues and eigenvectors

d = eig(A) returns a vector of the eigenvalues of matrix A .

d = eig(A,B) returns a vector containing the generalized eigenvalues, if A and B are square matrices.

    Note If S is sparse and symmetric, you can use d = eig(S) to return the eigenvalues of S . If S is sparse but not symmetric, or if you want to return the eigenvectors of S , use the function eigs instead of eig .

[V,D] = eig(A) produces matrices of eigenvalues ( D ) and eigenvectors ( V ) of matrix A , so that A*V = V*D . Matrix D is the canonical form of A --a diagonal matrix with A 's eigenvalues on the main diagonal. Matrix V is the modal matrix--its columns are the eigenvectors of A .

If W is a matrix such that W'*A = D*W' , the columns of W are the left eigenvectors of A . Use [W,D] = eig(A.'); W = conj(W); to compute the left eigenvectors.

[V,D] = eig(A,'nobalance') finds eigenvalues and eigenvectors without a preliminary balancing step. Ordinarily, balancing improves the conditioning of the input matrix, enabling more accurate computation of the eigenvectors and eigenvalues. However, if a matrix contains small elements that are really due to roundoff error, balancing may scale them up to make them as significant as the other elements of the original matrix, leading to incorrect eigenvectors. Use the nobalance option in this event. See the balance function for more details.

[V,D] = eig(A,B) produces a diagonal matrix D of generalized eigenvalues and a full matrix V whose columns are the corresponding eigenvectors so that A*V = B*V*D .

[V,D] = eig(A,B, flag ) specifies the algorithm used to compute eigenvalues and eigenvectors. flag can be 'chol' , which computes the generalized eigenvalues using the Cholesky factorization of B (the default for symmetric A and symmetric positive definite B ), or 'qz' , which ignores the symmetry and uses the QZ algorithm.

    Note For eig(A) , the eigenvectors are scaled so that the norm of each is 1.0. For eig(A,B) , eig(A,'nobalance') , and eig(A,B,flag) , the eigenvectors are not normalized.

The eigenvalue problem is to determine the nontrivial solutions of the equation

A*v = lambda*v

where A is an n -by- n matrix, v is a length- n column vector, and lambda is a scalar. The n values of lambda that satisfy the equation are the eigenvalues, and the corresponding values of v are the right eigenvectors. In MATLAB, the function eig solves for the eigenvalues lambda , and optionally the eigenvectors v .

The generalized eigenvalue problem is to determine the nontrivial solutions of the equation

A*v = lambda*B*v

where both A and B are n -by- n matrices and lambda is a scalar. The values of lambda that satisfy the equation are the generalized eigenvalues and the corresponding values of v are the generalized right eigenvectors.

If B is nonsingular, the problem could be solved by reducing it to a standard eigenvalue problem

inv(B)*A*v = lambda*v

Because B can be singular, an alternative algorithm, called the QZ method, is necessary.
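For comparison (a sketch, not the MATLAB implementation), SciPy exposes the same QZ-based generalized solver, which remains usable when B is singular (infinite eigenvalues come back as inf or nan):

    # Solve A*v = lambda*B*v and verify the result column by column.
    import numpy as np
    from scipy.linalg import eig

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
    B = np.array([[2.0, 0.0],
                  [0.0, 1.0]])

    w, V = eig(A, B)
    print(w)                      # generalized eigenvalues
    print(A @ V - (B @ V)*w)      # ~0: columns satisfy A*v = lambda*B*v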

When a matrix has no repeated eigenvalues, the eigenvectors are always independent and the eigenvector matrix V diagonalizes the original matrix A if applied as a similarity transformation. However, if a matrix has repeated eigenvalues, it is not similar to a diagonal matrix unless it has a full (independent) set of eigenvectors. If the eigenvectors are not independent then the original matrix is said to be defective. Even if a matrix is defective, the solution from eig satisfies A*X = X*D .

The matrix in this example has elements on the order of roundoff error. It is an example for which the nobalance option is necessary to compute the eigenvectors correctly. Try the statements

Inputs of Type Double

For inputs of type double , MATLAB uses the following LAPACK routines to compute eigenvalues and eigenvectors.

    With preliminary balance step: d = eig(A) and [V,D] = eig(A)
    Without balancing: d = eig(A,'nobalance') and [V,D] = eig(A,'nobalance')

Inputs of Type Single

For inputs of type single , MATLAB uses the following LAPACK routines to compute eigenvalues and eigenvectors.


Extended Capabilities

C/C++ Code Generation Generate C and C++ code using MATLAB® Coder™.

Usage notes and limitations:

The basis of the eigenvectors can be different in the generated code than in MATLAB ® . In general, in the eigenvalues output, the eigenvalues for real inputs are not sorted so that complex conjugate pairs are adjacent.

Differences in eigenvectors and ordering of eigenvalues can lead to differences in the condition numbers output.

