
29: 15 Pre-Class Assignment - Diagonalization and Powers - Mathematics



Homework Schedule

I may end up modifying some assignments and the timing as we go along so listen up when I announce homework in lecture.

The list below is merely an approximation to what will happen.

In the table entries below, the assigned problems are due for the next class meeting.

Aug 26
Review of Vectors
Modular Vectors (pages 13-16) p17:29,30,35,37,40,43,45,47,49,51,53,55,!56c,!57c
Code Vectors (pages 53-57) p58: 16,18,20,24,!25,27,28,!30c. 34c

Aug 28
Matrix Multiplication (review) and Hamming Code (Example 3.71, p253) p261: 79,81-86,88,!93 and some review problems: p159: 31, 39 p168: !45,!47

Sep 2
Linear Systems and Finite Linear Games (Example 2.35, p115) p123: 30,31,32 c),40,!52

Sep 4
Vector Spaces and Subspaces (6.1 but also review 2.3, 3.5) p460: 1-6,7!,9-11,18,19,22!,23!

Sep 9
6.1 cont p460: 25,26,27,31,33!,35,36,38,39,42?,45,46,47,48!,49!,54,61

Sep 11
Linear Indep., Basis, Dimension (6.2 and 3.5) p475: 3,4,8,9,12,13,14,15,16!,17,19,21,24,25

Sep 16
6.2 cont p475: 27,29,31,33,35!,36!,40,41,42!,43!,45,47,51,55!,58!

Sep 18
Change of Basis (6.3) p489: 3,4,7,8,10,12,13,15,18,19,21!,22!

Sep 23
Linear Transformations (6.4 and 3.6) p498: 4,5,9,10,11,15,17,19,22,24

Sep 25
Linear Transformations, cont. p498: 26,28,30,33!,35,34!,36!

Sep 30
Kernel and Range (6.5) p513: 2,4,6,8,12,13,14,16,17,18,20 p513: 21,22,23,28,29,32! (warm up with 30 and 31), 33!, 34!, 36, 37!!

Matrix of Linear Transformation (6.6) p531: 3,5,7,9,11,13,15,17,22,23,27,29

Oct 7
Matrix of Linear Transf. cont. p532: 31,33,37,39,40,41,44!,45!,46!!

Oct 9
Exam 1

Oct 14
Go over exam. Start Crystallographic Restriction.

Oct 16
Application: Tilings and Crystallographic Restriction (cont). Start Inner Product Spaces (7.1) and review Orthogonal Diagonalization. p563: 5,6,8,9,10,11,13-18,20,21,22; p418: 13,14,15,16,23.

Oct 21
Inner Product Spaces cont., p563: 25,28,30,33!,34!,35,37,39,40,41,43!,44!

Oct 23
7.2: Distance and approximation p589: 2,3,4,6,7,8!,9-12,14!,16,17,20,28,32!,33!,34

Oct 28
7.2 cont., p590: 41,42,43,45,46,47

Oct 30
7.3: Least Squares, p609: 21, 23, 34, 44, 47, 53, 54, 55
absorb the theory from class notes and the text!

Nov 4
Election Day
(no classes: offices closed)

Nov 6
7.4: Singular Value Decomposition, p632: 3,7,9,10,25,26,27,28,30,31,33,34,35,36,43,60

Nov 11
Veterans Day
(no classes: offices closed)

Nov 13
7.5 Application: Reed Muller Code

Nov 20
Go over Exam
Perron-Frobenius Theorem, p372: 28-31, 40, maybe 38. Absorb the proof of Perron's Theorem.

Nov 25
(Di)Graphs, Irreducibility, and Perron Frobenius Theorem
p259: 59,62,64,65 (read on digraph versus graph), 68,71,72,73
p372: prove the assertion before 32-35 and solve two of these.

Nov 27
Thanksgiving
no classes

Dec 2
Asymptotic behaviour of powers of A (not all in text!)
(I gave a more general version of Th 4.33 p339.)
Do not study proofs but solve p370: 12,13 and then:
predict the fate of the populations,
find the asymptotic ratio between populations.
(If confused, consult p245 and p341.) For the "Google application" read p367-369.


Related Resources

OK. Shall we start? This is the second lecture on eigenvalues.

So the first lecture reached the key equation, A x equals lambda x. x is the eigenvector and lambda is the eigenvalue.

So job one is to find the eigenvalues and find the eigenvectors.

Now after we've found them, what do we do with them?

Well, the good way to see that is to diagonalize the matrix.

And I want to show -- first of all, this is like the basic fact.

That's the key to today's lecture.

This matrix A, I put its eigenvectors in the columns of a matrix S.

So S will be the eigenvector matrix.

And I want to look at this magic combination S inverse A S.

So can I show you how that -- what happens there?

And notice, there's an S inverse.

We have to be able to invert this eigenvector matrix S.

So for that, we need n independent eigenvectors.

So that's the, that's the case.

OK. So suppose we have n linearly independent eigenvectors of A. Put them in the columns of this matrix S.

So I'm naturally going to call that the eigenvector matrix, because it's got the eigenvectors in its columns.

And all I want to do is show you what happens when you multiply A times S.

So this is A times the matrix with the first eigenvector in its first column, the second eigenvector in its second column, the n-th eigenvector in its n-th column.

And how am I going to do this matrix multiplication?

Well, certainly I'll do it a column at a time.

A times the first column gives me the first column of the answer, but what is it?

A times x1 is equal to lambda times x1. And that lambda we'll call lambda one, of course.

So that's the first column. A x1 is the same as lambda one x1, A x2 is lambda two x2, and so on, along to the n-th column, where we now have lambda n xn.

Looking good, but the next step is even better. For the next step, I want to separate out those eigenvalues, those multiplying numbers, from the x's. Then I'll have just what I want.

OK. So how am I going to separate them out?

So that, that number lambda one is multiplying the first column.

So if I want to factor it out of the first column, I better put -- here is going to be x1, and that's going to multiply this matrix lambda one in the first entry and all zeros.

Do you see that that, that's going to come out right for the first column?

Because we remember -- we're going back to that original punchline.

That if I want a number to multiply x1 then I can do it by putting x1 in that column, in the first column, and putting that number there.

What am I going to have here?

I'm going to have x1, x2, ..., xn.

These are going to be my columns again.

But now what's it multiplied by, on the right it's multiplied by?

If I want lambda n xn in the last column, how do I do it?

Well, the last column here will be -- I'll take the last column, use these coefficients, put the lambda n down there, and it will multiply that n-th column and give me lambda n xn.

There, there you see matrix multiplication just working for us.

I wrote down what it meant, A times each eigenvector.

That gave me lambda times the eigenvector.

And then when I peeled off the lambdas, they were on the right-hand side, so I've got S, my matrix, back again.

And this matrix, this diagonal matrix, is the eigenvalue matrix, and I call it capital Lambda.

I'm using capital letters for matrices, and Lambda to prompt me that it's eigenvalues that are in there.

So you see that the eigenvalues are just sitting down that diagonal?

If I had a column x2 here, I would want the lambda two in the two two position, in the diagonal position, to multiply that x2 and give me the lambda two x2. That's my formula.

OK. That's the -- you see, it's just a calculation.

Now -- I mentioned, and I have to mention again, this business about n independent eigenvectors.

As it stands, this is all fine, whether -- I mean, I could be repeating the same eigenvector, but -- I'm not interested in that.

I want to be able to invert S, and that's where this comes in.

This n independent eigenvectors business comes in to tell me that that matrix is invertible.

So let me, on the next board, write down what I've got.

And now I'm, I can multiply on the left by S inverse.

So this is really -- I can do that, provided S is invertible.

Provided my assumption of n independent eigenvectors is satisfied. And I mentioned at the end of last time, and I'll say again, that there's a small number of matrices that don't have n independent eigenvectors.

So I've got to discuss that, that technical point.

But most matrices that we see have n independent eigenvectors, and we can diagonalize.

I could also write it, and I often will, the other way round.

If I multiply on the right by S inverse, if I took this equation at the top and multiplied on the right by S inverse, I could -- I would have A left here.

Now S inverse is coming from the right.

So can you keep those two straight?

A multiplies its eigenvectors; that's how I keep them straight. So A multiplies S.

And then this S inverse makes the whole thing diagonal.

And this is another way of saying the same thing, putting the Ss on the other side of the equation.

So that's the, that's the new factorization.

That's the replacement for L U from elimination, or Q R from Gram-Schmidt. And notice the shape: it's a matrix, times a diagonal matrix, times the inverse of the first one.

That's the combination that we'll see throughout this chapter.

This combination with an S and an S inverse.
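
To make that concrete, here is a minimal NumPy sketch (mine, not the lecture's; the matrix A is made up and assumed to have n independent eigenvectors). It checks both A S = S Lambda, column by column, and the factorization A = S Lambda S inverse:

```python
import numpy as np

# A made-up 2-by-2 matrix with two independent eigenvectors (an assumption for this sketch).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

lam, S = np.linalg.eig(A)   # eigenvalues, and eigenvectors in the columns of S
Lam = np.diag(lam)          # the diagonal eigenvalue matrix, capital Lambda

print(np.allclose(A @ S, S @ Lam))                 # True: A x_i = lambda_i x_i, column by column
print(np.allclose(A, S @ Lam @ np.linalg.inv(S)))  # True: the factorization A = S Lambda S^{-1}
```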

OK. Can I just begin to use that?

For example, what about A squared?

What are the eigenvalues and eigenvectors of A squared?

That's a straightforward question with a, with an absolutely clean answer.

So let me, let me consider A squared.

So I start with A x equal lambda x.

And I'm headed for A squared.

So let me multiply both sides by A.

That's one way to get A squared on the left.

So I should write these ifs in here.

If A x equals lambda x, then I multiply by A, so I get A squared x equals -- well, I'm multiplying by A, so that's lambda A x.

That lambda was a number, so I just put it on the left.

And what do I -- tell me how to make that look better.

What have I got here? If A has the eigenvalue lambda and eigenvector x, what about A squared?

A squared x, I just multiplied by A, but now for Ax I'm going to substitute lambda x.

So I've got lambda squared x.

So from that simple calculation, I -- my conclusion is that the eigenvalues of A squared are lambda squared.

And the eigenvectors -- I always think about both of those. What can I say about the eigenvalues?

What can I say about the eigenvectors?

The same x as in -- as for A.

Now let me see that also from this formula.

How can I see what A squared is looking like from this formula?

So let me -- that was one way to do it.

Let me do it by just taking A squared from that.

A squared is S lambda S inverse -- that's A -- times S lambda S inverse -- that's A, which is?

This is the beauty of eigenvalues, eigenvectors.

That S inverse times S is the identity, so I've got S lambda squared S inverse.

Do you see what that's telling me?

It's, it's telling me the same thing that I just learned here, but in the -- in a matrix form.

It's telling me that the S is the same, the eigenvectors are the same, but the eigenvalues are squared.

Because this is -- what's lambda squared?

It's got little lambda one squared, lambda two squared, down to lambda n squared on that diagonal.

Those are the eigenvalues, as we just learned, of A squared.

OK. So -- somehow those eigenvalues and eigenvectors are really giving you a way to -- see what's going on inside a matrix.

Of course I can continue that to the K-th power, A to the K-th power.

If I multiply, if I have K of these together, do you see how S inverse S will keep canceling on the inside?

I'll have the S outside at the far left, and lambda will be in there K times, and S inverse.

That's telling me that the eigenvalues of A to the K-th power are the K-th powers of the eigenvalues of A.

The eigenvalues of A cubed are the cubes of the eigenvalues of A. And the eigenvectors are the same.

OK. In other words, eigenvalues and eigenvectors give a great way to understand the powers of a matrix.

If I take the square of a matrix, or the hundredth power of a matrix, the pivots are all over the place.

L U, if I multiply L U times L U times L U times L U a hundred times, I've got a hundred L Us.

I can't do anything with them.

But when I multiply S lambda S inverse by itself, when I look at the eigenvector picture a hundred times, I get a hundred or ninety-nine of these guys canceling out inside, and I get A to the hundredth is S lambda to the hundredth S inverse.
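
A short check of that telescoping, again a hedged NumPy sketch with a made-up matrix: the hundredth power computed through S Lambda^100 S inverse matches brute-force repeated multiplication.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])        # made-up example matrix
lam, S = np.linalg.eig(A)

k = 100
Ak_eig = S @ np.diag(lam**k) @ np.linalg.inv(S)  # S Lambda^k S^{-1}: only the lambdas get powered
Ak_brute = np.linalg.matrix_power(A, k)          # A multiplied by itself k times

print(np.allclose(Ak_eig, Ak_brute))             # True
```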

I mean, eigenvalues tell you about powers of a matrix in a way that we had no way to approach previously.

For example, when does -- when do the powers of a matrix go to zero?

I would call that matrix stable, maybe.

So I could write down a theorem.

I'll write it as a theorem just to use that word to emphasize that here I'm getting this great fact from this eigenvalue picture.

OK. A to the K approaches zero as K gets bigger if what?

How can I tell, for a matrix A, if its powers go to zero?

What's -- somewhere inside that matrix is that information.

That information is not present in the pivots.

It's present in the eigenvalues.

What do I need to know, so that if I take higher and higher powers of A, this matrix gets smaller and smaller?

Well, S and S inverse are not moving.

So it's this guy that has to get small.

And that's easy to -- to understand.

The requirement is all eigenvalues -- so what is the requirement?

The eigenvalues have to be less than one.

Now I have to write that with absolute values, because those eigenvalues could be negative, and they could be complex numbers.

So I'm taking the absolute value.

If all of those are below one.

And in fact, we can practically see why.

And let me just say that I'm operating on one assumption here, and I have to keep remembering that that assumption is still present.

That assumption was that I had a full set of n independent eigenvectors.

If I don't have that, then this approach is not working.

So again, a pure eigenvalue approach, eigenvector approach, needs n independent eigenvectors.

If we don't have n independent eigenvectors, we can't diagonalize the matrix.

We can't get to a diagonal matrix.

This diagonalization is only possible if S inverse makes sense.
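
An illustration of that stability test (a sketch with made-up matrices; like the lecture's argument, it assumes diagonalizability):

```python
import numpy as np

def powers_go_to_zero(A):
    # The criterion from the lecture: A^k -> 0 exactly when every |lambda| < 1
    # (the argument assumes A has n independent eigenvectors).
    return np.max(np.abs(np.linalg.eigvals(A))) < 1

stable = np.array([[0.5, 0.4],
                   [0.0, 0.3]])    # eigenvalues 0.5 and 0.3, inside the unit circle
unstable = np.array([[1.1, 0.0],
                     [0.0, 0.2]])  # one eigenvalue above 1

print(powers_go_to_zero(stable), powers_go_to_zero(unstable))  # True False
print(np.linalg.matrix_power(stable, 200).round(12))           # essentially the zero matrix
```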

OK. Can I, can I follow up on that point now?

So you see what we get and why we want it: we get information about the powers of a matrix immediately from the eigenvalues.

Now let me follow up on this business of which matrices are diagonalizable.

Sorry about that long word.

So here's the main point.

A is sure to have n independent eigenvectors -- and to be, here comes that word, diagonalizable -- if all the lambdas are different. We might as well get the nice case out in the open.

That means no repeated eigenvalues.

If I take a random matrix in MATLAB and compute its eigenvalues -- if I gave the command eig(rand(10,10)) -- we'd get a random ten by ten matrix and a list of its ten eigenvalues, and they would be different.

They would be distinct is the best word.

A random ten by ten matrix will have ten distinct eigenvalues.

And if it does, the eigenvectors are automatically independent.

I'll refer you to the text for the proof.

That A is sure to have n independent eigenvectors if the eigenvalues are different -- if all the eigenvalues are different.
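
The same experiment in NumPy, as a sketch (the rounding is only there to compare floating-point eigenvalues):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((10, 10))      # the analogue of rand(10,10)
lam = np.linalg.eigvals(A)    # the analogue of eig(...)

# Count distinct eigenvalues (some may come in complex pairs, but all ten differ).
print(len(np.unique(np.round(lam, 8))))   # 10
```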

It's just if some lambdas are repeated, then I have to look more closely.

If an eigenvalue is repeated, I have to look, I have to count, I have to check.

Has it got enough eigenvectors? Say it's repeated three times. So here is the repeated possibility.

And let me emphasize the conclusion.

That if I have repeated eigenvalues, I may or may not have n independent eigenvectors.

You know, this isn't a completely negative case.

The identity matrix -- suppose I take the ten by ten identity matrix.

What are the eigenvalues of that matrix?

So just, just take the easiest matrix, the identity.

If I look for its eigenvalues, they're all ones.

So that eigenvalue one is repeated ten times.

But there's no shortage of eigenvectors for the identity matrix.

In fact, every vector is an eigenvector.

So I can take ten independent vectors.

Oh, well, what happens to everything -- if A is the identity matrix, let's just think that one through in our head.

If A is the identity matrix, then it's got plenty of eigenvectors.

I choose ten independent vectors.

And, and what do I get from S inverse A S?

If A is the identity, then S inverse A S is the identity -- and of course that's the correct Lambda.

The matrix was already diagonal.

So if the matrix is already diagonal, then the, the lambda is the same as the matrix.

A diagonal matrix has got its eigenvalues sitting right there in front of you.

Now if it's triangular, the eigenvalues are still sitting there, so let's take a case where it's triangular.

Suppose A is, like, two one, zero two.

So there's a case that's going to be trouble.

There's a case that's going to be trouble.

First of all, what are the -- I mean, we just -- if we start with a matrix, the first thing we do, practically without thinking is compute the eigenvalues and eigenvectors.

OK. So what are the eigenvalues?

You can tell me right away what they are.

It's a triangular matrix, so when I do this determinant, shall I do this determinant of A minus lambda I?

I'll get this two minus lambda one zero two minus lambda, right?

I take that determinant, so I make those into vertical bars to mean determinant.

And what's the determinant?

It's (two minus lambda) squared.

So the eigenvalues are lambda equals two and two.

Now the next step, find the eigenvectors.

So I look for eigenvectors, and what do I find for this guy? When I subtract two times the identity, A minus two I has zeros down the diagonal.

And I'm looking for the null space.

What's, what are the eigenvectors?

They're the -- the null space of A minus lambda I.

The null space is only one dimensional.

This is a case where I don't have enough eigenvectors.

My algebraic multiplicity is two.

I would say, when I see, when I count how often the eigenvalue is repeated, that's the algebraic multiplicity.

That's the multiplicity, how many times is it the root of the polynomial?

My polynomial is (two minus lambda) squared.

So my algebraic multiplicity is two.

But the geometric multiplicity, which looks for eigenvectors -- which means the null space of this thing -- is one. The only eigenvector is one zero. That's in the null space.

Zero one is not in the null space.

The null space is only one dimensional.

So there's a matrix -- this A, or the original A -- that is not diagonalizable.

I can't find two independent eigenvectors.
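
We can watch that failure numerically. A sketch of this two-by-two example, checking that the algebraic multiplicity is two while the null space of A minus two I is only one-dimensional:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 2.0]])

print(np.linalg.eigvals(A))   # [2. 2.]: algebraic multiplicity two

# Geometric multiplicity = dim null(A - 2I) = 2 - rank(A - 2I)
print(2 - np.linalg.matrix_rank(A - 2 * np.eye(2)))   # 1: only one independent eigenvector

# So the eigenvector matrix S is (numerically) singular and cannot be inverted.
_, S = np.linalg.eig(A)
print(np.linalg.matrix_rank(S))   # 1
```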

OK. So that's a case that I'm not really handling.

For example, when I wrote down up here that the powers went to zero if the eigenvalues were below one, I didn't really handle that case of repeated eigenvalues, because my reasoning was based on this formula.

And this formula is based on n independent eigenvectors.

OK. Just to say, then: there are some matrices that we don't cover through diagonalization, but the great majority we do.

And we're always OK if we have distinct eigenvalues.

OK, that's the, like, the typical case.

Because for each eigenvalue there's at least one eigenvector.

The algebraic multiplicity here is one for every eigenvalue and the geometric multiplicity is one.

OK. Now let me come back to the important case, when we're OK.

The important case, when we are diagonalizable.

Let me solve this equation.

I start with some given vector u0. And then my equation is: at every step, I multiply what I have by A.

That, that equation ought to be simple to handle.

And I'd like to be able to solve it.

How would I find -- if I start with a vector u0 and I multiply by A a hundred times, what have I got?

Well, I could certainly write down a formula for the answer. u1 is A u0. And what's u2 then? I get u2 from u1 by multiplying by A again, so I've got A twice.

And my formula is: uk, after k steps, is A to the k times the original u0. You see what I'm doing?

The next section is going to solve systems of differential equations.

I'm going to have derivatives.

This section is the nice one.

It solves difference equations.

I would call that a difference equation.

I would call that a first-order system, because it only goes up one level.

And I -- it's a system because these are vectors and that's a matrix.

And the solution is just that.

That's, like, the most compact formula I could ever get. u100 would be A to the hundredth times u0. But how would I actually find u100? How would I discover what u100 is?

Let me, let me show you how.

So to really solve it, I would take this initial vector u0 and write it as a combination of eigenvectors.

To really solve, write u nought as a combination: a certain amount of the first eigenvector, plus a certain amount of the second eigenvector, and so on, up to a certain amount of the last eigenvector.

You've got to see the magic of eigenvectors working here.

So what's A times that? I can separate it out into n separate pieces, and that's the whole point.

Each of those pieces goes its own merry way.

Each of those pieces is an eigenvector, and when I multiply by A, what does this piece become?

So that's some amount of the first eigenvector -- let's suppose the eigenvectors are normalized to be unit vectors.

So that says what each eigenvector is, and I need some multiple of it to produce u0. OK.

Now when I multiply by A, what do I get?

I get c1, which is just a factor, times Ax1, but Ax1 is lambda one x1. When I multiply this by A, I get c2 lambda two x2. And here I get cn lambda n xn.

And suppose I multiply by A to the hundredth power now.

Having multiplied by A once, let's multiply by A to the hundredth.

What happens to this first term when I multiply by A to the one hundredth?

It's got that factor lambda one to the hundredth.

That -- that's what I mean by going its own merry way.

It, it is pure eigenvector.

It's exactly in a direction where multiplication by A just brings in a scalar factor, lambda one.

So multiplying a hundred times brings in that factor a hundred times: lambda one to the hundredth.

Likewise lambda two to the hundredth, and lambda n to the hundredth.

Actually, we're -- what are we seeing here?

We're seeing this same capital Lambda to the hundredth, as in the diagonalization.

And we're seeing the S matrix, the matrix S of eigenvectors.

That's what this has got to amount to.

S times Lambda to the hundredth power times this vector c, which is telling us how much of each eigenvector is in the original thing.

So if I had to really find the hundredth power, I would take u0 and expand it as a combination of eigenvectors -- this is really S, the eigenvector matrix, times c, the coefficient vector.

And then, by inserting these hundredth powers of the eigenvalues, I'd immediately have the answer.

So if u100 is A to the hundredth times u0, and u0 is S c, then you see this formula is just that formula. The way I would actually get hold of u100 -- see what the solution is after a hundred steps -- would be: expand the initial vector into eigenvectors, and let each eigenvector go its own way, multiplying by lambda at every step, and therefore by lambda to the hundredth power after a hundred steps.
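
That recipe is easy to spell out in code. A minimal sketch (mine, with a made-up matrix A and starting vector u0): expand u0 = S c, raise each eigenvalue to the k-th power, and reassemble.

```python
import numpy as np

def solve_difference_equation(A, u0, k):
    # u_{k+1} = A u_k  =>  u_k = S Lambda^k S^{-1} u0 = sum_i c_i lambda_i^k x_i.
    # Assumes A has a full set of independent eigenvectors.
    lam, S = np.linalg.eig(A)
    c = np.linalg.solve(S, u0)   # expand u0 = S c in the eigenvector basis
    return S @ (lam**k * c)      # each piece goes its own merry way: c_i lambda_i^k x_i

A = np.array([[0.9, 0.2],
              [0.1, 0.8]])       # made-up example
u0 = np.array([1.0, 0.0])

print(solve_difference_equation(A, u0, 100))
print(np.linalg.matrix_power(A, 100) @ u0)   # same answer by brute force
```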

So those are the formulas. Can I do an example?

Now let me take an example.

I'll use the Fibonacci sequence as an example.

You remember the Fibonacci numbers?

If we start with one and one as F0 -- oh, I think I start with zero, maybe.

Let zero and one be the first ones.

So there's F0 and F1, the first two Fibonacci numbers.

Then what's the rule for Fibonacci numbers?

The next one is the sum of those, so it's one.

The next one is the sum of those, so it's two.

The next one is the sum of those, so it's three.

Well, it looks like one two three four five, but it's not going to go that way.

The next one is five, right.

And the one hundredth Fibonacci number is what?

How could I get a formula for the hundredth number?

And, for example, how could I answer the question, how fast are they growing?

How fast are those Fibonacci numbers growing?

Whatever the eigenvalues of whatever matrix it is, they're not smaller than one.

These numbers are growing.

But how fast are they growing?

The answer lies in the eigenvalue.

So I've got to find the matrix, so let me write down the Fibonacci rule: F(k+2) = F(k+1) + F(k), right?

Now that's not in my form -- I want to write that as u(k+1) = A uk.

But right now what I've got is a single equation, not a system, and it's second-order. It's like having a second-order differential equation with second derivatives.

I want to get first derivatives.

Here I want to get first differences.

So the way to do it is to introduce a vector uk -- see, a small trick.

Let uk be a vector, F(k+1) and Fk.

So I'm going to get a two by two system, first order, instead of a scalar system, second order, by a simple trick.

I'm just going to add in an equation F(k+1) equals F(k+1). That will be my second equation.

Then this is my system, this is my unknown, and what's my one step equation?

So now u(k+1) is the left side, and what have I got here on the right side?

I've got some matrix multiplying uk.

Can you see that all right? If you can see it, then you can tell me what the matrix is.

Do you see that? I'm taking my system here.

I artificially made it into a system.

I artificially made the unknown into a vector.

And now I'm ready to look and see what the matrix is. So do you see the left side? u(k+1) is F(k+2), F(k+1); that's just what I want.

On the right side, remember, this is uk here -- let me for the moment write it as F(k+1), Fk. So what's the matrix?

Well, that has a one and a one, and that has a one and a zero.

Do you see that that gives me the right-hand side?

And this is our friend uk.

So that simple trick changed the second-order scalar problem to a first-order system, two by two, with two unknowns.
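
The trick in code form, as a small sketch: build the two-by-two matrix from the two equations above and iterate u(k+1) = A uk.

```python
import numpy as np

# F(k+2) = F(k+1) + F(k) and F(k+1) = F(k+1), stacked as u_{k+1} = A u_k
# with u_k = (F(k+1), F(k)).
A = np.array([[1, 1],
              [1, 0]])

u = np.array([1, 0])   # u_0 = (F_1, F_0)
for _ in range(10):
    u = A @ u
print(u)               # [89 55] = (F_11, F_10)
```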

Well, before I even think, I find its eigenvalues and eigenvectors.

So what are the eigenvalues and eigenvectors of that matrix?

I always -- first let me just, like, think for a minute.

It's two by two, so this shouldn't be impossible to do.

So my matrix, again, is one one one zero.

So what I will eventually know about symmetric matrices is that the eigenvalues will come out real.

I won't get any complex numbers here.

And the eigenvectors, once I get those, actually will be orthogonal.

But two by two, I'm more interested in what the actual numbers are.

What do I know about the two numbers?

Well, do you want me to find the determinant of A minus lambda I? So it's the determinant of one minus lambda, one; one, minus lambda.

There'll be two eigenvalues.

Tell me again what I know about the two eigenvalues before I go any further.

Tell me something about these two eigenvalues.

Lambda one plus lambda two is?

Is the same as the trace down the diagonal of the matrix.

So lambda one plus lambda two should come out to be one.

And lambda one times lambda two should come out to be the determinant, which is minus one.

So I'm expecting the eigenvalues to add to one and to multiply to minus one.

But let's just see it happen here.

If I multiply this out, I get -- that times that'll be a lambda squared minus lambda minus one.

Good. Lambda squared minus lambda minus one.

Actually, compare that with the original equation that I started with: F(k+2) - F(k+1) - F(k) = 0.

The recursion that the Fibonacci numbers satisfy is showing up directly here for the eigenvalues, when we set that to zero.

Well, I would like to be able to factor that quadratic, but I'm better off using the quadratic formula.

Minus b is one, plus or minus the square root of b squared, which is one, minus four times a times c, which gives plus four, all over two.

So that's the square root of five.

So the eigenvalues are lambda one is one half of one plus square root of five, and lambda two is one half of one minus square root of five.

And sure enough, they -- those add up to one and they multiply to give minus one.

OK. Those are the two eigenvalues.

How -- what are those numbers approximately?

Square root of five, well, it's more than two but less than three.

Hmm. It'd be nice to know these numbers.

I think that number comes out bigger than one, right?

This number comes out bigger than one.

It's about one point six one eight or something.

And suppose it's one point six.

Is, is lambda two positive or negative?

Negative, right -- it's obviously negative. And they add up to one, so it's minus point six one eight, I guess.

Those are the two eigenvalues.
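
A quick numerical confirmation (my sketch, not part of the lecture): the eigenvalues come out near 1.618 and -0.618, and they add to the trace and multiply to the determinant.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0]])
lam = np.linalg.eigvals(A)

print(np.sort(lam))                                # approx [-0.618  1.618]
print((1 - np.sqrt(5)) / 2, (1 + np.sqrt(5)) / 2)  # the quadratic-formula values
print(lam.sum(), lam.prod())                       # trace 1 and determinant -1
```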

One eigenvalue bigger than one, one eigenvalue smaller than one. Actually, that's a great situation to be in.

Of course, the eigenvalues are different, so there's no doubt whatever -- is this matrix diagonalizable?

Is this matrix diagonalizable, that original matrix A?

We've got two distinct eigenvalues and we can find the eigenvectors in a moment.

But they'll be independent, so we'll be diagonalizable.

And now you can already answer my very first question.

How fast are those Fibonacci numbers increasing?

They're increasing, right? They're not doubling at every step.

Let me -- let's look again at these numbers.

Five, eight, thirteen, it's not obvious.

The next one would be twenty-one, then thirty-four. So to get some idea of what F one hundred is, approximately -- what's controlling the growth of these Fibonacci numbers?

And which eigenvalue is controlling that growth?

So F100 will be approximately some constant, c1 I guess, times this lambda one, this one plus square root of five over two, to the hundredth power.

In other words, the Fibonacci numbers are growing by about that factor at every step.

Do you see that we've got precise information about the Fibonacci numbers out of the eigenvalues?

OK. And again, why is that true?

Let me go over to this board and show what I'm doing here.

The original initial value is some combination of eigenvectors.

And then when we start going out the sequence of Fibonacci numbers, when we start multiplying by A a hundred times, it's this lambda one to the hundredth.

This term is the one that's taking over -- I mean, that's big, like one point six to the hundredth power.

The second term is practically nothing, right?

The point six, or minus point six, to the hundredth power is an extremely small number.

There are only two terms, because we're two by two.

This piece is there, but it's disappearing, while this piece is there and it's growing and controlling everything.

So really we're doing, like, problems that are evolving.

Instead of Ax = b -- that's a static problem -- we're now doing dynamics.

A, A squared, A cubed: things are evolving in time. And the eigenvalues are the crucial numbers.

OK. I guess to complete this, I'd better write down the eigenvectors. So we should complete the whole process by finding the eigenvectors.

OK, well, up in the corner, then, I have to look at A minus lambda I.

So A minus lambda I is this: one minus lambda, one; one, minus lambda.

And now can we spot an eigenvector out of that?

For these two lambdas, this matrix is singular.

I guess the eigenvector of a two by two ought to be, I mean, easy. If I know that this matrix is singular, then it seems to me the eigenvector has to be lambda and one, because that multiplication will give me the zero.

And this multiplication had better also give me zero.

This is minus lambda squared plus lambda plus one.

It's the thing that's zero because these lambdas are special.

There's the eigenvector: x1 is (lambda one, one), and x2 is (lambda two, one).

I did that as a little trick that was available in the two by two case.

So now I finally have to take the initial u0. To complete this example entirely, I have to say, OK, what was u0? u0, the starting vector, is (F1, F0), and those were one and zero. So I have to use that vector.

So I have to look for a multiple of the first eigenvector plus a multiple of the second to produce u0, the (one, zero) vector. This is what will find c1 and c2, and then I'm done.
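
The grinding-out can be left to the computer. A sketch of the whole pipeline (find c from u0 = S c, raise the eigenvalues to the hundredth power, read off F100; note that float arithmetic only gets the leading digits of so large a number):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0]])
u0 = np.array([1.0, 0.0])    # (F_1, F_0)

lam, S = np.linalg.eig(A)
c = np.linalg.solve(S, u0)   # u0 = c1 x1 + c2 x2

k = 100
uk = S @ (lam**k * c)        # (F_101, F_100), up to float round-off
print(uk[1])                 # close to F_100 = 354224848179261915075

# The dominant term alone, c1 lambda1^100 times the second entry of x1,
# already gives essentially the same number: the other term has died out.
i = np.argmax(np.abs(lam))
print(c[i] * lam[i]**k * S[1, i])
```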

So instead of grinding out a formula in the last five seconds, let me repeat the idea.

Because it's really the idea that's central.

When things are evolving in time -- let me come back to this board, because the ideas are here.

When things are evolving in time by a first-order system, starting from an original u0, the key is find the eigenvalues and eigenvectors of A.

Those eigenvalues will already tell you what's happening.

Is the solution blowing up, is it going to zero, what is it doing?

And then to find an exact formula, you take your u0, write it as a combination of eigenvectors, and then follow each eigenvector separately.

And that's really what the formula for A to the K is doing.

So remember that formula for A to the K is S lambda to the K S inverse.

OK. That's difference equations.

The homework will give some examples, different from Fibonacci, to follow through.


Topology (Math 731)

Weekly assignments are due Wednesdays at 11am. Occasionally, as on the first day of class, you will be given one problem due at the start of the next class.

Diagnostic exercises are indicated by DX; these should not be submitted. All other exercises require written solutions. Check the errata for exercises marked *.

Assignment 1, due F., 9/2 [Do not discuss with others.]

Assignment 2, due W., 9/7
DX: 1.1, 1.3, 1.6, 1.7, 1.8, 1.9, 1.10, 1.11, 1.15
Required: 1.2, 1.4, 1.5, 1.12, 1.13 (1.12, 1.13 delayed until Asst 3)
Required: Prove that the interval [0,1) is uncountable using Cantor's diagonalization argument. (Do not look this up, unless you've tried it for a long time!)

Assignment 3, due W., 9/14
DX: 1.17, 1.25, 1.28, 1.29, 1.30, 1.31
Required: 1.12, 1.13, 1.14, 1.16, 1.19, 1.20 (defining a subbasis), 1.21, 1.27, 1.34*, 1.35
Required: Show that the separation axioms obey: T4 implies T3 implies T2.

Assignment 4, due W., 9/21
DX: 1.38, 2.4, 2.5, 2.6, 2.7, 2.8
Required: 1.22, 1.34*, 1.35, 1.39, 1.44, 2.1, 2.3, 2.10
Required: Show that the separation axioms obey: T4 implies T3 implies T2.

Assignment 5, due F., 9/23. We will present the proof of Theorem 2.15 (exercise 2.28) in class. There are 12 parts, so most of you will be randomly chosen to present. If you present, your grade is based upon that. If you don't, you will submit 3-4 parts in class to be graded.

Assignment 6, due W., 9/28
DX: 2.26, 2.27, 2.31
DX: Let T and T' be two topologies on X, with T coarser than T'. Show, for A a subset of X, that the interior of A under T is contained in the interior of A under T'. Similarly, show that the closure of A under T contains the closure of A under T'.
Required: 2.13, 2.14*, 2.18, 2.19*, 2.23, 2.24
Challenge Problem: (Munkres 17.21) Consider the power set P(X) of X. The operations closure and complement may be viewed as maps P(X)->P(X). (a) Show that, starting from a given set A, one can form no more than 14 distinct sets by applying these operations. (b) Find a subset of the reals for which the maximum of 14 distinct sets is achieved.

Assignment 7, due W., 10/12
DX: 3.1-3.6, 3.9, 3.12, 3.13, 3.14, 3.18, 3.20, 3.23, 3.25, 3.28
Required: 2.29, 2.35, 3.7, 3.15, 3.17, 3.22, 3.26, 3.29, 3.30, 3.33
For 2.29, assume all intervals [c,d] have positive length, i.e., c ≠ d.

Assignment 8, due W., 10/19
DX: 3.40, 4.1, 4.2, 4.7
Required: 3.44, 4.3, 4.4, 4.5, 4.10, 4.12, 4.18, 4.19, 4.20, 4.21
Challenge problem: Describe the configuration space C3(S^1). What familiar space is it equivalent to?
Hint (3.44): The bonding angle of a water molecule is constant, roughly 104.5 degrees.

Assignment 10, due F., 11/11
DX: 4.38, 5.1, 5.2, 5.4, 5.5, 5.6, 5.10, 5.13, 5.16, 5.22, 5.26, 5.27, 5.30, 5.33, 6.1, 6.2, 6.5
Required: 5.3, 5.12, 5.14*, 5.24, 5.29**, 5.31, 5.35, 5.37, 5.41, 6.7b, 6.9, 6.18, 6.20, 6.27, 6.30
Problem [replacing 5.29b]. Show that for all c1 > 0 and all c2 satisfying 0 < c2 ≤ c1(b-a), there exists f in C[a,b] s.t. ρ_M(f,0) = c1 and ρ(f,0) = c2.
Required: Read section 4.3. What is "gimbal lock"? How did NASA encounter and address it 40-50 years ago?
Either attend the colloquium on 11/10 and write a 1-page reaction, or do problem 6.24.

Assignment 12, due Tu., 11/22
DX: 7.1, 7.2, 7.3, 7.6, 7.9
Required: 6.41, 6.43, 6.44, 6.45, 7.5 (you must use open covers), 7.11


29: 15 Pre-Class Assignment - Diagonalization and Powers - Mathematics

Instructor: Prof. Zhong-Jin Ruan

Classroom: 141AH, TuTh 11:00am - 12:20pm

Office Hour: TuTh 1-2 pm, or by appointment.

Web page: https://math.uiuc.edu/

This course provides a careful development of elementary analysis for those who intend to take graduate courses in mathematics. Topics include the completeness property of the real number system, basic topological properties of n-dimensional space, convergence of numerical sequences and series of functions, properties of continuous functions, and basic theorems concerning differentiation and Riemann integration.

Textbook: Elementary Analysis: The Theory of Calculus by Kenneth Ross, 2nd edition, 2013.

Pre-requisite: Math 242 and Math 347, or equivalent.

Homework: Homework will be assigned each week and will be due in class on the following dates:
Part I: Thursdays Jan 26, Feb 2, 9, 16
Part II: Thursdays Mar 9, 16, 30, Apr 6
Part III: Tuesdays Apr 25, May 2.

No late homework will be accepted. If you have a reasonable excuse for missing an assignment, I will score it by the average of the other assignments.

Exams: We will have two midterm exams and a final exam.

Exam1: Thursday February 23
Exam2: Thursday April 13

Final Exam: Thursday May 11, 7-10pm at 141 Altgeld Hall.

Grading policy: There will be a total of 500 points, computed as follows.

Homework: 10 x 10 pts = 100 pts
Exams: 2 x 100 pts = 200 pts
Final Exam: 200 pts
Total: 500 pts

Your grade will be based on the total scores.

HW#2: #8.2c),d), 8.4, 8.6a), 8.8a), 8.10. Due Thursday, February 2, 2017. Solution [pdf]
Practice HW (No need to hand in): #9.1, 9.2, 9.4, 9.8a), c), 9.15.


HW#3: #10.6a), b), 10.7, 10.10, 12.2, 12.4, 12.10. Due Thursday, February 9, 2017. Solution [pdf]
Practice HW (No need to hand in): 10.1, 10.4, 12.3.


HW#4: #11.4 (consider (x_n), (z_n)), 11.6, 12.6a), c), 14.2e), f), g), 14.4a), b), 14.12. Due Thursday, February 16, 2017. Solution [pdf]
Practice HW (No need to hand in): #11.11, 14.7, 14.10.

The 1st exam will be given on Thursday, February 23, 2017 from 11:00am to 12:15pm. Solution [pdf]

HW#5: #13.4, 13.12, 13.13, 17.10a), b), 17.12a), b). Due Thursday, March 2, 2017. Solution [pdf]
Practice HW (No need to hand in): #13.3, 13.9, 13.11, 17.3, 17.9c), d).

HW#6: #18.2, 18.4, 18.6, 18.8, 19.2c), 19.6a), b). Due Thursday, March 9, 2017. Solution [pdf]

HW#7: #13.8a), 19.7a), b), 19.10, 21.2, 21.8, 21.10a), d). Due Thursday, March 16, 2017. Solution [pdf]
Practice HW (No need to hand in) #21.1, 21.3, 21.7.

HW#8: #22.2, 22.4a),b),c), 22.6a), 22.8, 23.2b),d), 23.6. Due Thursday, March 30, 2017. Solution [pdf]
(Hint: You may apply 22.3 to 22.4b).)
Practice HW (No need to hand in). #22.1, 22.5, 22.11, 23.1.

HW#9: #24.4, 24.6, 24.14, 25.4, 25.6, 25.10. Due Thursday, April 6, 2017. Solution [pdf]
Practice HW (No need to hand in) #24.7, 24.11, 24.13, 25.5, 25.9.

For sections 28 and 29, we only have practice homework (no need to hand in): #28.4, 28.6, 28.8, 28.14, 29.10, 29.12, 29.14.

The 2nd exam will be given on Thursday, April 13, 2017 from 11am to 12:15pm. Solution [pdf]

HW#10: #32.2, 32.6, 32.8, 33.4, 33.8a), 33.10. Due Tuesday, April 25, 2017. Solution [pdf]
Practice HW (No need to hand in) #33.3, 33.7

HW#11: #33.14, 34.2a), 34.4, 34.6, 34.7, 34.8a). Due Tuesday, May 2, 2017. Solution [pdf]

Final Exam Review: I will hold a final exam review session at 343AH, 5pm - 6pm on Tuesday May 9, 2017.

Extra Office Hour: I will add an office hour on Thursday May 11, 11am-noon at my office 353AH.

Final Exam will be given at classroom (141 Altgeld Hall) on Thursday May 11, 2017 from 7pm to 10pm.
Notice 1: This is a closed-book exam: no books, no notes, and no cell phones will be allowed during the exam.
Notice 2: Let me know if you have any potential conflicts. Otherwise, do not miss this final exam. I will not give any make-up exam unless you have a strong reason.


29: 15 Pre-Class Assignment - Diagonalization and Powers - Mathematics

Instructor: Nikola Petrov, 802 PHSC, (405)325-4316, npetrov AT math.ou.edu

Office Hours: Mon 2:30-3:30 p.m., Tue 4:30-5:30 p.m., or by appointment.

Prerequisite: 2443 (Calculus and Analytic Geometry IV), 3413 (Physical Mathematics I).

Course catalog description: The Fourier transform and applications, a survey of complex variable theory, linear and nonlinear coordinate transformations, tensors, elements of the calculus of variations. Duplicates one hour of 3333 and one hour of 4103. (Sp)

Text: D. A. McQuarrie, Mathematical Methods for Scientists and Engineers, University Science Books, Sausalito, CA, 2003. The course will cover (parts of) chapters 4-10, 17-20.

  • Homework 1, due Thu, Aug 30.
  • Homework 2, due Thu, Sep 6.
  • Homework 3, due Thu, Sep 13.
  • Homework 4, due Thu, Sep 27.
  • Homework 5, due Thu, Oct 4.
  • Homework 6, due Thu, Oct 11.
  • Homework 7, due Thu, Oct 18.
  • Homework 8, due Thu, Nov 1. SOLUTIONS
  • Homework 9, due Tue, Nov 13. SOLUTIONS
  • Homework 10, due Tue, Nov 27. SOLUTIONS
  • Homework 11, due Thu, Dec 6.

Hour exam 3 will be on Thursday, November 29, in class.
No formula sheets or calculators are allowed.

    Lecture 1 (Tue, Aug 21): Reminder: Fourier series: Fourier series of a periodic function (using complex exponents or sines and cosines), even and odd functions, sine and cosine series, convergence of Fourier series, Parseval's theorem, physical interpretation (Sec. 15.1-15.3).
    Fourier transform: definition of Fourier transform, examples (pages 845-849 of Sec. 17.5).

Attendance: You are required to attend class on those days when an examination is being given; attendance during other class periods is also strongly encouraged. You are fully responsible for the material covered in each class, whether or not you attend. Make-ups for missed exams will be given only if there is a compelling reason for the absence, which I know about beforehand and can document independently of your testimony (for example, via a note or a phone call from a doctor or a parent).

You should come to class on time; if you miss a quiz because you came late, you won't be able to make up for it.

Homework: It is absolutely essential to solve a large number of problems on a regular basis! Homework assignments will be given regularly throughout the semester and will be posted on this web-site. Usually the homeworks will be due at the start of class on Thursday. Each homework will consist of several problems, of which some pseudo-randomly chosen problems will be graded. Your lowest homework grade will be dropped. All homework should be written on 8.5"×11" paper with your name clearly written, and should be stapled. No late homework will be accepted!

You are encouraged to discuss the homework problems with other students. However, you have to write your solutions clearly and in your own words - this is the only way to achieve real understanding! It is advisable that you first write a draft of the solutions and then copy them neatly. Please write the problems in the same order in which they are given in the assignment.

Shortly after a homework assignment's due date, solutions to the problems from that assignment will be placed on restricted reserve in the Chemistry-Mathematics Library in 207 PHSC.

Quizzes: Short pop-quizzes will be given in class at random times; your lowest quiz grade will be dropped. Often the quizzes will use material that has been covered very recently (even in the previous lecture), so you have to make every effort to keep up with the material and to study the corresponding sections from the book right after they have been covered in class.

Exams: There will be three in-class midterms and a (comprehensive) final. The approximate dates for the midterms are September 18, October 23 and November 27. The final is scheduled for Tuesday, December 11, 1:30-3:30 p.m. All tests must be taken at the scheduled times, except in extraordinary circumstances. Please do not arrange travel plans that prevent you from taking any of the exams at the scheduled time.

Grading: Your grade will be determined by your performance on the following coursework:

Pop-quizzes (lowest grade dropped) 15%
Homework (lowest grade dropped) 15%
Three in-class midterms 15% each
Final Examination 25%

Academic calendar for Fall 2007.

Policy on W/I Grades: Through September 23, you can withdraw from the course with an automatic W. In addition, it is my policy to give any student a W grade, regardless of his/her performance in the course, through the extended drop period that ends on December 7. However, after October 29, you can only drop via petition to the Dean of your college. Such petitions are not often granted. Furthermore, even if the petition is granted, I will give you a grade of "Withdrawn Failing" if you are indeed failing at the time of your petition.

The grade of I (Incomplete) is not intended to serve as a benign substitute for the grade of F. I only give the I grade if a student has completed the majority of the work in the course (for example, everything except the final exam), the coursework cannot be completed because of compelling and verifiable problems beyond the student's control, and the student expresses a clear intention of making up the missed work as soon as possible.

Academic Misconduct: All cases of suspected academic misconduct will be referred to the Dean of the College of Arts and Sciences for prosecution under the University's Academic Misconduct Code. The penalties can be quite severe. Don't do it! For more details on the University's policies concerning academic misconduct see http://www.ou.edu/provost/integrity/. See also the Academic Misconduct Code, which is a part of the Student Code and can be found at http://www.ou.edu/studentcode/.

Students With Disabilities: The University of Oklahoma is committed to providing reasonable accommodation for all students with disabilities. Students with disabilities who require accommodations in this course are requested to speak with the instructor as early in the semester as possible. Students with disabilities must be registered with the Office of Disability Services prior to receiving accommodations in this course. The Office of Disability Services is located in Goddard Health Center, Suite 166: phone 405-325-3852 or TDD only 405-325-4173.


E-Lesson Plan for Mathematics Teachers

For a teacher, it is important to make a lesson plan. In mathematics, lesson planning is an art. With the help of good planning, a teacher can achieve his or her goals in the classroom. In the link given below, teachers can find good lesson plans in mathematics.


Math 171: Fundamental Concepts of Analysis, Spring 2016

Math 171 is Stanford's honors analysis class and will have a strong emphasis on rigor and proofs. The class will take an abstract approach, especially around metric spaces and related concepts. Math 171 is required for honors majors, and satisfies the WIM (Writing In the Major) requirement.

For some students, Math 115 may be a suitable alternative to 171. Both Math 115 and Math 171 cover similar material, but 171 will be more fast-paced and have a more abstract/proof-based flavor. If you are unsure which of these two classes will be more appropriate for you, please come and talk to me as soon as possible (well before the drop deadline, which is April 15).

Textbook and topics

Foundations of Mathematical Analysis by Johnsonbaugh and Pfaffenberger.

We will cover approximately chapters I-X of the book. Thematically, the content we will cover falls into three areas:

  • Real numbers, sequences, limits, series, functions. Much of this will be familiar to you already, and we will not cover it in detail. Specifically, we will not cover the following sections in complete detail: 1-8, 10-17, 22-33 and 48-50. Instead, these topics will be reviewed in the first two weeks. Here are some notes for the review, written by Prof. Leon Simon.
  • Metric spaces. Completeness, compactness. Introduction to topological spaces.
  • Integration. The Riemann integral. Introduction to the Lebesgue integral. We will make use of these notes written by Prof. Leon Simon.

A more detailed lecture plan (updated after each lecture) is below.

Course Grade

The course grade will be based on the following:

  • 25% Homework assignments,
  • 25% Midterm exam,
  • 15% Writing assignment,
  • 35% Final exam

Homework Assignments

Homeworks will be posted here on an ongoing basis (roughly a week before they are due) and will be due at 4pm on the date listed. You can hand write your solutions, but you are encouraged to consider typing your solutions with LaTeX (now is a good time to start learning LaTeX if you haven't already, as you will be required to typeset the WIM assignment). Please submit your homework directly to our Course Assistant Alex if he is in his office (380-380M), or slide the homework under his door if he is away. If you have typed your solutions, you are welcome to simply e-mail your homework to Alex.

Note: we are looking not just for valid proofs, but also for readable, well-explained ones (and indeed, you will be partly graded on readability). This means you should try to use complete sentences, insert explanations, and err on the side of writing out "for all", "there exists", etc., rather than symbols, if there is any chance of confusion.

Late homeworks will not be accepted. In order to accommodate exceptional situations such as serious illness, your lowest homework score will be dropped at the end of the quarter. You are encouraged to discuss problems with each other, but you must work on your own when you write down solutions. The Honor Code applies to this and all other written aspects of the course.

As homeworks are completed, solutions (in PDF and LaTeX) will be uploaded here. The LaTeX files may be useful as templates for your own LaTeX work (whether on homework or the writing assignment).

Due date Assignment
Fri, Apr 8 Homework 1. Solutions: PDF. TeX
Fri, Apr 15 Homework 2. Solutions: PDF. TeX
Fri, Apr 22 Homework 3. Solutions: PDF. TeX
Fri, Apr 29 Homework 4. Solutions: PDF. TeX
Fri, May 6 Homework 5. Solutions: PDF. TeX
Fri, May 13 Homework 6. Solutions: PDF. TeX
Fri, May 20 Homework 7 (half weight, in light of WIM assignment). Solutions: PDF. TeX
Fri, May 27 Homework 8 (not half weight, but also shorter than usual) Solutions: PDF. TeX

Writing Assignment

The writing assignment is now posted here. The first draft will be due Friday, May 20 at 4pm to Alex, and the final draft will be due Tuesday, May 31 at 4pm to Alex (edit: the deadline has been extended to June 1 at 4pm, should you require an additional day). Your paper should be about 4-7 pages in length. It is required that you typeset your assignment; we strongly recommend you use LaTeX. We have made available copies of the TeX solutions to homework assignments above, in case they are helpful references as you learn LaTeX. A couple of the homework assignments will be shorter than usual, to give you time to work on the writing assignment.

Clear writing is an important part of mathematical communication, and is an important part of our course. The broad idea of this assignment is to write a clear exposition of a specific mathematical topic detailed in the assignment beyond what we have covered in class, which is accessible to someone at a similar stage in a similar class.

Professor Keith Conrad at the University of Connecticut has written a helpful guide to common errors in mathematical writing, available here.

Midterm and Final Exam

Midterm exam

The Midterm Exam was held on Wednesday April 27 from 8:30 am - 10:20 am in 380-380F. Here is a copy of the exam. Here is a set of solutions.

Also, here are some old exams from previous versions of the course which we used for practice (bear in mind that the exact topics, time of exam, and format all differ from year to year):

  • 2014 spring midterm exam (with solutions built in)
  • 2013 spring midterm exam (solution set here)
  • 2011 spring midterm exam (no solutions available).

Final Exam

The Final Exam will be held on Saturday, June 4th from 8:30 am - 11:30 am in classroom 380-380X.

The final exam is a closed book, closed notes exam. The topics range through all of the topics we have covered in the class. Please see the Lecture Plan below for a review of all of the important topics and book sections (plus Professor Simon's review notes, and notes on integration) covered.

A helpful way to prepare for the final exam is to make sure you can solve all of the homework problems, and/or related problems in the book. It is also important to know and be able to articulate statements of all of the major definitions and Theorems/results covered in class to date (Indeed, a part of an exam question might even be to state an important result, before working with it. Or at the very least, when you are explaining your argument, you may need to cite such a result). You will not be asked to recall from memory proofs of important theorems, but you will certainly be asked to use these results, or reason through parts of their proof --- so some understanding of how various results are proved is definitely important.

Also, here is an old exam and some practice problems. See also the old midterm exams above (and note that some of them had problems which did not appear on our midterm, but might appear on the final).

  • Practice problems from the Spring 2013 Math 171 class final preparation here (a couple of these problems already appeared on our HW 8).
  • 2007 fall final exam (solutions available here).

Lecture Plan

Lecture topics by day will be posted on an ongoing basis below. Future topics are tentative and will be adjusted as necessary.


Math 416, Abstract Linear Algebra

This is a rigorous proof-oriented course in linear algebra. Topics include vector spaces, linear transformations, determinants, eigenvectors and eigenvalues, inner product spaces, Hermitian matrices, and Jordan Normal Form.

Prerequisites: Math 241 required with Math 347 strongly recommended.

Required text: Friedberg, Insel, and Spence, Linear Algebra, 4th edition, 600 pages, Pearson 2002.

Supplementary text: Especially for the first quarter of the course, I will also refer to the free text:

Beezer, A First Course in Linear Algebra, Version 3.5 (2015). Available online or as a downloadable PDF file.

Course Policies

Overall grading: Your course grade will be based on homework (16%), three in-class midterm exams (18% each), and a comprehensive final exam (30%).

Weekly homework: These are due at the beginning of class, typically on a Friday. Late homework will not be accepted; however, your lowest two homework grades will be dropped, so you are effectively allowed two infinitely late assignments. Collaboration on homework is permitted, nay encouraged. However, you must write up your solutions individually and understand them completely.

In-class midterms: These three 50 minute exams will be held in our usual classroom on the following Wednesdays: February 17, March 16, and April 20.

Final exam: There will be a combined final exam for sections B13 and C13 of Math 416, which will be held on Friday, May 6 from 1:30-4:30 in Psychology 23.

Missed exams: There will be no make-up exams. Rather, in the event of a valid illness, accident, or family crisis, you can be excused from an exam so that it does not count toward your overall average. I reserve final judgment as to whether an exam will be excused. All such requests should be made in advance if possible, but in any event no more than one week after the exam date.

Cheating: Cheating is taken very seriously as it takes unfair advantage of the other students in the class. Penalties for cheating on exams, in particular, are very high, typically resulting in a 0 on the exam or an F in the class.

Disabilities: Students with disabilities who require reasonable accommodations should see me as soon as possible. In particular, any accommodation on exams must be requested at least a week in advance and will require a letter from DRES.

James Scholar/Honors Learning Agreements/4th credit hour: These are not offered for these sections of Math 416. Those interested in such credit should enroll in a different section of this course.

Detailed Schedule

Includes scans of my lecture notes and the homework assignments. Here [FIS] and [B] refer to the texts by Friedberg et al. and Beezer respectively.

Jan 20  Introduction. Section 1.1 of [FIS].
Jan 22  Vector spaces. Section 1.2 of [FIS].
Jan 25  Subspaces. Section 1.3 of [FIS].
Jan 27  Linear combinations and systems of equations. Section 1.4 of [FIS] and Section SSLE of [B].
Jan 29  Using matrices to encode and solve linear systems. Section RREF of [B]. HW 1 due. Solutions.
Feb 1   Row echelon form and Gaussian elimination. Section RREF of [B].
Feb 3   Solution spaces to linear systems. Section TSS of [B].
Feb 5   Linear dependence and independence. Section 1.5 of [FIS]. HW 2 due. Solutions.
Feb 8   Basis and dimension, part 1. Section 1.6 of [FIS].
Feb 10  Basis and dimension, part 2. Section 1.6 of [FIS].
Feb 12  Basis, dimension, and linear systems. HW 3 due. Solutions.
Feb 15  Intro to linear transformations. Section 2.1 of [FIS].
Feb 17  Midterm the First. Handout. Solutions.
Feb 19  The Dimension Theorem. Section 2.1 of [FIS].
Feb 22  Encoding linear transformations as matrices. Section 2.2 of [FIS].
Feb 24  Composing linear transformations and matrix multiplication. Section 2.3 of [FIS].
Feb 26  More on matrix multiplication. Section 2.3 of [FIS]. HW 4 due. Solutions.
Feb 29  Isomorphisms and invertibility. Section 2.4 of [FIS].
Mar 2   Matrices: invertibility and rank. Section 2.4 of [FIS] and Sections MINM and CRS of [B].
Mar 4   Changing coordinates. Section 2.5 of [FIS]. HW 5 due. Solutions.
Mar 7   Introduction to determinants. Section 4.1 of [FIS].
Mar 9   Definition of the determinant. Section 4.2 of [FIS].
Mar 11  The determinant and row operations. Section 4.2 of [FIS]. HW 6 due. Solutions.
Mar 14  Elementary matrices and the determinant. Sections 3.1 and 4.3 of [FIS].
Mar 16  Midterm the Second. Handout. Solutions.
Mar 18  Determinants and volumes. Section 4.3 of [FIS].
Mar 19  Spring Break starts.
Mar 27  Spring Break ends.
Mar 28  Diagonalization and eigenvectors. Section 5.1 of [FIS].
Mar 30  Finding eigenvectors. Sections 5.1 and 5.2 of [FIS].
Apr 1   Diagonalization Criteria. Section 5.2 of [FIS]. HW 7 due. Solutions.
Apr 4   Proof of the Diagonalization Criteria. Section 5.2 of [FIS].
Apr 6   Matrix powers and Markov Chains. Section 5.3 of [FIS].
Apr 8   Convergence of Markov Chains. Section 5.3 of [FIS]. HW 8 due. Solutions.
Apr 11  Inner products. Section 6.1 of [FIS].
Apr 13  Inner products and orthogonality. Sections 6.1 and 6.2 of [FIS].
Apr 15  Gram-Schmidt and friends. Section 6.2 of [FIS]. HW 9 due. Solutions.
Apr 18  Orthogonal complements and projections. Sections 6.2 and 6.3 of [FIS].
Apr 20  Midterm the Third. Handout. Solutions.
Apr 22  Projections and adjoints. Section 6.3 of [FIS].
Apr 25  Normal and self-adjoint operators. Section 6.4 of [FIS].
Apr 27  Diagonalizing self-adjoint operators. Section 6.4 of [FIS]. HW 10 due. Solutions.
Apr 29  Orthogonal and unitary operators. Section 6.5 of [FIS]. Note: two simplifications are possible here. First, condition (a) can be eliminated by observing that the proof of (a) => (b) really shows (e) => (b), and then giving the one-line proof that (b) => (a). Second, one can replace the proof of the theorem on page 5 by the formal manipulation (Ax, Ay) = (Ay)^T (Ax) = y^T (A^T A) x = y^T x = (x, y), at the cost of making the proof of the corollary on the next page less obvious.
May 2   Dealing with nondiagonalizable matrices. Sections 6.7 and 7.1 of [FIS].
May 4   Linear approximation, diagonalizing symmetric matrices, and the second derivative test. HW 11 due. Solutions.
May 6   Final exam from 1:30 - 4:30 pm in Psychology 23. Handout. Solutions.
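
The formal manipulation in the Apr 29 note is easy to sanity-check numerically. Here is a minimal sketch (Python with NumPy; an illustration, not part of the course materials) verifying that an orthogonal matrix A, one with A^T A = I, preserves inner products:

    import numpy as np

    # Draw a random orthogonal matrix A (so A^T A = I) via QR factorization.
    rng = np.random.default_rng(0)
    A, _ = np.linalg.qr(rng.standard_normal((4, 4)))

    x = rng.standard_normal(4)
    y = rng.standard_normal(4)

    # (Ax, Ay) = (Ay)^T (Ax) = y^T (A^T A) x = y^T x = (x, y),
    # since A^T A is the identity.
    print(np.dot(A @ x, A @ y))   # agrees with np.dot(x, y) up to rounding
    print(np.dot(x, y))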


SYLLABUS

  • Office: 300H LeConte College
  • Office hours: T Th 2-3 pm
  • Email: [email protected]
  • Course Web Page: www.math.sc.edu

Course Description This is an introduction to linear algebra and its applications. Main topics include matrix algebra, solution of linear systems, determinants, the notions of vector space, basis, and dimension, linear transformations, eigenvalues, and diagonalization. At each step we will develop the applications of these concepts to a range of problems in Mathematics, Engineering, and Economics.

Prerequisites Math 241--familiarity with vectors.

Textbook Linear Algebra and Its Applications, by David C. Lay, Second Edition.

Homework and Quizzes There will be weekly homework assignments due every Tuesday. Late homework will not be accepted.

Also, there will be weekly quizzes every Thursday. No calculators, textbooks, or notes will be allowed during quizzes. You should save copies of these quizzes as they are a very good source for preparing for the final and midterms. There may be a number of practice quizzes during the lectures as well.

Doing the homework problems is the most important part of any math class. You may work with a group of your classmates if you are all at about the same level; however, you should definitely try to do many problems on your own. Further, try to practice doing at least some of the problems in settings which resemble those of the tests and quizzes, i.e., without using your calculator or constantly referring to the textbook.

Lecture and Reading Schedule You should make a sincere effort to keep up with your reading assignments.

Date     Day   Section    Lecture
Jan 16   T     1.1        Systems of Linear Equations
Jan 18   TH    1.2        Row Reduction and Echelon Forms
Jan 23   T     1.3        Vector Equations
Jan 25   TH    1.4        The Matrix Equation Ax=b
Jan 30   T     1.5        Solutions of Linear Systems
Feb 1    TH    1.6        Linear Independence
Feb 6    T     1.7        Intro to Linear Transformations
Feb 8    TH    1.8        Matrix of a Linear Transformation
Feb 13   T     1.9        Linear Models in Science
Feb 15   TH    .          Midterm 1
Feb 20   T     2.1        Matrix Operations
Feb 22   TH    2.2        The Inverse of a Matrix
Feb 27   T     2.3        Characterizations of Invertible Matrices
Mar 1    TH    2.8        Applications to Computer Graphics
Mar 6    T     2.9        Subspaces of R^n
Mar 8    TH    2.9        Subspaces of R^n
Mar 13   T     .          Spring Break
Mar 15   TH    .          Spring Break
Mar 20   T     3.1        Intro to Determinants
Mar 22   TH    3.2, 3.3   Properties, Volume
Mar 27   T     .          Review
Mar 29   TH    .          Midterm 2
Apr 3    T     4.1        Vector Spaces and Subspaces
Apr 5    TH    4.2, 4.3   Null Spaces, Column Spaces, and Bases
Apr 10   T     4.4        Coordinate Systems
Apr 12   TH    4.5        The Dimension of a Vector Space
Apr 17   T     4.7        Change of Basis
Apr 19   TH    5.1, 5.2   Eigenvalues, Characteristic Equation
Apr 24   T     5.3        Diagonalization
Apr 26   TH    .          Fibonacci Sequence (see the sketch after this table)
May 1    T     .          Review
May 9    W     .          Final Exam
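
The Apr 26 topic, the Fibonacci sequence, is a classic payoff of diagonalization: powers of a fixed 2x2 matrix generate the sequence, and diagonalizing makes those powers cheap. Here is a minimal sketch (Python with NumPy; an illustration, not part of the course materials):

    import numpy as np

    # Powers of A = [[1, 1], [1, 0]] generate Fibonacci numbers:
    # the (0, 1) entry of A^n is F(n). Diagonalizing A = P D P^{-1}
    # gives A^n = P D^n P^{-1}, and D^n just powers the diagonal.
    A = np.array([[1.0, 1.0], [1.0, 0.0]])
    eigvals, P = np.linalg.eig(A)      # columns of P are eigenvectors

    def fib(n):
        Dn = np.diag(eigvals ** n)     # D^n: elementwise powers of eigenvalues
        An = P @ Dn @ np.linalg.inv(P)
        return round(An[0, 1])         # recover the integer F(n)

    print([fib(n) for n in range(1, 11)])   # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]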

Assignments You should plan to work on these problems over a period of several days. Getting a head start on each assignment is perhaps the most critical factor determining your success in this class.

Homework #   Due Date   Problems
1            Jan 23     1.1) 2, 6, 8, 12, 14, 24, 28, 34, 35
                        1.2) 2, 6, 10, 14, 31, 32, 33, 35.
2            Jan 30     1.3) 6, 8, 10, 12, 14, 23, 29
                        1.4) 4, 8, 10, 12, 14, 18.
3            Feb 6      1.5) 2, 6, 14, 16, 26, 36, 38
                        1.6) 4, 6, 10, 20, 22, 26, 28, 30, 32, 34, 36.
4            Feb 13     1.7) 2, 4, 10, 12, 14, 16, 18, 20, 24, 30, 34
                        1.8) 2, 6, 10, 12, 18, 24, 26, 28, 33.
5            Feb 20     1.9) 2, 4, 10, 12
                        Chap 1 Supplementary Exercises) 1, 3, 6, 11.
6            Feb 27     2.1) 2, 4, 8, 12, 16, 20, 24, 26, 30
                        2.2) 4, 6, 10, 12, 13, 25, 26.
7            Mar 6      2.2) 23, 24, 30, 32, 34
                        2.3) 4, 6, 8, 14, 16, 26, 28, 34
                        2.8) 2, 4, 6, 8, 10, 11.
8            Mar 20     2.8) 7, 14
                        2.9) 2, 4, 6, 8, 10, 16, 18, 22, 24, 26, 28, 34.
9            Mar 27     3.1) 4, 10, 16, 20, 22, 28, 34, 42
                        3.2) 16, 18, 22, 26, 28, 31, 32, 33, 34, 35
                        3.3) 20, 24, 29, 30, 31.
10           Apr 3      Chap 3 Supplementary Exercises) 4, 6, 7, 9, 12.
11           Apr 10     4.1) 2, 4, 6, 8, 10, 12, 14, 24, 26, 28, 30
                        4.2) 2, 6, 8, 10, 26, 28, 34
                        4.3) 2, 4, 6, 11, 12, 23, 24.
12           Apr 17     4.4) 4, 6, 12, 14, 28
                        4.5) 2, 4, 9, 22, 24, 27, 28.
13           Apr 24     4.7) 8, 10, 14
                        5.1) 2, 6, 10, 24, 32
                        5.2) 4, 8, 10, 16, 25.
14           May 1      5.3) 2, 8, 10, 24
                        Handout) 2, 3, 4, 5, 7, 8.

Tests and Exams There will be two midterms, on Thursday, Feb 15, and Thursday, Mar 29. The final exam will be on Wednesday, May 9, at 5:30 pm. No calculators or notes will be allowed during the exams. Note: bring a bluebook to the exams.

Grading The final grade is based on homework (10%), quizzes (10%), midterms (20% each), and the final exam (40%).

A Few More Study Hints and Guidelines Learning mathematics is a demanding affair. It requires a good deal of self-discipline and hard work to appreciate the power and beauty of the subject. Further, solving math problems, much like playing a musical instrument, is a skill which can be developed only through persistent practice. You should plan to work on your exercises every day, for a total of at least 8 hours each week. Also, it is very important that you faithfully attend all lectures.

A good deal of class time will be devoted to working through problems. Do not get into the habit of sitting passively and expecting the professor to make you understand. Rather, take out your paper and pencil and work the problems along with your instructor. If something is unclear to you, feel free to ask questions, and if you need more help, go see your instructor during office hours. If you cannot come during office hours, you are welcome to knock on the professor's door at another time, or send an email for an appointment.