
  • Okay.

  • This lecture is mostly about the idea of similar matrixes.

  • I'm going to tell you what that word similar means

  • and in what way two matrixes are called similar.

  • But before I do that, I have a little more

  • to say about positive definite matrixes.

  • You can tell this is a subject I think is really important and I

  • told you what positive definite meant --

  • it means that this --

  • this expression, this quadratic form, x transpose A

  • x is always positive.

  • But the direct way to test it was with eigenvalues

  • or pivots or determinants.
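
  • Here is a quick numerical sketch of those tests in NumPy -- a minimal check, using the same two one one two matrix that appears as an example later in this lecture.

```python
import numpy as np

# The 2 by 2 example that shows up later in this lecture: symmetric, eigenvalues 1 and 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Test by eigenvalues: all positive?
print(np.linalg.eigvalsh(A))               # [1. 3.]

# Test by determinants: both leading (upper-left) determinants positive?
print(A[0, 0], np.linalg.det(A))           # 2.0  3.0 (up to round-off)

# Test by the quadratic form: x^T A x > 0 for random nonzero x.
rng = np.random.default_rng(0)
for _ in range(3):
    x = rng.standard_normal(2)
    print(float(x @ A @ x) > 0)            # True each time
```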

  • So I -- we know what it means, we know how to test it,

  • but I didn't really say where positive definite matrixes come

  • from.

  • And so one thing I want to say is that they come from least

  • squares in -- and all sorts of physical problems start with

  • a rectangular matrix -- well, you remember in least squares

  • the crucial combination was A transpose A.

  • So I want to show that that's a positive definite matrix.

  • Can -- so I --

  • I'm going to speak a little more about positive definite

  • matrixes, just recapping --

  • so let me ask a question.

  • It may be on the homework.

  • Suppose a matrix A is positive definite.

  • I mean by that it's all --

  • I'm assuming it's symmetric.

  • That's always built into the definition.

  • So we have a symmetric positive definite matrix.

  • What about its inverse?

  • Is the inverse of a symmetric positive definite matrix also

  • symmetric positive definite?

  • So you quickly think, okay, what do I

  • know about the pivots of the inverse matrix?

  • Not much.

  • What do I know about the eigenvalues

  • of the inverse matrix?

  • Everything, right?

  • The eigenvalues of the inverse are

  • one over the eigenvalues of the matrix.

  • So if my matrix starts out positive definite,

  • then right away I know that its inverse is positive definite,

  • because those positive eigenvalues --

  • then one over the eigenvalue is also positive.
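
  • As a quick check of that inverse argument (a sketch in NumPy, continuing with the same example matrix):

```python
import numpy as np

# Symmetric positive definite example, eigenvalues 1 and 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
A_inv = np.linalg.inv(A)

print(np.linalg.eigvalsh(A))       # [1. 3.]
print(np.linalg.eigvalsh(A_inv))   # [0.333... 1.] -- the reciprocals, still positive
```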

  • What if I know that A -- a matrix A and a matrix B are

  • both positive definite?

  • But let me ask you this.

  • Suppose A and B are positive definite, what about --

  • what about A plus B?

  • In some way, you hope that that would be true.

  • It's -- positive definite for a matrix is kind of like positive

  • for a real number.

  • But we don't know the eigenvalues of A plus B.

  • We don't know the pivots of A plus B.

  • So we just, like, have to go down this list of, all right,

  • which approach to positive definite

  • can we get a handle on?

  • And this is a good one.

  • This is a good one.

  • Can we -- how would we decide that --

  • if A was like this and if B was like this,

  • then we would look at x transpose A plus B x.

  • I'm sure this is in the homework.

  • Now -- so we have x transpose A x bigger than zero,

  • x transpose B x positive for all -- for all x,

  • so now I ask you about this

  • guy.

  • And of course, you just add that and that

  • and we get what we want.

  • If A and B are positive definite, so is A plus B.

  • So that's what I've shown.

  • So is A plus B.

  • Just -- be sort of ready for all the approaches through

  • eigenvalues and through this expression.
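
  • A small numerical check of that addition argument (a sketch; the helper below just manufactures symmetric positive definite matrices for the demo -- it is not something from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_spd(n):
    # One convenient way to build a symmetric positive definite matrix:
    # M^T M plus a small multiple of the identity (demo helper only).
    M = rng.standard_normal((n, n))
    return M.T @ M + 0.1 * np.eye(n)

A = random_spd(4)
B = random_spd(4)

# A and B have positive eigenvalues, and so does A + B -- even though
# the eigenvalues of A + B are not simply the sums of the individual eigenvalues.
print(np.linalg.eigvalsh(A).min() > 0)        # True
print(np.linalg.eigvalsh(B).min() > 0)        # True
print(np.linalg.eigvalsh(A + B).min() > 0)    # True
```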

  • And now, finally, one more thought about positive definite

  • is this combination that came up in least squares.

  • Can I do that?

  • So now -- now suppose A is rectangular, m by n.

  • I -- so I'm sorry that I've used the same letter A

  • for the positive definite matrixes in the eigenvalue

  • chapter that I used way back in earlier chapters when

  • the matrix was rectangular.

  • Now, that matrix -- a rectangular matrix,

  • no way is it positive definite.

  • It's not symmetric.

  • It's not even square in general.

  • But you remember that the key for these rectangular ones

  • was A transpose A.

  • That's square.

  • That's symmetric.

  • Those are things we knew --

  • we knew back when we met this thing

  • in the least square stuff, in the projection stuff.

  • But now we know something more --

  • we can ask a more important question, a deeper question --

  • is it positive definite?

  • And we sort of hope so.

  • Like, we -- we might --

  • in analogy with numbers, this is like --

  • sort of like the square of a number, and that's positive.

  • So now I want to ask the matrix question.

  • Is A transpose A positive definite?

  • Okay, now it's -- so again, it's a rectangular A that

  • I'm starting with, but it's the combination A transpose A

  • that's the square, symmetric and hopefully positive definite

  • matrix.

  • So how -- how do I see that it is positive definite,

  • or at least positive semi-definite?

  • You'll see that.

  • Well, I don't know the eigenvalues of this product.

  • I don't want to work with the pivots.

  • The right thing -- the right quantity to look at is this,

  • x transpose A transpose A x --

  • x transpose times my matrix times x.

  • I'd like to see that this thing --

  • that that expression is always positive.

  • I'm not doing it with numbers, I'm doing it with symbols.

  • Do you see -- how do I see that that expression comes out

  • positive?

  • I'm taking a rectangular matrix A and an A transpose --

  • that gives me something square symmetric,

  • but now I want to see that if I multiply --

  • that if I do this --

  • I form this quadratic expression that I

  • get this positive thing that goes upwards when I graph it.

  • How do I see that that's positive,

  • or at least that it isn't negative, anyway?

  • We'll have to, like, spend a minute on the question

  • could it be zero, but it can't be negative.

  • Why can this never be negative?

  • The argument is --

  • like the one key idea in so many steps in linear algebra --

  • put those parentheses in a good way.

  • Put the parentheses around Ax and what's the first part?

  • What's this x transpose A transpose?

  • That is Ax transpose.

  • So what do we have?

  • We have the length squared of Ax.

  • We have -- that's the row vector Ax transpose times the

  • column vector Ax -- its length squared, certainly greater than

  • or possibly equal to zero.

  • So we have to deal with this little possibility.

  • Could it be equal?

  • Well, when could the length squared be zero?

  • Only if the vector is zero, right?

  • That's the only vector that has length squared zero.

  • So we have -- we would like to --

  • I would like to get that possibility out of there.

  • So I want to have Ax never -- never be zero,

  • except of course for the zero vector.

  • How do I assure that Ax is never zero?

  • The -- in other words, how do I show that there's no null space

  • of A?

  • The rank should be --

  • so now remember -- what's the rank when there's no null

  • space?

  • By no null space, you know what I mean.

  • Only the zero vector in the null space.

  • So if I have a -- if I have an 11 by 5 matrix --

  • so it's got 11 rows, 5 columns, when is there no null space?

  • So the columns should be independent -- what's the rank?

  • Rank n -- 5 in this example -- rank n.

  • Independent columns -- so if I assume that,

  • then I conclude yes, positive definite.

  • And this was the assumption -- then A transpose A is

  • invertible --

  • the least squares equations all work fine.

  • And more than that -- the matrix is even positive definite.
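
  • Here is that whole A transpose A argument as a quick numerical sketch (the 11 by 5 shape matches the example above; the random matrix and the dependent-column variant are just made up for the demo):

```python
import numpy as np

rng = np.random.default_rng(1)

# Rectangular A, 11 rows and 5 columns, like the example above.
A = rng.standard_normal((11, 5))     # random columns: independent, so rank 5

AtA = A.T @ A
print(np.allclose(AtA, AtA.T))             # square and symmetric: True
print(np.linalg.eigvalsh(AtA).min() > 0)   # positive definite (full column rank): True

# The key identity: x^T (A^T A) x = ||Ax||^2, never negative.
x = rng.standard_normal(5)
print(np.isclose(x @ AtA @ x, np.linalg.norm(A @ x) ** 2))   # True

# With dependent columns (rank 4), A^T A is only positive semi-definite:
A_dep = np.column_stack([A[:, :4], A[:, 0] + A[:, 1]])   # fifth column = col 1 + col 2
print(np.linalg.eigvalsh(A_dep.T @ A_dep).min())          # ~0, up to round-off
```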

  • And I just want to say one comment about numerical things,

  • with a positive definite matrix, you never

  • have to do row exchanges.

  • You never run into unsuitably small numbers or zeroes

  • in the pivot position.

  • They're the right -- they're the great matrixes to compute with,

  • and they're the great matrixes to study.

  • So that's -- I wanted to grab these first ten minutes

  • away from similar matrixes and say

  • this much more about positive definite.

  • I'm really at this point, now, coming close

  • to the end of the heart of linear algebra.

  • The positive definiteness brought everything together.

  • Similar matrixes, which is coming for the rest of this hour,

  • is a key topic, and please come on Monday.

  • Monday is about what's called the SVD, singular values.

  • It's the -- has become a central fact in --

  • a central part of linear algebra.

  • I mean, you can come after Monday also, but --

  • Monday is -- that singular value thing has made it

  • into this course.

  • Ten years ago, five years ago it wasn't in the course,

  • now it has to be.

  • Okay.

  • So can I begin today's lecture proper with this idea

  • of similar matrixes?

  • This is what similar matrixes mean.

  • So here -- let's start again.

  • I'll write it again.

  • So A and B are similar.

  • A and B are -- now I'm -- these matrixes --

  • I'm no longer talking about symmetric matrixes, in --

  • at least no longer expecting symmetric matrixes.

  • I'm talking about two square matrixes n by n.

  • A and B, they're n by n matrixes.

  • And I'm introducing this word similar.

  • So I'm going to say what does it mean?

  • It means that they're connected in the way --

  • well, in the way I've written here, so let me rewrite it.

  • That means that for some matrix M, which has to be invertible,

  • because you'll see that --

  • this one matrix is --

  • take the other matrix, multiply on the right

  • by M and on the left by M inverse.

  • So the question is, why that combination?

  • But part of the answer you know already.

  • You remember -- we've done this -- we've taken a matrix A --

  • so let's do an example of similar.

  • Suppose A -- the matrix A -- suppose it has a full set

  • of eigenvectors.

  • They go in this eigenvector matrix S.

  • Then what was the main point of the whole --

  • the main calculation of the whole chapter was -- is --

  • use that eigenvector matrix S and its inverse

  • comes over there to produce the nicest possible matrix lambda.

  • Nicest possible because it's diagonal.

  • So in our new language, this is saying A is similar to lambda.

  • A is similar to lambda, because there is a matrix,

  • and this particular --

  • there is an M and this particular M

  • is this important guy, this eigenvector matrix.

  • But if I take a different matrix M and I look at M inverse A M,

  • the result won't come out diagonal,

  • but it will come out a matrix B that's similar to A.

  • Do you see that I'm -- what I'm doing is, like --

  • I'm putting these matrixes into families.

  • All the matrixes in one -- in the family are similar to each

  • other.

  • They're all -- each one in this family is connected to each

  • other one by some matrix M and the --

  • like the outstanding member of the family is the diagonal guy.

  • I mean, that's the simplest, neatest matrix

  • in this family of all the matrixes that are similar to A,

  • the best one is lambda.

  • But there are lots of others, because I can take different --

  • instead of S, I can take any old matrix M,

  • any old invertible matrix and -- and do it.

  • I'd better do an example.

  • Okay.

  • Suppose I take A as the matrix two one one two.

  • Okay.

  • Do you know the eigenvalue matrix for that?

  • The eigenvalues of that matrix are --

  • well, three and one.

  • So that -- and the eigenvectors would be easy to find.

  • So this matrix is similar to this one.
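
  • Here is that example as a quick computation (a sketch in NumPy -- the particular M below is just an arbitrary invertible matrix picked for the demo):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Diagonalize: the eigenvector matrix S gives S^{-1} A S = Lambda.
lam, S = np.linalg.eig(A)
print(lam)                              # 3 and 1 (order may vary)
print(np.linalg.inv(S) @ A @ S)         # diag(3, 1), up to round-off

# Any invertible M gives B = M^{-1} A M, similar to A --
# not diagonal in general, but with the same eigenvalues 3 and 1.
M = np.array([[1.0, 4.0],
              [0.0, 1.0]])              # an arbitrary invertible M for the demo
B = np.linalg.inv(M) @ A @ M
print(B)                                # not diagonal
print(np.linalg.eigvals(B))             # still 3 and 1
```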

  • But my point is --

  • but also, I can also take my matrix, two one one two,