Linear Algebra, Lecture 17


Linear Algebra Teaching Plan (Tongji Edition)

Chapter 1: Basic Concepts of Linear Algebra. 1.1 Vector Spaces. Teaching objectives: 1. Understand the concept of a vector space and its properties; 2. Master linear combinations and linear relations in a vector space; 3. Know the notions of basis and dimension.

Teaching content: 1. The concept of a vector space; 2. Properties of vector spaces; 3. Linear combinations and linear relations; 4. The concepts and computation of basis and dimension.

Teaching methods: 1. Introduce the concept of a vector space through concrete examples, guiding students to understand its basic properties; 2. Use exercises to let students master the computation of linear combinations and linear relations; 3. Use case studies to let students understand the concepts and computation of basis and dimension.

Teaching resources: 1. Textbook Linear Algebra (Tongji edition); 2. Teaching slides (PPT); 3. Exercises with answers.

Teaching procedure: 1. Introduce the concept of a vector space and explain its basic properties; 2. Explain how to compute linear combinations and linear relations, with examples; 3. Introduce the concepts of basis and dimension and explain their computation, with examples; 4. Assign exercises so that students consolidate what they have learned.

Assessment: 1. In-class questioning, to check students' understanding of the vector space concept; 2. Exercises, to check mastery of computing linear combinations and linear relations; 3. Case analysis, to check mastery of the concepts and computation of basis and dimension.

1.2 Linear Transformations. Teaching objectives: 1. Understand the concept of a linear transformation and its properties; 2. Master the matrix representation of a linear transformation; 3. Know the image and kernel of a linear transformation.

Teaching content: 1. The concept of a linear transformation; 2. Properties of linear transformations; 3. The matrix representation of a linear transformation; 4. The concepts and computation of the image and kernel of a linear transformation.

Teaching methods: 1. Introduce the concept of a linear transformation through concrete examples, guiding students to understand its basic properties; 2. Use exercises to let students master the matrix representation of a linear transformation; 3. Use case studies to let students understand the image and kernel and how to compute them.

Teaching resources: 1. Textbook Linear Algebra (Tongji edition); 2. Teaching slides (PPT); 3. Exercises with answers.

Teaching procedure: 1. Introduce the concept of a linear transformation and explain its basic properties; 2. Explain the matrix representation, with examples; 3. Introduce the image and kernel and explain how to compute them, with examples; 4. Assign exercises so that students consolidate what they have learned.

Linear Algebra Lecture Notes (Slides)

Linear transformations are also widely applied in engineering. For example, in image processing, images can be scaled, rotated, and otherwise manipulated by linear transformations; in the analysis of linear control systems, systems can be modeled and analyzed via linear transformations.
Properties of eigenvectors
Each eigenvector corresponds to exactly one eigenvalue, and eigenvectors corresponding to distinct eigenvalues are linearly independent.
Methods for computing eigenvalues and eigenvectors

Definition method: compute eigenvalues and eigenvectors directly from the definition, by solving the equation Av = λv; in practice, solve the characteristic equation det(A − λI) = 0 for λ and then the homogeneous system (A − λI)v = 0 for v.

Formula method: for certain special matrices, the eigenvalues and eigenvectors can be obtained directly from formulas.

Power method: compute matrix powers iteratively; the iteration converges to an eigenvalue and a corresponding eigenvector.
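A minimal NumPy sketch of the power method named above; the test matrix, seed, and tolerance are illustrative choices, not from the slides:

```python
import numpy as np

def power_method(A, num_iters=200, tol=1e-12):
    """Power iteration: repeatedly apply A, renormalize, and read off the
    dominant eigenvalue from the Rayleigh quotient."""
    x = np.random.default_rng(0).standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    lam = 0.0
    for _ in range(num_iters):
        y = A @ x
        x = y / np.linalg.norm(y)          # renormalize so iterates cannot overflow
        lam, lam_prev = x @ A @ x, lam     # Rayleigh quotient estimate
        if abs(lam - lam_prev) < tol:
            break
    return lam, x

A = np.array([[2.0, 1.0], [1.0, 2.0]])
lam, v = power_method(A)
print(lam)                            # ~3.0, the dominant eigenvalue
print(np.allclose(A @ v, lam * v))    # v satisfies Av = lambda v (definition check)
```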
How matrices represent linear transformations

Definition and properties of matrices: the matrix is a basic concept of linear algebra, and a matrix can represent a linear transformation. Matrices have important properties; for example, matrix addition, scalar multiplication, and multiplication are all closed operations.

Representing a linear transformation by a matrix: writing a linear transformation as a matrix makes its properties easier to study and its values easier to compute. Concretely, if a matrix A represents a linear transformation L, then for every vector x we have L(x) = Ax.
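A small illustration of L(x) = Ax, here a rotation of the plane (connecting to the scaling/rotation remark above); the angle and test vectors are arbitrary choices:

```python
import numpy as np

theta = np.pi / 2                     # rotate by 90 degrees
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

x = np.array([1.0, 0.0])
print(A @ x)                          # ~[0, 1]: L(x) = Ax rotates (1,0) to (0,1)

# Linearity L(ax + by) = aL(x) + bL(y) holds because matrix products distribute.
y = np.array([2.0, 3.0])
print(np.allclose(A @ (2*x + 5*y), 2*(A @ x) + 5*(A @ y)))   # True
```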
Applications of eigenvalues and eigenvectors

Numerical analysis: in solving differential equations, integral equations, and other numerical problems, properties of eigenvalues and eigenvectors can be exploited in the solution process.

Signal processing: properties of eigenvalues and eigenvectors can be used for filtering, denoising, and similar processing of signals.

Image processing: properties of eigenvalues and eigenvectors can be used for image compression, recognition, and similar processing.
Quadratic Forms and Matrix Similarity

Definition and properties of matrices

A mathematical tool: a matrix is a rectangular array of numbers, represented as a two-dimensional array, with a number of rows and a number of columns. Matrices admit addition, scalar multiplication, multiplication, and other operations, with corresponding properties and theorems. The matrix is an essential mathematical tool of linear algebra, used to represent linear transformations, systems of linear equations, and more.

MIT Linear Algebra, Lecture Notes 17

18.06 Linear Algebra, Spring 2005. Please use the following citation format: Gilbert Strang, 18.06 Linear Algebra, Spring 2005. (Massachusetts Institute of Technology: MIT OpenCourseWare). (accessed MM DD, YYYY). License: Creative Commons Attribution-Noncommercial-Share Alike. Note: Please use the actual date you accessed this material in your citation. For more information about citing these materials or our Terms of Use, visit: /terms

18.06 Linear Algebra, Spring 2005. Transcript -- Lecture 17

OK, here's the last lecture in the chapter on orthogonality. So we met orthogonal vectors, two vectors; we met orthogonal subspaces, like the row space and null space. Now today we meet an orthogonal basis, and an orthogonal matrix. So really, this chapter cleans up orthogonality. And really I should use the word orthonormal. So my vectors are q1, q2 up to qn -- I use the letter "q" here to remind me I'm talking about orthogonal things, not just any vectors, but orthogonal ones. So what does that mean? That means that every q is orthogonal to every other q. It's a natural idea, to have a basis that's headed off at ninety-degree angles; the inner products are all zero. Of course qi is certainly not orthogonal to itself. But there we'll make the best choice again: make it a unit vector. Then qi transpose qi is one, for a unit vector. The length squared is one. And that's where the word normal comes in. So: orthogonal for the first part; normalized, unit length, for this part.

OK. So the first part of the lecture is: how does having an orthonormal basis make things nice? It certainly does. It makes all the calculations better; a whole lot of numerical linear algebra is built around working with orthonormal vectors, because they never get out of hand, they never overflow or underflow. And I'll put them into a matrix Q, and then the second part of the lecture will be: suppose my basis, my columns of A, are not orthonormal. How do I make them so? And the two names associated with that simple idea are Gram and Schmidt.

So the first part is: we've got a basis like this. Let's put those into the columns of a matrix. So a matrix Q that has these orthonormal vectors: q1 will be the first column, qn will be the n-th column. And I want to write this property, qi transpose qj being zero, in matrix form. And just the right thing is to look at Q transpose Q. This chapter has been looking at A transpose A, so it's natural to look at Q transpose Q. And the beauty is it comes out perfectly. Because Q transpose has these vectors in its rows: the first row is q1 transpose, the n-th row is qn transpose. That's Q transpose. And now I want to multiply by Q. That has q1 along to qn in the columns. That's Q. And what do I get? This is the first, simplest, most basic fact about orthonormal columns in a matrix: what happens if I compute Q transpose Q? Do you see it? If I take the first row times the first column, what do I get? A one. If I take the first row times the second column, what do I get? Zero. That's the orthogonality. The first row times the last column is zero. And so I'm getting ones on the diagonal and zeroes everywhere else. I'm getting the identity matrix. It's just the right calculation to do. If you have orthonormal columns -- and the matrix doesn't have to be square here.

We might have just two columns, and they might have four, lots of components. But they're orthonormal, and when we do Q transpose times Q -- that Q transpose times Q, or A transpose A, just asks for all those dot products, rows times columns -- in this orthonormal case we get the best possible answer, the identity.

OK, so now we have a new bunch of important matrices. What have we seen previously? In the distant past we had triangular matrices, diagonal matrices, permutation matrices -- that was early chapters -- then we had row echelon forms, then in this chapter we've already seen projection matrices, and now we're seeing this new class of matrices with orthonormal columns. That's a very long expression. I'm sorry that I can't just call them orthogonal matrices. Maybe I should be able to call it an orthonormal matrix -- that would be an absolutely perfect name for Q: call it an orthonormal matrix because its columns are orthonormal. But the convention is that we only use that name, orthogonal matrix -- we don't even say orthonormal, for some unknown reason -- when the matrix is square. So in the case when this is a square matrix, that's the case we call it an orthogonal matrix. And what's special about the case when it's square? When it's a square matrix, we've got its inverse. So if Q is square, then Q transpose Q equals I tells us -- let me write that underneath -- tells us that Q transpose is Q inverse. There we have the easy-to-remember property for a square matrix with orthonormal columns.

I need to write some examples down. Let's do some examples. Any permutation matrix -- let me take just some random permutation matrix. Permutation: Q equals, let's say -- make it three by three -- zero, zero, one; one, zero, zero; zero, one, zero. OK. That certainly has unit vectors in its columns. Those vectors are certainly perpendicular to each other. And so that's it: that makes it a Q. And if I took its transpose, if I multiplied by Q transpose -- let me stick in Q transpose here, just to do that multiplication once more: transposing makes that row into a column, that one into a column, that one into a column. And the transpose is also another Q, another orthonormal matrix. And when I multiply that product I get I. OK, so there's an example -- and actually there's a second example in it. But those are real easy examples, right? To get orthogonal columns by just putting ones in different places is like too easy.

So let me keep going with examples. Here's another simple one. Cos theta, sine theta: there's a unit vector. And now the other way I want sine theta, cos theta. But I want the inner product to be zero, and if I put a minus there, it'll do it. So that's a unit vector, that's a unit vector, and if I take the dot product, I get minus plus zero. OK.

For example, Q equals, say, one, one, one, minus one. Is that an orthogonal matrix? I've got orthogonal columns there, but it's not quite an orthogonal matrix. How shall I fix it to be an orthogonal matrix?

Well, what's the length of those column vectors? The dot product of a column with itself is, right now, two -- the length squared. The length squared is one plus one, which is two; the length is the square root of two, so I'd better divide by the square root of two. OK. So now I have got an orthogonal matrix; in fact, it's the previous one when theta is pi over four -- well, almost; I guess the minus sine is down there, so maybe minus pi over four or something.

OK. Let me do one final example, just to show that you can get bigger ones. Q equals -- let me take that matrix up in the corner, and I'll sort of repeat that pattern, repeat it again, and then minus it down here. That's one of the world's favorite orthogonal matrices. I hope I got it right. Can you check, if I take the inner product of one column with another one? Let's see: if I take the inner product of that column with that one, I have two minuses and two pluses. That's good. When I take the inner product of that with that, I have a plus and a minus, a minus and a plus. Good. I think it all works out. And what do I have to divide by now, to make those into unit vectors? Right now the vector one, one, one, one has length two -- square root of four. So I have to divide by two to make it a unit vector. So there's another one. That's my entire array of simple examples. This construction is named after a guy called Hadamard, and we know how to do it for two, four, sixteen, sixty-four and so on, but nobody knows exactly which sizes allow orthogonal matrices of ones and minus ones. So a Hadamard matrix is an orthogonal matrix that's got ones and minus ones. Some sizes we know; for some other sizes there couldn't be one -- there couldn't be a five by five, I think. But there are sizes for which nobody yet knows whether there could or couldn't be a matrix like that.

OK. You see those orthogonal matrices. Now let me ask: why is it good to have orthogonal matrices? What calculation is made easy if I have an orthogonal matrix? And let me remember that the matrix could be rectangular. I'd better put a rectangular example down -- these were all square examples, so let me put down a rectangular one, just to be sure we realize this is possible. Let's see: I'll put in, like, a one, two, two and a minus two, minus one, two. That's a matrix -- oh, its columns aren't normalized yet. I always have to remember to do that; I always do it last because it's easy to do. What's the length of those columns? If I want them to be length one, I should divide by their length: I look at one squared plus two squared plus two squared, that's one and four and four, is nine, so I take the square root and I need to divide by three. OK. So there is -- well, with just the first column, I've got one orthonormal vector, I mean just one unit vector. Now put the other guy in. Now I have a basis for the column space, a two-dimensional space -- an orthonormal basis, right? These two columns are orthonormal; they're an orthonormal basis for the two-dimensional space that they span. Orthonormal vectors, by the way, have got to be independent: it's easy to show that for orthonormal vectors, since they're headed off all at ninety degrees, there's no combination that gives zero.

Now if I wanted to create a third one, I could either just put in some third vector that was independent and go to this Gram-Schmidt calculation that I'm going to explain, or I could be inspired and say: look, with that pattern, why not put a one in there, and a two in there, and a two in there, and try to fix up the signs so that they work. Hmm. I don't know if I've done this too brilliantly. Let's see, what signs -- that's minus, maybe I'd make a minus sign there, how would that be? Yeah, maybe that works. I think those three columns are orthonormal. And the beauty of this -- this is the last example I'll probably find where there's no square root. The punishing thing in Gram-Schmidt -- maybe we'd better know it in advance -- is that because I want these vectors to be unit vectors, I'm always running into square roots. I'm always dividing by lengths, and those lengths are square roots. So you'll see, as soon as I do a Gram-Schmidt example, square roots are going to show up. But here are some examples where we did it without any square root. OK.

So, great. The next question is: what's the good of having a Q? What formulas become easier? Suppose Q has orthonormal columns. I'm using the letter Q to mean this -- I'll write it this one more time, but when I write a Q, I always mean that it has orthonormal columns. Suppose I want to project onto its column space. What's the projection matrix? OK, that gives me a chance to review the projection section, including that big formula, which used to be those four A's in a row, but now it's got Q's, because I'm projecting onto the column space of Q. Do you remember what it was? It's Q, times Q transpose Q inverse, times Q transpose. That's my four Q's in a row. But what's good here? What makes this formula nice when I'm projecting onto a column space with an orthonormal basis? What makes it nice is that Q transpose Q is the identity. I don't have to do any inversion. I just get Q Q transpose. So Q Q transpose is a projection matrix.

Oh, I can't resist just checking the properties. What are the properties of a projection matrix? There are two properties to know for any projection matrix, and I'm saying that this is the right projection matrix when we've got this orthonormal basis in the columns. OK. So there's the projection matrix. Suppose the matrix is square -- first just tell me this extreme case. If my matrix is square and it's got these orthonormal columns, then what's the column space? If I have a square matrix with independent columns, even orthonormal columns, then the column space is the whole space, right? And what's the projection matrix onto the whole space? The identity matrix. If I'm projecting in the whole space, every vector b is right where it's supposed to be, and I don't have to move it by projection. So -- I'll put it in parentheses -- this is I if Q is square. Well, we said that already: if Q is square, that's the case where Q transpose is Q inverse; we can put it on the right, we can put it on the left, we always get the identity matrix. But if it's not a square matrix, then we don't get the identity matrix; we have Q Q transpose. And just again, what are those two properties of a projection matrix? First of all, it's symmetric. OK, no problem: that's certainly a symmetric matrix. So what's that second property of a projection?
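To make Q transpose Q = I and P = Q Q transpose concrete, here is a small numerical check in NumPy, using the lecture's rectangular example with columns (1,2,2)/3 and (-2,-1,2)/3 (the sign pattern of the second column is one consistent choice, since the spoken signs are ambiguous):

```python
import numpy as np

Q = np.array([[1.0, -2.0],
              [2.0, -1.0],
              [2.0,  2.0]]) / 3.0

print(np.allclose(Q.T @ Q, np.eye(2)))   # True: Q^T Q = I

P = Q @ Q.T                              # projection onto the column space of Q
print(np.allclose(P, P.T))               # symmetric
print(np.allclose(P @ P, P))             # P^2 = P: the inner Q^T Q cancels to I
```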
That if you project and project again, you don't move the second time. So the other property of a projection matrix should be that Q Q transpose applied twice is the same as Q Q transpose once. That's projection matrices. And that property had better fall out right away from the fact we know about orthonormal matrices: Q transpose Q is I. OK, you see it. In the middle here is sitting Q transpose Q -- sorry, that's what I meant to say -- Q transpose Q is I. So that's sitting right in the middle, it cancels out to give the identity, we're left with one Q Q transpose, and we're all set.

OK. So this is the projection matrix. All the messy equations of this chapter become trivial when we have this orthonormal basis. What do I mean by all the equations? Well, the most important equation was the normal equation. Do you remember old A transpose A x-hat equals A transpose b? But now A is Q. Now I'm thinking I have Q transpose Q x-hat equals Q transpose b. And what's good about that? What's good is that the matrix on the left side is the identity. Normally it isn't; normally it's that matrix of inner products, and you've got to compute all those dopey inner products and solve the system. Here the inner products are all one or zero; the left side is the identity matrix -- it's gone. And there's the answer. There's no inversion involved. Each component of x-hat is a q times b. What that equation is saying is that the i-th component is the i-th basis vector times b. That's probably the most important formula in some major parts of mathematics: if we have an orthonormal basis, then the component along the i-th basis vector -- the projection on the i-th basis vector -- is just qi transpose b. That number x-hat-i that we look for is just a dot product.

OK. So I'm ready now for, sort of, the second half of the lecture, where we don't start with orthonormal vectors. We just start with independent vectors, and we want to make them orthonormal. Can I do that now? Here comes Gram-Schmidt. This is a calculation. I can't quite say it's like elimination, because it's different: our goal isn't triangular anymore. With elimination our goal was to make the matrix triangular. Now our goal is to make the matrix orthogonal -- make those columns orthonormal.

So let me start with two columns. I start with vectors a and b. And here, let me draw them. Here's a. Here's b, for example. a isn't specially horizontal, wasn't meant to be; a is just one vector, b is another. They might be in twelve-dimensional space, or they might be in two-dimensional space. They're independent, anyway -- I'd better be sure I say that: I start with independent vectors. And out of them I want to produce q1 and q2, orthonormal vectors. And Gram and Schmidt tell me how. Well, actually you could tell me how; there's only one idea here. If Gram had the idea, I don't know what Schmidt did. But OK, you'll see it -- we don't need either of them, actually. OK, so what am I going to do? I'll take this first guy. OK, well, he's fine.

That direction is fine -- I'll say, OK, I'll settle for that direction. So my goal is to get orthogonal vectors, and I'll call those capital A and B. That's the key step: to get from any two vectors to two orthogonal vectors. And then at the end, no problem, I'll get orthonormal vectors. What will my q's, q1 and q2, be? Once I've got A and B orthogonal -- well, look, it's no big deal -- maybe that's what Schmidt did: brilliant Schmidt thought, OK, divide by the length. All right, that's Schmidt's contribution.

OK. But Gram had a little more thinking to do, right? We haven't done Gram's part yet -- except: I'm happy with A; A can be little a. That first direction is fine; no complaint about that. The trouble is, the second direction is not fine, because it's not orthogonal to the first. I'm looking for a vector that starts with b but is made orthogonal to A. What's the vector? How do I do that? How do I produce from this vector a piece that's orthogonal to this one? And remember, these vectors might be in two dimensions or they might be in twelve dimensions. I'm just looking for the idea. So what's the idea? Where did we have a vector showing up that was orthogonal to this guy? Well, that was the first basic calculation of the whole chapter. We did a projection, and the projection gave us this part, the part in the A direction. Now, the part we want is the other part, the e part. This part. That guy is that guy: this is our vector B. That gives us that ninety-degree angle. So B is really what we previously called e, the error vector.

And what is it? What is B here? A is a, no problem. B is -- OK, what's this error piece? Do you remember? I start with the original b, and I take away what? Its projection, this p. The vector B, this error vector, is the original vector with the projection removed. So instead of wanting the projection, now that's the part I want to throw away; I want the part that's perpendicular. And there will be a perpendicular part, and it won't be zero, because these vectors were independent. If b were along the direction of a -- if the original b and a were in the same direction -- then I'd only have one direction. But here they're in two independent directions, and all I'm doing is getting that guy. So what's its formula? I want to subtract the projection. Do you remember the projection? It's some multiple of A, and what's that multiple? It's that thing we called x-hat in the very, very first lecture of this chapter: there's an A transpose A in the bottom and an A transpose b on top, isn't that it? I think that's Gram's formula -- or Gram-Schmidt. No, that's Gram. Schmidt has got to divide the whole thing by the length, so his formula makes a mess which I'm not willing to write down.

So let's just see: what am I saying here? I'm saying that this vector B is perpendicular to A, that these are orthogonal: A is perpendicular to B. Can you check that? Our picture is telling us, yes, we did it right -- but how would I check that this vector is perpendicular to A? I would multiply by A transpose, and I'd better get zero, right? I should check that: A transpose B should come out zero. So this is A transpose times -- now, what did we say B was?

We start with the original b, and we take away this projection, and that should come out zero. Well, here we get an A transpose b, and here is another A transpose b, and there's an A transpose A over A transpose A -- a one. Those cancel, and we do get zero. Right.

Now I guess I could do numbers in there, but I have to take a third vector first, to be sure we've got the system down. So now I have to say: if I have independent vectors a, b and c, I'm looking for orthogonal vectors A, B and capital C, and then of course the third q will just be C over its length, the unit vector. So this is now the problem. I've got B here. I got A very easily. And now, if you see the idea, we can figure out a formula for C. This is like a typical homework or quiz problem: I give you two vectors, you do this; I give you three vectors, and you have to make them orthonormal. So you do this again: the first vector's fine, the second vector is made perpendicular to the first, and now I need a third vector that's perpendicular to the first one and the second one, right? The end of the lecture is to find this guy, this vector C. At this point we know A and B. But the little c that we're given is off in some direction -- it's got to come out of the blackboard to be independent. So off comes a c somewhere; I don't know where I'm going to put the darn thing -- maybe I'll put it off, oh, I don't know, like that somehow: little c. And I already know the two perpendicular directions, that one and that one.

So now what's the idea? Give me the Gram-Schmidt formula for C. This C here equals what? What am I going to do? I'll start with the given one, as before: I start with the vector I'm given. And what do I do with it? I want to remove from it -- I want to subtract off, so I'll put a minus sign in -- its components in the capital A and capital B directions. I just want to get those out of there. Well, I know how to do that; I did it with B. So let me take away -- what if I do this? What have I done? I've got little c, and what have I subtracted from it? Its component -- its projection, if you like -- in the A direction. And now I've got to subtract off its component in the B direction: B transpose c over B transpose B, that multiple of B. And that gives me the vector capital C. If there's any justice, this C is perpendicular to A and perpendicular to B. And the only thing it hasn't got is unit length, so we divide by its length to get that too.

OK. Let me do an example. I'll make my life easy; I'll just take two vectors. So let me do a numerical example: I'll give you two vectors, you give me back the Gram-Schmidt orthonormal basis, and we'll see how to express that in matrix form. OK. So let me give you the two vectors. I'll take the vector a equals, let's say, one, one, one -- why not? And b equals, let's say, one, zero, two. OK? I didn't want to cheat and make them orthogonal in the first place, because then Gram-Schmidt wouldn't be needed. OK, so those are not orthogonal. So what is capital A? Well, that's the same as little a. That was fine. What's B? B is the original b, and then I subtract off some multiple of A. And what's the multiple? What goes in here?

Here's the little b, this is the big A -- also the little a -- and I want to multiply it by the right ratio, which has A transpose A on the bottom. Here's my ratio; I'm just doing this. So it's A transpose b -- what is A transpose b? It looks like three. And what is A transpose A? Three. I'm sorry -- I didn't know that was going to happen. OK, but it happened; why should we knock it? OK. So do you see it all right? That's A transpose b, there's A transpose A, that's the fraction, so I take this away, and I get: one take away one is a zero, zero minus this one is a minus one, and two minus the one is a one. OK. And what's this vector that we finally found? This is B. And how do I know it's right? How do I know I've got the vector I want? I check that A and B are perpendicular -- that A is perpendicular to B. Just look at it: the dot product of that with that is zero. OK.

So now what are my q1 and q2? Why don't I put them in a matrix? Of course -- since I'm always putting these into matrices. So for the matrix Q, I'll put the q1 and the q2 in a matrix. And what are they? Now, when I'm writing q's I'm supposed to make things normalized, supposed to make things unit vectors. So I'm going to take that A, but divide it by square root of three. And I'm going to take this B, but divide it by square root of two to make it a unit vector. And there is my matrix: a matrix with orthonormal columns coming from Gram-Schmidt, and it came from the original one, one, one and one, zero, two, right? Those were my original guys -- the two I started with. These are the two that I'm happy to end with, because those are orthonormal. So that's what Gram-Schmidt did.

Well, tell me about the column spaces of these matrices. How is the column space of Q related to the column space of A? I'm always asking you things like this, and that makes you think: OK, the column space is all combinations of the columns -- it's a plane, right? I've got two vectors in three-dimensional space; their column space is a plane. The column space of this matrix is a plane. What's the relation between the two column spaces? They're one and the same, right? It's the same column space. All I'm taking is -- this B thing that I computed is a combination of b and A, and A was little a, so I'm always working in the same space; I'm just getting ninety-degree angles in there. My original column space had a perfectly good basis, but it wasn't as good as this basis, because it wasn't orthonormal. Now this one is orthonormal, and projections -- all the calculations I would ever want to do -- are a cinch with this orthonormal basis.

One final point. One final point in this chapter. And it's just like elimination. We learned how to do elimination -- we know all the steps, we can do it. But then I came back and looked at it in matrix language, and what was elimination in matrix language? I'll just put it up there: A was LU. That was elimination as matrices. Now I want to do the same for Gram-Schmidt. Everybody who works in linear algebra isn't going to write out "the columns are orthogonal, or orthonormal," and isn't going to write out these formulas. They're going to write out the connection between the matrix A and the matrix Q.

The two matrices have the same column space, but some matrix takes one to the other, and I'm going to call it R. So A equals QR is the magic formula here. It's the matrix expression of Gram-Schmidt. Let me just capture that: my final step is A equals QR. Maybe I can squeeze it in here. So A has columns, let's say a1 and a2 -- let me suppose n is two, just two vectors. That's some combination of q1 and q2, times some matrix R. They have the same column space. This matrix R just includes whatever those numbers are, like one over square root of three and one over square root of two -- probably that's what it's got: one over square root of three, one over square root of two, something there. But actually it's got a zero there. So the main point about A equals QR is that R turns out to be upper triangular -- that zero is what makes it upper triangular. We can see why. Let me put in general formulas for the entries: this one here, I think, should be the inner product of a1 with q1.
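A short sketch of classical Gram-Schmidt on the lecture's example a = (1,1,1), b = (1,0,2), together with the A = QR reading; this illustrates the procedure and is not MIT's course code:

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: subtract projections onto earlier q's, normalize."""
    qs = []
    for v in vectors:
        w = v.astype(float)
        for q in qs:
            w = w - (q @ v) * q        # remove the component in the q direction
        qs.append(w / np.linalg.norm(w))
    return np.column_stack(qs)

a = np.array([1, 1, 1])
b = np.array([1, 0, 2])
Q = gram_schmidt([a, b])
print(Q)                                # columns (1,1,1)/sqrt(3), (0,-1,1)/sqrt(2)
print(np.allclose(Q.T @ Q, np.eye(2)))  # orthonormal columns

A = np.column_stack([a, b]).astype(float)
R = Q.T @ A                             # upper triangular, and A = QR
print(np.allclose(Q @ R, A), np.round(R, 6))
```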

Concise Linear Algebra Lecture Notes (Guo Zhijun, August 2015)

Equations developed in two directions: ① increasing the number of unknowns (systems in two or three unknowns); ② increasing the degree of the unknowns (the quadratic equation in one unknown). Viète once described "arithmetic" and "algebra" thus: "arithmetic" studies methods of computing with concrete numbers, while "algebra" studies methods of operating on classes or forms of things; the method of letting letters stand for numbers was a major turning point in the history of algebra. The deepening stage of algebra is higher algebra. In the second half of the seventeenth century, starting from the solution of systems of linear equations, the work of Leibniz, Cayley, and others established linear algebra, whose main content is determinants, matrices, and linear systems, marking the establishment of the theoretical system of higher algebra. With the rapid development and wide application of computers, many practical problems can be solved by discretized numerical computation; linear algebra, the tool for such discrete problems, has become an essential mathematical foundation for research and design. The abstract stage of algebra, modern (abstract) algebra, arose in the nineteenth century; it studies abstract axiomatized algebraic systems, including group theory, ring theory, linear algebra, and many other branches. Its formation is generally dated to 1926; from then on, the object of algebra moved from the computation and distribution of roots of algebraic equations to the laws of algebraic operations on numbers, symbols, and more general elements, and to various algebraic structures.

For an n×n array (a_ij)_{n×n} with rows a11 a12 ⋯ a1n; a21 a22 ⋯ a2n; ⋯; an1 an2 ⋯ ann, the determinant is the signed sum over all permutations j1 j2 ⋯ jn of 1, 2, ⋯, n:

  det(a_ij)_{n×n} = Σ_{j1 j2 ⋯ jn} (−1)^{N(j1 j2 ⋯ jn)} a_{1 j1} a_{2 j2} ⋯ a_{n jn},

where N(j1 j2 ⋯ jn) denotes the inversion number of the permutation j1 j2 ⋯ jn. Equivalently, ordering the factors by columns,

  det(a_ij)_{n×n} = Σ_{i1 i2 ⋯ in} (−1)^{N(i1 i2 ⋯ in)} a_{i1,1} a_{i2,2} ⋯ a_{in,n},

and with both row and column indices permuted each term can be written (−1)^{N(i1 i2 ⋯ in) + N(j1 j2 ⋯ jn)} a_{i1 j1} a_{i2 j2} ⋯ a_{in jn}; here Σ_{j1 j2 ⋯ jn} indicates that the sum runs over all permutations of 1, 2, ⋯, n.
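The complete expansion above can be evaluated literally. A brute-force sketch over all n! permutations (sensible only for tiny n; the sample matrix is an arbitrary choice):

```python
from itertools import permutations
import numpy as np

def inversions(p):
    """N(p): number of pairs standing in decreasing order."""
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

def det_by_definition(A):
    """Determinant as the signed sum over all n! column permutations."""
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        term = (-1) ** inversions(p)
        for i in range(n):
            term *= A[i][p[i]]     # one element from each row and each column
        total += term
    return total

A = [[3, 2, 1], [1, 1, 1], [1, 0, 1]]
print(det_by_definition(A))                      # 2
print(round(np.linalg.det(np.array(A))))         # 2
```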

University of Science and Technology of China, Linear Algebra Course Notes 1

This chapter mainly introduces methods for solving the general linear system in n unknowns,

  a11 x1 + a12 x2 + ⋯ + a1n xn = b1
  a21 x1 + a22 x2 + ⋯ + a2n xn = b2
  ⋯⋯⋯⋯⋯⋯
  am1 x1 + am2 x2 + ⋯ + amn xn = bm        (1.1)

where a11, a12, ⋯, amn, b1, b2, ⋯, bm are known numbers and x1, x2, ⋯, xn are the variables to be solved for. In particular, when b1 = b2 = ⋯ = bm = 0, the system (1.1) is called a homogeneous linear system.

By scaling the first equation, the case a11 ≠ 0 can be reduced to the case a11 = 1.
The solution-preserving elimination in Example 1.1 (stated below) can be expressed as a chain of elementary transformations on the numbered equations, where a derived equation is recorded as, e.g., ⑤ = ③ − 3×①:

  ①②③④ →(swap rows)→ ②①③④ →(eliminate x1)→ ②①⑤⑥, with ⑤ = ③ − 3×① and ⑥ = ④ − 3×① →(eliminate x2)→ ②①⑤⑦
Chapter 1: Systems of Linear Equations

Example 1.1. Solve the linear system

  x1 − x3 = −1                 ①
  x1 − 2x2 = −4                ②
  3x1 − 2x2 + x4 = −7          ③
  3x1 + x2 + x3 + 3x4 = …      ④
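A minimal elimination sketch in NumPy. Since the right-hand side of equation ④ did not survive, the value 6 used below is a stand-in, not the textbook's number:

```python
import numpy as np

def row_reduce(M):
    """Gauss-Jordan elimination with partial pivoting on an augmented matrix [A|b]."""
    M = M.astype(float)
    rows, cols = M.shape
    r = 0
    for c in range(cols - 1):
        pivot = r + np.argmax(np.abs(M[r:, c]))   # choose the largest pivot
        if np.isclose(M[pivot, c], 0):
            continue                               # no pivot in this column
        M[[r, pivot]] = M[[pivot, r]]              # swap rows
        M[r] /= M[r, c]                            # scale pivot row to 1
        for i in range(rows):
            if i != r:
                M[i] -= M[i, c] * M[r]             # clear the rest of the column
        r += 1
        if r == rows:
            break
    return M

# Example 1.1 as reconstructed above; the last right-hand side (6) is a stand-in.
M = np.array([[1,  0, -1, 0, -1],
              [1, -2,  0, 0, -4],
              [3, -2,  0, 1, -7],
              [3,  1,  1, 3,  6]])
print(row_reduce(M))   # reduced row echelon form of [A|b]
```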

Graduate-Exam Linear Algebra Notes with Answers

Example 4. Let A, B, I be square matrices of the same order. Which of the following statements are correct?
(1) (A + B)² = A² + 2AB + B². Incorrect: it requires AB = BA.
(2) (A + λI)³ = A³ + 3λA² + 3λ²A + λ³I (λ a scalar). Correct, since λI commutes with A.
(3) If A and B commute, then (A + B) and (A − B) also commute. Correct.
(4) (AB)² = A²B² if and only if AB = BA.

Example 11. Are the following matrices A, B invertible? If so, find their inverses, where

  A = [3 2 1; 1 1 1; 1 0 1],  B = diag(b1, b2, b3).

Solution. |A| = 2 ≠ 0, so A is invertible. Writing A = (a_ij)_{3×3}, the cofactors of its entries are computed one by one, from which A⁻¹ = (1/2)·[1 −2 1; 0 2 −2; −1 2 1].
Answer: Aⁿ = (−8)ⁿ⁻¹ A.

Example 9. Let A = [λ 1 0; 0 λ 1; 0 0 λ]; find Aⁿ.

Solution. A = λE + B, where B = [0 1 0; 0 0 1; 0 0 0]. Then B² = [0 0 1; 0 0 0; 0 0 0] and B³ = O. Since λE commutes with B, the binomial theorem applies and the expansion terminates:

  Aⁿ = λⁿE + nλⁿ⁻¹B + (n(n−1)/2)λⁿ⁻²B².
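A quick check of Example 9's binomial computation against NumPy's matrix power (λ = 2 and n = 7 are arbitrary test values):

```python
import numpy as np
from math import comb
from numpy.linalg import matrix_power

lam, n = 2.0, 7
E = np.eye(3)
B = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])          # B^2 has a single 1 in the corner, B^3 = O

# lam*E commutes with B, so the binomial series stops at B^2.
A = lam * E + B
An = lam**n * E + n * lam**(n-1) * B + comb(n, 2) * lam**(n-2) * (B @ B)
print(np.allclose(An, matrix_power(A, n)))   # True
```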
Example 4. Evaluate an arrow-shaped ("claw") n-th order determinant whose diagonal entries are a1, a2, ⋯, an with ai ≠ 0, i = 1, 2, ⋯, n, and whose border row contains entries of the form −1/ai; such determinants are reduced to triangular form by sweeping the border against the nonzero diagonal.

Linear Algebra Lecture Notes

Contents. Lecture 1: Basic concepts; linear systems; matrices and vectors; elementary transformations and echelon matrices; the matrix elimination method. Lecture 2: Determinants; the complete expansion; reduction by producing zeros and lowering the order; other properties; Cramer's rule. Lecture 3: Matrix multiplication; column and row vectors of a product; matrix factorization; matrix equations; inverse matrices; adjugate matrices. Lecture 4: Vector groups; linear representation; linear dependence; maximal independent subgroups and rank; rank of a matrix. Lecture 5: Properties of solutions; determining the solution situation; fundamental systems of solutions and general solutions. Lecture 6: Eigenvectors and eigenvalues; similarity and diagonalization: concepts, computation, applications; similarity diagonalization: criteria and realization. Appendix 1: Inner products; orthogonal matrices; Schmidt orthogonalization; diagonalization of real symmetric matrices. Lecture 7: Quadratic forms; a quadratic form and its matrix; invertible changes of variables; congruence of real symmetric matrices; standard and normal forms; indices of inertia; positive definite quadratic forms and matrices. Appendix 2: Vector spaces and subspaces. Appendix 3: The relation between the solution sets of two linear systems. Appendix 4: Exam problems of 2006 and 2007.

Lecture 1: Basic Concepts

1. Basic concepts of linear systems. The general form of a linear system is
  a11 x1 + a12 x2 + ⋯ + a1n xn = b1,
  a21 x1 + a22 x2 + ⋯ + a2n xn = b2,
  ⋯⋯⋯⋯
  am1 x1 + am2 x2 + ⋯ + amn xn = bm,
where the number of unknowns n and the number of equations m need not be equal. A solution of the system is an n-dimensional vector (k1, k2, ⋯, kn) (a solution vector) such that every equation becomes an identity when each unknown xi is replaced by ki. The solution situation is one of three: no solution, a unique solution, or infinitely many solutions. The two main questions about a linear system are: (1) determine the solution situation; (2) solve it, in particular find the general solution when there are infinitely many solutions. A system with b1 = b2 = ⋯ = bm = 0 is called homogeneous. The n-dimensional zero vector is always a solution of a homogeneous system, the zero solution; hence a homogeneous system has only two possible situations: a unique solution (only the zero solution) or infinitely many solutions (a nonzero solution exists). Replacing every constant term of a nonhomogeneous system by 0 gives its derived homogeneous system, the derived system for short.

2. Matrices and vectors. (1) Basic concepts. Matrices and vectors are developments of the numerical forms that describe the states of things. A table of m×n numbers in m rows and n columns, bounded by parentheses or brackets, is an m×n matrix. For example,

  2 -1 0  1 1
  1  1 1  0 2
  2  5 4 -2 9
  3  3 3 -1 8

is a 4×5 matrix. For the linear system above, the matrix A = (a_ij)_{m×n} of coefficients and the matrix (A|β), which is A with the extra column b1, b2, ⋯, bm appended, are its coefficient matrix and augmented matrix. The augmented matrix carries all the information of the system; for a homogeneous system the coefficient matrix alone carries it all. The numbers in a matrix are its elements; the one in row i and column j is the (i,j) element. A matrix whose elements are all 0 is a zero matrix, written simply 0. Two matrices A and B are equal (written A = B) when they have the same number of rows and the same number of columns (the same type) and all corresponding elements are equal. An ordered array of n numbers is an n-dimensional vector, and the numbers are its components. Vectors may be written in matrix form: the vector with components a1, a2, ⋯, an may be written (a1, a2, ⋯, an) or as a column. Note that as vectors these are the same, but as matrices they differ (the first is 1×n, the second n×1); they are customarily called a row vector and a column vector. (Distinguish these from the row and column vectors of a matrix defined next.) Each row of an m×n matrix is an n-dimensional vector, a row vector of the matrix; each column is an m-dimensional vector, a column vector of the matrix. A matrix is often written through its column vectors: when the columns of A are α1, α2, ⋯, αn (each written as a column!), we write A = (α1, α2, ⋯, αn). Many matrix notions carry over to vectors: a vector of all zeros is a zero vector, also written 0; two vectors α and β are equal (α = β) when their dimensions are equal and corresponding components are equal.

(2) Linear operations and transpose. Linear operations are common to matrices and vectors; we state them for matrices. Addition (subtraction): two m×n matrices A and B can be added (subtracted); the sum (difference) is again m×n, written A+B (A−B), computed elementwise. Scalar multiplication: an m×n matrix A times a number c is again m×n, written cA, computed by multiplying every element by c. These two operations together are the linear operations; they obey: ① commutativity of addition: A+B = B+A; ② associativity of addition: (A+B)+C = A+(B+C); ③ distributivity: c(A+B) = cA+cB and (c+d)A = cA+dA; ④ associativity of scalar multiplication: c(dA) = (cd)A; ⑤ cA = 0 ⟺ c = 0 or A = 0. Transpose: interchanging the rows and columns of an m×n matrix A gives an n×m matrix, the transpose of A, written A^T (or A′). Rules: ① (A^T)^T = A; ② (A+B)^T = A^T + B^T; ③ (cA)^T = cA^T. Transposition is peculiar to matrices; using the transpose symbol on a vector means viewing it as a matrix: when α is a column vector, α^T denotes a row vector, and when α is a row vector, α^T denotes a column vector. Linear combination of a vector group: if α1, α2, ⋯, αs are n-dimensional vectors and c1, c2, ⋯, cs are numbers, then c1α1 + c2α2 + ⋯ + csαs is a linear combination of α1, ⋯, αs (with coefficients c1, ⋯, cs); it is again an n-dimensional vector.

(3) n-th order matrices and some special matrices. A matrix with equal numbers of rows and columns is a square matrix; one with n rows and n columns is also called an n-th order matrix. The diagonal of an n-th order matrix runs from top-left to bottom-right (its elements have equal row and column indices). Commonly used n-th order matrices, all required by the examination syllabus: diagonal matrix, all elements off the diagonal are 0; identity matrix, the diagonal matrix whose diagonal elements are all 1, written E (or I); scalar matrix, the diagonal matrix whose diagonal elements all equal a constant c, namely cE; upper triangular matrix, all elements below the diagonal are 0; lower triangular matrix, all elements above the diagonal are 0; symmetric matrix, A^T = A, i.e. the (i,j) and (j,i) elements are always equal. (Antisymmetric matrix: A^T = −A, i.e. the (i,j) and (j,i) elements always sum to 0; the diagonal elements of an antisymmetric matrix are necessarily 0.)

3. Elementary transformations and echelon matrices. A matrix admits three elementary row transformations: ① interchange two rows; ② multiply the elements of a row by a nonzero constant; ③ add a multiple of one row to another row (a multiple-addition transformation). Analogously there are three elementary column transformations, which the reader may write down by imitation; they are omitted here. Row and column transformations together are the elementary transformations. Echelon matrix: a matrix is an echelon matrix if ① its zero rows, if any, all appear at the bottom, and ② the column index of the first nonzero element of each nonzero row strictly increases from top to bottom. The position of the first nonzero element of each nonzero row is called a pivot position. Simple (reduced) echelon matrix: a special echelon matrix in which, additionally, ③ each pivot element is 1 and ④ all elements directly above a pivot are 0. Every matrix can be brought to echelon form, and to reduced echelon form, by elementary row transformations. This computation is used constantly in every kind of linear algebra exercise and must be thoroughly mastered. Note: 1. The echelon matrix obtained from a given matrix by row transformations is not unique, but the number of its nonzero rows and the pivot positions are determined. 2. The reduced echelon matrix obtained from a given matrix by row transformations is unique.

4. The matrix elimination method. The basic method for linear systems is the elimination method of school mathematics: transform the system, preserving the solution set, into an echelon system (one whose augmented matrix is an echelon matrix). The solution-preserving transformations of a system are three: ① interchange the positions of two equations; ② multiply an equation by a nonzero constant; ③ add a multiple of one equation to another. On the augmented matrix these are exactly the three elementary row transformations. Solving by elimination is carried out on the augmented or coefficient matrix and is called the matrix elimination method. For a nonhomogeneous system: (1) write the augmented matrix (A|β) and reduce it by elementary row transformations to an echelon matrix (B|γ). (2) Read off the solution situation from (B|γ): if the lowest nonzero row is (0, 0, ⋯, 0 | d), there is no solution; otherwise there are solutions. When solutions exist, let r be the number of nonzero rows (r never exceeds the number of unknowns n): r = n gives a unique solution; r < n gives infinitely many. (Corollary: when the number of equations m < n, a unique solution is impossible.) (3) To find the unique solution by elementary transformations: delete the zero rows of (B|γ) to get an n×(n+1) matrix (B0|γ0) and reduce it by row transformations to the reduced echelon form (E|η); then η is the solution. For a homogeneous system: (1) write the coefficient matrix A and reduce it by row transformations to an echelon matrix B. (2) Read off the situation from B: if the number of nonzero rows r = n, there is only the zero solution; if r < n, nonzero solutions exist (the method of solution is given in Chapter 5). (Corollary: when the number of equations m < n, nonzero solutions exist.)

Discussion problems. 1. Let A be an n-th order matrix. Then: (A) A upper triangular ⟹ A echelon; (B) A upper triangular ⟸ A echelon; (C) A upper triangular ⟺ A echelon; (D) there is no direct implication between the two. 2. Which of the following statements hold? (1) If A is an echelon matrix, then A with any one row deleted is still an echelon matrix. (2) If A is an echelon matrix, then A with any one column deleted is still an echelon matrix. (3) If (A|B) is an echelon matrix, then A is also an echelon matrix. (4) If (A|B) is an echelon matrix, then B is also an echelon matrix. (5) If the stacked matrix with A above B is an echelon matrix, then A and B are both echelon matrices.

Lecture 2: Determinants

I. Review of concepts.

1. Form and meaning. Form: a table of n² numbers in n rows and n columns, bounded by vertical bars, is an n-th order determinant:
  |a11 a12 ⋯ a1n; a21 a22 ⋯ a2n; ⋯; an1 an2 ⋯ ann|.
If its column vectors are α1, α2, ⋯, αn, the determinant may be written |α1, α2, ⋯, αn|. Meaning: it is an expression; operating on the n² elements by a definite rule produces a number, the value of the determinant. Note the differences between determinants and matrices, in form and in meaning. When two determinants have equal values, an equals sign may be written between them (they need not look alike, and their orders may even differ). Every n-th order matrix A corresponds to an n-th order determinant, written |A|. The core problems of this lecture are computing values and deciding whether a determinant's value is 0.

2. Definition (complete expansion). Formulas for 2nd and 3rd order determinants:
  |a11 a12; a21 a22| = a11a22 − a12a21;
  |a11 a12 a13; a21 a22 a23; a31 a32 a33| = a11a22a33 + a12a23a31 + a13a21a32 − a13a22a31 − a11a23a32 − a12a21a33.
In general, the value of an n-th order determinant is an algebraic sum of many terms, each a product of n elements taken from distinct rows and distinct columns, of general form a_{1j1}a_{2j2}⋯a_{njn}: writing the n factors in increasing order of row index, the column indices j1j2⋯jn form a permutation of 1, 2, ⋯, n (an n-permutation). There are n! n-permutations, each contributing one term, hence n! terms. "Algebraic sum" means each term is first multiplied by +1 or −1: with τ(j1j2⋯jn) the inversion number of the permutation j1j2⋯jn (the number of occurrences of a smaller number to the right of a larger one), the term carries the sign (−1)^{τ(j1j2⋯jn)}. The inversion number is computed by writing under each number the count of smaller numbers to its right and summing; for example τ(436512) = 3+2+3+2+0+0 = 10. Thus
  |a_ij|_n = Σ_{j1j2⋯jn} (−1)^{τ(j1j2⋯jn)} a_{1j1}a_{2j2}⋯a_{njn},
the sum over all n-permutations: the complete expansion of the n-th order determinant. Computing a value from the complete expansion is in general very laborious; it is feasible only when so many elements are 0 that few terms survive. For example, a diagonal or (upper, lower) triangular determinant equals the product of its main-diagonal elements, all other terms being 0.

3. Reduction by producing zeros and lowering the order. Deleting row i and column j of an n-th order determinant leaves an (n−1)-th order determinant, the minor of the (i,j) element a_ij, written M_ij; A_ij = (−1)^{i+j}M_ij is the cofactor of a_ij. Theorem (expansion along a row or column): the value of a determinant equals the sum over any row (column) of each element times its cofactor. Proposition: elementary transformations of the third type (multiple-addition) do not change the value. The method: use the proposition to reduce some row or column to a single nonzero element, then apply the theorem, passing to a determinant of order lower by 1. This is the principal practical method of computing determinants and should be mastered.

4. Other properties. ① Transposing leaves the value unchanged: |A^T| = |A|. ② A common factor of a row (column) may be taken out; consequently |cA| = cⁿ|A|. ③ Additivity in one row (column): if some row (column) vector α = β + γ, the determinant equals the sum of the two determinants obtained by replacing that row (column) by β and by γ; e.g. |α, β1+β2, γ| = |α, β1, γ| + |α, β2, γ|. ④ Interchanging two rows (columns) changes the sign of the value. ⑤ If one row (column) is a multiple of another, the value is 0. ⑥ The sum over a row (column) of each element times the cofactor of the corresponding element of a different row (column) equals 0. ⑦ If A and B are both square matrices (not necessarily of equal order), then |A *; O B| = |A O; * B| = |A||B|. Vandermonde determinant: the determinant with rows 1, 1, ⋯, 1; a1, a2, a3, ⋯, an; a1², a2², a3², ⋯, an²; ⋯; a1^{n−1}, a2^{n−1}, a3^{n−1}, ⋯, an^{n−1} (or its transpose). It is determined by a1, a2, a3, ⋯, an, its value equals the product of all differences ai − aj with j < i, and hence the Vandermonde determinant is nonzero ⟺ a1, a2, a3, ⋯, an are pairwise distinct. For determinants with patterned elements (including n-th order ones), the properties often simplify the computation, for example by direct reduction to triangular form.

5. Cramer's rule. It applies when the number of equations equals the number of unknowns n (the coefficient matrix is an n-th order matrix). In that case, if the determinant of the coefficient matrix is nonzero, the system has a unique solution, namely (D1/D, D2/D, ⋯, Dn/D), where D is the value of the coefficient determinant and Di is the value of the determinant obtained by replacing the i-th column of the coefficient determinant by the column of constants. Remarks and improvement: solving by these formulas costs far too much computation to be practical, so the rule's significance is mainly theoretical, for judging uniqueness of the solution, and even there the rule as stated falls short. Improvement: the coefficient determinant being nonzero is a necessary and sufficient condition for a unique solution. In practice solve by elementary transformations: reduce (A|β) by row transformations so that A becomes the identity, (A|β) → (E|η); then η is the solution. Applied to homogeneous systems: if the coefficient matrix A is square, the system has only the zero solution if and only if |A| ≠ 0.

II. Typical examples.

1. Using the properties on determinants with patterned elements.
Example 1. ① The 5th order determinant with 2 on the diagonal and a elsewhere; ② the 4th order determinant with diagonal elements 1+x and all other elements 1; ③ the 4th order determinant with rows (1+a, 1, 1, 1), (2, 2+a, 2, 2), (3, 3, 3+a, 3), (4, 4, 4, 4+a).
Example 2. The 5th order circulant determinant with rows 1 2 3 4 5; 2 3 4 5 1; 3 4 5 1 2; 4 5 1 2 3; 5 1 2 3 4.
Example 3. The 4th order determinant with diagonal 1+x1, 1+x2, 1+x3, 1+x4 and all other elements 1.
Example 4. |a 0 b c; 0 a c b; b c a 0; c b 0 a|.
Example 5. The 5th order determinant with diagonal elements 1−a, superdiagonal a, and subdiagonal −1. (1996, Problem IV.)

2. Problems testing concepts and properties.
Example 6. The polynomial f(x) = |x³−3, 1, −3, 2x+2; −7, 5, −2x, 1; x+3, −1, 3, 3x²−2; 9, x³, 6, −6|; find the degree of f(x) and the coefficient of its highest-order term.
Example 7. For f(x) = |x−3, a, −1, 4; 5, x−8, 0, −2; 0, b, x+1, 1; 2, 2, 1, x|, find the coefficients of x⁴ and x³.
Example 8. Let the 4th order matrices A = (α, γ1, γ2, γ3) and B = (β, γ1, γ2, γ3) satisfy |A| = 2, |B| = 3; find |A+B|.
Example 9. Given the determinant with rows (a, b, c, d), (x, −1, −y, z+1), (1, −z, x+3, y), (y−2, x+1, 0, z+3), whose cofactors satisfy A11 = −9, A12 = 3, A13 = −1, A14 = 3, find x, y, z.
Example 10. Find the sum of the minors of the fourth-row elements of the determinant |3 0 4 0; 2 2 2 2; 0 −7 0 0; 5 3 −2 2|. (2001.)

3. Several n-th order determinants. Two kinds of "claw" determinants and their values:
Example 11. Prove that |a1 a2 a3 ⋯ a_{n−1} an; b1 c2 0 ⋯ 0 0; 0 b2 c3 ⋯ 0 0; ⋯; 0 0 0 ⋯ b_{n−1} cn| = Σ_{i=1}^{n} (−1)^{i+1} a_i b1⋯b_{i−1} c_{i+1}⋯cn. (Hint: expand along the first row only; each M_{1i} can be computed directly.)
Example 12. Prove that |a0 a1 a2 ⋯ a_{n−1} an; b1 c1 0 ⋯ 0 0; b2 0 c2 ⋯ 0 0; ⋯; bn 0 0 ⋯ 0 cn| = a0 c1⋯cn − Σ_{i=1}^{n} a_i b_i ∏_{k≠i} c_k. (Hint: expand along the first row only; each M_{1i} can be computed directly.)
Another common n-th order determinant:
Example 13. Prove that |a+b b 0 ⋯ 0 0; a a+b b ⋯ 0 0; ⋯; 0 0 0 ⋯ a+b b; 0 0 0 ⋯ a a+b| = (a^{n+1} − b^{n+1})/(a − b) when a ≠ b. (Hint: add (−1)^{j−1} times column (row) j to column (row) 1 for j = 2, ⋯, n, then expand along the first column (row).)

4. A problem on Cramer's rule.
Example 14. Consider the system x1 + x2 + x3 = a+b+c; a x1 + b x2 + c x3 = a²+b²+c²; bc x1 + ac x2 + ab x3 = 3abc. (1) Prove the system has a unique solution if and only if a, b, c are pairwise distinct. (2) In that case, solve it.

Reference answers. Example 1: ① (2+4a)(2−a)⁴; ② x³(x+4); ③ a³(a+10). Example 2: 1875. Example 3: x1x2x3x4 + x2x3x4 + x1x3x4 + x1x2x4 + x1x2x3. Example 4: (a+b+c)(a+b−c)(a−b+c)(a−b−c). Example 5: 1−a+a²−a³+a⁴−a⁵. Example 6: 9, −6. Example 7: 1, −10. Example 8: 40. Example 9: x = 0, y = 3, z = −1. Example 10: −28. Example 14: x1 = a, x2 = b, x3 = c.

Lecture 3: Matrices

I. Review of concepts.

1. Definition and properties of matrix multiplication. Definition 2.1: when the number of columns of A equals the number of rows of B, A and B can be multiplied; the product is written AB, with as many rows as A and as many columns as B. The (i,j) element of AB is the sum of the products of corresponding components of the i-th row vector of A and the j-th column vector of B (which have the same dimension): with A = (a_ij) of type m×n, B = (b_jk) of type n×s, and C = AB = (c_ij) of type m×s,
  c_ij = a_i1 b_1j + a_i2 b_2j + ⋯ + a_in b_nj.
Matrix multiplication differs in its rules from multiplication of numbers: ① it is conditional; ② there is no commutative law; ③ there is no cancellation law: in general, AB = 0 does not imply A = 0 or B = 0; AB = AC with A ≠ 0 does not imply B = C (no left cancellation); BA = CA with A ≠ 0 does not imply B = C (no right cancellation). Beware of the common error of carrying the properties of numerical multiplication over to matrix multiplication. Matrix multiplication does satisfy: ① distributivity: A(B+C) = AB + AC, (A+B)C = AC + BC; ② scalars: (cA)B = c(AB); ③ associativity: (AB)C = A(BC); ④ (AB)^T = B^T A^T.

2. Powers and polynomials of n-th order matrices. Any two n-th order matrices A and B can be multiplied, AB is again n-th order, and the determinant satisfies |AB| = |A||B|. If AB = BA, we say A and B commute. Powers: for a positive integer k, the k-th power A^k is the product of k copies of A; by convention A^0 = E. Any two powers of the same A commute, and the exponent laws hold: ① A^k A^h = A^{k+h}; ② (A^k)^h = A^{kh}. But in general (AB)^k and A^k B^k need not be equal! Polynomial of an n-th order matrix: for f(x) = a_m x^m + a_{m−1} x^{m−1} + ⋯ + a_1 x + a_0, define f(A) = a_m A^m + a_{m−1} A^{m−1} + ⋯ + a_1 A + a_0 E, a polynomial of A. Note especially the identity matrix E attached to the constant term. Multiplication formulas: in general, because commutativity fails, the factorizations and product formulas of school algebra no longer hold for n-th order matrices. But if all the n-th order matrices appearing in a formula commute with one another, the formula remains valid: for example, when A and B commute, (A±B)² = A² ± 2AB + B², A² − B² = (A+B)(A−B), and the binomial expansion holds, and so on. The validity of the first two formulas is, moreover, also a necessary and sufficient condition for A and B to commute. Two polynomials of the same n-th order matrix always commute, and a polynomial of a matrix can be factored.

3. Block rules. The block rule for matrix multiplication is a way of simplifying products: for two multipliable matrices A and B, first cut them into small matrices by horizontal and vertical lines (every vertical cut of A matching a horizontal cut of B!), then multiply with the blocks as elements. (1) Two common block rules:
  [A11 A12; A21 A22][B11 B12; B21 B22] = [A11B11+A12B21, A11B12+A12B22; A21B11+A22B21, A21B12+A22B22],
where the number of columns of each A_ij must equal the number of rows of the matching B_jk. Quasi-diagonal matrices: a matrix of the form A = diag(A1, A2, ⋯, Ak) with each Ai square is quasi-diagonal. For two quasi-diagonal matrices A = diag(A1, ⋯, Ak), B = diag(B1, ⋯, Bk) of the same type (Ai and Bi of equal order),
  AB = diag(A1B1, A2B2, ⋯, AkBk).

(2) Column and row vectors of a product. Let A be an m×n matrix and B an n×s matrix.
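A sketch of the (A|B) → (E|X) method for AX = B described above, in NumPy; A is an arbitrary invertible 3×3 matrix, B = E is chosen so the result is A⁻¹, and row swaps are omitted for brevity (this A happens to need none):

```python
import numpy as np

A = np.array([[3.0, 2.0, 1.0],
              [1.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])      # |A| = 2, so A is invertible
B = np.eye(3)                        # with B = E, the solution X is A^{-1}

M = np.hstack([A, B])                # the block matrix (A|B)
for c in range(3):                   # Gauss-Jordan sweep turning A into E
    M[c] /= M[c, c]                  # (assumes nonzero pivots; swap rows otherwise)
    for i in range(3):
        if i != c:
            M[i] -= M[i, c] * M[c]
X = M[:, 3:]
print(np.allclose(A @ X, B))         # True: X solves AX = B
```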

Special-Topics Lectures on Linear Algebra
The set V = { x = (0, x2, ⋯, xn)^T | x2, ⋯, xn ∈ R } is a vector space, for if a = (0, a2, ⋯, an)^T ∈ V and b ∈ V, then a + b and λa remain in V. A basis of V is
  e2 = (0, 1, 0, ⋯, 0)^T, e3 = (0, 0, 1, ⋯, 0)^T, ⋯⋯, en = (0, 0, 0, ⋯, 1)^T,
and from this it follows that V is an (n−1)-dimensional vector space.

Example 19. The set V = { x = (1, x2, ⋯, xn)^T | x2, ⋯, xn ∈ R } is not a vector space, because if a = (1, a2, ⋯, an)^T ∈ V, then 2a = (2, 2a2, ⋯, 2an)^T ∉ V.

Example. Verify that a1, a2, a3 form a basis of R³, and find the coordinates of b1 and b2 in this basis. (A worked numerical version appears in the sketch after this section.) Solution: to prove a1, a2, a3 form a basis of R³ it suffices to prove they are linearly independent, i.e. that A = (a1, a2, a3) ~ E. Then set b1 = x11 a1 + x21 a2 + x31 a3, and similarly for b2, and solve for the coefficients.

The vector space generated by vectors a, b is called the vector space spanned by a and b. In general, the vector space generated by the vector group a1, a2, ⋯, am is
  L = { x = λ1 a1 + λ2 a2 + ⋯ + λm am | λ1, λ2, ⋯, λm ∈ R }.

Example 23. Let the vector group a1, ⋯, am and the vector group b1, ⋯, bs be equivalent; write the spaces they generate accordingly.

If the vector group a1, a2, ⋯, ar is a basis of the vector space V, then V can be expressed as
  V = { x = λ1 a1 + λ2 a2 + ⋯ + λr ar | λ1, ⋯, λr ∈ R }.
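Since the numerical entries of the basis example were lost, here is a stand-in computation of coordinates in a basis; the vectors a1, a2, a3 and b below are made-up values:

```python
import numpy as np

# Coordinates of b in the basis a1, a2, a3: solve A x = b with A = (a1 a2 a3).
a1 = np.array([1.0, 1.0, 0.0])
a2 = np.array([0.0, 1.0, 1.0])
a3 = np.array([1.0, 0.0, 1.0])
A = np.column_stack([a1, a2, a3])

# a1, a2, a3 form a basis of R^3 iff A is invertible (equivalently A ~ E):
print(np.linalg.matrix_rank(A) == 3)      # True

b = np.array([2.0, 3.0, 1.0])
x = np.linalg.solve(A, b)                 # the coordinate column (x1, x2, x3)^T
print(x, np.allclose(A @ x, b))
```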

2018 Linear Algebra Foundation Notes

III. Computing determinants. (i) Reduction of order. 1. The concepts of the minor M_ij and the cofactor A_ij. Write the n-th order determinant

  D = |a11 a12 ⋯ a1n; a21 a22 ⋯ a2n; ⋯; an1 an2 ⋯ ann|.

Deleting from D the i-th row and j-th column containing the element a_ij, the remaining n−1 rows and n−1 columns, with their relative positions unchanged, form the minor M_ij of a_ij; the cofactor is A_ij = (−1)^{i+j} M_ij.

Permutations: for example, 2143 is a 4th order permutation, 3124 is also a 4th order permutation, and 25134 is a 5th order permutation. Definition 2: if in a permutation a larger number stands before a smaller one, these two numbers form an inversion; the total number of inversions in a permutation is its inversion number, written τ(j1 j2 ⋯ jn). A permutation with an even inversion number is an even permutation, otherwise an odd permutation. For example, in the 5th order permutation 25134 the inversions are 21, 51, 53, 54, so τ(25134) = 4 and 25134 is even; in the 6th order permutation 365412 the inversions are 31, 32, 65, 64, 61, 62, 54, 51, 52, 41, 42, so τ(365412) = 11 and 365412 is odd.

2. Definition of the n-th order determinant. Second order:
  |a11 a12; a21 a22| = a11a22 − a12a21 (the diagonal rule).
Third order:
  |a11 a12 a13; a21 a22 a23; a31 a32 a33| = a11a22a33 + a12a23a31 + a13a21a32 − a13a22a31 − a11a23a32 − a12a21a33.
Definition 3. The expression
  D = |a11 a12 ⋯ a1n; a21 a22 ⋯ a2n; ⋯; an1 an2 ⋯ ann| = Σ_{j1 j2 ⋯ jn} (−1)^{τ(j1 j2 ⋯ jn)} a_{1j1} a_{2j2} ⋯ a_{njn}
is called the n-th order determinant.
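A tiny helper that reproduces the two inversion counts worked above:

```python
def inversion_number(perm):
    """tau: count pairs (i, j) with i < j and perm[i] > perm[j]."""
    return sum(perm[i] > perm[j]
               for i in range(len(perm))
               for j in range(i + 1, len(perm)))

print(inversion_number([2, 5, 1, 3, 4]))       # 4  -> 25134 is an even permutation
print(inversion_number([3, 6, 5, 4, 1, 2]))    # 11 -> 365412 is an odd permutation
```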

Outline of Key and Difficult Points in Linear Algebra

Introduction: from "high technology is in essence mathematical technology", to CT imaging, to applications of mathematics, to mathematical modeling, to the Matrix of The Matrix Reloaded.

Linear algebra as engineering mathematics: Linear Algebra mainly presents the elementary theory of matrices and its applications, including the algebraic operations on matrices; the rank of a matrix and elementary transformations; eigenvalues, eigenvectors and similarity; together with linear systems and quadratic forms.

The theory of linear dependence in n-dimensional vector spaces is the difficult part of this course.

Throughout the book, linear spaces and linear transformations form the main thread, with the theory of Euclidean spaces developed gradually, so that students master the basic theory and methods of linear algebra: on the one hand laying the necessary foundation for related courses and for broadening their mathematical knowledge, and on the other cultivating the ability to build mathematical models to solve practical problems.

Chapter 1: Determinants. Content overview: the determinant is an important concept in linear algebra.

Starting from the solution formulas for systems in two and three unknowns, this chapter introduces 2nd and 3rd order determinants, generalizes to the n-th order determinant, derives the basic properties of determinants and the theorem on expansion along a row (column), and finally presents Cramer's rule for solving systems of n equations by determinants.

Section 1: Definition and properties of determinants. Teaching aims: review the concepts of 2nd and 3rd order determinants, understand the notion of inversion, and master the definition and properties of the n-th order determinant.

Key points and difficulties: the definition and properties of the n-th order determinant. Lesson plan: I. Review of 2nd and 3rd order determinants. 1. Second-order determinants: we introduce the concept from the solution formulas of a system of two equations in two unknowns.

In linear algebra, the general form of a linear system of two equations in two unknowns is written
  a11 x1 + a12 x2 = b1,
  a21 x1 + a22 x2 = b2.    (1)
Elimination readily yields the values of the unknowns x1, x2: when a11a22 − a12a21 ≠ 0,
  x1 = (b1 a22 − a12 b2) / (a11 a22 − a12 a21),  x2 = (a11 b2 − b1 a21) / (a11 a22 − a12 a21).    (2)
These are the solution formulas of the two-unknown system.

But the formulas are hard to remember, and to make them memorable the concept of the second-order determinant is introduced.

We call the symbol
  |a11 a12; a21 a22|    (3)
a second-order determinant; it denotes the algebraic sum of two terms, a11a22 − a12a21. That is, by definition
  |a11 a12; a21 a22| = a11a22 − a12a21.    (4)
The two-term algebraic sum represented by a second-order determinant can be remembered by the following diagonal rule: the product of the two elements from top-left to bottom-right carries a plus sign, and this line is the main diagonal; the product of the two elements from top-right to bottom-left carries a minus sign, and this line is the secondary (anti-) diagonal. Since the elements of the determinant (3) are exactly the coefficients of the unknowns in the two-unknown system, it is also called the coefficient determinant of the system, written D. Replacing the first-column elements a11, a21 of D by the constants b1, b2 gives another determinant, written D1; by the definition of a second-order determinant it equals the algebraic sum b1a22 − a12b2, which is exactly the numerator of x1 in formula (2).
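The formulas (2) as code: Cramer's rule for the 2×2 system, with an arbitrary numerical check against NumPy's solver:

```python
import numpy as np

def solve_2x2(a11, a12, a21, a22, b1, b2):
    """Cramer's rule, valid when D = a11*a22 - a12*a21 != 0."""
    D = a11 * a22 - a12 * a21
    D1 = b1 * a22 - a12 * b2      # column 1 of D replaced by (b1, b2)
    D2 = a11 * b2 - b1 * a21      # column 2 of D replaced by (b1, b2)
    return D1 / D, D2 / D

x1, x2 = solve_2x2(1, 2, 3, 4, 5, 6)   # x1 + 2*x2 = 5, 3*x1 + 4*x2 = 6
print(x1, x2)                           # -4.0, 4.5
print(np.allclose(np.linalg.solve([[1, 2], [3, 4]], [5, 6]), [x1, x2]))
```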

Lecture 17: Conjugate Operators and Compact Operators
For x ∈ X and y* ∈ Y*, set l(x) = y*(Tx) = (Tx, y*). Then |l(x)| ≤ ‖y*‖ ‖T‖ ‖x‖, so

  ‖l‖ ≤ ‖y*‖ ‖T‖,    (3)

which shows l ∈ X*. Clearly l depends on y*; write it as T*y*, so that T* is an operator from Y* to X*. By definition,

  (x, T*y*) = l(x) = (Tx, y*),  for all x ∈ X, y* ∈ Y*.

Direct verification shows that T* is a linear operator. From (3), ‖T*y*‖ = ‖l‖ ≤ ‖y*‖ ‖T‖, hence ‖T*‖ ≤ ‖T‖.

Conversely, for any x ∈ X with Tx ≠ 0 there exists (by the Hahn-Banach theorem) y0* ∈ Y* with ‖y0*‖ = 1 and y0*(Tx) = ‖Tx‖; then

  ‖Tx‖ = (x, T*y0*) ≤ ‖T*‖ ‖y0*‖ ‖x‖ = ‖T*‖ ‖x‖,

and if Tx = 0 this inequality holds trivially. Hence ‖T‖ ≤ ‖T*‖, and therefore ‖T*‖ = ‖T‖.

For the second conjugate: for every x ∈ X (regarded as an element x** of X**) and every y* ∈ Y*,

  (A**x, y*) = (A**x**, y*) = (x**, A*y*) = (x, A*y*) = (Ax, y*),

so A**x = Ax; thus A** is a norm-preserving extension of A.

Next let us examine an important class of operators, the compact operators, which have significant applications in the theory of integral equations, mathematical physics, and other subjects. Definition 2. Let X, Y be normed linear spaces and T : X → Y a linear operator. …

For the remaining e_i, e_k*(e_i) = 0. That is, e1*, ⋯, en* satisfy

  e_k*(e_i) = δ_ki = 1 if k = i, and 0 if k ≠ i.

We call e1*, ⋯, en* the dual basis of (Φⁿ)* relative to e1, ⋯, en. Similarly, let μ1*, ⋯
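A finite-dimensional illustration of the dual-basis relation e_k*(e_i) = δ_ki: over Rⁿ each functional is a row vector, and the dual basis of a basis matrix E (basis vectors as columns) is simply E⁻¹, row by row. The basis below is an arbitrary choice:

```python
import numpy as np

E = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])     # columns e1, e2, e3: a non-orthogonal basis
dual = np.linalg.inv(E)             # row k is the functional e_k^*

print(np.allclose(dual @ E, np.eye(3)))   # e_k^*(e_i) = delta_ki
```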

Linear Algebra Lecture Notes (17)

Example 4. Solve the system

  x1 − x2 − x3 + x4 = 0,
  x1 − x2 + x3 − 3x4 = 1,
  x1 − x2 − 2x3 + 3x4 = −1/2.

Solution. Apply elementary row transformations to the augmented matrix Ā:

  Ā = [1 −1 −1 1 | 0; 1 −1 1 −3 | 1; 1 −1 −2 3 | −1/2]
    → (subtract row 1 from rows 2 and 3)
      [1 −1 −1 1 | 0; 0 0 2 −4 | 1; 0 0 −1 2 | −1/2]
    → [1 −1 0 −1 | 1/2; 0 0 1 −2 | 1/2; 0 0 0 0 | 0].

We see that r(A) = r(Ā) = 2, so the system is solvable, and since 2 < 4 it has infinitely many solutions; the original system is equivalent to

  x1 = x2 + x4 + 1/2,
  x3 = 2x4 + 1/2.

Taking x2 = x4 = 0 gives x1 = x3 = 1/2, i.e. the particular solution η* = (1/2, 0, 1/2, 0)^T. Taking the free unknowns x2 = c1, x4 = c2 yields the general solution

  x = c1 (1, 1, 0, 0)^T + c2 (1, 0, 2, 1)^T + (1/2, 0, 1/2, 0)^T,  c1, c2 ∈ R.

If x = (λ1, λ2, ⋯, λn)^T satisfies system (1) componentwise, it is called a solution vector of (1); it is exactly a solution of the vector equation (2).

2. Properties of solutions of the homogeneous system Ax = 0.

(1) If ξ1, ξ2 are solutions of Ax = 0, then ξ1 + ξ2 is also a solution of Ax = 0.
Proof: Aξ1 = 0 and Aξ2 = 0, hence A(ξ1 + ξ2) = Aξ1 + Aξ2 = 0.

A further example in five unknowns x1, ⋯, x5: there the system likewise has infinitely many solutions; taking x3 = k1, x4 = k2, x5 = k3 as parameters, its general solution reads x1 = (1/2)k1 + 2k3 + 9/2, x2 = (1/2)k1 + k2 + 3k3, x3 = k1, x4 = k2, x5 = k3.
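Example 4 can be checked with SymPy's exact arithmetic; linsolve keeps x2 and x4 as the free parameters, matching r(A) = r(Ā) = 2:

```python
from sympy import Matrix, Rational, symbols, linsolve

x1, x2, x3, x4 = symbols('x1 x2 x3 x4')
eqs = [x1 - x2 - x3 + x4,
       x1 - x2 + x3 - 3*x4 - 1,
       x1 - x2 - 2*x3 + 3*x4 + Rational(1, 2)]

print(linsolve(eqs, [x1, x2, x3, x4]))
# {(x2 + x4 + 1/2, x2, 2*x4 + 1/2, x4)}: two free unknowns

# The same conclusion read off the augmented matrix:
Ab = Matrix([[1, -1, -1, 1, 0],
             [1, -1, 1, -3, 1],
             [1, -1, -2, 3, Rational(-1, 2)]])
print(Ab.rref()[0])   # rows (1,-1,0,-1,1/2), (0,0,1,-2,1/2), and a zero row
```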

Linear Algebra (Economics and Management Programs) Lecture Notes

Classroom notes. Chapter 1: Determinants. The core content of linear algebra is the study of the existence conditions, the structure, and the methods of solution of linear systems.

The basic tool used is the matrix, and the determinant is one of the most effective tools for studying matrices.

As a mathematical tool, the determinant is not only extremely important in this course but also indispensable in other mathematical subjects, and indeed in many other disciplines (for example computer science, economics, and management).

1.1 The definition of the determinant. (i) Determinants of order one, two, and three. Definition: the symbol |a| is a first-order determinant; it is a number, whose value is defined to be a.

Note: in linear algebra this symbol is not an absolute value. For example, as a first-order determinant |−3| = −3, whereas as an absolute value |−3| = 3.

Definition: the symbol |a11 a12; a21 a22| is a second-order determinant; it too is a number, defined by a11a22 − a12a21, so the value of a second-order determinant is the difference of the products of the numbers on its two diagonals.

The symbol |a11 a12 a13; a21 a22 a23; a31 a32 a33| is a third-order determinant, likewise a number, defined by a11a22a33 + a12a23a31 + a13a21a32 − a13a22a31 − a11a23a32 − a12a21a33. The computation of a third-order determinant is more involved; to help master the formula we may use the following diagonal rule. The method: append copies of the first and second columns to the right of the determinant. Calling the top-left-to-bottom-right diagonal the main diagonal and the top-right-to-bottom-left diagonal the secondary diagonal, the value of a third-order determinant equals the sum of the products of the three numbers on the main diagonal and on the lines parallel to it, minus the sum of the products of the three numbers on the secondary diagonal and on the lines parallel to it.

For example: (1) |1 2 3; 4 5 6; 7 8 9| = 1×5×9 + 2×6×7 + 3×4×8 − 3×5×7 − 1×6×8 − 2×4×9 = 0. (2) and (3) are triangular determinants, (2) upper triangular and (3) lower triangular; from them one sees that in the third order a triangular determinant equals the product of the three numbers on its main diagonal, the other five terms all being 0.

Example 1. For what value of a does the given determinant vanish? Solution: expanding gives 8 − 3a, so the determinant vanishes when a = 8/3.

Example 2. For what values of x is the given determinant positive? Solution: solving yields 0 < x < 9, so the determinant is positive exactly when 0 < x < 9.

(ii) The n-th order determinant. The symbol consists of n rows and n columns of elements (n² elements in all) and is called an n-th order determinant.
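The diagonal (Sarrus) rule as a function, reproducing the worked example (1):

```python
import numpy as np

def det3_sarrus(m):
    """Diagonal rule for a third-order determinant."""
    ((a, b, c), (d, e, f), (g, h, i)) = m
    return (a*e*i + b*f*g + c*d*h) - (c*e*g + a*f*h + b*d*i)

M = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(det3_sarrus(M))                      # 0, as computed in the text
print(round(np.linalg.det(np.array(M))))   # 0
```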

Answers to the Exercises in the 2020 Linear Algebra Coaching Notes

= λ⁴ + λ³ + 2λ² + 3λ + 4, obtained by adding (λ+1) times row 3 to row 4, then (λ²+λ+2) times row 2 to row 4, then (λ³+λ²+2λ+3) times row 1 to row 4, and finally expanding along row 4.
(1) (2018, Problem 3.)

Solution (method one):
  A11 = |1 −1 1; −2 2 −1; 0 3 4| = −3,
  A12 = (−1)^{1+2} |−2 −1 1; 3 2 −1; 0 3 4| = −(−1) = 1,
so A11 − A12 = −3 − 1 = −4.

(Method two):
  A11 − A12 = 1·A11 + (−1)·A12 + 0·A13 + 0·A14
    = |1 −1 0 0; −2 1 −1 1; 3 −2 2 −1; 0 0 3 4|    (this is the expansion of this determinant along its first row)
    = |1 −1 0 0; 0 −1 −1 1; 0 1 2 −1; 0 0 3 4|     (multiples of row 1 added to rows 2 and 3)
    = |−1 −1 1; 1 2 −1; 0 3 4|                     (expanding along the first column)
    = |−1 −1 1; 0 1 0; 0 3 4|                      (row 1 added to row 2)
    = −|1 0; 3 4| = −4.
There are many ways to compute a determinant; for instance, one could equally begin here by adding the first column to the second. Please practice plenty on your own.
Reference Answers to the Exercises in the 2020 Linear Algebra Coaching Notes
April 18, 2019
Mathematics Group, Jinbang Books Editorial Department

Linear Algebra Course Syllabus

Course name: Linear Algebra. Course code: (none given). Course type: required. Total credits: 2. Total hours: 32*, of which lecture hours: 32*. Intended programs and audience: all majors in science (non-mathematics), engineering, economics, and management**. Textbook: (none given). Notes: (1) Most universities allot about 32-48 class hours to this course; to accommodate schools with fewer hours, this syllabus arranges the teaching around the minimum of 32 hours (about 2 credits), and schools or programs with more hours may suitably expand the content that needs reinforcement.

(2) For linear algebra, the basic teaching requirements of science/engineering and economics/management programs are nearly identical, so the content and requirements listed here suit both.

I. Course description. Linear Algebra is a common foundational course for science (non-mathematics), engineering, economics, and management majors; its objects of study are vectors, vector spaces (also called linear spaces), linear transformations, and finite-dimensional systems of linear equations.

The course combines theoretical abstraction, rigor of logical deduction, and breadth of engineering application.

Its main content is the matrix methods commonly used in science and technology, linear systems, and the associated basic computational methods, so that students acquire fluent skill in matrix computation and can apply matrix methods to practical problems.

Through the course, students come to understand and master the basic concepts, principal properties, and basic operations of determinants and matrices; understand the concept of a vector space, linear relations among vectors, and linear transformations; gain acquaintance with the linear structure of Euclidean space; master the methods and theory of solving linear systems; and master the standardization of quadratic forms and tests for positive definiteness.

The mathematical ideas and methods of linear algebra deeply embody the worldview and methodology of dialectical materialism; the history of linear algebra fully displays the pioneering, truth-seeking scientific spirit of mathematicians and the noble sentiments of mathematicians in China and abroad who loved their countries and devoted themselves to their work.

Weaving ideological and political education into the teaching of linear algebra can train students to analyze and solve problems from a philosophical, discerning standpoint; raise their creativity and sense of application; foster patriotism, professional dedication, and a pioneering spirit; and help them form sound character and correct outlooks on the world, on life, and on values.

The theory of linear algebra not only permeates many branches of mathematics but also finds wide application in physics, chemistry, biology, aerospace, economics, engineering, and other fields.

At the same time, the course emphasizes cultivating students' logical and abstract thinking, spatial intuition and imagination, and their ability to analyze and solve problems.

Linear Algebra Comprehensive Review Notes
A matrix all of whose elements are zero is a zero matrix, written O.

The n-th order square matrix whose main-diagonal elements are all 1 and whose other elements are all zero is the n-th order identity matrix, written simply E.

5. Matrix addition. Let A = (a_ij)_{m×n} and B = (b_ij)_{m×n} be two matrices of the same type. Matrix addition is defined by A + B = (a_ij + b_ij)_{m×n}, and A + B is called the sum of A and B. Laws: commutativity, A + B = B + A; associativity, (A + B) + C = A + (B + C).

…then the matrix A is called invertible (or nonsingular, nondegenerate, of full rank), and the matrix B is called the inverse of A. If A has an inverse, the inverse is unique; it is written A⁻¹.

Related theorems and properties: a square matrix A is invertible if and only if |A| ≠ 0. If A is invertible, then |A⁻¹| = |A|⁻¹; (A⁻¹)⁻¹ = A; (λA)⁻¹ = (1/λ)A⁻¹ (λ ≠ 0); (A^T)⁻¹ = (A⁻¹)^T.

4. Transpositions. Definition: interchanging any two elements of a permutation while leaving the others fixed is called a transposition; interchanging two adjacent elements is an adjacent transposition. Theorem: a single transposition of two elements changes the parity of a permutation. Corollary: the number of transpositions taking an odd permutation to the standard permutation is odd; the number taking an even permutation to the standard permutation is even.

5. Definition of the n-th order determinant:
  D = |a11 a12 ⋯ a1n; a21 a22 ⋯ a2n; ⋯; an1 an2 ⋯ ann|.

If the same-order square matrices A and B are both invertible, then AB is also invertible, and (AB)⁻¹ = B⁻¹A⁻¹.

11. Block matrices. The main purpose of partitioning a matrix into blocks is to simplify computations and to streamline arguments. The rules for operating with block matrices parallel those for ordinary matrices.

Typical examples: I. matrix operations; II. computing and reasoning with inverse matrices; III. computing with blocks.

1. The definition of elementary transformations.
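A numerical spot-check of the inverse properties listed above, on random (almost surely invertible) matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

Ai, Bi = np.linalg.inv(A), np.linalg.inv(B)
print(np.allclose(np.linalg.inv(A @ B), Bi @ Ai))            # (AB)^{-1} = B^{-1}A^{-1}
print(np.allclose(np.linalg.inv(A.T), Ai.T))                 # (A^T)^{-1} = (A^{-1})^T
print(np.isclose(np.linalg.det(Ai), 1 / np.linalg.det(A)))   # |A^{-1}| = |A|^{-1}
```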

2014 Tang Jiafeng Linear Algebra Coaching Notes

Wendu Education, 2014 Graduate-Exam Mathematics Spring Foundation Class: Linear Algebra Coaching Notes. Instructor: Tang Jiafeng.

Lecture 1: Determinants

I. Basic concepts. Definition 1 (inversion): let i, j be a pair of distinct positive integers; if i > j but i stands before j, then (i, j) is called an inversion.

Definition 2 (inversion number): let i1 i2 ⋯ in be a permutation of 1, 2, ⋯, n; the total number of inversions contained in the permutation is called its inversion number, written τ(i1 i2 ⋯ in). A permutation with an odd inversion number is an odd permutation; one with an even inversion number is an even permutation.

Definition 3 (determinant): D = |a11 a12 ⋯ a1n; a21 a22 ⋯ a2n; ⋯; an1 an2 ⋯ ann| is called the n-th order determinant, defined by
  D = Σ_{j1 j2 ⋯ jn} (−1)^{τ(j1 j2 ⋯ jn)} a_{1j1} a_{2j2} ⋯ a_{njn}.

Definition 4 (minor and cofactor): deleting from the determinant the i-th row and j-th column containing the element a_ij, the remaining n−1 rows and n−1 columns, in their original order, form an (n−1)-th order determinant called the minor of a_ij, written M_ij; A_ij = (−1)^{i+j} M_ij is called the cofactor of a_ij.

II. Several special higher-order determinants.
1. Diagonal determinant: the determinant with a1, a2, ⋯, an on the diagonal and zeros elsewhere equals a1 a2 ⋯ an.
2. Upper (lower) triangular determinant: |a11 a12 ⋯ a1n; 0 a22 ⋯ a2n; ⋯; 0 0 ⋯ ann| = |a11 0 ⋯ 0; a21 a22 ⋯ 0; ⋯; an1 an2 ⋯ ann| = a11 a22 ⋯ ann.
3. Block determinants: |A O; O B| = |A C; O B| = |A O; C B| = |A||B|.
4. Vandermonde determinant: V(a1, a2, ⋯, an) = |1 1 ⋯ 1; a1 a2 ⋯ an; ⋯; a1^{n−1} a2^{n−1} ⋯ an^{n−1}| is called the n-th order Vandermonde determinant, and
  V(a1, a2, ⋯, an) = ∏_{1≤j<i≤n} (a_i − a_j).
Theorem 4.11. The eigenvectors of a real symmetric matrix corresponding to distinct eigenvalues are orthogonal.

Proof. Let A be an n-th order real symmetric matrix, and let x1, x2 be eigenvectors of A for distinct eigenvalues λ1, λ2:
  Ax1 = λ1 x1 (x1 ≠ o),  Ax2 = λ2 x2 (x2 ≠ o).
Then x2ᵀAx1 = λ1 x2ᵀx1 and x1ᵀAx2 = λ2 x1ᵀx2. Because A is real symmetric and x2ᵀAx1 is a number,
  x2ᵀAx1 = (x2ᵀAx1)ᵀ = x1ᵀAx2,
so λ1 x2ᵀx1 = λ2 x1ᵀx2. Since x2ᵀx1 = x1ᵀx2, this gives (λ1 − λ2) x2ᵀx1 = 0, and as λ1 ≠ λ2 we conclude x2ᵀx1 = 0, i.e. x2 and x1 are orthogonal.

If an n-th order real symmetric matrix A has m distinct eigenvalues λ1, λ2, ⋯, λm with multiplicities k1, k2, ⋯, km, then k1 + k2 + ⋯ + km = n. It can be shown (proof omitted) that for a ki-fold eigenvalue λi, A has exactly ki linearly independent eigenvectors corresponding to λi. Orthogonalizing these ki eigenvectors by the Schmidt process, the resulting ki vectors are still eigenvectors of A for λi; and since eigenvectors of A for different eigenvalues are mutually orthogonal, we obtain k1 + k2 + ⋯ + km = n orthogonal eigenvectors in all. Normalizing them yields an orthonormal set.

Theorem 4.12. If A is a real symmetric matrix, there exists an orthogonal matrix Q such that Q⁻¹AQ is a diagonal matrix.

Example 1. For the real symmetric matrix
  A = [1 −2 0; −2 2 −2; 0 −2 3],
find an orthogonal matrix Q such that Q⁻¹AQ is diagonal.

Solution. The characteristic equation of A is
  |λI − A| = |λ−1 2 0; 2 λ−2 2; 0 2 λ−3| = (λ+1)(λ−2)(λ−5) = 0,
so the eigenvalues of A are λ1 = −1, λ2 = 2, λ3 = 5.
For λ1 = −1, solving the homogeneous system (−I − A)x = o gives the basic solution x1 = (2, 2, 1)ᵀ.
For λ2 = 2, solving (2I − A)x = o gives x2 = (2, −1, −2)ᵀ.
For λ3 = 5, solving (5I − A)x = o gives x3 = (1, −2, 2)ᵀ.
It is easy to verify that x1, x2, x3 form an orthogonal set. Normalizing,
  q1 = x1/‖x1‖ = (2/3, 2/3, 1/3)ᵀ, q2 = x2/‖x2‖ = (2/3, −1/3, −2/3)ᵀ, q3 = x3/‖x3‖ = (1/3, −2/3, 2/3)ᵀ,
and with Q = (q1, q2, q3),
  Q⁻¹AQ = QᵀAQ = diag(−1, 2, 5).

Example 2. For the real symmetric matrix
  A = [2 −2 2; −2 5 −4; 2 −4 5],
find an orthogonal matrix Q such that Q⁻¹AQ is diagonal.

Solution. The characteristic equation of A is
  |λI − A| = (λ−1)²(λ−10) = 0,
so the eigenvalues are λ1 = λ2 = 1, λ3 = 10.
For λ1 = λ2 = 1, solving (I − A)x = o gives the basic solutions x1 = (2, 1, 0)ᵀ, x2 = (−2, 0, 1)ᵀ. Orthogonalize them by the Schmidt process:
  x̃1 = x1 = (2, 1, 0)ᵀ,
  x̃2 = x2 − (x2ᵀx̃1 / x̃1ᵀx̃1) x̃1 = (−2, 0, 1)ᵀ − (−4/5)(2, 1, 0)ᵀ = (1/5)(−2, 4, 5)ᵀ.
Normalizing x̃1, x̃2:
  q1 = x̃1/‖x̃1‖ = (1/√5)(2, 1, 0)ᵀ,  q2 = x̃2/‖x̃2‖ = (1/(3√5))(−2, 4, 5)ᵀ.
For λ3 = 10, solving (10I − A)x = o gives the basic solution x3 = (1, −2, 2)ᵀ, which normalized is q3 = (1/3)(1, −2, 2)ᵀ. Then with Q = (q1, q2, q3),
  Q⁻¹AQ = QᵀAQ = diag(1, 1, 10).

§4.4 Convergence of Matrix Series

(i) Limits of vector sequences and matrix sequences.

1. Limit of a vector sequence. Given a sequence of vectors x⁽ᵏ⁾: x⁽¹⁾, x⁽²⁾, ⋯, x⁽ᵏ⁾, ⋯, where
  x⁽ᵏ⁾ = (x1⁽ᵏ⁾, x2⁽ᵏ⁾, ⋯, xn⁽ᵏ⁾),
if every component sequence has a limit, i.e.
  lim_{k→∞} xi⁽ᵏ⁾ = xi (i = 1, 2, ⋯, n),
then the vector x = (x1, x2, ⋯, xn) is called the limit of the vector sequence x⁽¹⁾, x⁽²⁾, ⋯, x⁽ᵏ⁾, ⋯, written
  lim_{k→∞} x⁽ᵏ⁾ = x, or x⁽ᵏ⁾ → x (k → ∞).

Example 1. Consider the vector sequence x⁽ᵏ⁾ = (1/2ᵏ, 2k/(k+1)). Since
  lim_{k→∞} 1/2ᵏ = 0 and lim_{k→∞} 2k/(k+1) = 2,
we have lim_{k→∞} x⁽ᵏ⁾ = (0, 2).

Example 2. Let x⁽ᵏ⁾ = (1/k, 1/3ᵏ)ᵀ. Since lim_{k→∞} 1/k = 0 and lim_{k→∞} 1/3ᵏ = 0, we have lim_{k→∞} x⁽ᵏ⁾ = (0, 0)ᵀ.

2. Limit of a matrix sequence. Given a sequence of m×n matrices A⁽ᵏ⁾: A⁽¹⁾, A⁽²⁾, ⋯, A⁽ᵏ⁾, ⋯ with A⁽ᵏ⁾ = (a_ij⁽ᵏ⁾)_{m×n}, if every element sequence has a limit, i.e.
  lim_{k→∞} a_ij⁽ᵏ⁾ = a_ij (i = 1, 2, ⋯, m; j = 1, 2, ⋯, n),
then the matrix A = (a_ij)_{m×n} is called the limit of the matrix sequence A⁽¹⁾, A⁽²⁾, ⋯, A⁽ᵏ⁾, ⋯, written
  lim_{k→∞} A⁽ᵏ⁾ = A, or A⁽ᵏ⁾ → A (k → ∞).
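Example 2 can be verified numerically. numpy.linalg.eigh returns an orthonormal eigenbasis for a symmetric matrix; note that the sign pattern of A below is a reconstruction (the extraction lost the minus signs), chosen so that the eigenvalues come out 1, 1, 10 as stated:

```python
import numpy as np

A = np.array([[ 2.0, -2.0,  2.0],
              [-2.0,  5.0, -4.0],
              [ 2.0, -4.0,  5.0]])

w, Q = np.linalg.eigh(A)        # ascending eigenvalues and orthonormal eigenvectors
print(w)                                      # [ 1.  1. 10.]
print(np.allclose(Q.T @ Q, np.eye(3)))        # Q is orthogonal
print(np.allclose(Q.T @ A @ Q, np.diag(w)))   # Q^T A Q = diag(1, 1, 10)
```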