高等数学【线性代数】英文版课件2

合集下载

线性代数 英文讲义

线性代数 英文讲义

Chapter 1 Matrices and Systems of EquationsLinear systems arise in applications to such areas as engineering, physics, electronics, business, economics, sociology(社会学), ecology (生态学), demography(人口统计学), and genetics(遗传学), etc. §1. Systems of Linear EquationsNew words and phrases in this section:Linear equation 线性方程Linear system,System of linear equations 线性方程组Unknown 未知量Consistent 相容的Consistence 相容性Inconsistent不相容的Inconsistence 不相容性Solution 解Solution set 解集Equivalent 等价的Equivalence 等价性Equivalent system 等价方程组Strict triangular system 严格上三角方程组Strict triangular form 严格上三角形式Back Substitution 回代法Matrix 矩阵Coefficient matrix 系数矩阵Augmented matrix 增广矩阵Pivot element 主元Pivotal row 主行Echelon form 阶梯形1.1 DefinitionsA linear equation (线性方程) in n unknowns(未知量)is1122...n na x a x a x b+++=A linear system of m equations in n unknowns is11112211211222221122...... .........n n n n m m m n n m a x a x a x b a x a x a x b a x a x a x b+++=⎧⎪+++=⎪⎨⎪⎪+++=⎩ This is called a m x n (read as m by n) system.A solution to an m x n system is an ordered n-tuple of numbers (n 元数组)12(,,...,)n x x x that satisfies all the equations.A system is said to be inconsistent (不相容的) if the system has no solutions.A system is said to be consistent (相容的)if the system has at least one solution.The set of all solutions to a linear system is called the solution set(解集)of the linear system.1.2 Geometric Interpretations of 2x2 Systems11112212112222a x a xb a x a x b +=⎧⎨+=⎩ Each equation can be represented graphically as a line in the plane. The ordered pair 12(,)x x will be a solution if and only if it lies on bothlines.In the plane, the possible relative positions are(1) two lines intersect at exactly a point; (The solution set has exactly one element)(2)two lines are parallel; (The solution set is empty)(3)two lines coincide. (The solution set has infinitely manyelements)The situation is the same for mxn systems. An mxn system may not be consistent. If it is consistent, it must either have exactly one solution or infinitely many solutions. These are only possibilities.Of more immediate concerns is the problem of finding all solutions to a given system.1.3 Equivalent systemsTwo systems of equations involving the same variables are said to be equivalent(等价的,同解的)if they have the same solution set.To find the solution set of a system, we usually use operations to reduce the original system to a simpler equivalent system.It is clear that the following three operations do not change the solution set of a system.(1)Interchange the order in which two equations of a system arewritten;(2)Multiply through one equation of a system by a nonzero realnumber;(3)Add a multiple of one equation to another equation. (subtracta multiple of one equation from another one)Remark: The three operations above are very important in dealing with linear systems. They coincide with the three row operations of matrices. Ask a student about the proof.1.4 n x n systemsIf an nxn system has exactly one solution, then operation 1 and 3 can be used to obtain an equivalent “strictly triangular system ”A system is said to be in strict triangular form (严格三角形) if in the k-th equation the coefficients of the first k-1 variables are all zero and the coefficient ofkx is nonzero. (k=1, 2, …,n)An example of a system in strict triangular form:123233331 2 24x x x x x x ++=⎧⎪-=⎨⎪=⎩Any nxn strictly triangular system can be solved by back substitution (回代法).(Note: A phrase: “substitute 3 for x ” == “replace x by 3”)In general, given a system of linear equations in n unknowns, we will use operation I and III to try to obtain an equivalent system that is strictly triangular.We can associate with a linear system an mxn array of numbers whose entries are coefficient of theix ’s. we will refer to this array as thecoefficient matrix (系数矩阵) of the system.111212122212.....................n nm m m n a a a a a a a a a ⎛⎫⎪ ⎪ ⎪ ⎪⎝⎭A matrix (矩阵) is a rectangular array of numbersIf we attach to the coefficient matrix an additional column whose entries are the numbers on the right-hand side of the system, we obtain the new matrix11121121222212n n s m m m na a ab a a a b b a a a ⎛⎫ ⎪ ⎪ ⎪⎝⎭We refer to this new matrix as the augmented matrix (增广矩阵) of a linear system.The system can be solved by performing operations on the augmented matrix. i x ’s are placeholders that can be omitted until the endof computation.Corresponding to the three operations used to obtain equivalent systems, the following row operation may be applied to the augmented matrix.1.5 Elementary row operationsThere are three elementary row operations:(1)Interchange two rows;(2)Multiply a row by a nonzero number;(3)Replace a row by its sum with a multiple of another row.Remark: The importance of these three operations is that they do not change the solution set of a linear system and may reduce a linear system to a simpler form.An example is given here to illustrate how to perform row operations on a matrix.★Example:The procedure for applying the three elementary row operations:Step 1: Choose a pivot element (主元)(nonzero) from among the entries in the first column. The row containing the pivotnumber is called a pivotal row(主行). We interchange therows (if necessary) so that the pivotal row is the new firstrow.Multiples of the pivotal row are then subtracted form each of the remaining n-1 rows so as to obtain 0’s in the firstentries of rows 2 through n.Step2: Choose a pivot element from the nonzero entries in column 2, rows 2 through n of the matrix. The row containing thepivot element is then interchanged with the second row ( ifnecessary) of the matrix and is used as the new pivotal row.Multiples of the pivotal row are then subtracted form eachof the remaining n-2 rows so as to eliminate all entries belowthe pivot element in the second column.Step 3: The same procedure is repeated for columns 3 through n-1.Note that at the second step, row 1 and column 1 remain unchanged, at the third step, the first two rows and first two columns remain unchanged, and so on.At each step, the overall dimensions of the system are effectively reduced by 1. (The number of equations and the number of unknowns all decrease by 1.)If the elimination process can be carried out as described, we will arrive at an equivalent strictly triangular system after n-1 steps.However, the procedure will break down if all possible choices for a pivot element are all zero. When this happens, the alternative is to reduce the system to certain special echelon form(梯形矩阵). AssignmentStudents should be able to do all problems.Hand-in problems are: # 7--#11§2. Row Echelon FormNew words and phrases:Row echelon form 行阶梯形Reduced echelon form 简化阶梯形 Lead variable 首变量 Free variable 自由变量Gaussian elimination 高斯消元Gaussian-Jordan reduction. 高斯-若当消元 Overdetermined system 超定方程组 Underdetermined systemHomogeneous system 齐次方程组 Trivial solution 平凡解2.1 Examples and DefinitionIn this section, we discuss how to use elementary row operations to solve mxn systems.Use an example to illustrate the idea.★ Example : Example 1 on page 13. Consider a system represented by the augmented matrix111111110011220031001131112241⎛⎫ ⎪--- ⎪ ⎪-- ⎪- ⎪ ⎪⎝⎭ 111111001120002253001131001130⎛⎫⎪ ⎪ ⎪ ⎪- ⎪ ⎪⎝⎭………..(The details will given in class)We see that at this stage the reduction to strict triangular form breaks down. Since our goal is to simplify the system as much as possible, we move over to the third column. From the example above, we see that the coefficient matrix that we end up with is not in strict triangular form,it is in staircase or echelon form (梯形矩阵).111111001120000013000004003⎛⎫ ⎪ ⎪ ⎪ ⎪- ⎪ ⎪-⎝⎭The equations represented by the last two rows are:12345345512=0 2=3 0=4 03x x x x x x x x x ++++=⎧⎪++⎪⎪⎨⎪-⎪=-⎪⎩Since there are no 5-tuples that could possibly satisfy these equations, the system is inconsistent.Change the system above to a consistent system.111111110011220031001133112244⎛⎫ ⎪--- ⎪ ⎪-- ⎪ ⎪ ⎪⎝⎭ 111111001120000013000000000⎛⎫⎪ ⎪ ⎪ ⎪ ⎪ ⎪⎝⎭The last two equations of the reduced system will be satisfied for any 5-tuple. Thus the solution set will be the set of all 5-tuples satisfying the first 3 equations.The variables corresponding to the first nonzero element in each row of the augment matrix will be referred to as lead variable .(首变量) The remaining variables corresponding to the columns skipped in the reduction process will be referred to as free variables (自由变量).If we transfer the free variables over to the right-hand side in the above system, then we obtain the system:1352435451 2 3x x x x x x x x x ++=--⎧⎪+=-⎨⎪=⎩which is strictly triangular in the unknown 1x 3x 5x . Thus for each pairof values assigned to 2xand4x , there will be a unique solution.★Definition: A matrix is said to be in row echelon form (i) If the first nonzero entry in each nonzero row is 1.(ii)If row k does not consist entirely of zeros, the number of leading zero entries in row k+1 is greater than the number of leading zero entries in row k.(iii) If there are rows whose entries are all zero, they are below therows having nonzero entries.★Definition : The process of using row operations I, II and III to transform a linear system into one whose augmented matrix is in row echelon form is called Gaussian elimination (高斯消元法).Note that row operation II is necessary in order to scale the rows so that the lead coefficients are all 1.It is clear that if the row echelon form of the augmented matrix contains a row of the form (), the system is inconsistent.000|1Otherwise, the system will be consistent.If the system is consistent and the nonzero rows of the row echelon form of the matrix form a strictly triangular system (the number of nonzero rows<the number of unknowns), the system will have a unique solution. If the number of nonzero rows<the number of unknowns, then the system has infinitely many solutions. (There must be at least one free variable. We can assign the free variables arbitrary values and solve for the lead variables.)2.2 Overdetermined SystemsA linear system is said to be overdetermined if there are more equations than unknowns.2.3 Underdetermined SystemsA system of m linear equations in n unknowns is said to be underdetermined if there are fewer equations than unknowns (m<n). It is impossible for an underdetermined system to have only one solution.In the case where the row echelon form of a consistent system has free variables, it is convenient to continue the elimination process until all the entries above each lead 1 have been eliminated. The resulting reduced matrix is said to be in reduced row echelon form. For instance,111111001120000013000000000⎛⎫ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪⎝⎭ 110004001106000013000000000⎛⎫⎪- ⎪ ⎪ ⎪ ⎪ ⎪⎝⎭Put the free variables on the right-hand side, it follows that12345463x x x x x =-=--=Thus for any real numbersαandβ, the 5-tuple()463ααββ---is a solution.Thus all ordered 5-tuple of the form ()463ααββ--- aresolutions to the system.2.4 Reduced Row Echelon Form★Definition : A matrix is said to be in reduced row echelon form if :(i)the matrix is in row echelon form.(ii) The first nonzero entry in each row is the only nonzero entry in its column.The process of using elementary row operations to transform a matrix into reduced echelon form is called Gaussian-Jordan reduction.The procedure for solving a linear system:(i) Write down the augmented matrix associated to the system; (ii) Perform elementary row operations to reduce the augmented matrix into a row echelon form;(iii) If the system if consistent, reduce the row echelon form into areduced row echelon form. (iv) Write the solution in an n-tuple formRemark: Make sure that the students know the difference between the row echelon form and the reduced echelon form.Example 6 on page 18: Use Gauss-Jordan reduction to solve the system:1234123412343030220x x x x x x x x x x x x -+-+=⎧⎪+--=⎨⎪---=⎩The details of the solution will be given in class.2.5 Homogeneous SystemsA system of linear equations is said to be homogeneous if theconstants on the right-hand side are all zero.Homogeneous systems are always consistent since it has a trivial solution. If a homogeneous system has a unique solution, it must be the trivial solution.In the case that m<n (an underdetermined system), there will always free variables and, consequently, additional nontrivial solution.Theorem 1.2.1 An mxn homogeneous system of linear equations has a nontrivial solution if m<n.Proof A homogeneous system is always consistent. The row echelon form of the augmented matrix can have at most m nonzero rows. Thus there are at most m lead variables. There must be some free variable. The free variables can be assigned arbitrary values. For each assignment of values to the free variables, there is a solution to the system.AssignmentStudents should be able to do all problems except 17, 18, 20.Hand-in problems are 9, 10, 16,Select one problem from 14 and 19.§3. Matrix AlgebraNew words and phrases:Algebra 代数Scalar 数量,标量Scalar multiplication 数乘 Real number 实数 Complex number 复数 V ector 向量Row vector 行向量 Column vector 列向量Euclidean n-space n 维欧氏空间 Linear combination 线性组合 Zero matrix 零矩阵Identity matrix 单位矩阵 Diagonal matrix 对角矩阵 Triangular matrix 三角矩阵Upper triangular matrix 上三角矩阵 Lower triangular matrix 下三角矩阵 Transpose of a matrix 矩阵的转置(Multiplicative ) Inverse of a matrix 矩阵的逆 Singular matrix 奇异矩阵 Singularity 奇异性Nonsingular matrix 非奇异矩阵 Nonsingularity 非奇异性The term scalar (标量,数量) is referred to as a real number (实数) or a complex number (复数). Matrix notationAn mxn matrix, a rectangular array of mn numbers.111212122212.....................n nm m m n a a a a a a a a a ⎛⎫⎪ ⎪ ⎪ ⎪⎝⎭()ij A a =3.1 VectorsMatrices that have only one row or one column are of special interest since they are used to represent solutions to linear systems.We will refer to an ordered n-tuple of real numbers as a vector (向量).If an n-tuple is represented in terms of a 1xn matrix, then we will refer to it as a row vector . Alternatively, if the n-tuple is represented by an nx1 matrix, then we will refer to it as a column vector . In this course, we represent a vector as a column vector.The set of all nx1 matrices of real number is called Euclidean n-space (n 维欧氏空间) and is usually denoted by nR.Given a mxn matrix A, it is often necessary to refer to a particular row or column. The matrix A can be represented in terms of either its column vectors or its row vectors.12(a ,a ,,a )n A = ora (1,:)a(2,:)a(,:)A m ⎛⎫ ⎪⎪= ⎪ ⎪⎝⎭3.2 EqualityFor two matrices to be equal, they must have the same dimensions and their corresponding entries must agree★Definition : Two mxn matrices A and B are said to be equal ifij ij a b =for each ordered pair (i, j)3.3 Scalar MultiplicationIf A is a matrix,αis a scalar, thenαA is the mxn matrix formed by multiplying each of the entries of A byα.★Definition : If A is an mxn matrix, αis a scalar, thenαA is themxn matrix whose (i, j) is ij a αfor each ordered pair (i, j) .3.4 Matrix AdditionTwo matrices with the same dimensions can be added by adding their corresponding entries.★Definition : If A and B are both mxn matrices, then the sum A+B is the mxn matrix whose (i,j) entry isij ija b + for each ordered pair (i, j).An mxn zero matrix (零矩阵) is a matrix whose entries are all zero. It acts as an additive identity on the set of all mxn matrices.A+O=O+A=AThe additive of A is (-1)A since A+(-1)A=O=(-1)A+A.A-B=A+(-1)B-A=(-1)A3.5 Matrix Multiplication and Linear Systems3.5.1 MotivationsRepresent a linear system as a matrix equationWe have yet to defined the most important operation, the multiplications of two matrices. A 1x1 system can be writtena xb =A scalar can be treated as a 1x1 matrix. Our goal is to generalize the equation above so that we can represent an mxn system by a single equation.A X B=Case 1: 1xn systems 1122... n n a x a x a x b +++=If we set()12n A a a a =and12n x x X x ⎛⎫ ⎪⎪= ⎪ ⎪⎝⎭, and define1122...n n AX a x a x a x =+++Then the equation can be written as A X b =。

线性代数英文课件:ch1_3 Cofactor Expansion

线性代数英文课件:ch1_3 Cofactor Expansion
a11 a12 a13 行列式给定后,某元素
D a21 a22
a23
的余子式、代数余子式 和该元素本身无关,只
a31 a32 a33 和行列式的其他元素有关
M 23
a11 a31
a12 a32
is the complement minor of a23
A23
(1)23
a11 a31
a12 a32 is the cofactor of a23
3 1 1 2
5 1
3
4 8 r2 r1 r4 5r1
0
4
6
D
2 0 1 1
2 0 1 1
1 5 3 3 16 0 2 7
8 (1)12 1 2
4 1
6
c1 c3
2cTec2l2heemreeni1st i6onnclyolou4nmenn2o,nsz2oero
1 weexp0anse th1e 0
【 Corollary(推论)】
ai1 Aj1 ai2 Aj2 ain Ajn 0 when i j;
a1i A1 j a2i A2 j ani Anj 0 when i j;
For example,
2 1 3
A1 2 1 ,
23 4
then A11 2 A12 A13 0.
Math. Dept., Wuhan University of Technology
Proof: For determinant
a11 a12
a1n
The cofactor of a j1 ,a j2 , ,a jn
a21 a22
a2n
are Aj1 , Aj2 , , Ajn
DD1 ai1 ai 2

线性代数 英文讲义

线性代数 英文讲义

Definition
A matrix is said to be in reduced row echelon form if: ⅰ. The matrix is in row echelon form. ⅱ. The first nonzero entry in each row is the only nonzero entry in its column.
n×n Systems Definition
A system is said to be in strict triangular form if in the kth equation the coefficients of the first k-1 variables are all zero and the coefficient of xk is nonzero (k=1, …,n).
1×n matrix
column vector
x1 x2 X x n
n×1 matrix
Definition
Two m×n matrices A and B are said to be equal if aij=bij for each i and j.
1 1
Matrix Multiplication and Linear Systems
Case 1 One equation in Several Unknows
If we let A (a1 a2 an ) and
Example
x1 x2 1 (a ) x1 x2 3 x 2 x 2 2 1 x1 x2 x3 x4 x5 2 (b) x1 x2 x3 2 x4 2 x5 3 x x x 2 x 3x 2 4 5 1 2 3

线性代数英文课件:ch3-1 Elementary Operations

线性代数英文课件:ch3-1 Elementary Operations

(注:增广矩阵化为最简形时,线性方程组的解亦求出)
下列哪些矩阵是行最简形?
1 0 3 3
1 0
0
1
0
2
?
0 0 1 1
0
1
0 0
0
0
0
0
0
0
0 3
0
2
?
1 1
2
0
1 0
0
1
0 0
0
0
0 3
0
2
?
1 1
0
1Leabharlann 1 0 0 301
0
2
0 0 0 0

0
0
0
0
1 0 0 3
I. ri rj : Interchange row i and row j.
II. kri (or k ri ) : Multiply the ith row by a nonzero
.scalar k.
III. ri krj : Add k times the jth row to the ith row (i j)
0
0
Echelon Form Matrix
Definition 4 A matrix is said to be in row echelon
form(行阶梯型矩阵), or simply an echelon matrix, if:
I. The zero rows, if any, are below all nonzero rows and
1. Elementary Operations and Gaussian Elimination Method
The Cramer’s Rule:前提:

《高等数学 线性代数部分》

《高等数学 线性代数部分》

矩阵空间
矩阵空间的代数维数和几何维数不一定相等。
线性变换的矩阵表示
矩阵作用
矩阵是一种非常方便的表示线性变换的方法, 在 大多数情况下, 矩阵都能表达线性变换。
矩阵元素和变换关系
可以通过矩阵中每个元素的值和与之对应的线性 变换之间的关系, 推导出矩阵的性质。
矩阵运算的动态演示
矩阵运算的乘法可以看作是线性变换的复合, 这种 变换可以使用动态演示来直观地展示。
正交矩阵的应用
正交矩阵在旋转、对称、镜像、奇异值分解等方面 有广泛应用。
2
例子
投影矩阵、旋转矩阵、切比雪夫多项式、求导算子等都是常见的线性变换。
3
作用
线性变换可以用于解决各种数学问题, 如求解微分方程、求解线性代数问题等。
代数维数与几何维数
代数维数
矩阵空间的代数维数是线性无关生成集中向量的数 量。
几何维数
向量空间中基向量的个数就是几何维数。
线性空间
线性空间的代数维数和几何维数是一样的。
矩阵的逆与转置
矩阵的逆
如果存在一个矩阵C, 使得AC=CA=I, 则称矩阵A 是可逆的。
矩阵的转置
将矩阵的行列互换, 得到新的矩阵。
求逆矩阵
使用初等行变换求逆矩阵, 通过计算检验逆矩阵的 正确性。
求转置矩阵
将矩阵的行列互换得到新的矩阵, 解决矩阵的对称 性问题。
向量空间的定义与性质
1
定义
向量空间是一个数域上的向量集合, 满足八个公理。
当向量集中有向量与其他向量 可表示成线性组合, 则该向量 集是线性相关的。
线性无关性
如果向量集中没有任何一个向 量可表示成其他向量的线性组 合, 则该向量集是线性无关的。

线性代数英文课件2.2

线性代数英文课件2.2

Definition: • A collection of a finite number of linear equations involving the same variables is called a system of linear equations, often abbreviated as SLE. • A system of m linear equations in n variables is called an m × n SLE (pronounced “m by n”). • An n-vector (x1 , x2 , ..., xn ) is called a solution to a particular SLE with n variables if it satisfies all of the equations in the system.
Math 1229A/B
Unit 5: Systems of Linear Equations
(text reference: Section 2.2)
Байду номын сангаас
c V. Olds 2010
Unit 5
61
5
Systems of Linear Equations
You know what the standard form of an equation of a line in ℜ2 looks like. It has the form ax + by = c, for some constants a, b and c. Any equation in 2 variables in which each variable is only multiplied by a constant is the standard form equation of a line in ℜ2 . The variables are usually called x and y , but they could be called x1 and x2 , or m and n, or anything. Of course, we can have other kinds of equations with 2 variables, and you’ve probably seen some of those before. For instance, x2 + y 2 = 1 is the equation of the circle with radius 1. More complicated curves in 2-space have equations like x2 + xy + y 2 = 4 or 5x4 y − 3x2 y 2 + 2xy 5 = 0 or √ √ y x − x y + y = 3. And then there are equations like x + x − y22 = 6, and 2x+y = 4 and x + sin y = 1. But none of those are linear equations. They’re not lines. They’re curvy things. You also know that in ℜ3 , an equation like x + y + z = 1 is not a line. It’s a plane. But ... what is a plane? It’s just a whole bunch of lines side-by-side. Well, okay, we wouldn’t really think of a plane that way, but a plane does have some characteristics which make it similar to a line in some ways. It’s flat. No curvy bits. And the same goes for a hyperplane in ℜm . Any plane or hyperplane, in standard form, has an equation which is just the sum of a bunch of constant multiples of variables, set equal to some constant. There are never any more complicated things done to the variables, like squaring, or taking the square root, or multiplying two variables together, or dividing by a variable. Just a constant times a variable, plus a constant times another variable, plus ... and equal to some constant. The most interesting things that happen are that sometimes a constant is negative, so that we’re actually subtracting, or sometimes a constant is 0, so that the variable isn’t even there. Equations like this are called linear equations, because the relationship between the variables is always like the relationship between the variables in an equation of a line in ℜ2 . We’re going to be working with linear equations a lot in this course. In fact, the textbook is called Elementary Linear Algebra, because the kind of algebra we’re studying all relates to these linear equations. So we need a careful definition of what these are. But it’s not any more complicated than what we’ve already said. Definition: A linear equation in the variables x1 , x2 , ... xn is an equation which can be put into the form a1 x1 + a2 x2 + ...an xn = b for some constants a1 , a2 , ..., an and b, where not all of the ai values are 0. Note: That last bit just means that the equation 0 = 0 isn’t considered to be a linear equation. There has to be at least one variable actually appearing in the equation. Also, as you already know, the variables don’t necessarily have to have the names shown in the definition. They could be x, y, z, w, r and t, or they could be subscripted y ’s or s’s instead of x’s. The names of the variables don’t matter. Any equation which says, or can be rearranged to say, that the sum of scalar multiples of variables is equal to a constant is a linear equation.

线性代数英文课件:ch1_1 Definition

线性代数英文课件:ch1_1 Definition
Linear Algebra
Math. Dept., Wuhan University of Technology
Textbook: 工程数学-线性代数,第五版,同济大学数学系编, 高等教育出版社,2011 References:
➢线性代数(第7版),S.J.Leon,机械工业出版社,2007 (Linear Algebra With Applications) ➢经济数学:线性代数,吴传生等编,高等教育出
Math. Dept., Wuhan University of Technology
Linear Algebra History
➢Leibniz introduced the definition of determinant in 17th century。 (日本数学家关孝和Seki Kowa将其概念称为“行列式”)
Example 1.
2 -1
Evaluate
.
34
23
Example 2. Evaluate
.
-1 4
Example 3.
23
Evaluate
.
15
Math. Dept., Wuhan University of Technology
Sec.1 Determinants of Order 2 and 3
24 8 4 16 4
2 10 Example 5. Evaluate D3 1 1 4
3 2 5
Math. Dept., Wuhan University of Technology
3.Permutations &Number of Inversions
? How to generalize the definitions of 2×2 and 3×3 determinants to n×n determinants? Analyzing formula (1),we can get : (1) It’s the algebraic sum of six (which is exactly the

线性代数英文课件3.3

线性代数英文课件3.3


1 3 5
Solution: We row-reduce each of the given matrices to get the RREF matrix. (You’ve had lots of practice with row-reducing by now, so the details of the reduction are not shown here. Of course, you should feel free to check that the RREF matrices shown here are correct.) 1 1 −1 1 0 0 RREF 2 1 − (a) 1 − − − → 0 1 0 −2 −3 4 0 0 1 The RREF of A does not contain any zero rows (i.e. rows containing only zeroes), so there are 3 non-zero rows. Therefore r(A) = 3, and since A has 3 columns, A has full rank. 1 1 1 1 1 1 RREF (b) 2 2 2 − − − − → 0 0 0 3 3 3 0 0 0 This time, there are some zero rows in the RREF of A. In fact, the RREF of A has only one non-zero row, so r(A) = 1. (And since A has more than 1 column, A does not have full rank.) 1 (c) 2 3 1 0 1 1 RREF 3 3 − − − − → 0 1 3 5 0 0 0 0 1

线性代数英文课件:ch4_Review

线性代数英文课件:ch4_Review

x1
x3
1 3
x5 ;
x2
x3
2 3
x5 ;
x3
x3 ;
x4
5 3
x5 ;
x5
x5
1 0 1 0 1/3
0
1
1
0
2
/
3
0 0 0 1 5/3
0
0
0
0
0
x1 x2 x3 x4 x5
c1
1
1
1
0
0
c2
0
1 3 2 3
which means
(k k1 knr ) * k11 kn-r n-r 0 (2)
From question(1) we know
*,1, ,n-r are linearly independent,
(k k1 knr ) 0; k1 0, , kn-r 0. k k1 kn-r 0.
1 0 1 0 1/3
0
1
1
0
2
/
3
0 0 0 1 5/3
0 0 0 0 0
x1 x3 1 / 3;
x2
x3
2 / 3;
x3
x3 ;
x4 5 / 3.
x1 1 1 / 3
x2 x3 x4
c
1
1
0
2 0 5
/ /
3
3
,
c
R.
(6)Let matrix A=(1 ,2 ,3 ,4 ,5 ),solve Ax 0.
1, ,n-r are linearly independent.
k1 knr 0,
So *,1, ,n-r are linearly independent.

线性代数第一章初等变化和秩英文版

线性代数第一章初等变化和秩英文版
Chapter 1 Elementary Operation and Rank
1. Gauss Elimination of Linear Systems of Equations
2. Elementary Operation and Elementary Matrices
3. Equivalence of Matrices 4. Applications of Elementary
Solution Multiply (1) by -2 and add it to (2), multiply (1) by -1 and add it to (3) we have:
x1 2x2 5x3 19
(1)
4x2 13x3 60
(4)
x2 7x3 30
(5)
Interchange (4) and (5) we obtain:
a11
A
(
A,
)
a21
a12
a22
a1n a2n
b1 b2
am1 am2 amn bm
is called the augment matrix of LS (1.1).
An LS can uniquely determine an augment
matrix A ; on the other hand an m (n 1)
matrix A can uniquely determine an m n
LS. For example 3 4
matrix
1 A 2
2 8
5 3
19 22
1 3 2 11
uniquely determines the following 3 3 LS

线性代数 英文讲义

线性代数 英文讲义

Chapter 2 DeterminantsWith each square matrix it is possible to associate a real number called the determinant of the matrix. The value of this number tells us whether the matrix is nonsingular.§1. The Determinant of a MatrixNew words and phrasesDeterminant 行列式Minor 余子式Cofactor 代数余子式1.1 Introduction and DefinitionFirst we consider three simple cases. Notice that A is nonsingular if and only if A is row equivalent to I.Case 1 1x1 MatricesIf A=(a) , we define det(A)=a. A is nonsingular if and only if det(A) is not zero.Case 2 2x2 MatricesIf we use row operations, we see that A is nonsingular if and only if112212210a a a a-≠. We definedet(A)= 11221221a a a a -.We can use the determinants of 1x1 matrices to express det(A) Notation We can refer to the determinant of a specific matrix by enclosing the array between vertical lines.1112112212212122a a a a a a a a =-Case 3 3x3 MatricesThis case is left to students. In this case, if we define thatdet(A)= 112233122331132132132231122133112332a a a a a a a a a a a a a a a a a a =++--- Then we see that A is nonsingular if and only if det(A) is not zero.We can use the determinants of 2x2 matrices to express det(A). ★Definition Let ()ij A a = be an nxn matrix and let ij M denote the (n-1)x(n-1) matrix obtained from A by deleting the row and column containing ij a . The determinant of ij M is called the minor of ij a . We define the cofactor ij A of ij a by(1)det()i j ij ij A M +=-111213212223313233a a a a a a a a a★Definition The determinant of an nxn matrix A, denoted det(A), is a scalar associated with the matrix A that is defined inductively as follows:111111212111 if 1det() if 1n n a n A a A a A a A n =⎧=⎨+++>⎩ (expansion along the first column of A)where 111(1)det()j j j A M +=-, j=1, 2, …, n are the cofactors associated with the entries in the first column of A.1.2 Determinants of Triangular matricesProof If A is upper triangular, then it is easy to prove by induction. If A is lower triangular, then1111212111111det() = nn n i i i A a A a A a A a A ==+++∑It is sufficient to show that 10j A = for j>1. Notice that 1j A is still upper triangular and has one row of all zeros, and the diagonal contains at least one element of zero. By induction, the statement is true.*000**00***0****⎛⎫ ⎪ ⎪ ⎪ ⎪⎝⎭Det(I)=1. If A is triangular, then det()det()T A A =§2. Properties of DeterminantsNew words and phrasesProperty 性质Swap 交换Induction 归纳法Invertibility 可逆性2.1 Effects of Row Operations on the Values of the Determinants(1) Row Operations of Type I:ProofStep 1: Prove the theorem under the assumption that column 1 is swapped with column k, for k>1. If this is proved, then swapping column k and column k’will be equivalent to performing three swaps: first swapping column 1 and column k, then swapping column 1 and column k’, and finally swapping column 1 and column k.The proof is by induction on n. The base case n=1 is completely trivial.( Or, if you prefer, you may take n=2 to be the base case, and thetheorem is easily proven using the formula for the determinant of a 2x2 matrix.) Suppose that the statement is true for (n-1)x(n-1) matrices.111112111111det() = nk n n i i i A a A a A a A a A ==++++∑11111111111det() = n k k n n i i i B b B b B b B b B ==++++∑B=EA= 1232122232(1)1(1)2(1)3(1)1112131123k k k kn n k k k k n n n n n nn a a a a a a a a a a a a a a a a a a a a ----⎛⎫⎪ ⎪⎪⎪⎪ ⎪⎪⎪ ⎪⎝⎭A=11121312122232(1)1(1)2(1)3(1)123123n n k k k k n k k k kn n n n nn a a a a a a a a a a a a a a a a a a a a ----⎛⎫⎪ ⎪⎪⎪⎪ ⎪⎪⎪ ⎪⎝⎭If {}1,i k ∉, then 11i i B A =- because 1A i M and 1Bi M are the sameexcept for two rows being swapped (交换)and 11i i b a =.For i=1 and i=k111k b a =, 11211111111(1)det()det()(1)det()(1)det()B B k A k A k k B M M M M +-=-==-=-(total of k-2 swaps)11111111111(1)det()(1)det()k A k A k k k k k k b B a M a M a A +=-=--=-,111k b a =11211111111111111111(1)det()(1)(1)det()det()k B k k A A k k k b B a M a M a M a A ++-=-=--=-=-(2) Row Operations of Type IIProof Use induction (归纳法)on n. If B=EA, then 11B Ai i M aM =for k is not equal to k.1111111111(1)det()(1)det()i B i A i i i i i i i i b B b M a M a A α++=-=-=For i=k, 1111111111(1)det()(1)det()i B i A k k i k i k i k b B a M a M a A ααα++=-=-=(3) Row Operations of Type IIIProof: Use induction to show that 111111i i i i i i c C a A b B =+ for i=1, 2, …, n.2.2 Determinants and invertibilityProof 21k A E E EU = . Use induction to show that 21det()det()det()det()det()k A E E E U = , where U is upper triangular. det(A) is zero if and only if its reduced echelon form U has a determinant of zero. If A is singular, then U must have a row of all zeros and hence det(U)=0(U is also triangular). If A is nonsingular, then U=I, det(A) is not zero.2.3 Determinants of Products of MatricesProof: If det(B)=0, then B is singular, then AB is also singular. So both sides are equal.If det(A)=0 and det(B) is not zero, then AY=0 has a nontrivial solution Y . ABX=0 has a nontrivial solution 1X B Y -=, so AB is singular. det(AB)=0.If det(A) is not zero, the A is row equivalent to I. 21k A E E E = .Then by the result above, the theorem is proven.2.4 Determinants of TransposesProof Express A in row echelon form U.21k A E E EU = , where U is upper triangular and E ’s are elementary matrices.12T T T T T k A U E E E =For triangular matrices and elementary matrices B, we havedet()det()T B B =2.5 Effects of Column Operations on the Values of the Determinants Since det(AE)=det(A)det(E), we have(1) Interchanging two columns of a matrix changes the sign of thedeterminant.(2) Multiplying a single column of a matrix by a scalar has theeffect of multiplying the value of the determinant by that scalar. (3) Adding a multiple of one column to another does not changethe value of the determinant.2.6 Cofactor expansion along any column or rowProof: LetA= 11121311123123123j n i i i ij in k k k kj kn n n n nj nn a a a a a a a a a a a a a a a aa a a a ⎛⎫⎪⎪ ⎪⎪⎪ ⎪⎪⎪ ⎪⎝⎭B=11213111231231231j n ij i i i in kj k k k kn njn n n nn a a a a a a a a a a a a a a a a a a a a ⎛⎫⎪⎪ ⎪⎪⎪ ⎪⎪⎪ ⎪⎝⎭Then det(A)=-det(B)1111det()(1)det()ni B i i i B b M +==-∑111(1)det()n i B ij i i a M +==-∑ (comparethe difference between 1B i M and Aij M )121(1)(1)det()ni j A ij ij i a M +-==--∑11(1)det()nni jAij ijij ij i i a M a A +===--=-∑∑Hence, 1det()det()nij ij i A B a A ==-=∑.This proves that det(A) can be expanded along any column of A. Sincedet()det()T A A =, det(A) can be expanded along any row of A.AssignmentHand in: 11, 12, 15, 16, 17, Not required: 19, 20,§3. Cramer’s RuleNew words and phraseCramer’s rule 克莱姆法则Adjoint matrix 伴随矩阵Expand 展开In this section, we learn a method for computing the inverse of a nonsingular matrix A using determinants. We also learn a method for solving AX=B using determinants. Both methods are dependent on the following lemma.Proof If i=j, then it is obvious. If i is not equal to j, then consider a matrix obtained by replacing the jth row of A by the ith row of A and expand along the jth row.B= 11121311123123123k n i i i ik in i i i ik in n n n nknn a a a a a a a a a a a a a a a aa a a a ⎛⎫⎪ ⎪ ⎪⎪ ⎪ ⎪⎪ ⎪ ⎪⎝⎭, det(B)=0 Let A be an nxn matrix, we define a new matrix called the adjoint of AadjA=11121311123123123Tj n i i i ij in k k k kj kn n n n nj nn A A A A A A A A A A A A A A A A A A A A ⎛⎫ ⎪⎪ ⎪⎪ ⎪ ⎪ ⎪ ⎪ ⎪⎝⎭=1121311123123n jj j nj nnn nn A A A A A A A A A A A A ⎛⎫⎪ ⎪⎪⎪ ⎪ ⎪⎪ ⎪ ⎪⎝⎭To form the adjoint, we must replace each term by its cofactor and then transpose the resulting matrix.By the lemma above, we obtainA(adjA)=(adjA)A=det(A)IIf A is nonsingular, det(A) is not zero, thus, 11adj det()A A A -=Cramer ’s rule gives a convenient method for expressing the solution to an nxn system of linear equations in terms of determinants. Proof11()det()X A B adjA B A -==()1112131111232121123123n i i i i in nj s sj jjnj s k k k k kn n n n n n nn a a a b a b a a a b a bA b A A A A a a a b a b aa ab a =⎛⎫⎪⎪⎛⎫⎪ ⎪ ⎪⎪=== ⎪ ⎪ ⎪⎪⎪⎝⎭⎪ ⎪⎝⎭∑A Summary on determinants → Definition of determinants → Determinants of triangular matrices → Effects of row operations on determinants → Determinants and Nonsingularity → Determinants of products of matrices → Determinants of transposes→ Effects of column operations on the values of determinants → Expansion along any row or column → Cramer ’s rule→ Another way to calculate the inverse of a matrixAssignmentHand in: 6, 7, 8, 10, 11, 12, 13 Not required: 14。

线性代数英文课件4.1

线性代数英文课件4.1
Math 1229A/B
Unit 10: Determinants
(text reference: Section 4.1)
c V. Olds 2010
128
Unit 10
10
Determin are a special class of matrices. We have already seen one instance of a concept which is defined only for square matrices — the inverse matrix. That is, only a square matrix may have an inverse. In this unit we will (begin to) learn about another concept which is defined only for square matrices — the determinant of a matrix.
Unit 10 d, then det A = ad − cb. That is, if A is a 2 × 2 matrix, then det A = a11 a22 − a21 a12 . Example 10.1. Find the determinants of the following matrices: 1 2 2 0 (a) A = [5] (b) B = (c) C = 3 4 1 3
The number which is the determinant of a square matrix measures a certain characteristic of the matrix. In a more advanced study of matrix algebra, this characteristic is used for various purposes. In this course, the only way in which we will use this number is in its connection to the existence of the inverse of the matrix, and through that it’s application to SLE’s in which the coefficient matrix is a square matrix. For these purposes, what will matter to us is whether or not this number, the determinant of the matrix, is 0. But of course, in order to determine whether or not the determinant of a particular matrix is 0, we need to know how to calculate that number. Calculating the determinant of a square matrix is somewhat complicated. The definition is recursive, meaning that the calculation is defined in a straightforward way for small matrices, and then for larger matrices, the determinant is defined as being a calculation involving the determinants of smaller matrices, which are certain submatrices of the matrix. We could express this recursive definition of the determinant of a square matrix of order n as applying for all n ≥ 2, specifically defining only the determinant of a square matrix of order 1, i.e. a (square) matrix containing only a single number. However, the calculation for a 2 × 2 matrix is very straightforward — easier to think of as a special definition all on its own — so instead we use specific definitions for n = 1 and n = 2, and then define the determinant of a square matrix of order n > 2 in terms of determinants of submatrices of order n − 1, which are found by expressing them in terms of determinants of successively smaller submatrices until we get down to submatrices of order 2. The calculation of det A as defined in this way, when A is a square matrix of order n > 2, is not really as complicated as it will look. It’s just a matter of applying a certain formula carefully, as many times as necessary until we have expressed det A in terms of the determinants of 2 × 2 matrices. Those determinants are easy to find. So we start by defining det A for square matrices of order 1 and of order 2. When A is a 1 × 1 matrix, i.e. a matrix containing only one number, finding the particular number det A which is associated with that matrix is trivial. That number is the only number around — the single number that’s in the matrix. For a square matrix of order 2, i.e. a matrix containing 4 numbers arranged in a square, we have to do a little more work. But it’s a simple calculation. In fact, we can think of the calculation as “down products minus up products”, which is something we have seen before. But this time there’s only one down product, and only one up product, so it’s actually just “down product minus up product”.

线性代数英文课件:ch2-2 Inverse of a Matrix

线性代数英文课件:ch2-2 Inverse of a Matrix

Example 5 For a system of linear equations:
2 x1 x2 3x3 1
2
x1
x2
x3
5
4 x1 x2 2 x3 5
It can also be denoted as:
Ax b, Where
2 1 3 x1 1
A
2 4
1 1
1 2
A1 AXBB1 A1CB1
That is, X=A-1CB-1
3
2
So
X
3 2
1
3 5
2
1
2
2
1 0
2
0 1
3 5
1
1
2
2
0 2
2 1
0 1
3 5
10
1 2
11 3
4
4 1
By premultiplying (左乘)a matrix A by a matrix B, we mean multiplying A on the left by B, that is, forming the product BA.
Solution A 2 0, and B 1 0
so A and B is invertible, and
3
2
A1 3
2
1
2
3
1
5 2
2
1
B1
3 5
1 2
Premultiplying matrix equation AXB C by matrix
A-1, and postmultiplying B-1 ,
Suppose B,C are both inverse matrices of A, that is,

线性代数英文课件4.2

线性代数英文课件4.2

144
Unit 11
Theorem 11.1. If matrix B is obtained from square matrix A by multiplying one row or column of A by some non-zero scalar c, then det B = c(det A).
Math 1229A/B
Unit 11: Properties of Determinants
(text reference: Section 4.2)
c V. Olds 2010
Unit 11
143
11
Properties of Determinants
In this section, we learn more about determinants. First, we observe some properties of determinants that allow us to calculate determinants more easily. We examine the effects on the determinant when the various kinds of elementary row operations are performed, so that we can easily see how the determinants of the various row-equivalent matrices are related to one another as we perform these operations. This allows us to calculate the determinant of a matrix by row-reducing the matrix (a procedure we already know well) to obtain a matrix whose determinant is easily calculated using facts we’ve already learnt in the previous section. We also learn some useful properties which allow us to calculate the determinant of a matrix from the determinants of one or more other matrices whose determinants we may already know. And finally we examine the relationship between determinants and inverses, which allows us to relate determinants to systems of linear equations, using what we already know about the implications of the existence of the inverse of a matrix for the number of solutions to the SLE which has that matrix as its coefficient matrix. Throughout all of this, of course, it is important to remember that we are only dealing with square matrices when we talk about determinants. That is, it is only for a square matrix that the characteristic “the determinant of the matrix” is defined. First, let’s think about what effect multiplying some row of a matrix by a non-zero scalar will have on the determinant. That is, let’s think about the relationship between det A and det B if matrix B is identical to matrix A except that one of the rows in B is the corresponding row of A multiplied by some c = 0. So suppose we have some n × n matrix A = [aij ]. Let B = [bij ] be the matrix obtained by multiplying one row, row k , by some non-zero scalar c. Then we know that bkj = cakj and bij = aij for all i = k . We can calculate det B by expanding along row k . Notice that when we form submatrices of B by deleting row k (and also some column of B ), the one row that’s different than in matrix A is deleted, so that in the submatrix of B obtained, each entry is just the corresponding entry from matrix A and therefore the entire submatrix of B is simply the corresponding submatrix of A. That is, we have Bkj = Akj . So when we expand along row k we get:

线性代数 英文讲义

线性代数 英文讲义

Chapter 3 Vector SpacesThe operations of addition and scalar multiplication are used in many diverse contexts in mathematics. Regardless of the context, however, these operations usually obey the same set of algebraic rules. Thus a general theory of mathematical systems involving addition and scalar multiplication will have applications to many areas in mathematics.§1. Examples and DefinitionNew words and phrasesVector space 向量空间Polynomial 多项式Degree 次数Axiom 公理Additive inverse 加法逆1.1 ExamplesExamining the following sets:(1) V=2R : The set of all vectors 12x x ⎛⎫ ⎪⎝⎭ (2) V=m n R ⨯: The set of all mxn matrices(3) V=[,]a b C : The set of all continuous functions on the interval [,]a b(4) V=n P : The set of all polynomials of degree less than n.Question 1: What do they have in common?We can see that each of the sets, there are two operations: addition and multiplication, i.e. with each pair of elements x and y in a set V, we can associate a unique element x+y that is also an element in V, and with each element x and each scalar α, we can associate a unique element αin V. And the operations satisfy some algebraic rules.xMore generally, we introduce the concept of vector space. .1.2 Vector Space Axioms★Definition Let V be a set on which the operations of addition and scalar multiplication are defined. By this we mean that, with each pair of elements x and y in a set V, we can associate a unique element x+y that is also an element in V, and with each element x and each scalar α, we can associate a unique element xαin V.The set V together with the operations of addition and scalar multiplication is said to form a vector space if the following axioms are satisfied.A1. x+y=y+x for any x and y in V.A2. (x+y)+z=x+(y+z) for any x, y, z in V.A3. There exists an element 0 in V such that x+0=x for each x in V.A4. For each x in V, there exists an element –x in V such that x+(-x)=0. A5. α(x+y)= αx+αy for each scalar αand any x and y in V.A6. (α+β)x=αx+βx for any scalars αandβand any x in V.A7. (αβ)x=α(βx) for any scalars αandβand any x in V.A8. 1x=x for all x in V.From this definition, we see that the examples in 1.1 are all vector spaces. In the definition, there is an important component, the closure properties of the two operations. These properties are summarized as follows:C1. If x is in V and αis a scalar, then αx is in VC2. If x, y are in V, then x+y is in V.An example that is not a vector space:Let {}=, on this set, the addition and W a a(,1)| is a real numbermultiplication are defined in the usually way. The operation + and scalar multiplication are not defined on W. The sum of two vector is not necessarily in W, neither is the scalar multiplication. Hence, W together with the addition and multiplication is not a vector space.In the examples in 1.1, we see that the following statements are true.Theorem 3.1.1 If V is a vector space and x is any element of V, then (i) 0x=0(ii) x+y=0 implies that y=-x (i.e. the additive inverse is unique). (iii)(-1)x=-x.But is this true for any vector space?Question: Are they obvious? Do we have to prove them?But if we look at the definition of vector space, we don’t know what the elements are, how the addition and multiplication are defined. So theorem above is not very obvious.Proof(i)x=1x=(1+0)x=1x+0x=x+0x, (A6 and A8)Thus –x+x=-x+(x+0x)=(-x+x)+0x (A2)0=0+0x=0x (A1, A3, and A4)(ii)Suppose that x+y=0. then-x=-x+0=-x+(x+y)Therefore, -x=(-x+x)+y=0+y=y(iii)0=0x=(1+(-1))x=1x+(-1)x, thusx+(-1)x=0It follows from part (ii) that (-1)x=-xAssignment for section 1, chapter 3Hand in: 9, 10, 12.§2. SubspacesNew words and phrasesSubspace 子空间Trivial subspace 平凡子空间Proper subspace 真子空间Span 生成Spanning set 生成集Nullspace 零空间2.1 DefinitionGiven a vector space V , it is often possible to form another vector space by taking a subset of V and using the operations of V . For a new subset S of V to be a vector space, the set S must be closed under the operations of addition and scalar multiplication.Examples (on page 124)The set 1212|2x S x x x ⎧⎫⎛⎫⎪⎪==⎨⎬ ⎪⎪⎪⎝⎭⎩⎭together with the usual addition and scalar multiplication is itself a vector space .The set S=| and are real numbers a a a b b ⎧⎫⎛⎫⎪⎪ ⎪⎨⎬ ⎪⎪⎪ ⎪⎝⎭⎩⎭together with the usual addition and scalar multiplication is itself a vector space.★Definition If S is a nonempty subset of a vector space V , and S satisfies the following conditions:(i) αx ∈S whenever x ∈S for any scalar α(ii) x+y ∈S whenever x ∈S and y ∈Sthen S is said to be a subspace (子空间)of V .A subspace S of V together with the operations of addition and scalar multiplication satisfies all the conditions in the definition of a vector space. Hence, every subspace of a vector space is a vector space in its own right. Trivial Subspaces and Proper SubspacesThe set containing only the zero element forms a subspace, called zero subspace, and V is also a subspace of V . Those two subspaces are called trivial subspaces of V . All other subspaces are referred to as proper subspaces.Examples of Subspaces(1) the set of all differentiable functions on [a,b] is a subspace of [,]a b C(2) the set of all polynomials of degree less than n (>1) with the property p(0) form a subspace of n P .(3) the set of matrices of the form a b b c ⎛⎫⎪-⎝⎭ forms a subspace of 22R ⨯. (4) the set of all mxm symmetric matrices forms a subspace of m m R ⨯(5) the set of all mxm skew-symmetric matrices form a subspace of m m R ⨯2.2 The Nullspace of a MatrixLet A be an mxn matrix, and{}()|,0n N A X X R AX =∈=.Then N(A) form a subspace of n R . The subspace N(A) is called the nullspace of A.The proof is a straightforward verification of the definition.2.3 The Span of a Set of VectorsIn this part, we give a method for forming a subspace of V with finite number of vectors in V .Given n vectors 12n v ,v ,,v in a vector space of V , we can form a newsubset of V as the following.{}12n 1122n n Span(v ,v ,,v )v v v |' are scalars i s αααα=+++It is easy to show that this set forms a subset of V. We call this subspace the span of 12n v ,v ,,v , or the subspace of V spanned by12n v ,v ,,v .Theorem 3.2.1 If 12n v ,v ,,v are elements of a vector space of V , then{}12n 1122n n Span(v ,v ,,v )v v v |' are scalars i s αααα=+++ is a subspace of V .For example, the subspace spanned by two vectors 100⎛⎫ ⎪ ⎪ ⎪⎝⎭and010⎛⎫ ⎪ ⎪ ⎪⎝⎭is the subspace consisting of the elements 120x x ⎛⎫ ⎪ ⎪ ⎪⎝⎭.2.4 Spanning Set for a Vector Space★Definition If 12n v ,v ,,v are vectors of V andV=12n Span(v ,v ,,v ), then the set {}12n v ,v ,,v is called a spanning set(生成集)for V .In other words, the set {}12n v ,v ,,v is a spanning set for V if andonly if every element can be written as a linear combination of 12n v ,v ,,v .The spanning sets for a vector space are not unique.Examples (Determining if a set spans for 3R )(a) (){}1231,2,3,T e e e (b) ()()(){}1,1,1,1,1,0,1,0,0T T T (c) ()(){}1,0,1,0,1,0T T (d) ()()(){}1,2,4,2,1,3,4,1,1T T T -To do this, we have to show that every vector in 3R can be written as a linear combination of the given vectors.Assignment for section 2, chapter 3 Hand in: 6, 8, 13, 16, 17, 18, 20Not required: 21Chapter 3---Section 3 Linear Independence§3. Linear IndependenceNew words and phrasesLinear independence 线性无关性Linearly independent 线性无关的Linear dependence 线性相关性Linearly dependent 线性相关的3.1 MotivationIn this section, we look more closely at the structure of vector spaces. We restrict ourselves to vector spaces that can be generated from a finite set of elements, or vector spaces that are spans of finite number of vectors. V=Span(v,v,,v)12nThe set {}v,v,,v is called a generating set or spanning set(生成集).12nIt is desirable to find a minimal spanning set. By minimal, we mean a spanning set with no unnecessary element.To see how to find a minimal spanning set, it is necessary to consider how the vectors in the collection depend on each other. Consequently we introduce the concepts of linear dependence and linear independence. These simple concepts provide the keys to understanding the structure of vector spaces.Give an example in which we can reduce the number of vectors in a spanning set.Consider the following three vectors in 3R.11x 12⎛⎫ ⎪=- ⎪ ⎪⎝⎭ 22x 31-⎛⎫ ⎪= ⎪⎪⎝⎭31x 38-⎛⎫⎪= ⎪ ⎪⎝⎭ These three vectors satisfy(1) 312x =3x +2xAny linear combination of 123x ,x ,x can be reduced to a linear combination of 12x ,x . Thus S= Span(123x ,x ,x )=Span(12x ,x ). (2) 1233x +2x +(1)x 0-= (a dependency relation)Since the three coefficients are nonzero, we could solve for any vector in terms of the other two. It follows thatSpan(123x ,x ,x )=Span(12x ,x )=Span(13x ,x )=Span(23x ,x )On the other hand, no such dependency relationship exists between12x and x . In deed, if there were scalars 1c and 2c , not both 0, such that(3) 1122c x +c x 0=then we could solve for one of the two vectors in terms of the other. However, neither of the two vectors in question is a multiple of the other. Therefore, Span(1x ) and Span(2x ) are both proper subspaces of Span(12x ,x ), and the only way that (3) can hold is if 12c =c =0.Observations: (I)If 12n v ,v ,,v span a vector space V and one of these vectors can be written as a linear combination of the other n-1 vectors, then those n-1 vectors span V .(II) Given n vectors 12n v ,v ,,v , it is possible to write one of thevectors as a linear combination of the other n-1 vectors if and only if there exist scalars 12n c ,c ,,c not all zero such that1122n n v v v 0c c c +++=Proof of I: Suppose that n v can be written as a linear combination of the vectors 12n-1v ,v ,,v .Proof of II: The key point here is that there at least one nonzero coefficient.3.2 Definitions★Definition The vectors 12n v ,v ,,v in a vector space V are said to be linearly independent (线性独立的) if1122n n v v v 0c c c +++=implies that all the scalars 12n c ,c ,,c must equal zero. Example: 12n e ,e ,,e are linearly independent.Definition The vectors 12n v ,v ,,v in a vector space V are said to be linearly dependent (线性相关的)if there exist scalars 12n c ,c ,,c not all zero such that1122n n v v v 0c c c +++=.Let 12n e ,e ,,e ,x be vector in n R . Then 12n e ,e ,,e ,x are linearlydependent.If there are nontrivial choices of scalars for which the linear combination 1122n n v v v c c c +++ equals the zero vector, then 12n v ,v ,,vare linearly dependent. If the only way the linear combination1122n n v v v c c c +++ can equal the zero vector is for all scalars 12n c ,c ,,cto be 0, then 12n v ,v ,,v are linearly independent.3.3 Geometric InterpretationThe linear dependence and independence in 2R and 3R .Each vector in 2R or 3R represents a directed line segment originated at the origin.Two vector are linearly dependent in 2R or 3R if and only if two vectors are collinear. Three or more vector in 2R must be linearly dependent.Three vectors in 3R are linearly dependent if and only if three vectors are coplanar. Four or more vectors in 3R must be linearly dependent.3.4 Theorems and ExamplesIn this part, we learn some theorems that tell whether a set of vectors is linearly independent.Example: (Example 3 on page 138) Which of the following collections of vectors are linearly independent?(a) (){}1231,2,3,Te e e(b) ()()(){}1,1,1,1,1,0,1,0,0TTT(c) ()(){}1,0,1,0,1,0T T(d) ()()(){}1,2,4,2,1,3,4,1,1T TT-The problem of determining the linear dependency of a collection of vectors in m R can be reduced to a problem of solving a linear homogeneous system.If the system has only the trivial solution, then the vectors are linearly independent, otherwise, they are linearly dependent, We summarize the this method in the following theorem:Theorem n vectors 12n x ,x ,,x in m R are linearly dependent if the linear system Xc=0 has a nontrivial solution, where 12n X=(x ,x ,,x ). Proof: 1122n n c x +c x +c x 0+= ⇔ Xc=0.Theorem 3.3.1 Let 12n x ,x ,,x be n vectors in n R and let12n X=(x ,x ,,x ). The vectors 12n x ,x ,,x will be linearly dependent if andonly if X is singular. (the determinant of X is zero)Proof: Xc=0 has a nontrivial solution if and only X is singular.Theorem 3.3.2 Let 12n v ,v ,,v be vectors in a vector space V. A vector v in Span(12n v ,v ,,v ) can be written uniquely as a linear combination of12n v ,v ,,v if and only if 12n v ,v ,,v are linearly independent.(A vector v in Span(12n v ,v ,,v ) can be written as two different linear combinations of 12n v ,v ,,v if and only if 12n v ,v ,,v are linearly dependent.)(Note: If---sufficient condition ; Only if--- necessary condition ) Proof: Let v ∈ Span(12n v ,v ,,v ), then 1122n n v v v v ααα=+++Necessity: (contrapositive law for propositions)Suppose that vector v in Span(12n v ,v ,,v ) can be written as two different linear combination of 12n v ,v ,,v , then prove that 12n v ,v ,,v are linearly dependent. The difference of two different linear combinations gives a dependency relation of 12n v ,v ,,vSuppose that 12n v ,v ,,v are linearly dependent, then there exist twodifferent representations. The sum of the original relation plus the dependency relation gives a new representation.Assignment for section 3, chapter 3Hand in : 5, 11, 13, 14, 15, ; Not required: 6, 7, 8, 9, 10,§4. Basis and DimensionNew words and phrasesBasis 基Dimension 维数Minimal spanning set 最小生成集Standard Basis 标准基4.1 Definitions and TheoremsA minimal spanning set for a vector space V is a spanning set with no unnecessary elements (i.e., all the elements in the set are needed in order to span the vector space). If a spanning set is minimal, then its elements are linearly independent. This is because if they were linearly dependent, then we could eliminate a vector from the spanning set, the remaining elements still span the vector space, this would contradicts the assumption of minimality. The minimal spanning set forms the basic building blocks for the whole vector space and, consequently, we say that they form a basis for the vector space(向量空间的基).★Definition The vectorsv,v,,v form a basis for a vector space V12nif and only if(i)v,v,,v are linearly independent12n(ii)v,v,,v span V.12nA basis of V actually is a minimal spanning set(最小张成集)for V.We know that spanning sets for a vector space are not unique. Minimal spanning sets for a vector space are also not unique. Even though, minimal spanning sets have something in common. That is, the number of elements in minimal spanning sets.We will see that all minimal spanning sets for a vector space have the same number of elements.Theorem 3.4.1 If {}12n v ,v ,,v is a spanning set for a vector space V , then any collection of m vectors in V , where m>n, is linearly dependent.Proof Let {}12m u ,u ,,u be a collection of m vectors in V . Then each u i can be written as a linear combination of 12n v ,v ,,v .i 1122n u =v +v ++v i i in a a aA linear combination 1122m u + u u m c c c ++can be written in the formnnn11j 22j j j=1j=1j=1v + v v j j m nj c a c a c a ++∑∑∑Rearranging the terms, we see that 1122m j 11u + u u ()v nmm ij i j i c c c a c ==++=∑∑Then we consider the equation 1122m m c u + c u c u 0++= to see if we canfind a nontrivial solution (12n c ,c ,,c ). The left-hand side of the equation can be written as a linear combination of 12n v ,v ,,v . We show that thereare scalars 12n c ,c ,,c , not all zero, such that 1122m m c u + c u c u 0++=.Here, we have to use a theorem: A homogeneous linear system must have a nontrivial solution if it has more unknowns than equations. Corollary 3.4.2 If {}12n v ,v ,,v and {}12m u ,u ,,u are both bases for a vector space V , then n=m. (all the bases must have the same number of vectors.)Proof Since 12n v ,v ,,v span V , if m>n, then {}12m u ,u ,,u must be linearly dependent. This contradicts the hypothesis that {}12m u ,u ,,u is linearly independent. Hence m n ≤. By the same reasoning, n m ≤. So m=n.From the corollary above, all the bases for a vector space have the same number of elements (if it is finite). This number is called the dimension of the vector space.★Definition Let V be a vector space. If V has a basis consisting of n vectors, we say that V has dimension n (the dimension of a vector space of V is the number of elements in a basis.) The subspace {0} of V is said to have dimension 0. V is said to be finite-dimensional if there is a finite set of vectors that spans V; otherwise, we say that V is infinite-dimensional.Recall that a set of n vector is a basis for a vector space if two conditions are satisfied. If we know that the dimension of the vector space is n, then we just need to verify one condition.Theorem 3.4.3 If V is a vector space of dimension n>0:I.Any set of n linearly independent vectors spans V (so this setforms a basis for the vector space).II.Any n vectors that span V are linearly independent (so this set forms a basis for the vector space).ProofProof of I: Suppose thatv,v,,v are linearly independent and v is12nany vector in V. Since V has dimension n, the collection of vectorsv,v,,v,v must be linearly dependent. Then we show that v can be 12nexpressed in terms ofv,v,,v.12nProof of II: Ifv,v,,v are linearly dependent, then one of v’s can12nbe written as a linear combination of the other n-1 vectors. It follows that those n-1 vectors still span V. Thus, we will obtain a spanning set with k<n vectors. This contradicts dimV=n (having a basis consisting of n vectors).Theorem 3.4.4 If V is a vector space of dimension n>0:(i) No set of less than n vectors can span V .(ii)Any subset of less than n linearly independent vectors can be extended to form a basis for V .(iii) Any spanning set containing more than n vectors can be pareddown (to reduce or remove by or as by cutting) to form a basis for V . Proof(i): If there are m (<n) vectors that can span V , then we can argue that dimV<n. this contradicts the assumption.(ii) We assume that 12k v ,v ,,v are linearly independent ( k<n). Then Span(12k v ,v ,,v ) is a proper subspace of V . There exists a vector1v k + that is in V but not in Span(12k v ,v ,,v ). We can show that12k v ,v ,,v ,1v k + must be linearly independent. Continue this extensionprocess until n linearly independent vectors are obtained.(iii) The set must be linearly independent. Remove (eliminate) one vector from the set, the remaining vectors still span V . If m-1>n, we can continue to eliminate vectors in this manner until we arrive at a spanning set containing n vectors.4.2 Standard BasesThe standard bases(标准基)for n R, m nR .Although the standard bases appear to be the simplest and most natural to use, they are not the most appropriate bases for many applied problems. Once the application is solved in terms of the new basis, it is a simple matter to switch back and represent the solution in terms of the standard basis.Assignment for section 4, chapter 3Hand in : 4, 7, 9,10,12,16,17,18Not required: 11,13,14, 15,§5. Change of BasisNew words and phrasesTransition matrix 过渡矩阵5.1 MotivationMany applied problems can be simplified by changing from one coordinate system to another. Changing coordinate systems in a vector space is essentially the same as changing from one basis to another. For example, in describing the motion of a particle in the plane at a particular time, it is often convenient to use a basis for 2R consisting of a unit tangent vector t and a unit normal vector n instead of the standard basis. In this section we discuss the problem of switching from one coordinate system to another. We will show that this can be accomplished by multiplying a given coordinate vector x by a nonsingular matrix S.5.2 Changing Coordinates in 2RThe standard basis for 2R is 12{e ,e }. Any vector in 2R can be written as a linear combination 12{e ,e }1122x=e +e x x .The scalars 12 and x x can be thought of as the coordinates (坐标) of x with respect to the standard basis. Actually, for any basis 12{u ,u } for 2R , a given vector x can be represented uniquely as a linear combination1122x=u +u c cThe scalars 12 and c c are the coordinates of x with respect to the basis12{u ,u }. Let us denote the ordered bases by [12e ,e ] and [12u ,u ]. 12(,)T x x iscalled the coordinate vector of x with respect to [12e ,e ],12(,)T c c the coordinate vector of x with respect to [12u ,u ].We wish to find the relationship between the coordinate vectors x and c.11122122x=e +e (e ,e )x x x x ⎛⎫= ⎪⎝⎭11122122x=u +u (u ,u )c c c c ⎛⎫= ⎪⎝⎭11121222(e ,e )(u ,u )x y x y ⎛⎫⎛⎫= ⎪ ⎪⎝⎭⎝⎭111222(u ,u )x c x c ⎛⎫⎛⎫= ⎪ ⎪⎝⎭⎝⎭Or simply, x=UcThe matrix U is called the transition matrix (过渡矩阵)from the ordered basis [12u ,u ] to [12e ,e ].The matrix U is nonsingular since 12u ,u are linearly independent. By the formula x=Uc, we see that if given a vector 1122u +u c c , its coordinate vector with respect to [12e ,e ] is given by Uc.Conversely if given a vector 12(,)T x x , then its coordinate vector with respect to [12u ,u ] is given by -1U xNow let us consider the general problem of changing from one basis[12v ,v ] to another basis [12u ,u ]. In this case, we assume that 112212x v +v (v ,v )c c c == and 112212x u +u (u ,u )d d d == ThenVc=UdIt follows that1d U Vc -=.Thus, given a vector x in 2R and its coordinate vector c with respect to the ordered basis [12v ,v ], to find the coordinate vector of x with respect to the new basis [12u ,u ], we simply multiply c by the transition matrix1S U V -=.where 12V=(v ,v ) and 12U=(u ,u ) Example (example 4 on page 156) Given two bases15v 2⎛⎫= ⎪⎝⎭, 27v 3⎛⎫= ⎪⎝⎭and 13u 2⎛⎫= ⎪⎝⎭, 21u 1⎛⎫= ⎪⎝⎭(1) Find the coordinate vectors c and d of the vector ()x=12,5Twith respect to the bases [12v ,v ] and [12u ,u ], respectively.12[e ,e ]12[v ,v ]12[u ,u ]1U -UV1U V -(2) And find the transition matrix S corresponding to the change of basis from [12v ,v ] to [12u ,u ]. (3) Check that d=Sc.Solution: The coordinate vector with respect to the basis [12v ,v ] is15712371212352551--⎛⎫⎛⎫⎛⎫⎛⎫⎛⎫== ⎪ ⎪ ⎪⎪ ⎪-⎝⎭⎝⎭⎝⎭⎝⎭⎝⎭The coordinate vector with respect to the basis [12u ,u ] is13112111272152359--⎛⎫⎛⎫⎛⎫⎛⎫⎛⎫== ⎪ ⎪ ⎪⎪ ⎪--⎝⎭⎝⎭⎝⎭⎝⎭⎝⎭The transition matrix corresponding to the change of the basis from [12v ,v ] to [12u ,u ] isS=131571157342123232345--⎛⎫⎛⎫⎛⎫⎛⎫⎛⎫== ⎪ ⎪ ⎪⎪ ⎪---⎝⎭⎝⎭⎝⎭⎝⎭⎝⎭Check that73419451⎛⎫⎛⎫⎛⎫= ⎪ ⎪⎪---⎝⎭⎝⎭⎝⎭.The discussion of the coordinate changes in 2R can be easily generalized to that in n R . We summarize it as follows.12n [v ,v ,,v ]1U -UV1U V -12n [e ,e ,,e ]12n [u ,u ,,u ]where 12V=(v ,v ,,v )n and 12U=(u ,u ,,u )nInterpretation: if x=12(,,,)T n x x x is a vector in n R , then the coordinate vector c of x with respect to 12[v ,v ,,v ]n is given by x=Vc, (c=-1V x ), the coordinate vector d of x with respect to 12[u ,u ,,u ]n is given by x=Ud, (d=-1U x ). The transition matrix from 12[v ,v ,,v ]n to 12[u ,u ,,u ]n is given by S=1U V -.5.3 Change of Basis for a General Vector Space★Definition (coordinate) Let V be a vector space and let E=[12n v ,v ,,v ] be an ordered basis for V . If v is any element of V , then v can be written in the form121122n n 12n n v v v v [v ,v ,,v ]c c c c c c ⎛⎫ ⎪ ⎪=+++= ⎪ ⎪⎝⎭(this is a formal multiplication since vectors here are not necessarily column vectors in n R ) where 12n c ,c ,,c are scalars. Thus we can associate with each vector v a unique vector c=12n (c ,c ,,c )T in n R . The vector c defined in this way is called theE [v]. The i c ’s are called coordinates of v relative to E . Transition MatrixLet E=[12n w ,w ,,w ], F=[12n v ,v ,,v ] be two ordered bases for V .Then11112121212122221122w v v v w v v v w v v v n n n n n n n nn ns s s s s s s s s =+++=+++=+++Formally, this change of bases can be written as111212122212n 12n 12[w ,w ,,w ][v ,v ,,v ]n n n n nn s s s s s s s s s ⎛⎫ ⎪ ⎪= ⎪ ⎪⎝⎭(The multiplication is formal matrix multiplication. If the vector space is the Euclidean space, then the multiplication becomes the actual multiplication.)This is called the change of basis from E=[12n w ,w ,,w ] to F =[12n v ,v ,,v ].A vector v has different coordinate vectors in different bases. Let x=E [v], i.e. 1122n v w +w ++w n x x x = and y=F [v], 1122n v v +v ++v n y y y =, then 1122n 111v ()v +()v ++()v nnnj j j j nj j j j j s x s x s x ====∑∑∑1ni ij j j y s x ==∑In matrix notation, we have y=Sx, whereS=111212122212n n n n nn s s s ss s s s s ⎛⎫ ⎪ ⎪⎪ ⎪⎝⎭This matrix is referred to as the transition matrix corresponding to the change of basis from E=[12n w ,w ,,w ] to F =[12n v ,v ,,v ] S is nonsingular, since Sx=y if and only if1122n n 1122n n w +w ++w v +v ++v x x x y y y =Sx=0 implies that 1122n n w +w ++w 0x x x =. Hence x must be zero. 1S y x -=1S - is the transition matrix corresponding to the change of base from F=[12n v ,v ,,v ] to E=[12n w ,w ,,w ]Any nonsingular matrix can be thought of as a transition matrix. If S is an nxn nonsingular matrix and [12n v ,v ,,v ] is an ordered basis for V , then define [12n w ,w ,,w ] by111212122212n 12n 12[w ,w ,,w ][v ,v ,,v ]n n n n nn s s s s s s s s s ⎛⎫ ⎪ ⎪= ⎪ ⎪⎝⎭Then12nw ,w ,,w arelinearly independent. Suppose that1122n n w +w ++w 0x x x =Then1122n 111()v +()v ++()v 0nnnj j j j nj j j j j s x s x s x ====∑∑∑By the linear independence of 12n v ,v ,,v , it follows that10nij jj s x==∑or , equivalentlySx=0Since S is nonsingular, x must equal zero. Therefore, 12n w ,w ,,w are linearly independent and hence they form a basis for V . The matrix S is the transition matrix corresponding to the change from the ordered basis [12n w ,w ,,w ] to [12n v ,v ,,v ]. Example Let 110u 01⎛⎫= ⎪⎝⎭ 210u 01⎛⎫= ⎪-⎝⎭ 301u 10⎛⎫= ⎪⎝⎭401u 10⎛⎫= ⎪-⎝⎭; 110v 00⎛⎫=⎪⎝⎭ 201v 00⎛⎫= ⎪⎝⎭ 301v 10⎛⎫= ⎪⎝⎭ 410v 01⎛⎫= ⎪-⎝⎭.Find the transition matrix corresponding to the change of base from E=[1234u ,u ,u ,u ] to F =[1234v ,v ,v ,v ]In many applied problems it is important to use the right type of basis for the particular application. In chapter 5 we will see that the key to solving least squared problems is to switch to a special type of basis called an orthonormal basis. In chapter 6 we will consider a number of applications involving the eigenvalues and eigenvectors associated with an nxn matrix A. The key to solving these types of problems is to switch to a basis for n R consisting of eigenvectors of A.Chapter 3---Section 5 Change of Basis Assignment for section 5, chapter 3 Hand in: 6, 7, 8, 11 ,Not required; 9, 10,§6. Row Space and Column SpaceNew words and phrasesRow space 行空间Column space 列空间Rank 秩6.1 DefinitionsWith an mxn matrix A, we can associate two subspaces.Definition If A is an mxn matrix, the subspace of 1n R ⨯ spanned by the row vectors of A is called the row space of A, the subspace of m R spanned by the column vectors of A is called the column space of A.Theorem 3.6.1 Two row equivalent matrices have the same row space. Proof 21kE E E A B =The row vectors of B must be a linear combination of the row vectors of A. Consequently, the row space of B must be a subspace of the row space of A. By the same reasoning, the row space of A is a subspace of the row space of B. So, they are the same.★Definition The rank (秩)of a matrix of A is the dimension of the row space of A.。

高等数学【线性代数】英文版课件1

高等数学【线性代数】英文版课件1
1 2 3 4
dy 2 dx + y = x d2 y = −k2 y dx2 2y 5 d3 y + d 2 + cos x = dx3 dx dy sin dx + tan−1 y = 1
0
Ordinary Differential Equations Lecture Notes
Definition (1.2.3) The order of the highest derivative occurring in a differential equation is called the order of the differential equation. In Example 1.2.2
Ordinary Differential Equations Lecture Notes
School of Physical and Mathematical Sciences Nanyang Technological University
August 2010
Ordinary Differential Equations Lecture Notes
Ordinary Differential Equations Lecture Notes
1.2. Basic Ideas and Terminology
Begin with a very general definition of a differential equation. Definition (1.2.1) A differential equation is an equation involving one or more derivatives of an unknown function. Examples (1.2.2) The following are all differential equations.
  1. 1、下载文档前请自行甄别文档内容的完整性,平台不提供额外的编辑、内容补充、找答案等附加服务。
  2. 2、"仅部分预览"的文档,不可在线预览部分如存在完整性等问题,可反馈申请退款(可完整预览的文档不适用该条件!)。
  3. 3、如文档侵犯您的权益,请联系客服反馈,我们会尽快为您处理(人工客服工作时间:9:00-18:30)。

Ordinary Differential Equations Lecture Notes
Example (1.4.4) The general solution to the differential equation y + ω 2 y = 0, −∞ < x < ∞ where ω is a nonzero constant, is y(x) = C1 cos ωx + C2 sin ωx, where C1 , C2 are arbitrary constants. Solution According to Definition 1.3.5 of the general solution, we have to show the following: (1.4.4) (1.4.3)
Ordinary Differential Equations Lecture Notes
The solution technique for a separable differential equation is given in the following result. Theorem (1.5.2) If p(y) and q(x) are continuous, then Equation (1.5.1) has the general solution p(y) dy = q(x) dx + C, (1.5.2) where C is an arbitrary constant.
Ordinary Differential Equations Lecture Notes
Example (1.5.4)
dy Solve (1 + y2 ) dx = x cos x.
Solution. This differential equation is separable: (1 + y2 ) dy = x cos x dx. Integrating both sides of the differential equation yields (1 + y2 ) dy = x cos x dx + C.
has a unique solution. Notice that y = f (x) satisfies this problem, and so y = f (x) is precisely that unique solution to the initial-value problem (1.4.5).
Solution From Example 1.3.7, the general solution to Equation (1.4.1) is y(x) = e−x + C1 x + C2 , where C1 , C2 are arbitrary constants. We now impose the auxiliary conditions (1.4.2).
1.4. Initial-Value Problem
The unique specification of an applied problem requires more than just a differential equation. Of particular interest is the case of the initial-value problem defined for an nth-order differential equation. Definition (1.4.1) An nth-order differential equation together with n auxiliary conditions of the form y(x0 ) = y0 y (x0 ) = y1 ... y
Ordinary Differential Equations Lecture Notes
1.5. Separable Differential Equations
We provide some solution techniques for determining the exact solution to certain types of differential equations. The simplest differential equations for which a solution technique can be obtained are the so-called separable equations. Definition (1.5.1) A first-order differential equation is called separable if it can be written in the form dy (1.5.1) p(y) = q(x) dx (that is, if we can separate p(y) dy and q(x) dx)
Since f (x) is an arbitrary solution to the differential equation (1.4.3), we can conclude that every solution to (1.4.3) is of the form (1.4.4), that is y(x) = C1 cos ωx + C2 sin ωx and therefore this is the general solution on (−∞, ∞).
Ordinary Differential Equations Lecture e chain rule for derivatives to rewrite Equation (1.5.1) in the equivalent form d p(y) dy = q(x). (1.5.3) dx Indeed, since p(y) = d dy p(y) dy ,
Ordinary Differential Equations Lecture Notes
Setting x = 0 in y(x) = e−x + C1 x + C2 , we see that y(0) = 1 ⇐⇒ 1 = 1 + C2 ⇐⇒ C2 = 0, that is y(x) = e−x + C1 x. Furthermore, differentiating the result yields y (x) = −e−x + C1 . Consequently, y (0) = 4 ⇐⇒ 4 = −1 + C1 ⇐⇒ C1 = 5. Hence y(x) = e−x + 5x is the unique solution to the initial-value problem.
the left-hand side of (1.5.1) can be rewritten as follows p(y) dy d = dx dy p(y) dy dy d = dx dx p(y) dy .
Now integrating both sides of Equation (1.5.3) w.r.t. x yields Equation (1.5.2).
Ordinary Differential Equations Lecture Notes
Remark (1.5.3) In differential form Equation (1.5.1), as noted above, can be written as p(y) dy = q(x) dx, and the general solution (1.5.2) is obtained by integrating the left-hand side w.r.t. y and the right-hand side w.r.t. x. This is the general procedure for solving separable equations.
(n−1)
(x0 ) = yn−1 ,
where y0 , y1 , . . . , yn−1 are constants, is called an initial-value problem.
Ordinary Differential Equations Lecture Notes
Example (1.4.2) Solve the initial-value problem y = e−x y(0) = 1, y (0) = 4. (1.4.1) (1.4.2)
Ordinary Differential Equations Lecture Notes
1
y(x) = C1 cos ωx + C2 sin ωx is a solution to the differential equation (1.4.3) on (−∞, ∞). every solution to (1.4.3) is of the form (1.4.4).
Ordinary Differential Equations Lecture Notes
Consider the function ˜(x) = A cos ωx + y B sin ωx. ω (1.4.6)
This function is of the form (1.4.4), and so, by the first claim, is a solution to equation (1.4.3). We notice that evaluations of the function (1.4.6) and its derivative at x = 0 are as follows ˜(0) = f (0) = A, ˜ (0) = f (0) = B, y y which shows that ˜(x) is also a solution to the initial-value problem y (1.4.5).
相关文档
最新文档