Tensor Analysis: Translations and English Originals


Tensor Decomposition

Third-order tensor: $\mathcal{X} \in \mathbb{R}^{I \times J \times K}$

Fibers:
- mode-1 (column) fibers: $\mathbf{x}_{:jk}$
- mode-2 (row) fibers: $\mathbf{x}_{i:k}$
- mode-3 (tube) fibers: $\mathbf{x}_{ij:}$

Slices:
- horizontal slices: $\mathbf{X}_{i::}$
- lateral slices: $\mathbf{X}_{:j:}$
- frontal slices: $\mathbf{X}_{::k}$ (also written $\mathbf{X}_k$)
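The following is a minimal NumPy sketch (my own illustration, not part of the original slides) showing how the fibers and slices above correspond to array indexing; the array name X and the sample sizes are assumptions.

```python
import numpy as np

I, J, K = 3, 4, 2
X = np.arange(I * J * K).reshape(I, J, K)  # a third-order tensor of shape I x J x K

# Fibers: fix every index but one.
mode1_fiber = X[:, 1, 0]   # x_{:jk}, a column fiber of length I
mode2_fiber = X[0, :, 1]   # x_{i:k}, a row fiber of length J
mode3_fiber = X[2, 3, :]   # x_{ij:}, a tube fiber of length K

# Slices: fix exactly one index.
horizontal = X[0, :, :]    # X_{i::}
lateral    = X[:, 2, :]    # X_{:j:}
frontal    = X[:, :, 1]    # X_{::k}, often abbreviated X_k
```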

Inner product and norm

◦ For $\mathcal{X}, \mathcal{Y} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, the inner product is

$\langle \mathcal{X}, \mathcal{Y} \rangle = \sum_{i_1=1}^{I_1} \sum_{i_2=1}^{I_2} \cdots \sum_{i_N=1}^{I_N} x_{i_1 i_2 \cdots i_N}\, y_{i_1 i_2 \cdots i_N}$

◦ The (Frobenius) norm is

$\|\mathcal{X}\| = \sqrt{\langle \mathcal{X}, \mathcal{X} \rangle} = \sqrt{\sum_{i_1=1}^{I_1} \sum_{i_2=1}^{I_2} \cdots \sum_{i_N=1}^{I_N} x_{i_1 i_2 \cdots i_N}^2}$
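As a quick numerical companion (my own sketch, assuming NumPy and random test tensors), the inner product and Frobenius norm above reduce to elementwise operations:

```python
import numpy as np

X = np.random.rand(3, 4, 2)
Y = np.random.rand(3, 4, 2)

inner = np.sum(X * Y)             # <X, Y>: sum of elementwise products
norm = np.sqrt(np.sum(X * X))     # Frobenius norm ||X|| = sqrt(<X, X>)

# NumPy's built-in norm of the flattened array gives the same value.
assert np.isclose(norm, np.linalg.norm(X))
```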

Rank-one (decomposable) tensors

◦ An $N$th-order tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ is a rank-one tensor if it can be written as the outer product of $N$ vectors, i.e.

$\mathcal{X} = \mathbf{a}^{(1)} \circ \mathbf{a}^{(2)} \circ \cdots \circ \mathbf{a}^{(N)}$, with elements $x_{i_1 i_2 \cdots i_N} = a^{(1)}_{i_1} a^{(2)}_{i_2} \cdots a^{(N)}_{i_N}$.

The (super)diagonal of a tensor: the entries $x_{i i \cdots i}$.
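A short sketch (mine, not from the slides) of building a rank-one third-order tensor as an outer product; the vector names a, b, c are assumptions:

```python
import numpy as np

a, b, c = np.random.rand(3), np.random.rand(4), np.random.rand(2)

# Outer product of N = 3 vectors: x_{ijk} = a_i * b_j * c_k
X = np.einsum('i,j,k->ijk', a, b, c)

# The same tensor via broadcasting.
assert np.allclose(X, a[:, None, None] * b[None, :, None] * c[None, None, :])
```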

Matricization / unfolding / flattening

◦ Unfolding an $N$th-order tensor $\mathcal{X}$ along mode $n$ produces a matrix $\mathbf{X}_{(n)}$.

$\mathcal{X} \rightarrow \mathbf{X}_{(1)}$: the mode-1 unfolding of a third-order tensor.
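One way to realize the mode-n unfolding in NumPy (my own sketch; the helper name `unfold` is an assumption, and Fortran-order reshaping is chosen to match the common Kolda-Bader column ordering):

```python
import numpy as np

def unfold(X, n):
    """Mode-n unfolding X_(n): the mode-n fibers of X become the columns."""
    return np.reshape(np.moveaxis(X, n, 0), (X.shape[n], -1), order='F')

X = np.arange(24).reshape(3, 4, 2)
X1 = unfold(X, 0)   # shape (3, 8): each column is a mode-1 fiber
```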

n-mode (matrix) product

◦ The $n$-mode product of a tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ with a matrix $\mathbf{U} \in \mathbb{R}^{J \times I_n}$, written $\mathcal{X} \times_n \mathbf{U} \in \mathbb{R}^{I_1 \times \cdots \times I_{n-1} \times J \times I_{n+1} \times \cdots \times I_N}$, has elements

$(\mathcal{X} \times_n \mathbf{U})_{i_1 \cdots i_{n-1}\, j\, i_{n+1} \cdots i_N} = \sum_{i_n=1}^{I_n} x_{i_1 i_2 \cdots i_N}\, u_{j i_n}$
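A sketch (mine) of the n-mode product via the standard identity that unfolding turns it into an ordinary matrix product, $\mathbf{Y}_{(n)} = \mathbf{U}\,\mathbf{X}_{(n)}$; the helper names are assumptions:

```python
import numpy as np

def unfold(X, n):
    return np.reshape(np.moveaxis(X, n, 0), (X.shape[n], -1), order='F')

def fold(M, n, shape):
    """Inverse of unfold: rebuild a tensor of the given shape from M = X_(n)."""
    full = [shape[n]] + [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(np.reshape(M, full, order='F'), 0, n)

def mode_n_product(X, U, n):
    """Compute Y = X x_n U through Y_(n) = U @ X_(n)."""
    shape = list(X.shape)
    shape[n] = U.shape[0]
    return fold(U @ unfold(X, n), n, shape)

X = np.random.rand(3, 4, 2)
U = np.random.rand(5, 4)                 # J x I_n with n = 1
Y = mode_n_product(X, U, 1)              # shape (3, 5, 2)

# Cross-check against the elementwise definition.
assert np.allclose(Y, np.einsum('ijk,aj->iak', X, U))
```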

Chapter 2

Chapter 2: Transformations and Vectors

2.1 Change of Basis

Let us reconsider the vector $\mathbf{x} = (2,1,3)$. Fully written out in a given Cartesian frame $\mathbf{e}_i$ ($i = 1,2,3$), it is $\mathbf{x} = 2\mathbf{e}_1 + \mathbf{e}_2 + 3\mathbf{e}_3$. (This is one of the few times we do not use $\mathbf{i}$ as the symbol for a Cartesian frame vector.) Suppose we appoint a new frame $\tilde{\mathbf{e}}_i$ ($i = 1,2,3$) such that

$\mathbf{e}_1 = \tilde{\mathbf{e}}_1 + 2\tilde{\mathbf{e}}_2 + 3\tilde{\mathbf{e}}_3, \quad \mathbf{e}_2 = 4\tilde{\mathbf{e}}_1 + 5\tilde{\mathbf{e}}_2 + 6\tilde{\mathbf{e}}_3, \quad \mathbf{e}_3 = 7\tilde{\mathbf{e}}_1 + 8\tilde{\mathbf{e}}_2 + 9\tilde{\mathbf{e}}_3.$

From these expansions we could calculate the $\tilde{\mathbf{e}}_i$ and verify that they are non-coplanar. As $\mathbf{x}$ is an objective, frame-independent entity, we can write

$\mathbf{x} = 2(\tilde{\mathbf{e}}_1 + 2\tilde{\mathbf{e}}_2 + 3\tilde{\mathbf{e}}_3) + (4\tilde{\mathbf{e}}_1 + 5\tilde{\mathbf{e}}_2 + 6\tilde{\mathbf{e}}_3) + 3(7\tilde{\mathbf{e}}_1 + 8\tilde{\mathbf{e}}_2 + 9\tilde{\mathbf{e}}_3) = (2 + 4 + 21)\tilde{\mathbf{e}}_1 + (4 + 5 + 24)\tilde{\mathbf{e}}_2 + (6 + 6 + 27)\tilde{\mathbf{e}}_3 = 27\tilde{\mathbf{e}}_1 + 33\tilde{\mathbf{e}}_2 + 39\tilde{\mathbf{e}}_3.$

In these calculations it is unimportant whether the frames are Cartesian; it is important only that we have the table of transformation

$\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}.$

It is clear that we can repeat the same operation in general form. Let $\mathbf{x}$ be of the form

$\mathbf{x} = \sum_{i=1}^{3} x^i \mathbf{e}_i \qquad (2.1)$

with the table of transformation of the frame given as

$\mathbf{e}_i = \sum_{j=1}^{3} A_i^j \tilde{\mathbf{e}}_j.$

Then

$\mathbf{x} = \sum_{i=1}^{3} x^i \sum_{j=1}^{3} A_i^j \tilde{\mathbf{e}}_j = \sum_{j=1}^{3} \tilde{\mathbf{e}}_j \sum_{i=1}^{3} A_i^j x^i.$

So in the new basis we have

$\mathbf{x} = \sum_{j=1}^{3} \tilde{x}^j \tilde{\mathbf{e}}_j \quad \text{where} \quad \tilde{x}^j = \sum_{i=1}^{3} A_i^j x^i.$

Here we have introduced a new notation, placing some indices as subscripts and some as superscripts. Although this practice may seem artificial, there are fairly deep reasons for following it.

2.2 Dual Bases

To perform operations with a vector $\mathbf{x}$, we must have a straightforward method of calculating its components; ultimately, no matter how advanced we are, we must be able to obtain the $x_i$ using simple arithmetic. We prefer formulas that permit us to find the components of vectors using dot multiplication only; we shall need these when doing frame transformations, etc. In a Cartesian frame the necessary operation is simple dot multiplication by the corresponding basis vector of the frame: we have

$x_k = \mathbf{x} \cdot \mathbf{i}_k \quad (k = 1,2,3).$

This procedure fails in a more general non-Cartesian frame where we do not necessarily have $\mathbf{e}_i \cdot \mathbf{e}_j = 0$ for all $j \neq i$. However, it may still be possible to find a vector $\mathbf{e}^i$ such that

$x^i = \mathbf{x} \cdot \mathbf{e}^i \quad (i = 1,2,3)$

in this more general situation. If we set

$x^i = \mathbf{x} \cdot \mathbf{e}^i = \left( \sum_{j=1}^{3} x^j \mathbf{e}_j \right) \cdot \mathbf{e}^i = \sum_{j=1}^{3} x^j (\mathbf{e}_j \cdot \mathbf{e}^i)$

and compare the left- and right-hand sides, we see that equality holds when

$\mathbf{e}_j \cdot \mathbf{e}^i = \delta_j^i \qquad (2.2)$

where

$\delta_j^i = \begin{cases} 1, & j = i, \\ 0, & j \neq i, \end{cases}$

is the Kronecker delta symbol. In a Cartesian frame we have $\mathbf{e}^k = \mathbf{e}_k = \mathbf{i}_k$ for each $k$.

Exercise 2.1. Show that $\mathbf{e}^i$ is determined uniquely by the requirement that $x^i = \mathbf{x} \cdot \mathbf{e}^i$ for every $\mathbf{x}$.

Now let us discuss the geometrical nature of the vectors $\mathbf{e}^i$. Consider, for example, the equations for $\mathbf{e}^1$:

$\mathbf{e}_1 \cdot \mathbf{e}^1 = 1, \quad \mathbf{e}_2 \cdot \mathbf{e}^1 = 0, \quad \mathbf{e}_3 \cdot \mathbf{e}^1 = 0.$

We see that $\mathbf{e}^1$ is orthogonal to both $\mathbf{e}_2$ and $\mathbf{e}_3$, and its magnitude is such that $\mathbf{e}_1 \cdot \mathbf{e}^1 = 1$. Similar properties hold for $\mathbf{e}^2$ and $\mathbf{e}^3$.

Exercise 2.2. Show that the vectors $\mathbf{e}^i$ are linearly independent.

By Exercise 2.2, the $\mathbf{e}^i$ constitute a frame or basis. This basis is said to be reciprocal or dual to the basis $\mathbf{e}_i$. We can therefore expand an arbitrary vector $\mathbf{x}$ as

$\mathbf{x} = \sum_{i=1}^{3} x_i \mathbf{e}^i. \qquad (2.3)$

Note that superscripts and subscripts continue to appear in our notation, but in a way complementary to that used in equation (2.1). If we dot-multiply the representation (2.3) of $\mathbf{x}$ by $\mathbf{e}_j$ and use (2.2) we get $x_j$. This explains why the frames $\mathbf{e}_i$ and $\mathbf{e}^i$ are dual: the formulas

$\mathbf{x} \cdot \mathbf{e}^i = x^i, \quad \mathbf{x} \cdot \mathbf{e}_i = x_i,$

look quite similar. So the introduction of a reciprocal basis gives many potential advantages. Let us discuss the reciprocal basis in more detail. The first problem is to find suitable formulas to define it. We derive these formulas next, but first let us note the following.
The use of reciprocal vectors may not be practical in those situations where we are working with only two or three vectors. The real advantages come when we are working intensively with many vectors. This is reminiscent of the solution of a set of linear simultaneous equations: it is inefficient to find the inverse matrix of the system if we have only one forcing vector, but when we must solve such a problem repeatedly for many forcing vectors, the calculation and use of the inverse matrix is reasonable.

Writing out $\mathbf{x}$ in the $\mathbf{e}_i$ and $\mathbf{e}^i$ bases, we used a combination of indices (i.e., subscripts and superscripts) and summation symbols. From now on we shall omit the symbol of summation when we meet matching subscripts and superscripts: we shall write, say, $x^i a_i$ for the sum $\sum_i x^i a_i$. That is, whenever we see $i$ as a subscript and a superscript, we shall understand that a summation is to be carried out over $i$. This rule shall apply to situations involving vectors as well: we shall understand, for example, $x^i \mathbf{e}_i$ to mean the summation $\sum_i x^i \mathbf{e}_i$. This rule is called the rule of summation over repeated indices.¹ Note that a repeated index is a dummy index in the sense that it may be replaced by any other index not already in use: we have

$x^i a_i = x^1 a_1 + x^2 a_2 + x^3 a_3 = x^k a_k$

for instance. An index that occurs just once in an expression, for example the index $i$ in $A_i^k x_k$, is called a free index. In tensor discussions each free index is understood to range independently over a set of values; presently this set is $\{1,2,3\}$.

¹The rule of summation was first introduced not by mathematicians but by Einstein, and is sometimes referred to as the Einstein summation convention. In the paper where he introduced this rule, Einstein used Cartesian frames and therefore did not distinguish superscripts from subscripts. However, we shall continue to make the distinction so that we can deal with non-Cartesian frames.

Let us return to the task of deriving formulas for the reciprocal basis vectors $\mathbf{e}^i$ in terms of the original basis vectors $\mathbf{e}_i$. We construct $\mathbf{e}^1$ first. Since the cross product of two vectors is perpendicular to both, we can satisfy the conditions

$\mathbf{e}_2 \cdot \mathbf{e}^1 = 0, \quad \mathbf{e}_3 \cdot \mathbf{e}^1 = 0,$

by setting

$\mathbf{e}^1 = c_1 (\mathbf{e}_2 \times \mathbf{e}_3)$

where $c_1$ is a constant. To determine $c_1$ we require $\mathbf{e}_1 \cdot \mathbf{e}^1 = 1$. We obtain

$c_1 [\mathbf{e}_1 \cdot (\mathbf{e}_2 \times \mathbf{e}_3)] = 1.$

The quantity $\mathbf{e}_1 \cdot (\mathbf{e}_2 \times \mathbf{e}_3)$ is a scalar whose absolute value is the volume of the parallelepiped described by the vectors $\mathbf{e}_i$. Denoting it by $V$, we have

$\mathbf{e}^1 = \frac{1}{V} (\mathbf{e}_2 \times \mathbf{e}_3).$

Similarly,

$\mathbf{e}^2 = \frac{1}{V} (\mathbf{e}_3 \times \mathbf{e}_1), \quad \mathbf{e}^3 = \frac{1}{V} (\mathbf{e}_1 \times \mathbf{e}_2).$

The reader may verify that these expressions satisfy (2.2). Let us mention that if we construct the reciprocal basis to the basis $\mathbf{e}^i$ we obtain the initial basis $\mathbf{e}_i$. Hence we immediately get the dual formulas

$\mathbf{e}_1 = \frac{1}{V'} (\mathbf{e}^2 \times \mathbf{e}^3), \quad \mathbf{e}_2 = \frac{1}{V'} (\mathbf{e}^3 \times \mathbf{e}^1), \quad \mathbf{e}_3 = \frac{1}{V'} (\mathbf{e}^1 \times \mathbf{e}^2),$

where $V' = \mathbf{e}^1 \cdot (\mathbf{e}^2 \times \mathbf{e}^3)$. Within an algebraic sign, $V'$ is the volume of the parallelepiped described by the vectors $\mathbf{e}^i$.

Exercise 2.3. Show that $V' = 1/V$.

Let us now consider the forms of the dot product between two vectors

$\mathbf{a} = a^i \mathbf{e}_i = a_j \mathbf{e}^j, \quad \mathbf{b} = b^p \mathbf{e}_p = b_q \mathbf{e}^q.$

We have

$\mathbf{a} \cdot \mathbf{b} = a^i \mathbf{e}_i \cdot b^p \mathbf{e}_p = a^i b^p \, \mathbf{e}_i \cdot \mathbf{e}_p.$

Introducing the notation

$g_{ip} = \mathbf{e}_i \cdot \mathbf{e}_p, \qquad (2.4)$

we have

$\mathbf{a} \cdot \mathbf{b} = a^i b^p g_{ip}.$

(As a short exercise the reader should write out this expression in full.)
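A numerical illustration (my own, not the book's) of the reciprocal-basis formulas and the metric components, using the basis of Problem 2.1(a) below; NumPy is assumed:

```python
import numpy as np

# Rows are the Cartesian components of e_1, e_2, e_3 (Problem 2.1(a)).
e = np.array([[2., 1., -1.],
              [0., 2., 3.],
              [1., 0., 1.]])

V = np.dot(e[0], np.cross(e[1], e[2]))    # V = e_1 . (e_2 x e_3)

# Reciprocal basis: e^1 = (e_2 x e_3)/V and cyclic permutations.
e_dual = np.array([np.cross(e[1], e[2]),
                   np.cross(e[2], e[0]),
                   np.cross(e[0], e[1])]) / V

assert np.allclose(e @ e_dual.T, np.eye(3))   # e_j . e^i = delta_j^i

g_low = e @ e.T              # g_ij = e_i . e_j  (the Gram matrix)
g_up = e_dual @ e_dual.T     # g^ij = e^i . e^j
assert np.allclose(g_low @ g_up, np.eye(3))   # the matrices are mutually inverse
```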
Using the reciprocal component representations we get

$\mathbf{a} \cdot \mathbf{b} = a_j \mathbf{e}^j \cdot b_q \mathbf{e}^q = a_j b_q g^{jq}$

where

$g^{jq} = \mathbf{e}^j \cdot \mathbf{e}^q. \qquad (2.5)$

Finally, using a mixed representation we get

$\mathbf{a} \cdot \mathbf{b} = a^i \mathbf{e}_i \cdot b_q \mathbf{e}^q = a^i b_q \delta_i^q = a^i b_i$

and, similarly, $\mathbf{a} \cdot \mathbf{b} = a_j b^j$. Hence

$\mathbf{a} \cdot \mathbf{b} = a^i b^j g_{ij} = a_i b_j g^{ij} = a^i b_i = a_i b^i.$

We see that when we use mixed bases to represent $\mathbf{a}$ and $\mathbf{b}$ we get formulas that resemble the equation $\mathbf{a} \cdot \mathbf{b} = a_1 b_1 + a_2 b_2 + a_3 b_3$ from §1.3; otherwise we get more terms and additional multipliers. We will encounter $g_{ij}$ and $g^{ij}$ often. They are the components of a unique tensor known as the metric tensor. In Cartesian frames we obviously have $g_{ij} = \delta_i^j$ and $g^{ij} = \delta_i^j$.

2.3 Transformation to the Reciprocal Frame

How do the components of a vector $\mathbf{x}$ transform when we change to the reciprocal frame? We simply set

$x^i \mathbf{e}_i = x_i \mathbf{e}^i$

and dot both sides with $\mathbf{e}_j$ to get

$x^i \mathbf{e}_i \cdot \mathbf{e}_j = x_i \mathbf{e}^i \cdot \mathbf{e}_j,$

or

$x_j = x^i g_{ij}. \qquad (2.6)$

In the system of equations

$\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} g_{11} & g_{21} & g_{31} \\ g_{12} & g_{22} & g_{32} \\ g_{13} & g_{23} & g_{33} \end{pmatrix} \begin{pmatrix} x^1 \\ x^2 \\ x^3 \end{pmatrix}$

the matrix of the components of the metric tensor $g_{ij}$ is also called the Gram matrix. A theorem in linear algebra states that its determinant is not zero if and only if the vectors $\mathbf{e}_i$ are linearly independent.

Exercise 2.4. (a) Show that if the Gram determinant vanishes, then the $\mathbf{e}_i$ are linearly dependent. (b) Prove that the Gram determinant equals $V^2$.

We called the basis $\mathbf{e}^i$ dual to the basis $\mathbf{e}_i$. In $\mathbf{e}^i$ the metric components are given by $g^{ij}$, so we can immediately write an expression dual to (2.6):

$x^i = x_j g^{ij}. \qquad (2.7)$

We see from (2.6) and (2.7) that, using the components of the metric tensor, we can always change subscripts to superscripts and vice versa. These actions are known as the raising and lowering of indices. Finally, (2.6) and (2.7) together imply

$x^i = g^{ij} g_{jk} x^k,$

hence

$g^{ij} g_{jk} = \delta_k^i.$

Of course, this means that the matrices of $g_{ij}$ and $g^{ij}$ are mutually inverse.

Quick summary

Given a basis $\mathbf{e}_i$, the vectors $\mathbf{e}^i$ given by the requirement that

$\mathbf{e}_j \cdot \mathbf{e}^i = \delta_j^i$

are linearly independent and form a basis called the reciprocal or dual basis. The definition of dual basis is motivated by the equation $x^i = \mathbf{x} \cdot \mathbf{e}^i$. The $\mathbf{e}^i$ can be written as

$\mathbf{e}^i = \frac{1}{V} (\mathbf{e}_j \times \mathbf{e}_k)$

where the ordered triple $(i,j,k)$ equals $(1,2,3)$ or one of the cyclic permutations $(2,3,1)$ or $(3,1,2)$, and where

$V = \mathbf{e}_1 \cdot (\mathbf{e}_2 \times \mathbf{e}_3).$

The dual of the basis $\mathbf{e}^k$ (i.e., the dual of the dual) is the original basis $\mathbf{e}_k$. A given vector $\mathbf{x}$ can be expressed as

$\mathbf{x} = x^i \mathbf{e}_i = x_i \mathbf{e}^i$

where the $x_i$ are the components of $\mathbf{x}$ with respect to the dual basis.
Exercise 2.5. (a) Let $\mathbf{x} = x^k \mathbf{e}_k = x_k \mathbf{e}^k$. Write out the modulus of $\mathbf{x}$ in all possible forms using the metric tensor. (b) Write out all forms of the dot product $\mathbf{x} \cdot \mathbf{y}$.

2.4 Transformation Between General Frames

Having transformed the components $x^i$ of a vector $\mathbf{x}$ to the corresponding components $x_i$ relative to the reciprocal basis, we are now ready to take on the more general task of transforming the $x^i$ to the corresponding components $\tilde{x}^i$ relative to any other basis $\tilde{\mathbf{e}}_i$. Let the new basis $\tilde{\mathbf{e}}_i$ be related to the original basis $\mathbf{e}_i$ by

$\mathbf{e}_i = A_i^j \tilde{\mathbf{e}}_j. \qquad (2.8)$

This is, of course, compact notation for the system of equations

$\begin{pmatrix} \mathbf{e}_1 \\ \mathbf{e}_2 \\ \mathbf{e}_3 \end{pmatrix} = \begin{pmatrix} A_1^1 & A_1^2 & A_1^3 \\ A_2^1 & A_2^2 & A_2^3 \\ A_3^1 & A_3^2 & A_3^3 \end{pmatrix} \begin{pmatrix} \tilde{\mathbf{e}}_1 \\ \tilde{\mathbf{e}}_2 \\ \tilde{\mathbf{e}}_3 \end{pmatrix}, \quad \text{the middle matrix} \equiv A, \text{ say}.$

Before proceeding, we note that in the symbol $A_i^j$ the subscript indexes the row number in the matrix $A$, while the superscript indexes the column number. Throughout our development we shall often take the time to write various equations of interest in matrix notation. It follows from (2.8) that

$A_i^j = \mathbf{e}_i \cdot \tilde{\mathbf{e}}^j.$

Exercise 2.6. A Cartesian frame is rotated about its third axis to give a new Cartesian frame. Find the matrix of transformation.

A vector $\mathbf{x}$ can be expressed in the two forms

$\mathbf{x} = x^k \mathbf{e}_k, \quad \mathbf{x} = \tilde{x}^i \tilde{\mathbf{e}}_i.$

Equating these two expressions for the same vector $\mathbf{x}$, we have $\tilde{x}^i \tilde{\mathbf{e}}_i = x^k \mathbf{e}_k$, hence

$\tilde{x}^i \tilde{\mathbf{e}}_i = x^k A_k^j \tilde{\mathbf{e}}_j. \qquad (2.9)$

To find $\tilde{x}^i$ in terms of $x^i$, we may expand the notation and write (2.9) as

$\tilde{x}^1 \tilde{\mathbf{e}}_1 + \tilde{x}^2 \tilde{\mathbf{e}}_2 + \tilde{x}^3 \tilde{\mathbf{e}}_3 = x^1 A_1^j \tilde{\mathbf{e}}_j + x^2 A_2^j \tilde{\mathbf{e}}_j + x^3 A_3^j \tilde{\mathbf{e}}_j$

where, of course,

$A_1^j \tilde{\mathbf{e}}_j = A_1^1 \tilde{\mathbf{e}}_1 + A_1^2 \tilde{\mathbf{e}}_2 + A_1^3 \tilde{\mathbf{e}}_3,$
$A_2^j \tilde{\mathbf{e}}_j = A_2^1 \tilde{\mathbf{e}}_1 + A_2^2 \tilde{\mathbf{e}}_2 + A_2^3 \tilde{\mathbf{e}}_3,$
$A_3^j \tilde{\mathbf{e}}_j = A_3^1 \tilde{\mathbf{e}}_1 + A_3^2 \tilde{\mathbf{e}}_2 + A_3^3 \tilde{\mathbf{e}}_3.$

Matching coefficients of the $\tilde{\mathbf{e}}_i$ we find

$\tilde{x}^1 = x^1 A_1^1 + x^2 A_2^1 + x^3 A_3^1 = x^j A_j^1,$
$\tilde{x}^2 = x^1 A_1^2 + x^2 A_2^2 + x^3 A_3^2 = x^j A_j^2,$
$\tilde{x}^3 = x^1 A_1^3 + x^2 A_2^3 + x^3 A_3^3 = x^j A_j^3,$

hence

$\tilde{x}^i = x^j A_j^i. \qquad (2.10)$

It is possible to obtain (2.10) from (2.9) in a succinct manner. On the right-hand side of (2.9) the index $j$ is a dummy index which we can replace with $i$ and thereby obtain (2.10) immediately. The matrix notation equivalent of (2.10) is

$\begin{pmatrix} \tilde{x}^1 \\ \tilde{x}^2 \\ \tilde{x}^3 \end{pmatrix} = \begin{pmatrix} A_1^1 & A_2^1 & A_3^1 \\ A_1^2 & A_2^2 & A_3^2 \\ A_1^3 & A_2^3 & A_3^3 \end{pmatrix} \begin{pmatrix} x^1 \\ x^2 \\ x^3 \end{pmatrix}$

and thus involves multiplication by $A^T$, the transpose of $A$.

We shall also need the equations of transformation from the frame $\tilde{\mathbf{e}}_i$ back to the frame $\mathbf{e}_i$. Since the direct transformation is linear the inverse must be linear as well, so we can write

$\tilde{\mathbf{e}}_i = \tilde{A}_i^j \mathbf{e}_j \qquad (2.11)$

where $\tilde{A}_i^j = \tilde{\mathbf{e}}_i \cdot \mathbf{e}^j$. Let us find the relation between the matrices of transformation $A$ and $\tilde{A}$. By (2.11) and (2.8) we have

$\tilde{\mathbf{e}}_i = \tilde{A}_i^j \mathbf{e}_j = \tilde{A}_i^j A_j^k \tilde{\mathbf{e}}_k,$

and since the $\tilde{\mathbf{e}}_i$ form a basis we must have

$\tilde{A}_i^j A_j^k = \delta_i^k.$
The relationship

$A_i^j \tilde{A}_j^k = \delta_i^k$

follows similarly. The product of the matrices $(\tilde{A}_i^j)$ and $(A_j^k)$ is the unit matrix, and thus these matrices are mutually inverse.

Exercise 2.7. Show that $x^i = \tilde{x}^k \tilde{A}_k^i$.

Formulas for the relations between reciprocal bases can be obtained as follows. We begin with the obvious identities

$\mathbf{e}^j (\mathbf{e}_j \cdot \mathbf{x}) = \mathbf{x}, \quad \tilde{\mathbf{e}}^j (\tilde{\mathbf{e}}_j \cdot \mathbf{x}) = \mathbf{x}.$

Putting $\mathbf{x} = \tilde{\mathbf{e}}^i$ in the first of these gives $\tilde{\mathbf{e}}^i = A_j^i \mathbf{e}^j$, while the second identity with $\mathbf{x} = \mathbf{e}^i$ yields $\mathbf{e}^i = \tilde{A}_j^i \tilde{\mathbf{e}}^j$. From these follow the transformation formulas

$\tilde{x}_i = x_k \tilde{A}_i^k, \quad x_i = \tilde{x}_k A_i^k.$

2.5 Covariant and Contravariant Components

We have seen that if the basis vectors transform according to the relation

$\mathbf{e}_i = A_i^j \tilde{\mathbf{e}}_j,$

then the components $x_i$ of a vector $\mathbf{x}$ must transform according to

$x_i = A_i^j \tilde{x}_j.$

The similarity in form between these two relations results in the $x_i$ being termed the covariant components of the vector $\mathbf{x}$. On the other hand, the transformation law

$x^i = \tilde{A}_j^i \tilde{x}^j$

shows that the $x^i$ transform like the $\mathbf{e}^i$. For this reason the $x^i$ are termed the contravariant components of $\mathbf{x}$. We shall find a further use for this nomenclature in Chapter 3.

Quick summary

If frame transformations

$\mathbf{e}_i = A_i^j \tilde{\mathbf{e}}_j, \quad \tilde{\mathbf{e}}_i = \tilde{A}_i^j \mathbf{e}_j, \quad \mathbf{e}^i = \tilde{A}_j^i \tilde{\mathbf{e}}^j, \quad \tilde{\mathbf{e}}^i = A_j^i \mathbf{e}^j,$

are considered, then $\mathbf{x}$ has the various expressions

$\mathbf{x} = x^i \mathbf{e}_i = x_i \mathbf{e}^i = \tilde{x}^i \tilde{\mathbf{e}}_i = \tilde{x}_i \tilde{\mathbf{e}}^i$

and the transformation laws

$x_i = A_i^j \tilde{x}_j, \quad \tilde{x}_i = \tilde{A}_i^j x_j, \quad x^i = \tilde{A}_j^i \tilde{x}^j, \quad \tilde{x}^i = A_j^i x^j,$

apply. The $x^i$ are termed contravariant components of $\mathbf{x}$, while the $x_i$ are termed covariant components. The transformation laws are particularly simple when the frame is changed to the dual frame. Then

$x_i = g_{ji} x^j, \quad x^i = g^{ij} x_j,$

where

$g_{ij} = \mathbf{e}_i \cdot \mathbf{e}_j, \quad g^{ij} = \mathbf{e}^i \cdot \mathbf{e}^j,$

are components of the metric tensor.

2.6 The Cross Product in Index Notation

In mechanics a major role is played by the quantity called torque. This quantity is introduced in elementary physics as the product of a force magnitude and a length ("force times moment arm"), along with some rules for algebraic sign to account for the sense of rotation that the force would encourage when applied to a physical body. In more advanced discussions in which three-dimensional problems are considered, torque is regarded as a vectorial quantity. If a force $\mathbf{f}$ acts at a point which is located relative to an origin $O$ by position vector $\mathbf{r}$, then the associated torque $\mathbf{t}$ about $O$ is normal to the plane of the vectors $\mathbf{r}$ and $\mathbf{f}$. Of the two possible unit normals, $\mathbf{t}$ is conventionally (but arbitrarily) associated with the vector $\hat{\mathbf{n}}$ given by the familiar right-hand rule: if the forefinger of the right hand is directed along $\mathbf{r}$ and the middle finger is directed along $\mathbf{f}$, then the thumb indicates the direction of $\hat{\mathbf{n}}$ and hence the direction of $\mathbf{t}$. The magnitude of $\mathbf{t}$ equals $|\mathbf{f}||\mathbf{r}|\sin\theta$, where $\theta$ is the smaller angle between $\mathbf{f}$ and $\mathbf{r}$. These rules are all encapsulated in the brief symbolism

$\mathbf{t} = \mathbf{r} \times \mathbf{f}.$

The definition of torque can be taken as a model for a more general operation between vectors: the cross product. If $\mathbf{a}$ and $\mathbf{b}$ are any two vectors, we define

$\mathbf{a} \times \mathbf{b} = \hat{\mathbf{n}} |\mathbf{a}||\mathbf{b}|\sin\theta$

where $\hat{\mathbf{n}}$ and $\theta$ are defined as in the case of torque above. Like any other vector, $\mathbf{c} = \mathbf{a} \times \mathbf{b}$ can be expanded in terms of a basis; we choose the reciprocal basis $\mathbf{e}^i$ and write

$\mathbf{c} = c_i \mathbf{e}^i.$

Because the magnitudes of $\mathbf{a}$ and $\mathbf{b}$ enter into $\mathbf{a} \times \mathbf{b}$ in multiplicative fashion, we are prompted to seek $c_i$ in the form

$c_i = \epsilon_{ijk} a^j b^k. \qquad (2.12)$

Here the $\epsilon$'s are formal coefficients. Let us find them. We write

$\mathbf{a} = a^j \mathbf{e}_j, \quad \mathbf{b} = b^k \mathbf{e}_k,$

and employ the well-known distributive property

$(\mathbf{u} + \mathbf{v}) \times \mathbf{w} \equiv \mathbf{u} \times \mathbf{w} + \mathbf{v} \times \mathbf{w}$

to obtain

$\mathbf{c} = a^j \mathbf{e}_j \times b^k \mathbf{e}_k = a^j b^k (\mathbf{e}_j \times \mathbf{e}_k).$

Then

$\mathbf{c} \cdot \mathbf{e}_i = c_m \mathbf{e}^m \cdot \mathbf{e}_i = c_i = a^j b^k [(\mathbf{e}_j \times \mathbf{e}_k) \cdot \mathbf{e}_i]$

and comparison with (2.12) shows that

$\epsilon_{ijk} = (\mathbf{e}_j \times \mathbf{e}_k) \cdot \mathbf{e}_i.$
Now the value of $(\mathbf{e}_j \times \mathbf{e}_k) \cdot \mathbf{e}_i$ depends on the values of the indices $i, j, k$. Here it is convenient to introduce the idea of a permutation of the ordered triple $(1,2,3)$. A permutation of $(1,2,3)$ is called even if it can be brought about by performing any even number of interchanges of pairs of these numbers; a permutation is odd if it results from performing any odd number of interchanges. We saw before that $(\mathbf{e}_j \times \mathbf{e}_k) \cdot \mathbf{e}_i$ equals the volume of the frame parallelepiped if $i, j, k$ are distinct and the ordered triple $(i,j,k)$ is an even permutation of $(1,2,3)$. If $i, j, k$ are distinct and the ordered triple $(i,j,k)$ is an odd permutation of $(1,2,3)$, we obtain minus the volume of the frame parallelepiped. If any two of the numbers $i, j, k$ are equal we obtain zero. Hence

$\epsilon_{ijk} = \begin{cases} +V, & (i,j,k)\ \text{an even permutation of}\ (1,2,3), \\ -V, & (i,j,k)\ \text{an odd permutation of}\ (1,2,3), \\ 0, & \text{two or more indices equal.} \end{cases}$

Moreover, it can be shown (Exercise 2.4) that

$V^2 = g$

where $g$ is the determinant of the matrix formed from the elements $g_{ij} = \mathbf{e}_i \cdot \mathbf{e}_j$ of the metric tensor. Note that $|V| = 1$ for a Cartesian frame. The permutation symbol $\epsilon_{ijk}$ is useful in writing formulas. For example, the determinant of a matrix $A = (a_{ij})$ can be expressed succinctly as

$\det A = \epsilon_{ijk} a_{1i} a_{2j} a_{3k}.$

Much more than a notational device however, $\epsilon_{ijk}$ represents a tensor (the so-called Levi-Civita tensor). We discuss this further in Chapter 3.

Exercise 2.8. The contravariant components of a vector $\mathbf{c} = \mathbf{a} \times \mathbf{b}$ can be expressed as

$c^i = \epsilon^{ijk} a_j b_k$

for suitable coefficients $\epsilon^{ijk}$. Use the technique of this section to find the coefficients. Then establish the identity

$\epsilon_{ijk} \epsilon^{pqr} = \begin{vmatrix} \delta_i^p & \delta_i^q & \delta_i^r \\ \delta_j^p & \delta_j^q & \delta_j^r \\ \delta_k^p & \delta_k^q & \delta_k^r \end{vmatrix}$

and use it to show that

$\epsilon_{ijk} \epsilon^{pqk} = \delta_i^p \delta_j^q - \delta_i^q \delta_j^p.$

Use this in turn to prove that

$\mathbf{a} \times (\mathbf{b} \times \mathbf{c}) = \mathbf{b}(\mathbf{a} \cdot \mathbf{c}) - \mathbf{c}(\mathbf{a} \cdot \mathbf{b}) \qquad (2.13)$

for any vectors $\mathbf{a}, \mathbf{b}, \mathbf{c}$.

Exercise 2.9. Establish Lagrange's identity

$(\mathbf{a} \times \mathbf{b}) \cdot (\mathbf{c} \times \mathbf{d}) = (\mathbf{a} \cdot \mathbf{c})(\mathbf{b} \cdot \mathbf{d}) - (\mathbf{a} \cdot \mathbf{d})(\mathbf{b} \cdot \mathbf{c}).$

2.7 Norms on the Space of Vectors

We often need to characterize the intensity of some vector field locally or globally. For this, the notion of a norm is appropriate. The well-known Euclidean norm of a vector $\mathbf{a} = a_k \mathbf{i}_k$ written in a Cartesian frame is

$\|\mathbf{a}\| = \left( \sum_{k=1}^{3} a_k^2 \right)^{1/2}.$

This norm is related to the inner product of two vectors $\mathbf{a} = a_k \mathbf{i}_k$ and $\mathbf{b} = b_k \mathbf{i}_k$: we have $\mathbf{a} \cdot \mathbf{b} = a_k b_k$ so that

$\|\mathbf{a}\| = (\mathbf{a} \cdot \mathbf{a})^{1/2}.$

In a non-Cartesian frame, the components of a vector depend on the lengths of the frame vectors and the angles between them. Since the sum of squared components of a vector depends on the frame, we cannot use it to characterize the vector. But the formulas connected with the dot product are invariant under change of frame, so we can use them to characterize the intensity of the vector, that is, its length. Thus for two vectors $\mathbf{x} = x^i \mathbf{e}_i$ and $\mathbf{y} = y^j \mathbf{e}_j$ written in an arbitrary frame, we can introduce a scalar product (i.e., a simple dot product)

$\mathbf{x} \cdot \mathbf{y} = x^i \mathbf{e}_i \cdot y^j \mathbf{e}_j = x^i y^j g_{ij} = x_i y_j g^{ij} = x^i y_i.$

Note that only in mixed coordinates does this resemble the scalar product in a Cartesian frame. Similarly, the norm of a vector $\mathbf{x}$ is

$\|\mathbf{x}\| = (\mathbf{x} \cdot \mathbf{x})^{1/2} = \left( x^i x^j g_{ij} \right)^{1/2} = \left( x_i x_j g^{ij} \right)^{1/2} = \left( x^i x_i \right)^{1/2}.$

This dot product and associated norm have all the properties required from objects of this nature in algebra or functional analysis. Indeed, it is necessary only to check whether all the axioms of the inner product are satisfied.

(i) $\mathbf{x} \cdot \mathbf{x} \geq 0$, and $\mathbf{x} \cdot \mathbf{x} = 0$ if and only if $\mathbf{x} = \mathbf{0}$. This property holds because all the quantities involved can be written in a Cartesian frame where it holds trivially. By the same reasoning, we confirm satisfaction of the property

(ii) $\mathbf{x} \cdot \mathbf{y} = \mathbf{y} \cdot \mathbf{x}$. The reader should check that this holds for any representation of the vectors.
Finally,

(iii) $(\alpha\mathbf{x} + \beta\mathbf{y}) \cdot \mathbf{z} = \alpha(\mathbf{x} \cdot \mathbf{z}) + \beta(\mathbf{y} \cdot \mathbf{z})$, where $\alpha$ and $\beta$ are arbitrary real numbers and $\mathbf{z}$ is a vector.

By the general theory then, the expression

$\|\mathbf{x}\| = (\mathbf{x} \cdot \mathbf{x})^{1/2} \qquad (2.14)$

satisfies all the axioms of a norm:

(i) $\|\mathbf{x}\| \geq 0$, with $\|\mathbf{x}\| = 0$ if and only if $\mathbf{x} = \mathbf{0}$.
(ii) $\|\alpha\mathbf{x}\| = |\alpha|\,\|\mathbf{x}\|$ for any real $\alpha$.
(iii) $\|\mathbf{x} + \mathbf{y}\| \leq \|\mathbf{x}\| + \|\mathbf{y}\|$.

In addition we have the Schwarz inequality

$|\mathbf{x} \cdot \mathbf{y}| \leq \|\mathbf{x}\|\,\|\mathbf{y}\|, \qquad (2.15)$

where in the case of nonzero vectors the equality holds if and only if $\mathbf{x} = \lambda\mathbf{y}$ for some real $\lambda$.

The set of all three-dimensional vectors constitutes a three-dimensional linear space. A linear space equipped with the norm (2.14) becomes a normed space. In this book, the principal space is $\mathbb{R}^3$. Note that we can introduce more than one norm in any normed space, and in practice a variety of norms turn out to be necessary. For example, $2\|\mathbf{x}\|$ is also a norm in $\mathbb{R}^3$. We can introduce other norms, quite different from the above. One norm can be introduced as follows. Let $\mathbf{e}_k$ be a basis of $\mathbb{R}^3$ and let $\mathbf{x} = x^k \mathbf{e}_k$. For $p \geq 1$, we introduce

$\|\mathbf{x}\|_p = \left( \sum_{k=1}^{3} |x^k|^p \right)^{1/p}.$

Norm axioms (i) and (ii) obviously hold. Axiom (iii) is a consequence of the classical Minkowski inequality for finite sums. The reader should be aware that this norm is given in a certain basis. If we use it in another basis, the value of the norm of a vector will change in general. An advantage of the norm (2.14) is that it is independent of the basis of the space.

Later, when investigating the eigenvalues of a tensor, we will need a space of vectors with complex components. It can be introduced similarly to the space of complex numbers. We start with the space $\mathbb{R}^3$ having basis $\mathbf{e}_k$, and introduce multiplication of vectors in $\mathbb{R}^3$ by complex numbers. This also yields a linear space, but it is complex and denoted by $\mathbb{C}^3$. An arbitrary vector $\mathbf{x}$ in $\mathbb{C}^3$ takes the form

$\mathbf{x} = (a^k + i b^k) \mathbf{e}_k,$

where $i$ is the imaginary unit ($i^2 = -1$). Analogous to the conjugate number is the conjugate vector to $\mathbf{x}$, defined by

$\overline{\mathbf{x}} = (a^k - i b^k) \mathbf{e}_k.$

The real and imaginary parts of $\mathbf{x}$ are $a^k \mathbf{e}_k$ and $b^k \mathbf{e}_k$, respectively. Clearly, a basis in $\mathbb{C}^3$ may contain vectors that are not in $\mathbb{R}^3$. As an exercise, the reader should write out the form of the real and imaginary parts of $\mathbf{x}$ in such a basis.

In $\mathbb{C}^3$, the dot product loses the property that $\mathbf{x} \cdot \mathbf{x} \geq 0$. However, we can introduce the inner product of two vectors $\mathbf{x}$ and $\mathbf{y}$ as

$\langle \mathbf{x}, \mathbf{y} \rangle = \mathbf{x} \cdot \overline{\mathbf{y}}.$

It is easy to see that this inner product has the following properties. Let $\mathbf{x}, \mathbf{y}, \mathbf{z}$ be arbitrary vectors of $\mathbb{C}^3$. Then

(i) $\langle \mathbf{x}, \mathbf{x} \rangle \geq 0$, and $\langle \mathbf{x}, \mathbf{x} \rangle = 0$ if and only if $\mathbf{x} = \mathbf{0}$.
(ii) $\langle \mathbf{x}, \mathbf{y} \rangle = \overline{\langle \mathbf{y}, \mathbf{x} \rangle}$.
(iii) $\langle \alpha\mathbf{x} + \beta\mathbf{y}, \mathbf{z} \rangle = \alpha\langle \mathbf{x}, \mathbf{z} \rangle + \beta\langle \mathbf{y}, \mathbf{z} \rangle$, where $\alpha$ and $\beta$ are arbitrary complex numbers.

The reader should verify these properties. Now we can introduce the norm related to the inner product,

$\|\mathbf{x}\| = \langle \mathbf{x}, \mathbf{x} \rangle^{1/2},$

and verify that it satisfies all the axioms of a norm in a complex linear space. As a consequence of the general properties of the inner product, Schwarz's inequality (2.15) also holds in $\mathbb{C}^3$.

2.8 Closing Remarks

We close by repeating something we said in Chapter 1: a vector is an objective entity. In elementary mathematics we learn to think of a vector as an ordered triple of components. There is, of course, no harm in this if we keep in mind a certain Cartesian frame. But if we fix those components, then in any other frame the vector is determined uniquely. Absolutely uniquely! So a vector is something objective, but as soon as we specify its components in one frame we can find them in any other frame by the use of certain rules. We emphasize this because the situation is exactly the same with tensors. A tensor is an objective entity, and fixing its components relative to one frame, we determine the tensor uniquely, even though its components relative to other frames will in general be different.
2.9 Problems

2.1 Find the dual basis to $\mathbf{e}_i$:
(a) $\mathbf{e}_1 = 2\mathbf{i}_1 + \mathbf{i}_2 - \mathbf{i}_3$, $\mathbf{e}_2 = 2\mathbf{i}_2 + 3\mathbf{i}_3$, $\mathbf{e}_3 = \mathbf{i}_1 + \mathbf{i}_3$;
(b) $\mathbf{e}_1 = \mathbf{i}_1 + 3\mathbf{i}_2 + 2\mathbf{i}_3$, $\mathbf{e}_2 = 2\mathbf{i}_1 - 3\mathbf{i}_2 + 2\mathbf{i}_3$, $\mathbf{e}_3 = 3\mathbf{i}_1 + 2\mathbf{i}_2 + 3\mathbf{i}_3$;
(c) $\mathbf{e}_1 = \mathbf{i}_1 + \mathbf{i}_2$, $\mathbf{e}_2 = \mathbf{i}_1 - \mathbf{i}_2$, $\mathbf{e}_3 = 3\mathbf{i}_3$;
(d) $\mathbf{e}_1 = \cos\varphi\,\mathbf{i}_1 + \sin\varphi\,\mathbf{i}_2$, $\mathbf{e}_2 = -\sin\varphi\,\mathbf{i}_1 + \cos\varphi\,\mathbf{i}_2$, $\mathbf{e}_3 = \mathbf{i}_3$.

2.2 Let
$\tilde{\mathbf{e}}_1 = -2\mathbf{i}_1 + 3\mathbf{i}_2 + 2\mathbf{i}_3$, $\mathbf{e}_1 = 2\mathbf{i}_1 + \mathbf{i}_2 - \mathbf{i}_3$,
$\tilde{\mathbf{e}}_2 = -2\mathbf{i}_1 + 2\mathbf{i}_2 + \mathbf{i}_3$, $\mathbf{e}_2 = 2\mathbf{i}_2 + 3\mathbf{i}_3$,
$\tilde{\mathbf{e}}_3 = -\mathbf{i}_1 + \mathbf{i}_2 + \mathbf{i}_3$, $\mathbf{e}_3 = \mathbf{i}_1 + \mathbf{i}_3$.
Find the matrix $A_i^j$ of transformation from the basis $\tilde{\mathbf{e}}_i$ to the basis $\mathbf{e}_j$.

2.3 Let
$\tilde{\mathbf{e}}_1 = \mathbf{i}_1 + 2\mathbf{i}_2$, $\mathbf{e}_1 = \mathbf{i}_1 - 6\mathbf{i}_3$,
$\tilde{\mathbf{e}}_2 = -\mathbf{i}_2 - \mathbf{i}_3$, $\mathbf{e}_2 = -3\mathbf{i}_1 - 4\mathbf{i}_2 + 4\mathbf{i}_3$,
$\tilde{\mathbf{e}}_3 = -\mathbf{i}_1 + 2\mathbf{i}_2 - 2\mathbf{i}_3$, $\mathbf{e}_3 = \mathbf{i}_1 + \mathbf{i}_2 + \mathbf{i}_3$.
Find the matrix of transformation of the basis $\tilde{\mathbf{e}}_i$ to $\mathbf{e}_j$.

2.4 Find (a) $a^j \delta_j^k$, (b) $a^i a_j \delta_i^j$, (c) $\delta_i^i$, (d) $\delta_j^i \delta_k^j$, (e) $\delta_j^i \delta_i^j$, (f) $\delta_j^i \delta_k^j \delta_i^k$.

2.5 Show that $\epsilon_{ijk} \epsilon^{ijl} = 2\delta_k^l$.

2.6 Show that $\epsilon_{ijk} \epsilon^{ijk} = 6$.

2.7 Find (a) $\epsilon_{ijk} \delta^{jk}$, (b) $\epsilon_{ijk} \epsilon^{mkj} \delta_m^i$, (c) $\epsilon_{ijk} \delta_m^k \delta_n^j$, (d) $\epsilon_{ijk} a^i a^j$, (e) $\epsilon_{ijk} |\epsilon_{ijk}|$, (f) $\epsilon_{ijk} \epsilon^{imn} \delta_m^j$.

2.8 Find $(\mathbf{a} \times \mathbf{b}) \times \mathbf{c}$.

2.9 Show that $(\mathbf{a} \times \mathbf{b}) \cdot \mathbf{a} = 0$.

2.10 Show that $\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c})\,\mathbf{d} = (\mathbf{a} \cdot \mathbf{d})\,\mathbf{b} \times \mathbf{c} + (\mathbf{b} \cdot \mathbf{d})\,\mathbf{c} \times \mathbf{a} + (\mathbf{c} \cdot \mathbf{d})\,\mathbf{a} \times \mathbf{b}$.

2.11 Show that $(\mathbf{e} \times \mathbf{a}) \times \mathbf{e} = \mathbf{a}$ if $|\mathbf{e}| = 1$ and $\mathbf{e} \cdot \mathbf{a} = 0$.

2.12 Let $\mathbf{e}_k$ be a basis of $\mathbb{R}^3$, let $\mathbf{x} = x^k \mathbf{e}_k$, and suppose $h_1, h_2, h_3$ are fixed positive numbers. Show that $h_k |x^k|$ is a norm in $\mathbb{R}^3$.
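As a numerical sanity check of §2.4 (my own sketch, not part of the book), the following computes the transformation matrix for the bases of Problem 2.2 and applies the component law (2.10); the sample components x are hypothetical:

```python
import numpy as np

# Problem 2.2: rows are the Cartesian components of the basis vectors.
e_new = np.array([[-2., 3., 2.],     # e~_1
                  [-2., 2., 1.],     # e~_2
                  [-1., 1., 1.]])    # e~_3
e = np.array([[2., 1., -1.],         # e_1
              [0., 2., 3.],          # e_2
              [1., 0., 1.]])         # e_3

# e_i = A_i^j e~_j reads E = A E~ in matrix form, so A = E inv(E~).
A = e @ np.linalg.inv(e_new)

# The inverse table A~ satisfies A~ A = I (mutually inverse matrices).
A_inv = np.linalg.inv(A)
assert np.allclose(A_inv @ A, np.eye(3))

# Contravariant components transform with the transpose: x~ = A^T x, eq. (2.10).
x = np.array([1., 2., 3.])           # hypothetical components in the e_i frame
x_new = A.T @ x
```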

(Complete Version) Tensor Analysis: Chinese Translation

Tensors. A tensor is a geometric object that describes linear relations between vectors, scalars, and other tensors.

The most elementary examples of such relations are the dot product, the cross product, and linear maps.

Vectors and scalars are themselves tensors.

A tensor can be represented as a multidimensional array of numerical values.

The order (also called the degree or rank) of a tensor is the dimensionality of the array that represents it, i.e., the number of indices needed to label an element of the array.

For example, a linear map can be represented by a two-dimensional array, that is, a matrix, so it is a second-order tensor.

A vector can be represented by a one-dimensional array, so it is a first-order tensor.

A scalar is a single number, and is thus a zeroth-order tensor.

Tensors can describe correspondences between sets of geometric vectors.

For example, the Cauchy stress tensor T takes a direction v as input and produces the stress T(v) acting on the surface normal to v as output, thereby expressing a relationship between these two vectors, as shown in the figure on the right.

Because tensors express relationships between vectors, they must be independent of any special choice of coordinate system.

Given a basis of coordinate vectors, that is, a reference frame, a tensor can then be represented as an ordered multidimensional array.

The coordinate independence of a tensor takes the form of a "covariant" transformation law, which relates the array computed in one coordinate system to the array computed in another.

这 种变化规律演化成为几何或物理中的张量概念,其 精确形式决定了张量的类型或者是值。

Tensors are important in physics because, in fields such as elasticity, fluid mechanics, and general relativity, they provide a concise mathematical framework for formulating and solving physical problems.

The tensor concept was first introduced by Tullio Levi-Civita and Gregorio Ricci-Curbastro, who continued the earlier work on absolute differential calculus of Bernhard Riemann, Elwin Bruno Christoffel, and others.

The tensor concept made possible an alternative formulation of the intrinsic differential geometry of a manifold in the form of the Riemann curvature tensor.

History. The concepts of present-day tensor analysis arose from the work of Carl Friedrich Gauss in differential geometry, and their formulation was further influenced by the development of the theory of algebraic forms and invariants in the middle of the nineteenth century [2].

The word "tensor" itself was mentioned in 1846 by William Rowan Hamilton [3], though not with the meaning the word carries today.

[Note 1] The contemporary usage was introduced by Woldemar Voigt in 1898 [4].

"Tensor calculus" developed from Gregorio Ricci-Curbastro's work on absolute differential calculus around 1890, and was first presented by Ricci in 1892 [5].

Learn Tensor Analysis from a Single Document (Essential Reading)
➢ Component notation: $u_i$
Appendix A.1
Basic concepts of tensors
➢ How to use index notation
1. The coordinates (x, y, z) of an arbitrary point P in three-dimensional space can be abbreviated as $x_i$, where $x_1 = x$, $x_2 = y$, $x_3 = z$.
2. The dot product (scalar product) of the components of two vectors a and b is:

$\mathbf{a} \cdot \mathbf{b} = a_1 b_1 + a_2 b_2 + a_3 b_3 = \sum_{i=1}^{3} a_i b_i = a_i b_i$
3. The symbol $\delta_{ij}$ has a substitution property. For example:

$ds^2 = \delta_{ij}\,dx_i\,dx_j = dx_i\,dx_i = dx_j\,dx_j$

That is, if one of the two indices of the symbol $\delta$ coincides with an index of another factor in the same term, then that repeated index of the factor can be replaced by the other index of $\delta$, and $\delta$ itself disappears.
The symbols $\delta_{ij}$ and $e_{rst}$
Similarly,

$\delta_{ij} a_{jk} = a_{ik}; \quad \delta_{ij} a_{ik} = a_{jk}; \quad \delta_{ij} a_{kj} = a_{ki}; \quad \delta_{ij} a_{ki} = a_{kj}; \quad \delta_{ij} \delta_{jk} = \delta_{ik}; \quad \delta_{ij} \delta_{jk} \delta_{kl} = \delta_{il}$
$\delta_{ij} = \begin{cases} 1 & (i = j) \\ 0 & (i \neq j) \end{cases} \qquad (i, j = 1, 2, \ldots, n)$
➢ Properties

1. Symmetry: by definition the indices i and j are interchangeable, i.e.

$\delta_{ij} = \delta_{ji}$
2. The components of $\delta_{ij}$ form the unit matrix. For example, in three-dimensional space:

$\begin{pmatrix} \delta_{11} & \delta_{12} & \delta_{13} \\ \delta_{21} & \delta_{22} & \delta_{23} \\ \delta_{31} & \delta_{32} & \delta_{33} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$
Contents

Introduction; basic concepts of tensors; the Einstein summation convention
The symbols $\delta_{ij}$ and $e_{rst}$
Coordinates and coordinate transformations; transformation laws of tensor components; tensor equations
Tensor algebra; the quotient rule
Common special tensors; principal directions and principal components
Tensor functions and their differential and integral calculus
Appendix A
The symbols $\delta_{ij}$ and $e_{rst}$

➢ The symbol $\delta_{ij}$ (Kronecker delta)
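As an illustration of the summation convention and the substitution property of $\delta_{ij}$ (my own sketch, not part of the slides), `np.einsum` sums over repeated indices exactly as the convention prescribes:

```python
import numpy as np

delta = np.eye(3)            # components of delta_ij: the unit matrix
a = np.random.rand(3, 3)

# Substitution property: delta_ij a_jk = a_ik (the repeated j is summed away).
assert np.allclose(np.einsum('ij,jk->ik', delta, a), a)

# Contractions: delta_ii = 3 and delta_ij delta_jk = delta_ik.
assert np.isclose(np.einsum('ii->', delta), 3.0)
assert np.allclose(np.einsum('ij,jk->ik', delta, delta), delta)
```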

Introduction to Tensor Calculus and Continuum Mechanics (English edition, Part 1)

Introduction to Tensor Calculus and Continuum Mechanics, by J. H. Heinbockel, Department of Mathematics and Statistics, Old Dominion University.

PREFACE

This is an introductory text which presents fundamental concepts from the subject areas of tensor calculus, differential geometry and continuum mechanics. The material presented is suitable for a two semester course in applied mathematics and is flexible enough to be presented to either upper level undergraduate or beginning graduate students majoring in applied mathematics, engineering or physics. The presentation assumes the students have some knowledge from the areas of matrix theory, linear algebra and advanced calculus. Each section includes many illustrative worked examples. At the end of each section there is a large collection of exercises which range in difficulty. Many new ideas are presented in the exercises and so the students should be encouraged to read all the exercises.

The purpose of preparing these notes is to condense into an introductory text the basic definitions and techniques arising in tensor calculus, differential geometry and continuum mechanics. In particular, the material is presented to (i) develop a physical understanding of the mathematical concepts associated with tensor calculus and (ii) develop the basic equations of tensor calculus, differential geometry and continuum mechanics which arise in engineering applications. From these basic equations one can go on to develop more sophisticated models of applied mathematics. The material is presented in an informal manner and uses mathematics which minimizes excessive formalism.

The material has been divided into two parts. The first part deals with an introduction to tensor calculus and differential geometry which covers such things as the indicial notation, tensor algebra, covariant differentiation, dual tensors, bilinear and multilinear forms, special tensors, the Riemann Christoffel tensor, space curves, surface curves, curvature and fundamental quadratic forms. The second part emphasizes the application of tensor algebra and calculus to a wide variety of applied areas from engineering and physics.
The selected applications are from the areas of dynamics, elasticity, fluids and electromagnetic theory. The continuum mechanics portion focuses on an introduction of the basic concepts from linear elasticity and fluids. The Appendix A contains units of measurements from the Système International d'Unités along with some selected physical constants. The Appendix B contains a listing of Christoffel symbols of the second kind associated with various coordinate systems. The Appendix C is a summary of useful vector identities.

J. H. Heinbockel, 1996

Copyright © 1996 by J. H. Heinbockel. All rights reserved. Reproduction and distribution of these notes is allowable provided it is for non-profit purposes only.

INTRODUCTION TO TENSOR CALCULUS AND CONTINUUM MECHANICS

Part 1: Introduction to Tensor Calculus
§1.1 Index Notation; Exercise 1.1
§1.2 Tensor Concepts and Transformations; Exercise 1.2
§1.3 Special Tensors; Exercise 1.3
§1.4 Derivative of a Tensor; Exercise 1.4
§1.5 Differential Geometry and Relativity; Exercise 1.5

Part 2: Introduction to Continuum Mechanics
§2.1 Tensor Notation for Vector Quantities; Exercise 2.1
§2.2 Dynamics; Exercise 2.2
§2.3 Basic Equations of Continuum Mechanics; Exercise 2.3
§2.4 Continuum Mechanics (Solids); Exercise 2.4
§2.5 Continuum Mechanics (Fluids); Exercise 2.5
§2.6 Electric and Magnetic Fields; Exercise 2.6

Bibliography; Appendix A: Units of Measurement; Appendix B: Christoffel Symbols of Second Kind; Appendix C: Vector Identities; Index

Tensor Decomposition and the MATLAB Tensor Toolbox

Observe: For two vectors a and b, a ◦ b and a ⊗ b have the same elements, but one is shaped into a matrix and the other into a vector.
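A quick NumPy check of this observation (my own illustration; the slides give no code): `np.outer` realizes $a \circ b$ and `np.kron` realizes $a \otimes b$ for vectors:

```python
import numpy as np

a = np.array([1., 2., 3.])
b = np.array([4., 5.])

M = np.outer(a, b)    # a o b: a 3 x 2 matrix with M[i, j] = a[i] * b[j]
v = np.kron(a, b)     # a (x) b: a length-6 vector with the same six numbers

assert np.allclose(M.ravel(), v)   # same elements, different shape
```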
Proposed by Tucker (1966). AKA: three-mode factor analysis, three-mode PCA, orthogonal array decomposition. A, B, and C may be orthonormal (generally assumed to have full column rank). G is not diagonal. The decomposition is not unique.
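The slide describes the Tucker model only in words. Below is a minimal truncated higher-order SVD (HOSVD) sketch in NumPy, one standard way to compute such a decomposition; the Tensor Toolbox itself is MATLAB, and the helper names `unfold` and `hosvd` here are my own:

```python
import numpy as np

def unfold(X, n):
    return np.reshape(np.moveaxis(X, n, 0), (X.shape[n], -1), order='F')

def hosvd(X, ranks):
    """Truncated HOSVD: X ~ G x_1 A x_2 B x_3 C with orthonormal factors."""
    factors = []
    for n, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(X, n), full_matrices=False)
        factors.append(U[:, :r])            # leading left singular vectors
    # Core tensor: G = X x_1 A^T x_2 B^T x_3 C^T.
    G = np.einsum('ijk,ia,jb,kc->abc', X, *factors)
    return G, factors

X = np.random.rand(4, 5, 3)
G, (A, B, C) = hosvd(X, (2, 2, 2))
X_hat = np.einsum('abc,ia,jb,kc->ijk', G, A, B, C)   # Tucker reconstruction
```

As the slide notes, G is generally a full (non-diagonal) core; the factors here are orthonormal because they come from SVDs.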
A tensor is a multidimensional array.

[Slide figure: an I × J × K tensor shown with its column (mode-1), row (mode-2), and tube (mode-3) fibers.]
[Slide figure: a three-way "to × from × topic" adjacency tensor for link analysis, showing hub scores for the 1st topic and authority scores for the 2nd topic.]

Jon M. Kleinberg. Authoritative sources in a hyperlinked environment. J. ACM, 46(5):604–632, 1999. (doi:10.1145/324133.324140)

Gradient Shrinking Solitons with Vanishing Weyl Tensor

further that it has nonnegative Ricci curvature and the growth of the Riemannian curvature is not faster than $e^{a(r(x)+1)}$, where $r(x)$ is the distance function and $a$ is a suitable positive constant, then its universal cover is either $\mathbb{R}^n$, $S^n$, or $S^{n-1} \times \mathbb{R}$. This result had been improved by Petersen-Wylie (Theorem 1.2 and Remark 1.3 of [17]), in which they only need to assume that the Ricci curvature is bounded from below and the growth of the Ricci curvature is not faster than $e^{\frac{2}{5} c r(x)^2}$ outside of a compact set, where $c < \frac{\lambda}{2}$. We also notice that Cao-Wang [2] had an alternative proof of the Ni-Wallach result [15].

The key point in obtaining the above complete classification theorem for 3-dimensional complete gradient shrinking solitons without a curvature bound assumption is the local version of the Hamilton-Ivey pinching estimate. The Hamilton-Ivey pinching estimate in 3 dimensions plays a crucial role in the analysis of the Ricci flow. An open question is how to generalize Hamilton-Ivey's estimate to higher dimensions. In [20], the author obtained the following (global) Hamilton-Ivey type pinching estimate in higher dimensions: Suppose we have a solution to the Ricci flow on an $n$-dimensional manifold which is complete with bounded curvature and vanishing Weyl tensor for each $t \geq 0$. Assume that at $t = 0$ the least eigenvalue of the curvature operator at each point is bounded below by $\nu \geq -1$. Then at all points and all times $t \geq 0$ we have the pinching estimate

$R \geq (-\nu)\left[\log(-\nu) + \log(1+t) - \frac{n(n+1)}{2}\right]$

whenever $\nu < 0$. In the present paper, we will obtain a local version of this Hamilton-Ivey type pinching estimate for gradient shrinking solitons with vanishing Weyl tensor (without curvature bound). Based on this pinching estimate, we will obtain the following complete classification theorem (without any curvature bound assumption):

Theorem 1.2. Any complete gradient shrinking soliton with vanishing Weyl tensor must be a finite quotient of $\mathbb{R}^n$, $S^{n-1} \times \mathbb{R}$, or $S^n$.

This paper contains three sections and is organized as follows. In Section 2, we prove an algebraic lemma which will be used to prove the local version of the Hamilton-Ivey type pinching estimate. In Section 3, we give some propositions and finish the proof of Theorem 1.2.

Acknowledgement. I am indebted to my advisor Professor X. P. Zhu for provoking my interest in this problem and for many suggestions and discussions.

Decay Constants $f_{D_s^*}$ and $f_{D_s}$ from $\bar{B}^0 \to D^+ \ell^- \bar{\nu}$ and $\bar{B}$
we predict $f_{D_s^*} = 336 \pm 79\ \mathrm{MeV}$ and $f_{D_s^*}/f_{D_s} = 1.41 \pm 0.41$ for the (pole/constant)-type form factor.

PACS index: 12.15.-y, 13.20.-v, 13.25.Hw, 14.40.Nd, 14.65.Fy. Keywords: factorization, non-leptonic decays, decay constant, penguin effects.

One can determine $f_B$, $f_{B_s}$, $f_{D_s}$ and $f_{D_s^*}$ experimentally from leptonic $B$ and $D_s$ decays. For instance, the decay rate for $D_s^+$ is given by [1]

$\Gamma(D_s^+ \to \ell^+ \nu_\ell) = \frac{G_F^2}{8\pi}\, f_{D_s}^2\, m_\ell^2\, M_{D_s} \left(1 - \frac{m_\ell^2}{M_{D_s}^2}\right)^2. \qquad (4)$

In the zero lepton-mass limit, $0 \leq q^2 \leq (m_B - m_D)^2$. For the $q^2$ dependence of the form factors, Wirbel et al. [8] assumed a simple pole formula for both $F_1(q^2)$ and $F_0(q^2)$ (we designate this scenario "pole/pole"):

$F_1(q^2) = \frac{F_1(0)}{1 - q^2/m_{F_1}^2}, \qquad (5)$

... amount to about 11% for $B \to D D_s$ and 5% for $B \to D D_s^*$, which have been mentioned in

Tensor Permutation Matrices in Finite Dimensions

which has the following property: for any one-column, two-row matrices

$[\alpha] = \begin{pmatrix} \alpha^1 \\ \alpha^2 \end{pmatrix}, \quad [\beta] = \begin{pmatrix} \beta^1 \\ \beta^2 \end{pmatrix} \in M_{2\times 1}(K),$

$[U_{2\otimes 2}] \cdot ([\alpha] \otimes [\beta]) = [\beta] \otimes [\alpha].$

[A worked example displaying the elements $M^{i_1 i_2 i_3}_{j_1 j_2 j_3}$ of a tensor $[M]$ and the explicit matrix $[U_{3\otimes 3}]$ appeared here; its layout is not recoverable from the source.]
algebra and multilinear algebra [6], by first establishing the theorems on linear operators in an intrinsic way, that is, independently of the basis, and after that demonstrating the analogous theorems for matrices. Define $[U_{n\otimes p}]$ as the tensor commutation matrix $n \otimes p$, $n, p \in \mathbb{N}^\star$, whose elements are 0 or 1. In this article we have given two ways to construct $[U_{n\otimes p}]$ for any $n$ and $p \in \mathbb{N}^\star$, and we have constructed a formula which allows us to build the tensor permutation matrix $[U_{n_1\otimes n_2\otimes\ldots\otimes n_k}(\sigma)]$, $(n_1, n_2, \ldots, n_k) \in \mathbb{N}^\star$, together with a formula which gives the expression of their elements. From (1) it is natural to ask about the expression of $[U_{3\otimes 3}]$ in terms of Gell-Mann matrices. But first we must say a bit about the definitions of the types of matrices involved, and after that we will set out the properties of the tensor product. Define $[I_n]$ as the $n \times n$ unit matrix. For vectors and covectors we have used RAOELINA ANDRIAMBOLOLONA's notations [7], with overlining for vectors, $\overline{x}$, and underlining for covectors, $\underline{\varphi}$. Throughout this article $K = \mathbb{R}$ or $\mathbb{C}$.
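A minimal NumPy sketch (my own; it assumes column vectors stored in the ordering used by `np.kron`) of constructing the tensor commutation matrix $[U_{n\otimes p}]$ and checking the defining property $U_{n\otimes p}(\overline{a} \otimes \overline{b}) = \overline{b} \otimes \overline{a}$:

```python
import numpy as np

def commutation_matrix(n, p):
    """The 0/1 matrix [U_{n(x)p}] with U (a (x) b) = b (x) a for a in K^n, b in K^p."""
    U = np.zeros((n * p, n * p))
    for i in range(n):
        for j in range(p):
            # a(x)b stores a_i b_j at row i*p + j; b(x)a stores it at row j*n + i.
            U[j * n + i, i * p + j] = 1.0
    return U

a = np.random.rand(2)
b = np.random.rand(3)
U = commutation_matrix(2, 3)
assert np.allclose(U @ np.kron(a, b), np.kron(b, a))
```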

Tensor Analysis, Lecture 10
In a Lagrangian coordinate system, the material derivative of a physical quantity is obtained simply by holding the coordinates $\xi^i$ fixed and differentiating with respect to time $t$. In an Eulerian coordinate system, the material derivative must account not only for the change of the quantity itself with time, but also for the change of the position coordinates $x^i$ caused by the motion of the material particle. A Lagrangian coordinate system is in general curvilinear, while an Eulerian coordinate system can be taken to be rectangular Cartesian; derivations are therefore usually carried out in Lagrangian coordinates and then converted to Eulerian coordinates for computation.
$\frac{d\hat{\mathbf{g}}_i}{dt} = \hat{\mathbf{g}}_i \cdot (\nabla\mathbf{v}) = \hat{\mathbf{g}}_i \cdot \mathbf{E} + \boldsymbol{\omega} \times \hat{\mathbf{g}}_i$

$\frac{d\hat{\mathbf{g}}^i}{dt} = -(\nabla\mathbf{v}) \cdot \hat{\mathbf{g}}^i = -\hat{\mathbf{g}}^i \cdot (\mathbf{E} - \boldsymbol{\Omega}) = -\hat{\mathbf{g}}^i \cdot \mathbf{E} + \boldsymbol{\omega} \times \hat{\mathbf{g}}^i$

where $\mathbf{E}$ and $\boldsymbol{\Omega}$ are the symmetric and antisymmetric parts of the velocity gradient, and $\boldsymbol{\omega}$ is the associated angular velocity vector.
4.3 Material derivative of the base vectors of the Eulerian coordinate system
2. Lagrangian coordinates: $\mathbf{r} = \mathbf{r}(x^i(t))$, with base vectors

$\mathbf{g}_i = \frac{\partial \mathbf{r}(x^j(t))}{\partial x^i}$

Lagrangian coordinates are embedded in the material particles and move and deform together with the body; they are also called convected or embedded coordinates. The coordinate values of a given particle do not change during deformation, but the distance between two particles A and B does change.
$\frac{d\hat{\mathbf{T}}}{dt} = \frac{d}{dt}\left(\hat{T}^{ij}\,\hat{\mathbf{g}}_i \hat{\mathbf{g}}_j\right) = \frac{d\hat{T}^{ij}}{dt}\,\hat{\mathbf{g}}_i \hat{\mathbf{g}}_j + \hat{T}^{ij}\,\frac{d\hat{\mathbf{g}}_i}{dt}\,\hat{\mathbf{g}}_j + \hat{T}^{ij}\,\hat{\mathbf{g}}_i\,\frac{d\hat{\mathbf{g}}_j}{dt}$

$d\hat{\mathbf{r}} = \left(\frac{\partial \hat{\mathbf{r}}}{\partial \xi^i}\right)_t d\xi^i = d\xi^i\,\hat{\mathbf{g}}_i, \qquad \hat{\mathbf{g}}_i = \left(\frac{\partial \hat{\mathbf{r}}(\xi^j, t)}{\partial \xi^i}\right)_t$
[Figure: embedded coordinate curves $\xi^i$ with base vectors $\mathbf{g}_i$ at particles A and B before deformation, and $\hat{\mathbf{g}}_i$ at A′ and B′ after deformation.]

Fundamentals of Tensor Analysis

Representation of a second-order tensor
$\begin{pmatrix} P_1 \\ P_2 \\ P_3 \end{pmatrix} = \begin{pmatrix} T_{11} & T_{12} & T_{13} \\ T_{21} & T_{22} & T_{23} \\ T_{31} & T_{32} & T_{33} \end{pmatrix} \begin{pmatrix} Q_1 \\ Q_2 \\ Q_3 \end{pmatrix}$
Dummy indices must appear in pairs.
Einstein summation convention: dummy-index notation
$P_i = \sum_{j=1}^{3} T_{ij} Q_j = T_{ij} Q_j \qquad (i = 1, 2, 3;\ i, j = 1, 2, 3)$
$\begin{pmatrix} x_1^* \\ x_2^* \\ x_3^* \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}$
Neumann's principle
Matter tensors and field tensors

- Matter tensors describe the physical properties that relate a crystal's response to an applied field; they are constrained by the crystal's symmetry (e.g., the elastic constants).
- Field tensors are the applied-field quantities and the new physical quantities produced by the crystal's response to the field; they are not constrained by the crystal's symmetry (e.g., stress and electric field).
- The crystal's response is jointly determined by the applied field, the physical property, and the crystal symmetry (e.g., strain).
- The coefficients of a quadric surface equation transform in the same way as the components of a symmetric second-order tensor.
- The quadric surface is therefore called the representation quadric of the tensor S.
- The representation quadric can describe any physical property having the character of a symmetric second-order tensor.
Principal axes of the representation quadric
Principal-axis form of the quadric equation:

$S_1 x_1^2 + S_2 x_2^2 + S_3 x_3^2 = 1$

analogous to the standard ellipsoid

$\frac{x^2}{a^2} + \frac{y^2}{b^2} + \frac{z^2}{c^2} = 1$
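To make the principal-axis reduction concrete (my own sketch, not from the slides): the principal components and principal directions of a symmetric second-order tensor S are its eigenvalues and eigenvectors, and in the eigenvector frame the quadric takes the diagonal form above. The tensor S below is hypothetical:

```python
import numpy as np

# A hypothetical symmetric second-order property tensor S_ij.
S = np.array([[3.0, 0.5, 0.0],
              [0.5, 2.0, 0.0],
              [0.0, 0.0, 1.0]])

# Principal components (eigenvalues) and principal axes (eigenvectors).
S_principal, axes = np.linalg.eigh(S)

# In the principal-axis frame the quadric x^T S x = 1 loses its cross terms.
assert np.allclose(axes.T @ S @ axes, np.diag(S_principal))
# For positive S_i the semi-axes of the quadric have lengths 1/sqrt(S_i).
```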
Neumann's principle

- The symmetry of any physical property of a crystal must include the symmetry of the crystal's point group, i.e., $G_{\text{property}} \supseteq G_{\text{point group}}$.
- Example 1: the dielectric constant of a crystal in the cubic system may be isotropic.
- Example 2: the dielectric constant of a crystal in the cubic system cannot have only a single four-fold symmetry axis.
Constraints of crystal symmetry on symmetric second-order tensors

Tensor Products

How to lose your fear of tensor products

If you are not in the slightest bit afraid of tensor products, then obviously you do not need to read this page. However, if you have just met the concept and are like most people, then you will have found them difficult to understand. The aim of this page is to answer three questions:

1. What is the point of tensor products?
2. Why are they defined as they are?
3. How should one answer questions involving them?

Why bother to introduce tensor products?

One of the best ways to appreciate the need for a definition is to think about a natural problem and find oneself more or less forced to make the definition in order to solve it. Here, then, is a very basic question that leads, more or less inevitably, to the notion of a tensor product. (If you really want to lose your fear of tensor products, then read the question and try to answer it for yourself.)

Let V, W and X be vector spaces over R. (What I have to say works for any field F, and in fact under more general circumstances as well.) A function f:VxW-->X is called bilinear if it is linear in each variable separately. That is,

f(av+bv',w)=af(v,w)+bf(v',w) and f(v,cw+dw')=cf(v,w)+df(v,w')

for all possible choices of a,b,c,d,v,v',w,w'. I shall take it for granted that bilinear maps are worth knowing about (they crop up all over the place) and try to justify tensor products given that assumption.

Now, bilinear maps are clearly related to linear maps, and there are questions one can ask about linear maps that one can also ask about bilinear ones. For example, if f:V-->W is a linear map between finite-dimensional vector spaces V and W, then one thing we like to do is encode it using a collection of numbers. The usual way to do this is to take bases of V and W and define a matrix A. To obtain the j th column of this matrix, one takes the j th basis vector e_j of V, writes f(e_j) as a linear combination of the vectors in the basis of W, and uses those coefficients. The reason that the matrix encodes the linear map is that if you know f(e_j) for every j then you know f: if v is a linear combination of the e_j then f(v) is the corresponding linear combination of the f(e_j).

This suggests two questions about bilinear maps:

1. Can bilinear maps be encoded in a natural way using just a few real numbers?
2. Let V, W and X be finite-dimensional vector spaces, let f:VxW-->X be a bilinear map, and let {(v_i,w_i): i=1,2,...,n} be a collection of pairs of vectors in VxW. When is f completely determined by the values of the f(v_i,w_i)?

The first question has an easy answer. Pick bases of V, W and X. If you know f(v_i,w_j) whenever v_i and w_j are basis vectors then you know f(v,w) for all pairs (v,w) in VxW, by bilinearity. But each f(v_i,w_j) is a vector in X and can therefore be written in terms of the basis of X. Thus, if the dimensions of V, W, and X are p, q and r respectively, it is enough to specify pqr numbers (in a sort of 3-dimensional matrix) in order to specify f. Furthermore, it is not hard to see that every p-by-q-by-r grid of numbers specifies a bilinear map: the number in position (i,j,k) tells us the k th coordinate of f(v_i,w_j).

This observation provides a partial answer to the second question as well. If the pairs (v_i,w_i) run over all pairs (e_i,f_j), where (e_i) and (f_j) are bases of V and W, then the values of f(v_i,w_i) determine f. However, this is not the only way to fix f. For example, let V=W=R^2 and let X=R. Let e_1=(1,0) and e_2=(0,1).
If you know the values of f(e_1,e_1), f(e_2,e_2), f(e_1+e_2,e_1+e_2) and f(e_1+e_2,e_1+2e_2) then you know f. Why? Well,

f(e_1+e_2,e_1+e_2) - f(e_1,e_1) - f(e_2,e_2) = f(e_1,e_2) + f(e_2,e_1)

and

f(e_1+e_2,e_1+2e_2) - f(e_1,e_1) - 2f(e_2,e_2) = 2f(e_1,e_2) + f(e_2,e_1),

which allows us to work out f(e_1,e_2) and f(e_2,e_1), and hence determines f. On the other hand, f is not determined by f(e_1,e_1), f(e_2,e_2), f(e_1+e_2,e_1+e_2) and f(e_1-e_2,e_1-e_2). For example, if these values are all 0, then f could be identically 0 or it could be defined by f((a,b),(c,d))=ad-bc.

How, then, are we to say which sets of pairs fix f and which do not? This is the point at which, if you do not know the answer already, I would suggest reading no further and trying to work it out for yourself.

There is no doubt that what we are looking for is something like a 'basis' of pairs (v,w) in VxW. This should be 'independent' in the sense that the value of f on any pair cannot be deduced from the value of f on the other pairs (or, equivalently, we can choose the values of f at the pairs however we like), and 'spanning' in the sense discussed above: f is determined by its values on the given pairs. Equally, there is no doubt that we are not looking for a basis of the vector space VxW itself. For example, if V and W are both R, then (1,0) and (0,1) form a basis of VxW, but to be told the values of a bilinear map f:RxR-->R at (1,0) and (0,1) is to be told nothing at all, since they have to be 0.

To get a feel for what is happening, let us solve the problem in this special case (that is, when V=W=R). Suppose we know the value of f(a,b). We have just seen that this information is useless if either a or b is zero, but otherwise it completely determines f, since f(x,y)=(xy/ab)f(a,b).

Perhaps that was too simple for any generalization to suggest itself, so let us try V=W=R^2. Suppose that we know the value of f(v,w). That tells us f(av,bw) for any pair of scalars a and b, so if we want to introduce a second pair (v',w') which is independent of the first (not that we know quite what this means) then we had better make sure that either v' is not a multiple of v or w' is not a multiple of w. If we have done that, then what can we deduce from the values of f(v,w) and f(v',w')? Well, we have the values of all f(cv',dw'), but unless v' is a multiple of v or w' is a multiple of w it seems to be difficult to deduce much else, because in order to use the fact that f(x,y+z)=f(x,y)+f(x,z) or f(x+y,z)=f(x,z)+f(y,z) we need to have one coordinate kept constant.

One thing we can say, which doesn't determine other values of f but at least places some restriction on them, is that

f(v+v',w+w')=f(v,w)+f(v,w')+f(v',w)+f(v',w').

Since we know f(v,w) and f(v',w'), this means that f(v,w'), f(v',w) and f(v+v',w+w') cannot be freely and independently chosen: once you've chosen two of them it fixes the third. But this isn't terribly exciting, so let's look at more pairs.

Because subscripts and superscripts are a nuisance in html, I shall now change notation, and imagine that we know the values of f(s,t), f(u,v), f(w,x) and f(y,z). If s, u and w are all multiples of a single vector, then some of this information is redundant, since t, v and x are not linearly independent (they live in R^2). So let us suppose that no three of the first vectors are multiples and no three of the second are.
It follows easily that we can take two pairs, without loss of generality (s,t) and (u,v), and assume that s and u are linearly independent, and that so are t and v. We can now write w=as+bu, x=ct+dv, y=es+gu, z=ht+kv, and we know, by bilinearity, that

f(w,x)=acf(s,t)+adf(s,v)+bcf(u,t)+bdf(u,v)

and

f(y,z)=ehf(s,t)+ekf(s,v)+ghf(u,t)+gkf(u,v).

Since we know f(s,t), f(u,v), f(w,x) and f(y,z), this gives us two linear equations for f(s,v) and f(u,t). They will have a unique solution as long as adgh does not equal bcek. In this case we have determined f completely, since

f(a's+b'u,c't+d'v)=a'c'f(s,t)+a'd'f(s,v)+b'c'f(u,t)+b'd'f(u,v),

the pair on the left hand side can be anything and we know all about values of f on the right hand side.

Notice that if adgh does equal bcek then an appropriate linear combination of the above two equations (ek times the first minus ad times the second) gives a linear equation that is automatically satisfied by f(s,t), f(u,v), f(w,x) and f(y,z). In other words, the pairs (s,t), (u,v), (w,x) and (y,z) are not 'independent', in the sense that f of three of them determines f of the fourth.

So now we understand the case V=W=R^2 reasonably well, but it is not quite obvious how to generalize the above argument to arbitrary spaces V and W. Before we try, let us think a little further about what we have already proved, and how we did it. In particular, let us see if we can be more specific about 'independence' of pairs in VxW.

Why, for instance, did we say that (s,t), (u,v), (w,x) and (y,z) were not 'independent' when adgh=bcek? Was there some 'linear combination' that gave a 'dependence'? Well, I mentioned that there was a linear equation automatically satisfied by f(s,t), f(u,v), f(w,x) and f(y,z). To be specific, it is

ekf(w,x)-adf(y,z)=(ekac-adeh)f(s,t)+(ekbd-adgk)f(u,v).

This looks like a linear dependence between f(w,x), f(y,z), f(s,t) and f(u,v), but it isn't quite that, because these are just four real numbers, and we are trying to say something more interesting than that the dimension of R is less than 4. What we are saying is more like this: if f is an arbitrary bilinear function, then the above linear equation will always be satisfied.

How can we express that statement in terms of straightforward linear algebra? Here are a few suggestions.

A first way of making sense of 'independence' of pairs.

We could think of f, in an expression like f(u,v), as standing for all bilinear functions at once. So then we would make a statement like f(u,v)=2f(w,x) only if this equation was true for every f, rather than for some specific f. We could even make this formal in a rather naughty way as follows. Let B be the set of all bilinear maps defined on VxW. (That's the naughtiness: B is too big to be a set, but actually we will see in a moment that it is enough to look just at bilinear maps into R.) Now regard (u,v) as a function defined on B: if f is a map in B then (u,v)(f) is just f(u,v). Then the statement that f(u,v) always equals 2f(w,x) is the statement that (u,v), considered as a function on B, is twice (w,x), considered as a function on B.

Just so that we don't have to keep writing the phrase 'considered as a function on B', let us invent some notation. When I want to think of (u,v) as a function on B I'll write [u,v] instead. So now the dependence that I wrote earlier becomes

ek[w,x]-ad[y,z]=(ekac-adeh)[s,t]+(ekbd-adgk)[u,v].

This dependence really is genuine linear dependence, in the vector space of functions from B to ...
well, some rather complicated big sum of vector spaces or something. Instead of bothering to sort out that little difficulty, let's see why it is in fact enough to let B be the set of all bilinear functions to R, otherwise known as bilinear forms.

Reducing to the case of bilinear maps into R.

Here again there is an analogy with linear maps. Suppose that V and W are finite-dimensional vector spaces and f:V-->W is a linear map. Let w_1,...,w_m be a basis for W. If we write vectors in W in coordinate form using this basis, then we will write f(v) as (f_1v,...,f_mv), and in that way we see that a linear map to an m-dimensional vector space can be thought of as a sequence of m linear maps to R. Exactly the same is true of a bilinear map f:VxW-->X if X is m-dimensional: we can think of it as a sequence (f_1,...,f_m) of bilinear maps from VxW to R. From this observation we can make the following simple deduction. If

a_1f(v_1,w_1)+...+a_nf(v_n,w_n)=0

for every bilinear map f:VxW-->R, then it is zero for every bilinear map from VxW to any finite-dimensional vector space X.

In fact, one can even do away with the condition that X should be finite-dimensional, as follows. If f:VxW-->X is a bilinear map such that

a_1f(v_1,w_1)+...+a_nf(v_n,w_n)=x

for some non-zero vector x, then let g be a linear map from X to R such that g(x) is not zero. The existence of this map can be proved as follows. Using the axiom of choice, one can show that the vector x can be extended to a basis of X. Let g(x)=1, let g(y)=0 for the other basis vectors y, and extend linearly. Once we have g, we have a bilinear map gf:VxW-->R such that

a_1gf(v_1,w_1)+...+a_ngf(v_n,w_n)

is non-zero. This use of the axiom of choice was not very pleasant, but we shall soon see that it can be avoided.

Back to the main discussion.

Let us therefore redefine B to be the set of all bilinear maps from VxW to R, and regard [v,w] as notation for the function from B to R defined by [v,w](f)=f(v,w). (This is the same definition as before, apart from the restriction of B to real-valued bilinear maps.) Now that [v,w] is a function from B to R it is completely clear in what sense it lives in a vector space, what is meant by linear dependence and so on. Everything takes place inside the vector space of all real-valued functions on B.

The idea of defining functions such as [v,w] was that it gave us a way of reformulating our original problem (when does a set of pairs (v_i,w_i) fix all bilinear maps?) in terms of concepts we know from linear algebra: it fixes all bilinear maps if and only if the vector space spanned by the functions [v_i,w_i] contains all functions of the form [v,w]. Moreover, the set of pairs contains no redundancies if and only if the functions [v_i,w_i] are linearly independent.

A second way of converting the problem into linear algebra.

Have we really said anything at all? It might seem not. After all, if we are given a set of pairs (v_i,w_i) and asked whether the corresponding functions [v_i,w_i] span all functions [v,w], we will find ourselves making exactly the same calculations that we would make if instead we asked whether the values of f(v_i,w_i), for some unknown bilinear function f, determined the value of f(v,w). However, turning the problem into linear algebra does clarify it somewhat, especially if we can say something about the vector space generated by the functions [v,w] (which contains all the functions on B we will ever need to worry about). It also gives us a more efficient notation. Let me write down a few facts about this vector space.

1. [v,w+w']=[v,w]+[v,w']
2. [v+v',w]=[v,w]+[v',w]
3. [av,w]=a[v,w]
4. [v,aw]=a[v,w]

The above relations hold for all vectors v,v' in V and w,w' in W, and all real numbers a. Now if we regard an equation like [v,w+w']=[v,w]+[v,w'] as nothing but a shorthand for the statement that f(v,w+w')=f(v,w)+f(v,w') for all bilinear maps f, then it would seem that everything about the space spanned by the functions [v,w] ought to follow from these four facts. After all, the linear relationships between the functions [v,w] are supposed to be the ones that we can deduce about the corresponding vectors f(v,w) when all we know about f is that it is bilinear. So they ought to follow from the axioms for bilinearity, which translate into facts 1-4 above.

Let us try to make this hunch precise and prove it. To do this we must ask ourselves which linear dependences can be deduced from 1-4, and then whether those are all the linear relationships that hold for every bilinear function. In a moment I will express this less wordily, but for now let us note that the first question is easy. Facts 1-4 give us four linear equations, which we can make a bit more symmetrical by rewriting them as

1. [v,w+w']-[v,w]-[v,w']=0
2. [v+v',w]-[v,w]-[v',w]=0
3. [av,w]-a[v,w]=0
4. [v,aw]-a[v,w]=0

Which other linear combinations of pairs must be zero if these ones are? Answer: all linear combinations of these combinations, and nothing else. This is because the only method we have of deducing linear equations from other linear equations is forming linear combinations of those equations.

There was something not quite satisfactory about what I have just said. It does seem to be true, but let us try to state and prove it more mathematically, without appealing to phrases like 'method of deducing'. It is certainly clear that if

a_1[v_1,w_1]+a_2[v_2,w_2]+...+a_n[v_n,w_n]

is a linear combination of functions of the forms on the left-hand sides of 1-4 (in the second version), then

a_1f(v_1,w_1)+a_2f(v_2,w_2)+...+a_nf(v_n,w_n)=0

for every bilinear function f. What is not quite so obvious is the converse: that for every other function of the form

a_1[v_1,w_1]+a_2[v_2,w_2]+...+a_n[v_n,w_n]

we can find some bilinear map f such that

a_1f(v_1,w_1)+a_2f(v_2,w_2)+...+a_nf(v_n,w_n)

does not equal 0. However, it turns out that there is an almost trivial way to show this. For every pair (v,w) in VxW, regard [[v,w]] as a meaningless symbol. We can define a rather large vector space Z by taking formal linear combinations of these symbols. By that I mean that Z consists of all expressions of the form

a_1[[v_1,w_1]]+a_2[[v_2,w_2]]+...+a_n[[v_n,w_n]]

with obvious definitions for addition and scalar multiplication.

Next, we let E be the subspace of Z generated by all vectors of one of the following four forms (which you should be able to guess):

1. [[v,w+w']]-[[v,w]]-[[v,w']]
2. [[v+v',w]]-[[v,w]]-[[v',w]]
3. [[av,w]]-a[[v,w]]
4. [[v,aw]]-a[[v,w]]

We want everything in E to 'be zero', in some appropriate sense. The standard way to make that happen is to take a quotient space Z/E. (If you are hazy about the definition, here is a reminder. Two vectors z and z' in Z are regarded as equivalent if z-z' belongs to E. The vectors in Z/E are equivalence classes, which can be written in the form z+E. That is, if K is an equivalence class and z belongs to K, then it is easy to see that K={z+e: e is in E}=z+E. Addition and scalar multiplication are defined by (z+E)+(z'+E)=z+z'+E and a(z+E)=az+E.
It is not hard to check that these are well defined, that is, independent of the particular choices of z and z'.)

Why does this quotient space Z/E help us? Because it gives us a trivial proof of the assertion we wanted before. If

a_1[v_1,w_1]+a_2[v_2,w_2]+...+a_n[v_n,w_n]

is not a linear combination of expressions of the form

1. [v,w+w']-[v,w]-[v,w']
2. [v+v',w]-[v,w]-[v',w]
3. [av,w]-a[v,w]
4. [v,aw]-a[v,w]

then trivially

z=a_1[[v_1,w_1]]+a_2[[v_2,w_2]]+...+a_n[[v_n,w_n]]

is not a linear combination of vectors of the form

1. [[v,w+w']]-[[v,w]]-[[v,w']]
2. [[v+v',w]]-[[v,w]]-[[v',w]]
3. [[av,w]]-a[[v,w]]
4. [[v,aw]]-a[[v,w]]

In other words, z does not belong to the subspace E. In other words again, z+E is not zero in the quotient space Z/E. To complete the proof, it is enough to find a bilinear map f from VxW to Z/E such that

a_1f(v_1,w_1)+a_2f(v_2,w_2)+...+a_nf(v_n,w_n)=z+E

and in particular is non-zero. What is the most obvious map one can possibly think of? Well, f(v,w)=[[v,w]]+E seems a good bet. Is it bilinear? Yes, by the way we designed E. For example, f(v,w+w')=[[v,w+w']]+E, and since [[v,w+w']]-[[v,w]]-[[v,w']] belongs to E, we may deduce that

f(v,w+w')=[[v,w]]+[[v,w']]+E=([[v,w]]+E)+([[v,w']]+E)=f(v,w)+f(v,w').

To sum up, we have just proved the following (not very surprising) proposition.

Proposition. A linear combination of functions of the form [v,w] is zero if and only if it is generated by functions of the form [av,w]-a[v,w], [v,aw]-a[v,w], [v,w+w']-[v,w]-[v,w'] and [v+v',w]-[v,w]-[v',w].

How to think about tensor products.

What has all this to do with tensor products? Now is the time to admit that I have already defined tensor products, in two different ways. They are a good example of the phenomenon discussed in my page about definitions: exactly how they are defined is not important; what matters is the properties they have.

The usual notation for the tensor product of two vector spaces V and W is V followed by a multiplication symbol with a circle round it followed by W. Since this is html, I shall write V@W instead, and a typical element of V@W will be a linear combination of elements written v@w. You can regard v@w as an alternative notation for [v,w], or for [[v,w]]+E; it doesn't matter which, as the above discussion shows that the space spanned by [v,w] is isomorphic to the space spanned by [[v,w]]+E, via the (well-defined) linear map that takes [v,w] to [[v,w]]+E and extends linearly.

A tempting mistake for beginners is to think that every element of V@W is of the form v@w, but this is just plain false. For example, if v, v', w and w' are vectors with no linear dependences, then v@w+v'@w' cannot be written in that form. (If it could then there would be some pair (v'',w'') such that a bilinear map f satisfied f(v'',w'')=0 if and only if f(v,w)+f(v',w')=0. It is not hard to convince yourself that there is no such pair; indeed I more or less proved it in the discussion above about two-dimensional spaces.)

Another tempting mistake is to pay undue attention to how tensor products are constructed. I should say that the standard construction is the second one I gave, that is, the quotient space. Suppose that we try to solve problems by directly using this definition, or rather construction. They suddenly seem rather hard. For example, let v' and w' be non-zero vectors in V and W. How can we show that v'@w' is not zero? Well, to do so directly from the quotient space definition, we need to show that the pair [[v',w']] does not belong to the space E defined earlier.
In order to prove that, we somehow need to find a property of [[v',w']] that is not shared by any linear combination of vectors of the four forms listed above.

Let us ask ourselves a very general question: how does one ever show that a certain point v does not lie in a certain subspace W of a vector space? If the space is R^n and we are given a basis of the subspace, then our task is to show that a system of linear equations has no solution. In a more abstract set-up, the natural method - in fact, more or less the only method - is to find a linear map T from V to some other vector space (R will always be possible) such that Tv is not zero but Tw is zero for every w in W.

Returning to the example at hand, can we find a linear map that sends everything in E to zero and [[v',w']] to something non-zero? Let us remind ourselves of our earlier proposition.

Proposition (repeated)

A linear combination of functions of the form [v,w] is zero if and only if it is generated by functions of the form [av,w]-a[v,w], [v,aw]-a[v,w], [v,w+w']-[v,w]-[v,w'] and [v+v',w]-[v,w]-[v',w].

That gives us an obvious map that takes everything in E to zero: just map [[v,w]] to [v,w] and extend linearly. So we are then done if [v',w'] is non-zero. But for [v',w'] not to be zero, all we have to do is come up with a bilinear map f from VxW to R such that f(v',w') is not zero. To do this, extend the singletons {v'} and {w'} to bases of V and W, and for any pair of basis vectors (x,y) let f(x,y) = 0 unless x = v' and y = w', in which case let f(x,y) = 1. Then extend f bilinearly.

Well, we have solved the problem, but we didn't really do it directly from the quotient-space definition. Indeed, we got out of the quotient space as quickly as we could. How much simpler it would have been to start thinking immediately about bilinear functions. In order to show that v@w is non-zero, we could have regarded it as [v,w] instead, and instantly known that v@w is non-zero if and only if there is a bilinear function f defined on VxW such that f(v,w) does not vanish.

So, here is a piece of advice for interpreting a linear equation involving expressions of the form v@w. Do not worry about what the objects themselves mean, and instead use the fact that

a1 v1@w1 + a2 v2@w2 + ... + an vn@wn = 0

if and only if

a1 f(v1,w1) + a2 f(v2,w2) + ... + an f(vn,wn) = 0

for every bilinear function f defined on VxW. (We proved this earlier, except that instead of vi@wi we wrote [vi,wi].)
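In finite dimensions this fact has a very concrete incarnation, which is worth experimenting with. Choosing bases, every bilinear function f on VxW is f(v,w) = v^T F w for some matrix F, and a1 f(v1,w1) + ... + an f(vn,wn) is then the entrywise (Frobenius) pairing of F with the matrix a1 v1 w1^T + ... + an vn wn^T. So the combination vanishes for every bilinear f exactly when that sum of outer products is the zero matrix; the matrix is a faithful model of the element a1 v1@w1 + ... + an vn@wn. Here is a minimal numpy sketch of this (an added illustration, not part of the original argument):

```python
import numpy as np

rng = np.random.default_rng(0)

def tensor_element(coeffs, vs, ws):
    """Model sum_i a_i (v_i @ w_i) concretely as the matrix sum_i a_i v_i w_i^T."""
    return sum(a * np.outer(v, w) for a, v, w in zip(coeffs, vs, ws))

def apply_bilinear(F, coeffs, vs, ws):
    """Compute sum_i a_i f(v_i, w_i) for the bilinear map f(v, w) = v^T F w."""
    return sum(a * v @ F @ w for a, v, w in zip(coeffs, vs, ws))

v, vp = np.array([1.0, 0.0]), np.array([0.0, 1.0])
w, wp = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# v@w + v'@w' is not of the form x@y: its matrix has rank 2, not rank <= 1.
M = tensor_element([1, 1], [v, vp], [w, wp])
print(np.linalg.matrix_rank(M))  # 2

# The combination [v, w+w'] - [v, w] - [v, w'] is killed by every bilinear f ...
Z = tensor_element([1, -1, -1], [v, v, v], [w + wp, w, wp])
print(np.allclose(Z, 0))  # True: its matrix model is zero

# ... including a randomly chosen one.
F = rng.standard_normal((2, 2))
print(np.isclose(apply_bilinear(F, [1, -1, -1], [v, v, v], [w + wp, w, wp]), 0))  # True
```

The matrix rank even detects the simple elements: v@w corresponds to a rank-one matrix, which is another way of seeing that v@w + v'@w' above, of rank two, is not of that form.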
Now algebraists have a more grown-up way of saying all this, which runs as follows. Here is a sentence from earlier in this page:

What we are saying is more like this: if f is an arbitrary bilinear function, then the above linear equation will always be satisfied.

It would be nice if there were a bilinear map g on VxW that was so `generic' that we could regard it itself as an `arbitrary' bilinear map. But there is such a map, and we have more or less defined it. It takes (v,w) to v@w. The bilinearity of this map is obvious (if you don't find it obvious then you are forgetting to use the fact I have just mentioned and recommended). As for its `arbitrariness', the fact above can be translated as follows:

a1 g(v1,w1) + a2 g(v2,w2) + ... + an g(vn,wn) = 0

if and only if

a1 f(v1,w1) + a2 f(v2,w2) + ... + an f(vn,wn) = 0

for every bilinear function f defined on VxW. In brief, no linear equation holds for g unless it holds for all bilinear functions.

How do algebraists express this `arbitrariness'? They say that the tensor product has a universal property. The bilinear map g is in a certain sense `as big as possible'. To see what this sense is, let us return to our main fact in its vi@wi formulation. Let f:VxW-->U be some bilinear map, and let us try to define a linear map h:V@W-->U by sending v@w to f(v,w) and extending linearly. It is not quite obvious that this is well-defined, since we must check that if we write an element of V@W in two different ways as linear combinations of v@w's, then the corresponding linear combinations of f(v,w)'s are equal. But this is exactly what is guaranteed by the main fact. So h is a well-defined linear map, and hg = f, since hg(v,w) = h(v@w) = f(v,w), by the definition of h. Moreover, it is clear that h is the only linear map such that hg = f, since h(v@w) is forced to be f(v,w) and we are forced to extend linearly. We have therefore proved the following.

Proposition

For every bilinear map f:VxW-->U there is a unique linear map h:V@W-->U such that hg = f, where g is the bilinear map from VxW to V@W that takes (v,w) to v@w.

The map f is said to factor uniquely through g.

Now let us see why this proposition says exactly the same as what I have called the main fact about V@W. Since it followed from the main fact, all I have to do is show that the reverse implication holds as well: assuming this proposition, we can recover the main fact. Suppose therefore that there is a bilinear function f such that

a1 f(v1,w1) + a2 f(v2,w2) + ... + an f(vn,wn)

is not zero. Since we can write f as hg, and since h is linear, it follows that

a1 g(v1,w1) + a2 g(v2,w2) + ... + an g(vn,wn),

which equals

a1 v1@w1 + a2 v2@w2 + ... + an vn@wn,

is also non-zero. And that establishes the fact.

A useful lemma about the tensor product is that it is unique, in the following sense.

Lemma

Let U and V be vector spaces, and let b:UxV-->X be a bilinear map from UxV to a vector space X. Suppose that for every bilinear map f defined on UxV there is a unique linear map c defined on X such that f = cb. Then there is an isomorphism i:X-->U@V such that u@v = ib(u,v) for every (u,v) in UxV.

We can avoid mentioning u@v if we use the map g:UxV-->U@V. Then the lemma says that g = ib. Briefly, the point of the lemma is that any bilinear map b:UxV-->X satisfying the universal property is isomorphic to the map g:UxV-->U@V in an obvious sense.

Proof

Applying the hypothesis about b to the bilinear map g:UxV-->U@V, we obtain a linear map i:X-->U@V such that g = ib. Similarly, applying the universal property of g to the bilinear map b, we obtain a linear map j:U@V-->X such that b = jg. It follows that b = jg = jib. Now let c be the identity on X. Then b = cb. So by the uniqueness part of the hypothesis on X (applied when f = b) we find that ji = c. Similarly, ij is the identity on U@V, which shows that i is an isomorphism.

The reason algebraists prefer to talk about the universal property of V@W and factorization of maps is that it enables them to avoid dirtying their hands by considering the actual elements of V@W. It can be hard to get used to this spaces-rather-than-objects way of thinking, so let me prove that the tensor product is associative (in the sense that there is a natural isomorphism between U@(V@W) and (U@V)@W), first by using the main fact and then by using the universal property.
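In the matrix picture used earlier, the factorization in this proposition can be written down explicitly: g(v,w) is the outer product v w^T, and for f(v,w) = v^T F w the unique linear map h is the Frobenius pairing M |-> sum_ij F_ij M_ij, so that h(g(v,w)) = f(v,w). A short numpy sketch (again an added illustration, with made-up names):

```python
import numpy as np

def g(v, w):
    """The universal bilinear map (v, w) |-> v@w, realized as an outer product."""
    return np.outer(v, w)

def make_h(F):
    """The unique linear map h with h(g(v, w)) = f(v, w) = v^T F w.
    On the matrix model of V@W it is the Frobenius pairing M |-> <F, M>."""
    return lambda M: np.sum(F * M)

rng = np.random.default_rng(1)
F = rng.standard_normal((3, 4))            # encodes a bilinear map f on R^3 x R^4
v, w = rng.standard_normal(3), rng.standard_normal(4)

h = make_h(F)
print(np.isclose(h(g(v, w)), v @ F @ w))   # True: h composed with g recovers f
```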
The associativity of the tensor product

Since V@W is a vector space, it makes perfectly good sense to talk about U@(V@W) when U is another vector space. A typical element of U@(V@W) will be a linear combination of elements of the form u@x, where x itself is a linear combination of elements of V@W of the form v@w. Hence, we can write any element of U@(V@W) as

u1@(v1@w1) + ... + un@(vn@wn).

(Here I have used facts such as that a(x@y) = x@(ay) = (ax)@y and x@(y+z) = x@y + x@z.) Since we can say something very similar about elements of (U@V)@W, there is a very tempting choice for the definition of a (potential) isomorphism between the two spaces, namely that the above vector should map to

(u1@v1)@w1 + ... + (un@vn)@wn.

Indeed, this works, but I haven't proved it yet because I haven't demonstrated that it is well-defined. For this it is enough to prove that if the first vector is zero then the second must be as well. And now there is a slight problem.
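In coordinates, at least, the claim is easy to believe: choosing bases, u@(v@w) and (u@v)@w both have components u_i v_j w_k, so the tempting map above is a mere relabelling, a reshape. A quick numpy check of this coordinate picture (an added illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
u, v, w = rng.standard_normal(2), rng.standard_normal(3), rng.standard_normal(4)

# u@(v@w): first form v w^T (a 3x4 matrix), then take the outer product with u.
left = np.einsum('i,jk->ijk', u, np.outer(v, w))

# (u@v)@w: first form u v^T (a 2x3 matrix), then take the outer product with w.
right = np.einsum('ij,k->ijk', np.outer(u, v), w)

# Both are the 2x3x4 array with entries u_i v_j w_k.
print(np.allclose(left, right))                             # True
print(np.allclose(left, np.einsum('i,j,k->ijk', u, v, w)))  # True
```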

Tensor Analysis (张量分析): Huang_Introduction to Tensor

1. Representation of vectors and tensors

Some physical quantities in nature are described by a single real number, such as temperature, pressure, time, mass, and energy. Such quantities are called scalars. Other quantities involve both magnitude and direction and must be described by vectors, such as force, moment, displacement, velocity, and momentum. Still other physical quantities, such as the stress and strain at a point of a continuum, carry more information than a vector; quantities of this kind are called tensors.

The physical quantities represented by vectors and tensors exist independently of any coordinate system, but to describe and analyse such a quantity numerically one usually introduces a suitable coordinate system. In three-dimensional physical space a vector has three components, and a second-order tensor has nine. In general, an n-th order tensor in three-dimensional space has 3^n components; scalars and vectors can be regarded as tensors of order zero and order one, respectively.
$$\det(A_{ij}) = \epsilon_{ijk}\,A_{1i}\,A_{2j}\,A_{3k} = \epsilon_{ijk}\,A_{i1}\,A_{j2}\,A_{k3} \tag{4}$$

$$\epsilon_{ijk} = \det\begin{pmatrix} \delta_{i1} & \delta_{i2} & \delta_{i3} \\ \delta_{j1} & \delta_{j2} & \delta_{j3} \\ \delta_{k1} & \delta_{k2} & \delta_{k3} \end{pmatrix} \tag{5}$$

$$\epsilon_{ijk}\,\epsilon_{pqr} = \det\begin{pmatrix} \delta_{ip} & \delta_{iq} & \delta_{ir} \\ \delta_{jp} & \delta_{jq} & \delta_{jr} \\ \delta_{kp} & \delta_{kq} & \delta_{kr} \end{pmatrix}$$
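These identities are easy to verify numerically. The sketch below (added here as an illustration) builds the permutation symbol as a 3x3x3 array, checks (4) against numpy's determinant, and spot-checks the epsilon-epsilon identity for a couple of index choices:

```python
import numpy as np
from itertools import permutations

# Build the permutation (Levi-Civita) symbol as a 3x3x3 array.
eps = np.zeros((3, 3, 3))
for (i, j, k) in permutations(range(3)):
    # Sign of the permutation (i, j, k) of (0, 1, 2), via a permuted identity matrix.
    eps[i, j, k] = np.sign(np.linalg.det(np.eye(3)[[i, j, k]]))

A = np.random.default_rng(3).standard_normal((3, 3))

# Identity (4): det A = eps_ijk A_1i A_2j A_3k  (rows are A[0], A[1], A[2])
lhs = np.einsum('ijk,i,j,k->', eps, A[0], A[1], A[2])
print(np.isclose(lhs, np.linalg.det(A)))  # True

# eps_ijk eps_pqr equals the determinant of Kronecker deltas, for sample indices
d = np.eye(3)
for i, j, k, p, q, r in [(0, 1, 2, 0, 2, 1), (1, 2, 0, 1, 2, 0)]:
    det = np.linalg.det(np.array([[d[i, p], d[i, q], d[i, r]],
                                  [d[j, p], d[j, q], d[j, r]],
                                  [d[k, p], d[k, q], d[k, r]]]))
    print(np.isclose(eps[i, j, k] * eps[p, q, r], det))  # True
```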
Indexed components such as $A_{ij}$ generalize to higher order; for example, a fourth-order tensor can be written

$$\mathbf{D} = D_{ijkl}\,\mathbf{e}_i\mathbf{e}_j\mathbf{e}_k\mathbf{e}_l$$

The free indices $i, j, k, l \in \{1, 2, 3\}$ each range over the values 1, 2, 3. Note that the index representation of a tensor depends on the chosen coordinate system.
2. The Einstein summation convention:

$$c = \sum_{j=1}^{3} a_j b_j \quad\text{is written simply as}\quad c = a_j b_j$$

A repeated (dummy) index in a term implies summation over its range; for example,

$$A_{ij}\,x_j = b_i \qquad (i = 1, 2, 3)$$

where $j$ is summed over and $i$ is a free index.
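numpy's einsum implements exactly this convention: index letters repeated across operands are summed, and the remaining letters are free indices. A minimal sketch, added here for illustration:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
A = np.arange(9.0).reshape(3, 3)
x = np.array([1.0, 0.0, -1.0])

c = np.einsum('j,j->', a, b)     # c = a_j b_j   (j repeated, hence summed)
bi = np.einsum('ij,j->i', A, x)  # b_i = A_ij x_j (j summed, i free)

print(np.isclose(c, a @ b))      # True
print(np.allclose(bi, A @ x))    # True
```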
HOHAI UNIVERSITY - Engineering Elasticity (弹性力学) - 黄文雄 (Huang Wenxiong)

Chapter 4: Tensor Analysis (Tsinghua University tensor analysis notes)

[Figure: a region A in the (x¹, x²)-plane bounded by the curve l, with points a and b marking an interval on an axis.]

In one dimension, the fundamental theorem of calculus converts an integral over a domain into values on its boundary:

$$\int_a^b \frac{dF(x)}{dx}\,dx = F(b) - F(a)$$

The order of differentiation drops by one, and the integral is transferred from the interior of the domain to its boundary.

Extending this to two dimensions gives Green's theorem:

$$\oint_l \bigl(X_1(x^1,x^2)\,dx^1 + X_2(x^1,x^2)\,dx^2\bigr) = \iint_A \left(\frac{\partial X_2}{\partial x^1} - \frac{\partial X_1}{\partial x^2}\right) dx^1\,dx^2$$
In curvilinear coordinates the basis vectors change from point to point: $\mathbf{g}_i = \mathbf{g}_i(x^1, x^2, x^3)$, so that moving from $x^j$ to $x^j + \Delta x^j$ takes $\mathbf{g}_i(x^j)$ to $\mathbf{g}_i(x^j + \Delta x^j)$. In general the basis vectors are nonlinear functions of the coordinates.
Derivatives of the basis vectors; Christoffel symbols

The derivative of a covariant basis vector defines the Christoffel symbols of the second kind:

$$\frac{\partial \mathbf{g}_j}{\partial x^i} = \Gamma^k_{ij}\,\mathbf{g}_k, \qquad \Gamma^k_{ij} = \frac{\partial \mathbf{g}_j}{\partial x^i}\cdot\mathbf{g}^k \quad\text{(defining relation)}$$

The covariant derivative of mixed tensor components then reads

$$\nabla_k T^i{}_j = \frac{\partial T^i{}_j}{\partial x^k} + \Gamma^i_{mk}\,T^m{}_j - \Gamma^m_{jk}\,T^i{}_m$$

and the four kinds of components (covariant, contravariant and mixed) satisfy the usual index-raising and index-lowering relations among themselves.
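Given a metric, the Christoffel symbols can be computed symbolically from the standard formula $\Gamma^k_{ij} = \tfrac{1}{2} g^{km}(\partial_i g_{mj} + \partial_j g_{mi} - \partial_m g_{ij})$. The sketch below (an added illustration) does this for plane polar coordinates, where the metric is diag(1, r²):

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x = [r, th]
g = sp.Matrix([[1, 0], [0, r**2]])   # metric of plane polar coordinates
ginv = g.inv()

# Gamma[k][i][j] = (1/2) g^{km} (d_i g_{mj} + d_j g_{mi} - d_m g_{ij})
Gamma = [[[sp.simplify(sum(ginv[k, m] * (sp.diff(g[m, j], x[i])
                                         + sp.diff(g[m, i], x[j])
                                         - sp.diff(g[i, j], x[m]))
                           for m in range(2)) / 2)
           for j in range(2)] for i in range(2)] for k in range(2)]

print(Gamma[0][1][1])  # Gamma^r_{theta theta} = -r
print(Gamma[1][0][1])  # Gamma^theta_{r theta} = 1/r
```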
Covariant derivatives of tensor components with respect to the coordinates; ★ the gradient of a tensor field.

Special tensor 1: the metric tensor $\mathbf{G}$. It is covariantly constant:

$$g_{ij;k} = 0, \qquad \nabla\mathbf{G} = \mathbf{G}\nabla = \mathbf{0}$$

The covariant derivative of the dyadic product $\mathbf{A}\mathbf{B}$ of two tensors obeys the usual Leibniz (product) rule.
Divergence and curl of tensor field functions. The Laplace operator is therefore computed as

$$\nabla^2 = \frac{1}{\sqrt{g}}\,\frac{\partial}{\partial x^i}\!\left(\sqrt{g}\,g^{ij}\,\frac{\partial}{\partial x^j}\right)$$

In Euclidean space there is only one fundamental first-order vector differential operator, the gradient operator, and only one fundamental second-order scalar differential operator, the Laplace operator.
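As a check of this formula, the sketch below (an added illustration) evaluates it for plane polar coordinates, where √g = r, and recovers the familiar polar Laplacian:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
f = sp.Function('f')(r, th)

x = [r, th]
ginv = sp.Matrix([[1, 0], [0, 1/r**2]])  # inverse metric, polar coordinates
sqrtg = r                                 # sqrt(det g) = r

lap = sum(sp.diff(sqrtg * ginv[i, j] * sp.diff(f, x[j]), x[i])
          for i in range(2) for j in range(2)) / sqrtg

# Familiar form: f_rr + f_r / r + f_thth / r^2
expected = sp.diff(f, r, 2) + sp.diff(f, r)/r + sp.diff(f, th, 2)/r**2
print(sp.simplify(lap - expected) == 0)  # True
```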
The right and left gradients of a tensor field follow as

$$\nabla\mathbf{T} = \mathbf{g}^i\,\frac{\partial \mathbf{T}}{\partial x^i}, \qquad \mathbf{T}\nabla = \frac{\partial \mathbf{T}}{\partial x^i}\,\mathbf{g}^i$$

VectorsTensors08: Vectors and Tensors (矢量、张量)

1.8 Tensors

Here the concept of the tensor is introduced. Tensors can be of different orders - zeroth-order tensors, first-order tensors, second-order tensors, and so on. Apart from the zeroth- and first-order tensors (see below), the second-order tensors are the most important tensors from a practical point of view, being important quantities in, amongst other topics, continuum mechanics, relativity, electromagnetism and quantum theory.

1.8.1 Zeroth and First Order Tensors

A tensor of order zero is simply another name for a scalar α. A first-order tensor is simply another name for a vector u.

1.8.2 Second Order Tensors

Notation. Vectors: lowercase bold-face Latin letters, e.g. a, r, q. Second-order tensors: uppercase bold-face Latin letters, e.g. F, T, S.

Tensors as Linear Operators

A second-order tensor T may be defined as an operator that acts on a vector u generating another vector v, so that v = T(u), or

    v = Tu  or  v = T · u        Second-order Tensor (1.8.1)

(both of these notations for the tensor operation are used; here the convention of omitting the "dot" will be used). The second-order tensor T is a linear operator (or linear transformation; an operator or transformation is a special function which maps elements of one type into elements of a similar type - here, vectors into vectors), which means that

    T(a + b) = Ta + Tb    ... distributive
    T(αa) = α(Ta)         ... associative

This linearity can be viewed geometrically as in Fig. 1.8.1.

[Figure 1.8.1: linearity of the second-order tensor]

Note: the vector may also be defined in this way, as a mapping u that acts on a vector v, this time generating a scalar, α = v · u. This transformation (the dot product) is linear (see properties (2,3) in §1.1.4). Thus a first-order tensor (vector) maps a first-order tensor into a zeroth-order tensor (scalar), whereas a second-order tensor maps a first-order tensor into a first-order tensor. It will be seen that a third-order tensor maps a first-order tensor into a second-order tensor, and so on.

Further, two tensors T and S are said to be equal if and only if Tv = Sv for all vectors v.

Example (of a Tensor)

Suppose that F is an operator which transforms every vector into its mirror-image with respect to a given plane, Fig. 1.8.2. F transforms a vector into another vector and the transformation is linear, as can be seen geometrically from the figure. Thus F is a second-order tensor.

[Figure 1.8.2: mirror-imaging of vectors as a second-order tensor mapping]

Example (of a Tensor)

The combination u× linearly transforms a vector into another vector and is thus a second-order tensor (some authors use the notation ũ to denote u×). For example, consider a force f applied to a spanner at a distance r from the centre of the nut, Fig. 1.8.3. Then it can be said that the tensor (r×) maps the force f into the (moment/torque) vector r × f.

[Figure 1.8.3: the force on a spanner]
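The spanner example is easy to make concrete: in components, the tensor (r×) is the skew-symmetric matrix built from r, and multiplying by this matrix reproduces the cross product. A small numpy sketch (an added illustration):

```python
import numpy as np

def skew(r):
    """Matrix of the second-order tensor (r x): skew(r) @ f equals np.cross(r, f)."""
    return np.array([[0.0, -r[2], r[1]],
                     [r[2], 0.0, -r[0]],
                     [-r[1], r[0], 0.0]])

r = np.array([0.3, -0.2, 0.5])   # position of the applied force
f = np.array([1.0, 2.0, 0.0])    # applied force

print(np.allclose(skew(r) @ f, np.cross(r, f)))  # True: the map is the cross product

# Linearity, the defining property of a second-order tensor:
g = np.array([0.0, -1.0, 2.0])
print(np.allclose(skew(r) @ (2*f + g), 2*skew(r) @ f + skew(r) @ g))  # True
```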
1.8.3 The Dyad (the tensor product)

The vector dot product and vector cross product have been considered in previous sections. A third vector product, the tensor product (or dyadic product), is important in the analysis of tensors of order 2 or more. The tensor product of two vectors u and v is written as

    u ⊗ v        Tensor Product (1.8.2)

(many authors omit the ⊗ and write simply uv). This tensor product is itself a tensor of order two, and is called a dyad:

    u · v is a scalar (a zeroth-order tensor)
    u × v is a vector (a first-order tensor)
    u ⊗ v is a dyad (a second-order tensor)

It is best to define this dyad by what it does: it transforms a vector w into another vector with the direction of u according to the rule

    (u ⊗ v)w = u (v · w)        The Dyad Transformation (1.8.3)

This relation defines the symbol "⊗"; note that it is the two vectors that are beside each other (separated by a bracket) that get "dotted" together. The length of the new vector is |u| times v · w, and the new vector has the same direction as u, Fig. 1.8.4. It can be seen that the dyad is a second-order tensor, because it operates linearly on a vector to give another vector {▲Problem 2}. Note that the dyad is not commutative, u ⊗ v ≠ v ⊗ u; indeed it can be seen clearly from the figure that (u ⊗ v)w ≠ (v ⊗ u)w.

[Figure 1.8.4: the dyad transformation]

The following important relations follow from the above definition {▲Problem 4}:

    (u ⊗ v)(w ⊗ x) = (v · w)(u ⊗ x)
    (u ⊗ v)w = u (v · w)        (1.8.4)

It can be seen from these that the operation of a dyad on a vector is not commutative:

    w(u ⊗ v) ≠ (u ⊗ v)w        (1.8.5)

Example (The Projection Tensor)

Consider the dyad e ⊗ e, where e is a unit vector. From the definition 1.8.3, (e ⊗ e)u = e (e · u). But e · u is the projection of u onto a line through the unit vector e. Thus (e · u) e is the vector projection of u on e. For this reason e ⊗ e is called the projection tensor. It is usually denoted by P.

[Figure 1.8.5: the projection tensor]

1.8.4 Dyadics

A dyadic is a linear combination of dyads (with scalar coefficients). An example might be

    5 (a ⊗ b) + 3 (c ⊗ d) − 2 (e ⊗ f)

This is clearly a second-order tensor. It will be seen in §1.9 that every second-order tensor can be represented by a dyadic, that is,

    T = α (a ⊗ b) + β (c ⊗ d) + γ (e ⊗ f) + ...        (1.8.6)

Note: second-order tensors cannot, in general, be written as a single dyad, T = a ⊗ b; when they can, they are called simple tensors.
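In components the dyad u ⊗ v is simply the outer-product matrix u vᵀ, so the rules (1.8.3)-(1.8.5) and the projection property can be verified numerically (an added illustrative sketch):

```python
import numpy as np

rng = np.random.default_rng(4)
u, v, w, x = (rng.standard_normal(3) for _ in range(4))

dyad = np.outer  # (u ⊗ v) represented as the matrix u v^T

# (1.8.3): (u ⊗ v)w = u (v·w)
print(np.allclose(dyad(u, v) @ w, u * (v @ w)))                    # True

# (1.8.4): (u ⊗ v)(w ⊗ x) = (v·w)(u ⊗ x)
print(np.allclose(dyad(u, v) @ dyad(w, x), (v @ w) * dyad(u, x)))  # True

# (1.8.5): w(u ⊗ v) = (w·u)v differs from (u ⊗ v)w in general
print(np.allclose(w @ dyad(u, v), dyad(u, v) @ w))                 # False

# Projection tensor P = e ⊗ e projects onto the line through e
e = np.array([1.0, 0.0, 0.0])
P = dyad(e, e)
print(P @ np.array([3.0, 4.0, 5.0]))                               # [3. 0. 0.]
```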
Example (Angular Momentum and the Moment of Inertia Tensor)

Suppose a rigid body is rotating so that every particle in the body is instantaneously moving in a circle about some axis fixed in space, Fig. 1.8.6.

[Figure 1.8.6: a particle in motion about an axis]

The body's angular velocity ω is defined as the vector whose magnitude is the angular speed ω and whose direction is along the axis of rotation. Then a particle's linear velocity is

    v = ω × r

where v = ωd is the linear speed, d is the distance between the axis and the particle, and r is the position vector of the particle from a fixed point O on the axis. The particle's angular momentum (or moment of momentum) h about the point O is defined to be

    h = m r × v

where m is the mass of the particle. The angular momentum can be written as

    h = Î ω        (1.8.8)

where Î, a second-order tensor, is the moment of inertia of the particle about the point O, given by

    Î = m (r² I − r ⊗ r)        (1.8.9)

where I is the identity tensor, i.e. Ia = a for all vectors a.

To show this, it must be shown that r × v = (r² I − r ⊗ r) ω. First examine r × v. It is evidently a vector perpendicular to both r and v and in the plane of r and ω; its magnitude is

    |r × v| = |r| |v| = ω r² sin θ

Now (see Fig. 1.8.7)

    (r² I − r ⊗ r) ω = r² ω − (r · ω) r = ω r² (e_ω − cos θ e_r)

where e_ω and e_r are unit vectors in the directions of ω and r respectively. From the diagram, this is equal to ω r² sin θ e_h. Thus both expressions are equivalent, and one can indeed write h = Î ω with Î defined by Eqn. 1.8.9: the second-order tensor Î maps the angular velocity vector ω into the angular momentum vector h of the particle.

[Figure 1.8.7: geometry of unit vectors for the angular momentum calculation]

1.8.5 The Vector Space of Second Order Tensors

The vector space of vectors and associated spaces were discussed in §1.2. Here, spaces of second-order tensors are discussed. As mentioned above, the second-order tensor is a mapping on the vector space V,

    T : V → V        (1.8.10)

and follows the rules

    T(a + b) = Ta + Tb
    T(αa) = α(Ta)        (1.8.11)

for all a, b ∈ V and α ∈ R. Denote the set of all second-order tensors by V². Define the sum of two tensors S, T ∈ V² through the relation

    (S + T)v = Sv + Tv        (1.8.12)

and the product of a scalar α ∈ R and a tensor T ∈ V² through

    (αT)v = α(Tv)        (1.8.13)

Define an identity tensor I ∈ V² through

    Iv = v,  for all v ∈ V        (1.8.14)

and a zero tensor O ∈ V² through

    Ov = o,  for all v ∈ V        (1.8.15)

It follows from the definition 1.8.11 that V² has the structure of a real vector space, that is, the sum S + T ∈ V², the product αT ∈ V², and the following 8 axioms hold:

1. for any A, B, C ∈ V², one has (A + B) + C = A + (B + C)
2. there exists an element O ∈ V² such that O + T = T + O = T for every T ∈ V²
3. for each T ∈ V² there exists an element −T ∈ V², called the negative of T, such that T + (−T) = (−T) + T = O
4. for any S, T ∈ V², one has S + T = T + S
5. for any S, T ∈ V² and scalar α ∈ R, α(S + T) = αS + αT
6. for any T ∈ V² and scalars α, β ∈ R, (α + β)T = αT + βT
7. for any T ∈ V² and scalars α, β ∈ R, (αβ)T = α(βT)
8. for the unit scalar 1 ∈ R, 1T = T for any T ∈ V².

1.8.6 Problems

1. Consider the function f which transforms a vector v into a · v + β. Is f a tensor (of order one)? [Hint: test to see whether the transformation is linear, by examining f(αu + v).]
2. Show that the dyad is a linear operator; in other words, show that (u ⊗ v)(αw + βx) = α(u ⊗ v)w + β(u ⊗ v)x.
3. When is a ⊗ b = b ⊗ a?
4. Prove that
   (i) (u ⊗ v)(w ⊗ x) = (v · w)(u ⊗ x) [Hint: post-"multiply" both sides of the definition (1.8.3) by ⊗x; then show that ((u ⊗ v)w) ⊗ x = (u ⊗ v)(w ⊗ x).]
   (ii) (u ⊗ v)w = u(v · w) [Hint: pre-"multiply" both sides by x⊗ and use the result of (i).]
5. Consider the dyadic (tensor) a ⊗ a + b ⊗ b. Show that this tensor orthogonally projects every vector v onto the plane formed by a and b (sketch a diagram).
6. Draw a sketch to show the meaning of u · (Pv), where P is the projection tensor. What is the order of the resulting tensor?
7. Prove that (a × b)× = b ⊗ a − a ⊗ b.
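As a numerical spot-check of the moment-of-inertia example, Eqns. 1.8.8-1.8.9: for any r and ω, the matrix of Î = m(r² I − r ⊗ r) applied to ω should reproduce h = m r × (ω × r). An added illustrative sketch:

```python
import numpy as np

rng = np.random.default_rng(5)
m = 2.0
r = rng.standard_normal(3)       # particle position from O
w = rng.standard_normal(3)       # angular velocity vector

I_hat = m * ((r @ r) * np.eye(3) - np.outer(r, r))  # Eqn (1.8.9)

h_direct = m * np.cross(r, np.cross(w, r))  # h = m r x v, with v = w x r
print(np.allclose(I_hat @ w, h_direct))     # True: h = Î ω
```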

Tensor (张量)
Arrays

Arrays with more than one index will be denoted in bold uppercase; vectors are one-way arrays and will be denoted in bold lowercase. Plain uppercase letters will mainly be used to denote dimensions. For our purpose, only a few notations related to arrays [16] [11] are necessary. In this paper, the outer product of two arrays of order M and N is denoted C = A ∘ B and is an array of order M + N:

$$C_{ij..ab..d} = A_{ij..}\,B_{ab..d} \tag{1}$$

For instance, the outer product of two vectors, u ∘ v, is a matrix. Conversely, the mode-p inner product between two arrays having the same p-th dimension is denoted A •_p B, and is obtained by summing over the p-th index. More precisely, if A and B are of orders M and N respectively, this yields for p = 1:

$$\{A \bullet_1 B\}_{i_2..i_M,\,j_2..j_N} = \sum_{k} A_{k\,i_2..i_M}\,B_{k\,j_2..j_N}$$
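Both operations map directly onto numpy, with •_1 realised by contracting the first index of each array. An added sketch:

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((3, 4, 5))   # order-3 array, first dimension 3
B = rng.standard_normal((3, 2))      # order-2 array, first dimension 3

# Outer product C = A ∘ B: an order-5 array, C[i,j,k,a,b] = A[i,j,k] * B[a,b]
C = np.multiply.outer(A, B)
print(C.shape)  # (3, 4, 5, 3, 2)

# Mode-1 inner product A •_1 B: sum over the shared first index
D = np.tensordot(A, B, axes=([0], [0]))
print(D.shape)  # (4, 5, 2)
print(np.allclose(D, np.einsum('kij,kl->ijl', A, B)))  # True
```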

Tensor Analysis (张量分析)
If $a_{ij}$ denotes the general entry of the determinant and $|a_{ij}|$ the determinant itself, then the determinant can be written as

$$a \equiv |a_{ij}| = e_{ijk}\,a_{1i}\,a_{2j}\,a_{3k}$$

and, more generally,

$$a\,e_{lmn} = e_{ijk}\,a_{li}\,a_{mj}\,a_{nk}$$
E) The relation between the Kronecker symbol and the permutation symbol

Tabulating the Kronecker symbol $\delta_{ij}$ over $i, j \in \{1, 2, 3\}$ gives the identity matrix:

$$(\delta_{ij}) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
For a coordinate transformation $y^i = y^i(x^j)$ the differentials satisfy

$$dy^i = \frac{\partial y^i}{\partial x^j}\,dx^j \qquad (j = 1, 2, 3)$$

Writing $d\mathbf{r} = \mathbf{g}_i\,dx^i$ and dotting with the contravariant base vector $\mathbf{g}^k$ gives $d\mathbf{r}\cdot\mathbf{g}^k = (\mathbf{g}_i\cdot\mathbf{g}^k)\,dx^i$, so that

$$dy^k = (\mathbf{g}_i\cdot\mathbf{g}^k)\,dx^i, \qquad \mathbf{g}_i\cdot\mathbf{g}^k = \frac{\partial y^k}{\partial x^i}$$
Transformation law of the contravariant base vectors:

$$\mathbf{g}^{k'} = (\mathbf{g}^{k'}\cdot\mathbf{g}_i)\,\mathbf{g}^i = \frac{\partial y^k}{\partial x^i}\,\mathbf{g}^i$$

Transformation law of the associated (conjugate) metric tensor:

$$g^{k'l'} = \frac{\partial y^k}{\partial x^i}\,\frac{\partial y^l}{\partial x^j}\,g^{ij}$$
Tensor Analysis (张量分析)

Objectives

1) Use index notation and the summation convention fluently;
2) Master the definition of tensors and of the basic tensors, including the base vectors and the metric tensor;
3) Master the rules of tensor algebra;
4) Use tensors fluently to express the basic equations of mechanics.

1. The concept of a tensor

In three-dimensional space a vector (for example a force vector or a velocity vector) has three components in a given reference coordinate system. The set of these three components specifies the vector, and when the coordinates are transformed, the components transform according to a definite law.
A dummy index may be renamed at will:

$$a_i x_i = a_j x_j \qquad (i, j = 1, 2, \ldots, n)$$

The summation convention extends to differential formulas. Let $f(x_1, x_2, \ldots, x_n)$ be a function of the $n$ independent variables $x_1, x_2, \ldots, x_n$; then its differential can be written as

$$df = \frac{\partial f}{\partial x_i}\,dx_i$$

where the index $i$ in $x_i$ is regarded as a subscript.

Tensor

Tensors are geometric objects that describe linear relations between vectors, scalars, and other tensors. Elementary examples of such relations include the dot product, the cross product, and linear maps. Vectors and scalars themselves are also tensors. A tensor can be represented as a multi-dimensional array of numerical values. The order (also degree or rank) of a tensor is the dimensionality of the array needed to represent it, or equivalently, the number of indices needed to label a component of that array. For example, a linear map can be represented by a matrix, a 2-dimensional array, and therefore is a 2nd-order tensor. A vector can be represented as a 1-dimensional array and is a 1st-order tensor. Scalars are single numbers and are thus 0th-order tensors.

Tensors are used to represent correspondences between sets of geometric vectors. For example, the Cauchy stress tensor T takes a direction v as input and produces the stress T(v) on the surface normal to this vector as output, thus expressing a relationship between these two vectors, shown in the figure (right).

[Figure: the Cauchy stress tensor, a second-order tensor. The tensor's components, in a three-dimensional Cartesian coordinate system, form the matrix whose columns are the stresses (forces per unit area) acting on the e1, e2, and e3 faces of the cube.]

Because they express a relationship between vectors, tensors themselves must be independent of a particular choice of coordinate system. Taking a coordinate basis or frame of reference and applying the tensor to it results in an organized multidimensional array representing the tensor in that basis, or frame of reference. The coordinate independence of a tensor then takes the form of a "covariant" transformation law that relates the array computed in one coordinate system to that computed in another one. This transformation law is considered to be built into the notion of a tensor in a geometric or physical setting, and the precise form of the transformation law determines the type (or valence) of the tensor.

Tensors are important in physics because they provide a concise mathematical framework for formulating and solving physics problems in areas such as elasticity, fluid mechanics, and general relativity. Tensors were first conceived by Tullio Levi-Civita and Gregorio Ricci-Curbastro, who continued the earlier work of Bernhard Riemann and Elwin Bruno Christoffel and others, as part of the absolute differential calculus. The concept enabled an alternative formulation of the intrinsic differential geometry of a manifold in the form of the Riemann curvature tensor.[1]

History

The concepts of later tensor analysis arose from the work of Carl Friedrich Gauss in differential geometry, and the formulation was much influenced by the theory of algebraic forms and invariants developed during the middle of the nineteenth century.[2] The word "tensor" itself was introduced in 1846 by William Rowan Hamilton[3] to describe something different from what is now meant by a tensor.[Note 1] The contemporary usage was brought in by Woldemar Voigt in 1898.[4]

Tensor calculus was developed around 1890 by Gregorio Ricci-Curbastro under the title absolute differential calculus, and originally presented by Ricci in 1892.[5] It was made accessible to many mathematicians by the publication of Ricci and Tullio Levi-Civita's 1900 classic text Méthodes de calcul différentiel absolu et leurs applications (Methods of absolute differential calculus and their applications).[6]

In the 20th century, the subject came to be known as tensor analysis, and achieved broader acceptance with the introduction of Einstein's theory of general relativity, around 1915. General relativity is formulated completely in the language of tensors. Einstein had learned about them, with great difficulty, from the geometer Marcel Grossmann.[7] Levi-Civita then initiated a correspondence with Einstein to correct mistakes Einstein had made in his use of tensor analysis. The correspondence lasted 1915-17, and was characterized by mutual respect:

I admire the elegance of your method of computation; it must be nice to ride through these fields upon the horse of true mathematics while the like of us have to make our way laboriously on foot.
- Albert Einstein, The Italian Mathematicians of Relativity[8]

Tensors were also found to be useful in other fields such as continuum mechanics. Some well-known examples of tensors in differential geometry are quadratic forms such as metric tensors, and the Riemann curvature tensor. The exterior algebra of Hermann Grassmann, from the middle of the nineteenth century, is itself a tensor theory, and highly geometric, but it was some time before it was seen, with the theory of differential forms, as naturally unified with tensor calculus. The work of Élie Cartan made differential forms one of the basic kinds of tensors used in mathematics. From about the 1920s onwards, it was realised that tensors play a basic role in algebraic topology (for example in the Künneth theorem). Correspondingly there are types of tensors at work in many branches of abstract algebra, particularly in homological algebra and representation theory. Multilinear algebra can be developed in greater generality than for scalars coming from a field, but the theory is then certainly less geometric, and computations more technical and less algorithmic. Tensors are generalized within category theory by means of the concept of monoidal category, from the 1960s.

Definition

There are several approaches to defining tensors.
Although seemingly different, the approaches just describe the same geometric concept using different languages and at different levels of abstraction.

As multidimensional arrays

Just as a scalar is described by a single number, and a vector with respect to a given basis is described by an array of one dimension, any tensor with respect to a basis is described by a multidimensional array. The numbers in the array are known as the scalar components of the tensor or simply its components. They are denoted by indices giving their position in the array, in subscript and superscript, after the symbolic name of the tensor. The total number of indices required to uniquely select each component is equal to the dimension of the array, and is called the order or the rank of the tensor.[Note 2] For example, the entries of an order-2 tensor T would be denoted $T_{ij}$, where i and j are indices running from 1 to the dimension of the related vector space.[Note 3]

Just as the components of a vector change when we change the basis of the vector space, the entries of a tensor also change under such a transformation. Each tensor comes equipped with a transformation law that details how the components of the tensor respond to a change of basis. The components of a vector can respond in two distinct ways to a change of basis (see covariance and contravariance of vectors), where the new basis vectors $\hat{\mathbf{e}}_i$ are expressed in terms of the old basis vectors $\mathbf{e}_j$ as

$$\hat{\mathbf{e}}_i = \sum_j R^j_i \mathbf{e}_j = R^j_i \mathbf{e}_j,$$

where $R^j_i$ is a matrix and in the second expression the summation sign was suppressed (a notational convenience introduced by Einstein that will be used throughout this article). The components $v^i$ of a regular (or column) vector $\mathbf{v}$ transform with the inverse of the matrix $R$,

$$\hat{v}^i = (R^{-1})^i_j\,v^j,$$

where the hat denotes the components in the new basis, while the components $w_i$ of a covector (or row vector) $\mathbf{w}$ transform with the matrix $R$ itself,

$$\hat{w}_i = R^j_i\,w_j.$$

The components of a tensor transform in a similar manner with a transformation matrix for each index. If an index transforms like a vector with the inverse of the basis transformation, it is called contravariant and is traditionally denoted with an upper index, while an index that transforms with the basis transformation itself is called covariant and is denoted with a lower index. The transformation law for an order-m tensor with n contravariant indices and m−n covariant indices is thus given as

$$\hat{T}^{i_1\ldots i_n}_{j_1\ldots j_{m-n}} = (R^{-1})^{i_1}_{k_1}\cdots(R^{-1})^{i_n}_{k_n}\;R^{l_1}_{j_1}\cdots R^{l_{m-n}}_{j_{m-n}}\;T^{k_1\ldots k_n}_{l_1\ldots l_{m-n}}.$$

Such a tensor is said to be of order or type (n, m−n).[Note 4] This discussion motivates the following formal definition:[9]

Definition. A tensor of type (n, m−n) is an assignment of a multidimensional array $T^{i_1\ldots i_n}_{j_1\ldots j_{m-n}}[\mathbf{f}]$ to each basis f = (e1, ..., eN) such that, if we apply the change of basis $\mathbf{f} \mapsto \mathbf{f}R$, then the multidimensional array obeys the transformation law above.

The definition of a tensor as a multidimensional array satisfying a transformation law traces back to the work of Ricci.[1] Nowadays, this definition is still used in some physics and engineering textbooks.[10][11]
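For a type (1,1) tensor the transformation law is an ordinary similarity transform, and its meaning - that the tensor's action as a linear map is basis-independent - is easy to verify numerically (an added illustrative sketch):

```python
import numpy as np

rng = np.random.default_rng(7)
T = rng.standard_normal((3, 3))   # components T^i_j in the old basis
R = rng.standard_normal((3, 3))   # change-of-basis matrix (assumed invertible)
Rinv = np.linalg.inv(R)

# hat{T}^i_j = (R^{-1})^i_k T^k_l R^l_j
T_hat = np.einsum('ik,kl,lj->ij', Rinv, T, R)

# A (1,1) tensor acts as a linear map v |-> Tv.  Transforming v into the new
# basis, applying T_hat, and mapping back gives the same vector as applying T.
v = rng.standard_normal(3)
v_hat = Rinv @ v
print(np.allclose(R @ (T_hat @ v_hat), T @ v))  # True
```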
The "basis" for the tensor field is determined by the coordinates of the underlying space, and thedefining transformation law is expressed in terms of partial derivatives of thecoordinate functions, , defining a coordinate transformation,[1]As multilinear mapsA downside to the definition of a tensor using the multidimensional array approach is that it is not apparent from the definition that the defined object is indeed basis independent, as is expected from an intrinsically geometric object. Although it is possible to show that transformation laws indeed ensure independence from the basis, sometimes a more intrinsic definition is preferred. One approach is to define a tensor as a multilinear map. In that approach a type (n,m) tensor T is defined as a map,where V is a vector space and V* is the corresponding dual space of covectors, which is linear in each of its arguments.By applying a multilinear map T of type (n,m) to a basis {e j} for V and a canonical cobasis {εi} for V*,an n+m dimensional array of components can be obtained. A different choice of basis will yield different components. But, because T is linear in all of its arguments, the components satisfy the tensor transformation law used in the multilinear array definition. The multidimensional array of components of T thus form a tensor according to that definition. Moreover, such an array can be realised as the components of some multilinear map T. This motivates viewing multilinear maps as the intrinsic objects underlying tensors.Using tensor productsMain article: Tensor (intrinsic definition)For some mathematical applications, a more abstract approach is sometimes useful. This can be achieved by defining tensors in terms of elements of tensor products of vector spaces, which in turn are defined through a universal property. A type (n,m) tensor is defined in this context as an element of the tensor product of vectorspaces,[12]If v i is a basis of V and w j is a basis of W, then the tensor product has anatural basis . The components of a tensor T are the coefficients of the tensor with respect to the basis obtained from a basis {e i} for V and its dual {εj}, i.e.Using the properties of the tensor product, it can be shown that these components satisfy the transformation law for a type (m,n) tensor. Moreover, the universal property of the tensor product gives a 1-to-1 correspondence between tensors defined in this way and tensors defined as multilinear maps.OperationsThere are a number of basic operations that may be conducted on tensors that again produce a tensor. The linear nature of tensor implies that two tensors of the same type may be added together, and that tensors may be multiplied by a scalar with results analogous to the scaling of a vector. On components, these operations are simply performed component for component. These operations do not change the type of the tensor, however there also exist operations that change the type of the tensors.Raising or lowering an indexMain article: Raising and lowering indicesWhen a vector space is equipped with an inner product (or metric as it is often called in this context), operations can be defined that convert a contravariant (upper) index into a covariant (lower) index and vice versa. A metric itself is a (symmetric) (0,2)-tensor, it is thus possible to contract an upper index of a tensor with one of lower indices of the metric. This produces a new tensor with the same index structure as the previous, but with lower index in the position of the contracted upper index. 
This operation is quite graphically known as lowering an index. Conversely, the matrix inverse of the metric can be defined, which behaves as a (2,0)-tensor. This inverse metric can be contracted with a lower index to produce an upper index. This operation is called raising an index.
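In components, lowering and raising are contractions with the metric and its inverse. A small numpy sketch (an added illustration, using the Minkowski metric as the example metric):

```python
import numpy as np

# Minkowski metric, signature (-, +, +, +), as an example metric g_ij
g = np.diag([-1.0, 1.0, 1.0, 1.0])
g_inv = np.linalg.inv(g)   # the inverse metric g^ij, a (2,0)-tensor

v = np.array([2.0, 1.0, 0.0, -3.0])        # contravariant components v^i

v_low = np.einsum('ij,j->i', g, v)         # lowering: v_i = g_ij v^j
v_up = np.einsum('ij,j->i', g_inv, v_low)  # raising recovers v^i

print(v_low)                 # [-2.  1.  0. -3.]
print(np.allclose(v_up, v))  # True
```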
This expansion shows the way higher-order tensors arise naturally in the subject matter.Generalizations[edit]Tensors in infinite dimensionsThe notion of a tensor can be generalized in a variety of ways to infinite dimensions. One, for instance, is via the tensor product of Hilbert spaces.[15]Another way of generalizing the idea of tensor, common in nonlinear analysis, is via the multilinear maps definition where instead of using finite-dimensional vector spaces and their algebraic duals, one uses infinite-dimensional Banach spaces and their continuous dual.[16] Tensors thus live naturally on Banach manifolds.[17]Tensor densitiesMain article: Tensor densityIt is also possible for a tensor field to have a "density". A tensor with density r transforms as an ordinary tensor under coordinate transformations, except that it is also multiplied by the determinant of the Jacobian to the r th power.[18] Invariantly, in the language of multilinear algebra, one can think of tensor densities as multilinear maps taking their values in a density bundle such as the (1-dimensional) space of n-forms (where n is the dimension of the space), as opposed to taking their values in just R. Higher "weights" then just correspond to taking additional tensor products with this space in the range.In the language of vector bundles, the determinant bundle of the tangent bundle is a line bundle that can be used to 'twist' other bundles r times. While locally the more general transformation law can indeed be used to recognise these tensors, there is aglobal question that arises, reflecting that in the transformation law one may write either the Jacobian determinant, or its absolute value. Non-integral powers of the (positive) transition functions of the bundle of densities make sense, so that the weight of a density, in that sense, is not restricted to integer values.Restricting to changes of coordinates with positive Jacobian determinant is possible on orientable manifolds, because there is a consistent global way to eliminate the minus signs; but otherwise the line bundle of densities and the line bundle of n-forms are distinct. For more on the intrinsic meaning, see density on a manifold.SpinorsMain article: SpinorStarting with an orthonormal coordinate system, a tensor transforms in a certain way when a rotation is applied. However, there is additional structure to the group of rotations that is not exhibited by the transformation law for tensors: see orientation entanglementand plate trick. Mathematically, the rotation group is not simply connected. Spinors are mathematical objects that generalize the transformation law for tensors in a way that is sensitive to this fact.Einstein summation conventionThe Einstein summation convention dispenses with writing summation signs, leaving the summation implicit. Any repeated index symbol is summed over: if the index i is used twice in a given term of a tensor expression, it means that the term is to be summed for all i. Several distinct pairs of indices may be summed this way.。
