Jacobi-like Algorithms for Eigenvalue Decomposition of a Real Normal Matrix Using Real Arithmetic


A Comparison of the Jacobi Iteration Method and Projection Methods for Solving Linear Systems

Liu Changhe; Liu Shixiang; Shou Yuting
[Journal] Journal of Beijing Institute of Civil Engineering and Architecture
[Year (volume), issue] 2003, 019(004)
[Abstract] The Jacobi iteration is derived from the multi-parameter projection method for linear systems. From the viewpoint of optimization, the reason for the relatively slow convergence of the Jacobi iteration is analyzed: its descent matrix and its step-size vector are not an optimal combination. [Pages] 3 (pp. 64-66)
[Authors] Liu Changhe; Liu Shixiang; Shou Yuting
[Affiliation] Department of Basic Courses, Beijing 100044 (all three authors)
[Language] Chinese
[CLC number] O241.6
[Related literature]
1. Using iterative methods to solve linear systems in Excel: the Jacobi and Seidel iterations [J], Yang Dexiang
2. A comparison and study of the convergence of the Jacobi and Gauss-Seidel iterations for solving linear systems [J], Wang Yulin; He Xunyu; Zhang Shangxian
3. A comparative analysis of the convergence of the Jacobi and Gauss-Seidel iterations [J], Bai Hongmei
4. An improved block Jacobi iteration for symmetric positive definite linear systems [J], Xu Hao
5. A class of Jacobi-like iterations for linear systems with M-matrices [J], Guan Jinrui; Song Ruying

S.-T. Yau College Student Mathematics Contest: Syllabus for Computational and Applied Mathematics

Computational Mathematics

Interpolation and approximation: polynomial interpolation and least squares approximation; trigonometric interpolation and approximation, fast Fourier transform; approximation by rational functions; splines.

Nonlinear equation solvers: convergence of iterative methods (bisection, secant method, Newton method, other iterative methods) for both scalar equations and systems; finding roots of polynomials.

Linear systems and eigenvalue problems: direct solvers (Gauss elimination, LU decomposition, pivoting, operation count, banded matrices, round-off error accumulation); iterative solvers (Jacobi, Gauss-Seidel, successive over-relaxation, conjugate gradient method, multigrid method, Krylov methods); numerical solutions for eigenvalues and eigenvectors.

Numerical solutions of ordinary differential equations: one-step methods (Taylor series method and Runge-Kutta method); stability, accuracy and convergence; absolute stability, long time behavior; multistep methods.

Numerical solutions of partial differential equations: finite difference method; stability, accuracy and convergence, Lax equivalence theorem; finite element method, boundary value problems.

References:
[1] C. de Boor and S.D. Conte, Elementary Numerical Analysis: An Algorithmic Approach, McGraw-Hill, 2000.
[2] G.H. Golub and C.F. van Loan, Matrix Computations, third edition, Johns Hopkins University Press, 1996.
[3] E. Hairer, S.P. Nørsett and G. Wanner, Solving Ordinary Differential Equations, Springer, 1993.
[4] B. Gustafsson, H.-O. Kreiss and J. Oliger, Time Dependent Problems and Difference Methods, John Wiley & Sons, 1995.
[5] G. Strang and G. Fix, An Analysis of the Finite Element Method, second edition, Wellesley-Cambridge Press, 2008.

Applied Mathematics

ODE with constant coefficients; nonlinear ODE: critical points, phase space and stability analysis; Hamiltonian, gradient, conservative ODEs.

Calculus of variations: Euler-Lagrange equations; boundary conditions, parametric formulation; optimal control and Hamiltonian, Pontryagin maximum principle.

First order partial differential equations (PDE) and the method of characteristics; heat, wave, and Laplace's equation; separation of variables and eigenfunction expansions; stationary phase method; homogenization method for elliptic and linear hyperbolic PDEs; homogenization and front propagation of Hamilton-Jacobi equations; geometric optics for dispersive wave equations.

References:
W.D. Boyce and R.C. DiPrima, Elementary Differential Equations, Wiley, 2009.
F.Y.M. Wan, Introduction to the Calculus of Variations and Its Applications, Chapman & Hall, 1995.
G. Whitham, Linear and Nonlinear Waves, John Wiley and Sons, 1974.
J. Keener, Principles of Applied Mathematics, Addison-Wesley, 1988.
A. Bensoussan, P.-L. Lions, G. Papanicolaou, Asymptotic Analysis for Periodic Structures, North-Holland Publishing Co., 1978.
V. Jikov, S. Kozlov, O. Oleinik, Homogenization of Differential Operators and Integral Functionals, Springer, 1994.
J. Xin, An Introduction to Fronts in Random Media, Surveys and Tutorials in Applied Mathematical Sciences, No. 5, Springer, 2009.

The Jacobi Iteration Algorithm for Matrices

Jacobi iteration. Purpose of the experiment: 1. Learn and master the Jacobi iteration method for systems of linear algebraic equations.

2. Apply the Jacobi iteration method to carry out computations.

Method and principle: Suppose the coefficient matrix $A$ of the system $Ax = b$ is nonsingular and $a_{ii} \neq 0$ ($i = 1, 2, \dots, n$). Splitting $A$ as $A = D + L + U$ makes the computation convenient.

Here $D = \mathrm{diag}(a_{11}, a_{22}, \dots, a_{nn})$, and
$$L = \begin{bmatrix} 0 & & & \\ a_{21} & 0 & & \\ \vdots & & \ddots & \\ a_{n1} & a_{n2} & \cdots & 0 \end{bmatrix}, \qquad U = \begin{bmatrix} 0 & a_{12} & \cdots & a_{1n} \\ & 0 & & \vdots \\ & & \ddots & a_{n-1,n} \\ & & & 0 \end{bmatrix}$$
are the strictly lower and strictly upper triangular parts of $A$. Multiplying the $i$-th equation $\sum_{j=1}^{n} a_{ij} x_j = b_i$ ($i = 1, 2, \dots, n$) by $1/a_{ii}$ gives the equivalent system
$$x_i = \frac{1}{a_{ii}}\Bigl(b_i - \sum_{\substack{j=1 \\ j \neq i}}^{n} a_{ij} x_j\Bigr), \quad i = 1, 2, \dots, n,$$
written compactly as $x = Bx + f$.

Here $B = I - D^{-1}A = -D^{-1}(L + U)$ and $f = D^{-1}b$. We call $\varphi(x) = Bx + f$ the iteration function.

Take any initial vector $x^{(0)}$ and form the iteration $x^{(k+1)} = Bx^{(k)} + f$; this iterative scheme is called the Jacobi iteration method.

Algorithm description: Step 1: choose an initial vector $x$.

Step 2: in a for loop compute $x_i^{(k+1)} = \Bigl(b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k)}\Bigr)/a_{ii}$, i.e. x[k+1][i] = (b[i] - sum of a[i][j]*x[k][j] over j != i) / a[i][i]. Step 3: stop when fabs(x[k+1] - x[k]) < eps.
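
As a concrete illustration of Steps 1-3, here is a minimal NumPy sketch of the Jacobi iteration described above. The function name jacobi_solve, the default tolerance and the small test system are illustrative choices, not taken from the original experiment report.

```python
import numpy as np

def jacobi_solve(A, b, x0=None, eps=1e-10, max_iter=1000):
    """Solve Ax = b by the Jacobi iteration x^(k+1) = D^{-1}(b - (L+U)x^(k))."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    D = np.diag(A)                      # diagonal entries a_ii
    R = A - np.diagflat(D)              # L + U, the off-diagonal part
    for k in range(max_iter):
        x_new = (b - R @ x) / D         # componentwise division by a_ii
        if np.max(np.abs(x_new - x)) < eps:
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Example with a strictly diagonally dominant matrix, so the iteration converges
A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [15.0, 10.0, 10.0]
x, iters = jacobi_solve(A, b)
print(x, iters)
```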

On the Jacobian Conjecture and Coordinate Polynomials

Master's thesis, University of Science and Technology of China: On the Jacobian Conjecture and Coordinate Polynomials. Author: ***; degree sought: Master; major: Pure Mathematics; advisor: ***; 2005-05-01.

Chapter 1. The Jacobian Conjecture

Section 1.1 Polynomial maps and the Jacobian Conjecture

In this thesis, unless stated otherwise, we use the following notation: N = {1, 2, 3, ...} the natural numbers, Q the rationals, R the reals, and C the complex numbers. Furthermore, k denotes an arbitrary field and R an arbitrary commutative ring. F = (F_1, ..., F_n): k^n -> k^n is a polynomial map, i.e. a map of the form (x_1, ..., x_n) |-> (F_1(x_1, ..., x_n), ..., F_n(x_1, ..., x_n)), where each F_i belongs to the polynomial ring k[X] := k[x_1, ..., x_n]. Linear maps are special polynomial maps, namely F_i(x_1, ..., x_n) = a_{i1}x_1 + ... + a_{in}x_n with a_{ij} in k for all i, j. The following results come from linear algebra.

Proposition 1.1.1. Let F: k^n -> k^n be a linear map. Then (i) F is bijective if and only if F is injective, and in that case its inverse is again a linear map; (ii) F is invertible if and only if det(a_{ij}) lies in k*; (iii) if F is invertible, then F is a product of elementary linear maps; (iv) if F is invertible, there is a formula describing the inverse of F (for example Cramer's rule).

A main goal in the study of polynomial maps is to see to what extent these results carry over to polynomial maps. So let us first look at a possible generalization of Proposition 1.1.1; that is, we consider:

Question 1.1.2. If F: k^n -> k^n is an injective polynomial map, is F bijective?

The following example shows that in general the answer is no.

Example 1.1.3. Define F: Q -> Q by F(x) = x^3. Then F is injective but not bijective. Clearly the problem here is that Q is not algebraically closed (a field k is algebraically closed if every non-constant polynomial f(x) in k[x] has a root in k). We are therefore led to:

Question 1.1.4. Let k be algebraically closed. Is an injective polynomial map F: k^n -> k^n bijective?

The surprising result is that the answer is yes; in fact, the following stronger statement holds.

Theorem 1.1.5. Let k be algebraically closed and let F: k^n -> k^n be an injective polynomial map. Then F is bijective, and its inverse is again a polynomial map.

So in the case where k is an algebraically closed field we obtain a complete generalization of part (i) of Proposition 1.1.1 to polynomial maps. In particular we obtain:

Corollary 1.1.6. Every injective polynomial map F: k^n -> k^n, with k algebraically closed, is bijective and its inverse is a polynomial map.

We now consider the second part of Proposition 1.1.1.

Definition 1.1.7. A polynomial map F: k^n -> k^n is called invertible if it has an inverse that is also a polynomial map. (Thus Theorem 1.1.5 asserts that, over an algebraically closed field, F is bijective if and only if F is invertible.)

Remark 1.1.8. One can show that if a polynomial map F has a left inverse G that is also a polynomial map, then G is also a right inverse of F (and conversely); hence F is invertible.

Using this remark we obtain the following simple but very useful criterion for invertibility.

Lemma 1.1.9. Let F: k^n -> k^n be a polynomial map. Then F is invertible if and only if k[x_1, ...

Beihang Numerical Analysis 1: Computing Matrix Eigenvalues with the Jacobi Method

Preparation. Algorithm design: Methods for computing matrix eigenvalues include the power method, the Jacobi method, and the QR method, among others. The power method finds the eigenvalue of largest modulus (and the inverse power method the eigenvalue of smallest modulus), while the Jacobi method can find all eigenvalues of a symmetric matrix.

Analysis 1: From the condition λ1 ≤ λ2 ≤ … ≤ λn given in the problem, λ1 and λn are not necessarily strictly smaller or larger in modulus than the other eigenvalues, and even if they are, it is quite possible that |λs| < |λ1| < |λn| or |λs| < |λn| < |λ1|, so the power method and the inverse power method may fail to find one of λ1 and λn. Analysis 2: The problem asks for the eigenvalue λ_{ik} closest to μk = λ1 + k(λn − λ1)/40 (k = 1, 2, …, 39). This can be converted into finding the eigenvalue of A − μk·I of smallest modulus, but since problem 1 cannot guarantee λ1 and λn, this question is set aside for the moment. Analysis 3: cond(A)_2 = ||A||·||A^{-1}|| = |λ|max / |λ|min, which can be obtained with the power and inverse power methods, and det(A) = λ1·λ2·…·λn, which requires all eigenvalues of A.

From the above analysis, the power and inverse power methods cannot solve all parts of the problem, whereas once the Jacobi method has produced all eigenvalues of the matrix, every question posed in the problem can be answered.

The problem is therefore solved with the Jacobi method.

Module design: (diagram omitted). Data structure design: Since the matrix is symmetric with upper and lower bandwidth 2, one could store the upper or lower band in a compressed two-dimensional array.

However, the Jacobi method destroys this structure during the iteration: entries that were originally zero may become nonzero, so a two-dimensional band array can no longer hold the iterated matrix.

For this reason, a one-dimensional array is used to store the entire lower triangle, which guarantees that the iteration proceeds correctly.
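
The index mapping behind this packed storage can be sketched as follows. This is a small illustrative NumPy example (the helper names tri_index and pack are not from the original assignment), assuming 0-based indices and row-by-row storage of the lower triangle.

```python
import numpy as np

def tri_index(i, j):
    """Position of entry (i, j), i >= j, of a symmetric matrix stored
    lower-triangle-by-rows in a flat array (0-based indices)."""
    if i < j:                      # symmetry: (i, j) and (j, i) share storage
        i, j = j, i
    return i * (i + 1) // 2 + j

def pack(A):
    """Flatten the lower triangle of a symmetric matrix A."""
    n = A.shape[0]
    packed = np.empty(n * (n + 1) // 2)
    for i in range(n):
        for j in range(i + 1):
            packed[tri_index(i, j)] = A[i, j]
    return packed

A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
p = pack(A)
assert p[tri_index(2, 1)] == A[1, 2]   # symmetric access through the index map
```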

The complete code is given below (build environment: Windows 10 + Visual Studio 2010):
// math.cpp : defines the entry point for the console application.

//#include "stdafx.h"#include<stdio.h>#include<math.h>#include<time.h>#define N 501#define V (N+1)*N/2+1#define e 2.6630353#define a(i) (1.64 - 0.024 * (i)) * sin(0.2 * (i)) - 0.64 * pow(e , 0.1 / (i)) #define b 0.16#define c -0.064#define eps pow((double)10.0,-12)#define PFbits "%10.5f "#define PFrols 5#define PFe %.11e#define FK 39int p;int q;double cosz;double sinz;double MAX;int kk;//#define PTS pts#ifdef PTSvoid PTS(double *m){printf("-----------------------------------------------------------------------\n");printf(" 迭代第%d次\n",kk);for(int i = 1 ; i <= PFrols ; i++){for( int j = (i-1)*i/2+1 ; j <= (i+1)*i/2 ; j++){printf(PFbits,m[j]);}putchar(10);}for(int i = 1 ; i <= PFrols+1 ; i++){printf(" ... ");}putchar(10);printf(" . .\n");printf(" . .\n");printf(" . .\n");for(int i = 1 ; i <= PFrols+2 ; i++){printf(" ... ");}putchar(10);}#elsevoid PTS(double *m){}#endifvoid recounti(int i , int *pp, int *qq){for(int j = 0 ; j <= N-1 ; j++){if( (i - (j+1)*j/2) <= j+1){*pp = j+1;*qq = i - (j+1)*j/2;break;}}}void refreshMetrix(double *m){int ipr,ipc,iqr,iqc;m[(p+1)*p/2] = m[(p+1)*p/2] * pow(cosz,2) + m[(q+1)*q/2] * pow(sinz,2) + 2 * m[(p-1)*p/2+q] * cosz * sinz;m[(q+1)*q/2] = m[(p+1)*p/2] * pow(sinz,2) + m[(q+1)*q/2] * pow(cosz,2) - 2 * m[(p-1)*p/2+q] * cosz * sinz;for(int i = 1; i <= N ;i++){if(i != p && i != q){if(i > p){ipr = i;ipc = p;}else{ipr = p;ipc = i;}if(i > q){iqr = i;iqc = q;}else{iqr = q;iqc = i;}m[(ipr-1)*ipr/2+ipc] = m[(ipr-1)*ipr/2+ipc] * cosz + m[(iqr-1)*iqr/2+iqc] * sinz;m[(iqr-1)*iqr/2+iqc] = -m[(ipr-1)*ipr/2+ipc] * sinz + m[(iqr-1)*iqr/2+iqc] * cosz;}}m[(p-1)*p/2+q] = 0;PTS(m);}//void calCosSin(double *m){double app = m[(p+1)*p/2];double aqq = m[(q+1)*q/2];double apq = m[(p-1)*p/2+q];cosz = cos(atan(2 * apq / (app - aqq))/2);sinz = sin(atan(2 * apq / (app - aqq))/2); }//void find_pq(double *m){double max = 0.0;int pp = 0;int qq = 0;for(int i = 1 ; i <= V ; i++){if(fabs(m[i]) > max){recounti(i,&pp,&qq);if(pp != qq){max = fabs(m[i]);p = pp;q = qq;}}}MAX = max;}void init(double *m){for(int i = 1 ; i <= N ;i++)m[(i+1)*i/2] = a(i);for(int i = 2 ; i <= N ; i++)m[(i-1)*i/2+i-1] = b;for(int i = 3 ; i <= N ; i++)m[(i-1)*i/2+i-2] = c;PTS(m);}void calFinal(double *m){printf("---------------------------------------------------------------------------------------------------\n");printf("结果输出:\n\n");double conda;double deta = 1.0;double minlumda = pow((double)10.0,12);double maxlumda = pow((double)10.0,-12);double absminlumda = pow((double)10.0,12);for(int i = 1 ; i <=N ;i++){if(m[(i+1)*i/2] > maxlumda)maxlumda = m[(i+1)*i/2];if(m[(i+1)*i/2] < minlumda)minlumda = m[(i+1)*i/2];if(fabs(m[(i+1)*i/2]) < absminlumda)absminlumda = fabs(m[(i+1)*i/2]);deta *= m[(i+1)*i/2];}if(fabs(minlumda) < fabs(maxlumda))conda = fabs(maxlumda) / absminlumda;elseconda = fabs(minlumda) / absminlumda;printf(" Lumda(1)=%.11e Lumda(%d)=%.11e Lumda(s)=%.11e\n",minlumda,N,maxlumda,absminlumda);printf(" Cond(A)=%.11e\n",conda);printf(" Det(A)=%.11e\n\n",deta);for(int i = 1 ; i <= FK ; i++){double muk = minlumda + i * (maxlumda - minlumda) / 40;double lumdak = 0.0;double tempabsmin = pow((double)10.0,12);for(int j = 1 ; j <= N ;j++){if(fabs(muk - m[(j+1)*j/2]) < tempabsmin){lumdak = m[(j+1)*j/2];tempabsmin = fabs(muk - m[(j+1)*j/2]);}}printf(" Lumda(i%d)=%.11e ",i,lumdak);if(i%3==0)putchar(10);}putchar(10);printf("------------------------------------------------------------------------------------------------------\n");putchar(10);putchar(10);}int _tmain(int argc, _TCHAR* argv[]){double m[(N+1)*N/2+1] = 
{0.0};kk=0;MAX=1.0;time_t t0,t1;t0 = time( &t0);init(m);#ifndef PTSprintf("正在计算...\n\n");#endifwhile(true){kk++;find_pq(m);if(MAX<eps)break;#ifdef PTSprintf(" p=%d q=%d |max|=%e\n",p,q,MAX);printf("-----------------------------------------------------------------------\n\n"); #endifcalCosSin(m);refreshMetrix(m);}#ifdef PTSprintf(" p=%d q=%d |max|=%e\n",p,q,MAX);printf("-----------------------------------------------------------------------\n\n");#endifprintf("矩阵最终形态...\n");for(int i = 1 ; i <= PFrols ; i++){for( int j = (i-1)*i/2+1 ; j <= (i+1)*i/2 ; j++){printf(PFbits,m[j]);}putchar(10);}for(int i = 1 ; i <= PFrols+1 ; i++){printf(" ... ");}putchar(10);printf(" . .\n");printf(" . .\n");printf(" . .\n");for(int i = 1 ; i <= PFrols+2 ; i++){printf(" ... ");}putchar(10);t1 = time(&t1);#ifdef PTSprintf("计算并输出用时%.2f秒\n\n",difftime(t1,t0));#elseprintf("迭代次数%d,计算用时%.2f秒\n\n",kk,difftime(t1,t0)); #endifcalFinal(m);return 0;}运行结果如下:中间运行状态如下:结果分析➢有效性分析1.由输出结果可见矩阵经过21840次迭代后,非对角元全部为零或接近于零;2.代码中有定义预编译宏//#define PTS控制程序运行过程是否输出中间迭代结果,如果输出中间迭代结果,可以发现对角元素在迭代的后期变化非常小,达到收敛的效果;3.算法在多次运行中基本可以在45秒左右完成计算(酷睿i5双核处理器,10G存,64位windows10操作系统)。

Jacobi iteration in MATLAB

The Jacobi iteration is a commonly used method for solving systems of linear equations; it is an iterative method that approaches the solution of the system through repeated iteration.

The basic idea of the Jacobi iteration is to split the coefficient matrix of the system into the sum of a diagonal matrix and an off-diagonal matrix, and then to solve at each step with the diagonal part while the off-diagonal part acts on the previous iterate, so that the solution of the system is obtained in the limit.

The concrete steps of the Jacobi iteration are as follows: 1. Split the coefficient matrix A of the system into a diagonal matrix D and an off-diagonal matrix R, i.e. A = D + R.

2. Rewrite the system Ax = b as Dx = b − Rx.

3. For an arbitrary initial vector x0, compute the next iterate x1 = D^(-1)(b − R·x0).

4. Repeat step 3 until the prescribed accuracy or the maximum number of iterations is reached.

The Jacobi iteration has the advantage of being simple to understand and easy to implement.

Its drawback is equally clear: convergence is relatively slow, so a large number of iterations may be needed to reach high accuracy.

In Matlab, the Jacobi iteration can be implemented with the following code:

```matlab
function [x,k] = jacobi(A,b,x0,tol,maxit)
% Jacobi iteration for the linear system Ax = b
% Input:  coefficient matrix A, right-hand side b, initial guess x0,
%         tolerance tol, maximum number of iterations maxit
% Output: solution vector x, number of iterations k
n = length(b);        % order of the coefficient matrix A
D = diag(diag(A));    % diagonal matrix D
R = A - D;            % off-diagonal matrix R
x = x0;               % initial guess
for k = 1:maxit
    x1 = D \ (b - R*x);       % next iterate
    if norm(x1 - x) < tol     % accuracy requirement met?
        x = x1;
        break;
    end
    x = x1;                   % update the iterate
end
```

The result can then be produced with the following code:

```matlab
A = [4 -1 0; -1 4 -1; 0 -1 4];   % coefficient matrix
b = [15; 10; 10];                % right-hand side
x0 = [0; 0; 0];                  % initial guess
tol = 1e-6;                      % accuracy requirement
maxit = 1000;                    % maximum number of iterations
[x,k] = jacobi(A,b,x0,tol,maxit);                      % solve the system by Jacobi iteration
fprintf('solution x = [%f; %f; %f]\n', x(1), x(2), x(3));  % print the solution
fprintf('iterations k = %d\n', k);                         % print the iteration count
```

This is the essence of the Jacobi iteration; implementing it in Matlab makes its basic idea and concrete steps easier to understand.

MATLAB: the Jacobi iteration method

The Jacobi iteration is an iterative method for solving systems of linear equations. Its basic idea is to split the coefficient matrix of the original system into a diagonal part and an off-diagonal part: each equation is solved with respect to its diagonal coefficient, while the contribution of the off-diagonal part is updated iteratively.

The Jacobi iteration starts from a system of the form
$\begin{cases}a_{11}x_1+a_{12}x_2+\dots+a_{1n}x_n=b_1 \\ a_{21}x_1+a_{22}x_2+\dots+a_{2n}x_n=b_2 \\ \dots \\ a_{n1}x_1+a_{n2}x_2+\dots+a_{nn}x_n=b_n\end{cases}$
where $a_{ij}$ is the entry in row $i$, column $j$ of the coefficient matrix and $b_i$ is the right-hand side of the $i$-th equation.

Let the vector $x^{(k)}=(x_1^{(k)},x_2^{(k)},\dots,x_n^{(k)})$ denote the result of the $k$-th Jacobi iteration. The iteration formula is
$x_i^{(k+1)}=\frac{1}{a_{ii}}\Bigl(b_i-\sum_{j=1,j\ne i}^n a_{ij}x_j^{(k)}\Bigr),\quad i=1,2,\dots,n.$
Its meaning is that the coefficient $a_{ii}$ of the $i$-th unknown is treated as a main diagonal entry of the coefficient matrix, the remaining coefficients $a_{ij}$ ($i\ne j$) as off-diagonal entries, and the current iterate $x^{(k)}$ as known; with these, the $i$-th equation is solved for the unknown $x_i$.

A sufficient condition for convergence of the Jacobi iteration is that the diagonal entries of the matrix are nonzero and that in every row the sum of the absolute values of the off-diagonal entries is less than the absolute value of the diagonal entry (strict diagonal dominance by rows).
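
A quick way to test this condition in code is sketched below. This is an illustrative NumPy helper (the name is_strictly_diagonally_dominant is not from the original text), and it is only a sufficient check, since diagonal dominance is sufficient but not necessary for convergence.

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    """Check |a_ii| > sum over j != i of |a_ij| for every row i."""
    A = np.abs(np.asarray(A, dtype=float))
    diag = np.diag(A)
    off_diag_sums = A.sum(axis=1) - diag
    return bool(np.all(diag > off_diag_sums))

print(is_strictly_diagonally_dominant([[4, -1, 0], [-1, 4, -1], [0, -1, 4]]))  # True
print(is_strictly_diagonally_dominant([[2, 1, 4], [3, 4, -1], [1, 2, 3]]))     # False
```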

In Matlab, the Jacobi iteration can be implemented by writing a function.

The function jacobi implements the iteration formula and returns the iterate as a vector, as shown below:

```matlab
function xnew = jacobi(A, b, xold)
% One sweep of the Jacobi iteration for the linear system Ax = b
% A: coefficient matrix, b: right-hand side, xold: current iterate
% Returns the updated iterate xnew
n = length(b);
xnew = zeros(n,1);
% update each component from the previous iterate
for i = 1:n
    xnew(i) = (b(i) - A(i,:)*xold + A(i,i)*xold(i)) / A(i,i);
end
end
```

In the main program, the function jacobi can be used to solve a linear system in the following steps: 1. construct the coefficient matrix A and the right-hand side b; 2. set an initial iterate xold; 3. call jacobi in a loop to compute successive iterates until convergence.

Solving a linear system with the conjugate gradient method, and computing all eigenvalues and eigenvectors of a matrix with the Jacobi method

```matlab
    A(i,i+1) = 1;
end
A(1,1) = 4;
A(n,n) = 4;
A(n,n-1) = 1;
X = zeros(n,1);
for i = 1:n
    X(i,1) = 1;
end
% Solve the system with the conjugate gradient method
fprintf('Exact solution of the system\n');
X
fprintf('Solution obtained by the conjugate gradient method\n');
x = cg(A,b)
% Solve for the eigenvalues and eigenvectors of the matrix
```
(Truncated numerical output omitted: eigenvector components and part of the diagonalized matrix.)

Computing all eigenvalues and eigenvectors of the matrix with the Jacobi method:

D =

(Truncated numerical output omitted: the first columns of the matrix D returned by the Jacobi routine; diagonal entries such as 4.0000, 5.9021 and 5.6180 appear among the computed eigenvalues.)

Stability analysis of an algorithm for the (k) Jacobi matrix inverse eigenvalue problem

Abstract. If we do not know the Jacobi matrix T_{1,n}, but we know all eigenvalues of the matrix T_{1,k-1}, all eigenvalues of the matrix T_{k+1,n}, and all eigenvalues of the matrix T_{1,n}, can we construct the matrix T_{1,n}? Let S_1 = (μ_1, μ_2, …, μ_{k-1}), S_2 = (μ_k, μ_{k+1}, …, μ_{n-1}) and Λ = (λ_1, λ_2, …, λ_n) be the eigenvalues of the matrices T_{1,k-1}, T_{k+1,n} and T_{1,n} respectively. The problem is to find, from the above 2n−1 data, the other 2n−1 data: α_1, α_2, …, α_n and β_1, β_2, …, β_{n-1}. Obviously, when k = 1 or k = n this problem has been solved and there are many algorithms to construct T_{1,n} [3][6][9]; a stability analysis can be found in [2]. For k = 2, 3, …, n−1, a new algorithm to construct T_{1,n} has been put forward [1]. In this paper we give some stability properties of the new algorithm in the case k = 2, 3, …, n−1. We analyze the perturbation behaviour of this new algorithm for the (k) Jacobi matrix inverse eigenvalue problem and finally conclude that the algorithm for the (k) Jacobi inverse problem is Lipschitz continuous. Keywords: Jacobi matrix, (k) inverse problem, stability, eigenvalue.

The Jacobi iteration method in Matlab

1. Introduction. The Jacobi iteration in Matlab is an iterative method for solving systems of linear equations, commonly applied when the coefficient matrix is symmetric and positive definite.

The method splits the coefficient matrix into a diagonal matrix, an upper triangular matrix and a lower triangular matrix, and then computes the solution of the system by iteration.

In Matlab the Jacobi iteration can be implemented with matrix operations and an iteration loop.

2. Principle of the Jacobi iteration. 1. Basic idea. The basic idea of the Jacobi iteration is to split the coefficient matrix into a diagonal matrix D, an upper triangular matrix U and a lower triangular matrix L, i.e. A = D + L + U, where D is the diagonal matrix formed by the diagonal entries of A, L is the strictly lower triangular part of A, and U is the strictly upper triangular part of A.

Let x be the solution vector of the system and b the right-hand side; the system can then be written as Ax = b.

By this splitting, the system can be written as (D + L + U)x = b, and the solution is then approached step by step by iteration.

2. Iteration formula. At step k the system (D + L + U)x = b is converted into the iteration x(k+1) = D^(-1)(b − (L + U)x(k)); applying this formula repeatedly produces successive approximations of the solution.

3. Convergence condition. A standard sufficient condition for convergence of the Jacobi iteration is that the coefficient matrix A is strictly diagonally dominant.

When this condition is satisfied, the iteration converges step by step to the solution of the system.

3. Implementing the Jacobi iteration in Matlab. In Matlab the Jacobi iteration can be implemented with matrix operations and an iteration loop.

The concrete steps are as follows: 1. Split the coefficient matrix: decompose A into a diagonal matrix D, an upper triangular matrix U and a lower triangular matrix L.

2. Initialize the iteration variables: the initial value x0, the iteration counter k, and the successive approximation x(k+1).

3. Iterate: use the formula x(k+1) = D^(-1)(b − (L + U)x(k)) to compute the solution step by step.

4. Check the convergence condition: during the iteration, test whether the stopping criterion is satisfied; if so, stop the iteration and return the computed solution.

4. Example. Consider the system
2x1 + x2 + 4x3 = 10
3x1 + 4x2 − x3 = 10
x1 + 2x2 + 3x3 = 0
One can attempt to solve this system with the Jacobi iteration; the iterative computation can be programmed, as sketched below.
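
A minimal NumPy sketch of this example follows. Note that this particular coefficient matrix is not diagonally dominant, so convergence of the plain Jacobi iteration is not guaranteed; the sketch therefore checks the spectral radius of the iteration matrix first. The variable names are illustrative and not taken from the original text.

```python
import numpy as np

A = np.array([[2.0, 1.0, 4.0],
              [3.0, 4.0, -1.0],
              [1.0, 2.0, 3.0]])
b = np.array([10.0, 10.0, 0.0])

D = np.diag(np.diag(A))
LU = A - D                                   # L + U, the off-diagonal part
J = -np.linalg.solve(D, LU)                  # Jacobi iteration matrix -D^{-1}(L+U)
rho = max(abs(np.linalg.eigvals(J)))         # spectral radius decides convergence
print("spectral radius of the iteration matrix:", rho)

if rho < 1:                                  # iterate only if convergence is guaranteed
    x = np.zeros(3)
    for k in range(1000):
        x_new = np.linalg.solve(D, b - LU @ x)
        if np.max(np.abs(x_new - x)) < 1e-8:
            break
        x = x_new
    print("Jacobi solution:", x_new, "after", k + 1, "iterations")
else:
    print("rho >= 1: the Jacobi iteration need not converge for this system")
```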

Gaussian elimination with column pivoting and the Jacobi iteration: principles and computational procedure

1. Gaussian elimination with column pivoting and the Jacobi iteration: principles and computation. 1. Gaussian elimination with column pivoting. 1.1 Basic principle of Gaussian elimination. Consider the system Ax = b, where A is an invertible matrix.

The basic idea of Gaussian elimination is to apply elementary row operations to the augmented matrix B = [A b] of the system, transforming A into an upper triangular matrix, and then to solve the resulting triangular system.

1.2 Steps of Gaussian elimination with column pivoting. Write the system as the augmented matrix $B = [A\;b] = (a_{ij})_{n\times(n+1)}$.

1) Elimination. For $k = 1, 2, \dots, n-1$: (1) Select the pivot: find $i_k \in \{k, k+1, \dots, n\}$ such that $|a_{i_k,k}| = \max_{k\le i\le n}|a_{ik}|$. (2) If $a_{i_k,k} = 0$, the matrix A is singular and the program stops; otherwise go to (3).

(3) If $i_k \ne k$, exchange row $k$ and row $i_k$: $a_{kj} \leftrightarrow a_{i_k j}$ for $j = k, \dots, n+1$.

(4) Eliminate: for $i = k+1, \dots, n$ compute $l_{ik} = a_{ik}/a_{kk}$, and for $j = k+1, \dots, n+1$ compute $a_{ij} = a_{ij} - l_{ik}a_{kj}$. 2) Back substitution. (1) If $a_{nn} = 0$, the matrix is singular and the program stops; otherwise go to (2).

(2) $x_n = a_{n,n+1}/a_{nn}$; for $i = n-1, \dots, 2, 1$ compute $x_i = \bigl(a_{i,n+1} - \sum_{j=i+1}^{n} a_{ij}x_j\bigr)/a_{ii}$. (A code sketch of this elimination procedure appears at the end of this section.) 2. The Jacobi iteration. 2.1 Basic principle of the Jacobi iteration. The basic idea of the Jacobi iteration for the $n$-dimensional linear system $Ax = b$, $A \in \mathbb{R}^{n\times n}$, $b \in \mathbb{R}^n$, is to transform it into an equivalent system $x = Bx + f$ with $B \in \mathbb{R}^{n\times n}$ and $x, f \in \mathbb{R}^n$; $B$ is called the iteration matrix.

Starting from a chosen initial vector $x^{(0)}$, successive approximations are computed by the iteration formula $x^{(k+1)} = Bx^{(k)} + f$ ($k = 0, 1, \dots$), so that the sequence $\{x^{(k)}\}$ converges to the exact solution of the system. (1) Input $A, b, x^{(0)}, \varepsilon, n$ and set $k = 1$. (2) Compute $x_i^{(1)} = \frac{1}{a_{ii}}\bigl(b_i - \sum_{j=1, j\ne i}^{n} a_{ij}x_j^{(0)}\bigr)$ for $i = 1, \dots, n$. (3) Check whether $\max_{1\le i\le n}|x_i^{(1)} - x_i^{(0)}| \le \varepsilon$; if so, output $x_1^{(1)}, x_2^{(1)}, \dots, x_n^{(1)}$; otherwise set $k = k+1$, $x_i^{(0)} = x_i^{(1)}$ ($i = 1, \dots, n$), and repeat.
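
The elimination procedure of section 1.2 can be sketched in NumPy as follows; the function name gauss_pivot and the small test system are illustrative choices, not part of the original text.

```python
import numpy as np

def gauss_pivot(A, b):
    """Solve Ax = b by Gaussian elimination with column (partial) pivoting."""
    A = np.asarray(A, dtype=float).copy()
    b = np.asarray(b, dtype=float).copy()
    n = len(b)
    B = np.column_stack([A, b])                 # augmented matrix [A b]
    for k in range(n - 1):
        i_k = k + np.argmax(np.abs(B[k:, k]))   # row of the largest pivot in column k
        if B[i_k, k] == 0.0:
            raise ValueError("matrix is singular")
        if i_k != k:
            B[[k, i_k]] = B[[i_k, k]]           # exchange rows k and i_k
        for i in range(k + 1, n):               # eliminate below the pivot
            l = B[i, k] / B[k, k]
            B[i, k:] -= l * B[k, k:]
    if B[n - 1, n - 1] == 0.0:
        raise ValueError("matrix is singular")
    x = np.zeros(n)                             # back substitution
    for i in range(n - 1, -1, -1):
        x[i] = (B[i, n] - B[i, i + 1:n] @ x[i + 1:]) / B[i, i]
    return x

print(gauss_pivot([[2.0, 1.0, 4.0], [3.0, 4.0, -1.0], [1.0, 2.0, 3.0]], [10.0, 10.0, 0.0]))
```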

Algebraic eigenvalue problems


6.0. Introduction 113 Chapter 6Algebraic eigenvalue problemsDas also war des Pudels Kern! G OETHE.6.0. IntroductionDetermination of eigenvalues and eigenvectors of matrices is one of the most important problems of numerical analysis. Theoretically, the problem has been reduced to finding the roots of an algebraic equation and to solving linear homogeneous systems of equations. In practical computation, as a rule, this method is unsuitable, and better methods must be applied.When there is a choice between different methods, the following questions should be answered:(a)Are both eigenvalues and eigenvectors asked for, or are eigenvalues alone sufficient?(b)Are only the absolutely largest eigenvalue(s) of interest?(c)Does the matrix have special properties (real symmetric, Hermitian, and so on)?If the eigenvectors are not needed less memory space is necessary, and further, if only the largest eigenvalue is wanted, a particularly simple technique can be used. Except for a few special cases a direct method for computation of the eigenvalues from the equation is never used. Further it turns out that practically all methods depend on transforming the initial matrix one way or other without affecting the eigenvalues. The table on p. 114 presents a survey of the most important methods giving initial matrix, type of transformation, and transformation matrix. As a rule, the transformation matrix is built up successively, but the resulting matrix need not have any simple properties, and if so, this is indicated by a horizontal line. It is obvious that such a compact table can give only a superficial picture; moreover, in some cases the computation is performed in two steps. Thus a complex matrix can be transformed to a normal matrix following Eberlein, while a normal matrix can be diagonalized following Goldstine-Horwitz. Incidentally, both these procedures can be performed simultaneously giving a unified method as a result. Further, in some cases we have recursive techniques which differ somewhat in principle from the other methods.It is not possible to give here a complete description of all these methods because of the great number of special cases which often give rise to difficulties. However, methods which are important in principle will be treated carefully114 Algebraic eigenvalue problems6.1. The power method 115 and in other cases at least the main features will be discussed. On the whole we can distinguish four principal groups with respect to the kind of transformation used initially:1.Diagonalization,2.Almost diagonalization (tridiagonalization),3.Triangularization,4.Almost triangularization (reduction to Hessenberg form).The determination of the eigenvectors is trivial in the first case and almost trivialin the third case. In the other two cases a recursive technique is easily established which will work without difficulties in nondegenerate cases. To a certain amount we shall discuss the determination of eigenvectors, for example, Wilkinson's technique which tries to avoid a dangerous error accumulation. Also Wielandt's method, aiming at an improved determination of approximate eigenvectors, will be treated.6.1. The power methodWe assume that the eigenvalues of are where Now we let operate repeatedly on a vector which we express as a linear combination of the eigenvectors(6.1.1) Then we haveand through iteration we obtain(6.1.2). 
For large values of the vectorwill converge toward that is, the eigenvector of The eigenvalue is obtained as(6.1.3) where the index signifies the component in the corresponding vector. The rate of convergence is determined by the quotient convergence is faster the116 Algebraic eigenvalue problemssmaller is. For numerical purposes the algorithm just described can be formulated in the following way. Given a vector we form two other vectors, and(6.1.4)The initial vector should be chosen in a convenient way, often one tries vector with all components equal to 1.E XAMPLEStarting fromwe find thatandAfter round-off, we getIf the matrix is Hermitian and all eigenvalues are different, the eigenvectors, as shown before, are orthogonal. Let be the vector obtained after iterations:We suppose that all are normalized:6.1. The power method 117 Then we haveandFurther,When increases, all tend to zero,and with, we get Rayleigh's quotient(6.1.5) ExampleWith andwe obtain for and 3,,and respectively, compared with the correct value The corresponding eigenvector isThe quotients of the individual vector components give much slower convergence; for example,The power method can easily be modified in such a way that certain other eigenvalues can also be computed. If, for example,has an eigenvalue then has an eigenvalue Using this principle, we can produce the two outermost eigenvalues. Further, we know that is an eigenvalue of and analogously that is an eigenvalue of If we know that an eigenvalue is close to we can concentrate on that, since becomes large as soon as is close toWe will now discuss how the absolutely next largest eigenvalue can be calculated if we know the largest eigenvalue and the corresponding eigenvector Let be the first row vector of and form(6.1.6)Here is supposed to be normalized in such a way that the first component is Hence the first row of is zero. Now let and be an eigenvalue and the corresponding eigenvector with the first component of equal to Then118 Algebraic eigenvalue problems we havesince and(note that the first component of as well as of is 1).Thus is an eigenvalue and is an eigenvector of Since has the first component equal to 0, the first column of is irrelevant, and in fact we need consider only the-matrix, which is obtained when the first row and first column of are removed. We determine an eigenvector of this matrix, and by adding a zero as first component, we get a vector Then we obtain from the relationMultiplying with we find and hence When and have been determined, the process, which is called deflation, can be repeated.E XAMPLEThe matrixhas an eigenvalue and the corresponding eigenvectoror normalized,Without difficulty we findNow we need consider onlyand we find the eigenvalues which are also eigenvalues of6.1. The power method 119 the original matrix The two-dimensional eigenvector belonging to isand henceSince we get andWith we findand Hence andand all eigenvalues and eigenvectors are known.If is Hermitian, we have when Now suppose thatand form(6.1.7) It is easily understood that the matrix has the same eigenvalues and eigenvectors as exceptwhich has been replaced by zero. In fact, we haveand and so on. Then we can again use the power method on the matrix120 Algebraic eigenvalue problems With the starting vectorwe find the following values for Rayleigh's quotient:and compared with the correct valueIf the numerically largest eigenvalue of a real matrix is complex,then must also be an eigenvalue. 
It is also clear that if is the eigenvector belonging to then is the eigenvector belonging toNow suppose that we use the power method with a real starting vectorThen we form with so large that the contributions from all the other eigenvectors can be neglected. Further, a certain component of is denoted by Thenwhere and the initial component of corresponding to is Hencewhere we have put Now we formHence(6.1.8) Then we easily findIn particular, if that is, if the numerically largest eigenvalues are ofthe form with real then we have the simpler formula(6.1.10)6.2. Jacobi's methodIn many applications we meet the problem of diagonalizing real, symmetric matrices. This problem is particularly important in quantum mechanics.In Chapter 3 we proved that for a real symmetric matrix all eigenvalues are real, and that there exists a real orthogonal matrix such that is diagonal. We shall now try to produce the desired orthogonal matrix as a— product of very special orthogonal matrices. Among the off-diagonal elements6.2. Jacobi's method 121 we choose the numerically largest element:The elementsand form a submatrix which can easily be transformed to diagonal form. We putand get(6.2.1) Now choose the angle such that that is, tan This equationgives 4 different values of and in order to get as small rotations as possible we claimPuttingandwe obtain:since the angle must belong to the first quadrant if tan and to the fourth quadrant if tan Hence we have for the anglewhere the value of the arctan-function is chosen between After a few simple calculations we get finally:(6.2.2)(Note that andWe perform a series of such two-dimensional rotations; the transformation matrices have the form given above in the elements and and are identical with the unit matrix elsewhere. Each time we choose such values and that We shall show that with the notation the matrix for increasing will approach a diagonal122 Algebraic eigenvalue problems matrix with the eigenvalues of along the main diagonal. Then it is obvious that we get the eigenvectors as the corresponding columns of since we have that is, Let be the column vector of and the diagonal element of Then we haveIf is denoted by we know from Gershgorin's theorem that for some value of and if the process has been brought sufficiently far, every circle defined in this way contains exactly one eigenvalue. Thus it is easy to see when sufficient accuracy has been attained and the procedure can be discontinued.The convergence of the method has been examined by von Neumann and Goldstine in the following way. We put and, as before,The orthogonal transformation affects only the row and column and the row and column. Taking only off-diagonal elements into account, we find for and relations of the formand hence Thus will be changed only through the cancellation of the elements and that is,Since was the absolutely largest of all off-diagonal elements, we haveandHence we get the final estimate,(6.2.3)After iterations,has decreased with at least the factor and for a sufficiently large we come arbitrarily close to the diagonal matrix containing the eigenvalues.In a slightly different modification, we go through the matrix row by row performing a rotation as soon as Here is a prescribed tolerance which, of course, has to be changed each time the whole matrix has been passed. This modification seems to be more powerful than the preceding one. The method was first suggested by Jacobi. It has proved very efficient for diagonalization of real symmetric matrices on automatic computers.6.2. 
Jacobi's method 123 ExampleChoosing we obtain, tan andAfter the first rotation, we haveHere we take and obtain tan andAfter the second rotation we haveand after 10 rotations we haveAfter rotations the diagonal elements are and while the remaining elements are equal to to decimals accuracy. The sum of the diagonal elements is and the product in good agreement with the exact characteristic equation:Generalization to Hermitian matrices, which are very important in modern physics, is quite natural. As has been proved before, to a given Hermitian matrix we can find a unitary matrix such that becomes a diagonal matrix. Apart from trivial factors, a two-dimensional unitary matrix has the formA two-dimensional Hermitian matrix124 Algebraic eigenvalue problems is transformed to diagonal form by wherePutting we separate the real and imaginary parts and then multiply the resulting equations, first by and then by and and finally add them together. Using well-known trigonometric formulas, we get(6.2.4) In principle we obtain from the first equation and then can be solved from the second. Rather arbitrarily we demand and hencewhereSince the remaining equation has the solutionwith and Now we want to choose according to in order to get as small a rotation as possible which impliesThe following explicit solution is now obtained (note that and cannot both be equal to because then would already be diagonal):(6.2.5) As usual the value of the arctan-function must be chosen between and6.3. Givens' method 125The element can now be writtenand consequently:(6.2.6) If we get and recover the result in Jacobi's method.This procedure can be used repeatedly on larger Hermitian matrices, where the unitary matrices differ from the unit matrix only in four places. In the places and we introduce the elements of our two-dimensional matrix. The product of the special matrices is a new unitary matrix approaching when is increased.Finally we mention that a normal matrix (defined through can always be diagonalized with a unitary matrix. The process can be performed following a technique suggested by Goldstine and Horwitz which is similar to the method just described for Hermitian matrices. The reduction of an arbitrary complex matrix to normal form can be accomplished through a method given by Patricia Eberlein In practice, both these processes are performed simultaneously.6.3. Givens' methodAgain we assume that the matrix is real and symmetric. In Givens' method we can distinguish among three different phases. The first phase is concerned with orthogonal transformations, giving as result a band matrix with unchanged characteristic equation. In the second phase a sequence of, functions is generated, and it is shown that it forms a Sturm sequence, the last member of which is the characteristic polynomial. With the aid of the sign changes in this sequence, we can directly state how many roots larger than the inserted value the characteristic equation has. By testing for a number of suitable values we can obtain all the roots. During the third phase, the eigenvectors are computed. The orthogonal transformations are performed in the following order. The elements and define a two-dimensional subspace, and we start by performing a rotation in this subspace. This rotation affects all elements in the second and third rows and in the second and third columns. 
However, the quantity defining the orthogonal matrixis now determined from the condition and not, as in Jacobi's method, by We have and The next rotation is performed in the (2, 4)-plane with the new126 Algebraic eigenvalue problemsdetermined from that is, tan that the element was changed during the preceding Now all elements in the second and fourth rows and in the second and fourth columns are changed, and it should be particularly observed that the element is not affected. In the same way, we make the elements equal to zero by rotations in the-planes.Now we pass to the elements and they are all set to zero by rotations in the planesDuring the first of these rotations, the elements in the third and fourth rows and in the third and fourth columns are changed, and we must examine what happens to the elements and which were made equal to zero earlier. We findFurther, we get and By now the procedure should be clear, and it is easily understood that we finally obtain a band matrix, that is, such a matrix that In this special case we have Now we put(6.3.1)has been obtained from by a series of orthogonal transformations,with In Chapter it was proved that and have the same eigenvalues and further that, if is an eigenvector of and an eigenvector of(both with the same eigenvalue), then we have Thus the problem has been reduced to the computation of eigenvalues and eigenvectors of the band matrixWe can suppose that all otherwise could be split into two determinants of lower order Now we form the following sequence of functions:(6.3.2)with and We find at once that which can be interpreted as the determinant of the-element in the matrix6.3. Givens' method 127 Analogously, we have which is the-minor ofBy induction, it is an easy matter to prove that is the characteristic polynomial.Next we shall examine the roots of the equation For we have the only root.For we observe that, Hence we have two real roots and with, for example,For we will use a method which can easily be generalized to an induction proof. Then we write and obtain from (6.3.2):Now it suffices to examine the sign of in a few suitable points:We see at once that the equation has three real roots and such thatIn general, if has the roots and the roots thenwhereBy successively putting and we find that has different signs in two arbitrary consecutive points. Hence has real roots, separated by the roots ofWe are now going to study the number of sign changes in the sequenceIt is evident that and Suppose that and are two such real numbers that in the closed interval Then obviously First we examine what happens if the equation has a root in the interval. From it follows for thatHence and have different signs, and clearly this is also true in an interval Suppose, for example, that then we may have the following combination of signs:Hence, the number of sign changes does not change when we pass through a root of When however, the situation is different.128 Algebraic eigenvalue problemsSuppose, for example, that& odd. Denoting the roots of by and the roots of by we haveThen we see that Now we let increase until it reaches the neighborhood of |where we find the following scheme:Hence Then we let increase again (now a sign change of may appear, but, as shown before, this does not affect until we reach the neighborhood of where we haveand hence Proceeding in the same way through all the rootswe infer that the number of sign changes decreases by one unit each time a root is passed. 
Hence we have proved that if is the number of eigenvalues of the matrix which are larger than then(6.3.3) The sequence is called a The described technique makes it possible to compute all eigenvalues in a given interval ("telescope method").For the third phase, computation of the eigenvectors, we shall follow J. H. Wilkinson in Let be an exact eigenvalue of Thus we search for a vector such that Since this is a homogeneous system in variables, and since we can obtain a nontrivial solution by choosing equations and determine the components of(apart from a constant factor); the remaining equation must then be automatically satisfied. In practical work it turns out, even for quite well-behaved matrices, that the result to a large extent depends on which equation was excluded from the Essentially, we can say that the serious errors which appear on an unsuitable choice of equation to be excluded depend on numerical compensations; thus round-off errors achieve a dominant influence.Let us assume that the equation is excluded, while the others are solved by elimination. The solution (supposed to be exact) satisfies the equations used for elimination but gives an error when inserted into the6.3. Givens' method 129Actually, we have solved the system(We had to use an approximation instead of the exact eigenvalue.) Since constant factors may be omitted, this system can be written in a simpler way:(6.3.5)where is a column vector with the component equal to and the others equal to If the eigenvectors of are this vector can be expressed as a linear combination, that is,(6.3.6) and from (6.3.5) we get(6.3.7) Now let and we obtain(6.3.8)Under the assumption that our solution approaches as(apart from trivial factors). However, it may well happen that is of the same order of magnitude as(that is, the vector is almost orthogonal to), and under such circumstances it is clear that the vector in (6.3.8) cannot be a good approximation of Wilkinson suggests that (6.3.5) be replaced by(6.3.9)where we have the vector at our disposal. This system is solved by Gaussian elimination, where it should be observed that the equations are permutated properly to make the pivot element as large as possible. The resulting system is written:(6.3.10)As a rule, most of the coefficients are zero. Since the have been obtained from the which we had at our disposal, we could as well choose the constants deliberately. It seems to be a reasonable choice to take all130 Algebraic eigenvalue problems equal to no eigenvector should then be disregarded. Thus we choose(6.3.11) The system is solved, as usual, by back-substitution, and last, the vector is normalized. Even on rather pathological matrices, good results have been obtained by Givens' method.6.4. Householder's methodThis method, also, has been designed for real, symmetric matrices. We shall essentially follow the presentation given by Wilkinson The first step consists of reducing the given matrix to a band matrix. This is done by orthogonal transformations representing reflections. The orthogonal matrices, will be denoted by with the general structure(6.4.1) Here is a column vector such that(6.4.2) It is evident that is symmetric. Further, we havethat is,is also orthogonal.The matrix acting as an operator can be given a simple geometric interpretation. Let t operate on a vector from the left:In Fig. 
6.4 the line is perpendicular to the unit vector in a plane defined by and The distance from the endpoint of to is and the mapping means a reflection in a plane perpendicular toFigure 6.46.4. Householder's method 131 Those vectors which will be used are constructed with the first components zero, orWith this choice we form Further, by (6.4.2) we haveNow put and form successively(6.4.3)At the first transformation, we get zeros in the positionsand in the corresponding places in the first column. The final result will become a band matrix as in Givens' method. The matrix contains elements in the row, which must be reduced to zero by transformation with this gives equations for theelements and further we have the condition that the sum of the squares must beWe carry through one step in the computation in an example:The transformation must now produce zeros instead of and Obviously, the matrix has the following form:Since in the first row of only the first element is not zero, for example, the -element of can become zero only if the corresponding element is zero already in Puttingwe find that the first row of has the following elements:Now we claim that(6.4.4) Since we are performing an orthogonal transformation, the sum of the squares132 Algebraic eigenvalue problems of the elements in a row is invariant, and hencePutting we obtain(6.4.5) Multiplying (6.4.5) by and (6.4.4) by and we getThe sum of the first three terms is and further Hence(6.4.6) Inserting this into (6.4.5), we find that and from (6.4.4), andIn the general case, two square roots have to be evaluated, one for and one for Since we have in the denominator, we obtain the best accuracy if is large. This is accomplished by choosing a suitable sign for the square-root extraction for Thus the quantities ought to be defined as follows:(6.4.7) The sign for this square root is irrelevant and we choose plus. Hence we obtain for and(6.4.8) The end result is a band matrix whose eigenvalues and eigenvectors are computed exactly as in Givens' method. In order to get an eigenvector of an eigenvector of the band matrix has to be multiplied by the matrix this should be done by iteration:(6.4.9) 6.5. Lanczos' methodThe reduction of real symmetric matrices to tridiagonal form can be accomplished through methods devised by Givens and Householder. For arbitrary matrices a similar reduction can be performed by a technique suggested by Lanczos. In this method two systems of vectors are constructed, and which are biorthogonal; that is, for we have6.5. Lanczos' method 133 The initial vectors and can be chosen arbitrarily though in such a way that The new vectors are formed according to the rulesThe coefficients are determined from the biorthogonality condition, and for we form:If we getAnalogouslyLet us now consider the numerator in the expression for whenbecause of the biorthogonality. Hence we have for and similarly we also have under the same condition. 
In this way the following simpler formulas are obtained:If the vectors are considered as columns in a matrix and if further a tridiagonal matrix is formed from the coefficients and with one's in the remaining diagonal:then we can simply write and provided the vectors are linearly independent134 Algebraic eigenvalue problemsIf similar matrices are formed from the vectors and from the coefficientswe getCertain complications may arise, for example, that some or may become zero, but it can also happen that even if and The simplest way out is to choose other initial vectors even if it is sometimes possible to get around the difficulties by modifying the formulas themselves.Obviously, Lanczos' method can be used also with real symmetric or Hermi tian matrices. Then one chooses just one sequence of vectors which must form an orthogonal system. For closer details, particularly concerning the determination of the eigenvectors, Lanczos' paper should be consulted; a detailed discussion of the degenerate cases is given by Causey and GregoryHere we also mention still one method for tridiagonalization of arbitrary real matrices, first given by La Budde. Space limitations prevent us from a closer discussion, and instead we refer to the original paper6.6. Other methodsAmong other interesting methods we mention the method. Starting from a matrix we split it into two triangular matrices with and then we form Since the new matrix has the same eigenvalues as Then we treat in the same way as and so on, obtaining a sequence of matrices which in general converges toward an upper triangular matrix. If the eigenvalues are real, they will appear in the main diagonal. Even the case in which complex eigenvalues are present can be treated without serious complications. Closer details are given in where the method is described by its inventor, H. Rutishauser. Here we shall also examine the more general eigenvalue problem,where and are symmetric and, further,is positive definite. Then we can split according to where is a lower triangular matrix. Henceand where Sincethe problem has been reduced to the usual type treated before.6.7. Complex matricesFor computing eigenvalues and eigenvectors of arbitrary complex matrices (also, real nonsymmetric matrices fall naturally into this group), we shall first discuss a triangularization method suggested by Lotkin and Greenstadt6.7. Complex matrices 135The method depends on the lemma by Schur stating that for each square matrix there exists a unitary matrix such that where is a (lower or upper) triangular matrix (see Section 3.7). In practical computation one tries to find as a product of essentially two-dimensional unitary matrices, using a procedure similar to that described for Hermitian matrices in Section 6.2. It is possible to give examples for which the method does not converge (the sum of the squares of the absolute values of the subdiagonal elements is not monotonically decreasing, cf.but in practice convergence is obtained in many cases. 
We start by examining the two-dimensional case and put(6.7.1)From we get Further, we suppose that whereand obtain(6.7.2) Clearly we have Claiming we find withand(6.7.3) Here we conveniently choose the sign that makes as small as possible; with and we get Hence is obtained directly from the elements and Normally, we must take the square root of a complex number, and this can be done by the formulawhere When has been determined, we get and from(6.7.4) Now we pass to the main problem and assume that is an arbitrary complex matrix We choose that element below the main diagonal which is largest。

The Jacobi method

The Jacobi method can be used to compute the eigenvalues and eigenvectors of a covariance matrix.

(In another sense, the Jacobian method is a technique for finding complete integrals: the generalization of the Lagrange-Charpit method to finding complete integrals of first-order nonlinear equations in n independent variables is also called the Jacobi method.)

The Jacobi iteration has a simple computational formula: each iteration requires only one matrix-vector multiplication, the original matrix A remains unchanged throughout the computation, and the method is relatively easy to parallelize.

For the linear system Ax = b, elimination with pivoting is an effective method when A is a low-order dense matrix.

For the large sparse systems that arise in engineering and technology, however (A of very high order but with many zero entries, for example the linear systems produced by the numerical solution of certain partial differential equations), iterative methods are the appropriate choice: with respect to both memory and arithmetic, iterative methods can usually exploit the large number of zero entries of A.

The Jacobi iteration is one of the earliest and simplest of these iterative methods, named in honour of the celebrated Prussian mathematician Jacobi.

Principle. Convergence: let Ax = b with A = D + L + U nonsingular, where the diagonal matrix D is also nonsingular; then the Jacobi iteration converges when the spectral radius of the iteration matrix J satisfies ρ(J) < 1.

First the coefficient matrix A of the system is split into three parts, A = L + D + U, where D is a diagonal matrix, L a lower triangular matrix and U an upper triangular matrix.

Next the iteration scheme X^(k+1) = B·X^(k) + f is set up (the superscript in parentheses denotes the iteration count), where B is called the iteration matrix; in the Jacobi iteration it is usually written J.

(k = 0, 1, …) Then an initial vector X^(0) is chosen and the iterations are carried out one after another.

Advantages and drawbacks. The advantages of the Jacobi iteration are evident: the formula is simple, each iteration needs only one matrix-vector multiplication, the original matrix A remains unchanged throughout the computation, and the method is comparatively easy to parallelize.

However, the iteration converges rather slowly and requires a relatively large amount of storage, so in engineering practice the plain Jacobi iteration is rarely used directly; its improved variants are used instead.

Implementation. The eigenvalues and eigenvectors of a real symmetric matrix can be computed with the Jacobi method. Each step applies S' = G^T S G, where G is a rotation matrix and S' and S are real symmetric matrices with the same Frobenius norm; the procedure can be derived in detail for a simple 3-by-3 real symmetric matrix. Through such rotations the symmetric matrix is transformed into an approximately diagonal matrix, from which the eigenvalues and eigenvectors are obtained: the main diagonal entries of the resulting matrix are approximations of the real eigenvalues of S. A small code sketch of this rotation procedure follows.
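
Below is a minimal NumPy sketch of the classical Jacobi eigenvalue iteration outlined above: it repeatedly zeroes out the largest off-diagonal entry with a rotation S' = G^T S G and accumulates the rotations to obtain the eigenvectors. The function name jacobi_eig and the tolerance are illustrative choices, not taken from the original text.

```python
import numpy as np

def jacobi_eig(S, tol=1e-10, max_rotations=10000):
    """Eigenvalues/eigenvectors of a real symmetric matrix by Jacobi rotations."""
    S = np.asarray(S, dtype=float).copy()
    n = S.shape[0]
    V = np.eye(n)                                 # accumulated rotations = eigenvectors
    for _ in range(max_rotations):
        # locate the off-diagonal entry of largest magnitude
        off = np.abs(S - np.diag(np.diag(S)))
        p, q = np.unravel_index(np.argmax(off), off.shape)
        if off[p, q] < tol:
            break
        # rotation angle that annihilates S[p, q]
        theta = 0.5 * np.arctan2(2.0 * S[p, q], S[p, p] - S[q, q])
        c, s = np.cos(theta), np.sin(theta)
        G = np.eye(n)
        G[p, p] = c; G[q, q] = c
        G[p, q] = -s; G[q, p] = s
        S = G.T @ S @ G                           # S' = G^T S G keeps symmetry
        V = V @ G
    return np.diag(S), V

S = np.array([[4.0, 1.0, 0.5], [1.0, 3.0, 0.25], [0.5, 0.25, 2.0]])
eigvals, eigvecs = jacobi_eig(S)
print(np.sort(eigvals))
print(np.sort(np.linalg.eigvalsh(S)))             # cross-check with LAPACK
```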

Iterative algorithms for linear systems: a Python implementation of the Jacobi iteration

Principle: see the author's blog. Code:

```python
import numpy as np

max = 100          # upper bound on the number of iterations
Delta = 0.01       # convergence tolerance
m = 2              # order of the tensor: the matrix is of order 2
n = 3              # dimension: a 3x3 matrix
shape = np.full(m, n)
shape = tuple(shape)

def read_tensor(f, shape):        # read a tensor
    data = []
    for i in range(n**(m-1)):
        line = f.readline()
        data.append(list(map(eval, line.split(","))))
    return np.array(data).reshape(shape)

def read_vector(f):               # read a vector
    line = f.readline()
    line = line.replace("\n", "")
    line = list(map(eval, line.split(",")))
    return np.array(line)

# read the data
f = open("jacobi_data.txt")
A = read_tensor(f, shape)         # read the matrix A
b = read_vector(f)                # read b
f.close()
print('A:')
print(A)
print('b:', b)

# form LU = L + U
LU = np.copy(A)
for i in range(n):
    LU[i, i] = 0
LU = 0 - LU

# D: the diagonal entries of the coefficient matrix
D = np.copy(A)
D = D + LU

# iterative solution
x = np.ones(n)    # stores the value of x during the iteration
y = np.ones(n)    # stores the intermediate result
DLU = np.dot(np.linalg.inv(D), LU)   # invert D, then multiply by LU
Db = np.dot(np.linalg.inv(D), b)
print('x:', x)
for iteration in range(max):
    # one iteration step
    y = np.dot(DLU, x) + Db
    # check whether the accuracy requirement is met
    if np.max(np.fabs(x - y)) < Delta:
        print('iteration:', iteration)
        break
    # copy y into x and start the next iteration
    x = np.copy(y)
print('x:', x)
```

Data: organization of jacobi_data.txt: the first 3 lines hold the data of A, and the last line holds the data of b.

Implementing the Jacobi Algorithm for Solving Eigenvalues of Symmetric Matrices with CUDA


2012 IEEE Seventh International Conference on Networking, Architecture, and StorageImplementing the Jacobi Algorithm for Solving Eigenvalues of Symmetric Matrices with CUDATao Wang1 , Longjiang Guo1,2† , Guilin Li3 , Jinbao Li1,2 , Renda Wang1 , Meirui Ren1,2 , Jing (Selena) He4 1 School of Computer Science and Technology, Heilongjiang University, Harbin, 150080, China 2 Key Laboratory of Database and Parallel Computing of Heilongjiang Province, Harbin 150080, China 3 Software School of Xiamen University, Xiamen, Fujian 361005, China 4 Department of Computer Science, Kennesaw State University, Kennesaw, GA, 30144, USA Email: longjiangguo@Abstract—Solving the eigenvalues of matrices is an open problem which is often related to scientific computation. With the increasing of the order of matrices, traditional sequential algorithms are unable to meet the needs for the calculation time. Although people can use cluster systems in a short time to solve the eigenvalues of large-scale matrices, it will bring an increase in equipment costs and power consumption. This paper proposes a parallel algorithm named Jacobi on gpu which is implemented by CUDA (Computer Unified Device Architecture) on GPU (Graphic Process Unit) to solve the eigenvalues of symmetric matrices. In our experimental environment, we have Intel Core i5-760 quad-core CPU, NVIDIA GeForce GTX460 card, and Win7 64-bit operating system. When the size of matrix is 10240×10240, the number of iterations is 10000 times, the speedup ratio is 13.71. As the size of matrices increase, the speedup ratio increases correspondingly. Moreover, as the number of iterations increases, the speedup ratio is very stable. When the size of matrix is 8192×8192, the number of iterations are 1000, 2000, 4000, 8000 and 16000 respectively, the standard deviation of the speedup ratio is 0.1161. The experimental results show that the Jacobi on gpu algorithm can save more running time than traditional sequential algorithms and the speedup ratio is 3.02∼13.71. Therefore, the computing time of traditional sequential algorithms to solve the eigenvalues of matrices is reduced significantly. Keywords-Matrix eigenvalue; CUDA; Jacobi iteration; GPU; Symmetric Matrix;I. I NTRODUCTION Solving the eigenvalues of matrices is one of the most common and important operation in linear algebra. It has wide applications in scientific computing [1]. Solving the eigenvalues of matrices not only can directly help to solve the nonlinear programming, optimization, ordinary differential equations, and a variety of math problems, but also plays an important role in structural mechanics, engineering design, computational physics, and quantum mechanics. Nowadays, because of its important and wide applications, the matrix eigenvalue problem becomes one of the main computational tasks of high-performance computers. However, with the development of advanced technologies, and1 †To whom correspondences should be addressed. Email: longjiangguo@the improvement of computing power, the size of matrices which people deal with has been increased dramatically. For example, in the field of computational fluid dynamics, statistical, structural engineering, quantum physics, chemical engineering, economic models, the aerospace industry, hydropower, weather forecasting, integrated circuit simulation, signal processing and control, network queuing simulation of Markov chains, and many other fields , it is needed to solve the eigenvalues with large-scale matrices. 
For some practical problems, the orders of matrices often reach thousands, tens of thousands, or even millions [2]. However, the majority of methods for solving the eigenvalues of matrices adopts the sequential algorithm, or parallel methods on cluster systems. The speed of sequential solutions is undoubtedly very slow for large matrices. The speed of cluster systems is much faster than the sequential methods in solving the eigenvalues of matrices. Nevertheless, using cluster systems brings an increase in equipment costs and power consumptions. Thus, people need to pay more expensive computational costs. For example, a cluster system consists of eight computers. The total cost includes the costs of the eight stand-alone machines, the costs of the connections among these eight machines, and the high maintenance costs. Additionally, the power consumption is eight times or more than using only one machine. The costs of using cluster systems usually range from tens of thousands to hundreds of thousands of dollars (e.g., the Huaan-RAC cluster costs 185,000 RMB in 2011), and their power consumptions are very large. This expensive cost for accelerating computing ability is not affordable for some people. In recent years, GPU is already famous for the programming capabilities for the large-scale fast calculations. The CUDA technology proposed by NVIDIA is an outstanding representative of this area [3]. CUDA programming gives people a new idea, and also provides us a new way to solve the eigenvalues of matrices. In recent years, the development of CUDA has become a focus on academic researches. People not only use CUDA to deal with graphics and images, but also use it to handle numerical calculations.69978-0-7695-4722-0/12 $26.00 © 2012 IEEE DOI 10.1109/NAS.2012.12For example, R. R. Amossen et al. presented a novel data layout, BATMAP, which is particularly well suitable for parallel processing. Moreover BATMAP is compact even for sparse data [4]. Grand et al. proposed a broad-phase collision detection with CUDA [5]. A. A. Aqrawi et al. presented a method using compression for large seismic data sets on modern GPUs and CPUs [6]. Later, A. A. Aqrawi et al. presented the 3D convolution for large data sets on modern GPUs [7]. J. Barnat et al. designed a new CUDAaware procedure for pivot selection and implemented parallel algorithms using CUDA accelerated computation [8]. A. Hagiescu et al. described an automated compilation flow that maps most stream processing applications onto GPUs by taking two important architectural features of NVIDIA GPUs into consideration , namely, interleaved execution, as well as the small amount of shared memory available in each streaming multiprocessor [9]. This paper proposes a parallel algorithm named Jacobi on gpu which is implemented by CUDA on GPU to solve the eigenvalues of symmetric matrices. In our experimental environment, we have Intel Core i5-760 quadcore CPU, NVIDIA GeForce GTX460 card, and Win7 64-bit operating system. When the size of matrix is 10240×10240, the number of iterations is 10000 times, the speedup ratio is 13.71. As the size of matrices increases, the speedup ratio increases. As the number of iterations increases, the speedup ratio is very stable. When the size of matrix is 8192×8192, the number of iterations are 1000, 2000, 4000, 8000 and 16000 respectively, the standard deviation of the speedup ratio is 0.1161. The experimental results show that the Jacobi on gpu algorithm can save more running time than traditional sequential algorithms. 
Moreover, the speedup ratio increases from 3.02 to 13.71, so the computation time is reduced significantly compared with traditional sequential algorithms. The contributions of this paper are summarized as follows:

∙ To the best of our knowledge, this paper is the first to propose a parallel algorithm for solving the eigenvalues of symmetric matrices with CUDA on GPU.
∙ The paper implements the proposed parallel algorithm on an Intel Core i5-760 quad-core CPU, an NVIDIA GeForce GTX460 card, and the 64-bit Windows 7 operating system; the speedup ratio is 3.02∼13.71.
∙ The theoretical analysis shows that the time complexity of the parallel algorithm is O(n).

The rest of the paper is organized as follows: Section II presents related work. Section III gives an overview of CUDA. Section IV first describes the sequential Jacobi algorithm for solving the eigenvalues of symmetric matrices, then presents the proposed Jacobi on gpu algorithm, and finally gives the time complexity analysis. Section V shows the experimental results. Section VI summarizes the paper.

II. RELATED WORK

Many algorithms have been proposed to solve the eigenvalues of matrices, such as the Jacobi iteration method, the QR method, and the power method [10]. However, the majority of these algorithms are implemented sequentially, and sequential solutions are undoubtedly very slow for large matrices. B. Butrylo et al. presented a parallel SSOR preconditioning algorithm implemented on dynamic SMP clusters with communication on the fly [11]; in that paper, an SMP cluster system is used to improve the efficiency of solving matrix eigenvalues. T. Auckenthaler et al. presented a parallel solution of partial symmetric eigenvalue problems, using a parallel computer system to solve the eigenvalues of matrices [12]. Cluster systems can solve the eigenvalues of large-scale matrices in a short period of time, but they bring an increase in equipment costs and power consumption. J. Xia et al. presented a GPGPU (general-purpose computing on GPU) implementation for solving the eigenvalues of matrices [13]. They presented a power method for computing the maximum eigenvalue of a matrix and a QR method for computing all eigenvalues, both based on OpenGL (a professional graphics interface). To implement that approach, programmers must be very familiar with the proposed parallel algorithms and fully master the programming interface of the graphics hardware. Because the implementation is difficult, this style of GPU development has not been widely applied [14]. Owing to the architectural differences between OpenGL and CUDA, it is also very difficult to port the algorithm of [13] to the CUDA platform. Their speedup ratio is 2.7∼7.6: the speedup ratio 7.6 is reached when using the power method to compute the maximum eigenvalue of a 2048×2048 matrix, while the speedup ratio 2.7 is obtained when using the QR method to compute all eigenvalues of a 448×448 matrix. In contrast, the Jacobi on gpu algorithm presented in this paper is based on CUDA, which is widely used in many fields, and it computes all eigenvalues of a matrix with a speedup ratio of 3.02∼13.71: the speedup ratio 3.02 is reached when the matrix size is 1024×1024, and 13.71 when the matrix size is 10240×10240.

Graphics hardware, because of its programmability and parallelism, can easily be applied to fast general-purpose computation for complex models and has become one of today's research foci. Because of the GPU's excellent floating-point calculation capability, large memory bandwidth, and relatively low price, GPGPU plays a very important role in many application fields [15]. Hence, in this paper we propose a CUDA parallel implementation of the Jacobi algorithm to solve the eigenvalues of symmetric matrices. This provides a new way of solving the eigenvalues of matrices.

A remaining question for future work is how to use the QR method to solve the eigenvalues of general matrices with CUDA.

ACKNOWLEDGMENT

This work is supported by the Program for New Century Excellent Talents in University under grant No. NCET-11-0955, the Programs Foundation of Heilongjiang Educational Committee for New Century Excellent Talents in University under grant No. 1252-NCET-011, the Program for Group of Science and Technology Innovation of Heilongjiang Educational Committee under grant No. 2011PYTD002, the Science and Technology Research of Heilongjiang Educational Committee under grant No. 12511395, the Science and Technology Innovation Research Project of Harbin for Young Scholar under grant No. 2008RFQXG107 and No. 2011RFXXG014, the National Natural Science Foundation of China under grant No. 61070193, 61100032 and 60803015, the Heilongjiang Province Funds for Distinguished Young Scientists under grant No. JC201104, the Heilongjiang Province Science and Technique Foundation under grant No. GC09A109, and the Basal Research Fund of Xiamen University under grant No. 2010121072.

REFERENCES

[1] Andrea Brini, Marcos Marino, and Sebastien Stevan. The uses of the refined matrix model recursion. Journal of Mathematical Physics, 2011, 52(5): pp. 291-315.
[2] S. L. Foo and P. P. Silvester. Finite element analysis of inductive strips in unilateral finlines. IEEE Transactions on Microwave Theory and Techniques, 1993, 41(2): pp. 298-304.
[3] NVIDIA. NVIDIA CUDA Programming Guide, Version 2.2.1. http://developer.download.nvidia.com/compute/cuda/2.2/toolkit/docs/NVIDIA_CUDA_Programming_Guide_2.2.1.pdf, 2009-05-26.
[4] Rasmus Resen Amossen and Rasmus Pagh. A new data layout for set intersection on GPUs. IEEE IPDPS, 2011.
[5] S. L. Grand. Broad-phase collision detection with CUDA. In GPU Gems 3, 2007.
[6] Ahmed A. Aqrawi and Anne C. Elster. Accelerating disk access using compression for large seismic datasets on modern GPU and CPU. Scientific and Parallel Computing, 2010.
[7] A. A. Aqrawi. 3D convolution of large datasets on modern GPUs. Norwegian University of Science and Technology, 2009.
[8] J. Barnat, Petr Bauch, L. Brim, and M. Ceska. Computing strongly connected components in parallel on CUDA. IEEE IPDPS, 2011.
[9] Andrei Hagiescu, Huynh Phung Huynh, Wengfai Wong, and Rick Siow Mong Goh. Automated architecture-aware mapping of streaming applications onto GPUs. IEEE IPDPS, 2011.
[10] David Kincaid and Ward Cheney. Numerical Analysis: Mathematics of Scientific Computing (Third Edition). Thomson Publishers, 2003.
[11] Boguslaw Butrylo, Marek Tudruj, and Lukasz Masko. Parallel SSOR preconditioning implemented on dynamic SMP clusters with communication on the fly. Future Generation Computer Systems, 2010, 26(3): pp. 491-497.
[12] T. Auckenthaler, V. Blum, H.-J. Bungartz, T. Huckle, R. Johanni, L. Kramer, B. Lang, H. Lederer, and P. R. Willems. Parallel solution of partial symmetric eigenvalue problems from electronic structure calculations. Parallel Computing, 2011, 37(12): pp. 783-794.
[13] Xia Jian-Ming and Wei De-Min. GPU implementation for solving eigenvalues of a matrix. Acta Scientiarum Naturalium Universitatis Sunyatseni, 2008, 47(2): pp. 89-92.
[14] M. Snyder. Solving the embedded OpenGL puzzle - making standards, tools, and APIs work together in highly embedded and safety critical environments. IEEE DASC, 2005.
[15] Charl van Deventer, Willem A. Clarke, and Scott Hazelhurst. BOINC and CUDA: distributed high-performance computing for bioinformatics string matching problems. IEEE ACMW, 2006.

Jacobi Iteration Method: A C Program Approach

1. Algorithm theory. The Jacobi iteration scheme is derived from the basic idea of iterative methods: construct a sequence of vectors $\{X^{(n)}\}$ that converges to some limit $X^{*}$; then $X^{*}$ is the exact solution of the system of equations.

Jacobi iteration rewrites the system

\[
\begin{cases}
a_{11}x_{1}+a_{12}x_{2}+\cdots+a_{1n}x_{n}=b_{1}\\
a_{21}x_{1}+a_{22}x_{2}+\cdots+a_{2n}x_{n}=b_{2}\\
\qquad\cdots\\
a_{n1}x_{1}+a_{n2}x_{2}+\cdots+a_{nn}x_{n}=b_{n}
\end{cases}
\tag{1}
\]

under the assumption $a_{ii}\neq 0$, in the form

\[
\begin{cases}
x_{1}=b_{12}x_{2}+b_{13}x_{3}+\cdots+b_{1n}x_{n}+g_{1}\\
x_{2}=b_{21}x_{1}+b_{23}x_{3}+\cdots+b_{2n}x_{n}+g_{2}\\
\qquad\cdots\\
x_{n}=b_{n1}x_{1}+b_{n2}x_{2}+\cdots+b_{n,n-1}x_{n-1}+g_{n}
\end{cases}
\tag{2}
\]

If we introduce the coefficient matrices

\[
A=\begin{pmatrix}a_{11}&\cdots&a_{1n}\\ \vdots& &\vdots\\ a_{n1}&\cdots&a_{nn}\end{pmatrix},\qquad
B=\begin{pmatrix}0&\cdots&b_{1n}\\ \vdots& &\vdots\\ b_{n1}&\cdots&0\end{pmatrix}
\]

and the vectors $X=(x_{1},\cdots,x_{n})^{T}$, $b=(b_{1},\cdots,b_{n})^{T}$, $g=(g_{1},\cdots,g_{n})^{T}$, then systems (1) and (2) can be written as $AX=b$ and $X=BX+g$ respectively, which gives the Jacobi iteration scheme $X^{(k+1)}=BX^{(k)}+g$. To solve $AX=b$ by Jacobi iteration, one may take an arbitrary initial vector $X^{(0)}$, substitute it repeatedly into $X^{(k+1)}=BX^{(k)}+g$, and take $\lim_{k\to\infty}X^{(k)}$.

However, when n is large, writing out systems (1) and (2) directly is tedious. If B and g can be obtained directly from A and b, everything reduces to matrix and vector operations; so how are B and g obtained? Introduce the non-singular diagonal matrix

\[
D=\begin{pmatrix}a_{11}& & \\ &\ddots& \\ & &a_{nn}\end{pmatrix}\qquad(a_{ii}\neq 0)
\]

and split A as $A=(A-D)+D$. Solving $AX=b$ then amounts to $(A-D)X+DX=b$; since D is non-singular, $D^{-1}$ exists, and we obtain $X=(I-D^{-1}A)X+D^{-1}b$. Setting $B=I-D^{-1}A$ and $g=D^{-1}b$ again yields the Jacobi iteration scheme $X^{(k+1)}=BX^{(k)}+g$.

Now consider a C program for the Jacobi iteration. The declarations float a[3][3] = {{10, -2, -1}, {-2, 10, -1}, {-1, -2, 5}}; and float b[3] = {3, 15, 10}; hold the coefficients of X and the constants on the right-hand side, i.e. the system

\[
\begin{cases}
10x_{1}-2x_{2}-x_{3}=3\\
-2x_{1}+10x_{2}-x_{3}=15\\
-x_{1}-2x_{2}+5x_{3}=10
\end{cases}
\]

The equations are entered first and the main function is run; if first is not null, the branch inside the if is executed, otherwise the else branch, and finally the sum() method is called.
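The C program referred to above is not reproduced in this excerpt. As a minimal sketch of the matrix form just derived ($B=I-D^{-1}A$, $g=D^{-1}b$), the following NumPy snippet — an illustration written for this note, not the program the text describes — iterates $X^{(k+1)}=BX^{(k)}+g$ on the 3×3 example system:

```python
import numpy as np

# Example system from the text: A x = b, with exact solution (1, 2, 3).
A = np.array([[10.0, -2.0, -1.0],
              [-2.0, 10.0, -1.0],
              [-1.0, -2.0,  5.0]])
b = np.array([3.0, 15.0, 10.0])

D = np.diag(np.diag(A))                  # non-singular diagonal part of A
B = np.eye(3) - np.linalg.solve(D, A)    # B = I - D^{-1} A
g = np.linalg.solve(D, b)                # g = D^{-1} b

X = np.zeros(3)                          # arbitrary initial vector X^{(0)}
for k in range(1000):
    X_new = B @ X + g                    # X^{(k+1)} = B X^{(k)} + g
    if np.linalg.norm(X_new - X, np.inf) < 1e-10:
        X = X_new
        break
    X = X_new

print(X)   # approaches the exact solution [1. 2. 3.]
```

Because A is strictly diagonally dominant, the iteration converges for any starting vector.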

A Note on Necessary and Sufficient Conditions for the Solvability of the Inverse Eigenvalue Problem for Jacobi Matrices (in English)

Journal of Mathematical Research and Exposition, Vol. 27
new necessary and sufficient conditions for such an inverse eigenvalue problem to have solutions. They also present an algorithm based on their theoretical results. In this paper, we give some further necessary conditions for n > 3 and simpler proofs of the results obtained in [7] and [8]. We present examples showing that some theorems and an algorithm in [8] are incorrect.
\[
\prod_{i=1}^{n}\lambda_{i}=1,\qquad \sum_{i=1}^{n}\lambda_{i}-n>0. \tag{10}
\]
Theorem 3  For n ≥ 3, a necessary condition for (λ1, λ2, · · · , λn) ∈ Sn is b − 4a < a² ≤ (n − 1)(b − 4a), where a = …
Let ∆k denote the sum of all k × k principal minors of Jn(β1, β2, · · · , βn−1), and let δk denote the elementary symmetric function of degree k of the n positive numbers λ1, λ2, · · · , λn, i.e.

\[
\delta_{k}(\lambda_{1},\lambda_{2},\cdots,\lambda_{n})=\sum_{1\le i_{1}<i_{2}<\cdots<i_{k}\le n}\lambda_{i_{1}}\lambda_{i_{2}}\cdots\lambda_{i_{k}}.
\]

By Vieta's theorem, we have

Theorem 1  A necessary and sufficient condition for (λ1, λ2, · · · , λn) ∈ Sn is that the system of equations ∆k(β1, β2, · · · , βn−1) = δk(λ1, λ2, · · · , λn), k = 1, 2, · · · , n, has a positive solution.

The following two lemmas can be verified by direct calculation:

Lemma 1  For the matrix Jn(β1, β2, · · · , βn−1) in (1), we have
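Theorem 1 rests on the classical fact (Vieta's theorem applied to the characteristic polynomial) that the sum of all k × k principal minors of a matrix equals the k-th elementary symmetric function of its eigenvalues. The short NumPy check below illustrates that identity; since equation (1) defining Jn(β1, ..., βn−1) is not reproduced in this excerpt, the diagonal entries and the βi used here are assumptions chosen only for the example.

```python
import numpy as np
from itertools import combinations

n = 4
beta = [0.5, 1.2, 0.8]                      # hypothetical off-diagonal entries
# Assumed tridiagonal (Jacobi) matrix; the diagonal below is an arbitrary choice.
J = np.diag([2.0, 1.0, 3.0, 1.5]) + np.diag(beta, 1) + np.diag(beta, -1)

lam = np.linalg.eigvalsh(J)                 # J is symmetric tridiagonal

for k in range(1, n + 1):
    # delta_k: elementary symmetric function of degree k of the eigenvalues
    delta_k = sum(np.prod(c) for c in combinations(lam, k))
    # Delta_k: sum of all k x k principal minors of J
    Delta_k = sum(np.linalg.det(J[np.ix_(s, s)]) for s in combinations(range(n), k))
    print(k, round(delta_k, 10), round(Delta_k, 10))   # agree up to rounding
```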

Jacobi Iteration Method: C++ Code

The following is a code example of the Jacobi iteration method written in C++:

```cpp
#include <iostream>
#include <cmath>

// matrix size
#define N 3

// maximum number of iterations and convergence tolerance
#define MAX_ITER 1000
#define EPSILON 1e-6

// Jacobi iteration
void jacobiIteration(double A[N][N], double b[N], double x[N]) {
    double x_new[N];   // new solution vector
    int iter = 0;      // iteration counter

    while (iter < MAX_ITER) {
        for (int i = 0; i < N; i++) {
            double sum = 0.0;
            for (int j = 0; j < N; j++) {
                if (j != i) {
                    sum += A[i][j] * x[j];
                }
            }
            x_new[i] = (b[i] - sum) / A[i][i];
        }

        // measure the change between successive iterates
        double error = 0.0;
        for (int i = 0; i < N; i++) {
            error += std::abs(x_new[i] - x[i]);
        }

        // update the solution vector
        for (int i = 0; i < N; i++) {
            x[i] = x_new[i];
        }

        // stop if the iteration has converged
        if (error < EPSILON) {
            break;
        }
        iter++;
    }

    // print the result
    std::cout << "Jacobi iteration result:" << std::endl;
    for (int i = 0; i < N; i++) {
        std::cout << "x[" << i << "] = " << x[i] << std::endl;
    }
}
```

Jacobi Iteration Method: Python Code

1. What is the Jacobi iteration method? The Jacobi iteration method is an iterative technique for solving linear systems of the form Ax = b, where A is the coefficient matrix and b is the constant vector. Its basic idea is to split A as A = D − L − U, where D is the diagonal part of A, and L and U are the strictly lower and strictly upper triangular parts of A taken with a minus sign (matching the code below). Ax = b then becomes (D − L − U)x = b, i.e. Dx = (L + U)x + b, which yields the iteration formula x^(k+1) = D^(-1)(L + U)x^(k) + D^(-1)b.

2. Python implementation of the Jacobi iteration method. The following code implements the Jacobi iteration method in Python:

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-6, max_iter=1000):
    """Jacobi iteration method to solve Ax=b.

    :param A: coefficient matrix
    :param b: constant vector
    :param x0: initial guess of solution
    :param tol: tolerance of error
    :param max_iter: maximum number of iterations
    :return: solution vector and number of iterations
    """
    n = len(b)
    D = np.diag(np.diag(A))
    L = -np.tril(A, k=-1)
    U = -np.triu(A, k=1)
    x = x0.copy()
    for k in range(max_iter):
        x_new = np.dot(np.linalg.inv(D), b + np.dot(L + U, x))
        error = np.linalg.norm(x_new - x)
        if error < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter
```

3. Code walkthrough
1. import numpy as np: imports the NumPy library under the alias np.
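A minimal usage sketch for the jacobi function above (added here for illustration; the system and the zero initial guess reuse the diagonally dominant 3×3 example from the C-program section, whose exact solution is (1, 2, 3)):

```python
import numpy as np

A = np.array([[10.0, -2.0, -1.0],
              [-2.0, 10.0, -1.0],
              [-1.0, -2.0,  5.0]])
b = np.array([3.0, 15.0, 10.0])
x0 = np.zeros(3)

x, iters = jacobi(A, b, x0)     # uses the jacobi function defined above
print(x, iters)                 # x is approximately [1. 2. 3.]
```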


Jacobi-like Algorithms for Eigenvalue Decomposition of a Real Normal Matrix Using Real Arithmetic

B. B. Zhou and R. P. Brent
Computer Sciences Laboratory, The Australian National University, Canberra, ACT 0200, Australia
{bing,rpb}@.au

In this paper we describe a method for designing efficient Jacobi-like algorithms for eigenvalue decomposition of a real normal matrix. The designed algorithms use only real arithmetic and orthogonal similarity transformations. The theoretical analysis and experimental results show that ultimate quadratic convergence can be achieved for general real normal matrices with distinct eigenvalues.

1 Introduction

A real matrix A is said to be normal if it satisfies the equation $AA^{T}=A^{T}A$, where $A^{T}$ is the transpose of matrix A. This kind of matrix has the property that it can be reduced to a diagonal form through unitary similarity transformations $QA\bar{Q}^{T}=D$, where Q is unitary and D is diagonal. In this paper we consider how to design a scalable Jacobi-like algorithm for efficient parallel computation of the eigenvalue decomposition of a real normal matrix over the real field. Real normal matrices are generalisations of real symmetric matrices. A real symmetric matrix is normal, but a real normal matrix is not necessarily symmetric. We shall focus our attention on the unsymmetric case although the method to be described applies to both cases.

The standard sequential method for eigenvalue decomposition of this kind of matrix is the QR algorithm. When massively parallel computation is considered, however, the parallel version of the QR algorithm for solving unsymmetric eigenvalue problems may not be very efficient because the algorithm is sequential in nature and not scalable.

The Jacobi method, though originally designed for symmetric eigenvalue problems, can be extended to solve eigenvalue problems for unsymmetric normal matrices [3, 7]. However, one problem is that we have to use complex arithmetic even for real-valued normal matrices. Complex operations are expensive and should be avoided if possible. A quaternion-Jacobi method was recently introduced [4]. In this method a 4 × 4 symmetric matrix can be reduced to a 2 × 2 block diagonal form using one orthogonal similarity transformation. This method can also be extended to compute eigenvalues of a general normal matrix. However, the problems are that the original matrix has to be divided into a sum of a symmetric matrix and a skew-symmetric matrix, which is not an orthogonal operation, and that the algorithm cannot be used to solve the eigenvalue problem of near-normal matrices. Another parallel Jacobi-like algorithm, named the RTZ (which stands for Real Two-Zero) algorithm, was also proposed recently [5]. This method uses real arithmetic and orthogonal similarity transformations. It is claimed that quadratic convergence can be obtained when computing eigenvalues of a real near-normal matrix with real distinct eigenvalues. However, a serious problem with this method is that the process may not converge if the matrix has complex eigenvalues.

Since the RTZ algorithm uses a similar idea to our method, an analysis of the RTZ algorithm is presented in Section 2. Our method is described in Section 3. In that section a theoretical analysis of convergence is also presented. Some experimental results are given in Section 4. Section 5 is the conclusion.

2 An Analysis of the RTZ Algorithm

We now show that when applying the RTZ algorithm, we can only obtain at best a linear convergence rate if the matrix has complex eigenvalues.

The basic idea for computing the eigenvalue decomposition of a 4 × 4 matrix using the RTZ algorithm is as follows. The first leading diagonal element a11 of A, together with the elements of A in rows and columns 1, 3 and 4, is chosen to form the 3 × 3 matrix

\[
A_{1}=\begin{pmatrix}a_{11}&a_{13}&a_{14}\\ a_{31}&a_{33}&a_{34}\\ a_{41}&a_{43}&a_{44}\end{pmatrix}.
\]

It is known that any real 3 × 3 matrix has at least one real eigenvalue. An eigenvector associated with a real eigenvalue of the above matrix can be obtained and used to generate a Householder matrix which is then applied to update the matrix A so that two off-diagonal elements a31 and a41 are annihilated. After that the second leading diagonal element …

… is used to show how the convergence rate may be affected when applying the RTZ algorithm.
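The step just described — pick a real eigenvalue of the 3 × 3 matrix A1, take an associated eigenvector, and build a Householder matrix that annihilates a31 and a41 — can be sketched numerically as follows. This NumPy sketch is our own illustration of that description, not code from the paper; in particular, the way the 3 × 3 reflection is embedded back into a 4 × 4 orthogonal transformation is an assumption.

```python
import numpy as np

def rtz_like_step(A):
    """Zero out A[2,0] and A[3,0] (a31 and a41 in the paper's 1-based
    notation) of a real 4x4 matrix A by an orthogonal similarity
    transformation built from a Householder reflection."""
    idx = [0, 2, 3]                       # rows/columns 1, 3 and 4
    A1 = A[np.ix_(idx, idx)]              # the 3x3 matrix A1

    # Any real 3x3 matrix has at least one real eigenvalue.
    w, V = np.linalg.eig(A1)
    k = int(np.argmin(np.abs(w.imag)))    # pick the (numerically) real one
    v = np.real(V[:, k])
    v /= np.linalg.norm(v)

    # Householder reflection H with H v = e1.
    e1 = np.array([1.0, 0.0, 0.0])
    u = v - e1
    if np.linalg.norm(u) < 1e-14:
        H = np.eye(3)                     # v is already e1
    else:
        u /= np.linalg.norm(u)
        H = np.eye(3) - 2.0 * np.outer(u, u)

    # Embed H into a 4x4 orthogonal matrix acting on rows/columns 1, 3, 4
    # and apply the similarity transformation.
    Q = np.eye(4)
    Q[np.ix_(idx, idx)] = H
    return Q @ A @ Q.T                    # entries (3,1) and (4,1) are now ~0

A = np.random.rand(4, 4)
print(np.round(rtz_like_step(A), 6))
```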