Numerical Solution of Partial Differential Equations (lecture slides)
From (1) we obtain the forward difference:

$$u'(t_i) = \frac{u(t_{i+1}) - u(t_i)}{h} + O(h).$$

From (2) we obtain the backward difference:

$$u'(t_i) = \frac{u(t_i) - u(t_{i-1})}{h} + O(h).$$

From (1) − (2) we obtain the central difference:

$$u'(t_i) = \frac{u(t_{i+1}) - u(t_{i-1})}{2h} + O(h^2).$$

From (1) + (2) we obtain the central second difference:

$$u''(t_i) = \frac{u(t_{i+1}) - 2u(t_i) + u(t_{i-1})}{h^2} + O(h^2).$$
For the classical initial value problem

$$\frac{du}{dt} = f(t, u), \quad t \in (0, T), \qquad u(0) = u_0,$$

where f satisfies a Lipschitz condition.
Numerical Solution of Ordinary Differential Equations

The relationship between ordinary and partial differential equations in atmospheric science:

1. The atmospheric planetary boundary layer (the thin layer of air near the surface, about 1–1.5 km deep, characterized by turbulent motion) and the derivation of the Ekman spiral by V. W. Ekman (Sweden);

2. In 1963, while studying the instability of thermal convection, the American meteorologist Lorenz used a highly truncated spectral method to derive, from the closed equations of a Boussinesq fluid, a completely deterministic system of three ordinary differential equations: the famous Lorenz system.
A Numerical Method for Solution of Ordinary Differential Equations of Fractional Order
In the general case the Riemann–Liouville fractional derivatives do not commute:

$$({}_aD_t^{\beta}\,({}_aD_t^{\alpha}y))(t) \neq ({}_aD_t^{\alpha+\beta}y)(t) \neq ({}_aD_t^{\alpha}\,({}_aD_t^{\beta}y))(t). \qquad (7)$$
Extending our considerations, the integer-order operator commutes with the fractional operator:

$$({}_aD_t^{\beta}\,({}_aI_t^{\alpha}y))(t) = ({}_aD_t^{\beta-\alpha}y)(t), \qquad (9)$$
but they do not commute the other way around. We turn our attention to the composition rule in the two following ways. First of all, we found in the literature [10,11] that authors neglect the general property of fractional derivatives given by eqn. (7). Regarding the solution of fractional differential equations, we will apply the above properties in the next sections.
Econometric and Statistical Computing Using Ox
FRANCISCO CRIBARI-NETO¹ and SPYROS G. ZARKOS²

¹ Departamento de Estatística, CCEN, Universidade Federal de Pernambuco, Recife/PE, 50740-540, Brazil. E-mail: cribari@npd.ufpe.br
² National Bank of Greece, 86 Eolou str., Athens 10232, Greece. E-mail: s.zarkos@primeminister.gr

Abstract. This paper reviews the matrix programming language Ox from the viewpoint of an econometrician/statistician. We focus on scientific programming using Ox and discuss examples of possible interest to econometricians and statisticians, such as random number generation, maximum likelihood estimation, and Monte Carlo simulation. Ox is a remarkable matrix programming language which is well suited to research and teaching in econometrics and statistics.

Key words: C programming language, graphics, matrix programming language, maximum likelihood estimation, Monte Carlo simulation, Ox

"One of the cultural barriers that separates computer scientists from regular scientists and engineers is a differing point of view on whether a 30% or 50% loss of speed is worth worrying about. In many real-time state-of-the-art scientific applications, such a loss is catastrophic. The practical scientist is trying to solve tomorrow's problem with yesterday's computer; the computer scientist, we think, often has it the other way." (Press et al., 1992, p. 25)

1. Introduction

Applied statisticians, econometricians and economists often need to write programs that implement estimation and testing procedures. With computers as powerful and affordable as they are nowadays, they tend to do that in programming environments rather than in low-level programming languages. The former (e.g., GAUSS, MATLAB, R, S-PLUS) make programming accessible to the vast majority of researchers and, in many cases, can be combined with the latter (e.g., C, Fortran) to achieve additional gains in speed. The existence of pre-packaged routines in statistical software that is otherwise best suited to perform data analysis (such as in S-PLUS) does not make the need
for "statistical computing" any less urgent. Indeed, many newly developed techniques are not rapidly implemented into statistical software. If one wishes to use such techniques, he/she would have to program them. Additionally, several techniques are very computer-intensive and require efficient programming environments/languages (e.g., bootstrap within a Monte Carlo simulation, double bootstrap, etc.). It would be nearly impossible to perform such computer-intensive tasks with traditional statistical software. Finally, programming forces one to think harder about the problem at hand, and about the estimation and testing methods that he/she will choose to use. Of course, the most convincing argument may be the following quote from the late John Tukey: "In a world in which the price of calculation continues to decrease rapidly, but the price of theorem proving continues to hold steady or increase, elementary economics indicates that we ought to spend a larger fraction of our time on calculation."

The focus of our paper is on the use of Ox for "econometric computing". That is, we discuss features of the Ox language that may be of interest to statisticians and econometricians, and exemplify their use through examples. Readers interested in reviews of Ox, including the language structure, its syntax, and its advantages and disadvantages, are referred to Cribari-Neto (1997), Keng and Orzag (1997), Kusters and Steffen (1996) and Podivinsky (1999).

2. A Brief Overview of Ox

Ox is a matrix programming language with object-oriented support developed by Jurgen Doornik, a Dutch graduate student (at the time) at Nuffield College, Oxford. The development of Ox started in April 1994. Doornik's primary goal was to develop a matrix programming language for the simulations he wished to perform for his doctoral dissertation.
The very first preliminary version of Ox dates back to November 1994. In the summer of 1995, two other econometricians at Nuffield College started using Ox for their research: Neil Shephard and Richard Spady. From that point on, the development of Ox became a serious affair. The current Ox version is numbered 3.00. Ox binaries are available for Windows and several flavors of UNIX (including Linux) and can be downloaded from /Users/Doornik/, which is the main Ox web page. All versions are free for educational purposes and academic research, with the exception of the "Professional Windows version". This commercial version comes with a nice interface for graphics known as GiveWin (available for purchase from Timberlake Consultants).

The free Ox versions can be launched from the command line in a console/terminal window, which explains why they are also known as "console versions". Doornik also distributes freely a powerful text editor for Windows: OxEdit (see also the OxEdit web page). It can be used as a front-end not only to Ox (the console version) but also to other programs and languages, such as C, C++, TeX, LaTeX, etc.

The Ox syntax is very similar to that of C, C++ and Java. In fact, its similarity to C (at least as far as syntax goes) is one of its major advantages.² One characteristic similarity with C/C++ is in the indexing, which starts at zero, and not at one. This means that the first element of a matrix, say A, is accessed as A[0][0] instead of as A[1][1]. A key difference between Ox and languages such as C, C++ and Java is that matrix is a basic type in Ox.
Also, when programming in Ox one needs to declare the variables that will be used in the program (as is the case in C/C++), but unlike in C/C++, one does not have to specify the type of the variables that are declared. Ox's most impressive feature is that it comes with a comprehensive mathematical and statistical function library. A number of useful functions and methods are implemented into the language, which makes it very useful for scientific programming.

¹ A detailed comparison involving GAUSS, Macsyma, Maple, Mathematica, MATLAB, MuPAD, O-Matrix, Ox, R-Lab, Scilab, and S-PLUS can be found at http://www.scientificweb.de/ncrunch/ncrunch.pdf ("Comparison of mathematical programs for data analysis" by Stefan Steinhaus). Ox is the winner when it comes to speed.
² Other important advantages of Ox are the fact that it is fast, free, can be easily linked to C, Fortran, etc., and can read and write data in several different formats (ASCII, Gauss, Excel, Stata, Lotus, PcGive, etc.).

Ox comes with a comprehensive set of help files in HTML form. The documentation of the language can also be found in Doornik (2001). A good introduction to Ox is Doornik, Draisma and Ooms (1998).

3. A Few Simple Illustrations

Our first example is a very simple one, and intends to show the similarity between the Ox and C syntaxes. We wish to develop a program that produces a small table converting temperatures in Fahrenheit to Celsius (from 0 F to 300 F in steps of 20 F). The source of this example is Kernighan and Ritchie (1988). The C code can be written as follows.

    /****************************************************************
     * PROGRAM: celsius.c
     *
     * USAGE: To generate a conversion table of temperatures (from
     *        Fahrenheit to Celsius). Based on an example in the
     *        Kernighan & Ritchie book.
     ****************************************************************/

    #include <stdio.h>

    int main(void)
    {
        int fahr;

        printf("\nConversion table (F to C)\n\n");
        printf("\t%3s%5s\n", "F", "C");

        /* Loop over temperatures */
        for (fahr = 0; fahr <= 300; fahr += 20) {
            printf("\t%3d%6.1f\n", fahr, 5.0*(fahr-32)/9.0);
        }
        printf("\n");
        return 0;
    }

The output produced by the compiled C code using the gcc compiler (Stallman, 1999) under the Linux operating system (MacKinnon, 1999) is:

    [cribari@edgeworth c]$ gcc -O2 -o celsius celsius.c
    [cribari@edgeworth c]$ ./celsius

    Conversion table (F to C)

          F     C
          0 -17.8
         20  -6.7
         40   4.4
         60  15.6
         80  26.7
        100  37.8
        120  48.9
        140  60.0
        160  71.1
        180  82.2
        200  93.3
        220 104.4
        240 115.6
        260 126.7
        280 137.8
        300 148.9

The next step is to write the same program in Ox code. The Ox transcription of the celsius.c program follows:

    /****************************************************************
     * PROGRAM: celsius.ox
     *
     * USAGE: To generate a conversion table of temperatures (from
     *        Fahrenheit to Celsius). Based on an example in the
     *        Kernighan & Ritchie book.
     ****************************************************************/

    #include <oxstd.h>

    main()
    {
        decl fahr;

        print("\nConversion table (F to C)\n\n");
        print("\t  F     C\n");

        // Loop over temperatures
        for (fahr = 0; fahr <= 300; fahr += 20)
        {
            print("\t", "%3d", fahr);
            print("", "%6.1f", 5.0*(fahr-32)/9.0, "\n");
        }
        print("\n");
    }

The Ox output is:

    [cribari@edgeworth programs]$ oxl celsius
    Ox version 3.00 (Linux) (C) J.A. Doornik, 1994-2001

    Conversion table (F to C)

          F     C
          0 -17.8
         20  -6.7
         40   4.4
         60  15.6
         80  26.7
        100  37.8
        120  48.9
        140  60.0
        160  71.1
        180  82.2
        200  93.3
        220 104.4
        240 115.6
        260 126.7
        280 137.8
        300 148.9

The two programs above show that the Ox and C syntaxes are indeed very similar. Note that Ox accepts C style comments (/* ... */), and also C++ style comments to the end of the line (//).³ We also note that, unlike C, Ox accepts nested comments. The similarity between the Ox and C syntaxes is a major advantage of Ox over other matrix languages. Kendrick and Amman (1999) provide an overview of programming languages in economics. In the introduction of their paper, they give the following advice to users who are starting to program: "Begin with one of the high-level or modeling languages. (...) Then work downward in the chain and learn either Fortran, C, C++, or Java." If a user then starts with Ox and "works downward" to C or C++, the transition will be smoother than if
he/she starts the chain with other high-level languages.

As a second illustration of the use of Ox in econometrics and statistics, we develop a simple program that first simulates a large number of coin tosses, and then counts the frequency (percentage) of tails. The code, which is an Ox translation, with a smaller total number of runs, of the C code given in Cribari-Neto (1999), thus illustrates Kolmogorov's Law of Large Numbers. We begin by writing a loop-based version of the coin tossing experiment.

³ Ox also borrows from Java; the println function, for instance, comes from the Java programming language.

    /*******************************************************************
     * PROGRAM: coin_loop.ox
     *
     * USE: Simulates a large number of coin tosses and prints
     *      the percentage of tails.
     *
     * PURPOSE: The program illustrates the first version of the
     *          law of large numbers which dates back to James
     *          Bernoulli.
     ******************************************************************/

    #include <oxstd.h>

    /* maximum number of coin tosses */
    const decl COIN_MAX = 1000000;

    main()
    {
        decl j, dExecTime, temp, result, tail, s;

        // Start the clock (to time the execution of the program).
        dExecTime = timer();

        // Choose the random number generator.
        ranseed("GM");

        // Main loop:
        for (j = 10; j <= COIN_MAX; j *= 10)
        {
            tail = 0;
            for (s = 0; s < j; s++)
            {
                temp = ranu(1, 1);
                tail = temp > 0.5 ? tail : tail + 1;
            }
            result = 100.0 * tail / j;
            print("Percentage of tails from ", j, " tosses: ",
                  "%8.2f", result, "\n");
        }
        print("\nEXECUTION TIME: ", timespan(dExecTime), "\n");
    }

The instruction tail = temp > 0.5 ? tail : tail + 1; does exactly what it does in C: it sets the variable tail equal to itself if the stated condition is true (temp > 0.5) and to tail + 1 otherwise.

We now vectorize the above code for speed. The motivation is obvious: vectorization usually leads to efficiency gains, unless of course one runs into memory problems. It is noteworthy that one of the main differences between a matrix programming language and a low-level language, such as C and C++, is that programs should exploit vector and matrix operations when written and executed in a matrix-oriented language, such as Ox. The vectorized code for the example at hand is:

    /*******************************************************************
     * PROGRAM: coin_vec.ox
     *
     * USE: Simulates a large number of coin tosses and prints
     *      the percentage of tails.
     *
     * PURPOSE: The program illustrates the first version of the
     *          law of large numbers which dates back to James
     *          Bernoulli.
     ******************************************************************/

    #include <oxstd.h>

    /* maximum number of coin tosses */
    const decl COIN_MAX = 1000000;

    main()
    {
        decl j, dExecTime, temp, tail;

        // Start the clock (to time the execution of the program).
        dExecTime = timer();

        // Choose the random number generator.
        ranseed("GM");

        // Coin tossing:
        for (j = 10; j <= COIN_MAX; j *= 10)
        {
            temp = ranu(1, j);
            tail = sumr(temp .< 0.5) * (100.0 / j);
            print("Percentage of tails from ", j, " tosses: ",
                  "%8.2f", double(tail), "\n");
        }
        print("\nEXECUTION TIME: ", timespan(dExecTime), "\n");
    }

The output of the loop-based program is:

    [cribari@edgeworth programs]$ oxl coin_loop
    Ox version 3.00 (Linux) (C) J.A. Doornik, 1994-2001
    Percentage of tails from 10 tosses:        40.00
    Percentage of tails from 100 tosses:       53.00
    Percentage of tails from 1000 tosses:      49.10
    Percentage of tails from 10000 tosses:     49.69
    Percentage of tails from 100000 tosses:    49.83
    Percentage of tails from 1000000 tosses:   49.99

    EXECUTION TIME: 2.41

whereas the vectorized code generates the following output:

    [cribari@edgeworth programs]$ oxl coin_vec
    Ox version 3.00 (Linux) (C) J.A. Doornik, 1994-2001
    Percentage of tails from 10 tosses:        40.00
    Percentage of tails from 100 tosses:       53.00
    Percentage of tails from 1000 tosses:      49.10
    Percentage of tails from 10000 tosses:     49.69
    Percentage of tails from 100000 tosses:    49.83
    Percentage of tails from 1000000 tosses:   49.99

    EXECUTION TIME: 0.23

Note that the empirical frequency of tails approaches 1/2, the population mean, as predicted by the Law of Large Numbers. As far as efficiency goes, we see that vectorization leads to a sizeable improvement. The loop-based program yields an execution time which is over 10 times greater than that of its
vectorized version, on a DELL Pentium III 1 GHz computer with 512 MB RAM running Linux.⁴

Some languages, like C, operate faster on rows than on columns. The same logic applies to Ox. To illustrate the claim, we modify the vectorized code so that the random draws are stored in a column vector (they were previously stored in a row vector). To that end, one only needs to change two lines of code:

    for (j = 10; j <= COIN_MAX; j *= 10)
    {
        temp = ranu(j, 1);                       // 1st change
        tail = sumc(temp .< 0.5) * (100.0 / j);  // 2nd change
        print("Percentage of tails from ", j, " tosses: ",
              "%8.2f", double(tail), "\n");
    }

This new vectorized code now runs in 0.35 second. That is, we see a speed penalty of over 50% when we transpose the code so that we work with a large column vector instead of working with a large row vector.

4. Econometric Applications

Maximum likelihood estimates oftentimes need to be computed using a nonlinear optimization scheme. In order to illustrate how that can be done using Ox, we consider the maximum likelihood estimation of the number of degrees of freedom of a Student t distribution. Maximization is performed using a quasi-Newton method (known as the "BFGS" method) with numerical gradient, i.e., without specifying the score function. (Note that this estimator is substantially biased in small samples.) It is noteworthy that Ox has routines for other optimization methods as well, such as the Newton-Raphson and the BHHH methods.
An advantage of the BFGS method is that it allows users to maximize likelihoods without having to specify a score function. See Press et al. (1992, Chapter 10) for details on the BFGS and other nonlinear optimization methods. See also Mittelhammer, Judge and Miller (2000, §8.13), who on page 199 write that "[t]he BFGS algorithm is generally regarded as the best performing method." The example below uses a random sample of size 50, the true value of the parameter is 3, and the initial value of the optimization scheme is 2. (We have neglected a constant in the log-likelihood function.)

⁴ The operating system was Mandrake Linux 8.0 running on kernel 2.4.3.

    /**************************************************************
     * PROGRAM: t.ox
     *
     * USAGE: Maximum likelihood estimation of the number of
     *        degrees of freedom of a Student t distribution.
     *************************************************************/

    #include <oxstd.h>
    #include <oxprob.h>
    #import <maximize>

    const decl N = 50;
    static decl s_vx;

    fLogLik(const vP, const adFunc, const avScore, const amHess)
    {
        decl vone = ones(1, N);
        decl nu = vP[0];

        adFunc[0] = double(N*loggamma((nu+1)/2) - (N/2)*log(nu)
                    - N*loggamma(nu/2)
                    - ((nu+1)/2)*(vone*log(1 + (s_vx.^2)/nu)));

        if (isnan(adFunc[0]) || isdotinf(adFunc[0]))
            return 0;
        else
            return 1;   // 1 indicates success
    }

    main()
    {
        decl vp, dfunc, dnu, ir;

        ranseed("GM");
        vp = 2.0;
        dnu = 3.0;
        s_vx = rant(N, 1, 3);

        ir = MaxBFGS(fLogLik, &vp, &dfunc, 0, TRUE);

        print("\nCONVERGENCE: ", MaxConvergenceMsg(ir));
        print("\nMaximized log-likelihood: ", "%7.3f", dfunc);
        print("\nTrue value of nu: ", "%6.3f", dnu);
        print("\nML estimate of nu: ", "%6.3f", double(vp));
        print("\nSample size: ", "%6d", N);
        print("\n");
    }

Here is the Ox output:

    [cribari@edgeworth programs]$ oxl t
    Ox version 3.00 (Linux) (C) J.A. Doornik, 1994-2001

    CONVERGENCE: Strong convergence
    Maximized log-likelihood: -72.813
    True value of nu:  3.000
    ML estimate of nu:  1.566
    Sample size:    50

The maximum likelihood estimate of ν, whose true value is 3, is ν̂ = 1.566. This example shows that nonlinear maximization of functions can be done with ease using Ox. Of course, one can estimate more complex models in a similar fashion. For example, the parameters of a nonlinear regression model can be estimated by setting up a log-likelihood function, and maximizing it with a MaxBFGS call. It is important to note, however, that Ox does not come with routines for performing constrained maximization. The inclusion of such functions in Ox would be a great addition to the language.

A number of people have developed add-on packages for Ox. These handle dynamic panel data (DPD), ARFIMA models, conditionally heteroskedastic models, stochastic volatility models, and state space forms. There is, moreover, Ox code for quantile regressions and, in particular, L1 (i.e., least absolute deviations) regressions. The code corresponds to the algorithm described in Portnoy and Koenker (1997) and is available at Roger Koenker's web page (/roger/research/rqn/rqn.html).

We consider, next, the G@RCH 2.0 package recently developed by Sébastien Laurent and Jean-Philippe Peters, which is dedicated to the estimation and forecasting of ARCH and GARCH models. The G@RCH add-on package comes in two versions, namely: (i) the "Full Version", which requires a registered copy of Ox Professional 3.00, since it is launched from OxPack and makes use of the GiveWin interface, and (ii) the "Light Version", which only requires the free ("console") version of Ox. It relies on Ox's object-oriented programming capabilities, being a derived class of Ox's Modelbase type of class. The package is available for download at http://www.egss.ulg.ac.be/garch.

We borrow the example program (GarchEstim.ox) in order to illustrate the use of the GARCH code (as with everything else, in the context of the console, i.e. free, version of Ox). The GARCH object (which is created with the source code provided by this add-on package) allows for the estimation of a large number of univariate ARCH-type models (e.g., ARCH, GARCH, IGARCH, FIGARCH, GJR, EGARCH, APARCH, FIEGARCH, FIAPARCH) under Gaussian, Student-t, skewed Student and generalized error
distributions. Forecasts (one-step-ahead density forecasts) of the conditional mean and variance are also available, as well as several misspecification tests and graphics commands.

    #include <oxstd.h>
    #import <packages/garch/garch>

    main()
    {
        decl garchobj;
        garchobj = new Garch();

        //*** DATA ***//
        garchobj.Load("Data/demsel.in7");
        garchobj.Select(Y_VAR, {"DEM", 0, 0});
        garchobj.SetSelSample(-1, 1, -1, 1);

        //*** SPECIFICATIONS ***//
        garchobj.CSTS(1,1);         // cst in Mean (1 or 0), cst in Variance (1 or 0)
        garchobj.DISTRI(0);         // 0 for Gauss, 1 for Student, 2 for GED, 3 for Skewed-Student
        garchobj.ARMA(0,0);         // AR order (p), MA order (q)
        garchobj.ARFIMA(0);         // 1 if Arfima wanted, 0 otherwise
        garchobj.GARCH(1,1);        // p order, q order
        garchobj.FIGARCH(0,0,1000); // Arg.1: 1 if Fractional Integration wanted
                                    // Arg.2: 0 -> BBM, 1 -> Chung
                                    // Arg.3: if BBM, truncation order
        garchobj.IGARCH(0);         // 1 if IGARCH wanted, 0 otherwise
        garchobj.EGARCH(0);         // 1 if EGARCH wanted, 0 otherwise
        garchobj.GJR(0);            // 1 if GJR wanted, 0 otherwise
        garchobj.APARCH(0);         // 1 if APARCH wanted, 0 otherwise

        //*** TESTS & FORECASTS ***//
        garchobj.BOXPIERCE(<5;10;20>); // Lags for the Box-Pierce Q-statistics
        garchobj.ARCHLAGS(<2;5;10>);   // Lags for Engle's LM ARCH test
        garchobj.NYBLOM(1);            // 1 to compute the Nyblom stability test, 0 otherwise
        garchobj.PEARSON(<40;50;60>);  // Cells (<40;50;60>) for the adjusted Pearson
                                       // chi-square goodness-of-fit test, 0 if not computed
        garchobj.FORECAST(0,100);      // Arg.1: 1 to launch the forecasting procedure, 0 otherwise
                                       // Arg.2: number of one-step-ahead forecasts

        //*** OUTPUT ***//
        garchobj.MLE(1);        // 0: both, 1: MLE, 2: QMLE
        garchobj.COVAR(0);      // if 1, prints variance-covariance matrix of the parameters
        garchobj.ITER(0);       // interval of iterations between printed intermediary results
                                // (if no intermediary results wanted, enter '0')
        garchobj.TESTSONLY(0);  // if 1, runs tests for the raw Y series, prior to any estimation
        garchobj.GRAPHS(0);     // if 1, prints graphics of the estimations (only when using GiveWin)
        garchobj.FOREGRAPHS(0); // if 1, prints graphics of the forecasts (only when using GiveWin)

        //*** PARAMETERS ***//
        garchobj.BOUNDS(0);     // 1 if bounded parameters wanted, 0 otherwise
        garchobj.DoEstimation(<>);
        garchobj.STORE(0,0,0,0,0,"01",0);
            // Arg.1,2,3,4,5: if 1 -> stored (Res - SqRes - CondV - MeanFor - VarFor)
            // Arg.6: suffix; the name of the saved series will be "Res_ARG6"
            //        (or "MeanFor_ARG6", ...)
            // Arg.7: if 0, saves as an Excel spreadsheet (.xls);
            //        if 1, saves as a GiveWin dataset (.in7)

        delete garchobj;
    }

We have run the above code to obtain the MLE and QMLE results of an ARMA(0,0) model in the mean equation and a GARCH(1,1) model in the variance equation, assuming Gaussian distributed errors. Some portmanteau tests, such as the Box-Pierce Q-statistic and the LM ARCH test, the Jarque-Bera normality test, etc., were also calculated for the daily observations on the Dow Jones Industrial Average (Jan. 1982 - Dec. 1999, a total of 4,551 observations). The output follows.

    Ox version 3.00 (Linux) (C) J.A. Doornik, 1994-2001
    Copyright for this package: S. Laurent and J.P. Peters, 2000, 2001.
    G@RCH package version 2.00, object created on 14-08-2001

    ---- Database information ----
    Sample: 1 - 4313 (4313 observations)
    Frequency: 1
    Variables: 4

    Variable   #obs  #miss         min       mean     max  std.dev
    DEM        4313      0     -6.3153 -0.0022999  3.9074  0.75333
    PREC       4313      0           0    0.42592       5  0.82935
    SUCC       4313      0           0     0.4185       5  0.81568
    OBSVAR     4313      0  3.3897e-06    0.56753  39.853   1.3569

    **********************
    **  SPECIFICATIONS  **
    **********************
    Mean Equation: ARMA(0,0) model.
    No regressor in the mean
    Variance Equation: GARCH(1,1) model.
    No regressor in the variance
    The distribution is a Gauss distribution.

    Strong convergence using numerical derivatives
    Log-likelihood = -4651.57
    Please wait: Computing the Std Errors...

    Maximum Likelihood Estimation
                  Coefficient  Std.Error  t-value  t-prob
    Cst(M)           0.003186   0.010019   0.3180  0.7505
    Cst(V)           0.017873   0.003216    5.558  0.0000
    GARCH(Beta1)     0.870215   0.011686    74.46  0.0000
    ARCH(Alpha1)     0.102847   0.009642    10.67  0.0000

    Estimated Parameters Vector: 0.003186; 0.017873; 0.870215; 0.102847
    No. Observations: 4313
    No. Parameters: 4

    *************
    **  TESTS  **
    *************
                     Statistic   t-Test      P-Value
    Skewness          -0.20031   5.3723   7.7733e-08
    Excess Kurtosis    1.8684    25.061  1.3133e-138
    Jarque-Bera        656.19    656.19  3.2440e-143
    ---------------
    Information Criterium (minimize)
    Akaike       2.158856
    Shibata      2.158855
    Schwarz      2.164763
    Hannan-Quinn 2.160942
    ---------------
    BOX-PIERCE:
                                               Value
    Mean of standardized residuals          -0.00065
    Mean of squared standardized residuals   0.99808

    H0: No serial correlation ==> Accept H0 when prob. is High [Q < Chisq(lag)]
    Box-Pierce Q-statistics on residuals
    Q(5)  = 17.7914 [0.00321948]
    Q(10) = 26.4749 [0.00315138]
    Q(20) = 44.9781 [0.00111103]
    Box-Pierce Q-statistics on squared residuals
    --> P-values adjusted by 2 degree(s) of freedom
    Q(5)  =  8.01956 [0.0456093]
    Q(10) = 12.4119  [0.133749]
    Q(20) = 34.563   [0.0107229]
    --------------
    ARCH 1-2 test:  F(2,4306)  = 2.7378 [0.0648]
    ARCH 1-5 test:  F(5,4300)  = 1.5635 [0.1668]
    ARCH 1-10 test: F(10,4290) = 1.2342 [0.2632]
    --------------
    Diagnostic test based on the news impact curve (EGARCH vs. GARCH)
                                        Test     Prob
    Sign Bias t-Test                 1.17598  0.23960
    Negative Size Bias t-Test        1.82856  0.06747
    Positive Size Bias t-Test        0.97542  0.32935
    Joint Test for the Three Effects 4.46882  0.21509
    ---------------
    Joint Statistic of the Nyblom test of stability: 1.77507
    Individual Nyblom Statistics:
    Cst(M)        0.43501
    Cst(V)        0.22234
    GARCH(Beta1)  0.10147
    ARCH(Alpha1)  0.10050
    Rem: Asymptotic 1% critical value for individual statistics = 0.75.
         Asymptotic 5% critical value for individual statistics = 0.47.
    ---------------
    Adjusted Pearson Chi-square Goodness-of-fit test
    Lags  Statistic  P-Value(lag-1)  P-Value(lag-k-1)
    40      78.0689        0.000204          0.000040
    50      89.0519        0.000409          0.000100
    60     103.2532        0.000325          0.000089
    Rem.: k = # estimated parameters
    ---------------
    Elapsed Time: 4.67 seconds (or 0.0778333 minutes).

The stochastic volatility package (SvPack), written by Neil Shephard, is essentially a dynamic link library for Ox of C code that deals with the implementation of likelihood inference in volatility models. The fact that it is written in C guarantees optimal speed, whereas the linking to Ox definitely improves usability. It requires the Ox state space package (SsfPack), which provides Kalman filtering, smoothing and simulation smoothing algorithms of
Gaussian multivariate state space forms (see Koopman, Shephard and Doornik, 1999; Ooms, 1999), as well as ARMS (Adaptive Rejection Metropolis Sampling), an Ox front-end for C code for adaptive rejection sampling algorithms (i.e., routines for efficient sampling from complicated univariate densities) developed and documented by Michael Pitt (based on C code by Wally Gilks).

The Arfima package is a set of Ox functions that create a class (an ARFIMA object) for the estimation and testing of AR(F)IMA models (Beran, 1994). The models can be estimated via exact maximum likelihood, modified profile likelihood and nonlinear least squares. ArfimaSim is an additional simulation class included in the Arfima package that provides the means for Monte Carlo experiments based on the Arfima class.

The Dynamic Panel Data package, DPD, like the Arfima and G@RCH packages, is a nice example of object-oriented Ox programming. They are derived classes written in Ox. DPD, which is entirely written in Ox, implements dynamic panel data models, as well as some static ones, and can handle unbalanced panels. Monte Carlo experimentation is possible with the simulation class DPSSim, included in this Ox add-on package.

5. Graphics

Ox has a number of commands that help create publication-quality graphics. This is, however, one of the areas where more progress is expected. The graphics capabilities of the console version of Ox are not comparable to those of, say, GAUSS, MATLAB, R or S-PLUS. It is important to note, however, that the professional version of Ox comes with an impressive interface for graphics: GiveWin. It allows users, for example, to modify a graph with a few clicks of the mouse. With GiveWin, it is possible to edit all graphs on the screen, manipulate areas, add Greek letters, add labels, change fonts, etc. Therefore, users who intend to make extensive use of the plotting capabilities of the language to produce publication-quality graphics should consider using the professional version of Ox.⁵

An alternative strategy would be to use Ox for programming, save the results to a file, read the results file into R, which is also free, and then produce publication-quality plots from there.⁶ It is also possible to use GnuDraw, an Ox package written by Charles Bos (http://www2.tinbergen.nl/~cbos/). GnuDraw allows users to create gnuplot graphics from Ox, extending the possibilities offered by the existing OxDraw

⁵ The newest, just released, version 3.00 of Ox has improved graphics capabilities. For instance, it now has built-in functions for producing 3D plots.
⁶ For details on R, see .
Sample Numerical Analysis Paper

Title: An Overview of Numerical Analysis

Introduction: Numerical analysis is a field of mathematics that deals with the development and application of algorithms to solve mathematical problems. In this paper, we will provide an overview of numerical analysis, including its history, important concepts, and applications in various fields.

History of Numerical Analysis:

Important Concepts in Numerical Analysis:

2. Error Analysis: Since numerical methods involve approximation, it is essential to quantify and analyze the errors associated with these approximations. Error analysis provides insight into the accuracy and efficiency of numerical algorithms. Different error measures, such as absolute error, relative error, and truncation error, are used to evaluate the quality of approximate solutions.

3. Numerical Algorithms: Numerical analysis relies on the development and implementation of efficient algorithms to solve mathematical problems. Iterative methods, such as Newton's method for finding roots of equations and the Jacobi method for solving linear systems, are widely used in numerical analysis.

Applications of Numerical Analysis:

3. Finance: In finance, numerical analysis is used to model and solve problems related to option pricing, portfolio optimization, risk management, and financial derivatives pricing. The Black-Scholes-Merton model, for instance, relies heavily on numerical methods for pricing options.

Conclusion:
New Newton's Method with Third-order Convergence for Solving Nonlinear Equations
Osama Yusuf Ababneh

Abstract—In recent years, variants of Newton's method with cubic convergence have become popular iterative methods for finding approximate solutions to the roots of non-linear equations. These methods enjoy cubic convergence at simple roots and do not require the evaluation of second-order derivatives. In this paper, we present a new Newton's method, based on the contra-harmonic mean, that is cubically convergent. Numerical examples show that the new method can compete with the classical Newton's method.

Keywords—Third-order convergence, Non-linear equations, Root-finding, Iterative method.

I. INTRODUCTION

Solving non-linear equations is one of the most important problems in numerical analysis. In this paper, we consider iterative methods to find a simple root of a non-linear equation f(x) = 0, where f: D ⊂ R → R for an open interval D is a scalar function. The classical Newton method for a single non-linear equation is written as

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}. \qquad (1)$$

This is an important and basic method [8], which converges quadratically. Recently, some modified Newton methods with cubic convergence have been developed in [1], [2], [3], [4], [5], [6] and [7]. Here, we will obtain a new modification of Newton's method. Analysis of convergence shows the new method is cubically convergent. Its practical utility is demonstrated by numerical examples.

Let α be a simple zero of a sufficiently differentiable function f and consider the numerical solution of the equation f(x) = 0. It is clear that

$$f(x) = f(x_n) + \int_{x_n}^{x} f'(t)\,dt. \qquad (2)$$

Suppose we interpolate f' on the interval [x_n, x] by the constant f'(x_n); then (x − x_n) f'(x_n) provides an approximation for the indefinite integral in (2) and, by taking x = α, we obtain 0 ≈ f(x_n) + (α − x_n) f'(x_n). Thus, a new approximation x_{n+1} to α is given by

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}.$$

Dr. Osama Yusuf Ababneh is with the Department of Mathematics, Irbid National University, Irbid, Jordan. E-mail: ababnehukm@.

On the other hand, if we
approximate the integral in (2) by the trapezoidal rule and take x = α, we obtain 0 ≈ f(x_n) + (1/2)(α - x_n)(f'(x_n) + f'(α)), and therefore a new approximation x_{n+1} to α is given by

    x_{n+1} = x_n - 2f(x_n) / (f'(x_n) + f'(x_{n+1})).

If Newton's method is used on the right-hand side of the above equation to overcome the implicitness problem, then

    x_{n+1} = x_n - 2f(x_n) / (f'(x_n) + f'(z_{n+1})),    (3)

where

    z_{n+1} = x_n - f(x_n)/f'(x_n)

is obtained, which is, for n = 0, 1, 2, ..., the trapezoidal Newton's method of Fernando et al. [1]. Let us rewrite equation (3) as

    x_{n+1} = x_n - f(x_n) / [(f'(x_n) + f'(z_{n+1}))/2],    n = 0, 1, ...    (4)

So this variant of Newton's method can be viewed as obtained by using the arithmetic mean of f'(x_n) and f'(z_{n+1}) instead of f'(x_n) in Newton's method (1); we therefore call it the arithmetic mean Newton's (AN) method. In [3], the harmonic mean is used instead of the arithmetic mean to get the new formula

    x_{n+1} = x_n - f(x_n)(f'(x_n) + f'(z_{n+1})) / (2 f'(x_n) f'(z_{n+1})),    n = 0, 1, ...    (5)

which is called the harmonic mean Newton's (HN) method, and the midpoint is used to get

    x_{n+1} = x_n - f(x_n) / f'((x_n + z_{n+1})/2),    n = 0, 1, ...    (6)

which is called the midpoint Newton's (MN) method.

II. NEW ITERATIVE METHOD AND CONVERGENCE ANALYSIS

If we use the contra-harmonic mean instead of the arithmetic mean in (4), we get the new Newton method

    x_{n+1} = x_n - f(x_n)(f'(x_n) + f'(z_{n+1})) / (f'^2(x_n) + f'^2(z_{n+1})),    n = 0, 1, ...    (7)

where

    z_{n+1} = x_n - f(x_n)/f'(x_n),    n = 0, 1, ...    (8)

We call it the contra-harmonic Newton's (CHN) method.

Theorem 2.1: Let α ∈ D be a simple zero of a sufficiently differentiable function f : D ⊂ R → R for an open interval D. If x_0 is sufficiently close to α, then the method defined by (7) converges cubically.

Proof: Let α be a simple zero of f. Since f is sufficiently differentiable, by expanding f(x_n) and f'(x_n) about α we get

    f(x_n) = f'(α)[e_n + c_2 e_n^2 + c_3 e_n^3 + ...],    (9)

and

    f'(x_n) = f'(α)[1 + 2c_2 e_n + 3c_3 e_n^2 + 4c_4 e_n^3 + ...],    (10)

where c_k = (1/k!) f^(k)(α)/f'(α), k = 2, 3, ..., and e_n = x_n - α. Direct division gives us

    f(x_n)/f'(x_n) = e_n - c_2 e_n^2 + 2(c_2^2 - c_3) e_n^3 + O(e_n^4),

and hence, for z_{n+1} given in (8), we have

    z_{n+1} = α + c_2 e_n^2 + 2(c_3 - c_2^2) e_n^3 + O(e_n^4).    (11)

Again expanding f'(z_{n+1}) about α and using (11), we obtain

    f'(z_{n+1}) = f'(α) + (z_{n+1} - α) f''(α) + ((z_{n+1} - α)^2 / 2!) f'''(α) + ...
                = f'(α)[1 + 2c_2^2 e_n^2 + 4(c_2 c_3 - c_2^3) e_n^3 + O(e_n^4)].    (12)

By using (10) we obtain

    f'^2(x_n) = f'^2(α)[1 + 4c_2 e_n + (4c_2^2 + 6c_3) e_n^2 + (12c_2 c_3 + 8c_4) e_n^3 + ...].

From (12), we get

    f'^2(z_{n+1}) = f'^2(α)[1 + 4c_2^2 e_n^2 + (8c_2 c_3 - 8c_2^3) e_n^3 + ...],

and

    f'^2(x_n) + f'^2(z_{n+1}) = 2 f'^2(α)[1 + 2c_2 e_n + (4c_2^2 + 3c_3) e_n^2 + (4c_4 + 10c_2 c_3 - 4c_2^3) e_n^3 + ...].

From (10) and (12) we get

    f'(x_n) + f'(z_{n+1}) = 2 f'(α)[1 + c_2 e_n + (c_2^2 + (3/2)c_3) e_n^2 + 2(c_2 c_3 - c_2^3 + c_4) e_n^3 + O(e_n^4)],

and using (9),

    f(x_n)(f'(x_n) + f'(z_{n+1})) = 2 f'^2(α)[e_n + 2c_2 e_n^2 + (2c_2^2 + (5/2)c_3) e_n^3 + O(e_n^4)].

Hence,

    f(x_n)(f'(x_n) + f'(z_{n+1})) / (f'^2(x_n) + f'^2(z_{n+1})) = e_n - (2c_2^2 + (1/2)c_3) e_n^3 + O(e_n^4),

so that

    x_{n+1} = x_n - [e_n - (2c_2^2 + (1/2)c_3) e_n^3 + O(e_n^4)],

or, subtracting α from both sides of this equation,

    e_{n+1} = (2c_2^2 + (1/2)c_3) e_n^3 + O(e_n^4),

which shows that the contra-harmonic Newton's method is of third order.

III. NUMERICAL RESULTS AND CONCLUSIONS

In this section, we present the results of some numerical tests comparing the efficiency of the new (CHN) method with the classical Newton's (CN) method, the (AN) method of Fernando et al. [1], and the (HN) and (MN) methods of [3]. The numerical computations reported here have been carried out in a Mathematica environment. The stopping criterion has been taken as |x_{n+1} - x_n| < ε with the fixed tolerance ε = 10^(-14), and the following test functions have been used:

    f_1(x) = x^3 + 4x^2 - 10,      α = 1.365230013414097,
    f_2(x) = x^2 - e^x - 3x + 2,   α = 0.2575302854398608,
    f_3(x) = sin^2(x) - x^2 + 1,   α = 1.404491648215341,
    f_4(x) = cos(x) - x,           α = 0.7390851332151607,
    f_5(x) = (x - 1)^3 - 1,        α = 2.

In Table I and Table II, we give the number of iterations (N) and
total number of function evaluations (TNFE) required to satisfy the stopping criterion. As far as the numerical results are concerned, for most of the cases the HN and MN methods require the least number of function evaluations. All numerical results are in accordance with the theory, and the basic advantage of the variants of Newton's method based on means or integration methods is that, although they are of third order, they do not require the computation of second- or higher-order derivatives.

TABLE I. ITERATION NUMBER (N)

    f(x)             CN   AN   HN   MN   CHN
    f1, x0 = 1        6    4    4    4    5
    f2, x0 = 1        5    4    4    4    4
    f2, x0 = 2        6    5    5    4    5
    f2, x0 = 3        7    5    5    5    5
    f3, x0 = 1        7    5    4    5    5
    f3, x0 = 3        7    5    4    4    5
    f4, x0 = 1        5    3    4    4    4
    f4, x0 = 1.7      5    4    4    4    4
    f4, x0 = -0.3     6    4    5    5    5
    f5, x0 = 3        7    5    5    5    5

TABLE II. THE TOTAL NUMBER OF FUNCTION EVALUATIONS (TNFE)

    f(x)             CN   AN   HN   MN   CHN
    f1, x0 = 1       12   12   12   12   15
    f2, x0 = 1       10   12   12   12   12
    f2, x0 = 2       12   15   15   15   15
    f2, x0 = 3       14   15   15   15   15
    f3, x0 = 1       14   15   12   15   15
    f3, x0 = 3       14   15   12   12   15
    f4, x0 = 1       10    9   12   12   12
    f4, x0 = 1.7     10   12   12   12   12
    f4, x0 = -0.3    12   12   15   15   15
    f5, x0 = 3       14   15   15   15   15

REFERENCES

[1] S. Weerakoon, T. G. I. Fernando, A variant of Newton's method with accelerated third-order convergence, Appl. Math. Lett. 13 (2000) 87-93.
[2] M. Frontini, E. Sormani, Some variants of Newton's method with third-order convergence, Appl. Math. Comput. 140 (2003) 419-426.
[3] A. Y. Özban, Some new variants of Newton's method, Appl. Math. Lett. 17 (2004) 677-682.
[4] M. Frontini, E. Sormani, Modified Newton's method with third-order convergence and multiple roots, J. Comput. Appl. Math. 156 (2003) 345-354.
[5] Changbum Chun, A two-parameter third-order family of methods for solving nonlinear equations, Applied Mathematics and Computation 189 (2007) 1822-1827.
[6] Kou Jisheng, Li Yitian, Wang Xiuhua, Third-order modification of Newton's method, Journal of Computational and Applied Mathematics 205 (2007) 1-5.
[7] Mamta, V. Kanwar, V. K. Kukreja, Sukhjit Singh, On some third-order iterative methods for solving nonlinear equations, Applied Mathematics and Computation 171 (2005) 272-280.
[8] A. M. Ostrowski, Solution of Equations in Euclidean and Banach Spaces, third ed., Academic Press, New York, 1973.
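The contra-harmonic Newton iteration (7)-(8) is straightforward to implement. The following Python sketch (function and variable names are our own) applies it to the test function f_4(x) = cos(x) - x with the paper's stopping rule |x_{n+1} - x_n| < ε:

```python
import math

def chn(f, df, x0, eps=1e-14, max_iter=50):
    """Contra-harmonic Newton (CHN) method, Eqs. (7)-(8):
    z = x - f(x)/f'(x) is a classical Newton predictor, and the next
    iterate replaces f'(x) in (1) by the contra-harmonic mean
    (f'(x)^2 + f'(z)^2) / (f'(x) + f'(z))."""
    x = x0
    for n in range(1, max_iter + 1):
        fx, dfx = f(x), df(x)
        z = x - fx / dfx                                  # predictor, Eq. (8)
        dfz = df(z)
        x_new = x - fx * (dfx + dfz) / (dfx**2 + dfz**2)  # Eq. (7)
        if abs(x_new - x) < eps:    # stopping rule |x_{n+1} - x_n| < eps
            return x_new, n
        x = x_new
    return x, max_iter

# f4(x) = cos(x) - x, simple root alpha = 0.7390851332151607
root, n_iter = chn(lambda x: math.cos(x) - x,
                   lambda x: -math.sin(x) - 1.0,
                   x0=1.0)
```

Each iteration costs one evaluation of f and two of f', so TNFE = 3N, consistent with the CHN column of Table II.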
MapReduce Data Analysis
ABSTRACT: There is currently considerable enthusiasm around the MapReduce (MR) paradigm for large-scale data analysis. Although the basic control flow of this framework has existed in parallel SQL database management systems (DBMS) for over 20 years, some have called MR a dramatically new computing model. In this paper, we describe and compare both paradigms. Furthermore, we evaluate both kinds of systems in terms of performance and development complexity. To this end, we define a benchmark consisting of a collection of tasks that we have run on an open source version of MR as well as on two parallel DBMSs. For each task, we measure each system's performance for various degrees of parallelism on a cluster of 100 nodes. Our results reveal some interesting trade-offs. Although the process to load data into and tune the execution of parallel DBMSs took much longer than the MR system, the observed performance of these DBMSs was strikingly better. We speculate about the causes of the dramatic performance difference and consider implementation concepts that future systems should take from both kinds of architectures.
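As a concrete reminder of the control flow being compared, here is a toy, single-process Python sketch of the MR paradigm's three stages (map, shuffle/group, reduce) on the canonical word-count task. All names are illustrative; a real MR system distributes each stage across the cluster:

```python
from itertools import groupby
from operator import itemgetter

def run_mapreduce(records, map_fn, reduce_fn):
    # Map stage: apply the user-supplied map function to every record,
    # producing intermediate (key, value) pairs.
    pairs = [kv for rec in records for kv in map_fn(rec)]
    # Shuffle stage: group intermediate pairs by key (a single-process
    # stand-in for the distributed sort/repartition step).
    pairs.sort(key=itemgetter(0))
    grouped = {k: [v for _, v in g]
               for k, g in groupby(pairs, key=itemgetter(0))}
    # Reduce stage: apply the user-supplied reduce function per key.
    return {k: reduce_fn(k, vs) for k, vs in grouped.items()}

docs = ["the quick brown fox", "jumped over the lazy dog"]
counts = run_mapreduce(docs,
                       map_fn=lambda doc: [(w, 1) for w in doc.split()],
                       reduce_fn=lambda k, vs: sum(vs))
# counts["the"] == 2
```

The user writes only `map_fn` and `reduce_fn`; the framework owns partitioning and grouping, which is exactly the control flow that parallel DBMSs implement internally for grouped aggregation.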
Controlling chaos based on an adaptive nonlinear compensating mechanism*

Xu Shu

(*Corresponding author, Xu Shu, email: 123456789@)

Abstract  The control problems of chaotic systems are investigated in the presence of parametric uncertainty and persistent external disturbances based on nonlinear control theory. By designing a nonlinear compensating mechanism, the system's deterministic nonlinearity, parametric uncertainty and disturbance effect can be compensated effectively. The renowned chaotic Lorenz system subject to parametric variations and external disturbances is studied as an illustrative example. From Lyapunov stability theory, sufficient conditions for the choice of control parameters are derived to guarantee chaos control. Several groups of experiments are carried out, including parameter-change experiments, set-point-change experiments and disturbance experiments. Simulation results indicate that the chaotic motion can be regulated not only to steady states but also to any desired periodic orbits with great immunity to parametric variations and external disturbances.

Keywords: chaotic system, nonlinear compensating mechanism, Lorenz chaotic system
PACC: 0545

1. Introduction

Chaotic motion, as a peculiar behavior of deterministic systems, may be undesirable in many cases, so suppressing such a phenomenon has been intensively studied in recent years. Generally speaking, chaos suppression and chaos synchronization[1-4] are two active research fields in chaos control and are both crucial in applications of chaos. In what follows we deal only with the problem of chaos suppression and will not discuss chaos synchronization. Since the early 1990s, when small time-dependent parameter perturbations were introduced by Ott, Grebogi and Yorke to eliminate chaos,[5] many effective control methods have been reported in the scientific literature.[1-4,6-36,38-44,46] There are two lines in these methods.
One is to introduce parameter perturbations to an accessible system parameter;[5-6,8-13] the other is to introduce an additive external force to the original uncontrolled chaotic system.[14-37,39-43,47] Along the first line, when the system parameters are not accessible or cannot be changed easily, or environmental perturbations cannot be avoided, these methods fail. Recently, using an additive external force to achieve chaos suppression has been in the ascendant. Referring to the second line of approaches, various techniques and methods have been proposed to achieve chaos elimination, to mention only a few:

(i) Linear state feedback control. In Ref.[14] a conventional feedback controller was designed to drive the chaotic Duffing equation to one of its inherent multiperiodic orbits. Recently a linear feedback control law based upon the Lyapunov-Krasovskii (LK) method was developed for the suppression of chaotic oscillations.[15] A linear state feedback controller was designed to solve the chaos control problem of a class of new chaotic systems in Ref.[16].

(ii) Structure variation control.[12-16] Since Yu X proposed the structure variation method for controlling chaos of the Lorenz system,[17] some improved sliding-mode control strategies have been introduced in chaos control. In Ref.[18] the author used a newly developed sliding-mode controller with a time-varying manifold dynamic to compensate the external excitation in chaotic systems. In Ref.[19] design schemes of integration fuzzy sliding-mode control were addressed, in which the reaching law was proposed by a set of linguistic rules.

*Project supported by the National Natural Science Foundation of China (Grant No 50376029).
†Corresponding author. E-mail: zibotll@
A radial basis function sliding-mode controller was introduced in Ref.[20] for chaos control.

(iii) Nonlinear geometric control. Nonlinear geometric control theory was introduced for chaos control in Ref.[22], in which a Lorenz system model slightly different from the original Lorenz system was studied, considering only the Prandtl number variation and process noise. In Ref.[23] the state-space exact linearization method was also used to stabilize the equilibrium of the Lorenz system with a controllable Rayleigh number.

(iv) Intelligence control.[24-27] An intelligent control method based on an RBF neural network was proposed for chaos control in Ref.[24]. Liu H, Liu D and Ren H P suggested in Ref.[25] using Least-Squares Support Vector Machines to drive the chaotic system to desirable points. A switching static output-feedback fuzzy-model-based controller was studied in Ref.[27], which was capable of handling chaos.

Other methods are also attentively studied, such as entrainment and migration control, impulsive control, optimal control, stochastic control, robust control, adaptive control, the backstepping design method and so on. A detailed survey of recent publications on the control of chaos can be found in Refs.[28-34] and the references therein. Among most of the existing control strategies, it is considered essential to know the model parameters for the derivation of a controller, and the control goal is often to stabilize the embedded unstable periodic orbits of chaotic systems or to control the system to its equilibrium points. In the case of controlling the system to its equilibrium point, one general approach is to linearize the system about the given equilibrium point and then design a controller with local stability, which limits the use of the control scheme.
For methods based on machine learning, such as the neural network method[24] or the support vector machine method,[25] the control performance often depends largely on the training samples, and sometimes better generalization capability cannot be guaranteed.

Chaos is the special phenomenon of deterministic nonlinear systems, and nonlinearity is its essence. So if a nonlinear real-time compensator can eliminate the effect of the system nonlinearities, chaotic motion can be expected to be suppressed, and consequently the chaotic system can be controlled to a desired state. Under the guidance of nonlinear control theory, the objective of this paper is to design a control system to drive chaotic systems not only to steady states but also to periodic trajectories. In the next section the controller architecture is introduced. In section 3, a Lorenz system considering parametric uncertainties and external disturbances is studied as an illustrative example. Two control schemes are designed for the studied chaotic system. By constructing appropriate Lyapunov functions, after rigorous analysis from Lyapunov stability theory, sufficient conditions for the choice of control parameters are deduced for each scheme. Then in section 4 we present numerical simulation results to illustrate the effectiveness of the design techniques. Finally some conclusions are provided to close the text.

2. Controller architecture

A system differential equation is only an approximate description of the actual plant due to various uncertainties and disturbances. Without loss of generality, let us consider a nonlinear continuous dynamic system which exhibits strange attractors under certain parameter conditions.
With relative degree r ≤ n (n is the dimension of the system), it can be directly described by, or transformed to, the following normal form:

    z_1' = z_2
    ...
    z_{r-1}' = z_r
    z_r' = a(z, v, θ + Δθ) + [b(z, v, θ + Δθ) - 1]u + u + w
    v' = c(z, v, θ + Δθ)
    y = z_1    (1)

where θ is the parameter vector, Δθ denotes the parameter uncertainty, and w stands for the external disturbance, such that |w| ≤ M with M being positive. In Eq.(1), z = (z_1, ..., z_r)^T can be called the external state variable vector and v = (v_{r+1}, ..., v_n)^T the internal state variable vector. As we can see from Eq.(1),

    d(z, v, w, u, θ + Δθ) = a(z, v, θ + Δθ) + w + [b(z, v, θ + Δθ) - 1]u    (2)

includes the system nonlinearities, uncertainties, external disturbances and so on, so that the last equation of the z-subsystem reads z_r' = d + u.

According to the chaotic system (1), the following assumptions are introduced in order to establish the results concerning the controller design (see more details in Ref.[38]).

Assumption 1  The relative degree r of the chaotic system is finite and known.

Assumption 2  The output variable y and its time derivatives y^(i) up to order r - 1 are measurable.

Assumption 3  The zero dynamics of the system are asymptotically stable, i.e., v' = c(0, v, θ + Δθ) is asymptotically stable.

Assumption 4  The sign of the function b(z, v, θ + Δθ) is known, such that it is always positive or always negative.

Since maybe not all of the state vector is measurable, and a(z, v, θ + Δθ) and b(z, v, θ + Δθ) are not known, a controller with integral action is introduced to compensate the influence of d(z, v, w, u, θ + Δθ). Namely,

    u = -h_0 z_1 - h_1 z_2 - ... - h_{r-1} z_r - d̂,    (3)

where d̂, the estimate of d(z, v, w, u, θ + Δθ), is generated by the integral mechanism

    d̂ = ξ + Σ_{i=0}^{r-1} k_i z_{i+1},
    ξ' = -k_{r-1} d̂ - Σ_{i=0}^{r-2} k_i z_{i+2} - k_{r-1} u.    (4)

The controller parameters comprise h_i, i = 0, ..., r-1 and k_i, i = 0, ..., r-1. Here H = [h_0, h_1, ..., h_{r-1}]^T is a Hurwitz vector, such that all eigenvalues of the polynomial

    P(s) = s^r + h_{r-1} s^{r-1} + ... + h_1 s + h_0    (5)

have negative real parts.
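To see the integral compensator at work in the simplest setting, take r = 1, so that under our reading of (3)-(4) the controller reduces to u = -h_0 z_1 - d̂ with d̂ = ξ + k_0 z_1 and ξ' = -k_0 ξ - k_0^2 z_1 - k_0 u; then d̂' = k_0 (d - d̂), so the estimate chases the lumped term d. The Python sketch below is an illustration under that assumption, with a constant unknown disturbance d standing in for the lumped term and a scalar plant z' = d + u integrated by forward Euler:

```python
def simulate_compensator(d=1.5, h0=2.0, k0=2.0, dt=1e-3, T=10.0):
    """Scalar plant z' = d + u with unknown constant disturbance d,
    driven by the r = 1 integral compensator:
        u = -h0*z - dhat,  dhat = xi + k0*z,
        xi' = -k0*xi - k0**2 * z - k0*u.
    Returns the final output z and the final estimate dhat."""
    z, xi = 1.0, 0.0
    for _ in range(int(T / dt)):
        dhat = xi + k0 * z
        u = -h0 * z - dhat
        # forward-Euler integration of plant and compensator state
        z = z + dt * (d + u)
        xi = xi + dt * (-k0 * xi - k0 ** 2 * z - k0 * u)
    return z, xi + k0 * z

z_final, dhat_final = simulate_compensator()
# z_final -> 0 while dhat_final -> d: the integral action recovers d
```

The closed loop obeys z' = -h0*z + (d - d̂) and (d - d̂)' = -k0*(d - d̂) for constant d, so both the regulation error and the estimation error decay exponentially at rates set by h_0 and k_0, mirroring the split between dynamic characteristics and static performance described above.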
Suitable positive constants h_i, i = 0, ..., r-1 can be chosen according to the expected dynamic characteristics; in most cases they are determined according to the design requirements. Define k_{r-1} = μ sign(b(z, v, θ)), where μ stands for a suitable positive constant; the other parameters k_i, i = 0, ..., r-2 can be selected arbitrarily. After H = [h_0, h_1, ..., h_{r-1}]^T is decided, we can tune k_i, i = 0, ..., r-1 to achieve satisfying static performance.

Remark 1  In this section, we consider an n-dimensional nonlinear continuous dynamic system with strange attractors. By a proper coordinate transformation, it can be represented in a normal form. Then a control system with a nonlinear compensator can be designed easily. In particular, the control parameters can be divided into two parts, which correspond to the dynamic characteristics and the static performance respectively (the theoretical analysis and more details about the controller can be found in Ref.[38]).

3. Illustrative example: the Lorenz system

The Lorenz system captures many of the features of chaotic dynamics, and many control methods have been tested on it.[17,20,22-23,27,30,32-35,42] However, most of the existing methods are model-based and have not considered the influence of persistent external disturbances. The uncontrolled original Lorenz system can be described by

    x_1' = -(P + ΔP)x_1 + (P + ΔP)x_2 + w_1
    x_2' = (R + ΔR)x_1 - x_2 - x_1 x_3 + w_2
    x_3' = x_1 x_2 - (b + Δb)x_3 + w_3    (6)

where P and R are related to the Prandtl number and the Rayleigh number respectively, and b is a geometric factor. ΔP, ΔR and Δb denote the parametric variations, the state variables x_1, x_2 and x_3 represent measures of the fluid velocity and the spatial temperature distribution in the fluid layer under gravity, and w_i, i = 1, 2, 3 represent external disturbances. In the Lorenz system the desired response state variable is x_1. It is desired that x_1 be regulated to x_{1r}, where x_{1r} is a given constant.
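Before adding control, the chaotic behavior of the nominal system (6) (Δ-terms and disturbances set to zero) is easy to reproduce numerically. The sketch below, our own minimal fourth-order Runge-Kutta integrator with the parameter values P = 10, R = 28, b = 8/3 used later in the paper, illustrates the sensitive dependence on initial conditions that makes chaos suppression non-trivial: two trajectories started 10^-8 apart separate to an O(1) distance within a few tens of time units, while each remains bounded on the attractor:

```python
def lorenz(state, P=10.0, R=28.0, b=8.0 / 3.0):
    # Nominal system (6): Delta-terms and disturbances w_i set to zero
    x1, x2, x3 = state
    return (-P * x1 + P * x2,
            R * x1 - x2 - x1 * x3,
            x1 * x2 - b * x3)

def rk4_step(f, s, dt):
    # One classical fourth-order Runge-Kutta step
    k1 = f(s)
    k2 = f(tuple(si + 0.5 * dt * k for si, k in zip(s, k1)))
    k3 = f(tuple(si + 0.5 * dt * k for si, k in zip(s, k2)))
    k4 = f(tuple(si + dt * k for si, k in zip(s, k3)))
    return tuple(si + dt / 6.0 * (a1 + 2 * a2 + 2 * a3 + a4)
                 for si, a1, a2, a3, a4 in zip(s, k1, k2, k3, k4))

dt, steps = 0.005, 8000               # integrate over t in [0, 40]
a = (10.0, 10.0, 10.0)                # nominal initial condition
b2 = (10.0 + 1e-8, 10.0, 10.0)        # copy perturbed by 1e-8 in x1
max_sep = 0.0
for _ in range(steps):
    a, b2 = rk4_step(lorenz, a, dt), rk4_step(lorenz, b2, dt)
    sep = sum((u - v) ** 2 for u, v in zip(a, b2)) ** 0.5
    max_sep = max(max_sep, sep)
# max_sep grows to the order of the attractor diameter: sensitive dependence
```

This is exactly the open-loop behavior the controllers of the next subsection are designed to suppress.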
In this section we consider two control schemes for system (6).

3.1 Control schemes for the Lorenz chaotic system

3.1.1 Control scheme 1

The control acts on the right-hand side of the first equation (x_1'); thus the controlled Lorenz system without disturbance can be depicted as

    x_1' = -P x_1 + P x_2 + u
    x_2' = R x_1 - x_2 - x_1 x_3
    x_3' = x_1 x_2 - b x_3
    y = x_1    (7)

By simple computation, system (7) has relative degree 1 (i.e., the lowest-order time derivative of the output y that is directly related to the control u is the first), and it can be rewritten as

    z_1' = -P z_1 + P v_2 + u
    v_2' = R z_1 - v_2 - z_1 v_3
    v_3' = z_1 v_2 - b v_3
    y = z_1    (8)

According to section 2, the following control strategy is introduced:

    u = -h_0 z_1 - d̂,    (9)

    d̂ = ξ + k_0 z_1,
    ξ' = -k_0 ξ - k_0^2 z_1 - k_0 u.    (10)

Theorem 1  Under Assumptions 1 to 4 there exists a constant value μ* > 0 such that if μ > μ*, then the closed-loop system (8), (9) and (10) is asymptotically stable.

Proof  Define d = -P z_1 + P v_2; Eq.(8) can then be rewritten as

    z_1' = d + u
    v_2' = R z_1 - v_2 - z_1 v_3
    v_3' = z_1 v_2 - b v_3    (11)

Substituting Eq.(9) into Eq.(11) yields

    z_1' = -h_0 z_1 + d - d̂
    v_2' = R z_1 - v_2 - z_1 v_3
    v_3' = z_1 v_2 - b v_3    (12)

Computing the time derivatives of d and d̂ and considering Eq.(12) yields

    d' = -P z_1' + P v_2' = -P(-h_0 z_1 + d - d̂) + P(R z_1 - v_2 - z_1 v_3),    (13)

    d̂' = ξ' + k_0 z_1' = -k_0 ξ - k_0^2 z_1 - k_0 u + k_0(d + u) = -k_0(ξ + k_0 z_1) + k_0 d = k_0(d - d̂).    (14)

Defining d̃ = d - d̂, we have

    d̃' = d' - d̂' = (P h_0 + P R) z_1 - P v_2 - P z_1 v_3 - (P + k_0) d̃.    (15)

Then we obtain the following closed-loop system:

    z_1' = -h_0 z_1 + d̃
    v_2' = R z_1 - v_2 - z_1 v_3
    v_3' = z_1 v_2 - b v_3
    d̃' = (P h_0 + P R) z_1 - P v_2 - P z_1 v_3 - (P + k_0) d̃    (16)

To stabilize the closed-loop system (16), a Lyapunov function is defined by

    V(ς) = (1/2) ‖ς‖^2,    (17)

where ς denotes the state vector (z_1, v_2, v_3, d̃)^T and ‖·‖ is the Euclidean norm,
i.e.,

    V(ς) = (1/2)(z_1^2 + v_2^2 + v_3^2 + d̃^2).    (18)

We define the following compact domain, constituted by all the points inside the ball of radius √2 M:

    U = {(z_1, v_2, v_3, d̃) : z_1^2 + v_2^2 + v_3^2 + d̃^2 ≤ 2M^2}.    (19)

By taking the time derivative of V(ς) along (16), we have

    V' = z_1 z_1' + v_2 v_2' + v_3 v_3'
       = -h_0 z_1^2 - v_2^2 - b v_3^2 - (k_0 + P) d̃^2 + R z_1 v_2 + (1 + P R + P h_0) z_1 d̃ - P v_2 d̃ - P z_1 v_3 d̃.    (20)

For any (z_1, v_2, v_3, d̃) ∈ U, the cubic term is bounded using |v_3| ≤ √2 M, so that

    V' ≤ -h_0 z_1^2 - v_2^2 - b v_3^2 - (k_0 + P) d̃^2 + R|z_1||v_2| + (1 + P R + P h_0 + √2 P M)|z_1||d̃| + P|v_2||d̃|,    (21)

i.e.,

    V' ≤ -[|z_1| |v_2| |v_3| |d̃|] Q [|z_1| |v_2| |v_3| |d̃|]^T    (22)

with the symmetric matrix

    Q = [  h_0                              -R/2    0    -(1 + PR + Ph_0 + √2 PM)/2
          -R/2                               1      0    -P/2
           0                                 0      b     0
          -(1 + PR + Ph_0 + √2 PM)/2       -P/2    0     k_0 + P ].

So if this symmetric parameter matrix is positive definite, then V' is negative definite, which implies that system (16) is asymptotically stable by Lyapunov stability theory. Defining the leading principal minors of the matrix as D_i, i = 1, 2, 3, 4, the well-known Sylvester criterion gives the inequalities

    D_1 = h_0 > 0,    (23)
    D_2 = h_0 - R^2/4 > 0,    (24)
    D_3 = b(h_0 - R^2/4) > 0,    (25)
    D_4 > 0.    (26)

After h_0 is determined by solving inequalities (23) to (25), inequality (26) serves as the constraint for the choice of k_0: expanding D_4 shows that it is satisfied whenever k_0 exceeds an explicit threshold,

    k_0 > μ*,    (27)

where μ* is a constant determined by P, R, b, M and h_0. This completes the proof of Theorem 1.

3.1.2 Control scheme 2

Adding the control signal to the second equation (x_2'), the system under control becomes

    x_1' = -P x_1 + P x_2
    x_2' = R x_1 - x_2 - x_1 x_3 + u
    x_3' = x_1 x_2 - b x_3    (28)

From Eq.(28), for a constant target x_1(t) = x_{1r} we have x_1'(t) = 0, and solving the first differential equation gives x_{2r} = x_{1r}. Moreover, as t → ∞, x_{3r} converges to x_{1r}^2 / b. Since x_1 and x_2 have
Since 1x and 2x havethe same equilibrium, then the measured state can also be chosen as 2x .To determine u , consider the coordinate transform:122133z x v x v x=⎧⎪=⎨⎪=⎩ and reformulate Eq.(28) into the following normal form:1223121231231zRv v v z u vPz Pv v z v bv y z =--+⎧⎪=-⎨⎪=-⎩= (29) thus the controller can be derived, which has the same expression as scheme 1.Theorem 2 Under Assumptions 1, 2, 3 and 4, there exists a constant value *0μ>, such that if *μμ>, then the closed-loop system (9), (10) and (29) is asymptotically stable.Proof In order to get compact analysis, Eq.(29) can be rewritten as12123123z d u v P z P v vz v bv =+⎧⎪=-⎨⎪=-⎩ (30) where 2231d Rv v v z =--Substituting Eq.(9) into Eq.(30),we obtain:1012123123ˆz h z d dv P z P v v z v bv ⎧=-+-⎪=-⎨⎪=-⎩ (31) Giving the following definition:ˆdd d =- (32) then we can get22323112123212301()()()()dRv v v v v z R Pz Pv Pz Pv v v z v bv h z d =--- =--- ----+ (33) 012001000ˆ-()d k z k k z k u k d u k dξξ=+ =--++ = (34) 121232123010ˆ()()()(1)dd d R Pz Pv Pz Pv v v z v bv h z k d=- =--- --+-+ (35)Thus the closed-loop system can be represented as the following compact form:1012123123121232123010()()()(1)zh z d v Pz Pv v z v bv d R Pz Pv Pz Pv v v z v bv h z k d⎧=-+⎪⎪=-⎪=-⎨⎪=---⎪⎪ --+-+⎩(36) The following quadratic L yapunov function is chosen:21()2V ςς=(37)where, ςdenotes state vector ()123,,,Tz v v d , is the Euclidean norm. 
i.e.,

    V(ς) = (1/2)(z_1^2 + v_2^2 + v_3^2 + d̃^2).    (38)

We again define the compact domain constituted by all the points inside the ball of radius √2 M:

    U = {(z_1, v_2, v_3, d̃) : z_1^2 + v_2^2 + v_3^2 + d̃^2 ≤ 2M^2}.    (39)

Differentiating V with respect to t and using Eq.(36) yields

    V' = z_1 z_1' + v_2 v_2' + v_3 v_3'
       = -h_0 z_1^2 - P v_2^2 - b v_3^2 - (1 + k_0) d̃^2 + (1 + P R + h_0) z_1 d̃ + P z_1 v_2 + z_1 v_2 v_3 + (P + b) v_2 v_3 d̃ - P R v_2 d̃ - P z_1 v_3 d̃ - v_2^2 z_1 d̃.    (40)

Similarly, for any (z_1, v_2, v_3, d̃) ∈ U, bounding each cubic and quartic coupling term on U (using |v_i| ≤ √2 M and v_2^2 ≤ 2M^2) yields a quadratic-form estimate of the type (21)-(22), whose symmetric matrix has diagonal (h_0, P, b, 1 + k_0) and whose off-diagonal entries collect the bounded coupling coefficients (Eqs. (41)-(45)). Applying the Sylvester criterion to this matrix gives

    D_1 = h_0 > 0,    (46)
    D_2 = P h_0 - P^2/4 > 0,    (47)
    D_3 = b D_2 - P M^2/2 > 0,    (48)
    D_4 = (1 + k_0) D_3 - α_1 - α_2 - α_3 > 0,    (49)

where D_i, i = 1, 2, 3, 4 are the leading principal minors and α_1, α_2, α_3 (Eqs. (43)-(45)) are positive constants determined by P, R, b, M and h_0. Condition (49) holds whenever

    k_0 > μ* = (α_1 + α_2 + α_3)/D_3 - 1.    (50)

Theorem 2 is then proved.

Remark 2  In this section we give two control schemes for controlling chaos in the Lorenz system. For each scheme the control depends on the observed variable only, and only two control parameters need to be tuned, viz. h_0 and k_0. According to Lyapunov stability theory, after h_0 is fixed, a sufficient condition for the choice of the parameter k_0 is also obtained.

4. Simulation results

Choosing P = 10, R = 28 and b = 8/3, the uncontrolled Lorenz system exhibits chaotic behavior, as plotted in Fig.1. In the simulations the initial values of the state of the system are x_1(0) = 10, x_2(0) = 10, x_3(0) = 10.

Fig.1.
Chaotic trajectories of the Lorenz system (a) projected on the x_1-x_2 plane, (b) projected on the x_1-x_3 plane.

4.1 Simulation results: controlling the trajectory to a steady state

In this section only the simulation results of control scheme 2 are depicted; the simulation results of control scheme 1 are given in the Appendix. For the first five seconds the control input is not active; at t = 5 s the control signal is applied and the system trajectory is steered to the set point (x_{1r}, x_{2r}, x_{1r}^2/b)^T = (8.5, 8.5, 27.1)^T, as can be seen in Fig.2(a). The time history of the Lyapunov function is illustrated in Fig.2(b).

Fig.2. (a) State responses under control, (b) Time history of the Lyapunov function.

A. Simulation results in the presence of parameter changes

At t = 9 s, the system parameters are abruptly changed to P = 15, R = 35 and b = 12/3. Accordingly the equilibrium changes to (x_{1r}, x_{2r}, x_{1r}^2/b)^T = (8.5, 8.5, 18.1)^T. After a very short transient the system state converges to the new point, as shown in Fig.3(a). Fig.4(a) represents the evolution in time of the Lyapunov function.

B. Simulation results in the presence of a set-point change

At t = 9 s, the target is abruptly changed to (x_{1r}, x_{2r}, x_{1r}^2/b)^T = (12, 12, 54)^T; the responses of the system state are then shown in Fig.3(b), and Fig.4(b) shows the time history of the Lyapunov function.

Fig.3. State responses (a) in the presence of parameter variations, (b) in the presence of a set-point change.

Fig.4. Time history of the Lyapunov function (a) in the presence of parameter variations, (b) in the presence of a set-point change.

C. Simulation results in the presence of disturbance

In Eq.(6) the external periodic disturbances w_i = 3cos(5πt), i = 1, 2, 3 are considered. The time responses of the system states are given in Fig.5.
After control, the steady-state phase-plane trajectory describes a limit cycle, as shown in Fig.6.

Fig.5. State responses in the presence of periodic disturbance.

Fig.6. The state-space trajectory for t ∈ [10, 12] in the presence of periodic disturbance.

D. Simulation results in the presence of random noise

Under the influence of random noise,

    x_1' = -P x_1 + P x_2 + ε δ_1
    x_2' = R x_1 - x_2 - x_1 x_3 + u + ε δ_2
    x_3' = x_1 x_2 - b x_3 + ε δ_3    (51)

where δ_i, i = 1, 2, 3 are normally distributed with mean value 0 and variance 0.5, and ε = 5. The results of the numerical simulation are depicted in Fig.7, which show that the steady responses are hardly affected by the perturbations.

Fig.7. Time responses in the presence of random noise: (a) state responses, (b) state tracking-error responses.

4.2 Simulation results: controlling the trajectory to a periodic orbit

If the reference signal is periodic, then the system output will also track this signal. Figs.8(a) to (d) show the time responses of x_1(t) and the tracking trajectories for 3-period and 4-period orbits respectively.

Fig.8. State responses and the tracked periodic orbits: (a), (b) 3-period; (c), (d) 4-period.

Remark 3  The two controllers designed above solve the chaos control problem of the Lorenz chaotic system, and the controller design method can also be extended to solve the chaos suppression problem of the whole Lorenz system family, namely the unified chaotic system.[44-46] The detailed design process and closed-loop system analysis can be found in the author's other paper.[47] In Ref.[47], according to the different positions at which the scalar control input is added, three controllers are designed to reject the chaotic behaviors of the unified chaotic system. Taking the first state x_1 as the system output and first transforming the system equation into the normal form, the relative degree r (r ≤ 3) of the controlled system is known. Then we can design the controller with the expressions of Eq.(3) and Eq.(4).
Three effective adaptive nonlinear compensating mechanisms are derived to compensate for the chaotic system nonlinearities and the external disturbances. According to Lyapunov stability theory, sufficient conditions for the choice of the control parameters are deduced, so that designers can tune the design parameters in an explicit way to obtain the required closed-loop behavior. Numerical simulation shows that the three designed controllers can successfully regulate the chaotic motion of the whole family of systems to a given point, or make the output state track a given bounded signal with great robustness.

5. Conclusions

In this letter we introduce a promising tool for designing control systems for chaotic systems subject to persistent disturbances, whose entire dynamics are assumed unknown and whose state variables are not completely measurable. Through integral action, the nonlinearities, including the system structural nonlinearity and various disturbances, are compensated successfully. The approach can therefore handle a large class of chaotic systems, namely those satisfying the four assumptions. Taking the chaotic Lorenz system as an example, it has been shown that the designed control scheme is robust in the sense that the unmeasured states, parameter uncertainties and external disturbance effects are all compensated and chaos suppression is achieved. Some advantages of this control strategy can be summarized as follows:

(1) It is not limited to stabilizing the embedded periodic orbits; the target can be any desired set point or multiperiodic orbit, even when the desired trajectory is not located on an embedded orbit of the chaotic system.

(2) The existence of parameter uncertainty and external disturbances is allowed.
The controller can be designed according to the nominal system.

(3) The dynamic characteristics of the controlled systems are approximately linear, and the transient responses can be regulated by the designer through the controller parameters h_i, i = 0, 1, ..., r-1.

(4) From Lyapunov stability theory, sufficient conditions for the choice of control parameters can be derived easily.

(5) The error converges very fast, even when the initial state is far from the target, without waiting for the actual state to reach a neighborhood of the target state.

Appendix

Simulation results of control scheme 1:

Fig. A1. (a) State responses under control; (b) time history of the Lyapunov function.
Fig. A2. State responses (a) in the presence of parameter variations; (b) in the presence of a set-point change.
Fig. A3. Time history of the Lyapunov function (a) in the presence of parameter variations; (b) in the presence of a set-point change.
Fig. A4. State responses in the presence of a periodic disturbance.
Fig. A5. State responses in the presence of random noise.
Fig. A6. State response and the tracked periodic orbit (period-4).
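As a purely illustrative companion to the Lorenz example, the sketch below regulates the chaotic Lorenz system to one of its equilibria using a plain high-gain state feedback. This is not the adaptive integral-action controller of the letter: the feedback law, the gain k = 50, the step size, and the horizon are all assumptions made for the sketch.

```python
import math

# Lorenz parameters (classical chaotic regime)
SIGMA, R, B = 10.0, 28.0, 8.0 / 3.0
# Target: the C+ equilibrium of the uncontrolled system
XEQ = (math.sqrt(B * (R - 1)), math.sqrt(B * (R - 1)), R - 1)

def lorenz_controlled(s, k):
    """Lorenz vector field plus a simple high-gain feedback -k*(s - XEQ).

    Illustrative stabilizer only, NOT the paper's adaptive integral-action
    controller; the gain k is a hypothetical choice.
    """
    x, y, z = s
    fx = SIGMA * (y - x)
    fy = R * x - y - x * z
    fz = x * y - B * z
    return (fx - k * (x - XEQ[0]),
            fy - k * (y - XEQ[1]),
            fz - k * (z - XEQ[2]))

def rk4_step(s, h, k):
    def add(a, b, c):  # componentwise a + c*b
        return tuple(ai + c * bi for ai, bi in zip(a, b))
    k1 = lorenz_controlled(s, k)
    k2 = lorenz_controlled(add(s, k1, h / 2), k)
    k3 = lorenz_controlled(add(s, k2, h / 2), k)
    k4 = lorenz_controlled(add(s, k3, h), k)
    return tuple(si + h / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

state = (1.0, 1.0, 1.0)
for _ in range(2000):            # integrate to t = 20 with h = 0.01
    state = rk4_step(state, 0.01, k=50.0)

err = max(abs(a - b) for a, b in zip(state, XEQ))
```

With the high gain the closed loop contracts toward the C+ equilibrium, so `err` ends up far below the tolerance; the paper's controller achieves the same goal without full state measurement.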
Foreign-Language Databases for the MCM/ICM
Evaluation models: the analytic hierarchy process (AHP), fuzzy mathematics, grey systems, and entropy/information-value theory for portfolios. Use literature to motivate your model and to validate your results. Even if a model egregiously simplifies the problem, it may provide insight into the basic behavior and inspire improved models. This progression leads to a respectable and satisfying final model, with its validity stemming from the preceding models. Build multiple models.
Even if the initial model's assumptions are too extreme, it can still give us insight into the underlying behavior.

Therefore, if one of your models turns out to be wrong, you may still write it up, but note exactly where it fails. Note also that Problem A is continuous and Problem B is discrete. The three sponsoring organizations: The Institute for Operations Research and the Management Sciences (INFORMS); The Society for Industrial and Applied Mathematics (SIAM); The Mathematical Association of America (MAA). Useful sources: the National Science and Technology Library; the Wuhan Documentation and Information Center, Chinese Academy of Sciences ("foreign-language databases"); the Tongji University Library; the Ecological Society of America (ESA). INFORMS, one of the problem-setting organizations, publishes the latest operations research and management science methods and applications.
In addition, INFORMS organizes numerous professional conferences from time to time.

Password: Inform05. The research areas of its 11 journals are as follows: 1. Decision Analysis — founded in 2004, a peer-reviewed quarterly.
It promotes the theory, application, and teaching of decision analysis.

The journal focuses on developing and studying operational methods for decision making, absorbing all aspects of decision theory and decision analysis to provide practical guidance for decision makers.

2. Information Systems Research — a leading international journal for theory, research, and the development of scholarly thought, focused on information systems in organizations, institutions, and society, providing knowledge that improves the productive application of information technology to human organizations and their management; it also carries papers at the intersection of operations research and computing.
A Statistical Analysis of Special Characters in the English Abstracts of Scientific Papers: Mathematics as a Case Study
... a question worth discussing. By comparing and consulting the ways Chinese and foreign mathematics journals handle special characters in their English abstracts, this question will come to receive the attention it deserves, and formatting practice should likewise converge with international trends toward standardization and normalization. In the samples examined, 47% of the Chinese and 43% of the foreign journal abstracts contained no special characters at all, in keeping with the requirements of the retrieval systems discussed above.

Table 1. Special characters in the English abstracts of Chinese and foreign mathematics journals (rows include "none (0 occurrences)").

Mathematical symbols can also be replaced by verbal descriptions: the symbol ∈ in Example 6, for instance, can be spelled out in words, following the practice of THE AMERICAN MATHEMATICAL SOCIETY (e.g. in ADVANCES IN ...). Wording that merely duplicates the surrounding text, such as "multiple solutions" in Example 5, can be omitted, and the mathematical expression itself can likewise be rendered in words; the example is revised accordingly. Further sample fragments include "viscosity coefficient u = -0.2" and Example 8: "In this note we show that ... the fractional integral and maximal operators with rough kernel ...".
(Complete Edition) Numerical Solution of Partial Differential Equations
The simplest difference method for ordinary differential equations is the Euler method. Although it is rarely used in actual computation, it reveals the general approach to constructing difference schemes, and we introduce it in detail below:
The basic idea of difference methods is to "replace derivatives by difference quotients".

Consider the following two Taylor expansions:
u(t + h) = u(t) + h u'(t) + (h^2/2!) u''(t) + (h^3/3!) u'''(t) + ... + (h^n/n!) u^(n)(t) + O(h^(n+1))    (1)
u(t - h) = u(t) - h u'(t) + (h^2/2!) u''(t) - (h^3/3!) u'''(t) + ... + ((-h)^n/n!) u^(n)(t) + O(h^(n+1))    (2)
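These expansions can be checked numerically: the forward quotient (u(t+h) - u(t))/h is first-order accurate, while subtracting the expansion at t-h from the one at t+h gives a second-order central quotient. A small sketch (the test function sin and the step sizes are arbitrary choices):

```python
import math

def forward_diff(f, t, h):
    # (f(t+h) - f(t)) / h : first-order accurate
    return (f(t + h) - f(t)) / h

def central_diff(f, t, h):
    # (f(t+h) - f(t-h)) / (2h) : second-order accurate
    return (f(t + h) - f(t - h)) / (2 * h)

f, df, t = math.sin, math.cos, 1.0
errs_fwd = [abs(forward_diff(f, t, h) - df(t)) for h in (0.1, 0.05)]
errs_cen = [abs(central_diff(f, t, h) - df(t)) for h in (0.1, 0.05)]

# Halving h should roughly halve the forward error (order 1)
# and quarter the central error (order 2).
order_fwd = math.log(errs_fwd[0] / errs_fwd[1], 2)
order_cen = math.log(errs_cen[0] / errs_cen[1], 2)
```

The observed orders come out close to 1 and 2 respectively, matching the leading terms of the two expansions.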
Franceschini truncated the Navier-Stokes equations to a five-dimensional truncated spectral model, as follows:
x1' = -2x1 + 4x2x3 + 4x4x5
x2' = -9x2 + 3x1x3
x3' = -5x3 - 7x1x2 + Re
x4' = -5x4 - x1x5
x5' = -x5 - 3x1x4

(Re denotes the Reynolds-number parameter; coefficients as in the standard Boldrighini-Franceschini five-mode truncation.)
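A hedged sketch of integrating such a five-mode truncation. The coefficients below are those of the Boldrighini-Franceschini model as commonly quoted in the literature, and the Reynolds-number value, initial state, step size, and horizon are all illustrative assumptions, not values from the slides:

```python
# Five-mode Navier-Stokes truncation, integrated with classical RK4.
RE = 30.0  # hypothetical Reynolds-number parameter

def rhs(x):
    x1, x2, x3, x4, x5 = x
    return (-2*x1 + 4*x2*x3 + 4*x4*x5,
            -9*x2 + 3*x1*x3,
            -5*x3 - 7*x1*x2 + RE,
            -5*x4 - x1*x5,
            -x5 - 3*x1*x4)

def rk4(x, h):
    k1 = rhs(x)
    k2 = rhs(tuple(xi + h/2*ki for xi, ki in zip(x, k1)))
    k3 = rhs(tuple(xi + h/2*ki for xi, ki in zip(x, k2)))
    k4 = rhs(tuple(xi + h*ki for xi, ki in zip(x, k3)))
    return tuple(xi + h/6*(a + 2*b + 2*c + d)
                 for xi, a, b, c, d in zip(x, k1, k2, k3, k4))

x = (1.0, 1.0, 1.0, 1.0, 1.0)
peak = 0.0
for _ in range(4000):            # t = 0 .. 20 with h = 0.005
    x = rk4(x, 0.005)
    peak = max(peak, max(abs(v) for v in x))
```

A built-in sanity check: the quadratic terms of this truncation conserve energy (their contributions to d(sum x_i^2)/dt cancel), so the trajectory stays bounded, which the recorded `peak` confirms.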
• Two basic steps in constructing a difference algorithm:
1. Construct the difference scheme, which involves: a. partitioning the domain on which the solution exists; b. the approximation error introduced by the chosen scheme — the truncation error (consistency); c. the accuracy of the numerical solution relative to the true solution — the global truncation error (convergence); d. the speed at which the numerical solution converges to the true solution; e. the rounding error of the algorithm (stability).
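The truncation-error and convergence notions above can be seen concretely with forward Euler on the test problem u' = -u, u(0) = 1: the local truncation error is O(h^2), and the global error at t = 1 is O(h). The test equation and step counts below are illustrative choices:

```python
import math

def euler(f, u0, t_end, n):
    """Forward Euler: u_{i+1} = u_i + h * f(t_i, u_i)."""
    h = t_end / n
    t, u = 0.0, u0
    for _ in range(n):
        u = u + h * f(t, u)
        t += h
    return u

# Test problem u' = -u, u(0) = 1, with exact solution e^{-t}.
f = lambda t, u: -u
errors = [abs(euler(f, 1.0, 1.0, n) - math.exp(-1.0)) for n in (100, 200)]

# Doubling n (halving h) should roughly halve the global error,
# i.e. first-order convergence.
rate = errors[0] / errors[1]
```

The observed error ratio is close to 2, consistent with a first-order scheme whose local truncation error is one power of h higher than its global error.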
Foreign Mathematics Journals (SCI), Part 8

Foreign mathematics journals (SCI) (note: for reference only). Journal of Differential Equations, USA, ISSN 0022-0396; founded 1965; 18 issues per year; published by Elsevier Science; SCI-indexed; impact factor 0.862.

Publishes research papers on the theory and applications of differential equations.

Journal of Mathematical Analysis and Applications, USA, ISSN 0022-247X; founded 1960; 24 issues per year; Elsevier Science; SCI-indexed; impact factor 0.473.

Publishes research on mathematical analysis and its applications, with emphasis on using mathematics to solve problems arising in physics, chemistry, biology, and engineering.

Nonlinear Analysis, USA, ISSN 0362-546X; Elsevier Science; SCI- and EI-indexed; impact factor 0.354.

Nonlinear Analysis: Real World Applications, USA, ISSN 1468-1218; founded 2000; 5 issues per year; Elsevier Science; SCI-indexed; impact factor 0.257.

Journal of Computational and Applied Mathematics, Netherlands, ISSN 0377-0427; founded 1975; 24 issues per year; Elsevier Science; SCI- and EI-indexed; impact factor 0.567.

Publishes research papers, reviews, and letters on applied mathematics, particularly new computational techniques for solving scientific problems and carefully validated algorithms.

Journal of Nonlinear Science, Germany, ISSN 0938-8974; founded 1991; 6 issues per year; Springer-Verlag.
Galerkin Finite Element Methods
GALERKIN FINITE ELEMENT APPROXIMATIONS OF STOCHASTIC ELLIPTIC PARTIAL DIFFERENTIAL EQUATIONS∗

IVO BABUŠKA†, RAÚL TEMPONE†, AND GEORGIOS E. ZOURARIS‡

SIAM J. Numer. Anal., Vol. 42, No. 2, pp. 800–825. © 2004 Society for Industrial and Applied Mathematics

Abstract. We describe and analyze two numerical methods for a linear elliptic problem with stochastic coefficients and homogeneous Dirichlet boundary conditions. Here the aim of the computations is to approximate statistical moments of the solution, and, in particular, we give a priori error estimates for the computation of the expected value of the solution. The first method generates independent identically distributed approximations of the solution by sampling the coefficients of the equation and using a standard Galerkin finite element variational formulation. The Monte Carlo method then uses these approximations to compute corresponding sample averages. The second method is based on a finite dimensional approximation of the stochastic coefficients, turning the original stochastic problem into a deterministic parametric elliptic problem. A Galerkin finite element method, of either the h- or p-version, then approximates the corresponding deterministic solution, yielding approximations of the desired statistics. We present a priori error estimates and include a comparison of the computational work required by each numerical approximation to achieve a given accuracy. This comparison suggests intuitive conditions for an optimal selection of the numerical approximation.

Key words. stochastic elliptic equation, perturbation estimates, Karhunen–Loève expansion, finite elements, Monte Carlo method, k×h-version, p×h-version, expected value, error estimates

AMS subject classifications. 65N30, 65N15, 65C05, 65C20

DOI. 10.1137/S0036142902418680

1. Introduction. Due to the great development in computational resources and scientific computing techniques, more mathematical models can be solved efficiently. Ideally, this artillery could be used to solve many
classical partial differential equa-tions,the mathematical models we shall focus on here,to high accuracy.However,in many cases,the information available to solve a given problem is far from complete and is in general very limited.This is the case when solving a partial differential equation whose coefficients depend on material properties that are known to some ac-curacy.The same may occur with its boundary conditions and even with the geometry of its domain;see,for example,the works [5,4].Naturally,since the current engi-neering trends are going toward more reliance on computational predictions,the need for assessing the level of accuracy in the results grows accordingly.More than ever,the goal then becomes to represent and propagate uncertainties from the available data to the desired result through our partial differential equation.By uncertainty we mean either intrinsic variability of physical quantities or simply lack of knowledge about some physical behavior;cf.[38].If variability is interpreted as randomness,∗Receivedby the editors November 26,2002;accepted for publication (in revised form)September26,2003;published electronically June 4,2004./journals/sinum/42-2/41868.html †Institute for Computational and Engineering Sciences (ICES),University of Texas at Austin,Austin,TX 78759(babuska@,rtempone@).The first author was par-tially supported by the National Science Foundation (grant DMS-9802367)and the Office of Naval Research (grant N00014-99-1-0724).The second author was partially supported by the Swedish Council for Engineering Science grant 222-148,National Science Foundation grant DMS-9802367,UdelaR and UdeM in Uruguay,and the European network HYKE (contract HPRN-CT-2002-00282).‡Department of Mathematics,University of the Aegean,GR–83200Karlovassi,Samos,Greece,and Institute of Applied and Computational Mathematics,FO.R.T.H.,GR–71110Heraklion,Crete,Greece (zouraris@aegean.gr).This author was partially supported by the European network HYKE (contract 
HPRN-CT-2002-00282).800FEM FOR STOCHASTIC ELLIPTIC PDEs801 then naturally we can apply probability theory.To be fruitful,probability theory requires considerable empirical information about the random quantities in question, usually in the form of probability distributions or their statistical moments.On the other hand,if the only available information comes in the form of some bounds for the uncertain variables,the description and analysis of uncertainty may be based on other methods,such as convexity methods;cf.[8,18].This approach is closely related to the so-called worst case scenario.This work addresses elliptic partial differential equations with stochastic coef-ficients,with applications to physical phenomena,e.g.,random vibrations,seismic activity,oil reservoir management,and composite materials;see[2,17,19,22,27,28, 30,39,43]and the references therein.Solving a stochastic partial differential equation entailsfinding the joint probability distribution of the solution,which is a hard prob-lem.In practice we shall usually be satisfied with much less,namely,the computation of some moments,e.g.,the expected value of the solution,or some probability related to a given event,e.g.,the probability of some eventual failure;cf.[26,34].Although the results presented in this paper can be generalized to linear elliptic stochastic partial differential equations we now focus our study on the standard model prob-lem,a second order linear elliptic equation with homogeneous Dirichlet boundary conditions.Let D be a convex bounded polygonal domain in R d and(Ω,F,P)be a complete probability space.HereΩis the set of outcomes,F⊂2Ωis theσ-algebra of events, and P:F→[0,1]is a probability measure.Consider the following stochastic linear elliptic boundary value problem:find a stochastic function,u:Ω×D→R,such that P-a.e.inΩ,or,in other words,almost surely(a.s.),the following equation holds:−∇·(a(ω,·)∇u(ω,·))=f(ω,·)on D,u(ω,·)=0on∂D.(1.1)Here a,f:Ω×D→R are stochastic functions with 
continuous and bounded co-variance functions.If we denote by B(D)the Borelσ-algebra generated by the open subsets of D,then a,f are assumed measurable with theσ-algebra(F⊗B(D)).In what follows we shall assume that a is bounded and uniformly coercive,i.e.,∃a min,a max∈(0,+∞):Pω∈Ω:a(ω,x)∈[a min,a max]∀x∈D=1.(1.2)To ensure regularity of the solution u we assume also that a has a uniformly bounded and continuousfirst derivative;i.e.,there exists a real deterministic constant C such thatPω∈Ω:a(ω,·)∈C1(D)and maxD|∇x a(ω,·)|<C=1(1.3)and that the right-hand side in(1.1)satisfiesΩDf2(ω,x)dx dP(ω)<+∞,which impliesDf2(ω,x)dx<+∞a.s.(1.4)802IVO BABUˇSKA,RA´UL TEMPONE,AND GEORGIOS E.ZOURARIS Depending on the structure of the noise that drives an elliptic partial stochastic differential equation,there are different numerical approximations.For example,when the size of the noise is relatively small,a Neumann expansion around the mean value of the elliptic operator in(1.1)is a popular approach.It requires only the solution of standard deterministic partial differential equations,the number of them being equal to the number of terms in the expansion.Equivalently,a Taylor expansion of the solution around its mean value with respect to the noise yields the same result. 
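The Monte Carlo Galerkin approach outlined in the abstract — sample the coefficients, solve a deterministic Galerkin finite element problem per sample, and average — can be illustrated in one space dimension. Everything specific below (the model problem -(a u')' = 1 on (0,1), a single uniformly distributed random coefficient, the mesh and sample sizes) is an assumption for the sketch, not the paper's setting:

```python
import random, math

def fem_solve(a_sample, n):
    """Piecewise-linear Galerkin FEM for -(a u')' = 1 on (0,1),
    u(0) = u(1) = 0, with a constant sampled coefficient a_sample;
    n interior nodes on a uniform mesh. Solves the tridiagonal
    stiffness system by the Thomas algorithm and returns nodal values."""
    h = 1.0 / (n + 1)
    diag = [2.0 * a_sample / h] * n
    off = -a_sample / h
    rhs = [h] * n                      # exact load integrals for f = 1
    for i in range(1, n):              # forward elimination
        m = off / diag[i - 1]
        diag[i] -= m * off
        rhs[i] -= m * rhs[i - 1]
    u = [0.0] * n                      # back substitution
    u[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (rhs[i] - off * u[i + 1]) / diag[i]
    return u

random.seed(0)
n, samples = 19, 2000                  # hypothetical mesh/sample sizes
mid = n // 2                           # node at x = 0.5
acc = 0.0
for _ in range(samples):
    a = random.uniform(1.0, 3.0)       # assumed one-parameter noise
    acc += fem_solve(a, n)[mid]
mc_mean = acc / samples

# Closed form for this toy problem: u(0.5) = 0.125 / a, so
# E[u(0.5)] = 0.125 * E[1/a] = 0.125 * ln(3)/2 for a ~ U(1, 3).
exact = 0.125 * math.log(3.0) / 2.0
```

The sample average converges to E[u(0.5)] at the usual Monte Carlo rate O(1/sqrt(samples)), independent of how many random variables would drive a more realistic coefficient — the trade-off against the stochastic Galerkin method discussed later in the paper.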
Similarly,the work[30]uses formal Taylor expansions up to second order of the solution but does not study their convergence properties.Recently,the work[3] proposed a perturbation method with successive approximations.It also proves that uniform coercivity of the diffusion is sufficient for the convergence of the perturbation method.When only the load f is stochastic,it is also possible to derive deterministic equations for the statistical moments of the solution.This case was analyzed in[1,32] and more recently in the work[40],where a new method to solve these equations with optimal complexity is presented.On the other hand,the works by Deb[14],Deb,Babuˇs ka,and Oden[15],Ghanem and Red-Horse[21],and Ghanem and Spanos[22]address the general case where all the coefficients are stochastic.Both approaches transform the original stochastic problem into a deterministic one with a large dimensional parameter,and they differ in the choice of the approximating functional spaces.The works[14,15]usefinite ele-ments to approximate the noise dependence of the solution,while[21,22]use a formal expansion in terms of Hermite polynomials.The approximation error in the approach [14,15]can then be bounded in terms of deterministic quantities,as described in this work.Afterfinishing this paper the authors became aware of the work[9],which de-veloped a related error analysis for elliptic stochastic differential equations.The work [9]gives approximation error estimates for functionals of the solution,while our work focuses on error estimates for the strong approximation of the statistical moments of the solution.Besides,we use the Karhunen–Lo`e ve expansion and characterize the regularity of the solution,yielding,e.g.,exponential rates of convergence;cf.section 6.On the other hand,the analysis in[9]uses the regularity of the computed func-tional together with estimates in negative spaces for the approximation error in the solution of the stochastic partial differential equation.This 
negative estimate can in principle accommodate rough solutions;however,they require H2spatial regularity, an assumption that is not clearly fulfilled by rough solutions.Monte Carlo methods are both general and simple to code,and they are naturally suited for parallelization.They generate a set of independent identically distributed (iid)approximations of the solution by sampling the coefficients of the equation,using a spatial discretization of the partial differential equation,e.g.,by a Galerkinfinite element method.Then using these approximations we can compute corresponding sample averages of the desired statistics.Monte Carlo methods have a rate of con-vergence that may be considered slow,but its computational work grows only like a polynomial with respect to the number of random variables present in the problem. It is worth mentioning that in particular cases their convergence can be accelerated by variance reduction techniques;see[29].The convergence rate of the Monte Carlo method is interpreted in the probability sense,and a practical estimate of its error needs an a posteriori estimate of the variance of the sampled random variable,which in turn requires an a priori bound on higher statistical moments;cf.the Berry-Essen theorem in[16].Besides this,if the probability density of a random variable is smooth,FEM FOR STOCHASTIC ELLIPTIC PDEs 803the convergence rate of the Monte Carlo method for the approximation of its expected value can be improved;cf.[35,45].Quasi-Monte Carlo methods (see [12,41,42])offer a way to get a better convergence rate than the Monte Carlo method,although this advantage seems to deteriorate in general when the number of random variables present in the problem becomes large.Another way to provide a notion of stochastic partial differential equations is based on the Wick product and the Wiener chaos expansion;see [27]and [46].This approach yields solutions in Kondratiev spaces of stochastic distributions that are based on a different 
interpretation of (1.1);the solutions proposed in [27]and [46]are not the same as those arising from (2.1).The choice between (2.1)and [27]is a modeling decision,based on the physical situation under study.For example,with the Wick product we have E [a u ]=E [a ]E [u ],regardless of the correlation between a and u ,whereas this is in general not true with the usual product.A numerical ap-proximation for Wick stochastic linear elliptic partial differential equations is studied in [44],yielding a priori convergence rates.This work studies the case of stochastic linear elliptic partial differential equa-tions with random diffusion and load coefficients,stating and proving conditions for existence and uniqueness of solutions;for example,to obtain a meaningful numerical solution for (1.1)its diffusion coefficient should be uniformly coercive.Besides,it compares a Monte Carlo Galerkin method with the stochastic Galerkin finite element method introduced in [14]and introduces a related p -version,developing a priori er-ror estimates for each case.A priori error estimates are useful to characterize the convergence,and ultimately they provide information to compare the number of op-erations required by numerical methods.The conclusion for now is that if the noise is described by a small number of random parameters or if the accuracy require-ment is sufficiently strict,then a stochastic Galerkin method is preferred;otherwise,a Monte Carlo Galerkin method still seems to be the best choice;see section 8.It is worth mentioning that the development of numerical methods for stochastic differen-tial equations is still very much ongoing,and better numerical methods are expected to appear.2.Theoretical aspects of the continuous problem.2.1.Notation and function spaces.Let d be a positive integer and D bean open,connected,bounded,and convex subset of R d with polygonal boundary∂D .Denote the volume of D by |D |≡ D 1dx.For a nonnegative integer s and 1≤p ≤+∞,let W s,q (D )be the 
Sobolev space of functions having generalized derivatives up to order s in the space L q (D ).Using the standard multi-index notation,α=(α1,...,αd )is a d -tuple of nonnegative integers,and the length of αis given by |α|= d i =1αi .The standard Sobolev norm of v ∈Ws,q (D )will be denoted by v W s,q (D ),1≤q ≤+∞.Whenever q =2,we shall use the notation H s (D )insteadof W s,2(D ).As usual,the function space H 10(D )is the subspace of H 1(D )consisting of functions which vanish at theboundary of D in the sense of trace,equipped with the norm v H 10(D )={ D|∇v |2dx }1/2.Whenever s =0we shall keep the notation with L q (D )instead of W 0,q (D ).For the sake of generality,sometimes we shall let H be a Hilbert space with inner product (·,·)H .In that case we shall also denote the dual space of H ,H ,that contains linear bounded functionals,L :H →R ,and is endowed with the operator norm L H =sup 0=v ∈H L (v ) v H .Since stochastic functions intrinsically have different structure with respect to ωand with respect to x ,the analysis of numerical approximations requires tensor804IVO BABU ˇSKA,RA ´ULTEMPONE,AND GEORGIOS E.ZOURARIS spaces.Let H 1,H 2be Hilbert spaces.The tensor space H 1⊗H 2is the completion of formal sums u (y,x )= i =1,...,n v i (y )w i (x ),{v i }⊂H 1,{w i }⊂H 2,with respect to the inner product (u,ˆu )H 1⊗H 2= i,j (v i ,ˆvj )H 1(w i ,ˆw j )H 2.For example,let us consider two domains,y ∈Γ,x ∈D ,and the tensor space L 2(Γ)⊗H 1(D ),with tensor inner product(u,ˆu )L 2(Γ)⊗H 1(D )=ΓD u (y,x )ˆu (y,x )dx dy + ΓD∇x u (y,x )·∇x ˆu(y,x )dx dy.Thus,if u ∈L 2(Γ)⊗H k (D ),then u (y,·)∈H k (D )a.e.on Γand u (·,x )∈L 2(Γ)a.e.on D .Moreover,we have the isomorphism L 2(Γ)⊗H k (D ) L 2(Γ;H k (D )) H k (D ;L 2(Γ))with the definitionsL 2(Γ;H k (D ))= v :Γ×D →R |v is strongly measurable andΓv (y,·) 2H k (D )<+∞ ,H k (D ;L 2(Γ))=v :Γ×D →R |v is strongly measurable and ∀|α|≤k ∃∂αv ∈L 2(Γ)⊗L 2(D )with ΓD ∂αv (y,x )ϕ(y,x )dxdy =(−1)|α|Γ Dv (y,x )∂αϕ(y,x )dxdy ∀ϕ∈C ∞0(Γ×D ) .Similar 
constructions can be done for the tensor product of Banach spaces,although the norm for the tensor space used to obtain the completion of the formal sums has to be defined explicitly on each case.Here the Banach space C (Γ;H )comprises all continuous functions u :Γ→H with the norm u C (Γ;H )≡sup y ∈Γ u (y ) H .Similar definitions apply to the spaces C k (Γ;H ),k =1,...;cf.[20,p.285].Let Y be an R N -valued random variable in (Ω,F ,P ).If Y ∈L 1P (Ω),we denote itsexpected value by E [Y ]= ΩY (ω)dP (ω)= R N y dµY (y ),where µY is the distributionmeasure for Y ,defined for the Borel sets ˜b ∈B (R N )by µY (˜b )≡P (Y −1(˜b )).If µY is absolutely continuous with respect to the Lebesgue measure,then there exists a density function ρY :R →[0,+∞)such that E [Y ]= R N y ρY (y )dy .Analogously,whenever Y i ∈L 2P (Ω)for i =1,...,d ,the covariance matrix of Y ,Cov [Y ]∈Rd ×d ,is defined by Cov [Y ](i,j )=Cov (Y i ,Y j )=E [(Y i −E [Y i ])(Y j −E [Y j ])],i,j =1,...,d .Besides this,whenever u (ω,x )is a stochastic process the positive semidefinite function Cov [u ](x 1,x 2)=Cov [u (x 1),u (x 2)]=Cov [u (x 2),u (x 1)]is the covariance function of the stochastic process u .To introduce the notion of stochastic Sobolev spaces we first recall the defini-tion of stochastic weak derivatives.Let v ∈L 2P (Ω)⊗L 2(D );then the αstochastic weak derivative of v,w =∂αv ∈L 2P (Ω)⊗L 2(D ),satisfies D v (ω,x )∂αφ(x )dx =(−1)|α| D w (ω,x )φ(x )dx ∀φ∈C ∞0(D )a.s.We shall work with stochastic Sobolev spaces W s,q (D )=L q P (Ω,W s,q (D ))con-taining stochastic functions,v :Ω×D →R ,that are measurable with respect to theFEM FOR STOCHASTIC ELLIPTIC PDEs 805product σ-algebra F ⊗B (D )and equipped with the averaged norms v W s,q (D )=(E [ v qW s,q (D )])1/q =(E [ |α|≤s D |∂αv |q dx ])1/q ,1≤q <+∞,and v W s,∞(D )=max |α|≤s ess sup Ω×D |∂αv | .Observe that if v ∈ Ws,q (D ),then v (ω,·)∈W s,q (D )a.s.and ∂αv (·,x )∈L qP (Ω)a.e.on D for |α|≤s .Whenever q =2,the above space is a Hilbert space,i.e., W 
s,2(D )= H s (D ) L 2P(Ω)⊗H s (D ).2.2.Existence and uniqueness for the solution of a linear stochastic elliptic problem.Let us consider the tensor product Hilbert space H = H 10(D ) L 2P (Ω;H 10(D ))endowed with the inner product (v,u )H ≡E [ D ∇v ·∇udx ].Define the bilinear form,B :H ×H →R ,by B (v,w )≡E [ D a ∇v ·∇wdx ]∀v,w ∈H .The standard assumption (1.2)yields both the continuity and the coercivity of B ;i.e.,|B (v,w )|≤a max v H w H ∀v,w ∈H ,and a min v 2H ≤B (v,v )∀v ∈H .A direct application of the Lax–Milgram lemma (cf.[11])implies the existence anduniqueness for the solution to the following variational formulation:find u ∈H such thatB (u,v )=L (v )∀v ∈H.(2.1)Here L (v )≡E [ D fvdx ]∀v ∈H defines a bounded linear functional since the randomfield f satisfies (1.4).Since the domain D is convex and bounded and assumptions (1.2),(1.3)on the diffusion a hold,the theory of elliptic regularity (cf.[20])impliesthat the solution of (1.1)satisfies u (ω,·)∈H 2(D )∩H 10(D )a.s.Moreover,standardarguments from measure theory show that the solution to (2.1)also solves (1.1).The formulation (2.1),together with assumption (2.1)on finite dimensional noise,gives the basis for the stochastic Galerkin finite element method (SGFEM)introduced in sections 5and 6,while formulation (1.1)is the basis for the Monte Carlo Galerkin finite element method (MCGFEM),discussed in section 4.2.3.Continuity with respect to the coefficients a and f .Since the coeffi-cients a and f are not known exactly,in the next proposition we consider a perturbed weak formulation and estimate the size of the corresponding perturbation in the so-lution.The proof uses standard estimates and is included in [6].Proposition 2.1.Let (H,(·,·)H )be a Hilbert space.Consider two symmetric bilinear forms B , B :H ×H →R that are H -coercive and bounded;i.e.,there exist real constants 0<a min ≤a max such thata min v 2H ≤min {B (v,v ), B (v,v )}∀v ∈H (2.2)andmax {|B (v,w )|,| B (v,w )|}≤a max v H w H ∀v,w 
∈H.(2.3)Consider two bounded linear functionals,L ,ˆL ∈H ,and let u ,ˆu ∈H be the solutionsof the problemsB (u,v )=L (v )∀v ∈H,ˆB (ˆu ,v )=ˆL(v )∀v ∈H.If,in addition,we know that there exist Banach spaces,V 1,V 2,and positive constants,C ,γ ,such that u ∈V 2⊆H ⊂V 1, · V 1≤C · H ,and|(ˆB −B )(w,v )|≤γ w V 1 v V 2∀w ∈H,v ∈V 2,(2.4)806IVO BABU ˇSKA,RA ´UL TEMPONE,AND GEORGIOS E.ZOURARIS thenu −ˆu H ≤1min ( L −ˆL H +Cγ u V 2).(2.5)Next we consider a perturbation of (2.1).A direct application of Proposition 2.1yields the following estimate.Corollary 2.1.Let 1<p <+∞with 1/p +1/q =1.Consider the Hilbert spaceH = H 10(D )and perturbed coefficients,ˆa ,ˆf ,satisfying 0<a min ≤ˆa ≤a max <∞,(P ⊗dx )a.e.on D ×Ω,ˆf ∈ L 2(D ).Let u and ˆu solve E [ D ˆa ∇ˆu ·∇vdx ]=E [ D ˆfvdx ]∀v ∈H,E [ D a ∇u ·∇vdx ]=E [ D fvdx ]∀v ∈H .Besides this,assume that thesolution u belongs to the stochastic Sobolev space W1,2q (D ).Then u −ˆu H 10(D )≤1a min (C D ˆf −f L 2(D )+ a −ˆaL 2p (D ) u W 1,2q (D )),with C D >0being the Poincar´e constant for the domain D ;i.e., v L 2(D )≤C D v H 10(D )∀v ∈H 10(D ).Proof.Take V 1= H 10(D )and V 2= W 1,2q (D ).In order to apply (2.5)it is enough to bound the difference of bilinear forms E D (a −ˆa )∇u ·∇vdx ≤ E D (a −ˆa )2|∇u |2dx 1/2 E D|∇v |2dx1/2≤ ED (a −ˆa )2p dx 1/2pE D |∇u |2q dx 1/2q E D |∇v |2dx 1/2.2.4.Karhunen–Lo`e ve expansions and finite dimensional noise.Here we recall the Karhunen–Lo`e ve expansion,a suitable tool for the approximation of stochas-tic processes.Consider a stochastic process a with continuous covariance function Cov [a ]:D ×D →R .Besides this,let {(λn ,b n )}∞n =1denote the sequence of eigenpairs associated with the compact self-adjoint operator that maps f ∈L 2(D )→D Cov [a ](x,·)f (x )dx ∈L 2(D ).Its nonnegative eigenvalues, D ×D (Cov [a ](x 1,x 2))2dx 1dx 2≥λ1≥λ2≥···≥0,satisfy +∞n =1λn= D V ar [a ](x )dx .The corresponding eigenfunctions are orthonor-mal,i.e., D b i (x )b j (x )dx =δij .The truncated Karhunen–Lo`e ve expansion of 
the stochastic process a (cf.[33])isa N (ω,x )=E [a ](x )+Nn =1 λn b n (x )Y n (ω),where the real random variables,{Y n }∞n =1,are mutually uncorrelated and have mean zero and unit variance.These random variables are uniquely determined by Y n (ω)=1√λn D(a (ω,x )−E [a ](x ))b n (x )dx for λn >0.Then,by Mercer’s theorem (cf.[37,p.245]),we havesup x ∈D E [(a −a N )2](x )=sup x ∈D +∞n =N +1λn b 2n (x )→0as N →∞.FEM FOR STOCHASTIC ELLIPTIC PDEs 807If,in addition,•the images Y n (Ω),n =1,...,are uniformly bounded in R ,•the eigenfunctions b n are smooth,which is the case when the covariance function is smooth,•and the eigenpairs have at least the decay √λn b n L ∞(D )=O (11+n s )forsome s >1,then a −a N L ∞(D )→0.Notice that for larger values of the decay exponent s we can also obtain the convergence of higher spatial derivatives of a N in L∞(D ).The last two conditions can be readily verified once the covariance function of a is known.However,observe that it is also necessary to verify the uniform coercivity of a N ,which depends on the probability distributions of Y n ,n =1,....In many problems the source of the randomness can be approximated using just a small number of mutually uncorrelated,sometimes mutually independent,random variables.Take,for example,the case of a truncated Karhunen–Lo`e ve expansion described previously.Assumption 2.1(finite dimensional noise ).Whenever we apply some numer-ical method to solve (1.1)we assume that the coefficients used in the computa-tions,a,f :Ω×D →R ,are finite Karhunen–Lo`e ve expansions;i.e.,a (ω,x )=E [a ](x )+ N n =1√λn b n (x )Y n (ω)and f (ω,x )=E [f ](x )+ N n =1 ˆλn ˆb n (x )Y n (ω),where {Y n }N n =1are real random variables with mean value zero and unit variance,are uncorrelated,and have images,Γn ≡Y n (Ω),that are bounded intervals in R for n =1,...,N .Moreover,we assume that each Y n has a density function ρn :Γn →R +for n =1,...,N .In what follows we use the notation ρ(y )∀y ∈Γfor the joint probability density of 
(Y 1,...,Y N )and Γ≡ N n =1Γn ⊂R N for the support of such probability density.After making Assumption 2.1,we have by the Doob–Dynkin lemma (cf.[36])that u ,the solution corresponding to the stochastic partial differential equation (1.1),can be described by just a finite number of random variables,i.e.,u (ω,x )=u (Y 1(ω),...,Y N (ω),x ).The number N has to be large enough so that the approximation error is sufficiently small.Now the goal is to approximate the function u (y,x ).In addition,the stochastic variational formulation (2.1)has a deterministic equivalent in the following:find u ∈L 2ρ(Γ)⊗H 10(D )such thatΓρ(y ) D a (y,x )∇u (y,x )·∇v (y,x )dxdy = Γρ(y ) Df (y,x )v (y,x )dxdy ∀v ∈L 2ρ(Γ)⊗H 10(D ).(2.6)In this work the gradient notation,∇,always means differentiation with respect to x ∈D only,unless otherwise stated.The corresponding strong formulation for (2.6)is an elliptic partial differential equation with an N -dimensional parameter,i.e.,−∇·(a (y,x )∇u (y,x ))=f (y,x )∀(y,x )∈Γ×D,u (y,x )=0∀(y,x )∈Γ×∂D.(2.7)Making Assumption 2.1is a crucial step,turning the original stochastic elliptic equa-tion (1.1)into a deterministic parametric elliptic one and allowing the use of finite element and finite difference techniques to approximate the solution of the resulting deterministic problem.808IVO BABU ˇSKA,RA ´ULTEMPONE,AND GEORGIOS E.ZOURARIS Truncation of the outcomes set ,Γ.For the sake of efficiency,it may be useful to compute the solution of (2.7)in a subdomain with strictly positive probability,Γ0⊂Γ.Besides,we assume the probability density of Y to be strictly positive in Γ0.In that case,we approximate the functionE [u (Y,·)1{Y ∈Γ0}]=E [u (Y,·)|Y ∈Γ0]P (Y ∈Γ0)instead of the original E [u ].If ¯u is an approximation of u in Γ0,then we have the splittingE [u (Y,·)]−E [¯u (Y,·)1{Y ∈Γ0}]≤ E [u (Y,·)]−E [u (Y,·)1{Y ∈Γ0}] + E [u (Y,·)−¯u (Y,·)|Y ∈Γ0] P (Y ∈Γ0).(2.8)Property 2.1below gives a simple estimate for the first error contribution,which is related to the 
truncation of Γ. The second error contribution in (2.8) is the discretization error, and it will be analyzed for each numerical approximation separately; see sections 4, 5, and 6. In those sections we shall simplify the notation by writing Γ = Γ₀ and work with the corresponding conditional probability space.

Property 2.1. Let u be the solution of the problem (2.7); then there exists a constant C such that

$$\left\| E[u(Y,\cdot)] - E\big[u(Y,\cdot)\,1_{\{Y\in\Gamma\}}\big] \right\|_{H^1_0(D)} \le C\, P(Y \notin \Gamma_0)\, \|f\|_{L^2_\rho(\Gamma\setminus\Gamma_0)\otimes L^2(D)}. \tag{2.9}$$

3. The finite element spaces. In this section, we define tensor product finite element spaces on the set Γ × D, which we will use to construct approximations of the solution of the parametric boundary value problem (2.7), stating their approximation properties.

3.1. Finite element spaces on the spatial set D ⊂ R^d: h-version. Consider a family of finite element approximation spaces, $X_h^d \subset H^1_0(D)$, consisting of piecewise linear continuous functions on conforming triangulations (of simplices), $\mathcal{T}_h^d$, of the convex polyhedral domain, D ⊂ R^d, with a maximum mesh spacing parameter h > 0. We will always assume that the triangulations are nondegenerate (sometimes also called regular); cf. [11, p. 106]. Then (cf. [11, 13]) the finite element spaces $X_h^d$ satisfy a standard approximation estimate, namely, that for all $v \in H^2(D) \cap H^1_0(D)$,

$$\min_{\chi \in X_h^d} \|v - \chi\|_{H^1_0(D)} \le C\, h\, \|v\|_{H^2(D)}, \tag{3.1}$$

where C > 0 is a constant independent of v and h.

3.2. Tensor product finite element spaces on the outcomes set Γ ⊂ R^N: k-version. Let $\Gamma = \prod_{n=1}^N \Gamma_n$ be as in subsection 2.4. Consider a partition of Γ consisting of a finite number of disjoint R^N-boxes, $\gamma = \prod_{n=1}^N (a_n^\gamma, b_n^\gamma)$, with $(a_n^\gamma, b_n^\gamma) \subset \Gamma_n$ for n = 1, …, N. The mesh spacing parameters, $k_n > 0$, are defined by $k_n \equiv \max_\gamma |b_n^\gamma - a_n^\gamma|$ for 1 ≤ n ≤ N. For every nonnegative integer multi-index $q = (q_1, \ldots, q_N)$, consider the finite element approximation space of (discontinuous) piecewise polynomials with degree at most $q_n$ in each direction $y_n$, $Y_k^q \subset L^2(\Gamma)$. Thus, if $\varphi \in Y_k^q$, its restriction to each of the partition boxes satisfies $\varphi|_\gamma \in \operatorname{span}\{\prod_{n=1}^N y_n^{\alpha_n} : \alpha_n \in \mathbb{N} \text{ and } \alpha_n \le q_n,\ n = 1, \ldots, N\}$. It is easy to verify
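The h-version estimate (3.1) predicts first-order convergence in the $H^1_0(D)$ norm for piecewise linear elements. A minimal numerical sketch (a Python/NumPy check on a 1-D model problem, not part of the original paper; all names are ours) halves the mesh size and verifies that the $H^1$-seminorm interpolation error roughly halves as well:

```python
import numpy as np

def h1_interp_error(n):
    """H^1-seminorm error of the piecewise linear interpolant of
    u(x) = sin(pi x) on a uniform mesh of n elements over [0, 1]."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    slopes = np.diff(np.sin(np.pi * x)) / h          # (I_h u)' on each element
    mid = (x[:-1] + x[1:]) / 2.0
    # two-point Gauss quadrature nodes inside each element
    xq = mid + np.array([[-1.0], [1.0]]) * h / (2.0 * np.sqrt(3.0))
    err = np.pi * np.cos(np.pi * xq) - slopes        # u' - (I_h u)'
    return np.sqrt(np.sum(err ** 2) * h / 2.0)

e_coarse, e_fine = h1_interp_error(16), h1_interp_error(32)
print(e_coarse / e_fine)  # close to 2: the error is O(h), consistent with (3.1)
```

Halving h halves the error, so the observed ratio approaches 2, matching the first power of h in (3.1).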
J Sci Comput (2008) 35:300–329. DOI 10.1007/s10915-007-9174-4

The Flexible, Extensible and Efficient Toolbox of Level Set Methods

Ian M. Mitchell

Received: 2 February 2007 / Revised: 6 November 2007 / Accepted: 7 November 2007 / Published online: 9 December 2007
© Springer Science+Business Media, LLC 2007

Abstract Level set methods are a popular and powerful class of numerical algorithms for dynamic implicit surfaces and solution of Hamilton-Jacobi PDEs. While the advanced level set schemes combine both efficiency and accuracy, their implementation complexity makes it difficult for the community to reproduce new results and make quantitative comparisons between methods. This paper describes the Toolbox of Level Set Methods, a collection of MATLAB routines implementing the basic level set algorithms on fixed Cartesian grids for rectangular domains in arbitrary dimension. The Toolbox's code and interface are designed to permit flexible combinations of different schemes and PDE forms, allow easy extension through the addition of new algorithms, and achieve efficient execution despite the fact that the code is entirely written as m-files. The current contents of the Toolbox and some coding patterns important to achieving its flexibility, extensibility and efficiency are briefly explained, as is the process of adding two new algorithms. Code for both the Toolbox and the new algorithms is available from the Web.

Keywords Numerical software · Level set methods · Hamilton-Jacobi equations · Dynamic implicit surfaces · Reproducible research

1 Introduction

Level set methods [29] have proved a popular technique for dynamic implicit surfaces and approximation of the time-dependent Hamilton-Jacobi (HJ) partial differential equation (PDE), as evidenced by the many survey papers, textbooks and edited collections devoted to their development; for example [12, 26–28, 34, 35]. The ease with which the earliest schemes could be implemented in two or three dimensions was a key facet of their popularity, and the dimensional flexibility of many advanced
schemes remains a major asset. However, algorithm simplicity has largely lost out in recent work to the competing demands of efficiency and accuracy. From a scientific computing perspective improving either or both of these is generally worth the increased complexity—users always have the option of going back to the simpler schemes—but there are two unintended and potentially adverse consequences of advanced methods. The first is that scientists and engineers who might be interested in using dynamic implicit surfaces in their application field but who are not experts at level set methods may give up when they are either unable to recreate with simple schemes the impressive published results generated by the advanced schemes, or they are overwhelmed by the details of those advanced schemes. The second is that designers of new schemes find it increasingly difficult to promulgate their results in a reproducible manner and to provide quantitative comparisons with alternative methods because of the complex algorithm and software infrastructure underlying each new advance. The Toolbox of Level Set Methods (TOOLBOXLS) is designed to address these concerns.

I. M. Mitchell, Department of Computer Science, University of British Columbia, 2366 Main Mall, Vancouver, BC, Canada V6T 1Z4. E-mail: mitchell@cs.ubc.ca
Its goal is to provide a collection of routines which implement the basic level set algorithms in MATLAB¹ on fixed Cartesian grids for rectangular domains in arbitrary dimension. In using MATLAB we seek to minimize not execution time, but the combination of coding, debugging and execution time. In our experience the visualization, debugging, data manipulation and scripting capabilities of MATLAB make construction of numerical code so much simpler, when compared to compiled languages like C++ or Fortran, that the increase in execution time is quite acceptable when designing new algorithms or exploring proof-of-concept for new applications. If the algorithms should prove successful but execution time and/or the restrictive class of Cartesian grids remains an impediment to adoption, a side benefit of TOOLBOXLS is that all of the source code is available so that recoding in a compiled language is straightforward.

The Toolbox has a lengthy, indexed user manual [18], and users interested in applying level set methods to applications will probably find this manual a good place to get started. In this paper we instead explore the features of the Toolbox that make it suitable for developers of new level set methods. In Sect. 2, we briefly describe the contents of TOOLBOXLS version 1.1: the kernel routines that provide a mix and match implementation of level set methods; the coding patterns we use to achieve efficiency, flexibility and dimensional independence; and the many documented examples we have recreated from the literature. To demonstrate the extensibility of TOOLBOXLS, in Sect. 3 we describe two newly implemented schemes. We extend a class of SSP RK integrators [38] to handle time-dependent operators, and demonstrate their efficiency on several dynamic implicit surface problems.
Then we extend a new monotone motion by mean curvature spatial approximation [22, 41] to handle Cartesian grids with variable Δx, provide some additional order of accuracy analysis, and demonstrate that while the new scheme's quantitative accuracy is poor, it provides qualitatively reasonable results in less time than the standard centered difference approximation. In the spirit of the reproducible research initiative, the code for both the base Toolbox and the new additions is available as separate downloads from [17].

1.1 Limitations

The decision to restrict the Toolbox to dense solutions on Cartesian discretizations of rectangular domains simplifies many aspects of the implementation. A major benefit is that a relatively high degree of computational efficiency can be achieved despite the fact that all of the code is written in m-files.

¹ MATLAB is a product and trademark of The Mathworks Incorporated of Natick, Massachusetts. For more details see /products/matlab/. TOOLBOXLS was developed by the author of this paper, and is neither endorsed by nor a product of The Mathworks.

On these grids the level set functions, their derivatives, and the spatially varying problem data can be represented by dense MATLAB arrays of appropriate dimension. Applying operators to these arrays can then be vectorized in the MATLAB sense; for example, a single short m-file command like speed .* data becomes a loop performing a multiplication at every node in the grid. We reap three benefits from such code: (1) it is dimensionally independent, (2) any computational overhead for interpreting the command is swamped by the huge number of floating point operations that are subsequently issued, and (3) in most MATLAB installations, such an operator will invoke compiled code highly optimized for both cache and processor efficiency. In fact, such MATLAB operators will often outperform equivalent looping code in naively written and compiled C/C++ or Fortran. Consequently, the speed benefit of compiled versions of the
Toolbox algorithms is likely to be fairly small for problems where the PDEs are solved globally on Cartesian grids.

Of course, this restriction is also the primary limitation of the Toolbox. Unstructured and adaptive grids are not part of TOOLBOXLS and are never likely to be included, because the spatially varying nature of their nodes' connectivity gives rise to irregular data access patterns. Despite the addition of just-in-time compilation to recent versions of MATLAB, m-files that include such irregular data access patterns are often orders of magnitude slower to execute than those which access the same amount of data organized in a dense regular array. On the other hand, numerical schemes for these grids are often simple—spatial refinement is used to improve accuracy rather than complex schemes with high order convergence rates—so the lack of support for these grids does not significantly detract from the goals of the Toolbox.

Given that the Toolbox is constrained to Cartesian grids, a more glaring omission is the lack of support for narrowband [4] or local level set [31] algorithms. These algorithms focus their computational effort only on the nodes near the interface, so they can often achieve the same dynamic implicit surface evolution in less time than global algorithms (such as those in the Toolbox) despite the overhead of tracking this constantly evolving set of nodes.
While localized algorithms generally present a clear win in compiled implementations, in a MATLAB m-file implementation it is not clear whether the benefits of working only on a subset of nodes would offset the costs of identifying that subset (additional discussion can be found in Sect. 2.4). What is clear is that MATLAB-style vectorization in this situation would require significantly more complex implementations throughout the kernel. Because minimum execution time is not the primary objective of TOOLBOXLS, algorithmic simplicity has been chosen over an uncertain speed improvement. Should compiled code ever be added to the Toolbox, localization would become a much more appealing option (see Sect. 4 for additional comments on including compiled code with the Toolbox).

1.2 Other Software Packages

With a version 1.0 release date of July 2004, TOOLBOXLS is to our knowledge the first publicly released code implementing the high accuracy level set algorithms, and it remains the only one that works in any dimension; however, it is no longer the only such package.

For comparison purposes, version 1.1 of TOOLBOXLS has a 140 page indexed user manual, supports ten different types of time-dependent evolution, includes over twenty documented examples, and is implemented with over 120 MATLAB m-files (each with its own help entry). TOOLBOXLS is licensed under a modified version of the ACM Software Copyright and License Agreement for free non-commercial use; we are investigating switching to a similar Creative Commons license [9]. We are aware of the following other packages:

• The Level Set Method Library (LSMLIB) [5] supports serial and parallel simulation in dimensions one to three. The code is written in a mixture of C, C++ and Fortran, with MATLAB interfaces to some components. Only two types of time-dependent evolution are presently supported (normal direction and convection), but the algorithms are localized.
Fast marching algorithms for the time-independent PDEs arising in signed distance construction and velocity extension are included, as are routines for computing surface and volume integrals. Three short manuals (overview, users guide and reference) and complete code documentation are part of the download. Version 0.9 contains many examples (although they are not documented in the manuals) and several hundred files. LSMLIB does not seem to be driven by any particular application field. The software is restricted to noncommercial use.

• The Multivac C++ library [16] works only in two dimensions. It includes both localized algorithms and fast marching for signed distance construction. Six types of evolution are supported, of which two are forest fire models. A short hypertext user manual and complete code documentation can be found at the web site. Five examples (some with multiple versions) are included, although the user manual contains details on only one. Applications include forest fire propagation, image segmentation, and nanofilm growth.
An optional display package and a GUI for image segmentation are available (written in Python with calls to Gnuplot). Version 1.10 includes more than 100 files, and is released under the GNU General Public License (GPL).

• "A Matlab toolbox implementing Level Set Methods" [39] is the most similar of these packages to TOOLBOXLS, since it is also implemented entirely by MATLAB m-files. The application emphasis is on vision and image processing, an important field missing from the set of examples in the current version of TOOLBOXLS. In keeping with this emphasis, version 1.1 of Sumengen's package supports only two dimensional problems and three types of evolution (normal direction, curvature and/or convection). The restricted problem domain translates to a more compact package of roughly 50 m-files. This package does not seem to have a user manual—although the web site includes a tutorial and set of examples—nor is any licensing arrangement specified.

The package [32] implements fast marching methods, which are used for static (time-independent) HJ PDEs and are quite distinct algorithmically from the level set methods discussed here.

2 Toolbox Design

In this section we discuss the structure of TOOLBOXLS, with particular attention to how it is designed to be easy to use and to extend while still maintaining reasonably fast execution.

2.1 The Equation

TOOLBOXLS is written with the vision of providing routines to approximate the solution of degenerate parabolic PDEs of the form [8]

$$D_t\varphi(t,x) + G(t,x,\varphi,D_x\varphi,D_x^2\varphi) = 0 \quad \text{for } x \in \Omega \text{ and } t \ge t_0 \tag{1}$$

on domain Ω ⊆ R^d and subject to initial and possibly boundary conditions

$$\varphi(t_0,x) = \varphi_0(x) \quad \text{for } x \in \Omega, \tag{2}$$
$$\varphi(t,x) = \varphi_{\partial\Omega}(t,x) \quad \text{for } x \in \partial\Omega \text{ and } t \ge t_0.$$

We assume that the initial conditions are bounded and continuous and that G satisfies a monotonicity requirement

$$G(t,x,r,p,X) \le G(t,x,s,p,Y) \quad \text{whenever } r \le s \text{ and } Y \le X, \tag{3}$$

where X and Y are symmetric matrices of appropriate dimension. For such G, there may not exist a classical solution to (1), and so the Toolbox routines are designed to approximate the viscosity
solution [6], which is the appropriate weak solution for many problems that lead to equations of the form (1), although it is not the only possible weak solution. Included in the class of degenerate parabolic PDEs are those arising in dynamic implicit surfaces and the time-dependent HJ PDE. A key feature of the viscosity solution of (1) is that under suitable conditions φ remains bounded and continuous for all time. This property may not hold for other types of HJ PDE, such as the static equations arising in minimum time to reach problems (although see Sect. 2.5 for comments regarding a transformation [25] between static and time-dependent forms).

2.2 Toolbox Components

A single scheme to handle (1) in all its generality would be impossibly complex to design and use. Instead, the Toolbox has different routines to handle different subclasses of this equation. In the rest of this section, we demonstrate the design with the example equation

$$D_t\varphi(t,x) + a(t,x)\,\|D_x\varphi(t,x)\| = 0 \quad \text{for } x \in \Omega \text{ and } t \ge t_0, \tag{4}$$
$$\varphi(t_0,x) = \varphi_0(x) \quad \text{for } x \in \Omega,$$

which for dynamic implicit surfaces corresponds to motion in the normal direction with speed a(t,x): R × Ω → R. Ideally we would choose Ω = R^d because the physical problem has no boundaries which would influence the evolution of the implicit surface.

Approximating the solution of (4) (or the more general (1) and (2)) requires the Toolbox to handle a number of features of the equations:

• Discretization of the domain into a grid, including artificial boundary conditions φ_∂Ω for the necessarily finite computational domain.
• Construction of initial conditions φ₀.
• Approximation of spatial derivatives D_xφ (and possibly D²_xφ).
• Selection of appropriate versions of those spatial derivatives (for example, upwinding) and their combination with problem parameters in terms such as a‖D_xφ‖ (or more generally G).
• Timestepping routines for D_tφ.
• Visualization of the results.

To maximize flexibility, each of these tasks is a separate component of the code. Consequently, it is often possible to swap schemes for one component without
having to rewrite an entire example. With reference to (4), we consider each of these features in turn.

Grid
For the rectangular Cartesian grid, the user specifies the dimension, and for each dimension the upper and lower bounds on the domain, the number of grid nodes (or equivalently the grid node spacing Δx), and the boundary conditions. The boundary conditions are handled by functions which insert appropriate ghost values into the array representing φ(t,x). The user can create their own boundary condition functions or select among functions supplied by TOOLBOXLS, including periodic, homogeneous Dirichlet, homogeneous Neumann, or an extrapolation method designed to maintain stability [13]. Since (4) does not have physically motivated boundary conditions, homogeneous Neumann or extrapolation would normally be chosen to try to minimize the impact of the computational boundary, and the domain would be chosen large enough that the implicit surface would not venture too near those boundaries during the time interval of interest.

Grid information is stored in a structure grid. During initialization, TOOLBOXLS populates this structure with additional useful data, such as the grid.xs cell vector discussed in Sect. 2.4. Scalar functions on this grid, such as φ₀(x), are stored in a standard d dimensional MATLAB array, where each element represents the value of φ₀ at the corresponding grid node.

Initial Conditions
For dynamic implicit surfaces, the Toolbox provides routines for common shapes (circles/spheres, squares/cubes, hyperplanes, cylinders) and the operations of computational solid geometry (union, intersection, complement and set difference). For general HJ PDEs, the user can often construct suitable φ₀(x) through simple array operations on the data in grid.xs.

Spatial Derivatives
Level set methods use upwinding when treating first order derivatives, so the routines for D_xφ all return both left and right looking approximations. In two or more dimensions, each component of the gradient is computed
independently. TOOLBOXLS provides the standard first order accurate upwind approximations [29] as well as second and third order accurate essentially non-oscillatory (ENO) [30] and fifth order accurate weighted essentially non-oscillatory (WENO) schemes [11]. The routines are interchangeable, so switching from low to high order requires changing only one function handle. ENO and WENO interpolants are computed with divided difference tables to maximize information reuse between neighboring nodes.

Motion in the Normal Direction
The vector normal to the implicit surface is given by $\hat n(t,x) = D_x\varphi(t,x)/\|D_x\varphi(t,x)\|$. For motion in this direction, the Toolbox uses a dimension by dimension Godunov upwinding scheme based on the signs of $D_x\varphi(t,x)$ and a(t,x) [27]. One of the upwinding spatial derivative approximation routines mentioned above is selected by the user. This routine also handles estimation of the CFL timestep restriction for explicit integrators; in this case, a function of a(t,x), $\|D_x\varphi(t,x)\|$ and the grid's node spacing.
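The dimension-by-dimension Godunov upwinding described above can be sketched in one dimension. The Python/NumPy fragment below is an illustrative sketch in a Rouy–Tourin-style variant, not Toolbox code, and all function and variable names are our own; it forms left- and right-looking first order differences and selects between them based on the signs of a and the derivatives:

```python
import numpy as np

def normal_motion_term(phi, a, dx):
    """First-order Godunov-style approximation of a * |phi_x| in 1-D.

    phi : level set values on a uniform grid (end nodes act as ghost nodes),
    a   : scalar speed, dx : node spacing.
    Returns the term evaluated at the interior nodes phi[1:-1]."""
    d_minus = (phi[1:-1] - phi[:-2]) / dx   # left-looking difference
    d_plus = (phi[2:] - phi[1:-1]) / dx     # right-looking difference
    if a > 0:  # interface expanding: take the entropy-satisfying derivative
        phi_x_sq = np.maximum(np.maximum(d_minus, 0.0) ** 2,
                              np.minimum(d_plus, 0.0) ** 2)
    else:      # interface contracting: the selection flips
        phi_x_sq = np.maximum(np.minimum(d_minus, 0.0) ** 2,
                              np.maximum(d_plus, 0.0) ** 2)
    return a * np.sqrt(phi_x_sq)

# signed distance to the point x = 0.5, moving outward at speed a = 1
x = np.linspace(0.0, 1.0, 101)
term = normal_motion_term(np.abs(x - 0.5), a=1.0, dx=x[1] - x[0])
```

Away from the kink at x = 0.5 the term equals a·|φ_x| = 1, while at the kink the Godunov selection returns 0, which is the desired behavior for an expanding interface.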
Explicit Time Integration
As can be seen in the previous paragraphs, TOOLBOXLS adopts a method of lines approach to increase flexibility. The result of the term approximation routine (in this case, an approximation of a‖D_xφ‖) is treated as the right hand side of an ODE, which is solved by an explicit strong stability preserving (SSP) Runge-Kutta (RK) integrator (formerly called total variation diminishing (TVD) Runge-Kutta). The Toolbox provides the standard first, second and third order accurate SSP RK schemes [36], which are also designed to be used interchangeably. The actual timestep size is chosen by the integrator, based on the CFL factor and the estimates of the CFL bound provided by the term approximation. The user can set parameters (such as the CFL factor), choose routines to execute after each timestep (such as event detection routines to force early termination), and choose one or more of the term approximation routines described in Sect. 2.3 (such as the motion in the normal direction term discussed above).

Visualization
One of the primary benefits of working directly in MATLAB is access to all of its two and three dimensional visualization routines at all times; for example, even within the debugger. The Toolbox does not provide any new visualization features, although there are helper routines to simplify function calls and the grid.xs arrays often prove useful in this context.

2.3 Current Features

As mentioned in Sect. 2.2, no single numerical scheme can achieve maximum accuracy, efficiency and ease-of-use for (1) in its full generality. Instead, the current Toolbox implements a variety of special cases:

$$\begin{aligned}
0 ={} & D_t\varphi(t,x) && (5)\\
 & {}+ v(t,x)\cdot D_x\varphi(t,x) && (6)\\
 & {}+ a(t,x)\,\|D_x\varphi(t,x)\| && (7)\\
 & {}+ \operatorname{sign}(\varphi(0,x))\,(\|D_x\varphi(t,x)\| - 1) && (8)\\
 & {}+ H(t,x,\varphi,D_x\varphi) && (9)\\
 & {}- b(t,x)\,\kappa(t,x)\,\|D_x\varphi(t,x)\| && (10)\\
 & {}- \operatorname{trace}\big[\sigma(t,x)\sigma^{T}(t,x)D_x^2\varphi(t,x)\big] && (11)\\
 & {}+ \lambda(t,x)\,\varphi(t,x) && (12)\\
 & {}+ F(t,x,\varphi), && (13)
\end{aligned}$$

subject to constraints

$$D_t\varphi(t,x) \ge 0, \qquad D_t\varphi(t,x) \le 0, \tag{14}$$
$$\varphi(t,x) \le \psi(t,x), \qquad \varphi(t,x) \ge \psi(t,x). \tag{15}$$

Note that the time derivative (5) and at least one term involving a spatial derivative (6–11) must appear, otherwise
the equation is not a degenerate parabolic PDE.

In addition to the routines discussed in Sect. 2.2, TOOLBOXLS also provides numerical approximations for each of the terms (5–15). Except where noted below, D_xφ(t,x) is approximated dimension by dimension by ENO/WENO upwind finite difference schemes with user chosen order of accuracy between one and five [11, 30], as described above.

Time Derivative (5)
Treated by the standard explicit SSP RK schemes with order of accuracy one to three [36], as described in Sect. 2.2. Also, see Sect. 3.1 for some new SSP RK schemes that are now available.

Motion by a Velocity Field (6) (also called advection or convection)
The user provides the velocity vector field v: R × Ω → R^d. Upwinding is used to choose the spatial derivative.

Motion in the Normal Direction (7)
The user provides the speed of the interface a: R × Ω → R. See Sect. 2.2.

Reinitialization Equation (8)
In steady state, the solution of this equation is a signed distance function [40], a class of functions often used in dynamic implicit surfaces. For localized implementations of level set methods reinitialization is mandatory, but because the Toolbox solves the PDE(s) throughout the domain it is often possible to avoid the extra expense. However, there are some examples whose motion sufficiently distorts the initial implicit surface function so that reinitialization is necessary. While there are advantages and disadvantages to using this equation for reinitialization, it is at present the only reinitialization procedure available in the Toolbox. This term is implemented in TOOLBOXLS with a specifically designed Godunov scheme [10], and uses the "subcell fix" from [33] to minimize movement of the zero isosurface.

General Hamilton-Jacobi Term (9)
The user provides the analytic Hamiltonian H: R × Ω × R × R^d → R. Any dependence of H on φ must satisfy the monotonicity requirement (3). The average of the upwinded approximations of D_xφ is used, and Lax-Friedrichs schemes [7, 30] are available for stabilizing the approximation of H(t,x,r,p) with different
amounts of artificial dissipation. For scaling the dissipation, the user must provide bounds on |∂H/∂p| given bounds on p determined by the Toolbox and the specific Lax-Friedrichs scheme. This term can be used for optimal control, differential games, and reachable set approximation.

Motion by Mean Curvature (10)
The user provides the speed b: R × Ω → R⁺, while the mean curvature κ(t,x) and gradient D_xφ(t,x) are approximated by centered second order accurate finite differences [27]. See Sect. 3.2 for a new monotone scheme for mean curvature motion.

Potentially Degenerate Second Order Derivative Term (11)
The user provides the rectangular matrix σ: R × Ω → R^{d×k} (where k ≤ d) while the Hessian matrix of mixed second order spatial derivatives D²_xφ(t,x) is approximated by centered second order accurate finite differences. The current implementation of this term suffers from the same non-monotonicity as the current mean curvature approximation. If the new mean curvature approximation described in Sect. 3.2 proves effective, it can be extended to handle this type of term. Expectations of stochastic differential equations (whose diffusion coefficient is σ) give rise to this term in the form of Kolmogorov or Fokker-Planck equations [14, 24]. In the Toolbox documentation this term is referred to as "motion by the Trace of the Hessian," which is in retrospect a confusing and poor choice of name.

Discounting Terms (12) and Forcing Terms (13)
The user provides λ: R × Ω → R⁺ or F: R × Ω × R → R respectively, and must ensure that the result satisfies the monotonicity requirement (3). Because there are no derivatives, implementation of these terms is trivial.

Constraints on the Change in φ (14) or on φ Itself (15)
For dynamic implicit surfaces, the former controls whether the surface is allowed to shrink or grow, and the latter can be used to mask a region into which the surface cannot enter [34]. The user provides ψ: R × Ω → R.
Although not treated in [8], viscosity solution theory has been extended to handle these constraints [23], and the Toolbox's implementation simply applies them to φ after each timestep.

In addition to these specific terms, the Toolbox allows multiple terms to be combined, arbitrary callback functions which are executed after each timestep, and vector level set equations where φ: R × Ω → R^k for some constant k and each component of φ can be subject to a separate PDE. This collection of terms covers most of the cases arising in applications, although the Toolbox is organized so that adding more types of terms is relatively straightforward (as demonstrated in Sect. 3).

2.4 Coding Patterns

Users of TOOLBOXLS—particularly those interested in adding new schemes—should be aware of three design patterns used in the code to achieve efficiency, flexibility and dimensional independence. The first is the method by which parameters of the PDE and numerical schemes are passed up and down through the different layers of routines in the Toolbox. The second is the method by which functions of x ∈ Ω are stored and computed. The third is the method by which we achieve dimensional independence when indexing into arrays.
Passing Parameters
As can be seen in Sects. 2.2 and 2.3, there are many parameters related to the PDE and/or the numerical schemes which must be passed through a sequence of function calls. While object oriented programming is the typical approach adopted for such tasks, in TOOLBOXLS we have chosen a more light weight design. Apart from the temporal integrator, all parameters are collected into a single structure schemeData, whose contents are visible to (almost) all user and Toolbox routines in the call stack. Typical members of schemeData include the grid structure, function handles for the spatial derivative operator, and PDE parameters like velocity fields or speeds. By packing all of this information into a single structure, the Toolbox can maintain parameter compatibility between the various term and derivative approximation routines despite their very different internal details. The schemeData structure has been so successful that a similar protocol for the temporal integrator's parameters is proposed in Sect. 3.1.

Functions of x ∈ Ω
Remembering that Ω ⊆ R^d, a scalar function ρ: Ω → R is stored as a d dimensional MATLAB array rho of doubles. Examples include φ₀(x) or φ(t,x) for fixed t. If there are n nodes in each dimension, these arrays contain n^d elements and are typically very large. The key to the Toolbox's efficiency is to perform "vectorized" operations on these arrays. For example, when taking a time step at time t according to motion in the normal direction (4), the speed function a(t,x) is collected into one array speed and the magnitude of the gradient ‖D_xφ(t,x)‖ into another magnitude. The normal motion term approximation a(t,x)‖D_xφ(t,x)‖ is stored in a third array delta computed by delta = speed .* magnitude as a single vectorized operation (no explicit loops).
This syntactic structure has the side benefit of being dimensionally independent, but its primary benefit is speed of execution. For grids with many nodes, it is memory access time that dominates total execution time. Most MATLAB installations have compiled versions of elementwise (such as ".*") and basic linear algebra operations that are optimized for memory access efficiency. Consequently, the operation above will often run faster in MATLAB than its straightforward translation into explicit C/C++ or Fortran loop(s). For this strategy to be successful, every node by node operation in the PDE solver must be performed in this manner. In the example above, speed and magnitude must be constructed without any explicit loops by the user and the derivative approximation routine respectively. Likewise the update delta must be applied as a single operation by the integrator to the current solution approximation data (which stores φ(t,x)). All of the core Toolbox is coded in this efficient manner.
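Both coding patterns above, the grid.xs cell vector of coordinate arrays and the vectorized delta = speed .* magnitude update, have direct NumPy analogues. The following Python sketch is our own illustration of the pattern, not Toolbox code (timings will vary by machine); it builds a dimension-independent grid and compares the vectorized elementwise product with an explicit node-by-node loop:

```python
import time
import numpy as np

# A 3-D Cartesian grid; the list xs mirrors the Toolbox's grid.xs cell
# vector (one coordinate array per dimension; the same code works in any d).
axes = [np.linspace(-1.0, 1.0, 51)] * 3
xs = np.meshgrid(*axes, indexing="ij")

# phi_0: signed distance to a sphere, written dimension-independently
phi = np.sqrt(sum(x ** 2 for x in xs)) - 0.5

# stand-ins for a(t,x) and ||D_x phi(t,x)|| sampled on the grid
speed = 1.0 + 0.1 * xs[0]
magnitude = np.ones_like(phi)

# vectorized: NumPy analogue of the m-file command speed .* magnitude
t0 = time.perf_counter()
delta = speed * magnitude
t_vec = time.perf_counter() - t0

# the same update as an explicit per-node loop (dimension-independent
# via nditer, but far slower for large grids)
t0 = time.perf_counter()
out = np.empty_like(delta)
it = np.nditer(speed, flags=["multi_index"])
for s in it:
    out[it.multi_index] = s * magnitude[it.multi_index]
t_loop = time.perf_counter() - t0

assert np.allclose(delta, out)
print(f"vectorized: {t_vec:.4f}s  explicit loop: {t_loop:.4f}s")
```

On a 51³ grid the vectorized product touches all nodes in one compiled call, while the interpreted loop pays per-node overhead, which is the same trade-off the Toolbox exploits in MATLAB.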
Numerical Convergence Analysis of Stochastic Size-Structured Models with Resource Term
Advances in Applied Mathematics, 2023, 12(5), 2288-2302. Published Online May 2023 in Hans. https:///journal/aam https:///10.12677/aam.2023.125233

Numerical Convergence Analysis of Stochastic Size-Structured Models with Resource Term

Shuoyao Wei, Xin Li*
College of Science, Yanshan University, Qinhuangdao, Hebei

Received: April 24, 2023; accepted: May 19, 2023; published: May 25, 2023

Abstract
As one of the important components of biological research, population studies play a vital role in the course of biological development.
This paper considers the convergence of numerical solutions for a class of size-structured systems with a special resource term and random factors.
First, the numerical solution of the discrete model is constructed using a semi-implicit Euler method. Then, under certain assumptions, the mean-square convergence of the numerical solution of the system is discussed using Itô's lemma. Finally, numerical simulations of the stochastic size-structured population model are carried out according to the characteristics of the discrete system, verifying the reliability of the numerical method.
Keywords
Markovian Switching, Stochastic Population, Size Structure, Semi-Implicit Euler, Convergence

Numerical Convergence Analysis of Stochastic Size-Structured Models with Resource Term

Shuoyao Wei, Xin Li*
College of Science, Yanshan University, Qinhuangdao, Hebei

Received: Apr. 24th, 2023; accepted: May 19th, 2023; published: May 25th, 2023

Abstract
Population research, as an important part of biological research, plays an important role in biological studies. The convergence of numerical solutions for a class of size-structured systems with special resource terms and random factors is considered in this paper. First, the numerical solutions of discrete models are constructed by using semi-implicit Euler numerical methods. Then,

*Corresponding author.
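The semi-implicit (drift-implicit) Euler scheme named in the abstract can be illustrated on a scalar SDE. The sketch below is our own Python illustration on geometric Brownian motion, not the paper's size-structured model, and all names and parameter values are ours; it treats the drift implicitly, the diffusion explicitly, and checks the sample mean against the exact expectation E[X_T] = X₀e^{aT}:

```python
import numpy as np

def semi_implicit_euler_gbm(x0, a, b, T, n_steps, n_paths, rng):
    """Semi-implicit Euler for dX = a X dt + b X dW:
    X_{k+1} = X_k + a X_{k+1} dt + b X_k dW  (drift implicit, diffusion explicit),
    which solves to X_{k+1} = X_k (1 + b dW) / (1 - a dt)."""
    dt = T / n_steps
    x = np.full(n_paths, float(x0))
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        x = x * (1.0 + b * dw) / (1.0 - a * dt)
    return x

rng = np.random.default_rng(0)
xT = semi_implicit_euler_gbm(x0=1.0, a=0.05, b=0.2, T=1.0,
                             n_steps=100, n_paths=20000, rng=rng)
print(abs(xT.mean() - np.exp(0.05)))  # small: weak error plus Monte Carlo noise
```

Making the drift implicit improves stability for stiff or dissipative drift terms while keeping each step an explicit formula, which is why semi-implicit Euler schemes are popular for structured population SDEs.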
Differences among numerical examples in SCI papers
Numerical examples play a crucial role in scientific research as they provide valuable insights and evidence to support or refute hypotheses, theories, and models. They help researchers analyze complex phenomena, understand the underlying mechanisms, and make predictions. In this article, we will explore the significance and differences of numerical examples in scientific studies.

Numerical examples in scientific research involve the application of mathematical and statistical techniques to real-world problems. They consist of data, equations, calculations, and graphical representations that enable researchers to quantify the relationship between variables and test theoretical predictions. By carefully selecting relevant variables and parameters, researchers can construct numerical models that simulate complex phenomena.

One significant advantage of numerical examples is their ability to provide a clear and concise representation of data. Often, scientific research involves large volumes of complex data that can be difficult to interpret without visualization. Numerical examples allow researchers to present data in simplified forms, such as tables or charts, that make it easier for the audience to understand and draw insights from. This ability to simplify complex data is particularly valuable for effective communication and knowledge dissemination.

Moreover, numerical examples allow researchers to explore various scenarios and test hypotheses under different conditions. By manipulating different variables and parameters, researchers can analyze the behavior of a system or model and observe how it changes in response to different inputs. This enables them to identify trends, patterns, and relationships that would otherwise be challenging to discern.
For instance, in climate change research, scientists use numerical models to simulate different emission scenarios and predict the impact on global temperatures and sea levels.

Numerical examples also serve as essential tools for validation and verification in scientific research. By comparing the results of numerical simulations with experimental or observed data, researchers can assess the accuracy and reliability of their models. This process of validation ensures that the numerical examples faithfully represent the real-world phenomena they seek to model. Additionally, numerical models allow for sensitivity analysis, which helps researchers identify the most crucial parameters and assess their impact on the overall system behavior.

One of the key differences in numerical examples lies in the level of complexity involved. Some numerical examples may involve relatively straightforward calculations and simple equations, while others may require complex mathematical algorithms and rigorous statistical analyses. The level of complexity depends on the nature of the scientific problem being studied and the level of detail required to adequately represent the phenomena.

Another difference in numerical examples lies in the scale or size of the data being analyzed. Some numerical examples may involve small datasets, such as those obtained from laboratory experiments or surveys, while others may involve large datasets obtained from sources like satellite observations or nationwide surveys. The size of the dataset can influence the complexity of the analysis and the computational techniques required to process the data efficiently.

Additionally, numerical examples can differ in terms of the precision and accuracy required. Some scientific research may require high levels of accuracy, especially when dealing with critical applications like medical treatments or engineering designs.
In such cases, researchers need to carefully select appropriate numerical methods and algorithms to ensure that the results are reliable and trustworthy. On the other hand, in some instances a rough estimate or approximation may be sufficient for the research question at hand, allowing researchers to adopt simpler and faster numerical techniques.

In conclusion, numerical examples are indispensable tools in scientific research. They provide a means to analyze complex phenomena, visualize data, test hypotheses, validate models, and make predictions. The differences among numerical examples lie in the level of complexity, the scale of the data, and the precision and accuracy required. Regardless of these differences, numerical examples offer valuable insights and evidence that contribute to advancing our understanding of the natural and physical world.
Common missing-value handling methods in empirical English-language literature in economics and management
Three sample essays on common missing-value handling methods follow for reference.

Essay 1

Title: Common Missing Value Handling Methods in Empirical Literature in Management and Economics

Abstract: Missing data is a common issue in empirical research in the fields of management and economics. The presence of missing values can distort the results of statistical analyses and affect the validity of research findings. Therefore, it is important for researchers to employ appropriate methods to handle missing data effectively. This article discusses some of the commonly used methods for handling missing values in empirical research in management and economics: listwise deletion, pairwise deletion, mean imputation, regression imputation, multiple imputation, and maximum likelihood estimation. The advantages and disadvantages of each method are discussed, along with recommendations for when to use each.

Introduction: Missing data is a common problem in empirical research in the fields of management and economics. It can arise for a variety of reasons, such as non-response from survey participants, data entry errors, or incomplete data collection. Regardless of the cause, missing data can have a significant impact on the results of statistical analyses and the validity of research findings. Therefore, researchers need to be aware of the different methods available for handling missing values and choose the most appropriate method for their specific research context.

Common missing-value handling methods:

1. Listwise deletion: Listwise deletion, also known as complete case analysis, involves excluding cases with missing values on any variable from the analysis. This method is straightforward and easy to implement but can lead to a loss of data and statistical power.

2. Pairwise deletion: Pairwise deletion uses, for each individual computation (for example, each correlation or covariance), all cases with observed values on the variables involved, rather than discarding a case entirely.
This method allows researchers to retain more data than listwise deletion but can lead to biased estimates if data are not missing completely at random.

3. Mean imputation: Mean imputation involves replacing missing values with the mean of the observed values for that variable. This method is simple and intuitive but can lead to biased estimates and underestimation of standard errors.

4. Regression imputation: Regression imputation predicts missing values from the relationship between the variable with missing values and other variables in the dataset. It can produce more accurate estimates than mean imputation but requires specifying a regression model.

5. Multiple imputation: Multiple imputation creates several imputed datasets by generating plausible values for the missing data based on the observed data. This method accounts for the uncertainty associated with the missing values and produces unbiased estimates under suitable assumptions.

6. Maximum likelihood estimation: Maximum likelihood estimation estimates model parameters while accounting for missing data through the likelihood function. This method is computationally intensive but provides efficient estimates under the assumption that data are missing at random.

Conclusion: Handling missing values is a critical step in empirical research in management and economics. Researchers should carefully consider the nature of the missing data and choose the most appropriate method for handling it. Each method has its advantages and disadvantages, and researchers should weigh these for their specific research context. By employing appropriate methods for handling missing values, researchers can help ensure the validity and reliability of their findings.

References:
- Allison, Paul D. "Handling Missing Data by Maximum Likelihood." SAS Global Forum, 2012.
- Little, Roderick J. A., and Donald B. Rubin.
Statistical Analysis with Missing Data. Wiley, 2019.
- Van Buuren, Stef. Flexible Imputation of Missing Data. CRC Press, 2018.

Essay 2

Handling missing values is a crucial step in empirical research in economics and management. Missing values in datasets can lead to biased results and inaccurate conclusions if not dealt with properly. In this article, we discuss some commonly used methods for handling missing values in empirical studies in economics and management.

1. Deleting observations with missing values: One simple way to handle missing values is to delete observations that contain them. This approach is commonly used when the number of observations with missing values is small relative to the total sample size. However, it can lead to a loss of information and may affect the representativeness of the sample.

2. Imputation: Imputation is a widely used method for handling missing values in empirical research. It involves replacing missing values with estimates based on the values of other observations in the dataset. Several imputation techniques are commonly used in economics and management, including mean imputation, median imputation, and regression imputation.

3. Multiple imputation: Multiple imputation is a more sophisticated technique that involves creating multiple copies of the dataset with imputed values. The imputed values are randomly drawn from a distribution based on the observed data. Multiple imputation can provide more accurate estimates and standard errors than single-imputation methods.

4. Weighted imputation: Weighted imputation assigns weights to observations based on the likelihood of being missing. This method can help correct for bias introduced by non-random missingness and is commonly used in survey data analysis.

5. Sensitivity analysis: Sensitivity analysis involves examining the robustness of results to different missing-data assumptions.
By testing the impact of different imputation methods on the results, researchers can assess the reliability of their findings and draw more robust conclusions.

In conclusion, handling missing values is essential in empirical research in economics and management. Researchers should carefully choose the method best suited to the characteristics of the dataset and the research question. By using appropriate techniques, researchers can improve the accuracy and reliability of their results.

Essay 3

Title: Common Missing Data Handling Methods in Empirical Literature in Economics and Management

Introduction

Missing data is a common problem in empirical studies in economics and management. Its presence can lead to biased estimates and decreased statistical power, affecting the validity of research findings. It is therefore essential to handle missing data appropriately to ensure the reliability of the results. In this article, we discuss some common methods used to handle missing data in empirical studies in economics and management.

Listwise deletion

Listwise deletion involves deleting cases with missing data from the analysis. While it is easy to implement, it can lead to biased estimates and reduced statistical power, especially when data are not missing at random. Listwise deletion can also result in a significant loss of data, reducing the overall sample size and potentially affecting the generalizability of the findings.

Mean substitution

Mean substitution is another commonly used method. It involves replacing missing values with the mean of the observed values for that variable.
While mean substitution is easy to implement, it can lead to biased estimates and reduced variability in the data, as all missing values are replaced with the same value. It also assumes that data are missing completely at random, which may not always be the case.

Multiple imputation

Multiple imputation is a more sophisticated method. It involves creating multiple copies of the dataset, each with imputed values for the missing data drawn from a statistical model; the analysis is then performed on each imputed dataset and the estimates are pooled. Multiple imputation provides more accurate estimates and better accounts for uncertainty in the imputed values than simpler methods, but it can be computationally intensive and requires careful consideration of the assumptions underlying the imputation model.

Conclusion

Handling missing data is a crucial aspect of empirical studies in economics and management. Listwise deletion and mean substitution are commonly used but have limitations in terms of bias and loss of data. Multiple imputation is a more sophisticated method that can provide more accurate estimates while accounting for uncertainty in the imputed values. Researchers should weigh the appropriateness of each method against the nature of the missing data and the assumptions underlying the imputation model. By handling missing data appropriately, researchers can ensure the reliability and validity of their findings.
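The deletion and imputation strategies described in the essays above can be sketched in a small, self-contained example. The dataset, variable names, and parameter values below are hypothetical, chosen only to illustrate the behavior of each method; this is a sketch, not an implementation from any of the cited references:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dataset: x is fully observed, y has missing entries (NaN).
n_obs = 200
x = rng.normal(size=n_obs)
y = 2.0 * x + rng.normal(scale=0.5, size=n_obs)
y[rng.choice(n_obs, size=40, replace=False)] = np.nan

# 1. Listwise deletion: keep only complete cases.
complete = ~np.isnan(y)
x_cc, y_cc = x[complete], y[complete]

# 2. Mean imputation: fill every gap with the observed mean of y.
y_mean = np.where(np.isnan(y), np.nanmean(y), y)

# 3. Regression imputation: fit y ~ x on complete cases, predict the gaps.
slope, intercept = np.polyfit(x_cc, y_cc, 1)
y_reg = np.where(np.isnan(y), slope * x + intercept, y)

# 4. Toy multiple imputation: several stochastic regression imputations,
#    with point estimates pooled by averaging (the simplest part of
#    Rubin's rules; real MI also pools variances).
resid_sd = np.std(y_cc - (slope * x_cc + intercept))
pooled_mean = np.mean([
    np.where(np.isnan(y),
             slope * x + intercept + rng.normal(scale=resid_sd, size=n_obs),
             y).mean()
    for _ in range(20)
])

# Mean imputation shrinks the variance of y (every gap gets the same value);
# regression-based fills preserve more of the relationship with x.
print(np.nanvar(y), np.var(y_mean), np.var(y_reg), pooled_mean)
```

Running this shows the pattern the essays describe: the mean-imputed series has visibly lower variance than the observed data, while regression-based imputation retains the association between y and x.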
Argumentation methods: numerical argumentation
As we delve into the intricacies of argumentation, the use of numerical evidence stands out as a powerful tool. Numbers, being objective and quantifiable, lend credence to arguments, offering a concrete basis for analysis and comparison. The precision they bring to discussions is invaluable, especially in fields where accuracy is paramount, such as science, economics, and policy-making.
Consider, for instance, the use of numerical data in economic policy decisions. Governments often rely on economic indicators such as GDP growth rates, inflation figures, and unemployment rates to formulate fiscal and monetary policies. These figures provide a quantifiable measure of the economy's health, allowing policymakers to make informed decisions based on hard data rather than subjective opinions.
A comparison of numerical solutions of partial differential equations with probabilistic and possibilistic parameters for the quantification of uncertainty in subsurface solute transport

Kejiang Zhang a, Gopal Achari a,⁎, Hua Li b

a Department of Civil Engineering, University of Calgary, Calgary, AB, Canada T2N 1N4
b Department of Mathematics, Zhengzhou University, Zhengzhou, Henan, PR China

Journal of Contaminant Hydrology 110 (2009) 45–59

1. Introduction

Modeling contaminant fate and transport is an integral part of exposure assessment, a necessary step in environmental risk assessment. The various physical phenomena in groundwater flow and contaminant transport are represented by partial differential equations (PDEs) in space and time. The geologic and hydrogeologic site characteristics are included as parameters in the governing PDEs. Determining these parameters is a challenging part of modeling. Scarcity of information, uncertainty in the data collected, measurement errors, conflicting data and site heterogeneities all contribute to the complexities in site characterization. The detailed site information needed to fully characterize a site is usually lacking, as a large number of samples is required to completely capture a site's description and its variability. Substantial data requirements mean high costs, and the site assessor usually makes decisions with much less data. Apart from issues related to site variability, multiple sources of information have to be compiled, such as historical, geological and hydrogeologic information, which may be in “soft” (descriptive) or “hard” (numeric) form. There may be data from laboratory studies and field investigations as well as from expert opinions. All this information is required to determine soil characteristics and site parameters such as hydraulic conductivity, storage coefficient, and porosity, which
are then integrated into a groundwater flow and contaminant transport model. For contaminant fate and transport, additionally, biogeochemical information about the target contaminant, such as precipitation–dissolution, redox reactions, and biodegradation, is needed. Uncertain, variable and multi-sourced “soft” and “hard” information not only requires proper representation of the different uncertainties but also their integration (Porter et al., 2000; Dubois et al., 2000; Sentz and Ferson, 2002; Dubois et al., 2004), and incorporation of the integrated information into the groundwater flow and contaminant transport modeling.

Three types of uncertainty (Möller and Beer, 2004), stochastic, informal and lexical, are present in site characterization. Stochastic uncertainty is best described by classical probability theory. Informal uncertainty results from an information deficit, such as when only a small number of observations are available. Lexical uncertainty occurs when, instead of numbers, words such as “high”, “medium”, and “low” are used. Opinions of experts are usually linguistic and carry this type of uncertainty.

Probability theory is suitable for representing probabilistic information. Fuzzy set theory (Zadeh, 1965), possibility theory (Zadeh, 1978; Dubois and Prade, 1988), evidence theory (Shafer, 1976), and random sets (Matheron, 1975; Molchanov, 2005; Nguyen, 2006) are employed to represent informal and lexical uncertain information, or possibilistic information (Liu and Peng, 2005). Evidence theory is more suitable when a combination of different types of uncertain information exists (Ross, 2004). This may occur when one theory best represents one parameter whereas another theory may be more suitable for another parameter; both types of parameters are, however, required in one governing equation. This leads to the interesting question of hybrid uncertainties and their propagation.

Recent research by Guyonnet et al. (2003) and Baudrit et al.
(2004, 2007) identified a method of hybrid uncertainty propagation through the conventional risk assessment process. Chen et al. (2003), Li et al. (2007) and Li et al. (2008) used a hybrid fuzzy-stochastic modeling approach to evaluate the risk posed by contaminated groundwater and air pollution. However, the different types of uncertainty associated with contaminant fate and transport were all captured using probability theory, and fuzzy sets were only used to quantify uncertainties inherent in environmental quality guidelines and health evaluation criteria. Baraldi and Zio (2008) used a combined Monte Carlo and possibilistic approach to propagate uncertainty in event tree analysis.

Conventionally, uncertainties associated with different parameters in a PDE are modeled using stochastic partial differential equations (SPDEs) (Dagan and Neuman, 1997; Glimm and Sharp, 1998). There are two limitations of stochastic modeling: (1) it needs sufficient data to justify the assumed statistical distributions (given sufficient statistical data, probability theory can better capture random uncertainties); (2) the different types of uncertainty that may be associated with the parameters cannot all be properly represented by probability distributions. Chen et al. (2003), Li et al. (2007), and Maxwell et al. (1998) used stochastic methods to simulate the uncertainties in groundwater flow and contaminant transport; all used the Monte Carlo method to obtain their simulation results.
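As a minimal illustration of the Monte Carlo approach mentioned above (not the paper's hybrid probabilistic-possibilistic scheme), one can propagate a probabilistic hydraulic-conductivity parameter through a simple transport quantity. Here a log-normal K, a common probabilistic model for hydraulic conductivity, is pushed through the plug-flow travel time t = nL/(Ki), where n is effective porosity, L the travel distance, and i the hydraulic gradient. All numerical values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative (assumed) parameters for a 1-D plug-flow travel-time model.
L = 100.0   # travel distance [m]
i = 0.01    # hydraulic gradient [-]
n = 0.3     # effective porosity [-]

# Hydraulic conductivity K treated as log-normal; the median (1e-4 m/s)
# and spread are assumptions chosen for the sketch.
K = rng.lognormal(mean=np.log(1e-4), sigma=0.5, size=100_000)  # [m/s]

# Seepage velocity v = K*i/n, so travel time t = L/v = n*L/(K*i).
t_years = (n * L / (K * i)) / (3600 * 24 * 365)

# Summarize the propagated uncertainty with percentiles.
p5, p50, p95 = np.percentile(t_years, [5, 50, 95])
print(f"travel time: median {p50:.1f} yr, 90% interval [{p5:.1f}, {p95:.1f}] yr")
```

The output distribution of travel times, rather than a single deterministic value, is what the stochastic approach contributes; the paper's point is that this machinery presupposes enough data to justify the assumed distribution of K.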