Computational Physics, Lecture 2 - JIAYU
(2) The errors of the backward difference formula
The same as the forward difference formula!!
Errors of these approximations: (3) The errors of the central difference formula
S = \frac{h}{3}\sum_{l=0}^{n/2-1}\left(f_{2l} + 4 f_{2l+1} + f_{2l+2}\right) + O(h^4)
In order to pair up all the slices, we have to have an even number of slices. If we have an odd number of slices (that is, an even number of points in [a,b]), the last slice can be treated separately using
\int_{b-h}^{b} f(x)\,dx = \frac{h}{12}\left(-f_{n-2} + 8 f_{n-1} + 5 f_n\right).
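As a concrete illustration of the two formulas above, here is a minimal Python sketch of the composite Simpson rule that falls back to the (h/12)(−f_{n−2} + 8f_{n−1} + 5f_n) correction for the last slice when the number of slices is odd. The function name `simpson` and the test integrand are illustrative assumptions, not the textbook's program.

```python
# A minimal sketch (not the textbook program) of the composite Simpson rule,
# with the (h/12)(-f_{n-2} + 8 f_{n-1} + 5 f_n) correction for the last slice
# when the number of slices n is odd.
import numpy as np

def simpson(f, a, b, n):
    """Integrate f on [a, b] with n slices (n+1 evenly spaced points)."""
    h = (b - a) / n
    x = a + h * np.arange(n + 1)
    fx = f(x)
    m = n if n % 2 == 0 else n - 1          # largest even number of slices
    s = 0.0
    for l in range(0, m, 2):                # pair up slices: [x_l, x_{l+2}]
        s += h / 3.0 * (fx[l] + 4.0 * fx[l + 1] + fx[l + 2])
    if n % 2 == 1:                          # leftover slice [b - h, b]
        s += h / 12.0 * (-fx[n - 2] + 8.0 * fx[n - 1] + 5.0 * fx[n])
    return s

print(simpson(np.sin, 0.0, np.pi, 101))     # ~2.0 even with an odd slice count
```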
f''_k = \frac{1}{12h^2}\left(-f_{k-2} + 16 f_{k-1} - 30 f_k + 16 f_{k+1} - f_{k+2}\right) + O(h^4)
Program derivatives, page 27
Second-order derivatives (3 and 5 points)
We can obtain a quadrature with a higher accuracy by working on two slices together. If we apply the Lagrange interpolation to the function f(x) in the region [x_{k-1}, x_{k+1}], we have
f(x) = \frac{(x - x_k)(x - x_{k+1})}{(x_{k-1} - x_k)(x_{k-1} - x_{k+1})}\, f_{k-1} + \frac{(x - x_{k-1})(x - x_{k+1})}{(x_k - x_{k-1})(x_k - x_{k+1})}\, f_k
• f''(x_i) is
f''(x_i) = \frac{f(x_{i+2}) - 2 f(x_{i+1}) + f(x_i)}{h^2}
• This formula is called the second forward finite divided difference; it has an error of order O(h).
• The second backward finite divided difference, which also has an error of order O(h), is
Chapter TWO
Numerical Differentiation & Numerical Integration
Numerical differentiation
Numerical differentiation is a technique of numerical analysis to produce an estimate of the derivative of a mathematical function or function subroutine using values from the function and perhaps other knowledge about the function. Examples:
f(x) \approx f_k + (x - x_k)(f_{k+1} - f_k)/h
After integrating every slice with this linear function, we have the trapezoid rule
S = \frac{h}{2}\sum_{k=0}^{n-1}\left(f_k + f_{k+1}\right) + O(h^2)
The five-point formula for the second-order derivative
f''_k = \frac{1}{12h^2}\left(-f_{k-2} + 16 f_{k-1} - 30 f_k + 16 f_{k+1} - f_{k+2}\right) + O(h^4)
Program derivatives, page 27
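A minimal sketch of the five-point second-derivative formula above (this is not the textbook program "derivatives" referenced on page 27); the helper name and the test case f = sin at x = 1 are illustrative assumptions.

```python
# Five-point formula for the second derivative:
# f''(x) = (-f(x-2h) + 16 f(x-h) - 30 f(x) + 16 f(x+h) - f(x+2h)) / (12 h^2) + O(h^4)
import math

def d2_five_point(f, x, h):
    return (-f(x - 2*h) + 16*f(x - h) - 30*f(x) + 16*f(x + h) - f(x + 2*h)) / (12*h*h)

h = 0.1
approx = d2_five_point(math.sin, 1.0, h)
print(approx, -math.sin(1.0))   # error is O(h^4), so already ~1e-6 at h = 0.1
```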
f(x_0) \approx f(x) + (x_0 - x) f'(x) + \cdots = 0
where x can be viewed as a trial value for the root x_0 at the nth step, and the approximate value at the next step, x_{n+1}, can be derived from
Backward
Centered
More accurate formula: the five-point formula
Thinking: What is the error for the five-point formula?
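One way to answer the question numerically, assuming the slide refers to the standard five-point central formula for the first derivative, f'_k ≈ (f_{k−2} − 8f_{k−1} + 8f_{k+1} − f_{k+2})/(12h): halve h and watch how fast the error shrinks.

```python
# A quick numerical check of the error order, assuming the standard five-point
# central formula for the first derivative is meant:
#   f'_k ≈ (f_{k-2} - 8 f_{k-1} + 8 f_{k+1} - f_{k+2}) / (12 h).
import math

def d1_five_point(f, x, h):
    return (f(x - 2*h) - 8*f(x - h) + 8*f(x + h) - f(x + 2*h)) / (12*h)

exact = math.cos(1.0)
for h in (0.1, 0.05, 0.025):
    err = abs(d1_five_point(math.sin, 1.0, h) - exact)
    print(f"h = {h:6.3f}   error = {err:.3e}")
# Each halving of h reduces the error by about 2**4 = 16, i.e. the error is O(h^4).
```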
Second-order derivatives (3 and 5 points)
f(x_{n+1}) = f(x_n) + (x_{n+1} - x_n) f'(x_n) \approx 0
that is,
x_{n+1} = x_n - f_n / f_n'.
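A minimal sketch of this Newton(-Raphson) iteration; the stopping criterion, iteration cap, and the test equation x² − 2 = 0 are illustrative assumptions.

```python
# A minimal sketch of the Newton(-Raphson) update x_{n+1} = x_n - f_n / f'_n.
def newton(f, fprime, x, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

root = newton(lambda x: x*x - 2.0, lambda x: 2.0*x, x=1.0)
print(root)   # ~1.4142135623730951 (sqrt(2))
```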
The Newton-Raphson method is usually better (faster) than the bisection method: it converges quadratically, roughly doubling the number of correct digits per step, while bisection only halves the bracketing interval each step. It does, however, require the derivative and a good starting guess.
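To make the comparison concrete, the sketch below counts the steps each method needs to reach the same tolerance on the illustrative equation x² − 2 = 0 (bracket, starting point, and tolerance are assumptions).

```python
# Illustrative comparison: bisection halves the bracket each step (linear
# convergence), while Newton roughly doubles the number of correct digits
# each step (quadratic convergence).
f = lambda x: x*x - 2.0
df = lambda x: 2.0*x
tol = 1e-12

# Bisection on the bracket [1, 2]
a, b, n_bis = 1.0, 2.0, 0
while b - a > tol:
    m = 0.5 * (a + b)
    if f(a) * f(m) <= 0.0:
        b = m
    else:
        a = m
    n_bis += 1

# Newton starting from x = 1.5
x, n_newt = 1.5, 0
while abs(f(x)) > tol:
    x -= f(x) / df(x)
    n_newt += 1

print(n_bis, n_newt)   # roughly 40 bisection steps vs. about 4 Newton steps
```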
It is best to use the central differences whenever possible
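A small numerical illustration of this advice: at the same step size h, the two-point forward formula carries an O(h) error while the central formula carries an O(h²) error. The test function and evaluation point are illustrative assumptions.

```python
# Forward vs. central first-derivative error at the same h.
import math

f, x, h = math.sin, 1.0, 0.01
forward = (f(x + h) - f(x)) / h            # O(h) error
central = (f(x + h) - f(x - h)) / (2*h)    # O(h^2) error
exact = math.cos(x)
print(abs(forward - exact))   # ~4e-3
print(abs(central - exact))   # ~9e-6
```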
Errors of these approximations
Errors of these approximations: Forward
Summary 1: Forward Formulas
Summary 2: Backward Formulas
Summary 3: Centered Formulas
Examples
What do we do?
An integral defined in the region [a,b],
S = \int_a^b f(x)\,dx
Numerical differentiation
Numerical differentiation: the two-point formula
What is the difference here?
Errors of these approximations: (1) The errors of the forward difference formula
Program integral, page 30
Numerical integration under uniform data points
Example 1
2.3 Root of an equation: the bisection method (find a root)
• The second centered finite divided difference, which has an error of order O(h^2), is
f''(x_i) = \frac{f(x_{i+1}) - 2 f(x_i) + f(x_{i-1})}{h^2}
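A short check of the forward and centered second-derivative formulas above; the test function f = exp and the point x = 1 are illustrative assumptions.

```python
# The forward second difference has an O(h) error, the centered one O(h^2).
import math

f, x, h = math.exp, 1.0, 0.01
forward  = (f(x + 2*h) - 2*f(x + h) + f(x)) / h**2
centered = (f(x + h) - 2*f(x) + f(x - h)) / h**2
exact = math.exp(1.0)                       # f'' = exp for f = exp
print(abs(forward - exact))                 # ~2.7e-2  (O(h))
print(abs(centered - exact))                # ~2.3e-5  (O(h^2))
```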
Second-order derivatives (3 and 5 points)
+ \frac{(x - x_{k-1})(x - x_k)}{(x_{k+1} - x_{k-1})(x_{k+1} - x_k)}\, f_{k+1} + O(h^3).
Why??
After integrating every pair of slices with this quadratic interpolation, we have the Simpson rule
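To spell out the "why", here is a symbolic check (using sympy as an illustrative tool choice) that integrating the quadratic Lagrange interpolant above over the two slices [x_{k−1}, x_{k+1}] reproduces the Simpson weights h/3 · (1, 4, 1).

```python
# Symbolic check: integrate the quadratic Lagrange interpolant over [x_{k-1}, x_{k+1}].
import sympy as sp

x, h, fkm1, fk, fkp1 = sp.symbols('x h f_km1 f_k f_kp1')
xkm1, xk, xkp1 = -h, sp.Integer(0), h           # place x_k at the origin

p = (fkm1 * (x - xk) * (x - xkp1) / ((xkm1 - xk) * (xkm1 - xkp1))
     + fk * (x - xkm1) * (x - xkp1) / ((xk - xkm1) * (xk - xkp1))
     + fkp1 * (x - xkm1) * (x - xk) / ((xkp1 - xkm1) * (xkp1 - xk)))

print(sp.simplify(sp.integrate(p, (x, xkm1, xkp1))))
# -> h*(4*f_k + f_km1 + f_kp1)/3, i.e. the Simpson weights
```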
The Newton method (find a root)
Expanding the function f(x) around the root x_0, where f(x_0) = 0, through the Taylor expansion introduced in Section 2.2,
We can divide the region [a,b] into n slices with an evenly spaced interval h. If we take the lattice points as x_k with k = 0, 1, ..., n, we can write the integral as a summation of integrals over all the slices:
Secant method or discrete Newton method (find a root)
In many cases, especially when f(x) has an implicit dependence on x, an analytic expression for the first-order derivative needed in the Newton method may not exist or may be very difficult to obtain. One then has to find an alternative scheme to achieve a similar algorithm. One way to do this is to replace f_n' with the two-point formula for the first-order derivative, which gives
Other formulas? (four or more points?)
Can you give a four-point formula? Applications?
The three-point formula for the second-order derivative
f''_k = \frac{f_{k+1} - 2 f_k + f_{k-1}}{h^2} + O(h^2)
f''(x_i) = \frac{f(x_i) - 2 f(x_{i-1}) + f(x_{i-2})}{h^2}
Second-order derivatives (3 and 5 points)
The five-point formula for the second-order derivative
x_{n+1} = x_n - (x_n - x_{n-1})\, f_n / (f_n - f_{n-1})
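A minimal sketch of this secant (discrete Newton) update; the two starting guesses, tolerance, and the test equation are illustrative assumptions.

```python
# Secant update: x_{n+1} = x_n - (x_n - x_{n-1}) * f_n / (f_n - f_{n-1}).
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        x2 = x1 - (x1 - x0) * f1 / (f1 - f0)
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
        if abs(x1 - x0) < tol:
            break
    return x1

print(secant(lambda x: x*x*x - 5.0*x + 3.0, 0.0, 1.0))   # a root of x^3 - 5x + 3
```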
Extremes of a single-variable function
With the knowledge of how to solve a nonlinear equation f(x) = 0, we can develop numerical schemes to obtain minima or maxima of a function g(x). We know that an extreme of g(x) occurs at a point where the first derivative vanishes, g'(x) = 0.
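A minimal sketch combining the two ideas: locate an extreme of g(x) by applying Newton's method to g'(x) = 0, with g' and g'' estimated by the central-difference formulas from the earlier sections. The test function g(x) = x e^{−x} (maximum at x = 1), the starting guess, and the step h are illustrative assumptions.

```python
# Find an extreme of g by driving g'(x) to zero with Newton's method,
# using central differences for g' and g''.
import math

def g(x):
    return x * math.exp(-x)

def extremum(g, x, h=1e-4, tol=1e-8, max_iter=50):
    for _ in range(max_iter):
        g1 = (g(x + h) - g(x - h)) / (2*h)          # central difference for g'
        g2 = (g(x + h) - 2*g(x) + g(x - h)) / h**2  # central difference for g''
        step = g1 / g2
        x -= step
        if abs(step) < tol:
            break
    return x

print(extremum(g, 0.5))   # ~1.0, the maximum of x*exp(-x)
```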
S = \int_a^b f(x)\,dx = \sum_{k=0}^{n-1} \int_{x_k}^{x_{k+1}} f(x)\,dx.
The simplest quadrature is obtained if we approximate f(x) in the region [x_k, x_{k+1}] linearly, that is,