
Book Report of System Optimization and Scheduling

The conjugate gradient method and its application in solving optimization problems

1. Introduction and background of the problem

Optimization theory and methods is a young and very active discipline. It studies problems of making the best possible choice: constructing algorithms for finding optimal solutions, and investigating both the theoretical properties and the practical computational performance of these algorithms. With the rapid development of high technology, computers and information technology, optimization theory and methods have become increasingly important and are now widely used throughout the natural sciences and engineering design. The conjugate gradient method is one of the most commonly used optimization methods. Among the optimization methods that require derivative information, the steepest descent method is the simplest, but its convergence is too slow. Quasi-Newton methods converge quickly and are widely regarded as the most effective methods for nonlinear programming, but they require storing a matrix and solving a system of linear equations to compute the search direction, which makes them almost impractical for large-scale problems.

The conjugate gradient method transforms an n-dimensional optimization problem into n equivalent one-dimensional problems. The algorithm is simple, its storage requirements are small, and its convergence rate exceeds that of the steepest descent method, so it is particularly suitable for large-scale problems such as those arising in electricity distribution, oil exploration, atmospheric modeling and aerospace engineering.
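To make these claims concrete, the following minimal sketch (not taken from the report; the function name and tolerance are illustrative) implements the linear conjugate gradient method for a symmetric positive definite system Ax = b. It stores only a handful of vectors and, in exact arithmetic, terminates in at most n iterations, which reflects the reduction to n one-dimensional searches mentioned above.

    import numpy as np

    def conjugate_gradient(A, b, x0=None, tol=1e-10):
        # Linear CG for A x = b with A symmetric positive definite.
        # Only the iterate x, the residual r and the direction d are stored.
        n = b.shape[0]
        x = np.zeros(n) if x0 is None else x0.copy()
        r = b - A @ x              # residual = negative gradient of (1/2) x'Ax - b'x
        d = r.copy()               # first search direction: steepest descent
        rs_old = r @ r
        for _ in range(n):         # at most n steps in exact arithmetic
            Ad = A @ d
            alpha = rs_old / (d @ Ad)      # exact minimizer along d
            x = x + alpha * d
            r = r - alpha * Ad
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            d = r + (rs_new / rs_old) * d  # next direction, conjugate to the previous ones
            rs_old = rs_new
        return x

As a usage example, for M = np.random.randn(n, n), A = M.T @ M + n * np.eye(n) and b = np.random.randn(n), the returned x typically satisfies np.allclose(A @ x, b) after at most n iterations.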

The conjugate gradient method was first proposed by Hestenes and Stiefel in 1952 for solving systems of linear equations with a symmetric positive definite coefficient matrix. Their joint article, "Methods of conjugate gradients for solving linear systems"[1], is regarded as the founding paper on the conjugate gradient method; it discusses in detail the properties of the conjugate gradient method for solving linear equations and its relationship to other methods. Building on this work, Fletcher and Reeves in 1964 first applied the conjugate gradient method to nonlinear optimization problems, making it an important optimization method. Subsequently, Beale, Fletcher, Powell and other scholars studied it in depth and gave early convergence results for the nonlinear conjugate gradient method. Since the conjugate gradient method requires no matrix storage and has a relatively fast convergence rate and quadratic termination, it is now widely used in practical problems.

2. Mathematical description of the problem

Starting from a point $x_0 \in \mathbb{R}^n$ and performing successive one-dimensional searches along a set of mutually conjugate directions to solve an unconstrained optimization problem is known as the conjugate direction method. The conjugate gradient method uses conjugate directions as its search directions and is a typical conjugate direction method: the search directions are mutually conjugate, and each of them is simply a linear combination of the current negative gradient direction and the search direction of the previous iteration, so the method requires little storage and is convenient to compute. At the same time, the conjugate gradient method lies between the steepest descent method and Newton's method: it uses only first-order derivative information, yet it overcomes the slow convergence of the steepest descent method and avoids Newton's method's need to store and invert the Hessian matrix.
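For reference, the notion of conjugacy used here is the standard one (stated in the usual textbook form): nonzero directions $d_0, d_1, \ldots$ are mutually conjugate with respect to a symmetric positive definite matrix $A$ (for a quadratic objective $f(x) = \tfrac{1}{2}x^{\mathrm T}Ax - b^{\mathrm T}x$, $A$ is its Hessian) if

$$ d_i^{\mathrm T} A\, d_j = 0 \quad \text{for all } i \neq j. $$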

The basic idea of the conjugate gradient method is to combine conjugacy with the steepest descent method: the gradients at the known iterates are used to construct a set of conjugate directions, and a line search is carried out along each of these directions in turn to find the minimum point of the objective function. By the basic property of conjugate directions, this method has quadratic termination. If the initial search direction is taken as $d_0 = -\nabla f(x_0)$, then each subsequent conjugate direction $d_k$ is determined as a linear combination of the negative gradient $-\nabla f(x_k)$ at the k-th iteration and the previously obtained conjugate direction $d_{k-1}$, which yields a specific conjugate direction method, the conjugate gradient method:
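The construction just described leads to the standard conjugate gradient direction update (written here in the usual textbook form, with the Fletcher-Reeves formula as one common choice of the coefficient $\beta_{k-1}$):

$$ d_k = -\nabla f(x_k) + \beta_{k-1}\, d_{k-1}, \qquad \beta_{k-1}^{\mathrm{FR}} = \frac{\|\nabla f(x_k)\|^2}{\|\nabla f(x_{k-1})\|^2}, \qquad k = 1, 2, \ldots, $$

together with $d_0 = -\nabla f(x_0)$ and the line-search update $x_{k+1} = x_k + \alpha_k d_k$.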
