Nonlinear Model Predictive Control_Chapter10


Research on Model Predictive Control Techniques for Nonlinear Systems


With the continuous development of science and technology, the control of nonlinear systems has attracted increasing attention.

Because nonlinear systems involve nonlinear and time-varying factors, their uncertainty is larger, which makes traditional linear control methods difficult to apply.

As a result, model predictive control techniques for nonlinear systems have steadily matured and are now widely used in industrial control in fields such as chemical engineering, electric power, and transportation.

I. Characteristics of nonlinear systems. A nonlinear system is one in which the relationship between the system output and input is not linear.

Compared with linear systems, nonlinear systems are more sensitive to initial conditions and to fluctuations, and their outputs can take aperiodic, chaotic, or singular forms.

Nonlinear systems have the following characteristics: 1. Nonlinearity and time variance. The output of a nonlinear system has different properties in different time periods, and the relationship between input and output does not remain constant over time.

The error of a nonlinear control system is generally a nonzero steady-state error, the system output may be unstable, and it is difficult to find an exact mathematical model on which to base the control.

2. Uncertainty. Compared with linear systems, the dynamics of nonlinear systems are more complex, and the uncertainty arising during control is more pronounced.

This requires the control system to have strong adaptability and self-adaptive capability so that it can cope effectively with the uncertainty of nonlinear systems.

3. Aperiodicity. The output of a nonlinear system can be aperiodic, that is, it cannot be described by a simple periodic function.

Aperiodicity makes control more difficult and requires more time and effort to build mathematical models and control algorithms.

II. Model predictive control. Model predictive control is an advanced control method in which the controller is built around a dynamic model of the process.

In other words, model predictive control establishes a nonlinear dynamic model to predict the future system response and computes the control on that basis.

Compared with traditional control methods, model predictive control can take the uncertainty of nonlinear systems into account, giving it better adaptability and higher control accuracy.

III. Model predictive control techniques. 1. Building the nonlinear dynamic model. Building a nonlinear dynamic model is one of the key steps in model predictive control.

A nonlinear system cannot be described by linear equations or simple functions, so building a nonlinear model must make use of the system's state equations and its nonlinear characteristics.

The most common nonlinear modeling approaches include neural networks, fuzzy systems, and polynomial regression.
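As a minimal illustration of the polynomial-regression route, the sketch below fits a simple polynomial NARX-type model y(k+1) ≈ θᵀφ(y(k), u(k)) to input–output data by least squares. The plant used to generate the data, the model order, and all coefficients are invented for illustration and are not taken from the article.

```python
import numpy as np

# Hypothetical plant used only to generate identification data (assumed, not from the text):
# y(k+1) = 0.8*y(k) - 0.1*y(k)**2 + 0.5*u(k)
rng = np.random.default_rng(0)
u = rng.uniform(-1.0, 1.0, 200)
y = np.zeros(201)
for k in range(200):
    y[k + 1] = 0.8 * y[k] - 0.1 * y[k] ** 2 + 0.5 * u[k]

# Regressor: polynomial terms in y(k) and u(k) up to degree 2
def features(yk, uk):
    return np.array([1.0, yk, uk, yk**2, uk**2, yk * uk])

Phi = np.array([features(y[k], u[k]) for k in range(200)])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)   # least-squares fit of the coefficients

# One-step-ahead prediction with the identified polynomial model
y_pred = Phi @ theta
print("max one-step prediction error:", np.max(np.abs(y_pred - y[1:])))
```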

2. Design of the predictive control law. The purpose of predictive control is to achieve the control objective by solving an optimal control problem, so a corresponding control law must be designed.

The optimal control problem is usually expressed as an optimization problem, in which an objective (cost) function is used to evaluate the control performance.
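For concreteness, one common (assumed) choice of objective function is a quadratic penalty on tracking error and control effort accumulated over the prediction horizon, as in the sketch below; the weights q and r are illustrative values, not ones prescribed by the article.

```python
import numpy as np

def stage_cost(x, u, x_ref=0.0, q=1.0, r=0.1):
    # quadratic penalty on deviation from the reference and on control effort
    return q * (x - x_ref) ** 2 + r * u ** 2

def horizon_cost(x_traj, u_seq, x_ref=0.0):
    # sum of stage costs over the prediction horizon
    return sum(stage_cost(x, u, x_ref) for x, u in zip(x_traj, u_seq))
```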

Research on Several Problems of Nonlinear Model Predictive Control


I. Overview. With the rapid development of modern industrial technology, nonlinear model predictive control (NMPC) has become a research hotspot in the control field.

Nonlinear systems are widespread in real industrial processes; their characteristics are complex, their behavior is diverse, and they are subject to uncertainty, so traditional linear control strategies often fail to achieve satisfactory results when applied to them.

Studying nonlinear model predictive control strategies is therefore of great significance for improving the performance, stability, and robustness of control systems.

Nonlinear model predictive control is a closed-loop optimizing control strategy based on a nonlinear model. Its core idea is that, at each sampling instant, starting from the current system state, a finite-horizon open-loop optimal control problem is solved online to obtain an optimal control sequence, and the first control move of this sequence is applied to the controlled plant.

This receding-horizon (rolling optimization) strategy allows nonlinear model predictive control to adjust the control action in real time as the system state changes, thereby achieving effective control of nonlinear systems.
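A minimal sketch of this receding-horizon loop is given below, using a made-up scalar nonlinear model and a general-purpose optimizer from SciPy; practical NMPC codes use dedicated solvers and more careful constraint handling, but the structure — predict, optimize, apply the first move, shift — is the same.

```python
import numpy as np
from scipy.optimize import minimize

def f(x, u):
    # hypothetical discrete-time nonlinear model (for illustration only)
    return x + 0.1 * (-x + x**3 / 6.0 + u)

def predict(x0, u_seq):
    x, traj = x0, []
    for u in u_seq:
        x = f(x, u)
        traj.append(x)
    return np.array(traj)

def cost(u_seq, x0):
    x_traj = predict(x0, u_seq)
    return np.sum(x_traj**2) + 0.1 * np.sum(u_seq**2)

N = 10                       # prediction horizon
x = 1.5                      # current (measured) state
u_guess = np.zeros(N)
for k in range(30):          # closed-loop simulation
    res = minimize(cost, u_guess, args=(x,), method="SLSQP",
                   bounds=[(-1.0, 1.0)] * N)        # input constraints
    u_opt = res.x
    x = f(x, u_opt[0])       # apply only the first control move
    u_guess = np.append(u_opt[1:], 0.0)             # shift the sequence as warm start
print("final state:", x)
```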

Research on nonlinear model predictive control also faces many challenges.

Because of the complexity of nonlinear systems, building the prediction model is often difficult, and the accuracy of the model has a large influence on the control performance.

Nonlinear model predictive control requires solving an optimization problem online, which places high demands on computing resources and limits its application in systems with tight real-time requirements.

The stability and robustness of nonlinear model predictive control are also key research topics.

This work aims to study in depth several key problems of nonlinear model predictive control, including the construction of nonlinear models, the design of optimization algorithms, and the analysis of stability and robustness.

Through the study of these problems, the goal is to propose an efficient, stable, and robust nonlinear model predictive control strategy and to provide theoretical support and practical guidance for the control of real industrial processes.

1. Overview of nonlinear model predictive control (NMPC). Nonlinear model predictive control (NMPC) is an advanced control strategy widely used in optimal control problems for a variety of dynamical systems.

The core idea of NMPC is that, in each control period, the nonlinear model of the system is used to predict its future dynamic behavior, and an optimization problem is solved to obtain the optimal control sequence.

This approach can explicitly handle system constraints and uncertainty, which makes it well suited to practical systems with demanding performance requirements operating in complex, changing environments.

Model Predictive Control Techniques in Nonlinear Control Systems


I. Introduction. In the development of modern control theory, nonlinear control systems have become a key area of research.

Nonlinear control systems are characterized by strong complexity, system parameters that are difficult to measure accurately, and large uncertainty.

These factors make it hard to achieve precise control of nonlinear systems.

This article focuses on the application of model predictive control techniques in nonlinear control systems.

II. Characteristics of nonlinear control systems. In general, nonlinear control systems have the following characteristics:
1. Nonlinearity and complexity of the system.
2. Parameter uncertainty that is difficult to measure precisely.
3. Strong coupling between control inputs and outputs.
4. Oscillation, instability, and related problems.
These characteristics make the control of nonlinear systems very complicated and call for more advanced control algorithms.

III. Model predictive control. Model predictive control, abbreviated MPC, is a control method based on a prediction model.

In MPC, the controller uses the current state together with predictions of future states to make control decisions.

The controller computes a sequence of control moves and then applies it to the nonlinear system.

This approach can improve the performance of the control system and thereby reduce the control error.

The basic MPC procedure can be summarized in the following steps:
1. Choose a model.
2. Predict the next states and outputs.
3. Compute the control sequence by optimizing the control performance.
4. Apply the current control move.
MPC has the following advantages:
1. It takes future control moves and their weights into account, so the control system can adapt better to future changes.

2. It can control strongly coupled nonlinear systems.

3. It copes better with system uncertainty and time-varying behavior.

Therefore, MPC has become an important control method for nonlinear control systems.

IV. Applications of MPC in nonlinear control systems. Because nonlinear control systems are uncertain and complex, MPC is widely applied to them in order to handle these issues better.

Especially in important fields such as chemical engineering and energy, MPC has become one of the most commonly used control methods for nonlinear systems.

For example, in chemical process control, MPC (Model Predictive Control) technology is already widely applied; it can adjust the requirements of complex chemical processes in real time and correct possible adverse effects.

Nonlinear Model Predictive Control Based on Support Vector Machines


...to generate the offspring population, where β takes a value in (0, 1). The choice of this value has a large influence on the search optimization; it can be a constant or it can adapt during the search process. If F_{k+} ≥ F_{k−}, the positive sign is taken in equation (3), otherwise the negative sign, where F_{k+} and F_{k−} ...

The design parameters are σ = 0.3, ε = 0.01, C = 150, and the quadratic program is solved by Newton's method. Figure 2 shows the fit on the training and test samples: the asterisks * denote the support vector machine output and the solid line denotes the sampled system output. The SVM output fits the sampled system output well on both the training set and the test set, which shows that the generalization ability of the support vector machine is strong and that the trained SVM can accurately describe the dynamic behavior of the system.
Abstract: Support vector machines (SVM) are a new-generation machine learning technique based on statistical learning theory. They solve small-sample learning problems better by using structural risk minimization in place of empirical risk minimization. Moreover, SVMs can turn a nonlinear learning problem into a linear one, and thereby reduce the algorithmic complexity, by using the kernel-function idea. A nonlinear predictive control framework is presented in which the nonlinear plant is modeled by a support vector machine. The predictive control law is derived by a new stochastic search optimization algorithm. Finally, a simulation example is given to demonstrate the proposed approach. Keywords: support vector machine; model predictive control; stochastic search optimization

Nonlinear Model Predictive Control_front-matter


Communications and Control Engineering For other titles published in this series,go to/series/61Series EditorsA.Isidori J.H.van Schuppen E.D.Sontag M.Thoma M.Krstic Published titles include:Stability and Stabilization of Infinite Dimensional Systems with ApplicationsZheng-Hua Luo,Bao-Zhu Guo and Omer Morgul Nonsmooth Mechanics(Second edition)Bernard BrogliatoNonlinear Control Systems IIAlberto IsidoriL2-Gain and Passivity Techniques in Nonlinear Control Arjan van der SchaftControl of Linear Systems with Regulation and Input ConstraintsAli Saberi,Anton A.Stoorvogel and Peddapullaiah SannutiRobust and H∞ControlBen M.ChenComputer Controlled SystemsEfim N.Rosenwasser and Bernhard mpeControl of Complex and Uncertain SystemsStanislav V.Emelyanov and Sergey K.Korovin Robust Control Design Using H∞MethodsIan R.Petersen,Valery A.Ugrinovski andAndrey V.SavkinModel Reduction for Control System DesignGoro Obinata and Brian D.O.AndersonControl Theory for Linear SystemsHarry L.Trentelman,Anton Stoorvogel and Malo Hautus Functional Adaptive ControlSimon G.Fabri and Visakan KadirkamanathanPositive1D and2D SystemsTadeusz KaczorekIdentification and Control Using Volterra Models Francis J.Doyle III,Ronald K.Pearson and Babatunde A.OgunnaikeNon-linear Control for Underactuated Mechanical SystemsIsabelle Fantoni and Rogelio LozanoRobust Control(Second edition)Jürgen AckermannFlow Control by FeedbackOle Morten Aamo and Miroslav KrsticLearning and Generalization(Second edition) Mathukumalli VidyasagarConstrained Control and EstimationGraham C.Goodwin,Maria M.Seron andJoséA.De DonáRandomized Algorithms for Analysis and Controlof Uncertain SystemsRoberto Tempo,Giuseppe Calafiore and Fabrizio Dabbene Switched Linear SystemsZhendong Sun and Shuzhi S.GeSubspace Methods for System IdentificationTohru KatayamaDigital Control SystemsIoan ndau and Gianluca ZitoMultivariable Computer-controlled SystemsEfim N.Rosenwasser and Bernhard mpe Dissipative Systems Analysis and Control(Second edition)Bernard Brogliato,Rogelio Lozano,Bernhard Maschke and Olav EgelandAlgebraic Methods for Nonlinear Control Systems Giuseppe Conte,Claude H.Moog and Anna M.Perdon Polynomial and Rational MatricesTadeusz KaczorekSimulation-based Algorithms for Markov Decision ProcessesHyeong Soo Chang,Michael C.Fu,Jiaqiao Hu and Steven I.MarcusIterative Learning ControlHyo-Sung Ahn,Kevin L.Moore and YangQuan Chen Distributed Consensus in Multi-vehicle Cooperative ControlWei Ren and Randal W.BeardControl of Singular Systems with Random Abrupt ChangesEl-Kébir BoukasNonlinear and Adaptive Control with Applications Alessandro Astolfi,Dimitrios Karagiannis and Romeo OrtegaStabilization,Optimal and Robust ControlAziz BelmiloudiControl of Nonlinear Dynamical SystemsFelix L.Chernous’ko,Igor M.Ananievski and Sergey A.ReshminPeriodic SystemsSergio Bittanti and Patrizio ColaneriDiscontinuous SystemsYury V.OrlovConstructions of Strict Lyapunov FunctionsMichael Malisoff and Frédéric MazencControlling ChaosHuaguang Zhang,Derong Liu and Zhiliang Wang Stabilization of Navier–Stokes FlowsViorel BarbuDistributed Control of Multi-agent NetworksWei Ren and Yongcan CaoLars Grüne Jürgen Pannek Nonlinear Model Predictive Control Theory and AlgorithmsLars Grüne Mathematisches Institut Universität Bayreuth Bayreuth95440Germanylars.gruene@uni-bayreuth.de Jürgen Pannek Mathematisches Institut Universität BayreuthBayreuth95440Germanyjuergen.pannek@uni-bayreuth.deISSN0178-5354ISBN978-0-85729-500-2e-ISBN978-0-85729-501-9DOI10.1007/978-0-85729-501-9Springer London Dordrecht Heidelberg 
New YorkBritish Library Cataloguing in Publication DataA catalogue record for this book is available from the British LibraryLibrary of Congress Control Number:2011926502Mathematics Subject Classification(2010):93-02,92C10,93D15,49M37©Springer-Verlag London Limited2011Apart from any fair dealing for the purposes of research or private study,or criticism or review,as per-mitted under the Copyright,Designs and Patents Act1988,this publication may only be reproduced, stored or transmitted,in any form or by any means,with the prior permission in writing of the publish-ers,or in the case of reprographic reproduction in accordance with the terms of licenses issued by the Copyright Licensing Agency.Enquiries concerning reproduction outside those terms should be sent to the publishers.The use of registered names,trademarks,etc.,in this publication does not imply,even in the absence of a specific statement,that such names are exempt from the relevant laws and regulations and therefore free for general use.The publisher makes no representation,express or implied,with regard to the accuracy of the information contained in this book and cannot accept any legal responsibility or liability for any errors or omissions that may be made.Cover design:VTeX UAB,LithuaniaPrinted on acid-free paperSpringer is part of Springer Science+Business Media()For Brigitte,Florian and CarlaLGFor Sabina and AlinaJPPrefaceThe idea for this book grew out of a course given at a winter school of the In-ternational Doctoral Program“Identification,Optimization and Control with Ap-plications in Modern Technologies”in Schloss Thurnau in March2009.Initially, the main purpose of this course was to present results on stability and performance analysis of nonlinear model predictive control algorithms,which had at that time recently been obtained by ourselves and coauthors.However,we soon realized that both the course and even more the book would be inevitably incomplete without a comprehensive coverage of classical results in the area of nonlinear model pre-dictive control and without the discussion of important topics beyond stability and performance,like feasibility,robustness,and numerical methods.As a result,this book has become a mixture between a research monograph and an advanced textbook.On the one hand,the book presents original research results obtained by ourselves and coauthors during the lastfive years in a comprehensive and self contained way.On the other hand,the book also presents a number of results—both classical and more recent—of other authors.Furthermore,we have included a lot of background information from mathematical systems theory,op-timal control,numerical analysis and optimization to make the book accessible to graduate students—on PhD and Master level—from applied mathematics and con-trol engineering alike.Finally,via our web page we provide MATLAB and C++software for all examples in this book,which enables the reader to perform his or her own numerical experiments.For reading this book,we assume a basic familiarity with control systems,their state space representation as well as with concepts like feedback and stability as provided,e.g.,in undergraduate courses on control engineering or in courses on mathematical systems and control theory in an applied mathematics curriculum.However,no particular knowledge of nonlin-ear systems theory is assumed.Substantial parts of the systems theoretic chapters of the book have been used by us for a lecture on nonlinear model predictive con-trol for master students in 
applied mathematics and we believe that the book is well suited for this purpose.More advanced concepts like time varying formulations or peculiarities of sampled data systems can be easily skipped if only time invariant problems or discrete time systems shall be treated.viiviii PrefaceThe book centers around two main topics:systems theoretic properties of nonlin-ear model predictive control schemes on the one hand and numerical algorithms on the other hand;for a comprehensive description of the contents we refer to Sect.1.3.As such,the book is somewhat more theoretical than engineering or application ori-ented monographs on nonlinear model predictive control,which are furthermore often focused on linear methods.Within the nonlinear model predictive control literature,distinctive features of this book are the comprehensive treatment of schemes without stabilizing terminal constraints and the in depth discussion of performance issues via infinite horizon suboptimality estimates,both with and without stabilizing terminal constraints.The key for the analysis in the systems theoretic part of this book is a uniform way of interpreting both classes of schemes as relaxed versions of infinite horizon op-timal control problems.The relaxed dynamic programming framework developed in Chap.4is thus a cornerstone of this book,even though we do not use dynamic programming for actually solving nonlinear model predictive control problems;for this task we prefer direct optimization methods as described in the last chapter of this book,since they also allow for the numerical treatment of high dimensional systems.There are many people whom we have to thank for their help in one or the other way.For pleasant and fruitful collaboration within joint research projects and on joint papers—of which many have been used as the basis for this book—we are grateful to Frank Allgöwer,Nils Altmüller,Rolf Findeisen,Marcus von Lossow,Dragan Neši´c ,Anders Rantzer,Martin Seehafer,Paolo Varutti and Karl Worthmann.For enlightening talks,inspiring discussions,for organizing workshops and mini-symposia (and inviting us)and,last but not least,for pointing out valuable references to the literature we would like to thank David Angeli,Moritz Diehl,Knut Graichen,Peter Hokayem,Achim Ilchmann,Andreas Kugi,Daniel Limón,Jan Lunze,Lalo Magni,Manfred Morari,Davide Raimondo,Saša Rakovi´c ,Jörg Rambau,Jim Rawl-ings,Markus Reble,Oana Serea and Andy Teel,and we apologize to everyone who is missing in this list although he or she should have been mentioned.Without the proof reading of Nils Altmüller,Robert Baier,Thomas Jahn,Marcus von Lossow,Florian Müller and Karl Worthmann the book would contain even more typos and inaccuracies than it probably does—of course,the responsibility for all remaining errors lies entirely with us and we appreciate all comments on errors,typos,miss-ing references and the like.Beyond proof reading,we are grateful to Thomas Jahn for his help with writing the software supporting this book and to Karl Worthmann for his contributions to many results in Chaps.6and 7,most importantly the proof of Proposition 6.17.Finally,we would like to thank Oliver Jackson and Charlotte Cross from Springer-Verlag for their excellent rs Grüne Jürgen PannekBayreuth,Germany April 2011Contents1Introduction (1)1.1What Is Nonlinear Model Predictive Control? (1)1.2Where Did NMPC Come from? (3)1.3How Is This Book Organized? (5)1.4What Is Not Covered in This Book? 
(9)References (10)2Discrete Time and Sampled Data Systems (13)2.1Discrete Time Systems (13)2.2Sampled Data Systems (16)2.3Stability of Discrete Time Systems (28)2.4Stability of Sampled Data Systems (35)2.5Notes and Extensions (39)2.6Problems (39)References (41)3Nonlinear Model Predictive Control (43)3.1The Basic NMPC Algorithm (43)3.2Constraints (45)3.3Variants of the Basic NMPC Algorithms (50)3.4The Dynamic Programming Principle (56)3.5Notes and Extensions (62)3.6Problems (64)References (65)4Infinite Horizon Optimal Control (67)4.1Definition and Well Posedness of the Problem (67)4.2The Dynamic Programming Principle (70)4.3Relaxed Dynamic Programming (75)4.4Notes and Extensions (81)4.5Problems (83)References (84)ix5Stability and Suboptimality Using Stabilizing Constraints (87)5.1The Relaxed Dynamic Programming Approach (87)5.2Equilibrium Endpoint Constraint (88)5.3Lyapunov Function Terminal Cost (95)5.4Suboptimality and Inverse Optimality (101)5.5Notes and Extensions (109)5.6Problems (110)References (112)6Stability and Suboptimality Without Stabilizing Constraints (113)6.1Setting and Preliminaries (113)6.2Asymptotic Controllability with Respect to (116)6.3Implications of the Controllability Assumption (119)6.4Computation ofα (121)6.5Main Stability and Performance Results (125)6.6Design of Good Running Costs (133)6.7Semiglobal and Practical Asymptotic Stability (142)6.8Proof of Proposition6.17 (150)6.9Notes and Extensions (159)6.10Problems (161)References (162)7Variants and Extensions (165)7.1Mixed Constrained–Unconstrained Schemes (165)7.2Unconstrained NMPC with Terminal Weights (168)7.3Nonpositive Definite Running Cost (170)7.4Multistep NMPC-Feedback Laws (174)7.5Fast Sampling (176)7.6Compensation of Computation Times (180)7.7Online Measurement ofα (183)7.8Adaptive Optimization Horizon (191)7.9Nonoptimal NMPC (198)7.10Beyond Stabilization and Tracking (207)References (209)8Feasibility and Robustness (211)8.1The Feasibility Problem (211)8.2Feasibility of Unconstrained NMPC Using Exit Sets (214)8.3Feasibility of Unconstrained NMPC Using Stability (217)8.4Comparing Terminal Constrained vs.Unconstrained NMPC (222)8.5Robustness:Basic Definition and Concepts (225)8.6Robustness Without State Constraints (227)8.7Examples for Nonrobustness Under State Constraints (232)8.8Robustness with State Constraints via Robust-optimal Feasibility.2378.9Robustness with State Constraints via Continuity of V N (241)8.10Notes and Extensions (246)8.11Problems (249)References (249)9Numerical Discretization (251)9.1Basic Solution Methods (251)9.2Convergence Theory (256)9.3Adaptive Step Size Control (260)9.4Using the Methods Within the NMPC Algorithms (264)9.5Numerical Approximation Errors and Stability (266)9.6Notes and Extensions (269)9.7Problems (271)References (272)10Numerical Optimal Control of Nonlinear Systems (275)10.1Discretization of the NMPC Problem (275)10.2Unconstrained Optimization (288)10.3Constrained Optimization (292)10.4Implementation Issues in NMPC (315)10.5Warm Start of the NMPC Optimization (324)10.6Nonoptimal NMPC (331)10.7Notes and Extensions (335)10.8Problems (337)References (337)Appendix NMPC Software Supporting This Book (341)A.1The MATLAB NMPC Routine (341)A.2Additional MATLAB and MAPLE Routines (343)A.3The C++NMPC Software (345)Glossary (347)Index (353)。

Model Predictive Control

Minimizing the performance index, i.e. setting its derivative with respect to the control to zero, yields the optimal control law:

According to the receding-horizon optimization principle, only the current control move u(k) is applied:

where:

Characteristics of multi-step optimization MAC. Advantages: (i) the control performance and robustness are better than those of single-step MAC; (ii) it is applicable to plants with time delays or non-minimum-phase behavior. Drawbacks: (i) the algorithm is more complex than single-step MAC; (ii) because u itself is used as the control variable, the MAC algorithm inevitably exhibits steady-state error.

Chapter 5: Model Predictive Control

5.3.1.2 Feedback correction. To effectively eliminate steady-state offset when the model is mismatched, an error term e can be added to the model prediction ym, forming the feedback correction (closed-loop prediction).

Specifically, the error between the measured plant output at time k and the output of the prediction model is added to the model predictions ym(k+i), giving the closed-loop prediction model, denoted yp(k+i):
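A minimal sketch of this feedback-correction step is given below; the correction weights h_i and the numerical values are illustrative assumptions, not values from the slides.

```python
import numpy as np

def feedback_correction(ym_pred, y_meas, ym_now, h=None):
    """Closed-loop prediction yp(k+i) = ym(k+i) + h_i * e(k),
    where e(k) = y(k) - ym(k) is the current model error."""
    e = y_meas - ym_now
    if h is None:
        h = np.ones_like(ym_pred)      # illustrative: equal correction weights
    return ym_pred + h * e

# Example: open-loop model predictions and one new measurement
ym_pred = np.array([0.9, 1.1, 1.2])    # ym(k+1), ym(k+2), ym(k+3)
yp = feedback_correction(ym_pred, y_meas=1.05, ym_now=1.0)
print(yp)                               # corrected (closed-loop) predictions
```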
5.1 Introduction

1. What is model predictive control (MPC)?

Model Predictive Control (MPC) is a model-based closed-loop optimizing control strategy that has been widely applied in complex industrial processes such as oil refining, chemical engineering, metallurgy, and electric power.

The key elements of the algorithm are a dynamic model capable of predicting the future behavior of the process, repeated online optimization with receding-horizon implementation of the control action, and feedback correction of the model error.

2. The emergence of Dynamic Matrix Control (DMC):

Dynamic Matrix Control (DMC) was applied on production units of Shell Oil in the United States in 1974 and was first published by Cutler et al. at an American chemical engineering conference in 1980.

3. The emergence of Generalized Predictive Control (GPC):

In 1987, Clarke et al., retaining the online identification, output prediction, and minimum-variance control of minimum-variance self-tuning control, and absorbing the receding-horizon optimization strategy of DMC and MAC, proposed the generalized predictive control algorithm, which is based on a parametric model and combines the properties of adaptive control and predictive control.

A Nonlinear Hammerstein Model Predictive Control Strategy and Its Application to a pH Neutralization Process


Authors: Zou Zhiyun, Guo Yuqing, Wang Zhizhen, Liu Xinghong, Yu Meng, Zhang Fengbo, Guo Ning (Research Institute of Chemical Defence, Beijing 102205, China). Journal: 化工学报 (CIESC Journal), 2012, 63(12), pp. 3965–3970. Keywords: Hammerstein model; model predictive control; pH neutralization process; nonlinear control. Abstract: Industrial processes usually contain complex nonlinearities. For example, the control of pH neutralization processes is a challenging problem in the chemical process industry because of their inherent strong nonlinearity. In this paper, the model predictive control (MPC) strategy is extended to a nonlinear process using a Hammerstein model that consists of a static nonlinear polynomial function followed in series by a linear difference-equation dynamic element. The calculation strategy for the optimal control output based on the static nonlinear polynomial function of the Hammerstein model is presented in detail, and a new nonlinear Hammerstein predictive control strategy (NLHPC) is thus developed. Simulation and control experiments on a pH neutralization process show that NLHPC gives better control performance than the commonly used industrial nonlinear PID (NL-PID) controller. All real industrial processes contain some degree of nonlinearity. The pH neutralization process, with its inherently strong nonlinearity, its high sensitivity near the neutralization point, and its time-varying gain when the flow rate and concentration of the neutralizing stream are uncertain, is regarded as a typical benchmark process for testing nonlinear control strategies [1].
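To make the Hammerstein structure concrete, the sketch below simulates a system consisting of a static polynomial nonlinearity followed in series by a linear difference equation; the polynomial coefficients and the linear dynamics are invented for illustration and are not the ones identified in the paper.

```python
import numpy as np

# Static nonlinear block: v(k) = c1*u(k) + c2*u(k)^2 + c3*u(k)^3   (illustrative values)
c = [1.0, 0.4, -0.1]

# Linear dynamic block: y(k) = a1*y(k-1) + a2*y(k-2) + b1*v(k-1)   (illustrative values)
a1, a2, b1 = 1.2, -0.35, 0.5

def hammerstein_sim(u):
    v = c[0]*u + c[1]*u**2 + c[2]*u**3          # static nonlinearity
    y = np.zeros(len(u))
    for k in range(2, len(u)):                  # linear difference-equation dynamics
        y[k] = a1*y[k-1] + a2*y[k-2] + b1*v[k-1]
    return y

u = np.concatenate([np.zeros(10), 0.5*np.ones(40)])   # step input
y = hammerstein_sim(u)
print(y[-1])   # approximate steady-state response to the step
```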

Research on Nonlinear Model Predictive Control Algorithms


With the development of science and technology, model predictive control has gradually become one of the hot research directions in the control field.

Building on traditional linear model predictive control algorithms, nonlinear model predictive control algorithms have been widely applied and have achieved good control results.

This article examines nonlinear model predictive control algorithms and analyzes their performance in practical applications.

I. Principles of nonlinear model predictive control. The core idea of a nonlinear model predictive control algorithm is to build a nonlinear prediction model and then use this model for prediction and control.

Unlike traditional linear model predictive control algorithms, in nonlinear model predictive control the prediction model is described by nonlinear functions.

This allows the dynamic characteristics of the controlled plant to be described more accurately and enables better anticipatory control.

In a nonlinear model predictive control algorithm, we first need to build a nonlinear model, usually a neural network model or a nonlinear regression model.

Next, the system's historical data are used for training and parameter optimization to obtain a reliable prediction model.

At prediction time, the predictor variables are fed into the model to obtain the prediction, which is then used for the control decision.

During control, the control action is adjusted according to the actual operating conditions and the prediction results, so as to reach the desired control objective.
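The following sketch illustrates this workflow with a small neural-network one-step model trained on historical data and then rolled forward for multi-step prediction; the plant, the data, and the network size are assumptions made only for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical plant used only to generate "historical" data (not from the text)
def plant(x, u):
    return 0.9 * x + 0.2 * np.tanh(u) - 0.05 * x**2

rng = np.random.default_rng(1)
u_hist = rng.uniform(-2, 2, 500)
x_hist = np.zeros(501)
for k in range(500):
    x_hist[k + 1] = plant(x_hist[k], u_hist[k])

# Train a one-step prediction model x(k+1) ~ g(x(k), u(k))
X = np.column_stack([x_hist[:-1], u_hist])
model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=3000,
                     random_state=0).fit(X, x_hist[1:])

# Roll the trained model forward to predict several steps ahead
def predict_horizon(x0, u_seq):
    x, preds = x0, []
    for u in u_seq:
        x = model.predict([[x, u]])[0]
        preds.append(x)
    return preds

print(predict_horizon(0.5, [0.3, 0.3, 0.0, -0.2]))
```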

II. Advantages of nonlinear model predictive control algorithms. 1. They describe the dynamics of the controlled plant more accurately. Compared with traditional linear model predictive control algorithms, nonlinear model predictive control algorithms can describe the dynamic characteristics of the controlled plant more accurately.

This is because a nonlinear model can better approximate the actual physical process.

This approach fully exploits the nonlinear characteristics of the system and describes its dynamic behavior better, leading to more accurate prediction and control.

2. Stronger stability and robustness. Nonlinear model predictive control algorithms offer stronger stability and robustness.

This is because the algorithm is less affected by changes in the system: it can adaptively learn the system model and automatically adjust the control strategy.

The control performance of such an algorithm is more reliable and better optimized, which is why it has found wide practical use.

3. They can cope with changing environments and complex systems. Nonlinear model predictive control algorithms can handle changing environments and complex systems.

In practical applications these algorithms have shown good flexibility and robustness and can adapt to a wide range of application scenarios.

Traditional linear model predictive control algorithms, by contrast, suffer from the inability of a linear model to describe a nonlinear system, and therefore cannot cope well with complex systems.

Modeling and Predictive Control of Nonlinear Dynamical Systems


With the rapid development of modern technology, the production requirements of high-tech industries are becoming ever higher, and various electromechanical systems must be modeled and controlled properly.

Among these problems, the modeling and predictive control of nonlinear dynamical systems is a very important one.

A nonlinear dynamical system is one in which the relationships among the motion states, the system outputs, and the control inputs are nonlinear; loosely speaking, there is no single simple function that captures the system's behavior.

Such systems are common in everyday life: examples include spring vibration, earthquakes, and vehicle motion trajectories.

Because of their highly complex nature, being able to model and predictively control them is of great importance for solving many practical problems.

Let us first consider the modeling problem.

For modeling nonlinear systems there are two main approaches: time-domain methods and frequency-domain methods.

Time-domain methods describe the evolution of the system state by difference or differential equations, whereas frequency-domain methods describe the input–output relationship by transfer functions or frequency responses, without considering the evolution of the state.

Comparatively, frequency-domain modeling is simple and easy to understand, which is one reason it is widely used.

However, when the modeling assumptions of a nonlinear system cannot guarantee a stationary output, frequency-domain methods cannot be used, and the more involved time-domain methods are required.

Before modeling a nonlinear dynamical system, its basic properties must be understood, for example whether it is relatively stable; once these properties are established, the corresponding state equations and output equations can be derived.

As an example, consider modeling the simple harmonic vibration of a mass hanging on a spring. For this system, Newton's second law F = ma gives the state equation (m is the mass, k the spring constant, and x the displacement of the mass from its equilibrium position): m d²x/dt² + kx = 0. In addition, observing the relationship between the displacement x and the time t gives the output equation x = A sin(ωt), where A is the amplitude and ω the angular frequency.

Substituting this into the state equation yields the system's frequency: ω = √(k/m). Through this derivation we have built a dynamic model of the spring vibration (in this idealized form the model is actually linear; a genuine nonlinearity would enter, for example, through a hardening spring force such as kx + εx³).
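As a numerical check of this derivation, the sketch below integrates the spring–mass equation and compares the observed oscillation frequency with ω = √(k/m); the mass and stiffness are arbitrary illustrative values, and a cubic term eps*x³ is included (set to zero here) to show where a genuine nonlinearity would enter.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, k, eps = 1.0, 4.0, 0.0      # mass, stiffness, cubic hardening (0 -> linear case)

def rhs(t, state):
    x, v = state
    return [v, -(k * x + eps * x**3) / m]   # m x'' + k x + eps x^3 = 0

sol = solve_ivp(rhs, [0, 20], [1.0, 0.0], max_step=0.01)

# Estimate the oscillation period from successive downward zero crossings of x(t)
x, t = sol.y[0], sol.t
crossings = t[1:][(x[:-1] > 0) & (x[1:] <= 0)]
period = np.mean(np.diff(crossings))
print("numerical omega:", 2 * np.pi / period, " analytic omega:", np.sqrt(k / m))
```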

Besides building the system model, predictive control is also a very important step.

In many applications the future state must be predicted to provide a basis for control.

For example, an autonomous vehicle needs to predict future traffic conditions in order to drive appropriately.

Nonlinear Model Predictive Control_Chapter1


Chapter1Introduction1.1What Is Nonlinear Model Predictive Control?Nonlinear model predictive control(henceforth abbreviated as NMPC)is an opti-mization based method for the feedback control of nonlinear systems.Its primaryapplications are stabilization and tracking problems,which we briefly introduce inorder to describe the basic idea of model predictive control.Suppose we are given a controlled process whose state x(n)is measured at dis-crete time instants t n,n=0,1,2,....“Controlled”means that at each time instantwe can select a control input u(n)which influences the future behavior of the stateof the system.In tracking control,the task is to determine the control inputs u(n)such that x(n)follows a given reference x ref(n)as good as possible.This means thatif the current state is far away from the reference then we want to control the systemtowards the reference and if the current state is already close to the reference thenwe want to keep it there.In order to keep this introduction technically simple,weconsider x(n)∈X=R d and u(n)∈U=R m,furthermore we consider a referencewhich is constant and equal to x∗=0,i.e.,x ref(n)=x∗=0for all n≥0.With such a constant reference the tracking problem reduces to a stabilization problem;in itsfull generality the tracking problem will be considered in Sect.3.3.Since we want to be able to react to the current deviation of x(n)from the ref-erence value x∗=0,we would like to have u(n)in feedback form,i.e.,in the form u(n)=μ(x(n))for some mapμmapping the state x∈X into the set U of control values.The idea of model predictive control—linear or nonlinear—is now to utilize amodel of the process in order to predict and optimize the future system behavior.Inthis book,we will use models of the formx+=f(x,u)(1.1) where f:X×U→X is a known and in general nonlinear map which assigns to a state x and a control value u the successor state x+at the next time instant.Starting from the current state x(n),for any given control sequence u(0),...,u(N−1)with L.Grüne,J.Pannek,Nonlinear Model Predictive Control,1 Communications and Control Engineering,DOI10.1007/978-0-85729-501-9_1,©Springer-Verlag London Limited201121Introduction horizon length N≥2,we can now iterate(1.1)in order to construct a predictiontrajectory x u defined byx u(0)=x(n),x u(k+1)=fx u(k),u(k),k=0,...,N−1.(1.2)Proceeding this way,we obtain predictions x u(k)for the state of the system x(n+k) at time t n+k in the future.Hence,we obtain a prediction of the behavior of the sys-tem on the discrete interval t n,...,t n+N depending on the chosen control sequence u(0),...,u(N−1).Now we use optimal control in order to determine u(0),...,u(N−1)such that x u is as close as possible to x∗=0.To this end,we measure the distance between x u(k)and x∗=0for k=0,...,N−1by a function (x u(k),u(k)).Here,we not only allow for penalizing the deviation of the state from the reference but also—if desired—the distance of the control values u(k)to a reference control u∗,which here we also choose as u∗=0.A common and popular choice for this purpose isthe quadratic functionx u(k),u(k)=x u(k)2+λu(k)2,where · denotes the usual Euclidean norm andλ≥0is a weighting parameter for the control,which could also be chosen as0if no control penalization is desired. 
The optimal control problem now readsminimize Jx(n),u(·):=N−1k=0x u(k),u(k)with respect to all admissible1control sequences u(0),...,u(N−1)with x u gen-erated by(1.2).Let us assume that this optimal control problem has a solution which is given by the minimizing control sequence u (0),...,u (N−1),i.e.,minu(0),...,u(N−1)Jx(n),u(·)=N−1k=0x u (k),u (k).In order to get the desired feedback valueμ(x(n)),we now setμ(x(n)):=u (0), i.e.,we apply thefirst element of the optimal control sequence.This procedure is sketched in Fig.1.1.At the following time instants t n+1,t n+2,...we repeat the procedure with the new measurements x(n+1),x(n+2),...in order to derive the feedback values μ(x(n+1)),μ(x(n+2)),....In other words,we obtain the feedback lawμby an iterative online optimization over the predictions generated by our model(1.1).2 This is thefirst key feature of model predictive control.1The meaning of“admissible”will be defined in Sect.3.2.2Attentive readers may already have noticed that this description is mathematically idealized since we neglected the computation time needed to solve the optimization problem.In practice,when the measurement x(n)is provided to the optimizer the feedback valueμ(x(n))will only be available after some delay.For simplicity of exposition,throughout our theoretical investigations we will assume that this delay is negligible.We will come back to this problem in Sect.7.6.1.2Where Did NMPC Come from?3Fig.1.1Illustration of the NMPC step at time t nFrom the prediction horizon point of view,proceeding this iterative way the trajectories x u(k),k=0,...,N provide a prediction on the discrete interval t n,...,t n+N at time t n,on the interval t n+1,...,t n+N+1at time t n+1,on the interval t n+2,...,t n+N+2at time t n+2,and so on.Hence,the prediction horizon is moving and this moving horizon is the second key feature of model predictive control.Regarding terminology,another term which is often used alternatively to model predictive control is receding horizon control.While the former expression stresses the use of model based predictions,the latter emphasizes the moving horizon idea. 
Despite these slightly different literal meanings,we prefer and follow the common practice to use these names synonymously.The additional term nonlinear indicates that our model(1.1)need not be a linear map.1.2Where Did NMPC Come from?Due to the vast amount of literature,the brief history of NMPC we provide in this section is inevitably incomplete and focused on those references in the literature from which we ourselves learned about the various NMPC techniques.Furthermore, we focus on the systems theoretic aspects of NMPC and on the academic develop-ment;some remarks on numerical methods specifically designed for NMPC can be found in rmation about the use of linear and nonlinear MPC in prac-tical applications can be found in many articles,books and proceedings volumes, e.g.,in[15,22,24].Nonlinear model predictive control grew out of the theory of optimal control which had been developed in the middle of the20th century with seminal contri-butions like the maximum principle of Pontryagin,Boltyanskii,Gamkrelidze and Mishchenko[20]and the dynamic programming method developed by Bellman [2].Thefirst paper we are aware of in which the central idea of model predictive41Introduction control—for discrete time linear systems—is formulated was published by Propo˘ı[21]in the early1960s.Interestingly enough,in this paper neither Pontryagin’s max-imum principle nor dynamic programming is used in order to solve the optimal con-trol problem.Rather,the paper already proposed the method which is predominant nowadays in NMPC,in which the optimal control problem is transformed into a static optimization problem,in this case a linear one.For nonlinear systems,the idea of model predictive control can be found in the book by Lee and Markus[14] from1967on page423:One technique for obtaining a feedback controller synthesis from knowl-edge of open-loop controllers is to measure the current control process state and then compute very rapidly for the open-loop control function.Thefirst portion of this function is then used during a short time interval,after whicha new measurement of the process state is made and a new open-loop con-trol function is computed for this new measurement.The procedure is then repeated.Due to the fact that neither computer hardware nor software for the necessary“very rapid”computation were available at that time,for a while this observation had little practical impact.In the late1970s,due to the progress in algorithms for solving constrained linear and quadratic optimization problems,MPC for linear systems became popular in control engineering.Richalet,Rault,Testud and Papon[25]and Cutler and Ramaker [6]were among thefirst to propose this method in the area of process control,in which the processes to be controlled are often slow enough in order to allow for an online optimization,even with the computer technology available at that time. 
It is interesting to note that in[25]the method was described as a“new method of digital process control”and earlier references were not mentioned;it appears that the basic MPC principle was re-invented several times.Systematic stability investigations appeared a little bit later;an account of early results in that direction for linear MPC can,e.g.,be found in the survey paper of García,Prett and Morari [10]or in the monograph by Bitmead,Gevers and Wertz[3].Many of the techniques which later turned out to be useful for NMPC,like Lyapunov function based stability proofs or stabilizing terminal constraints were in factfirst developed for linear MPC and later carried over to the nonlinear setting.The earliest paper we were able tofind which analyzes an NMPC algorithm sim-ilar to the ones used today is an article by Chen and Shaw[4]from1982.In this paper,stability of an NMPC scheme with equilibrium terminal constraint in contin-uous time is proved using Lyapunov function techniques,however,the whole opti-mal control function on the optimization horizon is applied to the plant,as opposed to only thefirst part as in our NMPC paradigm.For NMPC algorithms meeting this paradigm,first comprehensive stability studies for schemes with equilibrium termi-nal constraint were given in1988by Keerthi and Gilbert[13]in discrete time and in1990by Mayne and Michalska[17]in continuous time.The fact that for non-linear systems equilibrium terminal constraints may cause severe numerical diffi-culties subsequently motivated the investigation of alternative techniques.Regional1.3How Is This Book Organized?5 terminal constraints in combination with appropriate terminal costs turned out to be a suitable tool for this purpose and in the second half of the1990s there was a rapid development of such techniques with contributions by De Nicolao,Magni and Scattolini[7,8],Magni and Sepulchre[16]or Chen and Allgöwer[5],both in discrete and continuous time.This development eventually led to the formulation of a widely accepted“axiomatic”stability framework for NMPC schemes with sta-bilizing terminal constraints as formulated in discrete time in the survey article by Mayne,Rawlings,Rao and Scokaert[18]in2000,which is also an excellent source for more detailed information on the history of various NMPC variants not men-tioned here.This framework also forms the core of our stability analysis of such schemes in Chap.5of this book.A continuous time version of such a framework was given by Fontes[9]in2001.All stability results discussed so far add terminal constraints as additional state constraints to thefinite horizon optimization in order to ensure stability.Among the first who provided a rigorous stability result of an NMPC scheme without such con-straints were Parisini and Zoppoli[19]and Alamir and Bornard[1],both in1995and for discrete time systems.Parisini and Zoppoli[19],however,still needed a terminal cost with specific properties similar to the one used in[5].Alamir and Bonnard[1] were able to prove stability without such a terminal cost by imposing a rank con-dition on the linearization on the system.Under less restrictive conditions,stability results were provided in2005by Grimm,Messina,Tuna and Teel[11]for discrete time systems and by Jadbabaie and Hauser[12]for continuous time systems.The results presented in Chap.6of this book are qualitatively similar to these refer-ences but use slightly different assumptions and a different proof technique which allows for quantitatively tighter results;for more details we refer 
to the discussions in Sects.6.1and6.9.After the basic systems theoretic principles of NMPC had been clarified,more advanced topics like robustness of stability and feasibility under perturbations,per-formance estimates and efficiency of numerical algorithms were addressed.For a discussion of these more recent issues including a number of references we refer to thefinal sections of the respective chapters of this book.1.3How Is This Book Organized?The book consists of two main parts,which cover systems theoretic aspects of NMPC in Chaps.2–8on the one hand and numerical and algorithmic aspects in Chaps.9–10on the other hand.These parts are,however,not strictly separated;in particular,many of the theoretical and structural properties of NMPC developed in thefirst part are used when looking at the performance of numerical algorithms.The basic theme of thefirst part of the book is the systems theoretic analysis of stability,performance,feasibility and robustness of NMPC schemes.This part starts with the introduction of the class of systems and the presentation of background material from Lyapunov stability theory in Chap.2and proceeds with a detailed61Introduction description of different NMPC algorithms as well as related background information on dynamic programming in Chap.3.A distinctive feature of this book is that both schemes with stabilizing terminal constraints as well as schemes without such constraints are considered and treated in a uniform way.This“uniform way”consists of interpreting both classes of schemes as relaxed versions of infinite horizon optimal control.To this end,Chap.4first de-velops the theory of infinite horizon optimal control and shows by means of dynamic programming and Lyapunov function arguments that infinite horizon optimal feed-back laws are actually asymptotically stabilizing feedback laws.The main building block of our subsequent analysis is the development of a relaxed dynamic program-ming framework in Sect.4.3.Roughly speaking,Theorems4.11and4.14in this section extract the main structural properties of the infinite horizon optimal control problem,which ensure•asymptotic or practical asymptotic stability of the closed loop,•admissibility,i.e.,maintaining the imposed state constraints,•a guaranteed bound on the infinite horizon performance of the closed loop,•applicability to NMPC schemes with and without stabilizing terminal constraints. The application of these theorems does not necessarily require that the feedback law to be analyzed is close to an infinite horizon optimal feedback law in some quantitative sense.Rather,it requires that the two feedback laws share certain prop-erties which are sufficient in order to conclude asymptotic or practical asymptotic stability and admissibility for the closed loop.While our approach allows for inves-tigating the infinite horizon performance of the closed loop for most schemes under consideration—which we regard as an important feature of the approach in this book—we would like to emphasize that near optimal infinite horizon performance is not needed for ensuring stability and admissibility.The results from Sect.4.3are then used in the subsequent Chaps.5and6in order to analyze stability,admissibility and infinite horizon performance properties for NMPC schemes with and without stabilizing terminal constraints,respectively. 
Here,the results for NMPC schemes with stabilizing terminal constraints in Chap.5 can by now be considered as classical and thus mainly summarize what can be found in the literature,although some results—like,e.g.,Theorems5.21and5.22—generalize known results.In contrast to this,the results for NMPC schemes without stabilizing terminal constraints in Chap.6were mainly developed by ourselves and coauthors and have not been presented before in this way.While most of the results in this book are formulated and proved in a mathemat-ically rigorous way,Chap.7deviates from this practice and presents a couple of variants and extensions of the basic NMPC schemes considered before in a more survey like manner.Here,proofs are occasionally only sketched with appropriate references to the literature.In Chap.8we return to the more rigorous style and discuss feasibility and robust-ness issues.In particular,in Sects.8.1–8.3we present feasibility results for NMPC schemes without stabilizing terminal constraints and without imposing viability as-sumptions on the state constraints which are,to the best of our knowledge,either1.3How Is This Book Organized?7 entirely new or were so far only known for linear MPC.These resultsfinish our study of the properties of the nominal NMPC closed-loop system,which is why it is followed by a comparative discussion of the advantages and disadvantages of the various NMPC schemes presented in this book in Sect.8.4.The remaining sec-tions in Chap.8address the robustness of the stability of the NMPC closed loop with respect to additive perturbations and measurement errors.Here we decided to present a selection of results we consider representative,partially from the literature and partially based on our own research.These considerationsfinish the systems theoretic part of the book.The numerical part of the book covers two central questions in NMPC:how can we numerically compute the predicted trajectories needed in NMPC forfinite-dimensional sampled data systems and how is the optimization in each NMPC step performed numerically?Thefirst issue is treated in Chap.9,in which we start by giving an overview on numerical one step methods,a classical numerical technique for solving ordinary differential equations.After having looked at the convergence analysis and adaptive step size control techniques,we discuss some implementa-tional issues for the use of this methods within NMPC schemes.Finally,we investi-gate how the numerical approximation errors affect the closed-loop behavior,using the robustness results from Chap.8.The last Chap.10is devoted to numerical algorithms for solving nonlinearfi-nite horizon optimal control problems.We concentrate on so-called direct methods which form the currently by far preferred class of algorithms in NMPC applications. 
In these methods,the optimal control problem is transformed into a static optimiza-tion problem which can then be solved by nonlinear programming algorithms.We describe different ways of how to do this transformation and then give a detailed introduction into some popular nonlinear programming algorithms for constrained optimization.The focus of this introduction is on explaining how these algorithms work rather than on a rigorous convergence theory and its purpose is twofold:on the one hand,even though we do not expect our readers to implement such algorithms, we still think that some background knowledge is helpful in order to understand the opportunities and limitations of these numerical methods.On the other hand,we want to highlight the key features of these algorithms in order to be able to explain how they can be efficiently used within an NMPC scheme.This is the topic of the final Sects.10.4–10.6,in which several issues regarding efficient implementation, warm start and feasibility are investigated.Like Chap.7and in contrast to the other chapters in the book,Chap.10has in large parts a more survey like character,since a comprehensive and rigorous treatment of these topics would easilyfill an entire book.Still,we hope that this chapter contains valuable information for those readers who are interested not only in systems theoretic foundations but also in the practical numerical implementation of NMPC schemes.Last but not least,for all examples presented in this book we offer either MAT-LAB or C++code in order to reproduce our numerical results.This code is available from the web page81Introduction Both our MATLAB NMPC routine—which is suitable for smaller problems—as well as our C++NMPC package—which can also handle larger problems withreasonable computing time—can also be modified in order to perform simulationsfor problems not treated in this book.In order to facilitate both the usage and themodification,the Appendix contains brief descriptions of our routines.Beyond numerical experiments,almost every chapter contains a small selectionof problems related to the more theoretical results.Solutions for these problemsare available from the authors upon request by email.Attentive readers will notethat several of these problems—as well as some of our examples—are actually lin-ear problems.Even though all theoretical and numerical results apply to generalnonlinear systems,we have decided to include such problems and examples,be-cause nonlinear problems hardly ever admit analytical solutions,which are neededin order to solve problems or to work out examples without the help of numericalalgorithms.Let usfinally say a few words on the class of systems and NMPC problemsconsidered in this book.Most results are formulated for discrete time systems onarbitrary metric spaces,which in particular coversfinite-and infinite-dimensionalsampled data systems.The discrete time setting has been chosen because of its no-tational and conceptual simplicity compared to a continuous time formulation.Still,since sampled data continuous time systems form a particularly important class ofsystems,we have made considerable effort in order to highlight the peculiaritiesof this system class whenever appropriate.This concerns,among other topics,therelation between sampled data systems and discrete time systems in Sect.2.2,thederivation of continuous time stability properties from their discrete time counter-parts in Sect.2.4and Remark4.13,the transformation of continuous time NMPCschemes into the discrete time 
formulation in Sect.3.5and the numerical solutionof ordinary differential equations in Chap.9.Readers or lecturers who are inter-ested in NMPC in a pure discrete time framework may well skip these parts of thebook.The most general NMPC problem considered in this book3is the asymptotictracking problem in which the goal is to asymptotically stabilize a time varyingreference x ref(n).This leads to a time varying NMPC formulation;in particular,the optimal control problem to be solved in each step of the NMPC algorithm ex-plicitly depends on the current time.All of the fundamental results in Chaps.2–4explicitly take this time dependence into account.However,in order to be able toconcentrate on concepts rather than on technical details,in the subsequent chapterswe often decided to simplify the setting.To this end,many results in Chaps.5–8arefirst formulated for time invariant problems x ref≡x∗—i.e.,for stabilizing an x∗—and the necessary modifications for the time varying case are discussed after-wards.3Except for some further variants discussed in Sects.3.5and7.10.1.4What Is Not Covered in This Book?9 1.4What Is Not Covered in This Book?The area of NMPC has grown so rapidly over the last two decades that it is virtually impossible to cover all developments in detail.In order not to overload this book,we have decided to omit several topics,despite the fact that they are certainly important and useful in a variety of applications.We end this introduction by giving a brief overview over some of these topics.For this book,we decided to concentrate on NMPC schemes with online opti-mization only,thus leaving out all approaches in which part of the optimization is carried out offline.Some of these methods,which can be based on both infinite hori-zon andfinite horizon optimal control and are often termed explicit MPC,are briefly discussed in Sects.3.5and4.4.Furthermore,we will not discuss special classes of nonlinear systems like,e.g.,piecewise linear systems often considered in the explicit MPC literature.Regarding robustness of NMPC controllers under perturbations,we have re-stricted our attention to schemes in which the optimization is carried out for a nom-inal model,i.e.,in which the perturbation is not explicitly taken into account in the optimization objective,cf.Sects.8.5–8.9.Some variants of model predictive con-trol in which the perturbation is explicitly taken into account,like min–max MPC schemes building on game theoretic ideas or tube based MPC schemes relying on set oriented methods are briefly discussed in Sect.8.10.An emerging and currently strongly growingfield are distributed NMPC schemes in which the optimization in each NMPC step is carried out locally in a number of subsystems instead of using a centralized optimization.Again,this is a topic which is not covered in this book and we refer to,e.g.,Rawlings and Mayne[23,Chap.6] and the references therein for more information.At the very heart of each NMPC algorithm is a mathematical model of the sys-tems dynamics,which leads to the discrete time dynamics f in(1.1).While we will explain in detail in Sect.2.2and Chap.9how to obtain such a discrete time model from a differential equation,we will not address the question of how to obtain a suitable differential equation or how to identify the parameters in this model.Both modeling and parameter identification are serious problems in their own right which cannot be covered in this book.It should,however,be noted that optimization meth-ods similar to those used in NMPC can also be used for 
parameter identification; see,e.g.,Schittkowski[26].A somewhat related problem stems from the fact that NMPC inevitably leads to a feedback law in which the full state x(n)needs to be measured in order to evaluate the feedback law,i.e.,a state feedback law.In most applications,this information is not available;instead,only output information y(n)=h(x(n))for some output map h is at hand.This implies that the state x(n)must be reconstructed from the output y(n)by means of a suitable observer.While there is a variety of different techniques for this purpose,it is interesting to note that an idea which is very similar to NMPC can be used for this purpose:in the so-called moving horizon state estimation ap-proach the state is estimated by iteratively solving optimization problems over a101Introduction moving time horizon,analogous to the repeated minimization of J(x(n),u(·))de-scribed above.However,instead of minimizing the future deviations of the pre-dictions from the reference value,here the past deviations of the trajectory from the measured output values are minimized.More information on this topic can be found,e.g.,in Rawlings and Mayne[23,Chap.4]and the references therein.References1.Alamir,M.,Bornard,G.:Stability of a truncated infinite constrained receding horizon scheme:the general discrete nonlinear case.Automatica31(9),1353–1356(1995)2.Bellman,R.:Dynamic Programming.Princeton University Press,Princeton(1957).Reprintedin20103.Bitmead,R.R.,Gevers,M.,Wertz,V.:Adaptive Optimal Control.The Thinking Man’s GPC.International Series in Systems and Control Engineering.Prentice Hall,New York(1990) 4.Chen,C.C.,Shaw,L.:On receding horizon feedback control.Automatica18(3),349–352(1982)5.Chen,H.,Allgöwer,F.:Nonlinear model predictive control schemes with guaranteed stabil-ity.In:Berber,R.,Kravaris,C.(eds.)Nonlinear Model Based Process Control,pp.465–494.Kluwer Academic,Dordrecht(1999)6.Cutler,C.R.,Ramaker,B.L.:Dynamic matrix control—a computer control algorithm.In:Pro-ceedings of the Joint Automatic Control Conference,pp.13–15(1980)7.De Nicolao,G.,Magni,L.,Scattolini,R.:Stabilizing nonlinear receding horizon control viaa nonquadratic terminal state penalty.In:CESA’96IMACS Multiconference:ComputationalEngineering in Systems Applications,Lille,France,pp.185–187(1996)8.De Nicolao,G.,Magni,L.,Scattolini,R.:Stabilizing receding-horizon control of nonlineartime-varying systems.IEEE Trans.Automat.Control43(7),1030–1036(1998)9.Fontes,F.A.C.C.:A general framework to design stabilizing nonlinear model predictive con-trollers.Systems Control Lett.42(2),127–143(2001)10.García,C.E.,Prett,D.M.,Morari,M.:Model predictive control:Theory and practice—a sur-vey.Automatica25(3),335–348(1989)11.Grimm,G.,Messina,M.J.,Tuna,S.E.,Teel,A.R.:Model predictive control:for want of alocal control Lyapunov function,all is not lost.IEEE Trans.Automat.Control50(5),546–558 (2005)12.Jadbabaie,A.,Hauser,J.:On the stability of receding horizon control with a general terminalcost.IEEE Trans.Automat.Control50(5),674–678(2005)13.Keerthi,S.S.,Gilbert,E.G.:Optimal infinite-horizon feedback laws for a general class ofconstrained discrete-time systems:stability and moving-horizon approximations.J.Optim.Theory Appl.57(2),265–293(1988)14.Lee,E.B.,Markus,L.:Foundations of Optimal Control Theory.Wiley,New York(1967)15.Maciejowski,J.M.:Predictive Control with Constraints.Prentice Hall,New York(2002)16.Magni,L.,Sepulchre,R.:Stability margins of nonlinear receding-horizon control via inverseoptimality.Systems Control 
Lett.32(4),241–245(1997)17.Mayne,D.Q.,Michalska,H.:Receding horizon control of nonlinear systems.IEEE Trans.Automat.Control35(7),814–824(1990)18.Mayne,D.Q.,Rawlings,J.B.,Rao,C.V.,Scokaert,P.O.M.:Constrained model predictive con-trol:Stability and optimality.Automatica36(6),789–814(2000)19.Parisini,T.,Zoppoli,R.:A receding-horizon regulator for nonlinear systems and a neuralapproximation.Automatica31(10),1443–1451(1995)20.Pontryagin,L.S.,Boltyanskii,V.G.,Gamkrelidze,R.V.,Mishchenko,E.F.:The MathematicalTheory of Optimal Processes.Translated by D.E.Brown.Pergamon/Macmillan Co.,New York (1964)。

Research on Predictive Control Techniques Based on Nonlinear Models


In control systems, predictive control techniques have long received wide attention from researchers.

As the degree of industrial automation and the complexity of processes increase, the accuracy and real-time performance of control become more and more important.

Traditional linear control methods can no longer meet practical control requirements.

Predictive control techniques based on nonlinear models have therefore emerged.

Predictive control techniques based on nonlinear models include neural network predictive control, fuzzy neural network predictive control, and wavelet neural network predictive control.

Among these, neural network predictive control is a nonlinear control strategy with broad application prospects.

Neural network predictive control is an artificial intelligence technique loosely inspired by the nervous system of the brain.

A neural network is learned and trained on large amounts of data and then used for model building and predictive control.

Neural network predictive control has the advantages of intelligence, adaptability, and nonlinear mapping capability, and can achieve accurate control of nonlinear systems.

Fuzzy neural network predictive control applies fuzzy logic within neural network control, enabling the network to handle uncertain and incomplete information and to perform reasonable inference and decision making.

It is more expressive and more intelligent than neural network predictive control alone.

Wavelet neural network predictive control applies wavelet analysis within neural network control and can approximate nonlinear systems using wavelet basis functions.

This kind of predictive control is efficient and accurate and can handle complex nonlinear control problems.

Compared with traditional linear control methods, predictive control techniques based on nonlinear models are more adaptive and more accurate and can deal effectively with the control of nonlinear systems.

At the same time, with advances in neural network hardware and computing technology, predictive control based on nonlinear models will see ever wider application.

In summary, predictive control based on nonlinear models is a new control strategy with broad application prospects.

In the future, predictive control techniques based on nonlinear models will play an important role in fields such as industrial automation and intelligent control.

Research on Model Prediction Methods for Nonlinear Control Systems


With continuing technological progress and the expansion of application areas, control systems have become an indispensable part of modern society.

Among them, nonlinear control systems are widely applied in many fields because they can solve problems that linear systems cannot handle.

Within nonlinear control systems, model predictive methods have become a common control strategy.

I. Overview of nonlinear control systems. A nonlinear system is a system that does not satisfy the principle of linear superposition; that is, the relationship between its output and input is not linear.

Compared with linear systems, the models of nonlinear systems are more complex, so nonlinear control systems require more sophisticated control strategies to achieve effective control.

Take robot control as an example: the environments and tasks a robot faces are complex and variable, and enhancing the robot's flexibility, stability, and precision through control is a difficult problem.

Here nonlinear control systems can play an important role, because the nonlinear nature of the model better reflects the robot's complex state in different environments, and the control parameters can be adjusted dynamically for different task scenarios, achieving more efficient control.

II. Principles of model predictive methods. In nonlinear control systems, model predictive control (MPC) is a fairly common control strategy.

The basic idea of model predictive control is to use the system's dynamic model to predict future system states and then to steer the system toward the desired state by means of the control law.

Specifically, a model predictive method is implemented as follows: 1. Set the control parameters. In model predictive control, the control parameters must be set in advance, including the desired states and the target outputs.

Adjusting these parameters allows more precise control.

2. Predict future system states. Based on the system's dynamic model, the future states are predicted, taking into account environmental changes and noise disturbances, to obtain the state sequence over a future time window.

3. Optimize the control strategy. An optimization algorithm is used to compute an optimal sequence of control signals such that the system states over the coming time window reach the desired states while satisfying the various constraints (a sketch of this step is given after step 4 below).

This step is the core of the whole model predictive method.

4. Apply the control strategy. The control signals obtained from the optimization are applied, so that the system state evolves over the coming period toward the desired state.
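A compact sketch of steps 2–4 is given below; the model, horizon, bounds, and the state constraint are all illustrative assumptions. The point of the sketch is how constraints enter: input bounds are handed to the optimizer directly, while a state constraint is expressed through the predicted trajectory.

```python
import numpy as np
from scipy.optimize import minimize

def f(x, u):
    # assumed nonlinear model for illustration only
    return 0.8 * x + 0.3 * x * u + u

def predict(x0, u_seq):
    x, traj = x0, []
    for u in u_seq:
        x = f(x, u)
        traj.append(x)
    return np.array(traj)

x0, x_ref, N = 2.0, 1.0, 8

def cost(u_seq):
    x_traj = predict(x0, u_seq)
    return np.sum((x_traj - x_ref) ** 2) + 0.05 * np.sum(u_seq ** 2)

# state constraint x(k) <= 2.5, expressed for the optimizer as g(u) >= 0
state_con = {"type": "ineq", "fun": lambda u_seq: 2.5 - predict(x0, u_seq)}

res = minimize(cost, np.zeros(N), method="SLSQP",
               bounds=[(-0.5, 0.5)] * N,        # input constraints
               constraints=[state_con])
print("first control move to apply:", res.x[0])
```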

III. Characteristics of model predictive methods. Model predictive methods are widely used in nonlinear control systems because of their many desirable properties, the main ones being: 1. Strong predictive capability. Model predictive methods use the system's dynamic model to predict future states, which enables more precise control.

Research on Nonlinear System Design Based on Model Predictive Control


Chapter 1: Introduction. With continuing advances in science and technology, the control of nonlinear systems has remained one of the hot research topics.

Model predictive control (MPC) is an important optimization-based control method that has received wide attention and application in recent years.

To make full use of MPC's prediction and optimization capabilities, researchers have studied how to apply it to the control of nonlinear systems in order to achieve better control performance.

Chapter 2: Modeling of nonlinear systems. Modeling of nonlinear systems is the foundation of nonlinear control.

Commonly used modeling approaches include gray-box modeling, black-box modeling, and physical (first-principles) modeling.

Among these, physically based modeling has particular advantages, especially in a number of practical control problems.

In addition, physical models can be simplified and reduced with various numerical methods, such as the Euler method and Runge-Kutta algorithms.
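As an example of the Runge-Kutta methods mentioned above, the following is a minimal classical fourth-order Runge-Kutta (RK4) step applied to a simple pendulum model; the model and step size are illustrative assumptions.

```python
import numpy as np

def rk4_step(f, x, t, h):
    """One classical 4th-order Runge-Kutta step for x' = f(t, x)."""
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Illustrative physical model: pendulum  theta'' = -(g/l) sin(theta)
g, l = 9.81, 1.0
def pendulum(t, x):
    theta, omega = x
    return np.array([omega, -(g / l) * np.sin(theta)])

x, t, h = np.array([0.5, 0.0]), 0.0, 0.01
for _ in range(1000):
    x = rk4_step(pendulum, x, t, h)
    t += h
print("theta after 10 s:", x[0])
```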

Chapter 3: Fundamentals of model predictive control. Model predictive control is a control method based on a mathematical model. Its core idea is to predict the future behavior of the system and optimize the adjustable parameters so as to achieve the best control performance.

MPC has the following features: Predictive: MPC builds a prediction model to anticipate future system states and adjusts the control input according to the prediction.

Optimizing: MPC recasts the control problem as an optimization problem; by predicting over a future period (typically seconds to minutes), the controller stays at the optimum throughout the control process.

Multivariable: MPC can handle systems with multiple inputs and multiple outputs simultaneously and achieves complete control by taking their interactions into account.

Chapter 4: Nonlinear system control methods based on MPC. Nonlinear control based on MPC can be divided into two approaches: model-based MPC and data-based MPC.

Model-based MPC: first a mathematical model of the nonlinear system is built; this model is then used as the core of the MPC algorithm, and the control moves are computed through prediction and optimization.

The advantage of this approach is more accurate and stable control, but it requires a fairly accurate system model.

Data-based MPC: this approach usually does not require a very accurate model; instead, prediction and optimization are performed on the basis of a large amount of experimental data.

The advantage of this approach is that it is better suited to real systems, but it requires handling large amounts of data and performing data preprocessing.

Chapter 5: Case study of a nonlinear system. Taking a double-pendulum control system as an example, MPC-based nonlinear control experiments are carried out: different control experiments are designed for different load conditions, and the control performance of each experiment is observed.

Chapter 10: Generalized Predictive Control


10.1.1 Prediction model

where

G_j(z⁻¹) = g_{j,0} + g_{j,1} z⁻¹ + … + g_{j,j−1} z^{−(j−1)}

H_j(z⁻¹) = h_{j,1} z⁻¹ + h_{j,2} z⁻² + … + h_{j,nb} z^{−nb}

Then from (10.1.4) and (10.1.5) we obtain

y(k+j | k) = G_j(z⁻¹) u(k+j−1 | k) + H_j(z⁻¹) u(k−1) + F_j(z⁻¹) y(k)    (10.1.7)

Equations (10.1.4), (10.1.5), (10.1.7) and (10.1.8) can all serve as the prediction model of GPC. In this way, from the known input–output information and the future input values, the future outputs of the plant can be predicted.

F_1(z⁻¹)y(k), …, F_N(z⁻¹)y(k) can all be computed from the information available at time k, namely y(τ) for τ ≤ k and u(τ) for τ < k.    (10.1.15)

If we write

y(k | k) = [y(k+1 | k), …, y(k+N | k)]ᵀ
u(k | k) = [u(k | k), …, u(k+N_u−1 | k)]ᵀ
f(k) = [f_1(k), …, f_N(k)]ᵀ

The following gives a recursive algorithm for E_j(z⁻¹) and F_j(z⁻¹). First, from (10.1.3) we can write

1 = E_j(z⁻¹) A(z⁻¹) + z^{−j} F_j(z⁻¹)
1 = E_{j+1}(z⁻¹) A(z⁻¹) + z^{−(j+1)} F_{j+1}(z⁻¹)

Subtracting the two equations gives

A(z⁻¹) [E_{j+1}(z⁻¹) − E_j(z⁻¹)] = z^{−j} [F_j(z⁻¹) − z⁻¹ F_{j+1}(z⁻¹)]

where z⁻¹ is the backward-shift operator, denoting the corresponding quantity one sampling period earlier, i.e. z⁻¹ y(k) = y(k−1) and z⁻¹ u(k) = u(k−1);
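The sketch below implements this Diophantine recursion numerically and assembles the resulting dynamic (step-response) matrix used in the GPC prediction; the polynomials A and B are invented illustrative values, and whether A itself or the integrated polynomial A·Δ is used depends on the particular GPC formulation.

```python
import numpy as np

def diophantine(a, N):
    """Recursively solve 1 = E_j(z^-1) A(z^-1) + z^-j F_j(z^-1), j = 1..N.
    `a` holds the coefficients [1, a1, ..., a_na] of A(z^-1)
    (in many GPC formulations this is A(z^-1)*Delta, the integrated model).
    Returns lists of coefficient arrays E[j-1], F[j-1]."""
    na = len(a) - 1
    E, F = [], []
    e = np.array([1.0])                            # j = 1: E_1 = 1
    f = np.array([-a[i + 1] for i in range(na)])   # F_1 = z*(1 - A)
    E.append(e.copy()); F.append(f.copy())
    for _ in range(1, N):
        e_new = f[0]                               # new leading coefficient of E_{j+1}
        f_next = np.empty(na)
        for i in range(na):
            f_next[i] = (f[i + 1] if i + 1 < na else 0.0) - e_new * a[i + 1]
        e = np.append(e, e_new)
        f = f_next
        E.append(e.copy()); F.append(f.copy())
    return E, F

# Illustrative polynomials (assumed values): A*Delta and B
a_tilde = np.array([1.0, -1.8, 0.8])     # (1 - 0.8 z^-1)(1 - z^-1)
b = np.array([0.4, 0.2])                 # B(z^-1) = 0.4 + 0.2 z^-1
N = 4
E, F = diophantine(a_tilde, N)

# G_j = E_j * B; its first j coefficients form the lower-triangular dynamic matrix
G = np.zeros((N, N))
for j in range(1, N + 1):
    g = np.convolve(E[j - 1], b)
    G[j - 1, :j] = g[:j][::-1]           # row j: g_{j-1}, ..., g_0
print(G)
```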

Research on Model Predictive Control Algorithms for Nonlinear Systems


With the rapid development of modern technology, more and more automated and intelligent devices appear in production and daily life.

These devices need efficient and accurate control algorithms to achieve their goals.

At the same time, the widespread presence of nonlinear systems makes it difficult for traditional linear control algorithms to meet practical needs.

Model predictive control algorithms for nonlinear systems have emerged in response.

I. What is a nonlinear-system model predictive control algorithm? A nonlinear-system model predictive control algorithm is a method that builds a mathematical model of the nonlinear system, predicts the system response, and uses the prediction to realize control.

It uses historical data and predictions of the future to optimize the control output and achieve the best possible result.

The algorithm is essentially an optimization algorithm whose objective is to minimize the prediction error in order to improve system performance.

II. Basic idea of nonlinear-system model predictive control algorithms. The basic idea can be summarized as follows: 1. Build a prediction model of the nonlinear system. The prediction model is usually a dynamic state-space model or a nonlinear regression model.

The prediction model is fitted to historical data, and the optimal control output is obtained through predictions of the future.

2. Perform optimizing control. Based on the prediction model, the optimal control output is computed from predictions of the future and analysis of historical data.

To keep the algorithm simple and stable, often only the minimization of the prediction error is considered, while constraints and other factors are neglected.

3. Implement the controller. The result of the optimization is applied and converted into machine control signals.

This control method has good real-time performance and adaptability and can be applied to complex nonlinear systems.

III. Research topics of nonlinear-system model predictive control algorithms. The research usually covers the following aspects: 1. Modeling methods. Modeling the nonlinear system is the key to the algorithm; choosing a suitable modeling method improves both its accuracy and its practicality.

Current modeling methods mainly include ARMAX-model-based methods, neural-network-based methods, and time-delay-based methods.

2. Optimization methods. The optimization method is another key ingredient; different optimization methods affect the convergence speed and stability of the algorithm.

Mainstream optimization methods include nonlinear programming methods, model predictive control methods, and evolutionary algorithms.

3. Real-time performance and computational efficiency. Nonlinear-system model predictive control algorithms must be sufficiently fast and efficient to cope with complex practical scenarios.

Research on the Optimization of Nonlinear Model Predictive Control Methods


With the continuous development of modern technology, human control over all kinds of complex systems has become increasingly efficient and precise, and nonlinear model predictive control is widely used in industrial production, environmental monitoring, transportation, and other fields.

However, the method still has shortcomings in practical applications, such as high computational complexity and insufficient accuracy, so it is worth optimizing it to improve its effectiveness.

This article discusses three aspects: optimizing the model prediction algorithm, reducing the computational delay, and improving the robustness of the system.

I. Optimizing the model prediction algorithm. In nonlinear model predictive control, optimizing the prediction algorithm is an important means of improving control performance.

Traditional optimization algorithms such as gradient descent and least squares work well for linear problems, but for nonlinear problems they are less efficient and require long computation times.

Traditional algorithms therefore need to be improved and optimized to make the prediction model better suited to nonlinear problems.

Commonly used optimization algorithms today include the Levenberg-Marquardt (LM) algorithm, genetic algorithms, and simulated annealing. Their advantage is that they adapt better to large, complex problems and can balance exploration of the search space against computation time, improving both the accuracy and the speed of model prediction.
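As one example of such a stochastic optimizer, the sketch below applies a basic simulated-annealing search to a made-up nonconvex cost; the cost function, cooling schedule, and step size are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def cost(u):
    # made-up nonconvex cost with several local minima
    return (u - 1.0) ** 2 + 0.5 * np.sin(5.0 * u)

u, T = 3.0, 1.0                                  # initial guess and initial "temperature"
best_u, best_c = u, cost(u)
for it in range(2000):
    cand = u + rng.normal(scale=0.3)             # random neighbour
    d = cost(cand) - cost(u)
    if d < 0 or rng.random() < np.exp(-d / T):   # accept better moves, or worse ones with some probability
        u = cand
    if cost(u) < best_c:
        best_u, best_c = u, cost(u)
    T *= 0.995                                   # geometric cooling schedule
print("approximate minimizer:", best_u, "cost:", best_c)
```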

II. Reducing the computational delay. Nonlinear model predictive control often requires a large amount of computation, so latency is currently one of the main problems encountered in practical applications.

Several methods can be used to shorten the computation time.

One is parameter compression: while preserving the prediction quality, the number of model parameters is reduced, which lowers both computation time and memory usage.

Another is the use of kernel functions: during prediction the data are transformed with a kernel function so that the transformed data are easier to process and compute.

In addition, low-pass filtering is an effective technique; it processes the low-frequency part of the signals and thereby reduces the influence of noise on the model.

III. Improving system robustness. Traditional nonlinear model predictive control usually targets a single nominal model, and when system parameters change or noise disturbances occur, the prediction quality is often unsatisfactory.

It is therefore essential to improve the robustness of the control system.

Robustness allows the control algorithm to adapt well to noise disturbances and parameter variations while maintaining good prediction performance.

On the one hand, this requires appropriate modifications of the model structure to make it more stable; on the other hand, the adaptability of the controller can be improved by increasing its self-adaptation and the number of iterations.

Nonlinear Model Predictive Control


非线性模型预测控制
非线性模型预测控制,是一种基于非线性模型的控制方法,它可以有
效地控制复杂的系统,并且可以满足多个约束条件。

NMPC的基本思想是,通过预测未来的状态,并在预测的状态下求解最优控制量,从而
实现最优控制。

NMPC的优势在于,它可以有效地控制复杂的系统,并且可以满足多个
约束条件。

NMPC可以有效地控制复杂的系统,因为它可以根据系统的
实际状态来预测未来的状态,从而更好地控制系统。

此外,NMPC可以
满足多个约束条件,因为它可以根据系统的实际状态来求解最优控制量,从而满足多个约束条件。

NMPC is applied very widely: it can be used to control a variety of complex systems, such as robots, aerospace systems, automobiles and power systems.

For example, NMPC can be used to control the motion of a robot and thereby automate its operation.

It can likewise be used to control aerospace systems and automate their operation.

The drawback of NMPC is its relatively high computational complexity: at every step it must predict the future states and solve an optimization problem over those predictions.

NMPC is also limited by the accuracy of the system model: the future states are predicted from the model, and if the model is not accurate enough the resulting control performance may be poor.

In summary, NMPC is an effective control method that can handle complex systems and satisfy multiple constraints; however, its computational cost is high and it depends on the accuracy of the system model, and these factors must be taken into account when applying it.

Application of Nonlinear Model Predictive Control in Electromechanical Drive Systems

Introduction: Electromechanical drive systems are an important part of modern industry and span many fields, including factory automation, transportation and aerospace.

As technology advances, the demands placed on electromechanical drive systems keep rising, and more precise and efficient control methods are needed to guarantee stability and performance.

In this context, nonlinear model predictive control (NMPC) has shown great potential and has been widely applied to electromechanical drive systems.

1. Characteristics of electromechanical drive systems. An electromechanical drive system consists of multiple components and subsystems, including electric motors, servo drives and sensors.

The interactions among these components make the overall system nonlinear, multivariable and strongly coupled.

These characteristics make the system difficult to control, and traditional linear control methods often fail to meet the stability and performance requirements.

2. Basic principle of NMPC. Nonlinear model predictive control is a model-based control method: the system is described by a nonlinear dynamic model, and prediction and optimization are carried out on that model.

By predicting the future states and outputs of the system, NMPC computes the optimal control actions that guide the operation of the system.

Compared with traditional linear control, NMPC can handle nonlinear systems and can take the system constraints into account.

This makes NMPC a powerful control method that can address the complex control problems arising in electromechanical drive systems.

3. Applications of NMPC in electromechanical drive systems.

3.1 Dynamic optimization. An electromechanical drive system usually has to satisfy several performance criteria at once, such as speed-tracking accuracy, vibration suppression and response time. NMPC can encode these criteria in an objective function and solve for the best strategy with an optimization algorithm, achieving the best overall system performance.
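A simple way to encode such competing criteria is a weighted stage cost, as sketched below; the state layout (position-like and velocity-like components) and the weights are assumptions made for this illustration and would be tuned for a real drive.

```python
import numpy as np

# Weighted stage cost combining tracking accuracy, vibration suppression and effort.
def stage_cost(x, u, x_ref, w_track=10.0, w_vib=1.0, w_u=0.1):
    tracking = (x[0] - x_ref) ** 2    # position / speed tracking error
    vibration = x[1] ** 2             # penalize oscillatory velocity
    effort = float(u) ** 2            # actuator effort
    return w_track * tracking + w_vib * vibration + w_u * effort
```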

3.2 Robust control. Uncertainty and disturbances in electromechanical drive systems frequently degrade control performance and can cause instability. NMPC can improve robustness by representing the uncertainty explicitly in the prediction model, so that the closed loop adapts well to uncertainty and disturbances.

3.3 Fault detection and diagnosis. In an electromechanical drive system, component faults can severely affect the performance of the whole system. Combined with a fault detection and diagnosis (FDD) system, NMPC can monitor the system state in real time, predict and diagnose faults, and thus trigger timely repair and adjustment.


Chapter 10Numerical Optimal Control of Nonlinear SystemsIn this chapter,we present methods for the numerical solution of the constrained finite horizon nonlinear optimal control problems which occurs in each iterate of the NMPC procedure.To this end,we first discuss standard discretization techniques to obtain a nonlinear optimization problem in standard form.Utilizing this form,we outline basic versions of the two most common solution methods for such problems,that is Sequential Quadratic Programming (SQP)and Interior Point Methods (IPM).Furthermore,we investigate interactions between the differential equation solver,the discretization technique and the optimization method and present several NMPC specific details concerning the warm start of the optimization routine.Finally,we discuss NMPC variants relying on inexact solutions of the finite horizon optimal control problem.10.1Discretization of the NMPC ProblemThe most general NMPC problem formulation is given in Algorithm 3.11and will be the basis for this chapter.In Step (2)of Algorithm 3.11we need to solve the optimal control problem minimize J N n,x 0,u(·) :=N −1 k =0ωN −k n +k,x u (k,x 0),u(k)+F J n +N,x u (N,x 0)with respect to u(·)∈U N X 0(n,x 0),subject to x u (0,x 0)=x 0,x u (k +1,x 0)=f x u (k,x 0),u(k) .(OCP n N ,e )We will particularly emphasize the case in which the discrete time system (2.1)is induced by a sampled data continuous time control systems ˙x(t)=f c x(t),v(t) ,(2.6)L.Grüne,J.Pannek,Nonlinear Model Predictive Control ,Communications and Control Engineering,DOI 10.1007/978-0-85729-501-9_10,©Springer-Verlag London Limited 201127527610Numerical Optimal Control of Nonlinear Systems however,all results also apply to discrete time models not related to a sampled data system.Throughout this chapter we assume X=R d and U=R m.Furthermore,we rename the terminal cost F in(OCP n N,e)to F J because we will use the symbol F with a different meaning,below.So far,in Chap.9we have shown how the solution x u(k,x0)of the discrete time system(2.1)in the last line of(OCP n N,e)can be obtained and evaluated using numerical methods for differential equations,but not how the minimization problem (OCP n N,e)can be solved.The purpose of this chapter is tofill this gap.In particular,wefirst show how problem(OCP n N,e)can be reformulated to match the standard problem in nonlinear optimizationminimize F(z)with respect to z∈R n z(NLP)subject to G(z)=0and H(z)≥0with maps F:R n z→R,G:R n z→R r g and H:R n z→R r h.Even though(OCP n N,e)is already a discrete time problem,the process of con-verting(OCP n N,e)into(NLP)is called discretization.Here we will stick with this commonly used term even though in a strict sense we only convert one discrete problem into another.As we will see,the(NLP)problem related to(OCP n N,e)can be formulated in dif-ferent ways.Thefirst variant,called full discretization,incorporates the dynamics (2.1)as additional constraints into(NLP).This approach is very straightforward but causes large computing times for solving the problem(NLP)due to its dimension-ality,unless special techniques for handling these constraints can be used on the optimization algorithm level,cf.the paragraph on condensing in Sect.10.4,below. 
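To make the standard form (NLP) concrete before the individual discretization variants are discussed, the following sketch shows how a problem with cost F, equality constraints G and inequality constraints H can be passed to an off-the-shelf solver; the SciPy-based solver choice and the toy functions are illustrative assumptions, not part of the book's text.

```python
import numpy as np
from scipy.optimize import minimize

# Toy instance of the standard form: minimize F(z) s.t. G(z) = 0, H(z) >= 0.
F = lambda z: z[0] ** 2 + 2 * z[1] ** 2
G = lambda z: np.array([z[0] + z[1] - 1.0])     # equality constraints
H = lambda z: np.array([z[0], 5.0 - z[1]])      # inequality constraints

res = minimize(F, x0=np.zeros(2), method="SLSQP",
               constraints=[{"type": "eq", "fun": G},
                            {"type": "ineq", "fun": H}])
z_opt = res.x
```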
The second approach is designed to deal with this dimensionality problem.It re-cursively computes x u(k,x0)from the dynamics(2.1)outside of the optimization problem(NLP)and is hence called recursive discretization.Proceeding this way, the dimension of the optimization variable z and the number of constraints p is reduced significantly.However,in this method it is difficult to incorporate preex-isting knowledge on optimal solutions,as derived,e.g.,from the reference,to the optimizer.Furthermore,computing x u(k,x0)on large time intervals may lead to a rather sensitive dependence of the solution on the control u(·),which may cause nu-merical problems in the algorithms for solving(NLP).To overcome these problems, we introduce a third problem formulation,called shooting discretization,which is closely related to concept of shooting methods for differential equations and can be seen as a compromise between the two other methods.After we have given the details of these discretization techniques,methods for solving the optimization problem(NLP)and their applicability in the NMPC context will be discussed in the subsequent sections.In order to illustrate certain effects,we will repeatedly consider the following example throughout this chapter.Example10.1Consider the inverted pendulum on a cart problem from Exam-ple2.10with initial value x0=(2,2,0,0),sampling period T=0.1and cost func-tional10.1Discretization of the NMPC Problem 277Fig.10.1Closed-loop trajectories x 1(·)for small optimization horizons NJ N (x 0,u):=N −1i =0 x(i),u(i)where the stage costs are of the integral type (3.4)withL(x,u):= 3.51sin (x 1−π)2+4.82sin (x 1−π)x 2+2.31x 22+2 1−cos (x 1−π) · 1+cos (x 2)22+x 23+x 24+u 2 2.We impose constraints on the state and the control vectors by definingX :=R ×R ×[−5,5]×[−10,10],U :=[−5,5]but we do not impose stabilizing terminal constraints.All subsequent computations have been performed with the default tolerance 10−6in the numerical solvers in-volved,cf.Sect.10.4for details on these tolerances.As we have seen in the previous chapters,the horizon length can be a critical component for the stability of an NMPC controlled system.In particular,the NMPC closed loop may be unstable if the horizon length is too short as shown in Fig.10.1.In this and in the subsequent figures we visualize closed-loop trajectories for dif-ferent optimization horizons as a surface in the (t,N)-plane.Looking at the figure one sees that for very small N the trajectory simply swings around the downward equilibrium.For N ≥12,the NMPC controller is able to swing up the pendulum to one of the upright equilibria (π,0,0,0)or (−π,0,0,0)but is not able to stabilize the system there.As expected from Theorem 6.21,for larger optimization horizons the closed-loop solution tends toward the upright equilibrium (−π,0,0,0)as shown in Fig.10.2.If we further increase the optimization horizon,then it can be observed that the algorithm chooses to stabilize the upright equilibrium (π,0,0,0)instead of (−π,0,0,0)as illustrated in Fig.10.3.Moreover,for some horizon lengths N ,27810Numerical Optimal Control of Nonlinear SystemsFig.10.2Closed-loop trajectories x1(·)for medium size optimization horizons NFig.10.3Closed-loop trajectories x1(·)for large optimization horizons Nstability is lost.Particularly,for N between50and55the behavior is similar to that for N∈{12,13,14}:the controller is able to swing up the pendulum to an up-right position but unable to stabilize it there.As a consequence,it appears that the NMPC algorithm cannot decide which of 
the upward equilibria shall be stabilized and the trajectories repeatedly move from one to another,i.e.,the x1-component of the closed-loop trajectory remains close to one of these equilibria only for a short time.This behavior contradicts what we would expect from the theoretical result from Theorem6.21and can hence only be explained by numerical problems in solving (OCP n N,e)due to the large optimization horizon.Since by now we do not have the means to explain the reason for this behavior,we postpone this issue to Sect.10.4. In particular,the background of this effect will be discussed after Example10.28 where we also outline methods to avoid this effect.As a result,we obtain that numerically the set of optimization horizon N for which stability can be obtained is not only bounded from below—as expected from the theoretical result in Theorem6.21—but also from above.Unfortunately,for a10.1Discretization of the NMPC Problem 279general example it is a priori not clear if the set of numerically stabilizing horizons is nonempty,at all.Furthermore,as we will see in this chapter,also the minimal optimization horizon which numerically stabilizes the closed-loop system may vary with the used discretization technique.This is due to the fact that for N close to the theoretical minimal stabilizing horizon and for the NMPC variant without sta-bilizing terminal constraints considered here,the value αin (5.1)is close to 0and hence small numerical inaccuracies may render αnegative and thus destabilize the system.Since different discretizations lead to different numerical errors,it is not entirely surprising that the minimal stabilizing horizon in the numerical simulation depends on the chosen discretization technique,as Example 10.2will show.Full DiscretizationIn order to obtain an optimization problem in standard form (NLP )the full dis-cretization technique is the most simplest and most common one.Recall that the discrete time trajectory x u (k,x 0)in (OCP n N ,e )is defined by the dynamics (2.1)viax u (0,x 0)=x 0,x u (k +1,x 0)=f x u (k,x 0),u(k) ,(2.2)where in the case of a continuous time system the map f is obtained from a numer-ical approximation via (9.12).Clearly,each control value u(k),k ∈{0,...,N −1}is an optimization variable in (OCP n N ,e )and will hence also be an optimization variable in (NLP ).The idea of the full discretization is now to treat each point on the trajectory x u (k,x 0)as an additional independent d -dimensional optimization variable.This implies that we need additional conditions which ensure that the resulting optimal choice of the variables x u (k,x 0)obtained from solving (NLP )is a trajectory of (2.1).To this end,we include the equations in (2.2)as equality constraints in (NLP ).This amounts to rewriting (2.2)asx u (k +1,x 0)−f x u (k,x 0),u(k) =0for k ∈{0,...,N −1},(10.1)x u (0,x 0)−x 0=0.(10.2)Next,we have to reformulate the constraints u(·)∈U N X 0(x 0).According to Defini-tion 3.2these conditions can be written explicitly asx u (k,x 0)∈X k ∈{0,...,N },and u(k)∈U x u (k,x 0) k ∈{0,...,N −1}and in the case of stabilizing terminal constraints we get the additional conditionx u (N,x 0)∈X 0.For simplicity of exposition,we only consider the case of time invariant state con-straints.The setting is,however,easily extended to the case of time varying con-straints X k as introduced in Sect.8.8.28010Numerical Optimal Control of Nonlinear Systems Here and in the following,we assume X ,U (x)and—if applicable—X 0to be characterized by Definition 3.6,i.e.,by a set of functions 
G S i :R d ×R m →R ,i ∈E S ={1,...,p g },and H S i :R d ×R m →R ,i ∈I S ={p g +1,...,p g +p h }via equality and inequality constraints of the formG S i x u (k,x 0),u(k) =0,i ∈E S ,k ∈K i ⊆{0,...,N },(10.3)H S i x u (k,x 0),u(k) ≥0,i ∈I S ,k ∈K i ⊆{0,...,N }.(10.4)The index sets K i ,i ∈E S ∪I S in these constraints formalize that some of the con-ditions may not be required for all times k ∈{0,...,N }.For instance,the terminal constraint condition x u (N,x 0)∈X 0is only required for k =N ,hence the sets K i corresponding to the respective conditions would be K i ={N }.In order to simplify the notation we have included u(N)into these conditions even though the functional J N in (OCP n N ,e )does not depend on this variable.However,since the functions H S i and G S i in (10.3)and (10.4)do not need to depend on u ,this can be done without loss of generality.Summarizing,we obtain the constraint function G in (NLP )from (10.1),(10.2),(10.3)and H in (NLP )from (10.4).The remaining component of the optimization problem is the optimization variable which is defined asz := x u (0,x 0) ,...,x u (N,x 0) ,u(0) ,...,u(N −1)(10.5)and the cost function F ,which we obtain straightforwardly asF (z):=N −1k =0ωN −k n +k,x u (k,x 0),u(k) +F J n +N,x u (N,x 0) .(10.6)Hence,the fully discretized problem (OCP n N ,e )is of the form minimize F (z):=N −1k =0ωN −k n +k,x u (k,x 0),u(k) +F J n +N,x u (N,x 0)with respect to z := x u (0,x 0) ,...,x u (N,x 0) ,u(0) ,...,u(N −1) ∈R n zsubject to G(z)=⎡⎣[G S i (x u (k,x 0),u(k))]i ∈E S ,k ∈K i [x u (k +1,x 0)−f (x u (k,x 0),u(k))]k ∈{0,...,N −1}x u (0,x 0)−x 0⎤⎦=0and H (z)= H S i x u (k,x 0),u(k) i ∈I S ,k ∈K i≥0.Similar to Definition 3.6we write the equality and inequality constraints as G =(G 1,...,G r g )and H =(H r g +1,...,H r g +r h )with r g :=(N +2)·d + i ∈E S K i and r h := i ∈I S K i where K i denotes the number of elements of the set K i .The corresponding index sets are denoted by E ={1,...,r g }and I ={r g +1,...,r g +r h }.10.1Discretization of the NMPC Problem281Fig.10.4HierarchyThe advantage of the full discretization is the simplicity of the transformation from(OCP n N,e)to(NLP).Unfortunately,it results in a high-dimensional optimiza-tion variable z∈R(N+1)·d+N·m and large number of both equality and inequality constraints r g and r h.Since computing times and accuracy of solvers for problems of type(NLP)depend massively on the size of these problems,this is highly un-desirable.One way to solve this problem is to handle the additional constraints by special techniques in the optimization algorithm,cf.the paragraph on condensing in Sect.10.4,below.Another way is to reduce the number of optimization variables directly in the discretization procedure,which is what we describe next. Recursive DiscretizationIn the previous section we have seen that the full discretization of(OCP n N,e)leads to a high-dimensional optimization problem(NLP).The discretization technique which we present now avoids this problem and minimizes the number of compo-nents within the optimization variable z as well as in the equality constraints G.The methodology of the recursive discretization is inspired by the(hierarchical) divide and conquer principle.According to this principle,the problem is broken down into subproblems which can then be treated by specialized solution methods. 
The fundamental idea of the recursive discretization is to decouple the dynamics of the control system from the optimization problem(NLP).At the control system level in the hierarchy displayed in Fig.10.4,a specialized solution method—for instance a numerical solver for an underlying ordinary dif-ferential equation—can be used to evaluate the dynamics of the system for given control sequences u(·).These control sequences correspond to values that are re-quired by the solver for problem(NLP).Hence,the interaction between these two components consists in sending control sequences u(·)and initial values x0from the(NLP)solver to the solver of the dynamics,which in turn sends computed state sequences x u(·,x0)back to the(NLP)solver,cf.Fig.10.5.Formally,the optimization variable z reduces toz:=u(0) ,...,u(N−1).(10.7)The constraint functions G S i:R d×R m→R,i∈E S,and H S i:R d×R m→R, i∈I S according to(10.3),(10.4)can be evaluated after computing the solution28210Numerical Optimal Control of Nonlinear SystemsFig.10.5Communication of data between the elements of the computing hierarchyx u (·,x 0)by the solver for the dynamics.This way we do not have to consider (10.1),(10.2)in the constraint function G .Hence,the equality constraints in (NLP )are given by G =[G S i ]with G S i from (10.3).The inequality constraints H which are given by (10.4)and the cost function F from (10.6)remain unchanged compared to the full discretization.In total,the recursively discretized problem (OCP n N ,e )takes the form minimize F (z):=N −1k =0ωN −k n +k,x u (k,x 0),u(k) +F J n +N,x u (N,x 0) with respect to z := u(0) ,...,u(N −1) ∈R n zsubject to G(z)= G S i x u (k,x 0),u(k) i ∈E S ,k ∈K i =0and H (z)= H S i x u (k,x 0),u(k) i ∈I S ,k ∈K i≥0.Taking a look at the dimensions of the optimization variable and the equality con-straints,we see that using the recursive discretization the optimization variable con-sists of N ·m scalar components and the number of equality constraints is reduced to the number of conditions in (10.3),that is,r g := i ∈E S K i .Hence,regarding the number of optimization variables and constraints,this discretization is opti-mal.Still,this method has some drawbacks compared to the full discretization.As we will see in the following sections,the algorithms for solving (NLP )proceed itera-tively,i.e.,starting from an initial guess z 0they compute a sequence z k converging to the minimizer z .The convergence behavior of this iteration can be significantly improved by providing a good initial guess z 0to the algorithm.If,for instance,the initial value x 0is close to the reference solution x ref ,then x ref itself is likely to be such a good initial guess.However,since in the recursive discretization the trajec-tory x u (k,x 0)is not a part of the optimization variable z ,there is no easy way to use this information.Another drawback is that the solution x u (k,x 0)may depend very sensitively on the control sequence u(·),in particular when N is large.For instance,a small change10.1Discretization of the NMPC Problem283 in u(0)may lead to large changes in x u(k,x0)for large k and consequently in F(z), G(z)and H(z),which may cause severe problems in the iterative optimization al-gorithm.In the full discretization,the control value u(0)only affects x u(0,x0)and thus those entries in F,G and H corresponding to k=0,i.e.,the functions F(z), G(z)and H(z)depend much less sensitive on the optimization variables.For these reasons,we now present a third method,which can be seen as a com-promise between the full 
and the recursive discretization.Multiple Shooting DiscretizationThe idea of the so-called multiple shooting discretization as introduced in Bock[4] is derived from the solution of boundary value problems of differential equations, see,e.g.,Stoer and Bulirsch[36].Within these boundary value problems one tries to find initial values for trajectories which satisfy given terminal conditions.The term shooting has its origins in the similarity of this problem to a cannoneers problem of finding the correct setting for a cannon to hit a target.The idea of this discretization is to include some components of some state vec-tors x u(k,x0)as independent optimization variables in the problem.These variables are treated just as in the full discretization except that we do not do this for all k∈{0,...,N−1}but only for some time instants and that we do not necessar-ily include all components of the state vector x u(k,x0)as additional optimization variables but only some.These new variables are called the shooting nodes and the corresponding times will be called the shooting times.Proceeding this way,we may then provide useful information,e.g.,from the reference trajectory x ref(·)as described at the end of the discussion of the recursive discretization for obtaining a good initial guess for the iterative optimization.Much like the cannoneer we aim at hitting the reference trajectory,the only difference is that we do not only want to hit the target at the end of the(finite)horizon but to stay close to it for the entire time interval which we consider within the optimization.This motivates to set the state components at the shooting nodes to a value which may violate the dynamics of the system(2.1)but is closer to the reference trajectory. This situation is illustrated in Fig.10.6and gives a good intuition why the multiple shooting initial guess is preferable from an optimization point of view.Unfortunately,we have to give up the integrity of the dynamics of the system to achieve this improvement in the initial guess,i.e.,the trajectory pieces starting in the respective shooting nodes can in general not be“glued”together to form a continuous trajectory.In order to solve this problem,we have to include additional equality constraints similar to(10.1),(10.2).For the formal description of this method,we denote the vector of multiple shoot-ing nodes by s:=(s1,...,s rs )∈R r s where s i is the new optimization variable cor-responding to the i th multiple shooting node.The shooting timesς:{1,...,r s}→{0,...,N}and indicesι:{1,...,r s}→{1,...,d}then define the time and the com-ponent of the state vector corresponding to s i via28410Numerical Optimal Control of Nonlinear SystemsFig.10.6Resulting trajectories for initial guess u using no multiple shooting nodes (solid ),one shooting node (dashed )and three shooting nodes (gray dashed )x u ς(j),x 0 ι(j)=s j .This means that the components x u (k,x 0)i ,i =1,...,d ,of the state vector are determined by the iterationx u (k +1,x 0)i =s jif there exists j ∈{1,...,r s }with ς(j)=k +1and ι(j)=i andx u (k +1,x 0)i =f x u (k,x 0),u(k) iwith initial condition x u (0,x 0)i =s j if there exists j ∈{1,...,r s }with ς(j)=0and ι(j)=i and x u (0,x 0)i =(x 0)i ,otherwise.The shooting nodes s i now become part of the optimization variable z ,which hence readsz := u(0) ,...,u(N −1) ,s .(10.8)As in the full discretization we have to ensure that the optimal solution of (NLP )leads to values s i for which the state vectors x u (k,x 0)thus defined form a trajectory of (2.1).To this 
end,we define the continuity condition for all shooting nodes s j ,j ∈{1,...,r s }with ς(j)≥1analogously to (10.1)ass j −f x u ς(j)−1,x 0 ,u ς(j)−1 ι(j)=0,(10.9)and for all j ∈{1,...,r s }with ς(j)=0analogously to (10.2)ass j −(x 0)ι(j)=0.(10.10)These so-called shooting constraints are included as equality constraints in (NLP ).Since the conditions (10.9)and (10.10)are already in the form of equality con-straints which we require for our problem (NLP ),we can achieve this by defining G to consist of Equalities (10.3),(10.9)and (10.10).As for the recursive discretization,the set of inequality constraints as well as the cost function are still identical to those10.1Discretization of the NMPC Problem285 given by the full discretization.As a result,the multiple shooting discretization of problem(OCP n N,e)is of the formminimize F(z):=N−1k=0ωN−kn+k,x u(k,x0),u(k)+F Jn+N,x u(N,x0)with respect to z:=u(0) ,...,u(N−1) ,s∈R n z subject toG(z)=⎡⎣[G S i(x u(k,x0),u(k))]i∈E S,k∈Ki[s j−f(x u(ς(j)−1,x0),u(ς(j)−1))ι(j)]j∈{1,...,rs},ς(j)≥1[s j−(x0)ι(j)]j∈{1,...,rs},ς(j)=0⎤⎦=0and H(z)=H S ix u(k,x0),u(k)i∈I S,k∈K i≥0.Comparing the size of the multiple shooting discretized optimization problem to the recursively discretized one,we see that the dimension of the optimization variable and the number of equality constraints is increased by r s.An appropriate choice of this number as well as for the values of the shooting nodes and times is crucial in order to obtain an improvement of the NMPC closed loop,as the following example shows.Example10.2Consider the inverted pendulum on a cart problem from Exam-ple10.1.For this example,numerical experience shows that the most critical tra-jectory component is the angle of the pendulum x1.In the following,we discuss and illustrate the efficient use and effect of multiple shooting nodes for this variable. 
(i)If we define every sampling instant in every dimension of the problem to bea shooting node,then the shooting discretization coincides with the full dis-cretization.Proceeding this way slows down the optimization process signifi-cantly due to the enlarged dimension of the optimization variable.Unless the computational burden can be reduced by exploiting the special structure of the shooting nodes in the optimization algorithm,cf.the paragraph on condensing on Sect.10.4,below,this implies that the number of shooting nodes should be chosen as small as possible.On the other hand,using shooting nodes we may be able to significantly improve the initial guess of the control.Therefore,in terms of the computing time,a balance between improving the initial guess and the number of multiple shooting nodes must be found.(ii)If the state trajectories evolve slowly,i.e.,in regions where the dynamics is slow,the use of shooting nodes may obstruct the optimization since there may not exist a trajectory which links two consecutive shooting nodes.In this case, the optimization routine wastes iteration steps trying to adapt the value of the shooting nodes and may be unable tofind a solution.Ideally,the shooting nodes are chosen close to the optimal transient trajectory,which is,however,usually not known at ing shooting nodes on the reference,instead,may be a good substitute but only if the initial value is sufficiently close to the reference28610Numerical Optimal Control of Nonlinear SystemsFig.10.7Results for a varying shooting nodes for horizon length N=14or if the shooting times are chosen sufficiently large in order to enable thetrajectory to reach a neighborhood of the reference.(iii)Using inappropriate shooting nodes may not only render the optimization rou-tine slow but may lead to unstable solutions even if the problem without shoot-ing nodes showed stable behavior.On the other hand,choosing good shootingnodes may have a stabilizing effect.In the following,we illustrate the effects mentioned in(iii)for the horizonsN=14,15and20.We compute the NMPC closed-loop trajectories for the in-verted pendulum on a cart problem on the interval[0,20]where in each optimiza-tion we use one shooting node for thefirst state dimension x1at different timesς(1)∈{0,...,N−1},and the corresponding initial value to x1=s1=−π.In a second test,we use two shooting nodes for thefirst state dimension with differ-ent(ς(1),ς(2))∈{0,...,N−1}2again with s1=s2=−π.Here,the closed-loop costs in the followingfigures are computed by numerically evaluating200k=0xμN(k,x0),μNxμN(k,x0)in order to approximate J∞from Definition4.10.Red colors in Figs.10.7(b), 10.8(b)and10.9(b)indicate higher closed-loop costs.As we can see from Fig.10.7(a),the state trajectory is stabilized for N=14if we add a shooting node for thefirst differential equation.Hence,using a single shooting node,we are now able to stabilize the problem for a reduced optimization horizon N.Yet,the stabilized equilibrium is not identical for all valuesς(1),i.e.forς(1)∈{t0,...,t2}the equilibrium(π,0,0,0)is chosen,which in our case corresponds to larger closed-loop costs in comparison to the solutions approaching(−π,0,0,0). 
Similarly,all solutions converge to an upright equilibrium if we use two shooting nodes.Here,Fig.10.7(b)shows the corresponding closed-loop costs which allow us to see that for small values ofς(i),i=1,2,the x1trajectory converges toward (π,0,0,0).10.1Discretization of the NMPC Problem287Fig.10.8Results for a varying shooting nodes for horizon length N=15Fig.10.9Results for a varying shooting nodes for horizon length N=20As we see,for N=14the shooting nodes may change the closed-loop behavior.A similar effect happens for the horizon N=15.For the case without shooting nodes,the equilibrium(−π,0,0,0)is stabilized,cf.Fig.10.2.If shooting nodes are considered,we obtain the results displayed in Fig.10.8.Here,we observe that choosing the shooting timeς(1)∈{0,1,3}results in sta-bilizing(π,0,0,0)while for all other cases the x1trajectory converges toward (−π,0,0,0).Still,forς(1)∈{5,...,9}the solutions differ significantly from the solution without shooting nodes,which also affects the closed-loop costs.As indi-cated by Fig.10.8(b),a similar effect can be experienced if more than one shooting node is used.Hence,by our numerical experience,the effect of stabilizing a chosen equilibrium by adding a shooting node can only be confirmed ifς(1)is set to a time instant close to the end of the optimization horizon.This is further confirmed by Fig.10.9,which illustrates the results for N=20. Here,one also sees that adding shooting nodes may lead to instability of the closed loop.Moreover,we like to stress the fact that adding shooting nodes to a problem may cause the optimization routine to stabilize a different equilibrium than intended.。
