预测控制1
模型预测控制ppt
02 动态矩阵控制
动态矩阵控制以优化确定控制策略:在优化过程中,同时考虑输出跟踪期望值和控制量变化来选择最优化准则。往往不希望控制增量 Δu 变化过于剧烈,这一因素通过在优化性能指标中加入软约束予以考虑。
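为便于理解,下面补充一个常见的二次型性能指标写法作为示意(其中优化时域 P、控制时域 M 与权系数 q_i、λ_j 均为此处假定的记号,具体取值由设计者选定,并非原文给出的定义):

J(k) = Σ_{i=1}^{P} q_i [w(k+i) − ŷ(k+i)]² + Σ_{j=1}^{M} λ_j Δu²(k+j−1)

其中第二项就是对控制增量的"软约束":λ_j 取得越大,优化得到的 Δu 变化越平缓。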
01预测控制概述
工业过程的特点:
多变量、高维度的复杂系统,难以建立精确的数学模型;
工业过程的结构、参数以及环境具有不确定性、时变性、非线性、强耦合,最优控制难以实现。
预测控制产生
基于模型的控制,但对模型要求不高;
采用滚动优化策略,以局部优化取代全局优化;
利用实测信息反馈校正,增强控制的鲁棒性。
预测控制采用有限时域优化策略:优化过程不是一次离线进行,而是在线反复进行优化计算、滚动实施,从而使模型失配、时变、干扰等引起的不确定性能够及时得到弥补,提高系统的控制效果。
02滚动优化
03反馈校正
模型失配
由于实际被控过程存在非线性、时变性、不确定性等因素,基于模型的预测不可能准确地与实际被控过程相符。
反馈校正
从图中可以看出:
第一根曲线是模型失配时的输出曲线,其快速性较差,超调量小;
第二根曲线是模型未失配时的输出曲线,其快速性较好,但超调量略大。
这验证了预测控制对于模型精度要求不高的优势:即使模型失配,也能取得不错的控制效果。
总结
模型预测控制
预测控制:不仅利用当前和过去的偏差值,而且还利用预测模 型来预测过程未来的偏差值。以滚动优化确定当前的最优控制 策略,使未来一段时间内被控变量与期望值偏差最小
增大P: 系统的快速性变差,稳定性增强; 减小P: 快速性变好,稳定性变差。
预测控制
1.1 引言
预测控制是一种基于模型的先进控制技术,它不是某一种统一理论的产物,而是源于工业实践,最大限度地结合了工业实际的要求,并且在实际中取得了许多成功应用的一类新型的计算机控制算法。
由于它采用的是多步预测、滚动优化和反馈校正等控制策略,因而控制效果好,适用于控制不易建立精确数字模型且比较复杂的工业生产过程,所以它一出现就受到国内外工程界的重视,并已在石油、化工、电力、冶金、机械等工业部门的控制系统中得到了成功的应用。
工业生产的过程是复杂的,我们建立起来的模型也是不完善的。
即使是理论非常复杂的现代控制理论,其控制效果也往往不尽如人意,甚至在某些方面还不及传统的PID控制。
70年代,人们除了加强对生产过程的建模、系统辨识、自适应控制等方面的研究外,开始打破传统的控制思想的观念,试图面向工业开发出一种对各种模型要求低、在线计算方便、控制综合效果好的新型算法。
在这样的背景下,预测控制的一种,也就是模型算法控制(MAC, Model Algorithmic Control),首先在法国的工业控制中得到应用。
同时,计算机技术的发展也为算法的实现提供了物质基础。
现在比较流行的算法包括:模型算法控制(MAC)、动态矩阵控制(DMC)、广义预测控制(GPC)、广义预测极点配置(GPP)控制、内模控制(IMC)、推理控制(IC)等。
随着现代计算机技术的不断发展,人们希望有一个方便使用的软件包来代替复杂的理论分析和数学运算,而Matlab、C、C++等语言很好地满足了我们的要求。
1.2 预测控制存在的问题及发展前景
70年代以来,人们从工业过程的特点出发,寻找对模型精度要求不高,而同样能实现高质量控制性能的方法,以克服理论与应用之间的不协调。
预测控制就是在这种背景下发展起来的一种新型控制算法。
预测控制最初由Richalet等人提出,即建立在脉冲响应基础上的模型预测启发控制(Model Predictive Heuristic Control,简称"MPHC"),或称模型算法控制(Model Algorithmic Control,简称"MAC");随后Cutler等人提出了建立在阶跃响应基础上的动态矩阵控制(Dynamic Matrix Control,简称"DMC")。这类方法以被控系统的输出时域响应(单位阶跃响应或单位脉冲响应)为模型,控制律基于系统输出预测,控制系统性能具有较强的鲁棒性,并且方法原理直观简单、易于计算机实现。
预测控制实验报告
摘要:本文报告了一项关于预测控制的实验研究,旨在对预测控制的原理和应用进行探讨。
实验通过建立数学模型,并运用预测控制算法对目标系统进行控制。
实验结果表明,预测控制在提高系统稳定性和响应速度方面具有显著的优势。
1. 引言
预测控制是一种基于动态模型的控制策略,其可以通过对目标系统的未来特性进行预测来优化控制输入信号,以实现系统的稳定性和性能要求。
预测控制在工业生产中已被广泛应用,对于复杂系统的控制具有重要意义。
2. 实验设计
在本实验中,我们首先设计了一个目标系统,即一个简单的加速度系统,用以模拟实际工业生产中的控制系统。
然后,通过使用预测控制算法对该系统进行控制。
2.1 目标系统建模
我们使用了一个二阶传递函数模型来描述目标系统,该模型表示了系统的加速度响应特性。
通过测量系统的输入-输出数据,并运用系统辨识技术,我们得到了目标系统的模型参数。
2.2 预测控制算法选择
在本实验中,我们选择了基于模型的预测控制算法(MPC),该算法利用目标系统的模型进行控制。
MPC算法通过不断预测系统的未来状态和性能,并通过优化过程来选择最优的控制信号。
我们基于目标系统的模型参数和性能要求,设置了MPC算法的相关参数。
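作为参数设置的一个可能示意(使用 MATLAB 的 MPC Toolbox;其中被控对象传递函数、采样时间、各时域与权重均为假定值,并非本实验辨识得到的真实参数):

Gp = tf(1, [25 8 1]);                 % 假定的二阶被控对象模型
Ts = 1;                               % 采样时间
p  = 20;                              % 预测时域
m  = 4;                               % 控制时域
mpcobj = mpc(c2d(Gp, Ts), Ts, p, m);  % 构建MPC控制器
mpcobj.Weights.ManipulatedVariablesRate = 0.1;  % 控制增量权重(软约束)
sim(mpcobj, 60, 1);                   % 对单位阶跃设定值做闭环仿真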
2.3 实验过程
在实验中,我们将目标系统的模型参数输入到MPC算法中,根据目标系统的状态实时更新预测,并生成最优控制信号。
通过不断迭代和优化,我们最终实现了目标系统的控制。
3. 实验结果与分析
我们对预测控制算法的性能进行了详细评估和分析。
实验结果表明,通过预测控制算法对目标系统进行控制,系统的稳定性得到了显著提高。
与传统的PID控制相比,预测控制算法在系统响应速度和抗干扰性能上取得了明显的优势。
4. 实验总结
本实验通过对预测控制原理和应用的研究,验证了预测控制在系统控制方面的优势。
预测控制算法可以准确地预测未来的系统状态和性能,并通过优化控制信号实现对系统的稳定性和性能的优化。
本实验的结果对于工业生产中的控制系统设计和优化具有重要的指导意义。
模型预测控制
极小化性能指标,得最优控制律:
根据滚动优化原理,只实施目前控制量u2(k):
式中:
多步优化MAC的特点:
优点:(i)控制效果和鲁棒性优于单步MAC;(ii)适用于有时滞或非最小相位对象。
缺点:(i)算法较单步MAC复杂;(ii)由于以u作为控制量,导致MAC算法不可避免地出现稳态误差。
第5章 模型预测控制
5.3.1.2 反馈校正
为了在模型失配时有效地消除静差,可以在模型预测值ym的基础上附加一误差项e,即构成反馈校正(闭环预测)。
具体做法:将第k时刻实际对象的输出测量值与预测模型输出之间的误差,附加到模型的预测输出ym(k+i)上,得到闭环预测模型,用yp(k+i)表示:
第5章 模型预测控制
5.1 引言
一 什么是模型预测控制(MPC)?
模型预测控制(Model Predictive Control)是一种基于模型的闭环优化控制策略,已在炼油、化工、冶金和电力等复杂工业过程中得到了广泛的应用。
其算法核心是:可预测过程未来行为的动态模型、在线反复优化计算并滚动实施的控制作用,以及模型误差的反馈校正。
2. 动态矩阵控制(DMC)的产生:
动态矩阵控制(DMC, Dynamic Matrix Control)于1974年应用在美国壳牌石油公司的生产装置上,并于1980年由Cutler等在美国化工年会上公开发表。
3. 广义预测控制(GPC)的产生:
1987年,Clarke等人在保持最小方差自校正控制的在线辨识、输出预测、最小方差控制的基础上,吸收了DMC和MAC中的滚动优化策略,基于参数模型提出了兼具自适应控制和预测控制性能的广义预测控制算法。
预测控制的基本原理
预测控制的基本原理是通过对过去的数据进行分析和建模,从而预测未来的状态或行为,并根据这些预测结果采取相应的控制策略来达到期望的目标。
具体步骤包括:
1. 数据收集:收集历史数据,并进行必要的预处理,例如去除异常值或噪声。
2. 建模:基于收集到的数据,建立数学模型来描述系统的演化规律。
可以使用统计模型、机器学习模型或基于物理原理的数学模型等。
3. 预测:利用建立的模型,对未来的状态进行预测。
可以使用时间序列分析、回归分析、神经网络等方法进行预测。
4. 目标设定:确定期望的目标或性能指标,例如最小化误差、最大化效益等。
5. 控制决策:根据预测结果和目标设定,制定相应的控制策略。
可以使用经典的控制算法,如PID控制器,也可以使用优化算法、模糊控制等。
6. 执行控制:根据控制策略,实施相应的控制动作,将系统引导到期望的状态或行为。
7. 监测调整:监测实际的系统响应,并根据反馈信息进行调整和优化,以进一步提高控制性能。
预测控制的基本原理是基于对系统行为的分析和预测,并通过控制策略来引导系统的运行。
通过不断的预测和调整,可以逐步优化系统的性能,适应变化的环境和需求。
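与上述步骤相对应,下面给出一段最简化的滚动预测控制 MATLAB 示意代码(一阶模型 y(k+1)=a·y(k)+b·u(k),其中参数 a、b、设定值 r、权重 λ 等均为假定值,仅用于说明"预测—优化—执行—校正"的循环,并非某个具体系统的实现):

% 单步预测 + 滚动优化的最小示例
a = 0.8;  b = 0.5;          % 假定的模型参数
r = 1;                      % 期望目标值
lambda = 0.1;               % 控制增量软约束权重
alpha  = 0.6;               % 参考轨迹柔化系数
y = 0;  u = 0;  yr = 0;
for k = 1:100
    yr = alpha*yr + (1-alpha)*r;        % 目标设定:生成参考轨迹
    ym = a*y + b*u;                     % 预测:控制量保持不变时的模型输出
    du = b*(yr - ym)/(b^2 + lambda);    % 控制决策:极小化 (yr-yp)^2 + lambda*du^2
    u  = u + du;                        % 执行控制:只施加当前控制量
    y  = a*y + b*u;                     % 监测调整:得到新的输出(此处用模型代替实际对象)
end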
预测控制的原理方法及应用
1. 概述
预测控制是一种基于模型的控制方法,通过使用系统动态模型对未来的系统行为进行预测,进而生成最优的控制策略。
预测控制广泛应用于各种工业自动化和控制系统中,包括机械控制、化工过程控制、交通流量控制等。
2. 预测模型的建立在预测控制中,首先需要建立系统的预测模型,以描述系统的行为。
根据系统的具体特征,可以选择不同的预测模型,包括线性模型、非线性模型和时变模型等。
预测模型的建立通常需要通过系统的历史数据进行参数估计,以获得最佳的模型效果。
3. 预测优化算法为了生成最优的控制策略,预测控制采用了各种优化算法。
其中,最常用的是模型预测控制(MPC)算法,它通过迭代优化的方式,逐步调整控制策略,以使系统的输出与期望输出尽可能接近。
MPC算法可以通过数学优化方法来求解,如线性规划、二次规划等。
此外,还有一些其他的优化算法可以用于预测控制,如遗传算法、粒子群优化算法等。
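例如,基于线性预测模型的 MPC 在每个采样周期可以整理成一个标准二次规划,用 MATLAB 的 quadprog(需要 Optimization Toolbox)求解的示意如下(动态矩阵 G、权矩阵 Q、权系数 λ、自由响应 y0、参考轨迹 yr 以及增量上下限均为假定的输入量,仅演示"迭代优化求控制增量"的思路):

% 每个采样时刻求解一次: min 0.5*dU'*H*dU + f'*dU, s.t. dumin <= dU <= dumax
function dU = mpc_qp_step(G, Q, lambda, yr, y0, dumin, dumax)
% G: P×M 动态矩阵; Q: P×P 输出误差权矩阵; lambda: 控制增量权系数
% yr, y0: 未来P步的参考轨迹与自由响应; dumin, dumax: 增量约束(M×1)
M = size(G, 2);
H = 2*(G'*Q*G + lambda*eye(M));        % 二次项
f = -2*G'*Q*(yr - y0);                 % 一次项
dU = quadprog(H, f, [], [], [], [], dumin, dumax);
end
% 滚动实施时只取 dU(1) 作用于对象,下一时刻重新求解,即所谓"迭代优化"。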
4. 预测控制的应用预测控制在各种领域都有广泛的应用,下面将介绍几个典型的应用领域。
4.1 机械控制
在机械控制中,预测控制被广泛应用于运动轨迹控制、力控制等方面。
通过建立机械系统的预测模型,可以实现对机械系统的高精度控制,并提高系统的稳定性和性能。
4.2 化工过程控制
化工过程控制是预测控制的另一个重要应用领域。
通过预测模型对化工过程进行建模,可以实现对反应过程、传输过程等的预测和控制。
预测控制可以提高化工过程的安全性和效率,并减少能源消耗。
4.3 交通流量控制
交通流量控制是城市交通管理中的重要问题。
预测控制可以借助历史交通数据建立交通流量的预测模型,并根据预测结果进行交通信号控制。
通过优化交通信号的时序和配时,可以有效减少交通拥堵和排队长度,提高交通流量的运行效率。
5. 预测控制的优势和挑战
预测控制相较于传统的控制方法具有一些显著的优势,但也面临一些挑战。
5.1 优势
• 预测控制可以通过建立系统动态模型,更准确地预测系统的未来行为,从而生成更优的控制策略。
模型预测控制全面讲解..pdf
hT={h1,h2,…,hN} 可完全描述系统的动态特性
主要内容 预测模型 反馈校正 参考轨迹 滚动优化
第三节 模型算法控制(MAC) 一. 预测模型
MAC的预测模型 渐近稳定线性被控对象的单位脉冲响应曲线
(图:系统的离散脉冲响应示意图——横轴为 t/T,纵轴为 y,标出系数 h1, h2, …, hN)
渐近稳定对象在有限个采样周期后脉冲响应系数趋于零,即 lim_{j→∞} h_j = 0,故 j>N 的系数可以忽略。
第三节 模型算法控制(MAC) 一. 预测模型
MAC算法中的模型参数
(图中:1─k时刻的预测输出;2─k+1时刻实际输出;3─k+1时刻预测误差;4─k+1时刻校正后的预测输出;横轴为 t/T)
第三节 模型算法控制(MAC)
模型算法控制(Model Algorithmic Control):基于脉冲响应模型的预测控制,又称模型预测启发式控制(MPHC)。
60年代末,Richalet等人在法国工业企业中应用于锅炉和精馏塔的控制。
1987年,Clarke 提出了基于时间序列模型和在线辨识的 广义预测控制(Generalized Predictive Control, GPC)
1988年,袁璞提出了基于离散状态空间模型的状态反馈预 测控制(State Feedback Predictive Control, SFPC)
第一节 预测控制的发展
反馈校正
在每个采样时刻,都要通过实际测到的输出信息对基于 模型的预测输出进行修正,然后再进行新的优化
闭环优化
不断根据系统的实际输出对预测输出作出修正,使滚动 优化不但基于模型,而且利用反馈信息,构成闭环优化
预测控制-ppt课件
(图:滚动时域示意——横轴为时刻 k−j, …, k, …, k+m, …, k+p,标出过去的控制量 u(k−j)、未来的控制量 u(k+j|k) 以及控制时域)
反馈校正
❖ 每到一个新的采样时刻,都要通过实际测到 的输出信息对基于模型的预测输出进行修正, 然后再进行新的优化。
❖ 不断根据系统的实际输出对预测输出值作出 修正使滚动优化不但基于模型,而且利用了 反馈信息,构成闭环优化。
滤波、预测与控制
❖ 预测:
▪ 已知信号的过去测量值:y(k), y(k−1), ……, y(k−n)
▪ 求解未来时刻期望值:y(k+1|k), y(k+2|k), ……
(框图:y(k) → 预估器 → y(k+d|k))
▪ 预估器:
y(k+1|k) = b1·y(k) + b2·y(k−1) + …… + b(n+1)·y(k−n)
y(k+2|k) = b1·y(k+1|k) + b2·y(k) + …… + b(n+1)·y(k−n+1)
……
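这种递推预估用几行 MATLAB 即可实现(下面的系数向量 b 与历史数据 yHist 均为假定值,仅作示意):

b     = [0.6 0.3 0.1];            % 假定的预估器系数 b1, b2, b3
yHist = [1.02 0.98 0.95];         % 最近的测量值 y(k), y(k-1), y(k-2)
d     = 3;                        % 预测步数
buf   = yHist;
for i = 1:d
    yNext = b*buf.';              % y(k+i|k) = b1*最新值 + b2*次新值 + ...
    buf   = [yNext, buf(1:end-1)];% 将预测值推入队首,递推计算下一步
end
yPred = buf(1);                   % 即 y(k+d|k)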
常用预测模型
脉冲响应模型(要求系统为开环稳定对象):
y(k) = Σ_{j=1}^{N} g_j·u(k−j)
阶跃响应模型(要求系统为开环稳定对象):
y(k) = Σ_{j=1}^{N−1} a_j·Δu(k−j) + a_N·u(k−N)
其中 Δu(k) = u(k) − u(k−1)。
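按上面的阶跃响应模型,用 MATLAB 计算一次模型输出的示意如下(阶跃响应系数 a、历史控制增量 du 与 u(k−N) 均为假定数据):

% y(k) = Σ_{j=1}^{N-1} a_j*Δu(k-j) + a_N*u(k-N)
a  = [0.2 0.5 0.8 0.95 1.0];      % 假定的阶跃响应系数 a_1..a_N
N  = numel(a);
du = [0.1 0 -0.05 0.2];           % 假定的历史控制增量, du(j) 对应 Δu(k-j)
uN = 0.7;                         % 假定的 u(k-N)
yk = a(1:N-1)*du.' + a(N)*uN;     % 模型输出 y(k)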
输出预测
利用预测模型得到输出预测 ym(k+j|k):ym(k+j|k) = f[u(k−i), y(k−i)],i = 1, 2, 3, ……, j
预测模型不苛求很高的预测精度:通过滚动优化和反馈校正弥补模型精度不高的不足,抑制扰动,提高鲁棒性。
非线性模型预测控制_Chapter1
Chapter1Introduction1.1What Is Nonlinear Model Predictive Control?Nonlinear model predictive control(henceforth abbreviated as NMPC)is an opti-mization based method for the feedback control of nonlinear systems.Its primaryapplications are stabilization and tracking problems,which we briefly introduce inorder to describe the basic idea of model predictive control.Suppose we are given a controlled process whose state x(n)is measured at dis-crete time instants t n,n=0,1,2,....“Controlled”means that at each time instantwe can select a control input u(n)which influences the future behavior of the stateof the system.In tracking control,the task is to determine the control inputs u(n)such that x(n)follows a given reference x ref(n)as good as possible.This means thatif the current state is far away from the reference then we want to control the systemtowards the reference and if the current state is already close to the reference thenwe want to keep it there.In order to keep this introduction technically simple,weconsider x(n)∈X=R d and u(n)∈U=R m,furthermore we consider a referencewhich is constant and equal to x∗=0,i.e.,x ref(n)=x∗=0for all n≥0.With such a constant reference the tracking problem reduces to a stabilization problem;in itsfull generality the tracking problem will be considered in Sect.3.3.Since we want to be able to react to the current deviation of x(n)from the ref-erence value x∗=0,we would like to have u(n)in feedback form,i.e.,in the form u(n)=μ(x(n))for some mapμmapping the state x∈X into the set U of control values.The idea of model predictive control—linear or nonlinear—is now to utilize amodel of the process in order to predict and optimize the future system behavior.Inthis book,we will use models of the formx+=f(x,u)(1.1) where f:X×U→X is a known and in general nonlinear map which assigns to a state x and a control value u the successor state x+at the next time instant.Starting from the current state x(n),for any given control sequence u(0),...,u(N−1)with L.Grüne,J.Pannek,Nonlinear Model Predictive Control,1 Communications and Control Engineering,DOI10.1007/978-0-85729-501-9_1,©Springer-Verlag London Limited201121Introduction horizon length N≥2,we can now iterate(1.1)in order to construct a predictiontrajectory x u defined byx u(0)=x(n),x u(k+1)=fx u(k),u(k),k=0,...,N−1.(1.2)Proceeding this way,we obtain predictions x u(k)for the state of the system x(n+k) at time t n+k in the future.Hence,we obtain a prediction of the behavior of the sys-tem on the discrete interval t n,...,t n+N depending on the chosen control sequence u(0),...,u(N−1).Now we use optimal control in order to determine u(0),...,u(N−1)such that x u is as close as possible to x∗=0.To this end,we measure the distance between x u(k)and x∗=0for k=0,...,N−1by a function (x u(k),u(k)).Here,we not only allow for penalizing the deviation of the state from the reference but also—if desired—the distance of the control values u(k)to a reference control u∗,which here we also choose as u∗=0.A common and popular choice for this purpose isthe quadratic functionx u(k),u(k)=x u(k)2+λu(k)2,where · denotes the usual Euclidean norm andλ≥0is a weighting parameter for the control,which could also be chosen as0if no control penalization is desired. 
The optimal control problem now readsminimize Jx(n),u(·):=N−1k=0x u(k),u(k)with respect to all admissible1control sequences u(0),...,u(N−1)with x u gen-erated by(1.2).Let us assume that this optimal control problem has a solution which is given by the minimizing control sequence u (0),...,u (N−1),i.e.,minu(0),...,u(N−1)Jx(n),u(·)=N−1k=0x u (k),u (k).In order to get the desired feedback valueμ(x(n)),we now setμ(x(n)):=u (0), i.e.,we apply thefirst element of the optimal control sequence.This procedure is sketched in Fig.1.1.At the following time instants t n+1,t n+2,...we repeat the procedure with the new measurements x(n+1),x(n+2),...in order to derive the feedback values μ(x(n+1)),μ(x(n+2)),....In other words,we obtain the feedback lawμby an iterative online optimization over the predictions generated by our model(1.1).2 This is thefirst key feature of model predictive control.1The meaning of“admissible”will be defined in Sect.3.2.2Attentive readers may already have noticed that this description is mathematically idealized since we neglected the computation time needed to solve the optimization problem.In practice,when the measurement x(n)is provided to the optimizer the feedback valueμ(x(n))will only be available after some delay.For simplicity of exposition,throughout our theoretical investigations we will assume that this delay is negligible.We will come back to this problem in Sect.7.6.1.2Where Did NMPC Come from?3Fig.1.1Illustration of the NMPC step at time t nFrom the prediction horizon point of view,proceeding this iterative way the trajectories x u(k),k=0,...,N provide a prediction on the discrete interval t n,...,t n+N at time t n,on the interval t n+1,...,t n+N+1at time t n+1,on the interval t n+2,...,t n+N+2at time t n+2,and so on.Hence,the prediction horizon is moving and this moving horizon is the second key feature of model predictive control.Regarding terminology,another term which is often used alternatively to model predictive control is receding horizon control.While the former expression stresses the use of model based predictions,the latter emphasizes the moving horizon idea. 
Despite these slightly different literal meanings,we prefer and follow the common practice to use these names synonymously.The additional term nonlinear indicates that our model(1.1)need not be a linear map.1.2Where Did NMPC Come from?Due to the vast amount of literature,the brief history of NMPC we provide in this section is inevitably incomplete and focused on those references in the literature from which we ourselves learned about the various NMPC techniques.Furthermore, we focus on the systems theoretic aspects of NMPC and on the academic develop-ment;some remarks on numerical methods specifically designed for NMPC can be found in rmation about the use of linear and nonlinear MPC in prac-tical applications can be found in many articles,books and proceedings volumes, e.g.,in[15,22,24].Nonlinear model predictive control grew out of the theory of optimal control which had been developed in the middle of the20th century with seminal contri-butions like the maximum principle of Pontryagin,Boltyanskii,Gamkrelidze and Mishchenko[20]and the dynamic programming method developed by Bellman [2].Thefirst paper we are aware of in which the central idea of model predictive41Introduction control—for discrete time linear systems—is formulated was published by Propo˘ı[21]in the early1960s.Interestingly enough,in this paper neither Pontryagin’s max-imum principle nor dynamic programming is used in order to solve the optimal con-trol problem.Rather,the paper already proposed the method which is predominant nowadays in NMPC,in which the optimal control problem is transformed into a static optimization problem,in this case a linear one.For nonlinear systems,the idea of model predictive control can be found in the book by Lee and Markus[14] from1967on page423:One technique for obtaining a feedback controller synthesis from knowl-edge of open-loop controllers is to measure the current control process state and then compute very rapidly for the open-loop control function.Thefirst portion of this function is then used during a short time interval,after whicha new measurement of the process state is made and a new open-loop con-trol function is computed for this new measurement.The procedure is then repeated.Due to the fact that neither computer hardware nor software for the necessary“very rapid”computation were available at that time,for a while this observation had little practical impact.In the late1970s,due to the progress in algorithms for solving constrained linear and quadratic optimization problems,MPC for linear systems became popular in control engineering.Richalet,Rault,Testud and Papon[25]and Cutler and Ramaker [6]were among thefirst to propose this method in the area of process control,in which the processes to be controlled are often slow enough in order to allow for an online optimization,even with the computer technology available at that time. 
It is interesting to note that in[25]the method was described as a“new method of digital process control”and earlier references were not mentioned;it appears that the basic MPC principle was re-invented several times.Systematic stability investigations appeared a little bit later;an account of early results in that direction for linear MPC can,e.g.,be found in the survey paper of García,Prett and Morari [10]or in the monograph by Bitmead,Gevers and Wertz[3].Many of the techniques which later turned out to be useful for NMPC,like Lyapunov function based stability proofs or stabilizing terminal constraints were in factfirst developed for linear MPC and later carried over to the nonlinear setting.The earliest paper we were able tofind which analyzes an NMPC algorithm sim-ilar to the ones used today is an article by Chen and Shaw[4]from1982.In this paper,stability of an NMPC scheme with equilibrium terminal constraint in contin-uous time is proved using Lyapunov function techniques,however,the whole opti-mal control function on the optimization horizon is applied to the plant,as opposed to only thefirst part as in our NMPC paradigm.For NMPC algorithms meeting this paradigm,first comprehensive stability studies for schemes with equilibrium termi-nal constraint were given in1988by Keerthi and Gilbert[13]in discrete time and in1990by Mayne and Michalska[17]in continuous time.The fact that for non-linear systems equilibrium terminal constraints may cause severe numerical diffi-culties subsequently motivated the investigation of alternative techniques.Regional1.3How Is This Book Organized?5 terminal constraints in combination with appropriate terminal costs turned out to be a suitable tool for this purpose and in the second half of the1990s there was a rapid development of such techniques with contributions by De Nicolao,Magni and Scattolini[7,8],Magni and Sepulchre[16]or Chen and Allgöwer[5],both in discrete and continuous time.This development eventually led to the formulation of a widely accepted“axiomatic”stability framework for NMPC schemes with sta-bilizing terminal constraints as formulated in discrete time in the survey article by Mayne,Rawlings,Rao and Scokaert[18]in2000,which is also an excellent source for more detailed information on the history of various NMPC variants not men-tioned here.This framework also forms the core of our stability analysis of such schemes in Chap.5of this book.A continuous time version of such a framework was given by Fontes[9]in2001.All stability results discussed so far add terminal constraints as additional state constraints to thefinite horizon optimization in order to ensure stability.Among the first who provided a rigorous stability result of an NMPC scheme without such con-straints were Parisini and Zoppoli[19]and Alamir and Bornard[1],both in1995and for discrete time systems.Parisini and Zoppoli[19],however,still needed a terminal cost with specific properties similar to the one used in[5].Alamir and Bonnard[1] were able to prove stability without such a terminal cost by imposing a rank con-dition on the linearization on the system.Under less restrictive conditions,stability results were provided in2005by Grimm,Messina,Tuna and Teel[11]for discrete time systems and by Jadbabaie and Hauser[12]for continuous time systems.The results presented in Chap.6of this book are qualitatively similar to these refer-ences but use slightly different assumptions and a different proof technique which allows for quantitatively tighter results;for more details we refer 
to the discussions in Sects.6.1and6.9.After the basic systems theoretic principles of NMPC had been clarified,more advanced topics like robustness of stability and feasibility under perturbations,per-formance estimates and efficiency of numerical algorithms were addressed.For a discussion of these more recent issues including a number of references we refer to thefinal sections of the respective chapters of this book.1.3How Is This Book Organized?The book consists of two main parts,which cover systems theoretic aspects of NMPC in Chaps.2–8on the one hand and numerical and algorithmic aspects in Chaps.9–10on the other hand.These parts are,however,not strictly separated;in particular,many of the theoretical and structural properties of NMPC developed in thefirst part are used when looking at the performance of numerical algorithms.The basic theme of thefirst part of the book is the systems theoretic analysis of stability,performance,feasibility and robustness of NMPC schemes.This part starts with the introduction of the class of systems and the presentation of background material from Lyapunov stability theory in Chap.2and proceeds with a detailed61Introduction description of different NMPC algorithms as well as related background information on dynamic programming in Chap.3.A distinctive feature of this book is that both schemes with stabilizing terminal constraints as well as schemes without such constraints are considered and treated in a uniform way.This“uniform way”consists of interpreting both classes of schemes as relaxed versions of infinite horizon optimal control.To this end,Chap.4first de-velops the theory of infinite horizon optimal control and shows by means of dynamic programming and Lyapunov function arguments that infinite horizon optimal feed-back laws are actually asymptotically stabilizing feedback laws.The main building block of our subsequent analysis is the development of a relaxed dynamic program-ming framework in Sect.4.3.Roughly speaking,Theorems4.11and4.14in this section extract the main structural properties of the infinite horizon optimal control problem,which ensure•asymptotic or practical asymptotic stability of the closed loop,•admissibility,i.e.,maintaining the imposed state constraints,•a guaranteed bound on the infinite horizon performance of the closed loop,•applicability to NMPC schemes with and without stabilizing terminal constraints. The application of these theorems does not necessarily require that the feedback law to be analyzed is close to an infinite horizon optimal feedback law in some quantitative sense.Rather,it requires that the two feedback laws share certain prop-erties which are sufficient in order to conclude asymptotic or practical asymptotic stability and admissibility for the closed loop.While our approach allows for inves-tigating the infinite horizon performance of the closed loop for most schemes under consideration—which we regard as an important feature of the approach in this book—we would like to emphasize that near optimal infinite horizon performance is not needed for ensuring stability and admissibility.The results from Sect.4.3are then used in the subsequent Chaps.5and6in order to analyze stability,admissibility and infinite horizon performance properties for NMPC schemes with and without stabilizing terminal constraints,respectively. 
Here,the results for NMPC schemes with stabilizing terminal constraints in Chap.5 can by now be considered as classical and thus mainly summarize what can be found in the literature,although some results—like,e.g.,Theorems5.21and5.22—generalize known results.In contrast to this,the results for NMPC schemes without stabilizing terminal constraints in Chap.6were mainly developed by ourselves and coauthors and have not been presented before in this way.While most of the results in this book are formulated and proved in a mathemat-ically rigorous way,Chap.7deviates from this practice and presents a couple of variants and extensions of the basic NMPC schemes considered before in a more survey like manner.Here,proofs are occasionally only sketched with appropriate references to the literature.In Chap.8we return to the more rigorous style and discuss feasibility and robust-ness issues.In particular,in Sects.8.1–8.3we present feasibility results for NMPC schemes without stabilizing terminal constraints and without imposing viability as-sumptions on the state constraints which are,to the best of our knowledge,either1.3How Is This Book Organized?7 entirely new or were so far only known for linear MPC.These resultsfinish our study of the properties of the nominal NMPC closed-loop system,which is why it is followed by a comparative discussion of the advantages and disadvantages of the various NMPC schemes presented in this book in Sect.8.4.The remaining sec-tions in Chap.8address the robustness of the stability of the NMPC closed loop with respect to additive perturbations and measurement errors.Here we decided to present a selection of results we consider representative,partially from the literature and partially based on our own research.These considerationsfinish the systems theoretic part of the book.The numerical part of the book covers two central questions in NMPC:how can we numerically compute the predicted trajectories needed in NMPC forfinite-dimensional sampled data systems and how is the optimization in each NMPC step performed numerically?Thefirst issue is treated in Chap.9,in which we start by giving an overview on numerical one step methods,a classical numerical technique for solving ordinary differential equations.After having looked at the convergence analysis and adaptive step size control techniques,we discuss some implementa-tional issues for the use of this methods within NMPC schemes.Finally,we investi-gate how the numerical approximation errors affect the closed-loop behavior,using the robustness results from Chap.8.The last Chap.10is devoted to numerical algorithms for solving nonlinearfi-nite horizon optimal control problems.We concentrate on so-called direct methods which form the currently by far preferred class of algorithms in NMPC applications. 
In these methods,the optimal control problem is transformed into a static optimiza-tion problem which can then be solved by nonlinear programming algorithms.We describe different ways of how to do this transformation and then give a detailed introduction into some popular nonlinear programming algorithms for constrained optimization.The focus of this introduction is on explaining how these algorithms work rather than on a rigorous convergence theory and its purpose is twofold:on the one hand,even though we do not expect our readers to implement such algorithms, we still think that some background knowledge is helpful in order to understand the opportunities and limitations of these numerical methods.On the other hand,we want to highlight the key features of these algorithms in order to be able to explain how they can be efficiently used within an NMPC scheme.This is the topic of the final Sects.10.4–10.6,in which several issues regarding efficient implementation, warm start and feasibility are investigated.Like Chap.7and in contrast to the other chapters in the book,Chap.10has in large parts a more survey like character,since a comprehensive and rigorous treatment of these topics would easilyfill an entire book.Still,we hope that this chapter contains valuable information for those readers who are interested not only in systems theoretic foundations but also in the practical numerical implementation of NMPC schemes.Last but not least,for all examples presented in this book we offer either MAT-LAB or C++code in order to reproduce our numerical results.This code is available from the web page81Introduction Both our MATLAB NMPC routine—which is suitable for smaller problems—as well as our C++NMPC package—which can also handle larger problems withreasonable computing time—can also be modified in order to perform simulationsfor problems not treated in this book.In order to facilitate both the usage and themodification,the Appendix contains brief descriptions of our routines.Beyond numerical experiments,almost every chapter contains a small selectionof problems related to the more theoretical results.Solutions for these problemsare available from the authors upon request by email.Attentive readers will notethat several of these problems—as well as some of our examples—are actually lin-ear problems.Even though all theoretical and numerical results apply to generalnonlinear systems,we have decided to include such problems and examples,be-cause nonlinear problems hardly ever admit analytical solutions,which are neededin order to solve problems or to work out examples without the help of numericalalgorithms.Let usfinally say a few words on the class of systems and NMPC problemsconsidered in this book.Most results are formulated for discrete time systems onarbitrary metric spaces,which in particular coversfinite-and infinite-dimensionalsampled data systems.The discrete time setting has been chosen because of its no-tational and conceptual simplicity compared to a continuous time formulation.Still,since sampled data continuous time systems form a particularly important class ofsystems,we have made considerable effort in order to highlight the peculiaritiesof this system class whenever appropriate.This concerns,among other topics,therelation between sampled data systems and discrete time systems in Sect.2.2,thederivation of continuous time stability properties from their discrete time counter-parts in Sect.2.4and Remark4.13,the transformation of continuous time NMPCschemes into the discrete time 
formulation in Sect.3.5and the numerical solutionof ordinary differential equations in Chap.9.Readers or lecturers who are inter-ested in NMPC in a pure discrete time framework may well skip these parts of thebook.The most general NMPC problem considered in this book3is the asymptotictracking problem in which the goal is to asymptotically stabilize a time varyingreference x ref(n).This leads to a time varying NMPC formulation;in particular,the optimal control problem to be solved in each step of the NMPC algorithm ex-plicitly depends on the current time.All of the fundamental results in Chaps.2–4explicitly take this time dependence into account.However,in order to be able toconcentrate on concepts rather than on technical details,in the subsequent chapterswe often decided to simplify the setting.To this end,many results in Chaps.5–8arefirst formulated for time invariant problems x ref≡x∗—i.e.,for stabilizing an x∗—and the necessary modifications for the time varying case are discussed after-wards.3Except for some further variants discussed in Sects.3.5and7.10.1.4What Is Not Covered in This Book?9 1.4What Is Not Covered in This Book?The area of NMPC has grown so rapidly over the last two decades that it is virtually impossible to cover all developments in detail.In order not to overload this book,we have decided to omit several topics,despite the fact that they are certainly important and useful in a variety of applications.We end this introduction by giving a brief overview over some of these topics.For this book,we decided to concentrate on NMPC schemes with online opti-mization only,thus leaving out all approaches in which part of the optimization is carried out offline.Some of these methods,which can be based on both infinite hori-zon andfinite horizon optimal control and are often termed explicit MPC,are briefly discussed in Sects.3.5and4.4.Furthermore,we will not discuss special classes of nonlinear systems like,e.g.,piecewise linear systems often considered in the explicit MPC literature.Regarding robustness of NMPC controllers under perturbations,we have re-stricted our attention to schemes in which the optimization is carried out for a nom-inal model,i.e.,in which the perturbation is not explicitly taken into account in the optimization objective,cf.Sects.8.5–8.9.Some variants of model predictive con-trol in which the perturbation is explicitly taken into account,like min–max MPC schemes building on game theoretic ideas or tube based MPC schemes relying on set oriented methods are briefly discussed in Sect.8.10.An emerging and currently strongly growingfield are distributed NMPC schemes in which the optimization in each NMPC step is carried out locally in a number of subsystems instead of using a centralized optimization.Again,this is a topic which is not covered in this book and we refer to,e.g.,Rawlings and Mayne[23,Chap.6] and the references therein for more information.At the very heart of each NMPC algorithm is a mathematical model of the sys-tems dynamics,which leads to the discrete time dynamics f in(1.1).While we will explain in detail in Sect.2.2and Chap.9how to obtain such a discrete time model from a differential equation,we will not address the question of how to obtain a suitable differential equation or how to identify the parameters in this model.Both modeling and parameter identification are serious problems in their own right which cannot be covered in this book.It should,however,be noted that optimization meth-ods similar to those used in NMPC can also be used for 
parameter identification; see,e.g.,Schittkowski[26].A somewhat related problem stems from the fact that NMPC inevitably leads to a feedback law in which the full state x(n)needs to be measured in order to evaluate the feedback law,i.e.,a state feedback law.In most applications,this information is not available;instead,only output information y(n)=h(x(n))for some output map h is at hand.This implies that the state x(n)must be reconstructed from the output y(n)by means of a suitable observer.While there is a variety of different techniques for this purpose,it is interesting to note that an idea which is very similar to NMPC can be used for this purpose:in the so-called moving horizon state estimation ap-proach the state is estimated by iteratively solving optimization problems over a101Introduction moving time horizon,analogous to the repeated minimization of J(x(n),u(·))de-scribed above.However,instead of minimizing the future deviations of the pre-dictions from the reference value,here the past deviations of the trajectory from the measured output values are minimized.More information on this topic can be found,e.g.,in Rawlings and Mayne[23,Chap.4]and the references therein.References1.Alamir,M.,Bornard,G.:Stability of a truncated infinite constrained receding horizon scheme:the general discrete nonlinear case.Automatica31(9),1353–1356(1995)2.Bellman,R.:Dynamic Programming.Princeton University Press,Princeton(1957).Reprintedin20103.Bitmead,R.R.,Gevers,M.,Wertz,V.:Adaptive Optimal Control.The Thinking Man’s GPC.International Series in Systems and Control Engineering.Prentice Hall,New York(1990) 4.Chen,C.C.,Shaw,L.:On receding horizon feedback control.Automatica18(3),349–352(1982)5.Chen,H.,Allgöwer,F.:Nonlinear model predictive control schemes with guaranteed stabil-ity.In:Berber,R.,Kravaris,C.(eds.)Nonlinear Model Based Process Control,pp.465–494.Kluwer Academic,Dordrecht(1999)6.Cutler,C.R.,Ramaker,B.L.:Dynamic matrix control—a computer control algorithm.In:Pro-ceedings of the Joint Automatic Control Conference,pp.13–15(1980)7.De Nicolao,G.,Magni,L.,Scattolini,R.:Stabilizing nonlinear receding horizon control viaa nonquadratic terminal state penalty.In:CESA’96IMACS Multiconference:ComputationalEngineering in Systems Applications,Lille,France,pp.185–187(1996)8.De Nicolao,G.,Magni,L.,Scattolini,R.:Stabilizing receding-horizon control of nonlineartime-varying systems.IEEE Trans.Automat.Control43(7),1030–1036(1998)9.Fontes,F.A.C.C.:A general framework to design stabilizing nonlinear model predictive con-trollers.Systems Control Lett.42(2),127–143(2001)10.García,C.E.,Prett,D.M.,Morari,M.:Model predictive control:Theory and practice—a sur-vey.Automatica25(3),335–348(1989)11.Grimm,G.,Messina,M.J.,Tuna,S.E.,Teel,A.R.:Model predictive control:for want of alocal control Lyapunov function,all is not lost.IEEE Trans.Automat.Control50(5),546–558 (2005)12.Jadbabaie,A.,Hauser,J.:On the stability of receding horizon control with a general terminalcost.IEEE Trans.Automat.Control50(5),674–678(2005)13.Keerthi,S.S.,Gilbert,E.G.:Optimal infinite-horizon feedback laws for a general class ofconstrained discrete-time systems:stability and moving-horizon approximations.J.Optim.Theory Appl.57(2),265–293(1988)14.Lee,E.B.,Markus,L.:Foundations of Optimal Control Theory.Wiley,New York(1967)15.Maciejowski,J.M.:Predictive Control with Constraints.Prentice Hall,New York(2002)16.Magni,L.,Sepulchre,R.:Stability margins of nonlinear receding-horizon control via inverseoptimality.Systems Control 
Lett.32(4),241–245(1997)17.Mayne,D.Q.,Michalska,H.:Receding horizon control of nonlinear systems.IEEE Trans.Automat.Control35(7),814–824(1990)18.Mayne,D.Q.,Rawlings,J.B.,Rao,C.V.,Scokaert,P.O.M.:Constrained model predictive con-trol:Stability and optimality.Automatica36(6),789–814(2000)19.Parisini,T.,Zoppoli,R.:A receding-horizon regulator for nonlinear systems and a neuralapproximation.Automatica31(10),1443–1451(1995)20.Pontryagin,L.S.,Boltyanskii,V.G.,Gamkrelidze,R.V.,Mishchenko,E.F.:The MathematicalTheory of Optimal Processes.Translated by D.E.Brown.Pergamon/Macmillan Co.,New York (1964)。
预测控制
预测控制是以某种模型为基础,利用过去的输入输出数据来预测将来某段时间内的输出,再通过具有控制约束和预测误差的二次目标函数的极小化,得到当前和未来几个采样时刻的最优控制规律;在下一采样周期,利用最新数据,重复上述优化计算过程。
预测控制的结构可用下图表示(结构框图从略):
(3)滚动优化. 预测控制是一种闭环优化控制算法. 它通过某一性能指标的最优化来确定未来的控制作用. 预测控制中的优化与通常的离散最优化控制算法不同, 它不采用一个不变的全局最优目标, 而是采用滚动式的 有限时域优化策略, 优化过程不是一次离线完成, 而是 反复在线进行. 即在每一采样时刻, 优化性能指标只涉 及从该时刻起到未来一段有限的时间, 而到下一个采样 时刻, 这一优化时段会同时向前推移. 因此, 预测控制 不是用一个对全局相同的性能指标, 而是在每一个不同 的时刻有一个相对于该时刻的局部优化性能指标. 不同 时刻优化性能指标的形式相同, 但其所包含的时间区域 不同. 这就是滚动优化的含义. 这种局部的有限时域的 优化目标, 只能得到全局的次优解.
(1)预测模型。利用预测模型可以观察过程在不同控制策略下的输出变化,为比较这些控制策略的优劣提供基础。
(2)反馈校正. 在预测控制中, 采用预测模型进行 过程输出值的预估只是一种理想的方式, 对于实际过程 由于存在非线性﹑时变﹑模型失配和干扰等不确定因素 使基于模型的预测不可能准确地与实际相符. 因此在预 测控制中, 通过输出的测量值与模型的预估值进行比较 得出模型的预测误差, 再利用模型的预测误差来校正模 型的预测值, 以得到更为准确的将来输出的预测值. 模 型预测加反馈校正, 使预测控制具有很强的抗干扰和克 服系统不确定性的能力. 预测控制是一种闭环优化控制 算法.
预测控制理论与方法
预测控制理论和方法是一种用于控制系统的高级控制方法。
它基于系统模型和过去的测量数据,通过预测未来的系统行为来实时调整控制器的输出,以实现所需的控制效果。
预测控制方法通常包括以下几个步骤:
1. 建立系统模型:首先需要对被控制系统进行建模,并且将系统的动态行为表示为一个数学模型,通常是差分方程或状态空间方程。
2. 数据采集和处理:通过采集系统的输入和输出数据,以及其他相关的环境变量数据,来获取系统的实时状态。
这些数据一般需要进行处理和滤波,以去除噪声和提高数据质量。
3. 预测计算:利用建立的系统模型和最新的测量数据,通过数学方法来预测系统未来的行为。
这通常涉及到状态估计、参数估计和模型预测控制等技术,以获得准确的系统状态预测。
4. 控制器设计:根据系统的预测结果和控制要求,设计一个合适的控制器来实时调整系统的输出。
这通常涉及到最优控制、自适应控制和鲁棒控制等技术,以实现最佳的控制效果。
5. 实时调整和优化:根据实时测量数据和控制器的输出,在每个采样周期内进行控制器参数的调整和优化,以保持系统的稳定性和性能。
预测控制理论和方法在许多领域中广泛应用,包括工业过程控制、机械控制、交通控制、能源管理以及金融市场等。
它能够提高系统的控制性能和适应性,同时减少对系统模型的要求和对系统参数的依赖。
预测控制
《预测控制》课程题目
1. 简述预测控制基本原理,并结合实际生活举例说明预测控制的思想。
2. 简述分散预测控制、分布式预测控制以及递阶预测控制之间的区别及特点。
3. 简述一种预测控制稳定性综合方法,并说明在此种方法下的稳定性证明思路。
4.通过查阅文献,简单介绍当前预测控制研究的一个热点问题。
专业:控制理论与控制工程 学号:030120655 姓名:高丽君
1、答:所谓预测控制,是使用过程模型来预测和控制对象的未来行为。
预测控制的基本原理是:预测模型、滚动优化和反馈校正。
预测模型是根据被控对象的历史信息和未来输入,预测系统未来响应。
(一定是因果模型)。滚动优化是通过使某一性能指标J极小化,以确定未来的控制作用u(k+j|k)。
指标J 希望模型预测输出尽可能趋于参考轨迹。
滚动优化在线反复进行,只将u(k|k)施加于被控对象。
反馈校正是每到一个新的采样时刻,通过实际测到的输出信息对基于模型的预测输出进行修正,然后再进行新的优化。
这样不断优化,不断修正,构成闭环优化。
比如穿越马路时,人们首先要根据自己的视野预测是否有车,同时还要边走边看,随时预测前方是否有新的车辆出现,以反馈修正自己的行为。
这其中就包含了预测控制的思想。
2、3、预测控制以Lyapunov 分析作为稳定性设计的基本方法。
其稳定性综合方法包括:终端零约束预测控制,终端惩罚项预测控制以及终端约束集预测控制。
其中终端零约束方法是在有限时域优化问题中,强制x(k + N)= 0,它实际相当于终端项的权矩阵为无穷大。
其证明思路是:采用每一时刻的最优值函数(即性能指标)作为Lyapunov函数,证明其单调下降来说明系统的稳定性。
具体过程为:把相邻两个时刻的性能指标通过构造一个中间控制序列联系起来。
利用k+1时刻的可行解u(k+1)将k时刻的最优解u*(k)和k+1时刻的最优解u*(k+1)联系起来,通过证明J(k+1) ≤ J*(k)以及J*(k+1) ≤ J(k+1)这两个式子,从而证明J*(k+1) ≤ J*(k)。
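补充说明这两个不等式为何成立(以下记号仅为说明思路而设,并假定无扰动、阶段代价 ℓ(·,·) 非负):设 k 时刻的最优控制序列为 u*(k), …, u*(k+N−1);在 k+1 时刻取其"尾部"并依终端零约束在末端补零(x(k+N)=0 时补零控制可使状态停留在原点、阶段代价为零),得到一个可行解 ũ。于是

J(k+1; ũ) = J*(k) − ℓ(x(k), u*(k)) ≤ J*(k)

再由最优性有 J*(k+1) ≤ J(k+1; ũ),两式结合即得 J*(k+1) ≤ J*(k),即最优值函数单调不增,可作为 Lyapunov 函数。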
预测控制
P步预测可写成矩阵形式 Ym(k) = H1·U1(k) + H2·U2(k):其中 U1(k) 为待求的未来控制量,U2(k) 为已知的过去控制量;H1、H2 为由脉冲响应系数 h1, h2, …, hM, …, hN(以及诸如 Σ_{i=1}^{P−M+1} h_i 的部分和)构成的系数矩阵(原矩阵的逐元素排版在转换中缺损,此处从略)。
2 反馈校正
修正后的输出预估值为:
yP(k+j) = ym(k+j) + βj[y(k) − ym(k)]
(图:预测控制滚动优化示意——以当前时刻 k 为界分为"过去"与"未来",图中标出设定值 w、实际输出 y(t)、参考轨迹 yr(t)、预测输出 yp(t) 与控制量 u(t),横轴为 k, k+1, …, k+P 各时刻)
在线滚动的实现方式
预测控制中,通过求解优化问题,可得到现时刻所 确定的一组最优控制{u(k),u(k+1),…,u(k+M-1)}, 其中M为控制的时域长度。
对过程施加这组控制作用的方式有:
在现时刻k,只施加第一个控制作用u(k);下一时刻,根 据采集到的过程输出,重新计算一组最优控制序列,仍 只施加新控制序列的第一个;
P步预测的向量形式
ym(k1) u(k)
ym(k2)
u(k1)
u(k1) u(k)
u(k2) u(k1)
Ym(k) ym(kM ) u(kM1) u(kM2) u(kM3)
ym(kM1)u(kM1) u(kM1) u(kM2)
ym(kP) u(kM1) u(kM1) u(kM1) u(kM2)
主要优点:
1. 无需降低其模型阶数。
2. 可正确地直接进行处理。
3. 闭环响应对于受控对象的变化具有鲁棒性。
4. 内部模型的在线更新,可以实现增益预调整。
5. 可以简化硬件条件。
6. 可以采用不同的采样周期。
7. 可以在线修改控制规则。
课件--模型预测控制
第三节 模型算法控制(MAC) 二. 反馈校正
以当前过程输出测量值与模型计算值之差修正模型预测值
yP(k+j) = ym(k+j) + βj[y(k) − ym(k)],j = 1, 2, …, P
其中 ym(k) = Σ_{i=1}^{N} hi·u(k−i)
对于P步预测,写成向量形式:YP(k) = Ym(k) + βe(k)
主要内容 预测模型 反馈校正 参考轨迹 滚动优化
第四节 动态矩阵控制(DMC) 一. 预测模型
DMC的预测模型
渐近稳定线性被控对象的单位阶跃响应曲线
PID控制:根据被控变量的测量值和给定值的偏差来确定当前的控制输入。
预测控制:不仅利用当前的和过去的偏差值,而且还利用预测模型来预测过程未来的偏差值,以滚动优化确定当前的最优控制策略,使未来一段时间内被控变量与期望值偏差最小。
从基本思想看,预测控制优于PID控制。
第二节 预测控制的基本原理
(框图:预测控制基本结构——设定值 r(k) 与反馈量比较后送入"在线优化控制器",输出 u(k) 作用于"受控过程"得到 y(k);"动态预测模型"给出预测输出 y(k+j|k),其与实际输出之差经"反馈校正"修正模型输出 y(k|k);d(k) 为扰动)
三要素:预测模型 滚动优化 反馈校正
第二节 预测控制的基本原理 一.预测模型(内部模型)
预测模型的功能:根据被控对象的历史信息 { u(k−j), y(k−j) | j≥1 } 和未来输入 { u(k+j−1) | j=1, …, m },预测系统未来响应 { y(k+j) | j=1, …, p }。
预测模型形式:
参数模型:如微分方程、差分方程
非参数模型:如脉冲响应、阶跃响应
浅谈预测控制发展及其存在问题
预测控制具有强大的发展前景,但在实践中仍存在很多不足,理论研究远远落后于实践。工程实践应用系统大部分是多变量复杂的非线性系统,而预测控制的理论研究集中于简单的线性化系统中。这是我们研究预测控制必须面对的问题,如何将理论结合于实践,是急需解决的问题。下面对预测控制中存在的主要问题做简单分析:
预测控制有很多算法,根据基本结构,大概可以分为三类:
1. 运用参数化模型的预测控制,分为广义预测极点配置控制和广义预测控制,通过受控自回归滑动平均模型,加强系统的鲁棒性。
2. 运用非参数模型的预测控制,分为动态矩阵控制和模型算法控制等。
计算机技术的发展使在线优化计算成为可能,为预测控制的快速发展突破提供条件。
预测控制是一种新型的计算机控制算法,对未来的控制发展会产生重要影响。在工业实践应用中虽然还有很多需要解决的理论和实际问题,但预测控制的深入研究和推广应用,势必会对我国工业自动化水平的提高产生积极的影响。(作者单位:郑州职业技术学院)
预测控制-1
预测控制的发展历程
预测控制首先在工程实践获得成功应用,是实践 超前于理论的一类控制器设计方法;
预测控制可看作是经典反馈控制和现代最优控制 之间的一种折中(滚动优化+反馈校正);
Cutler and Ramaker presented details of an unconstrained multivariable control algorithm which they named dynamic matrix control (DMC) at the 1979 National AIChE meeting
预测控制算法的三要素为:
预测模型 滚动优化 反馈校正
预测控制的三要素
预测模型:对未来一段时间内的输出进行预测;
滚动优化:滚动进行有限时域在线优化;
反馈校正:通过预测误差反馈,修正预测模型,提高预测精度。
通过滚动优化和反馈校正弥补模型精度不高的不足,抑制扰动,提高鲁棒性,适用于多变量约束系统。
工业预测控制软件的发展历程
预测控制MPC
其中u、y分别是输入量、输出量相对于稳态工作 点的偏移值。
其中N是建模时域,与采样周期Ts有关,N·Ts对 应于被控过程的响应时间,在合理选择Ts的情况 下,建议N的数值在20~60之间。
(2)阶跃响应模型
当输入为单位阶跃输入时,即 U(s) = 1/s。
2、预测控制基本原理
1978年,J. Richalet等人就提出了预测控制算法的三要素:内部(预测)模型、参考轨迹、控制算法。
现在一般则更清楚地表述为:内部(预测)模型、滚动优化、反馈控制。
(1)预测算法基本工作过程:模型预测、滚动优化、反馈校正。
其中 Δu(k−i) = u(k−i) − u(k−i−1) 为 k−i 时刻作用在系统上的控制增量。
(图12-4 阶跃响应模型)
即:a(t) = ∫_0^t g(τ)dτ
实际上 a_i = Σ_{j=1}^{i} g_j = Σ_{j=1}^{i} h_j,即:g_i = h_i = a_i − a_{i−1}。
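上述脉冲响应与阶跃响应系数之间的换算关系,用 MATLAB 验证非常直接(g 为假定的脉冲响应系数向量,仅作示意):

g  = [0.30 0.25 0.20 0.10 0.05];  % 假定的脉冲响应系数 g_i
a  = cumsum(g);                   % 阶跃响应系数 a_i = g_1 + ... + g_i
g2 = [a(1), diff(a)];             % 反算 g_i = a_i - a_(i-1), 结果应与 g 相同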
模型预测控制是一种优化控制算法,通过某一性能指标的最优来确定未来的控制作用。
控制目的:通过某一性能指标的最优,确定未来的控制作用。
优化过程:随时间推移在线优化、反复进行;每一步实现的是静态优化,全局是动态优化。
滚动优化示意图
(k时刻优化:1─参考轨迹yr(虚线);2─最优预测输出y(实线);3─最优控制作用u)
预测控制-1
预测控制的产生背景——LQG算法
• LQG算法 (linear quadratic Gaussian)
• u -- process inputs, or manipulated variables
• y -- measured process outputs
• x -- process states to be controlled
• wk -- state disturbance, independent Gaussian noise with zero mean
• nk -- measurement noise, independent Gaussian noise with zero mean
• X0 -- initial state, assumed to be Gaussian with nonzero mean
预测控制的产生背景——LQG算法
• 上述优化问题的解为状态估计与状态反馈的组合:u(k) = −Kc·x̂(k),其中状态估计 x̂(k) 由卡尔曼滤波器给出。
• 卡尔曼滤波增益 Kf 通过求解矩阵黎卡提方程得到,而控制器增益 Kc 通过构建对偶的黎卡提方程得到。
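在 MATLAB 中(需要 Control System Toolbox),这两个增益通常可由现成的 Riccati 求解函数得到,示意如下(A、B、C、G 为假定的状态空间与噪声输入矩阵,Qx、Ru、Qw、Rv 为假定的权重与协方差矩阵):

[Kc, Pc] = dlqr(A, B, Qx, Ru);      % 控制器增益 Kc: 求解控制Riccati方程(离散LQR)
[Kf, Pf] = dlqe(A, G, C, Qw, Rv);   % 卡尔曼滤波增益 Kf: 求解对偶的Riccati方程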
预测控制的产生背景——LQG算法
• LQG算法具有强大的稳定性:对于精确模型,只要Q半正定、R正定,任何合理的线性对象的LQG控制器都会稳定。
▪ 工业生产规模不断扩大
▪ 对生产过程要求不断提高:质量、性能、安全……
▪ 复杂性:非线性、不确定性、时变性、耦合、时滞……
预测控制的产生背景
❖ 现代控制理论的不足:
▪ 依赖精确模型
▪ 适合多变量控制,但算法复杂
▪ 实现困难:计算量大、鲁棒性差……
分数: ___________华北电力大学研究生作业学年学期:课程名称:预测控制学生姓名:学号:提交时间:目录引言 (1)1.预测控制 (2)1.1.单步预测控制 (2)1.2.多步预测控制 (4)1.2.1.对差分方程的思考 (5)1.2.2.差分方程与传递函数关系 (6)2.内膜控制(IMC) (7)2.1.基本原理 (7)2.1.1.模型获取 (7)2.1.2.控制器计算 (7)2.1.3.性质 (8)2.2.基于IMC的PID设计 (9)2.2.1.IMC变形 (9)2.2.2.IMC控制器与PID控制器的联系 (9)2.3.效果 (10)3.动态模型控制(DMC) (10)附录A (13)附录B (16)附录C (20)附录D (22)引言本学期开了新的一门控制学科——预测控制,预测控制是一种基于模型的先进控制技术,它源于工业实践,最大限度地结合了工业实际地要求,并且在实际中取得了许多成功的应用,是一类新型的计算机控制算法。
因为工业生产的过程相当复杂,通过“黑白箱”建立起的模型并不完善,即使使用理论非常复杂的现代控制理论,其控制效果往往也不尽如人意,甚至在某些方面还不及传统的PID 控制。
随后通过老师课堂上对预测控制学科的介绍和讲解,我们渐渐认识到了这门新的学科,它不同于之前学过的PID控制器的思想,也和前馈系统控制思想有很大的不同;预测控制采用多步预测、滚动优化和反馈校正等控制策略实现对被控对象的控制,适用于控制不易建立精确数字模型并且比较复杂的工业生产过程。
并且由于当今计算机技术的快速发展,Matlab、C、C++的出现极大的方便了我们的理论分析和数学运算。
本课程一方面通过老师课堂上的讲解,另一方面通过课下练习老师布置的习题,一步一步的深入了解了预测控制的原理和运行机制。
1. 预测控制预测控制大致分为以下几个过程。
1) 模型预测对于线性模型有:y (k )=a 1y (k −1)+a 2y (k −2)+bu(k −1),此模型中u(k −1)为一步固有延迟。
其传递函数可以写为G (s )=kT 1s 2+T 2s+1e −τs (τ=0,T 1,T 2与a 1,a 2有关,k =b1−a 1−a 2)y ̂(k +p )=a 1y ̂(k +p −1)+a 2y ̂(k +p −2)+bu(k +p −1)2) 滚动优化目标函数:J min =∑q i [y r (k +i )−y ̂(k +i)]2P i=1+∑λj Δu 2(k +j −1)M j=1滚动概念:ΔU (k )=[Δu (k )⋯Δu (k +m −1)]T ΔU (k +1)=[Δu (k +1)⋯Δu (k +m )]T3) 反馈校正ê(k +j )=y (k )−y ̂(k |k −j )其中ê(k +j )——预测误差、y (k )——当前时刻输出、y ̂(k |k −j )——第j 个时刻前对当前时刻做的预测y p ̂(k +j |k )=ŷ(k +j |k )+ℎj ê(k +j) 其中y p ̂(k +j |k )——反馈校正之后的预测输出。
4) 参考轨迹y r (k +i )=∂y r (k +i −1)+(1−ð)r(k +j) 1.1. 单步预测控制对于单步预测控制的学习通过例子进行学习,例子如下:对y (k )=a×y (k −1)+b×u (k −l )作d 步预测,极小化如下指标J d=12(y r(k+d)−y p(k+d))2+12λΔu2(k)传递函数为Y(s)U(s)=bs+(1−a),其中时间常数T s=11−a,因此当a值越小时,系统响应时间会越慢。
此外,b(1−a)×U是系统的最终输出值。
参考轨迹为Y r(k)=α×Y r(k−1) + (1−α)×r(k−1) ( 0<α<1 ),可以理解为Y rr =(1−α)s+(1−α),此时调节α意味着调节该传递函数的时间参数T r=11−α。
因此,当α接近0时,预测轨迹跟踪r会快。
同时,由于该传递函数为一阶系统,因此参考轨迹不会出现超调震荡的现象。
最后参数λ是控制量增量的约束参数,当λ越大时,目标函数会相应的增大,当然,课堂上老师通过公式推导得出如下公式:dU =b×Yr−a×Y−b×Uλ+b2由上述公式可知,当系统模型确定时,λ越大,控制量的增量dU会越小,这样最直观的现象是控制量U会缓慢的增加。
由于在实际过程中,执行器的控制量的响应速度有限制,同时控制输出也不是无限大,因此从考虑执行器的响应速度的角度分析得出λ可以反映执行器的响应速度。
通过Matlab编程得出如下控制效果图:通过以上各图可以清晰地发现,当系统的时间常数T s较大时(即系统为慢速系统),执行器的控制器输出较为平缓,并且系统输出能够良好的跟踪参考轨迹;当系统是的时间常数T s较小时(即系统为快速系统),执行器控制量的输出会有较大波动,同时系统的输出出现了一定的超调。
通过本次实验发现单步预测控制通过改变离散控制过程中的目标值,使其逐步逼近已设定的目标值,同时利用目标函数得出有极小目标函数值时的控制量增量进而实现控制。
因为,当将参考轨迹的时间常数设为0时,就相当于直接将目标值施加系统中,然后用目标函数计算出控制量的增量即可。
1.2.多步预测控制多步预测控制的学习通过例子进行学习,例子如下:对y(k)=a×y(k −1)+b×u(k −L)作d 步预测,极小化如下指标J d =12(y r (k +d )−y p (k +d ))2+12λΔu 2(k ) 其中:L =1,d =2;L =2,d =2,3。
1.2.1. 对差分方程的思考当L =1时,差分方程为y(k)=a×y(k −1)+b×u(k −1),可推出其参考轨迹递推式如下:y p (k +1)=a×y (k )+b×u (k )⋯⋯y p (k +d )=a×y p (k +d −1)+b×u (k )化简上述各式可得:y p (k +d )=a d ×y (k )+b×∑a i d−1i=0×u (k ) 又u (k )=u (k −1)+∆u (k )∴ y p (k +d )=a d ×y (k )+b×∑a i d−1i=0×u (k −1)+b×∑a i d−1i=0×∆u(k) (1)由题目知,最小化J d ,且公式(1)中的自变量是∆u ,所以用求导公式令∂J d ∂∆u =0即可。
此时求得:∆u (k )=b×∑a i d−1i=0×(y r (k +d )−a d ×y (k )−b×∑a i d−1i=0×u(k −1))(b×∑a i i=0)2×λ上述公式是L =1时的通式。
同理,可以推导出当L =2时:∆u (k )=b×∑a i d−2i=0×(y r (k +d )−a d ×y (k )−b×∑a i d−1i=0×u(k −1))(b×∑a i i=0)2×λ1.2.2.差分方程与传递函数关系由数字仿真理论可知,差分方程与传递函数存在如下关系式:Y(s)=kT×s+1×U(s)y(k+1)=a×y(k)+bu(k−1)其中:a=e−∆tT,b=k×(1−a)通过Matlab编程得出如下控制效果图:2. 内膜控制(IMC)2.1. 基本原理2.1.1. 模型获取G m (s)可分解为G m−可逆部分(非最小相位,零点在左半平面)和G m+其余部分(稳态增益为1).2.1.2. 控制器计算G c (s )=G m−−1(s )∙F (s )F(s)=1(T f s+1)n f其中F(s)是为了使G c(s)物理可实现而引入的低通滤波器,n f与G m−有关,T f 是整个控制器时长的10%左右。
例如对于下述被控对象:G m(s)=25s+1e−2s有G m−(s)=25s+1G m+(s)=e−2s2.1.3.性质将内膜控制器构成的闭环控制器进行化简得G B(s)=y(s)r(s)=G p G c1+G c(G p−G m)理想情况下,G m=G p并且G c=G m−1、y(s)=r(s)。
同时实现了稳态无差,因为控制器的稳态增益等于模型稳态增益的逆。
2.2.基于IMC的PID设计2.2.1.IMC变形其中C(s)=G c1−G c G m =G m−(T f s+1)n f⁄1−G m+G m−∙G m−(T f s+1)n f⁄=G m−−1(T f s+1)n f−G m+2.2.2.IMC控制器与PID控制器的联系对于一阶系统G p(s)=k Ts+1G m(s)=k mT m s+a此时G m+=1,G m−=G m,T f=λT m,n f=1,所以C(s)=T m s+1K m(λT m s+1)−1=T m s+1λT m k m s=1λk m(1+1T m s)对于二阶系统G p (s )=1(T 1s +1)(T 2s +1)G m =k m(T m1s +1)(T m2s +1)此时G m+=1,G m−=G m ,T f =λ(T m1+T m2)2,n f =2,所以C (s )=(T m1s +1)(T m2s +1)k m (λT m s +1)2−1=T m1T m2s 2+(T m1+T m2)s +1k m ((λT m s )2+2λT m s ) (1)当n f =1时C (s )=T m1T m2s 2+(T m1+T m2)s +1λT m k m s =T m1+T m2λT m k m +1λT m k m s +T m1T m2s λT m k m(2)观察(1)、(2)式,发现n f =2时相当于串联了一阶滤波器。
2.3. 效果以一阶控制系统为例,通过Matlab 编程得出如下控制效果图:3. 动态模型控制(DMC )动态矩阵预测控制算法是一种基于系统的单位阶跃响应序列非参数模型的控制策略。
动态矩阵通过被控对象的阶跃响应来建立系统的非参数数学模型,可用于渐进稳定的线性系统。
动态矩阵预测控制算法通常三部分:预测模型、滚动优化、反馈校正。
动态矩阵控制算法结构图图3-1如所示。
图3-1 DMC控制算法结构图运行中以加热炉为例进行学习,通过Matlab编程得出如下控制效果图:输出结果如下:Percent error in the last step response coefficientof output yi for input uj is∶44%Percent error in the last step response coefficient of output yi for input uj is∶44%Time remaining 500/500Time remaining 400/500Time remaining 300/500Time remaining 200/500Time remaining 100/500Time remaining 0/500Simulation time is 0.032 seconds.附录A% ============================================ % 单步预测控制% by: 2017−03−01% ============================================ % function PreSP =3;% 仿真时间100ST =0.1;% 仿真步长LP =SP/ST;% 仿真循环次数% 模型参数% Y(k)=a∗Y(k−1) +b∗U(k)% Y(s)/U(s)=b/(s+(1−a))a =0.5;b =2;r =1;% 目标值L =1;% 延迟环节系数d =1;% 预测步长% 参考轨迹方程Yr =Alpha∗Yr + (1−Alpha)∗r ( 0<Alpha<1 )% Alpha 接近1 参考轨迹可信度越高% 接近0 原目标值可信度越高Tr =2;% 参考轨迹时间参数Ts =1/(1−a);% 模型时间常数Alpha =exp(−Ts/Tr);% 参考轨迹参数% Lamda 为目标函数对控制量增量约束的参数,是一个给定参数Lambda =0.5;% 控制量增量约束参数% 初始化参数Yo =zeros(1,LP+1);% 模型输出值YrRes =zeros(1,LP+1);% 参考轨迹Uo =zeros(1,LP+1);% 控制量输出T =zeros(1,LP+1);% 仿真时间% 初值初始化Y0 =0;U0 =0;dU =0;Y =Y0;U =U0 +dU;Yr =Y0;for i=1:LPYr =Alpha∗Yr+(1−Alpha)∗r;% 参考轨迹dU =b∗(Yr−a∗Y−b∗U)/(Lambda+b∗b);% 控制量增量U =U +dU;% 控制量Y =a∗Y +b∗U;% 模型输出% 存储历史值T(i+1)=i;Yo(i+1)=Y;Uo(i+1)=U;YrRes(i+1)=Yr;end% 图形显示plot(T,Yo,′ro−′,T,Uo,′k.−′,T,YrRes,′∗−′);legend(′模型输出Yo′,′控制量输出Uo′,′参考轨迹Yr′);% ShowTitle =sprintf(′a =%1.3f,b =%3.3f,\alpha =%1.3f,\Lamda =%4.3f′,...% a,b,Alpha,Lamda);title([′a =′,num2str(a),′,b =′,num2str(b),...′,\alpha =′,num2str(Alpha),′,\lambda =′,num2str(Lambda)]);grid minor附录B% ============================================ % 多步预测控制% by: 2017−03−07% ============================================ function Pre0307ST =3;% 仿真时间100DT =0.1;% 仿真步长LP =ST/DT;% 仿真循环次数R =1;% 目标值% 模型参数% Y(k)=a∗Y(k−1) +b∗U(k)% a =exp(−DT/Ts) −>Ts =−DT/ln(a)% b =K∗(1−a) −>K =b/(1−a)% Y(s)/U(s)=K/(Ts∗s+1)% ============================================ % a =0.5;% Ts =−DT/log(a);% 模型时间常数b =2;% k =b/(1−a);% 模型增益% 为便于结合传递函数模型,将上述表达式进行改写如下% ============================================ Ts =10;% 模型时间常数a =exp(−DT/Ts);% k =200;% b =k∗(1−a);% ============================================ L =1;% 延迟环节系数d =1;% 预测步长% 参考轨迹方程Yr =Alpha∗Yr + (1−Alpha)∗r ( 0<Alpha<1 )% Alpha 接近1 参考轨迹可信度越高% 接近0 原目标值可信度越高Tr =10;% 参考轨迹时间参数Alpha =exp(−Ts/Tr);% 参考轨迹参数% Lamda 为目标函数对控制量增量约束的参数,是一个给定参数Lambda =0.7;% 控制量增量约束参数% 初始化参数Yo =zeros(1,LP+1);% 模型输出值YrRes =zeros(1,LP+1);% 参考轨迹Uo =zeros(1,LP+1);% 控制量输出T =zeros(1,LP+1);% 仿真时间Uc =zeros(1,L);% 初值初始化Y0 =0;Y =Y0;Yr =Y0;% 求取控制量增量所用参数A =0;if L ==1for i=0:d−1A =A+power(a,i);endA =A∗b;B =A;elseif L==2for i=0:d−2A =A+power(a,i);endB =b∗(A+power(a,d−1));A =A∗b;endfor i=1:d−1Yr =Alpha∗Yr+(1−Alpha)∗R;% 参考轨迹初始化endYrRes(1)=Yr;for i=1:LPYrRes(i+1)=Alpha∗YrRes(i)+(1−Alpha)∗R;% 参考轨迹dU =A∗(YrRes(i+1)−power(a,d)∗Y−B∗Uc(1))/...(Lambda+A∗A);% 求取控制量增量通式if L>1Uc(2:L)=Uc(1:L−1);% 延迟环节endUc(1)=Uc(1) +dU;% 控制量Y =a∗Y +b∗Uc(L);% 模型输出% 存储历史值T(i+1)=i∗DT;Yo(i+1)=Y;Uo(i+1)=Uc(L);end% 图形显示plot(T,Yo,′ro−′,T,Uo,′k.−′,T,YrRes,′∗−′);legend(′模型输出Yo′,′控制量输出Uo′,′参考轨迹Yr′); title([′\lambda =′,num2str(Lambda),...′,T_s =′,num2str(Ts),...′,T_r =′,num2str(Tr)]);grid minor附录Cfunction [yimc]=yijieIMC(alfa1,k,t1,r1,dt,st,c1,c2)K=k;T=t1;r=r1;DT=dt;ST=st;LP=ST/DT;alfa=alfa1;x(1:4)=0;e1=0;e2(1:LP)=0;for i=1:LPe1=r−x(4);e2(i)=e1+x(3);if i>1x(1)=e2(i)∗(DT+T)/K/DT−T/DT/K∗e2(i−1);x(2)=x(2)+DT∗(−x(2)+x(1))/T/alfa;x(3)=x(3)+DT∗(−x(3)+c1∗K∗x(2))/T/c2;x(4)=x(4)+DT∗(−x(4)+K∗x(2))/T;yimc(i)=x(4);t(i)=i∗DT;endendfunction [ypid,u,t]=yijiePID(alfa1,k,t1,r1,dt,st,c1,c2)K=k;T=t1;r=r1;DT=dt;ST=st;LP=ST/DT;alfa=alfa1;x1=0;xp=0;xi=0;u0=0;for 
i=1:LPe=r−x1;xp=e/alfa/K/c1;xi=xi+DT∗e/T/c1/alfa/K/c2;u0=xp+xi;x1=x1+DT∗(−x1+K∗u0)/T;ypid(i)=x1;t(i)=i∗DT;u(i)=u0;end%% 一阶系统IMC−PIDclc;clear all;close all;K=5;T=3;r=1;DT=1;ST=500;alfa=18;c1=1;c2=1;[yimc]=yijieIMC(alfa,K,T,r,DT,ST,c1,c2); [ypid,u,t]=yijiePID(alfa,K,T,r,DT,ST,c1,c2); plot(yimc)hold on;plot(ypid,′r′)附录D% 加热炉DMC设计Km = 2.5;Tp1 =20;Tp2 =120;dp =80;Kp =Km;Tm1 =Tp1;Tm2 =Tp2;dm =dp;% modelGm =poly2tfd(Kp,conv([Tp1,1],[Tp2,1]),0,dp);% Sample Time Ts.Model length N.Output number ny Ts =10;N =200;ny =1;% Stop Response Coefficientsmodel =tfd2step(N,Ts,ny,Gm);plant =tfd2step(N,Ts,ny,Gp);% else Parametorsp =20;M =5;Q =1;R =1;Kmpc =mpccon(model,Q,R,M,p);%Step point and simulationyr =1;tend =500;[y,u,ym]=mpcsim(plant,model,Kmpc,tend,yr); plotall(y,u,Ts);。