IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 52, NO. 8, AUGUST 2007

Intelligent Excitation for Adaptive Control With Unknown Parameters in Reference Input

Chengyu Cao, Naira Hovakimyan, and Jiang Wang

Abstract—The model reference adaptive control problem is considered for a class of reference inputs that depend upon the unknown parameters of the system. Due to the uncertainty in the reference input, the tracking objective cannot be achieved without parameter convergence. The common approach of injecting persistent excitation (PE) into the reference input leads to tracking of the excited reference input as opposed to the true one. A new technique, named intelligent excitation, is presented for introducing an excitation signal in the reference input and regulating its
amplitude, dependent upon the convergence of the output tracking and parameter errors. Intelligent excitation ensures parameter convergence, similar to conventional PE; it vanishes as the errors converge to zero and reinitiates with every change in the unknown parameters. As a result, the regulated output tracks the desired reference input and not the excited one.

Index Terms—Adaptive control, excitation, parameter convergence.

I. INTRODUCTION

This note considers the framework of the model reference adaptive control (MRAC) architecture for the class of reference inputs that depend upon the unknown parameters of the system dynamics. This problem formulation is motivated by practical applications, such as visual guidance [1] or close-coupled formation flight [10]. In [1], the target tracking problem is considered with a visual sensor (monocular camera), where the characteristic length of the target is unknown and the relative range is not observable from the available visual measurements. In [10], autopilot design is considered for a trailing aircraft that follows a lead aircraft in a close-coupled formation. The trailing aircraft must constantly seek an optimal position relative to the leader to minimize the unknown drag effects introduced by the wing-tip vortices of the lead aircraft. Both problems lead to the definition of a reference system with a reference input that depends upon the unknown parameters of the system. Following the common approach of injecting persistent excitation (PE) into the reference input leads to parameter convergence [2], [11], [12], but the system output tracks the excited reference input and not the true one. A new technique, named intelligent excitation, is presented in this note to solve the output tracking problem while ensuring parameter convergence. Its main feature is that it initiates excitation only when the tracking error exceeds a prespecified threshold, and it vanishes as the parameter error converges to a neighborhood of the origin.

The relationship between convergence
of the parameter and the tracking errors, as well as between the conditions for their convergence, has been extensively explored in the literature [2]–[8], [13]. A well-known fact is that exponential parameter convergence leads to exponential tracking-error convergence [2]. We note that for a special set of regressor functions, such as periodic ones, one can prove exponential convergence of the tracking error to zero without the PE

Manuscript received August 5, 2005; revised October 23, 2006 and May 1, 2007. Recommended by Associate Editor M. Demetriou. This work was supported by the Air Force Office of Scientific Research (AFOSR) under Contract FA9550-05-1-0157 and the Multidisciplinary University Research Initiative (MURI) under Subcontract F49620-03-1-0401.

The authors are with the Department of Aerospace and Ocean Engineering, Virginia Polytechnic Institute and State University, Blacksburg, VA 24061-0203 USA (e-mail: chengyu@; nhovakim@; jwang005@). Color versions of one or more of the figures in this paper are available online.

Digital Object Identifier 10.1109/TAC.2007.902780

Authorized licensed use limited to: UNIVERSITY OF CONNECTICUT. Downloaded on February 12, 2009 at 09:44 from IEEE Xplore. Restrictions apply.
requirement or parameter convergence [4], [5], [9]. However, in the problem of interest to us, namely, when the reference input depends upon the unknown parameters of the system dynamics, the control objective cannot be met without parameter convergence. We further notice that in the presence of PE the exponential convergence of the tracking error to zero does not imply convergence of the system output to the desired reference input; instead, the output tracks the excited reference input. In this note, we present a control design methodology that ensures simultaneous parameter convergence and tracking of the desired reference input.

This note is organized as follows. Section II presents the problem formulation. The adaptive controller with intelligent excitation is introduced in Section III. Convergence of the regulated output within the desired precision in finite time is shown in Section IV. In Section V, simulation results are presented, and Section VI concludes this note. The proofs are included in the Appendix.

II. PROBLEM FORMULATION

Consider the following single-input–single-output (SISO) system dynamics:

  ẋ(t) = A_m x(t) + b_m (1/λ*_r)(u(t) − (θ*_x)ᵀ x(t)),  y(t) = cᵀ x(t)    (1)

where x ∈ ℝⁿ is the system state vector (measurable), u ∈ ℝ is the control signal, b_m ∈ ℝⁿ and c ∈ ℝⁿ are known constant vectors, A_m is a known Hurwitz n × n matrix, λ*_r ∈ ℝ is an unknown constant with known sign, θ*_x ∈ ℝⁿ is the vector of unknown parameters, and y ∈ ℝ is the regulated output. Let θ* = [(θ*_x)ᵀ λ*_r]ᵀ. The control objective is to regulate the output y so that it tracks r(θ*), where r is a known map r: ℝⁿ × ℝ → ℝ, dependent upon the unknown parameters θ* of the system. Let Θ be the compact set to which the unknown parameters belong, i.e., θ* ∈ Θ.

In the case of a known reference signal r(t), application of the conventional MRAC ensures that the tracking error between the desired reference model and the system state goes to zero
asymptotically, which consequently leads to the output convergence y(t) → r(t). Since r(θ*) depends upon the unknown parameters θ*, it cannot be used in the feedforward component of the MRAC architecture, even if the map r: ℝⁿ × ℝ → ℝ is known.

III. ADAPTIVE CONTROLLER USING INTELLIGENT EXCITATION

In this section, we present a solution to the tracking problem in the presence of the unknown r(θ*). We consider the following reference model:

  ẋ_m(t) = A_m x_m(t) + b_m k_g r̂(t),  y_m(t) = cᵀ x_m(t)    (2)

where k_g ≜ lim_{s→0} 1/(cᵀ(sI − A_m)⁻¹ b_m), and the following controller:

  u(t) = θ_xᵀ(t) x(t) + θ_r(t) k_g r̂(t)    (3)

in which r̂(t) is a bounded reference signal to be defined shortly, and θ_x(t) ∈ ℝⁿ and θ_r(t) ∈ ℝ are the adaptive parameters governed by the following adaptive laws [15]:

  θ̇_x(t) = Γ_x Proj(θ_x(t), −x(t) eᵀ(t) P b_m sgn(λ*_r))
  θ̇_r(t) = Γ_r Proj(θ_r(t), −k_g r̂(t) eᵀ(t) P b_m sgn(λ*_r))    (4)

where e(t) = x(t) − x_m(t). In (4), Γ_x > 0 and Γ_r > 0 are the adaptation rates, P = Pᵀ > 0 is the solution of the algebraic Lyapunov equation A_mᵀ P + P A_m = −Q for arbitrary Q > 0, and Proj(·,·) denotes the projection operator defined as

  Proj(θ, y) = y,  if f(θ) < 0
             = y,  if f(θ) ≥ 0 and ∇fᵀ y ≤ 0
             = y − (∇f ∇fᵀ/|∇f|²) y f(θ),  if f(θ) ≥ 0 and ∇fᵀ y > 0    (5)

where f: ℝⁿ → ℝ is the smooth convex function f(θ) = (θᵀθ − θ²_max)/ε, θ_max is the norm bound imposed on the parameter vector θ, and ε > 0 denotes the convergence tolerance of our choice. Let

  χ(t) = [r̂(t) x_mᵀ(t)]ᵀ,  H(s) = χ(s)/r̂(s).    (6)

It follows from (2) that x_m(s)/r̂(s) = (sI − A_m)⁻¹ b_m k_g, and hence

  H(s) = [1  ((sI − A_m)⁻¹ b_m k_g)ᵀ]ᵀ.    (7)

We define an (n+1) × 2m matrix Ω with its pth-row, qth-column element

  Ω_pq = Re(H_p(jω_⌈q/2⌉)),  if q is odd
       = Im(H_p(jω_⌈q/2⌉)),  if q is even,  q = 1, 2, …, 2m    (8)

where ⌈q/2⌉ denotes the smallest integer that is ≥ q/2 and H_p(s) is the pth element of H(s) defined in (7).

Lemma 1: There exist m and ω₁, …, ω_m such that Ω has full row rank.

Consider the following excitation signal over a finite time interval [0, T]:

  e_x(t) = Σ_{i=1}^{m} sin(ω_i t),  t ∈ [0, T]    (9)

where ω₁, …, ω_m
ensure that Ω has full row rank, and T > 0 is the first time instant for which e_x(T) = 0. The existence of a finite T is straightforward for a linear combination of sinusoidal functions. Let θ(t) = [θ_xᵀ(t) θ_r(t)]ᵀ. The reference signal r̂(t) is defined as

  r̂(t) = r̂₀(t) + E_x(t)    (10)
  r̂₀(t) = r(θ(t))    (11)
  E_x(t) = k(t) e_x(t − jT),  if t ∈ [jT, (j+1)T),  j = 0, 1, 2, …    (12)

with the excitation amplitude

  k(t) = k₀,  t ∈ [0, T)
  k(t) = min( Γ₁ ∫_{(j−1)T}^{jT} eᵀ(τ) Q e(τ) dτ, Γ₂ − Γ₃ ) + Γ₃,  t ∈ [jT, (j+1)T),  j ≥ 1    (13)

where E_x(t) is the intelligent excitation signal, e_x(t) is defined in (9), and Γ₁ > 0, Γ₂ > 0, Γ₃ > 0, and k₀ are design gains, subject to Γ₃ ≤ k₀ ≤ Γ₂. It is straightforward to verify that Γ₃ ≤ k(t) ≤ Γ₂, t ≥ 0. The complete controller with intelligent excitation consists of (2)–(4) and (10).

We note that for the system in (1) the reference input r(θ*) is not available, since θ* is unknown. We can use θ(t), the estimate of θ*, to construct r(θ(t)), for which MRAC can be applied. However, without
parameter convergence, there is no guarantee that r(θ(t)) will converge to a neighborhood of r(θ*) in finite time. Therefore, we augment r(θ(t)) with the intelligent excitation signal E_x(t), which achieves the control objective, as we prove in the following. First, we notice that although k(t) is piecewise constant over the time increments [jT, (j+1)T), E_x(t) is a continuous signal, since e_x(T) = e_x(0) = 0. Thus, the redefined reference input r̂(t) in (10) is continuous and bounded.

Let e(t) = x(t) − x_m(t) be the tracking error. Substituting (3) into (1), it follows from (2) that the dynamics of the tracking error can be rewritten as

  ė(t) = A_m e(t) + (1/λ*_r) b_m (θ̃_xᵀ(t) x(t) + θ̃_r(t) k_g r̂(t))    (14)

where θ̃_x(t) = θ_x(t) − θ*_x and θ̃_r(t) = θ_r(t) − λ*_r denote the parametric errors. Let θ̃(t) = [θ̃_xᵀ(t) θ̃_r(t)]ᵀ. Using the candidate Lyapunov function

  V(e(t), θ̃(t)) = eᵀ(t) P e(t) + (1/|λ*_r|) θ̃_xᵀ(t) Γ_x⁻¹ θ̃_x(t) + Γ_r⁻¹ (1/|λ*_r|) θ̃_r²(t)    (15)

it can be verified easily that V̇(t) ≤ −eᵀ(t) Q e(t) ≤ 0, t ≥ 0. Application of Barbalat's lemma yields lim_{t→∞} e(t) = 0. Furthermore, it can be verified easily that if r̂(t) = r̄, where r̄ is constant, one has lim_{t→∞} y(t) = r̄. From the definition of asymptotic stability, it follows that for any ε̄ > 0 there exists a finite T_s > 0 such that |y(t) − r̄| ≤ ε̄, t ≥ T_s.

IV. CONVERGENCE RESULT OF ADAPTIVE CONTROLLER WITH INTELLIGENT EXCITATION

We note that the amplitude of the excitation signal in (13) is defined via the integral of the tracking error over a time increment equal to the period of the excitation signal. To prove parameter convergence, we need to characterize the relationship between the unknown parameter error and the integral of the tracking error. Let Ξ(t) ≜ [xᵀ(t) x_mᵀ(t) θᵀ(t)]ᵀ ∈ ℝ^{3n+1} be the state of the extended system dynamics (1), (2), and (4) with the reference input defined in (10) and (13). Consider the compact set of all possible initial conditions of the system dynamics
and adaptive parameters Ξ(0) = Ξ₀ ∈ D₀ ⊂ ℝ^{3n+1}. The Lyapunov function V(e(t), θ̃(t)) in (15) can be equivalently rewritten in the phase space as

  V(Ξ, θ*) = (x − x_m)ᵀ P (x − x_m) + (1/|λ*_r|)(θ_x − θ*_x)ᵀ Γ_x⁻¹ (θ_x − θ*_x) + Γ_r⁻¹ (1/|λ*_r|)(θ_r − λ*_r)²

and viewed as a map V(Ξ, θ*): ℝ^{3n+1} × Θ → [0, ∞). Since the error dynamics are globally asymptotically stable, the maximum of the Lyapunov function for every initial condition Ξ₀ ∈ D₀ is attained at the initial time instant. Let V_max = max_{Ξ₀∈D₀, θ*∈Θ} V(Ξ₀, θ*). Notice, however, that as time evolves, the system trajectory Ξ(t) can leave D₀; but since the Lyapunov function is nonincreasing, there exists a compact set D_c, possibly larger than D₀, such that the system trajectory stays in it for all t ≥ 0.

Since (13) implies that k(t) is constant over any time interval [jT, (j+1)T), j = 0, 1, 2, …, we denote by k_j the value of k(t) over this interval. We note that the system trajectory over [jT, (j+1)T], as well as the value of ∫_{jT}^{(j+1)T} eᵀ(t) Q e(t) dt, is uniquely defined by Ξ(jT) ∈ D_c, k_j ∈ (0, Γ₂], and θ* ∈ Θ. We consider the following map g_Ξ: D_c × (0, Γ₂] × Θ → [0, ∞) to characterize this relationship:

  ∫_{jT}^{(j+1)T} eᵀ(t) Q e(t) dt = g_Ξ(Ξ(jT), k_j, θ*).    (16)

Fig. 1. Illustration of maps g_k(v) and g_i(w).

We note that the entire system (1), (2), and (4), defining the trajectory of Ξ(t), can be viewed as a time-invariant system with E_x(t) as an external periodic input signal. Thus, the trajectory of Ξ(t) on t ∈ [j₁T, (j₁+1)T) is the same as on t ∈ [j₂T, (j₂+1)T) if Ξ(j₁T) = Ξ(j₂T) and k_{j₁} = k_{j₂}, for any j₁, j₂. Hence, g_Ξ(Ξ(jT), k_j, θ*) is independent of the choice of j and depends only upon the values of Ξ(jT) and k_j. Moreover, since Q is positive definite, g_Ξ(Ξ_c, k_c, θ*): D_c × (0, Γ₂] × Θ → [0, ∞) is nonnegative, where Ξ_c stands for the value of Ξ(jT) and k_c stands for k_j, to indicate the independence of j. We further define the map g_v: [0, V_max] × (0, Γ₂] → [0, ∞) as the solution of the following constrained optimization problem:
3)(17)where v 2[0;V max ].Notice that the constraint V (4c ; 3)=v de fines a nonempty compact set,hence the optimization problem (17)has at least one solution.Lemma 2:The map g v de fined in (17)has the following properties:1)g v (0;k c )=0;2)if k c >0and g v (v;k c )=0,then v =0.We de fine the map g k :[0;V max ]![0;1)asg k (v )=min0 k 0(g v (v;k c )=k c )(18)where g v is de fined in (17)and 02>03>0are design gains de fined in (14).The nonnegative property of g k (v )follows directly from the fact that g v (v;k c ) 0.Corollary 1follows from Lemma 2directly.Corollary 1:g k (v )=0if and only if v =0.It can be checked easily that g 4(4c ;k c ; 3)is a continuous function of its arguments.Therefore,g v (v;k c ),as well as g k (v ),continuously depend upon their arguments.Fig.1illustrates the function g k (v ).We note that g k (v )is a nonnegative function with a unique zero at v =0.Given the map g k (v ),which may not be monotonous,we de fine an “inverse-type ”map g i :[0;g k (V max )]![0;V max ]asg i (w )=max vf v 2[0;V max ]jg k (v )=w g :(19)Illustration of g i (w )for a possibly nonmonotonous g k (v )is shown in Fig.1for three different values of w .Since g k (v )is a continuous function and g k (0)=0,it can be checked easily that for any w 2[0;g k (V max )],the map g i (w )exists.Notice,however,that despite theAuthorized licensed use limited to: UNIVERSITY OF CONNECTICUT. Downloaded on February 12, 2009 at 09:44 from IEEE Xplore. 
fact that the map g_k(v) is continuous, g_i(w), as defined in (19), is not guaranteed to be continuous.

Lemma 3: The map g_i(w) has the following properties:

  1) lim_{w→0} g_i(w) = 0;    (20)
  2) g_i(0) = 0;    (21)
  3) for all v ≥ g_i(w), one has g_k(v) ≥ w.    (22)

For any constant γ > 1, we define

  ε₁(Γ₁, γ) = g_i(γ/Γ₁)    (23)
  j₃ = ⌈ lg(Γ₂/Γ₃)/lg(γ) ⌉ + 1    (24)

where Γ₁, Γ₂, and Γ₃ are the design gains defined in (13), and the notation ⌈a⌉ denotes the smallest integer greater than or equal to a. It follows from (20) that

  lim_{Γ₁→∞} ε₁(Γ₁, γ) = 0.    (25)

Theorem 1: Given the system in (1) and the adaptive controller with intelligent excitation in (2), (3), (4), and (13), if for τ ≥ j₃T, V(τ) ≥ ε₁(Γ₁, γ), then k(τ) = Γ₂.

Theorem 1 states that as long as the value of the Lyapunov function is greater than ε₁(Γ₁, γ), the reference input will be subject to PE with constant amplitude. Next, we prove that there exists a finite time instant such that the value of the Lyapunov function drops below ε₁(Γ₁, γ). In that case, consequently, the amplitude of the excitation signal will be regulated dependent upon the integral of the tracking error to ensure that the control objective can be met. Towards that end, let

  B_ṽ = { θ̃ : (1/|λ*_r|) θ̃_xᵀ Γ_x⁻¹ θ̃_x + Γ_r⁻¹ (1/|λ*_r|) θ̃_r² ≤ ṽ }

where 0 ≤ ṽ ≤ V_max. Let

  ρ(θ*, ṽ) = max_{θ̃ ∈ B_ṽ} |r(θ* + θ̃) − r(θ*)|.    (26)

Let α = λ_max(ccᵀ)/λ_min(P), where λ_min(P) is the minimum eigenvalue of P > 0, while λ_max(ccᵀ) is the maximum eigenvalue of ccᵀ.
We define

  ε₂(Γ₁, Γ₃, γ) = ‖G(s)‖_{L₁} ( ρ(θ*, ε₁(Γ₁, γ)) + Γ₃ + ε̄ ) + ε̄ + √(α ε₁(Γ₁, γ))

where ε̄ > 0 and γ > 1 are arbitrary constants, and ‖G(s)‖_{L₁} is the L₁ gain of the system G(s) = y(s)/r̂(s).

Theorem 2: For the system (1) and the adaptive controller with intelligent excitation (2), (3), (4), and (13), there exists a finite T_s > 0 such that

  |y(t) − r(θ*)| ≤ ε₂(Γ₁, Γ₃, γ),  t ≥ T_s.    (27)

It can be verified easily that ρ(θ*, ṽ) is a continuous function of ṽ and, therefore, ρ(θ*, 0) = 0. Hence, it follows that

  lim_{Γ₁→∞, Γ₃→0, ε̄→0} ε₂(Γ₁, Γ₃, γ) = 0.    (28)

This implies that we can set Γ₁ arbitrarily large and Γ₃ and ε̄ arbitrarily small to obtain any desired precision of ε₁ and ε₂ in (25) and (28). Notice that ε₂ can be set arbitrarily small by control design. Also, it is important to point out that a large value of Γ₁ will not cause instability, since the excitation signal E_x(t) is always bounded by Γ₂.

Thus, we proved that the adaptive controller with intelligent excitation regulates the system output in finite time. If there is any change in the unknown parameters of the system, then the desired reference trajectory changes correspondingly. If the interval in which the unknown parameters θ* hold constant values is larger than the finite settling time guaranteed by intelligent excitation, then the adaptive controller with intelligent excitation will achieve the control objective. Indeed, any change in the unknown parameters of the system results in an abrupt change of V(t). Theorem 1 then ensures that the intelligent excitation will reinitialize and lead to the desired output tracking.

Fig. 2. Illustration of time trajectories of V(t) and k(t).

Remark 1: For practical implementation, due to the presence of noise and transient tracking errors, we can set Γ₃ = 0 without worrying about premature disappearance of excitation. In that case, the excitation signal satisfies ε̄ ≤ k(t) ≤ Γ₂, where ε̄ is a small positive number due to the noise in practical implementation. Furthermore, the definition of j₃ in (24) changes to j₃ = ⌈ lg(Γ₂/ε̄)/lg(γ) ⌉ + 1. The constant gain Γ₁ is inversely proportional to the bound of the
parameter tracking error, so setting it large will increase the accuracy of the parameter estimates. The gain Γ₂ is the amplitude of the excitation signal, which controls the rate of convergence.

Remark 2: Fig. 2 illustrates the simultaneous change of V(t) and k(t). Let T̄ be the time instant when V(T̄) = ε₁(Γ₁, γ). Theorem 1 states that k(t) is nondecreasing and increases to Γ₂ before j₃T for any initial k₀ ∈ [Γ₃, Γ₂], while k(t) = Γ₂, ∀t ∈ [j₃T, T̄]. This implies a constant excitation signal, which leads to a decrease of V(t) until it drops below ε₁(Γ₁, γ). Once V(t) ≤ ε₁(Γ₁, γ), Theorem 2 (Step 1 in the proof) states that k(t) will decrease to Γ₃ + ε̄, where ε̄ can be arbitrarily small. Thus, Theorem 1 quantifies the performance of the intelligent excitation signal, while Theorem 2 consequently proves the output convergence. We further notice that Theorem 1 relates the presence of a constant excitation signal to the value of the Lyapunov function, which depends upon the unknown parameter errors. Thus, any change in the unknown parameters of the system, which leads to a new value of the Lyapunov function, implies reinitialization of the excitation signal.

Remark 3: We finally note that adaptive control is not the only tool for controlling systems in the presence of uncertainties. The robust control literature offers alternative approaches, such as high-gain controllers and variable structure controllers, to name just a few. However, for the output regulation problem discussed here, namely, when the desired reference input depends on unknown parameters of the system, parameter identification appears to be a required step for achieving the control objective. Intelligent excitation provides a solution for simultaneous parameter identification and output regulation.
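The excitation generator of (9), (12), and (13) can be sketched in a few lines. The frequencies and period below follow the choices in the simulation section, while the gains, the initial amplitude, and the tracking-error integral supplied by the caller are illustrative assumptions, not values prescribed by the note.

```python
import numpy as np

def make_intelligent_excitation(omegas, T, gamma1, gamma2, gamma3, k0):
    """Sketch of the excitation signal (9), (12) with amplitude law (13).

    err_integral(j) must return the integral of e'Qe over the previous
    period [(j-1)T, jT]; here the caller supplies it as a placeholder.
    """
    def e_x(t):
        # base excitation (9): sum of sinusoids at frequencies omega_i
        return sum(np.sin(w * t) for w in omegas)

    def k_of(j, err_integral):
        # amplitude law (13): k0 on the first period, afterwards
        # proportional to the accumulated tracking error, saturated so
        # that gamma3 <= k(t) <= gamma2
        if j == 0:
            return k0
        return min(gamma1 * err_integral(j), gamma2 - gamma3) + gamma3

    def E_x(t, err_integral):
        # intelligent excitation (12): periodic base signal scaled by k(t)
        j = int(t // T)
        return k_of(j, err_integral) * e_x(t - j * T)

    return E_x

# usage with a toy, geometrically decaying error integral (illustrative only)
Ex = make_intelligent_excitation(omegas=[6.0, 9.0], T=np.pi / 3,
                                 gamma1=10.0, gamma2=1.5, gamma3=0.05, k0=1.0)
toy_integral = lambda j: 0.5 ** j      # stands in for the integral of e'Qe
amp_early = Ex(0.1, toy_integral)      # full amplitude during early periods
amp_late = Ex(20.0, toy_integral)      # excitation has nearly vanished
```

As the toy error integral decays, the amplitude k(t) falls from k₀ toward Γ₃, mirroring how intelligent excitation vanishes once the tracking error converges.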
Restrictions apply.IEEE TRANSACTIONS ON AUTOMATIC CONTROL,VOL.52,NO.8,AUGUST20071529V.S IMULATIONWe consider the reference input dependent on piecewise constantunknown parameters.Consider the SISO system in(1)withA m=01:030:880:2502:96b m=00:0101:05c=1 03x(t)=[3:243310:7432]>;t 42sec[2:16227:1621]>;t>42sec3r(t)=6:18;t 42sec4;t>42sec:The reference signal isr( 3)=0[11]A m0b mr ( 3x)>[11]001:5rand the objective is to design control signal u(t)so that the system output y(t)tracks the reference input r( 3).We construct the adap-tive controller with intelligent excitation using the following parame-ters:Q=diag[100;10],0x=50I222,0r=10,01=10,T= =3, 02=1:5,and03=0.We choose!1=6and!2=9.It can be veri-fied that has full row rank with the chosen!i.Simulation results are given in Fig.3.Fig.3(a)plots the time history of y(t)and the ideal ref-erence signal r( 3).It demonstrates that with the change in unknown parameter 3r(t)the output y(t)converges to r( 3)with the help of in-telligent excitation.The trajectory of k(t),which defines the amplitude of the intelligent excitation,is plotted in Fig.3(b).Fig.3demonstrates the following:1)intelligent excitation vanishes as parameter conver-gence takes place and2)intelligent excitation reinitiates when a change occurs in unknown parameters.VI.C ONCLUSIONIn this note,we augment the traditional MRAC with an intelligent excitation signal to solve the output tracking for a reference input that depends upon the unknown parameters of the system.The main fea-ture of the new technique is that it initiates excitation only when nec-essary.We prove that intelligent excitation is a general technique for the class of problems,in which parameter convergence is needed to meet the control objective.It can also be used to enhance robustness of the adaptive controllers,when parameter drift may cause instability. 
Since intelligent excitation modifies only the reference input, while the proofs of convergence and reinitialization use only the properties of the Lyapunov function, it can be straightforwardly modified for different adaptive controllers, such as backstepping, output feedback, etc.

APPENDIX

Proof of Lemma 1: The transfer function (sI − A_m)⁻¹ b_m k_g can be expressed as (sI − A_m)⁻¹ b_m k_g = n(s)/d(s), where d(s) = det(sI − A_m) is an nth-order polynomial and n(s) is an n × 1 vector with its ith element being the polynomial function

  n_i(s) = Σ_{j=1}^{n} n_{ij} s^{j−1}.    (29)

Consider the matrix N with its entries n_{ij} in (29). We first prove that N is full rank. We note that (A_m, b_m) is controllable. Controllability of (A_m, b_m) for the linear time-invariant (LTI) system in (2) implies that for arbitrary x_t ∈ ℝⁿ, x_m(t₀) = 0, and arbitrary t₁, there exists u(τ), τ ∈ [t₀, t₁], such that x_m(t₁) = x_t. If N is not full rank, then

Fig. 3. Simulation results. (a) Comparison of y(t) and r(θ*). (b) Trajectory of k(t).

there exists a nonzero constant vector ν ∈ ℝⁿ such that νᵀ n(s) = 0. Then νᵀ n(s)/d(s) = νᵀ x_m(s)/r̂(s) = 0, which implies that νᵀ x_m(s) = 0. If x_m(t₀) = 0, then for any u(τ), τ > t₀, we have νᵀ x_m(τ) = 0, ∀τ > t₀. This contradicts x_m(t₁) = x_t, in which x_t ∈ ℝⁿ is assumed to be an arbitrary point. Therefore, N must be full rank.

Since N is full rank, it follows that n(s) contains n linearly independent polynomials. We can rewrite the transfer function in (7) as H(s) = (1/d(s))[d(s) n₁(s) ⋯ n_n(s)]ᵀ. Since d(s) is an nth-order polynomial and each n_i(s) is an (n−1)th-order polynomial, d(s) and the n_i(s) are linearly independent. Hence, H(s) contains n + 1 linearly independent functions of s, and, therefore, there exist ω₁, …, ω_m which ensure that Ω has full row rank.

Proof of Lemma 2: The proof of the first statement is straightforward. If v = 0, then the optimization set is defined by V(Ξ_c, θ*) = 0.
This is indeed a nonempty set, since the points of the hyperspace in ℝ^{3n+1} × Θ defined via the conditions x = x_m, θ = θ* satisfy V(Ξ_c, θ*) = 0. Notice that the computation of g_Ξ(Ξ_c, k_c, θ*) in (16) is done by starting the integration from Ξ(jT) = Ξ_c.
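The interplay of the plant (1), reference model (2), controller (3), and adaptive laws (4) can be illustrated with a minimal forward-Euler simulation for a constant reference input. The projection operator (5) is omitted for brevity, and all numerical values below are illustrative assumptions rather than the note's simulation settings; the sketch only shows the Barbalat-type convergence of the tracking error.

```python
import numpy as np

# illustrative values (not the paper's): stable A_m, unknown theta_x*, lambda_r*
Am = np.array([[-1.0, 0.5], [0.0, -2.0]])
bm = np.array([0.0, 1.0])
c = np.array([1.0, 0.0])
theta_star = np.array([0.8, -0.4])            # unknown theta_x*
lam_star = 2.0                                # unknown lambda_r*, sign known (+)
kg = 1.0 / (c @ np.linalg.solve(-Am, bm))     # kg = lim_{s->0} 1/(c'(sI-Am)^{-1} bm)

# P solves Am'P + P Am = -Q via the Kronecker/vec identity
n = 2
Q = np.eye(n)
K = np.kron(np.eye(n), Am.T) + np.kron(Am.T, np.eye(n))
P = np.linalg.solve(K, -Q.flatten()).reshape(n, n)

dt, r_bar = 1e-3, 1.0                         # step size, constant reference
x = np.zeros(n); xm = np.zeros(n)
theta_x = np.zeros(n); theta_r = 0.0
Gx, Gr = np.eye(n) * 10.0, 10.0               # adaptation rates (illustrative)

for _ in range(int(60 / dt)):
    e = x - xm
    u = theta_x @ x + theta_r * kg * r_bar                          # controller (3)
    x = x + dt * (Am @ x + bm * (u - theta_star @ x) / lam_star)    # plant (1)
    xm = xm + dt * (Am @ xm + bm * kg * r_bar)                      # ref. model (2)
    s = float(e @ P @ bm) * np.sign(lam_star)
    theta_x = theta_x + dt * (Gx @ (-x * s))                        # adaptive law (4)
    theta_r = theta_r + dt * (Gr * (-kg * r_bar * s))

y = float(c @ x)                       # converges toward r_bar
err_norm = float(np.linalg.norm(x - xm))
```

For a constant reference the tracking error vanishes even without parameter convergence, which is precisely why the note augments r̂(t) with the intelligent excitation signal when the reference itself depends on θ*.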
Advanced Control Systems

Advanced control systems play a crucial role in modern engineering and technology, enabling precise and efficient control of complex systems across various industries. From aerospace and automotive to manufacturing and robotics, the application of advanced control systems has revolutionized the way we design, operate, and optimize processes and machinery. In this discussion, we will explore the significance of advanced control systems, their key components, challenges, and future prospects from multiple perspectives.

From an engineering standpoint, advanced control systems encompass a wide range of methodologies and techniques aimed at regulating the behavior of dynamic systems. These systems can be as simple as a thermostat controlling room temperature or as complex as a self-driving car navigating through traffic. One of the fundamental components of advanced control systems is the use of mathematical models to describe the dynamics of the system and develop control algorithms. These algorithms can be implemented in hardware or software, utilizing sensors and actuators to measure and manipulate the system's behavior in real time.

In the field of aerospace, advanced control systems are instrumental in ensuring the stability and maneuverability of aircraft and spacecraft. Flight control systems utilize a combination of autopilots, gyroscopes, and control surfaces to maintain stability and respond to pilot commands. With the advent of unmanned aerial vehicles (UAVs), advanced control systems have become even more critical in enabling autonomous flight and navigation, opening up new possibilities for surveillance, delivery, and exploration.

In the automotive industry, advanced control systems have revolutionized vehicle dynamics and safety. Electronic stability control (ESC) systems use sensors to detect and prevent skidding and loss of traction, enhancing the overall safety of vehicles.
Moreover, the development of autonomous vehicles relies heavily on advanced control systems, enabling cars to perceive their environment, make decisions, and navigate without human intervention. The integration of sensors, actuators, and control algorithms in modern vehicles represents a significant leap forward in the quest for safer and more efficient transportation.

The manufacturing sector has also benefited significantly from advanced control systems, particularly in the realm of robotics and automation. Industrial robots equipped with advanced control systems can perform a wide array of tasks with precision and repeatability, ranging from assembly and welding to painting and inspection. The seamless integration of robots into manufacturing processes has not only improved efficiency but also created new opportunities for customization and flexibility in production lines.

Despite the numerous advantages offered by advanced control systems, several challenges and considerations must be addressed to ensure their effective implementation and operation. One of the primary concerns is the robustness and reliability of control algorithms, especially in safety-critical applications such as autonomous vehicles and medical devices. The need to account for uncertainties, disturbances, and unforeseen events poses a significant challenge in the design and validation of advanced control systems.

Another critical aspect is the ethical and societal implications of advanced control systems, particularly in the context of autonomous technologies. The deployment of autonomous vehicles, for instance, raises questions regarding liability, decision-making algorithms, and the impact on traditional modes of transportation. Furthermore, the potential displacement of human workers in various industries due to automation calls for a thoughtful and inclusive approach to the adoption of advanced control systems.
Looking ahead, the future of advanced control systems holds immense potential for further innovation and integration across diverse domains. The emergence of cyber-physical systems, enabled by the Internet of Things (IoT) and cloud computing, presents new opportunities for interconnected and intelligent control systems. The ability to collect and analyze vast amounts of data in real time opens up avenues for adaptive and predictive control strategies, enhancing performance and resilience in dynamic environments.

In conclusion, advanced control systems represent a cornerstone of modern engineering and technology, driving advancements in aerospace, automotive, manufacturing, and beyond. The convergence of mathematical modeling, sensors, actuators, and computing has paved the way for unprecedented levels of precision, efficiency, and autonomy in controlling complex systems. As we continue to navigate the opportunities and challenges associated with advanced control systems, it is essential to prioritize safety, ethics, and inclusive innovation to realize their full potential in shaping the future of technology and society.
adaptive control

[Block diagram: Desired Performance → Comparison → Decision; Performance Measurement feeds the Adaptation Mechanism, which drives the Adaptation Scheme]
Adaptive Control – Landau, Lozano, M’Saad, Karimi
Adaptive Control versus Conventional Feedback Control
[Block diagram: a plant with input u and output y, driven by a reference controller; an adaptation scheme compares measured performance against the desired performance and adjusts the controller parameters]
An adaptive control structure
Remark: an adaptive control system is nonlinear, since the controller parameters depend on u and y
Conventional Control – Adaptive Control – Robust Control
Conventional versus Adaptive
Conventional versus Robust
Conceptual Structures
[Diagram: Desired Performance and Plant Model feed the Controller Design Method]
adaptive control

Adaptive control can help deliver both stability and good response. The approach changes the control algorithm coefficients in real time to compensate for variations in the environment or in the system itself. In general, the controller periodically monitors the system transfer function and then modifies the control algorithm. It does so by simultaneously learning about the process while controlling its behavior. The goal is to make the controller robust to a point where the performance of the complete system is as insensitive as possible to modeling errors and to changes in the environment.
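As a concrete sketch of such an adjustment loop, the snippet below adapts a single feedforward gain with the classic MIT rule against a first-order reference model. The plant, gains, and square-wave command are illustrative assumptions for this sketch, not a specific algorithm from the text.

```python
# Model-reference adaptive control of a first-order plant using the
# MIT rule.  The plant gain k is "unknown" to the controller; the
# feedforward gain theta is adapted online so the plant output tracks
# the reference model.  All numerical values are illustrative.

def simulate(k=2.0, gamma=0.5, dt=0.01, t_end=200.0):
    y = ym = theta = 0.0
    t = 0.0
    while t < t_end:
        uc = 1.0 if (t % 20.0) < 10.0 else -1.0   # square-wave command
        u = theta * uc                             # adjustable controller
        y += dt * (-y + k * u)                     # plant:  y' = -y + k*u
        ym += dt * (-ym + uc)                      # model:  ym' = -ym + uc
        e = y - ym                                 # tracking error
        theta += dt * (-gamma * e * ym)            # MIT-rule adaptation
        t += dt
    return theta

print(round(simulate(), 3))   # theta approaches the ideal gain 1/k = 0.5
```

With the error driven to zero, the adapted gain settles where the controlled plant matches the reference model exactly; larger adaptation gains speed convergence but can destabilize the loop, which is the classic caveat of the MIT rule.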
Adaptive Control
The most recent class of control techniques to come into use is collectively referred to as adaptive control. Although the basic algorithms have been known for decades, they were not applied in many applications because they are calculation-intensive. However, the advent of special-purpose digital signal processor (DSP) chips has brought renewed interest in adaptive-control techniques, because DSP chips contain hardware that can implement adaptive algorithms directly, thus speeding up calculations.
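A representative calculation-heavy adaptive algorithm of the kind DSP chips accelerate is the least-mean-squares (LMS) filter, which performs a multiply-accumulate update on every sample. The sketch below identifies a hypothetical 3-tap channel; it is a generic textbook example, not tied to any particular DSP part.

```python
import random

# Least-mean-squares (LMS) adaptive FIR filter identifying an unknown
# 3-tap channel from input/output samples -- the per-sample update a
# DSP chip would execute in hardware.  Channel taps are illustrative.

unknown = [0.5, -0.3, 0.2]        # "true" channel taps (assumed)
w = [0.0, 0.0, 0.0]               # adaptive filter weights
mu = 0.05                         # step size

random.seed(1)
x_hist = [0.0, 0.0, 0.0]          # most recent inputs, newest first
for _ in range(5000):
    x = random.uniform(-1, 1)
    x_hist = [x] + x_hist[:2]
    d = sum(h * xi for h, xi in zip(unknown, x_hist))    # desired output
    y = sum(wi * xi for wi, xi in zip(w, x_hist))        # filter output
    e = d - y                                            # error signal
    w = [wi + mu * e * xi for wi, xi in zip(w, x_hist)]  # LMS update

print([round(wi, 2) for wi in w])   # → [0.5, -0.3, 0.2]
```

Each iteration costs only a handful of multiply-accumulates per tap, which is exactly the operation DSP hardware pipelines.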
Adaptive Control (Åström), Lecture 1

stances. Any alteration in structure or function of an organism to make it better fitted to survive and multiply in its environment. Change in response of sensory organs to changed environmental conditions. A slow, usually unconscious, modification of individual and social activity in adjustment to cultural surroundings. Learn: to acquire knowledge or skill by study, instruction, or experience. Problem: Adaptation and feedback?
© K. J. Åström and B. Wittenmark
Dual Control
[Block diagram: command uc → nonlinear control law → control u → process → output y]
The Adaptive Control Problem
Principles: Certainty Equivalence, Caution, Dual Control
Controller structure: Linear or Nonlinear; State Model or Input-Output Model
Control Design Method
Parameter Adjustment Method
Specifications: Situation dependent? Optimality
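The certainty-equivalence principle listed here can be sketched in a few lines: estimate the plant parameters online and compute the control as if the estimates were exact. The first-order plant, recursive least-squares estimator, and set-point below are illustrative assumptions. Note that without extra excitation the output can reach the set-point even though the parameter estimates need not converge to their true values, which is precisely where caution and dual control enter.

```python
# Certainty-equivalence self-tuning regulator sketch: recursive least
# squares (RLS) estimates the unknown plant y[t+1] = a*y[t] + b*u[t],
# and the controller uses the estimates as if they were true values.
# Plant parameters, initial guesses, and set-point are illustrative.

a_true, b_true = 0.8, 0.5          # unknown to the controller
a_hat, b_hat = 0.0, 1.0            # initial estimates
P = [[100.0, 0.0], [0.0, 100.0]]   # RLS covariance matrix
y, r = 0.0, 1.0                    # plant output and set-point

for t in range(200):
    u = (r - a_hat * y) / b_hat            # certainty-equivalence control
    y_next = a_true * y + b_true * u       # true plant response
    # --- RLS update with regressor phi = [y, u] ---
    phi = [y, u]
    Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
            P[1][0] * phi[0] + P[1][1] * phi[1]]
    denom = 1.0 + phi[0] * Pphi[0] + phi[1] * Pphi[1]
    K = [Pphi[0] / denom, Pphi[1] / denom]           # RLS gain vector
    err = y_next - (a_hat * phi[0] + b_hat * phi[1]) # prediction error
    a_hat += K[0] * err
    b_hat += K[1] * err
    P = [[P[i][j] - K[i] * Pphi[j] for j in range(2)] for i in range(2)]
    y = y_next

print(round(y, 3))   # output driven to the set-point r = 1.0
```

The estimator treats every prediction error as pure information (no caution), and the controller injects no probing of its own (no dual effect); both simplifications are what the certainty-equivalence principle buys.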
Siemens PXC Compact Series Controller Specification Sheet

Technical Specification Sheet
Document No. 149-454
July 1, 2013
Siemens Industry, Inc. Page 1 of 8

PXC Compact Series

Figure 1. PXC Compact Series Controllers (PXC-24 and PXC-36 shown.)

Description
The PXC Compact Series (Programmable Controller – Compact) is a high-performance Direct Digital Control (DDC) supervisory equipment controller, which is an integral part of the APOGEE® Automation System.
The PXC Compact Series offers integrated I/O based on state-of-the-art TX-I/O™ Technology, which provides superior flexibility of point and signal types, and makes it an optimal solution for Air Handling Unit (AHU) control. The PXC Compact operates stand-alone or networked to perform complex control, monitoring, and energy management functions without relying on a higher-level processor.
The PXC Compact Series communicates with other field panels or workstations on a peer-to-peer Automation Level Network (ALN) and supports the following communication options:
∙ Ethernet TCP/IP
∙ P2 RS-485
The PXC Compact is available with 16, 24, or 36 point terminations.
Selected models in the Compact Series provide the following options:
∙ Support for FLN devices.
∙ An extended temperature range for the control of rooftop devices.
∙ Support for Island Bus, which uses TX-I/O modules to expand the number of point terminations.

Features
∙ DIN rail mounted device with removable terminal blocks simplifies installation and servicing.
∙ Proven program sequences to match equipment control applications.
∙ Built-in energy management applications and DDC programs for complete facility management.
∙ Comprehensive alarm management, historical data trend collection, operator control, and monitoring functions.
∙ Sophisticated Adaptive Control, a closed loop control algorithm that auto-adjusts to compensate for load/seasonal changes.
∙ Message control for terminals, printers, pagers, and workstations.
∙ Highly configurable I/O using Siemens state-of-the-art TX-I/O™ Technology.
∙ HMI RS-232 port, which provides laptop connectivity for local operation and engineering.
∙ Extended battery backup of Real Time Clock.
∙ Persistent database backup and restore within the controller.
∙ Optional HOA (Hand/Off/Auto) module for swappable and configurable HOA capability.
∙ Optional extended temperature range for rooftop installation.
∙ Optional peer-to-peer communications over industry-standard 10Base-T/100Base-TX Ethernet networks.
∙ Optional support for FLN devices.
∙ Optional support for P1 Wireless FLN.
∙ Optional operation as a P1 FLN device with default applications.
∙ Optional support for Virtual AEM.
∙ PXM10T and PXM10S support: Optional LCD local user interface with HOA (Hand/Off/Auto) capability and point commanding and monitoring features.

The Compact Series
In addition to building and system management functions, the Compact Series includes several styles of controllers that flexibly meet application needs.

PXC-16
The PXC-16 provides control of 16 points, including 8 software-configurable universal points. Point count includes: 3 Universal Input (UI), 5 Universal I/O (U), 2 Digital Input (DI), 3 Analog Output (AOV), and 3 Digital Output (DO).

PXC-24
The PXC-24 provides control of 24 points, including 16 software-configurable universal points. Point count includes: 3 Universal Input (UI), 9 Universal I/O (U), 4 Super Universal I/O (X), 3 Analog Output (AOV), 5 Digital Output (DO).

PXC-36
The PXC-36 provides control of 36 local points, including 24 software-configurable universal points. Point count includes: 18 Universal I/O (U), 6 Super Universal I/O (X), 4 Digital Input (DI), and 8 Digital Output (DO).
The PXC-36 offers the flexibility of expanding the total point count through a self-forming island bus. With the addition of a TX-I/O Power Supply, up to 4 TX-I/O modules can be supported. For more information, see the TX-I/O Product Range Technical Specification Sheet (149-476).

Available Options
The following options are available to match the application:

Ethernet or RS-485 ALN
Support for APOGEE P2 ALN through TCP/IP or RS-485 networks.

FLN Support
∙ The PXC-24 “F32” models support up to 32 P1 FLN devices when the ALN is connected to TCP/IP.
∙ The PXC-24 “F” models with an FLN license support up to 32 P1 FLN devices when the ALN is connected to TCP/IP.
∙ The PXC-36 with an FLN license supports up to 96 P1 FLN devices when the ALN is connected to RS-485 or TCP/IP.
∙ A Wireless FLN may also be used to replace the traditional P1 FLN cabling with wireless communication links that form a wireless mesh network. Additional hardware is required to implement the Wireless FLN.
For more information about FLN support, contact your local Siemens Industry representative.

P1 FLN Operation
The PXC-16 and PXC-24 can be configured as a programmable P1 FLN device.
In the P1 FLN mode, the PXC Compact functions as an equipment controller with customized programming and default applications.

Virtual AEM Support
The Virtual AEM license allows the PXC Compact to connect an RS-485 APOGEE Automation Level Network or individual field panels to a P2 Ethernet network without additional hardware.

Extended Temperature Operation
The "R" models of the PXC Compact Series support extended temperature operation, allowing for rooftop installations.

Field Panel GO
The PXC-36 supports Field Panel GO. The Field Panel GO license provides a Web-based user interface for your APOGEE® Building Automation System. It is an ideal solution for small or remote facilities with field panels on an Ethernet Automation Level Network (ALN).

Hardware
The PXC Compact Series consists of the following major components:
∙ Input/Output Points
∙ Power Supply
∙ Controller Processor

Input/Output Points
∙ The PXC Compact input/output points perform A/D or D/A conversion, signal processing, point command output, and communication with the controller processor. The terminal blocks are removable for easy termination of field wiring.
∙ The Universal and Super Universal points leverage TX-I/O™ Technology from Siemens Industry to configure an extensive variety of point types.
∙ Universal Input (UI) and Universal Input/Output (U) points are software-selectable to be:
  - 0-10V input
  - 4-20 mA input
  - Digital Input
  - Pulse Accumulator inputs
  - 1K Ni RTD @ 32°F (Siemens, Johnson Controls, DIN Standard)
  - 1K Pt RTD (375 or 385 alpha) @ 32°F
  - 10K NTC Thermistor (Type 2 and Type 3) @ 77°F
  - 100K NTC Thermistor (Type 2) @ 77°F
  - 0-10V Analog Output (Universal Input/Output (U) points only)
∙ Super Universal (X) points (PXC-24 and PXC-36 only) are software-selectable to be:
  - 0-10V input
  - 4-20 mA input
  - Digital Input
  - Pulse Accumulator inputs
  - 1K Ni RTD @ 32°F (Siemens, Johnson Controls, DIN Standard)
  - 1K Pt RTD (375 or 385 alpha) @ 32°F
  - 10K NTC Thermistor (Type 2 and Type 3) @ 77°F
  - 100K NTC Thermistor (Type 2) @ 77°F
  - 0-10V Analog Output
  - 4-20 mA Analog Output
  - Digital Output (using external relay)
∙ Dedicated Digital Input (DI) points (PXC-16 and PXC-36 only) are dry contact status sensing.
∙ Digital Output (DO) points are 110/220V 4 Amp (resistive) Form C relays; LEDs indicate the status of each point.
∙ All PXC Compact Series models support 0-10 Vdc Voltage Analog Output circuits.
∙ On PXC-24 and PXC-36 models, the Super Universal circuits may be defined as 4-20 mA current AO.

Power Supply
∙ The 24 volt DC power supply provides regulated power to the input/output points and active sensors. The power supply is internal to the PXC Compact housing, eliminating the need for an external power supply and simplifying installation and troubleshooting.
∙ The power supply works with the processor to ensure smooth power up and power down sequences for the equipment controlled by the I/O points, even through brownout conditions.
Controller Processor
∙ The PXC Compact Series includes a microprocessor-based multi-tasking platform for program execution and communications with the I/O points and with other PXC Compacts and field panels over the ALN.
∙ A Human Machine Interface (HMI) port, with a quick-connect phone jack (RJ-45), uses RS-232 protocol to support operator devices (such as a local user interface or simple CRT terminal), and a phone modem for dial-in service capability.
∙ A USB Device port supports a generic serial interface for an HMI or Tool connection.
∙ The program and database information stored in the PXC Compact RAM memory is battery-backed. This eliminates the need for time-consuming program and database re-entry in the event of an extended power failure.
∙ The firmware, which includes the operating system, is stored in non-volatile flash ROM memory; this enables firmware upgrades in the field.
∙ Brownout protection and power recovery circuitry protect the controller board from power fluctuations.
∙ LEDs provide instant visual indication of overall operation, network communication, and low battery warning.

Programmable Control with Application Flexibility
The PXC Compact Series of high performance controllers provides complete flexibility, which allows the owner to customize each controller with the exact program for the application. The control program for each PXC Compact is customized to exactly match the application. Proven Powers Process Control Language (PPCL), a text-based programming structure like BASIC, provides direct digital control and energy management sequences to precisely control equipment and optimize energy usage.

Global Information Access
The HMI port supports operator devices, such as a local user interface or simple CRT terminal, and a phone modem for dial-in service capability. Devices connected to the operator terminal port gain global information access.

Multiple Operator Access
Multiple operators can access the network simultaneously.
Multiple operator access ensures that alarms are reported to an alarm printer while an operator accesses information from a local terminal. When using the Ethernet TCP/IP ALN option, multiple operators may also access the controller through concurrent Telnet sessions and/or local operator terminal ports.

Menu Prompted, English Language Operator Interface
The PXC Compact field panel includes a simple, yet powerful, menu-driven English Language Operator Interface that provides, among other things:
∙ Point monitoring and display
∙ Point commanding
∙ Historical trend collection and display for multiple points
∙ Event scheduling
∙ Program editing and modification via Powers Process Control Language (PPCL)
∙ Alarm reporting and acknowledgment
∙ Continual display of dynamic information

Built-in Direct Digital Control Routines
The PXC Compact provides stand-alone Direct Digital Control (DDC) to deliver precise HVAC control and comprehensive information about system operation. The controller receives information from sensors in the building, processes the information, and directly controls the equipment. The following functions are available:
∙ Adaptive Control, an auto-adjusting closed loop control algorithm, which provides more efficient, adaptive, robust, fast, and stable control than the traditional PID control algorithm.
It is superior in terms of response time and holding steady state, and at minimizing error, oscillations, and actuator repositioning.
∙ Closed Loop Proportional, Integral and Derivative (PID) control.
∙ Logical sequencing.
∙ Alarm detection and reporting.
∙ Reset schedules.

Built-in Energy Management Applications
The following applications are programmed in the PXC Compact Series and require simple parameter input for implementation:
∙ Automatic Daylight Saving Time switchover
∙ Calendar-based scheduling
∙ Duty cycling
∙ Economizer control
∙ Equipment scheduling, optimization and sequencing
∙ Event scheduling
∙ Holiday scheduling
∙ Night setback control
∙ Peak Demand Limiting (PDL)
∙ Start-Stop Time Optimization (SSTO)
∙ Temperature-compensated duty cycling
∙ Temporary schedule override

Specifications

Dimensions (L × W × D)
PXC-16 and PXC-24: 10.7 in. × 5.9 in. × 2.45 in. (272 mm × 150 mm × 62 mm)
PXC-36: 11.5 in. × 5.9 in. × 3.0 in. (293 mm × 150 mm × 77 mm)

Processor, Battery, and Memory
Processor and Clock Speed
PXC-16 and PXC-24: Motorola MPC852T, 100 MHz
PXC-36: Motorola MPC885, 133 MHz
Memory
PXC-16 and PXC-24: 24 MB (16 MB SDRAM, 8 MB Flash ROM)
PXC-36: 80 MB (64 MB SDRAM, 16 MB Flash ROM)
Battery backup of Synchronous Dynamic (SD) RAM (field replaceable)
Non-rooftop Models: 60 days (accumulated), AA (LR6) 1.5 Volt Alkaline (non-rechargeable)
Rooftop (Extended Temperature) Models: 90 days (accumulated), AA (LR6) 3.6 Volt Lithium (non-rechargeable)
Battery backup of Real Time Clock
Non-rooftop Models: 10 years
Rooftop (Extended Temperature) Models: 18 months

Communication
A/D Resolution (analog in): 16 bits
D/A Resolution (analog out): 10 bits
Ethernet/IP Automation Level Network (ALN): 10Base-T or 100Base-TX compliant
RS-485 Automation Level Network (ALN): 1200 bps to 115.2 Kbps
RS-485 P1 Field Level Network (FLN) on selected models, license required: 4800 bps to 38.4 Kbps
Human-Machine Interface (HMI): RS-232 compliant, 1200 bps to 115.2 Kbps
USB Device port (for non-smoke control applications only): Standard 1.1 and 2.0 USB device port, Type B female connector
USB Host port on selected models (for ancillary smoke control applications only): Standard 1.1 and 2.0 USB host port, Type A female connector

Electrical
Power Requirements: 24 Vac ±20% input @ 50/60 Hz
Power Consumption (Maximum)
PXC-16: 18 VA @ 24 Vac
PXC-24: 20 VA @ 24 Vac
PXC-36: 35 VA @ 24 Vac
AC Power and Digital Outputs: NEC Class 1 Power Limited
Communication and all other I/O: NEC Class 2
Digital Input: Contact Closure Sensing; Dry Contact/Potential Free inputs only; does not support counter inputs
Digital Output: Class 1 Relay
Analog Output: 0 to 10 Vdc

Universal Input (UI) and Universal Input/Output (U)
Analog Input: Voltage (0-10 Vdc); Current (4-20 mA); 1K Ni RTD @ 32°F; 1K Pt RTD (375 or 385 alpha) @ 32°F; 10K NTC Type 2 or Type 3 Thermistor @ 77°F; 100K NTC Type 2 Thermistor @ 77°F
Digital Input: Pulse Accumulator; Contact Closure Sensing; Dry Contact/Potential Free inputs only; supports counter inputs up to 20 Hz
Analog Output (Universal Input/Output (U) points only): Voltage (0-10 Vdc)

Super Universal (X)
Analog Input: Voltage (0-10 Vdc); Current (4-20 mA); 1K Ni RTD @ 32°F; 1K Pt RTD (375 or 385 alpha) @ 32°F; 10K NTC Type 2 or Type 3 Thermistor @ 77°F; 100K NTC Type 2 Thermistor @ 77°F
Digital Input: Pulse Accumulator; Contact Closure Sensing; Dry Contact/Potential Free inputs only; supports counter inputs up to 20 Hz
Analog Output: Voltage (0-10 Vdc); Current (4-20 mA)
Digital Output (requires an external relay): 0 to 24 Vdc, 22 mA max.
Operating Environment
Ambient operating temperature: 32°F to 122°F (0°C to 50°C)
Ambient operating temperature with rooftop (extended temperature) option: -40°F to 158°F (-40°C to 70°C)
Relative Humidity
PXC-16 and PXC-24: 5% to 95%, non-condensing
PXC-36: 5% to 95%, non-condensing

Mounting Surface
PXC-16 and PXC-24: Direct equipment mount, building wall, or structural member
PXC-36: Building wall or a secure structure

Agency Listings
UL
UL864 UUKL (except rooftop models)
UL864 UUKL7 (except rooftop models)
CAN/ULC-S527-M8 (except rooftop models)
UL916 PAZX (all models)
UL916 PAZX7 (all models)

Agency Compliance
FCC Compliance
Australian EMC Framework
European EMC Directive (CE)
European Low Voltage Directive (LVD)
OSHPD Seismic Certification
Product meets OSHPD Special Seismic Preapproval certification (OSH-0217-10) under California Building Code 2010 (CBC2010) and International Building Code 2009 (IBC2009) when installed within the following Siemens enclosure part numbers: PXA-ENC18, PXA-ENC19, or PXA-ENC34.

Ordering Information

PXC Compact Series
Product Number – Description
PXC16.2-P.A – PXC Compact, 16 point, RS-485 ALN
PXC16.2-PE.A – PXC Compact, 16 point, Ethernet/IP ALN
PXC24.2-P.A – PXC Compact, 24 point, RS-485 ALN
PXC24.2-PE.A – PXC Compact, 24 point, Ethernet/IP ALN
PXC24.2-PR.A – PXC Compact, 24 point, RS-485 ALN, rooftop option
PXC24.2-PER.A – PXC Compact, 24 point, Ethernet/IP ALN, rooftop option
PXC24.2-PEF.A – PXC Compact, 24 point, Ethernet/IP or RS-485 ALN. P1 FLN or Remote Ethernet/IP (Virtual AEM) option.
PXC24.2-PEF32.A – PXC Compact, 24 point, Ethernet/IP or RS-485 ALN. P1 FLN enabled
PXC24.2-PERF.A – PXC Compact, 24 point, Ethernet/IP or RS-485 ALN, rooftop option. P1 FLN or Remote Ethernet/IP (Virtual AEM) option.
PXC36-PE.A – PXC Compact, 36 point, Ethernet/IP or RS-485 ALN.
PXC36-PEF.A – PXC Compact, 36 point, Ethernet/IP or RS-485 ALN, Island Bus, P1 FLN.

Optional Licenses
Product Number – Description
LSM-FLN – License to enable FLN support on PXC-16 or PXC-24 “F” models
LSM-VAEM – License to enable Virtual AEM support when the ALN is connected to RS-485
LSM-FLN36.A – License to enable FLN support on model PXC36-PE.A
LSM-FPGO – License to enable Field Panel GO on models PXC36-PE.A and PXC36-PEF.A
LSM-IB36.A – License to enable the Island Bus on model PXC36-PE.A
LSM-36.A – License to enable both FLN and Island Bus support on model PXC36-PE.A

Accessories
Product Number – Description
PXM10S – Controller mounted Operator Display module with point monitor and optional blue backlight
PXM10T – Controller mounted Operator Display module
PXA8-M – 8-switch HOA (UL 864)
PXA16-M – 16-switch HOA (UL 864)
PXA16-MR – 16-switch HOA (extended temp, UL 916) with HMI cable
PXA-HMI.CABLEP5 – Serial cable required for HOA or PXM10T/S connection to non-rooftop variants of the 16-point and 24-point Compact Series (pack of 5)
TXA1.LLT-P100 – Labels for HOA and TX-I/O Modules, pack of 100, letter format

Service Boxes and Enclosures
Product Number – Description
PXA-SB115V192VA – PX Series Service Box, 115V, 24 Vac, 50/60 Hz, 192 VA
PXA-SB115V384VA – PX Series Service Box, 115V, 24 Vac, 50/60 Hz, 384 VA
PXA-SB230V192VA – PX Series Service Box, 230V, 24 Vac, 50/60 Hz, 192 VA
PXA-SB230V384VA – PX Series Service Box, 230V, 24 Vac, 50/60 Hz, 384 VA
PXA-ENC18 – 18" Enclosure (Utility Cabinet) (UL Listed NEMA Type 1 Enclosure)
PXA-ENC19 – 19" Enclosure (UL Listed NEMA Type 1 Enclosure)
PXA-ENC34 – 34" Enclosure (UL Listed NEMA Type 1 Enclosure)

Documentation
Product Number – Description
553-104 – PXC Compact Series Owner's Manual
125-1896 – Powers Process Control Language (PPCL) User's Manual

Information in this document is based on specifications believed correct at the time of publication. The right is reserved to make changes as design improvements are introduced. APOGEE and Insight are registered trademarks of Siemens Industry, Inc. Other product or company names mentioned herein may be the trademarks of their respective owners. © 2013 Siemens Industry, Inc.
Siemens Industry, Inc., Building Technologies Division, 1000 Deerfield Parkway, Buffalo Grove, IL 60089-4513 USA, +1 847-215-1000
Your feedback is important to us. If you have comments about this document, please send them to ***************************************.
Document No. 149-454. Printed in USA.
Adaptive tracking control of uncertain MIMO nonlinear systems with input constraints

Abstract
In this paper, adaptive tracking control is proposed for a class of uncertain multi-input multi-output nonlinear systems with non-symmetric input constraints. An auxiliary design system is introduced to analyze the effect of the input constraints, and its states are used in the adaptive tracking control design. The spectral radius of the control coefficient matrix is used to relax the nonsingularity assumption on that matrix. Subsequently, the constrained adaptive control is presented, where command filters are adopted to emulate the actuator's physical constraints on the control law and the virtual control laws, and to avoid the tedious analytic computation of the time derivatives of the virtual control laws in the backstepping procedure. Under the proposed control techniques, semi-global uniform ultimate boundedness of the closed-loop system is achieved via Lyapunov synthesis. Finally, simulation studies are presented to illustrate the effectiveness of the proposed adaptive tracking control. © 2011 Elsevier Ltd. All rights reserved.
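A minimal sketch of the command-filter idea: pass the raw (virtual) control command through a saturation and a second-order filter, whose two states provide the magnitude-limited command and its time derivative without analytic differentiation. The filter gains, limits, and step input below are illustrative assumptions; the paper's full scheme also handles rate constraints and the auxiliary design system.

```python
# Simplified command filter: a second-order low-pass filter driven by a
# magnitude-saturated input.  State q1 is the limited command, q2 its
# time derivative -- obtained by integration rather than by analytic
# differentiation of the virtual control law.  All values illustrative.

def saturate(v, lo=-1.0, hi=1.0):
    return max(lo, min(hi, v))

def command_filter(raw_cmd, t_end=5.0, dt=0.001, wn=20.0, zeta=0.9):
    q1 = q2 = 0.0
    t = 0.0
    while t < t_end:
        u = saturate(raw_cmd(t))           # enforce magnitude constraint
        q1 += dt * q2                      # q1' = q2
        q2 += dt * (wn * wn * (u - q1) - 2.0 * zeta * wn * q2)
        t += dt
    return q1, q2

# A step command of 3.0 exceeds the limit; the filter settles at the
# saturated value with zero derivative.
q1, q2 = command_filter(lambda t: 3.0)
print(round(q1, 2), round(q2, 2))
```

Making the filter bandwidth `wn` large keeps the filtered command close to the raw one whenever the constraint is inactive, which is the usual tuning guideline for command-filtered backstepping.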
Reinforcement Learning Based Obstacle Avoidance for Autonomous Underwater Vehicle

Reinforcement Learning Based Obstacle Avoidance for Autonomous Underwater Vehicle
Prashant Bhopale¹, Faruk Kazi¹, Navdeep Singh¹
Received: 24 September 2017 / Accepted: 19 March 2018 / Published online: 8 April 2019
© Harbin Engineering University and Springer-Verlag GmbH Germany, part of Springer Nature 2019

Abstract
Obstacle avoidance becomes a very challenging task for an autonomous underwater vehicle (AUV) in an unknown underwater environment during the exploration process. Successful control in such cases may be achieved using model-based classical control techniques like PID and MPC, but these require an accurate mathematical model of the AUV and may fail due to parametric uncertainties, disturbance, or plant-model mismatch. On the other hand, a model-free reinforcement learning (RL) algorithm can be designed using the actual behavior of the AUV plant in an unknown environment, and the learned control may not be affected by model uncertainties the way a classical control approach is. Unlike model-based control, a model-free RL-based controller does not require manual retuning as the environment changes. A standard one-step Q-learning based control can be utilized for obstacle avoidance, but it has a tendency to explore all possible actions at a given state, which may increase the number of collisions. Hence a modified Q-learning based control approach is proposed to deal with these problems in unknown environments. Furthermore, function approximation using a neural network (NN) is utilized to overcome the continuous-state and large state-space problems which arise in RL-based controller design. The proposed modified Q-learning algorithm is validated using MATLAB simulations by comparing it with the standard Q-learning algorithm for single obstacle avoidance. The same algorithm is also utilized to deal with multiple obstacle avoidance problems.

Keywords: Obstacle avoidance · Autonomous underwater vehicle · Reinforcement learning · Q-learning · Function approximation

1 Introduction
The ocean is a central source of energy, minerals, food, etc. for human beings, hence understanding and exploring the ocean becomes an important task (Council 1996). Such exploration can be carried out by humans themselves using different manned and unmanned vehicles. But sometimes it is not possible for a human being to personally visit hostile areas such as radioactive environments or great depths; in such cases, the autonomous underwater vehicle (AUV) plays a vital role in achieving such tasks. AUVs are unmanned underwater vehicles used in the commercial, military, scientific, and private sectors, designed to explore underwater areas and perform different missions like pipeline monitoring (Russell et al. 2014). AUVs are the most suitable candidate for exploration of extreme environments due to their ability to operate autonomously (Fossen 2011). In such operations, AUVs are supposed to maneuver on their own as per the programmed mission, but such missions are prone to fail due to unknown obstacles in the AUV's programmed path. Hence, obstacle avoidance becomes a necessary task for the AUV.
Obstacle detection and avoidance can be carried out by designing proper feedback control for the AUV. The controller design process can be classified into two types, namely model-based control and model-free control. Model-based control requires precise computation; the typical classical controller design procedure requires derivation of an exact mathematical model by

Article Highlights
• In order to complete the given task in an unknown environment, the AUV must avoid collisions with obstacles.
• A modified Q-learning-based control is proposed to reduce the number of collisions and compared with standard one-step Q-learning-based control.
• Function approximation is utilized along with RL to deal with continuous states and the large state-space problem.
• The proposed RL-based control is utilized for multiple obstacle avoidance.

* Prashant Bhopale, psbhopale_p14@el.vjti.ac.in
1 Electrical Engineering Department, Veermata Jijabai Technological
Institute, Mumbai 400019, India

Journal of Marine Science and Application (2019) 18:228–238
https:///10.1007/s11804-019-00089-3

careful analysis of process dynamics; using this mathematical model, a control law has to be derived to meet certain design criteria (Su et al. 2013; Qu et al. 2017). Sometimes, reduced-order models are used to design the controller (Bhopale et al. 2017), but this again requires an abstract mathematical model of the plant. Construction of the abstract model may be carried out by a system identification approach, but it may increase the parameter dependency (Hafner and Riedmiller 2014), and if the real-time behavior of the plant differs from the abstract model due to parametric uncertainties, then the controller designed using that model may fail. In such a scenario, robust controllers have been proposed for AUVs in Cheng et al. (2010) and Bhopale et al. (2016), but the performance of robust control is again limited by the assumption of bounded uncertainties. All the methods mentioned above are usually based on a specific environment and the plant's abstract mathematical model, and depend on prior knowledge such as experience and rules. They also lack the self-learning property needed to adapt to various unknown environments. Once there is any change in the task or environment, the corresponding model-based controller needs to be updated manually.
Hence, it is better to incorporate a model-free self-learning approach in designing feedback control for the AUV, since the dependency on the mathematical model and its uncertainties vanishes and the controller is developed entirely from the plant's (in our case, the AUV's) behavior in the unknown environment.
Different model-free control approaches have been proposed in the literature, where the controller learns the plant behavior using a neural network (NN). Reinforcement learning (RL) can be considered a suitable candidate for both self-learning model-based and model-free control approaches. Kober et al. (2013) contains a detailed survey regarding the application of RL in the area of robotics, where AUV applications are also listed. The model-based RL approach utilizes the kinematic model of the AUV, or sometimes the behavior can be summarized into a Gaussian process (GP) model used for long-term prediction. Many researchers have combined other controllers with RL, where the predicted control of a classical controller is used as a known policy and this policy is modified using an actor-critic approach; this is known as the on-policy approach, where a predefined policy is available, but it increases the dependency on the model, whether given or derived using an NN or GP (Paula and Acosta 2015). On the other hand, the off-policy approach does not require knowledge of the AUV's kinetics or dynamics, and the agent learns the entire control law by itself.
Q-learning is an off-policy RL algorithm which can learn from actual plant behavior and can decide its own control command depending on previous experience (Phanthong et al. 2014). The Q-learning process executes in the following sequence: for the present state, the agent selects an action depending on the policy (random or greedy) from the Q-table, which stores state-action value pairs from previous experience; this selected action is then executed on the plant and the generated output is measured; depending on how good or bad the output is, the agent
receives a reward or punishment, and using this reward or punishment the Q-value for the particular state-action pair is updated and stored as experience in the Q-table. At the next step, the same process is executed and the Q-table is updated. In this way, the Q-table is updated iteratively with different state-action pairs over the entire state and action space. At the end of the process, the Q-learning policy has learned the entire control law by itself for the defined state-action space from scratch, without knowing the kinetics or dynamics of the plant. This self-learning policy can be applied to the AUV set-point tracking problem, where the AUV can explore the entire state space by trying each and every state-action pair (the exploration process) to reach the set-point with the objective of maximizing cumulative reward. But if, during such exploration, the AUV comes across an obstacle and takes a random action in order to explore, a collision may occur with the obstacle, causing damage to the AUV. In such cases, some researchers have proposed an auxiliary controller strategy, switching controllers to avoid the obstacle, but again this controller has to be designed from the mathematical model of the AUV which, as stated above, is prone to uncertainties. The curse of dimensionality and the continuous-state problem are further drawbacks faced when the standard one-step Q-learning algorithm is utilized for the AUV.
Hence, as a remedy, a modified Q-learning algorithm is proposed in this paper which does not require an auxiliary controller or a mathematical model of the AUV. In the proposed method, forced exploitation is carried out to deal with the obstacle avoidance problem when the AUV is in the unsafe region. This method is entirely model-free, and furthermore, NN-based function approximation is utilized to deal with the curse of dimensionality and the continuous-state problem. Together, the proposed method removes the dependency on the plant model and deals with the curse of dimensionality and the continuous state-space problem while designing
a self-learning controller for AUV set-point tracking and obstacle avoidance.
The remainder of the paper is organized as follows. In Section 2, a brief introduction to RL and the standard one-step Q-learning algorithm is presented. In Section 3, the main idea of obstacle avoidance for an AUV using RL is proposed with a modified Q-learning algorithm; issues with continuous state space and the curse of dimensionality are highlighted, and a function approximation method using the NN is proposed. In Section 4, the proposed approach is illustrated using simulation results. Finally, Section 5 concludes the paper by discussing the overall results of the proposed approach.

2 Reinforcement Learning (RL)
RL is a standard machine learning method which is used to solve sequential decision problems modeled in the form of Markov decision processes (MDPs) (Powell 2007). Dynamic programming assumes a deterministic system; RL, on the other hand, is approximate dynamic programming, obtaining an optimal control policy when a perfect mathematical model is not available. Hence, RL can be utilized as a model-free approach. In the RL problem, the agent interacting with the environment has to observe a present state s ∈ S of the environment, and, depending on a policy, an action a ∈ A is selected, where the state space S and action space A can be either discrete or continuous sets, and can be single- or multi-dimensional. It is assumed that the state s_t at any time instant t contains all relevant information about the plant's current situation. As shown in Fig. 1, at time instant t the agent observes the state s_t and selects an action a_t from A using the decided policy; this action a_t, which is used to control the states of the system, is executed on the environment; the environment then reacts to the action, and the next state s_{t+1} is generated by the system using action a_t. Now depending
on the state s_{t+1}, a reward r_t is generated for the state-action pair (s_t, a_t). This reward can be designed as a scalar value, as a function of the error between the present state and the destination/target state, or as a combination of both. The main goal of RL is to find a policy π that maximizes the cumulative expected reward for the action a in a given state s. The desired policy π can be deterministic or stochastic. We can say that a is a sample from a distribution over actions for the encountered state, a ~ π(s, a) = P(a | s). Reward functions are commonly designed as a function of the present state, i.e., r = r(s_t); of the current state-action pair, i.e., r = r(s_t, a_t); or of the transition from one state to the next, i.e., r = r(s_t, a_t, s_{t+1}). The RL agent is expected to discover the relations between available states, available actions, and earned rewards from experience or from the best available choices. Hence, an understanding of exploration and exploitation is necessary to design an RL agent.

2.1 Exploration vs. Exploitation

From the agent's point of view, the environment may be static or dynamic; hence, the agent has to try different actions randomly, receive rewards, and keep learning on a trial-and-error basis. This process is known as exploration. When the agent chooses the best action from learned experience, minimizing the cost of learning, it is called exploitation. If the agent has very little experience in a large-dimensional space, then selecting the best actions based on the current (insufficient) learned experience is not preferable, because better alternative actions may exist that have never been explored; hence, sufficient exploration has to be done to learn the globally optimal solution. A question then arises: how much and when to explore, and how much and when to exploit? Too much exploration can cost more in terms of performance and stability when online implementation is necessary. A greedy policy can be used in such cases, where the
exploration rate is high when the agent starts learning; as experience is gained, the exploitation rate increases and the exploration rate decreases gradually to reach the optimal solution. This method can be used to deal with the exploration-exploitation trade-off.

2.2 Q-learning

The Q-learning algorithm is a model-free, off-policy RL technique that uses the temporal-difference learning approach. It can be proved that if sufficient training and experience are given to the RL agent under any soft policy, then the algorithm converges, with probability 1, to a close approximation of the action-value function for an arbitrary target policy. The optimal policy can be learned by Q-learning under both highly exploratory and random policies. The state-action value is updated in Q-learning as

  Q(s_t, a_t) ← Q(s_t, a_t) + α [ r_{t+1} + γ max_a Q(s_{t+1}, a) − Q(s_t, a_t) ]    (1)

The parameters in the Q-value update process are as follows:
- α is the learning rate; it can be set between 0 and 1. If α is set to 0, no learning is carried out and the Q-values are never updated; if α is set to 0.9, learning can occur very quickly.
- γ is the discount factor; it can take any value between 0 and 1. This factor determines how much future rewards are worth relative to immediate ones.
- ε: if ε = 1, pure exploration is carried out, and if ε = 0, pure exploitation. Hence ε is normally set to a small positive value between 0 and 1. It is the probability deciding the policy-selection factor B̂: when B̂ = 1, exploitation is carried out; otherwise exploration continues.

In a scenario where each action is executed in every state a huge number of times, and if the learning rate α is decayed appropriately with an increasing number of trials, the Q-values converge to the optimal value Q* with probability 1 (Watkins and Dayan 1992).

Fig. 1  Reinforcement learning mechanism in which the learning agent interacts with an environment

In this case, Q directly approximates the optimal value Q*, independent of the policy
being followed. This approximation simplifies the analysis of the algorithm and enabled early convergence proofs. However, the policy still determines which state-action pairs are visited and which value functions are updated; hence it remains mandatory to visit all state-action pairs. If proper exploration-exploitation is carried out, then under this assumption Q_t converges to Q* with probability 1. The one-step Q-learning algorithm (Sutton and Barto 1998) is shown below:

Algorithm 1  One-step Q-learning algorithm
  Initialize Q(s, a) arbitrarily
  repeat (for each episode):
    Initialize s
    repeat (for each step of the episode):
      Choose a for s using the policy derived from Q
      Take action a, observe r and s'
      Update Q(s, a) ← Q(s, a) + α [ r + γ max_{a'} Q(s', a') − Q(s, a) ]
    until s is terminal
  until all episodes end

2.3 Q-learning in an Unknown Environment

Since Q-learning is a type of RL, it can directly interact with an unknown environment and develop self-learning control without any prior knowledge of the environment. When the environment is unknown, the obstacle avoidance problem of an AUV can be considered a behavior-selection task. In this task, the AUV must automatically produce a correct action for reaching the destination without collision, according to the environment information perceived by the sensors equipped on the AUV. The task of finding the optimal path for the AUV in an unknown environment is shown in Fig. 2. It is assumed that the AUV's sensor system collects information about its own position [x_t, y_t, ψ_t] and about nearby obstacles' positions (represented by gray circles) and dimensions at every time instant t. The black point is the initial position of the AUV, represented by [x_o, y_o, ψ_o], and the green point is the destination point fed to the AUV, represented by [x_d, y_d, ψ_d]. The linear distance between the AUV and the destination point is calculated by

  Δd = √( (x_d − x_t)² + (y_d − y_t)² )

and the angular difference between the AUV's current orientation and the destination's
expected orientation at time t is given by

  Δψ = ψ_d − ψ_t,  Δψ ∈ [−π, +π]

In order to navigate the AUV to its destination point, it is assumed that these variables are always known at each time instant t. An obstacle avoidance task is therefore to obtain the variables Δd and Δψ at each time step t and, based on them, determine a state-action mapping until the goal is achieved. As stated in the section above, Q-learning is a self-learning approach that learns the optimal value function (1) with sufficient training using a trial-and-error approach. Q-learning learns a control policy for what to do and how to do it so as to maximize the reward value, as stated in Algorithm 1. According to this algorithm, the AUV first checks its current state s_t (position and orientation) in the current environment and then picks a random action a_t. The random action results in the next state s_{t+1}, and, depending on the next state, the reward value r_t is generated as a reinforcement signal based on Δd, Δψ, and whether the target is achieved or a collision occurred. This reward value indicates the consequences or advantages of a_t at s_t. The information s_t, a_t, s_{t+1}, and r_t is fed to the Q-value function, and the Q-value Q(s_t, a_t) is updated. This process is repeated, taking a random action at each subsequent state, until the destination point is reached or the AUV collides with some obstacle. If in this process the AUV finds four paths, namely A, B, C, and D as shown in Fig. 2, then path A is selected as the optimal path at the time of exploitation, because the Q-learning policy attempts to maximize the cumulative reward value that the agent receives in the progressive transition of states from its present state.

Fig. 2  Q-learning to find the optimal path for the AUV

But, as we can see, there is no provision in this standard Q-learning algorithm to avoid or reduce collisions with obstacles (all it can learn from is experience), and there is no provision to deal with the curse of dimensionality; hence a
modified Q-learning algorithm is proposed, which can reduce the number of collisions with obstacles during the learning process and can deal with the curse of dimensionality along with the continuous-state-space problem.

3 Modified Q-learning

Traditionally, an AUV consists of three subsystems, namely the guidance system, the navigation system, and the control system. The guidance system designs an optimal path depending on the vehicle dynamics and obstacles, the control system executes the path, and the navigation system estimates the states/trajectory in the presence of noise or disturbances. Together, this guidance, navigation, and control (GNC) system strongly requires the mathematical model of the AUV, which is in turn parameter-dependent and may fail due to parametric uncertainties, plant-model mismatch, or changes in the environment. Hence, to replace the GNC system in such scenarios, a behavior-adaptive self-learning controller that does not depend on the plant's mathematical model is required. The standard one-step Q-learning algorithm stated in Algorithm 1 can be utilized for this task, as stated in Section 2.3, but it has a tendency to explore all possible actions in a given state, which may be dangerous in the presence of obstacles: the controller may try a random action, as stated in step 4 of Algorithm 1, in order to explore, and end up colliding with an obstacle many times, e.g., as with Path C and Path D shown in Fig.
2. Exploring with random actions at all times is a drawback of the present one-step Q-learning algorithm. Also, standard Q-learning requires a large amount of storage to save the entire discretized state and action space; this particular problem is known as the curse of dimensionality. Hence, to reduce collisions with obstacles and to deal with the curse of dimensionality, a modified version of the Q-learning algorithm is proposed, which augments each obstacle with an imaginary unsafe region around it, forces exploitation when the unsafe region is detected, and utilizes an NN. This modified algorithm ensures that when an obstacle is detected, the AUV does not perform exploration (does not try a new or random action); forced exploitation is carried out instead to get out of the unsafe region and reduce the number of collisions. Static obstacles represent hard constraints that must be taken into account in the development of an approximately optimal path planner. To facilitate the development of the obstacle-avoiding modified Q-learning control, we assume that the AUV receives full knowledge of nearby obstacles, the initial point, and the current state (position and orientation) using the sensors mounted on the AUV; obstacles are then augmented, in the received database, with an imaginary perimeter that extends from their borders, denoting an unsafe region, as illustrated in Fig. 3. It is also possible to use an auxiliary controller along with the optimal Q control for obstacle avoidance and to switch between the auxiliary and Q controllers when an obstacle is detected, but this again increases model dependency. Hence, a new approach is proposed in this paper that deals with the problem by updating the next values in the Q matrix for an upcoming obstacle and forcing exploitation in the process. Normally, exploration and exploitation is a trade-off between taking a safe action (exploiting a well-known, previously updated action with a high reward) and daring to try a new action (exploring) in order to discover new strategies with an even
higher or lower reward. In the presence of an obstacle, however, exploration is not a good idea, since a random new action may result in a collision with the obstacle; hence, in the modified Q-learning algorithm, forced exploitation is carried out whenever the AUV enters the unsafe region around an obstacle, to avoid the possibility of a collision. ε is the probability, set between 0 and 1, that decides how much to exploit and how much to explore, as stated in Section 2.1 above. The policy factor B̂ is a random value with exploitation probability (1 − ε) and exploration probability ε. When B̂ becomes 1, pure exploitation is carried out; otherwise pure exploration (randomly choosing an action a_t at state s_t) continues:

  a_t = argmax_{a ∈ A} Q,  if B̂ = 1;  a_t = rand(a ∈ A),  otherwise

Fig. 3  The i-th obstacle (radius r_obs) is augmented with the unsafe region (radius r_pen)

As stated in Algorithm 2, if the AUV enters the unsafe region, the future Q-values are updated to avoid the collision, and B̂ is set to 1 for pure exploitation irrespective of the value of ε. While the AUV is in the unsafe region, this process is repeated to move the AUV into the safe region and avoid collision with obstacles. The proposed modified Q-learning algorithm is stated in Algorithm 2 below.

Algorithm 2  Modified one-step Q-learning algorithm
  Initialize Q(s, a) arbitrarily
  Initialize the desired state s_d
  repeat (for each episode):
    Initialize s
    Initialize the exploration/exploitation policy factor B̂ and store it
    repeat (for each step of the episode):
      if B̂ == 1 then
        a = argmax_a Q(s, a)  : choose the action with maximum Q
      else
        a = sample(action space)  : randomly choose an action
      end
      Take action a and observe s'
      Decide the reward r(s, a, s') using Algorithm 3
      Update Q(s, a) ← Q(s, a) + α [ r + γ max_{a'} Q(s', a') − Q(s, a) ]
      if the unsafe region is detected
        Q(s', a) ← min(Q)  : assign a minimal value to the same action at the next step, so as not to go closer to the obstacle and to avoid collision
        B̂ = 1  : force pure exploitation to avoid the next action in the same direction
      else
        B̂ = stored factor for the episode
      end if
    until s is terminal
  until all episodes end

To explain
the importance of the policy factor B̂ in Algorithm 2, the procedure of the algorithm is shown as a flowchart in Fig. 4. The reward r can be designed as a function of the error between the desired state s_d and the new state s_{t+1} for the transition using action a_t; the reward value also depends on the current situation of the AUV, i.e., whether the present action a_t caused a transition from the safe to the unsafe region, from the unsafe to the safe region, or a collision. A short schematic for this is stated in Algorithm 3 below.

Algorithm 3  Reward function for obstacle avoidance
  if transition from safe region to unsafe region, then r = −10
  else if transition from unsafe region to unsafe region, then r = −20
  else if transition from unsafe region to safe region, then r = +10
  else if collision with obstacle, then r = −100 and restart the exploration
  else (transition from safe region to safe region) r(s, a) = tanh(|s_d − s_{t+1}|)
  end if

The second-to-last step in Algorithm 3 is the reward function expressed as a function of the error. This algorithm ensures that if the AUV has entered the unsafe region, it will try not to go closer to the obstacle, avoiding the collision, and will get out of the unsafe region by taking a different action.
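The case analysis of Algorithm 3 can be sketched as a small function. This is an illustrative reconstruction, not code from the paper: the region labels, the Euclidean error, and the decreasing-in-error shaping term for the safe-to-safe case are assumptions (the exact argument of the tanh term is garbled in the source).

```python
import math

# Hypothetical region labels for the AUV's situation at each step.
SAFE, UNSAFE, COLLISION = "safe", "unsafe", "collision"

def reward(prev_region, next_region, state, desired_state):
    """Reward shaping following Algorithm 3; numeric values from the paper."""
    if next_region == COLLISION:
        return -100.0        # collision: large penalty, exploration restarts
    if prev_region == SAFE and next_region == UNSAFE:
        return -10.0         # entered the unsafe region
    if prev_region == UNSAFE and next_region == UNSAFE:
        return -20.0         # lingering near the obstacle
    if prev_region == UNSAFE and next_region == SAFE:
        return +10.0         # escaped the unsafe region
    # Safe-to-safe transition: error-based shaping. The source's tanh
    # expression is garbled; a form that decreases with the distance to
    # the desired state is assumed here.
    err = math.dist(state, desired_state)
    return 1.0 - math.tanh(err)
```

With this sketch, moving between regions dominates the shaping term, so the agent is pushed out of unsafe regions before it optimizes progress toward the set-point.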
If the AUV is in a safe region, it will try to take the greedy action toward the desired set-point.

Fig. 4  Flowchart for the modified Q algorithm

3.1 Issues with Continuous State Space and the Curse of Dimensionality

RL algorithms can be modeled as MDPs; hence, we are required to define the state space S and the action space A. Q-learning is executed in order to learn the mapping from the present state input s_t to the highest-valued tried action, max_a Q(s_t, a). In a navigation problem, the AUV receives state information from the environment using its inertial measurement units (IMUs), and this state is used to decide which action should be taken in order to achieve the desired set-point/goal. For the AUV, the action space A is defined by the span of the rudder plane [min δ_r, max δ_r] and the stern plane [min δ_s, max δ_s]. Since the maximum span is [−20°, +20°] for the AUV, the action space for each control plane can have m user-defined discrete values. This helps the AUV take a different decision, avoiding the previous decision, in case the AUV is in the unsafe region near an obstacle. It is assumed that when the sensor system detects a nearby obstacle, the AUV receives this information as a vector U ∈ R, an indicator of the upcoming obstacle's position and its unsafe region. Therefore, the state space is augmented with U to define two groups of features, and is expressed as

  S_t = [s_t, U]ᵀ    (2)

But once the states are discretized, it is not always possible to be precise and optimal at the same time. Choosing to be more precise increases the computation cost; hence, there is a trade-off between the smoothness of the output trajectory and computational efficiency, and in such cases the output will not remain smooth. Yoo and Kim (2016) used path smoothing to deal with the smoothness of the output trajectory, but the computation cost is still high. Also, if the state space is discretized finely, the dimension of the Q matrix increases to a great extent, increasing storage
space; this particular problem is known as the curse of dimensionality. Hence, we propose to use function approximation to deal with the continuous-state problem and the curse of dimensionality.

3.2 Function Approximation Using a Neural Network

Traditional Q-learning can be designed directly for the AUV, but at the cost of discretizing the states and actions. However, in the AUV navigation task the states are continuous, owing to continuous motion and sensory inputs; hence, a large memory space is required to store all the state-action values of the Q-table, and the learning speed may decrease because a precise state space is required. This is typically known as the curse of dimensionality. In order to solve this problem, function approximation using a neural network (NN) can be used, since it provides good generalization as a universal function approximator and has a strong ability to deal with large-scale state spaces. An NN basically has three layers: the input layer, where input data are loaded; the output layer, where output data are loaded for training or the output is generated for testing; and the intermediate layer, known as the hidden layer, where a function (e.g., the sigmoid) is used for function approximation. The hidden layers are connected to the input and output layers via links, and these links have weights assigned. NNs can be classified into two types: the feed-forward neural network (FFNN), where the weights are fixed and not changed, and the back-propagation neural network (BPNN), where the weights are updated using methods such as back-propagation to train the NN. In the proposed NN-based Q-learning, the traditional Q-table is replaced by function approximation using a three-layer NN, as shown in Fig. 5. The input layer of the NN has four inputs, of which three are the AUV positions in the surge, sway, and yaw directions, and the fourth is the obstacle position. The action space is divided into 21 discrete values for convenience and represented as m Q-values at the output layer. An NN with fully trained weights is
utilized in the AUV's navigation problem. For every state transition from s_t to s_{t+1}, the inputs are passed through the input layer, as shown in Fig. 5, and the predicted output is generated by the NN. The weights are updated on the basis of the network error, which is the difference between its predicted

Fig. 5  Neural network (NN) for the modified Q algorithm

Table 1  Parameters for the AUV
  Initial position: x = 1, y = 1, ψ = 0°
  Desired set point: x = 100, y = 100, ψ = 45°
  State space: x ∈ [0, 110] m, y ∈ [0, 110] m, ψ ∈ [0°, 359°]
  Action space: [0°, ±5°, ±10°, ±15°, ±20°]
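The tabular core of the modified scheme described above, i.e., ε-greedy selection with forced exploitation near obstacles (Algorithm 2) plus the one-step update of Eq. (1), can be sketched as follows. All identifiers and numerical settings here are illustrative assumptions, not code from the paper.

```python
import random
from collections import defaultdict

# Illustrative discrete action set (rudder-plane angles in degrees,
# cf. Table 1); states are assumed to be discretized hashable tuples.
ACTIONS = [0, 5, -5, 10, -10, 15, -15, 20, -20]

def select_action(Q, s, epsilon, in_unsafe_region):
    """epsilon-greedy policy; exploitation is forced in the unsafe region."""
    if in_unsafe_region or random.random() > epsilon:   # B_hat == 1
        return max(ACTIONS, key=lambda a: Q[(s, a)])    # pure exploitation
    return random.choice(ACTIONS)                       # exploration

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One-step Q-learning update, Eq. (1)."""
    best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

Q = defaultdict(float)                    # unvisited pairs default to 0
q_update(Q, s=(1, 1), a=0, r=-10.0, s_next=(1, 2))
```

The `defaultdict` stands in for the paper's Q-table; Section 3.2 replaces exactly this table with a neural-network approximator when the state space is continuous.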
The Future of Smart Homes (English Essay)

In the not-so-distant future, the concept of a smart home is no longer a mere fantasy but a reality that is gradually permeating our daily lives. The integration of technology into our living spaces has transformed the way we interact with our homes, making them more intuitive, efficient, and personalized. This essay will explore the potential future of smart homes, the benefits they offer, and the challenges they may face.

The evolution of smart homes is a testament to the rapid advancement of technology. What was once a futuristic idea is now becoming a staple in many households. Smart homes are equipped with a range of devices that can be controlled remotely, from lights and thermostats to security systems and appliances. These devices are interconnected, allowing for seamless communication and automation. For instance, a smart thermostat can learn the user's preferences and adjust the temperature accordingly, ensuring a comfortable living environment without the need for manual intervention.

One of the most significant benefits of smart homes is the convenience they offer. With the ability to control various aspects of the home remotely, users can save time and effort. Imagine arriving home after a long day to a house that is already warmed up or cooled down to your preferred temperature. Or waking up to a room filled with light as the curtains open automatically at sunrise. These are just a few examples of how smart homes can enhance our daily routines.

Moreover, smart homes can contribute to energy efficiency and cost savings. By automating the control of lights, heating, and cooling systems, smart homes can reduce energy consumption and lower utility bills. For example, smart lighting systems can be programmed to turn off when a room is unoccupied, thereby conserving energy. Similarly, smart thermostats can optimize heating and cooling schedules based on the user's presence and preferences, further reducing energy waste.

However, the implementation of smart homes is not without its challenges.
One of the primary concerns is privacy and security. As smart homes rely on internet connectivity, they are susceptible to hacking and data breaches. Ensuring the security of these systems is crucial to protect users' personal information and prevent unauthorized access. Additionally, the cost of installing and maintaining smart home systems can be a barrier for some users. While the long-term benefits may outweigh the initial investment, affordability remains a significant factor for widespread adoption.

Another challenge is the interoperability of smart home devices. With numerous manufacturers and brands in the market, compatibility issues can arise. For a truly seamless smart home experience, devices from different manufacturers should be able to communicate and work together. This requires standardization and collaboration among industry players to develop universal protocols and interfaces.

Despite these challenges, the future of smart homes looks promising. As technology continues to advance, we can expect smarter, more efficient, and more secure smart home systems. The integration of artificial intelligence and machine learning can further enhance the capabilities of smart homes, making them more adaptive and personalized to individual needs.

In conclusion, the future of smart homes holds great potential for improving our living experience. They offer numerous benefits, such as convenience, energy efficiency, and cost savings. However, addressing the challenges related to privacy, security, and interoperability is essential for their widespread adoption. As technology continues to evolve, we can look forward to a future where our homes are not just places to live but intelligent spaces that cater to our every need and preference.
Design and Analysis of a Novel L1 Adaptive Controller, Part II

Design and Analysis of a Novel L1 Adaptive Controller, Part II: Guaranteed Transient Performance

Chengyu Cao and Naira Hovakimyan

Abstract—In this paper, we present a novel adaptive control architecture that ensures that the input and output of an uncertain linear system track the input and output of a desired linear system during the transient phase, in addition to the asymptotic tracking. Design guidelines are presented to ensure that the desired transient specifications can be achieved for both the system's input and output signals. The tools from this paper can be used to develop a theoretically justified verification and validation framework for adaptive systems. Simulation results illustrate the theoretical findings.

I. INTRODUCTION

The Model Reference Adaptive Control (MRAC) architecture was developed conventionally to control linear systems in the presence of parametric uncertainties [1], [2]. However, it offers no means for characterizing the system's input/output performance during the transient phase. Improvement of the transient performance of adaptive controllers has been addressed from various perspectives in numerous publications; one can find a detailed description of these results in Part I of this paper [3]. Therein, a novel L1 adaptive control design method is introduced, which guarantees that the control signal is in the low-frequency range by definition. In this Part II, we give a slightly different design of the same L1 adaptive controller. To enable comparison with high-gain controllers, we replace the feedback module K̂(s), introduced in [3], by a linear constant-gain feedback of the system states. For the sake of completeness, we briefly give the stability proof for this design, which requires a similar L1-gain minimization of a cascaded system as in [3]. The ideal (non-adaptive) version of this L1 adaptive controller is used along with the main system dynamics to define an extended closed-loop reference system, which gives an opportunity to estimate performance bounds in terms of L∞ norms for both
transient and steady-state errors of both the system's input and output signals, as compared to the same signals of this reference system. These bounds immediately imply that the transient performance of the control signal in MRAC cannot be characterized. Design guidelines for the selection of the low-pass filter ensure that the extended closed-loop reference system approximates the desired system response, despite the fact that it depends upon the unknown parameter.

The paper is organized as follows. Section II gives the problem formulation. In Section III, the new L1 adaptive controller is presented. Stability and tracking results of the L1 adaptive controller are presented in Section IV. Design guidelines for the L1 adaptive controller are presented in Section V. Comparison of the performance of the L1 adaptive controller, MRAC, and the high-gain controller is discussed in Section VI. In Section VII, simulation results are presented, while Section VIII concludes the paper. Proofs are in the Appendix.

(Research is supported by AFOSR under Contract No. FA9550-05-1-0157 and partly by an ADVANCE VT Institutional Transformation Research Seed Grant of NSF. The authors are with AOE, Virginia Tech, Blacksburg, VA 24061-0203; e-mail: {chengyu, nhovakim}@)

II. PROBLEM FORMULATION

Consider the following single-input single-output system:

  ẋ(t) = A_m x(t) − b θᵀx(t) + b u(t),  x(0) = x_0    (1)
  y(t) = cᵀx(t),

where x ∈ R^n is the system state vector (measurable), u ∈ R is the control signal, y ∈ R is the regulated output, b, c ∈ R^n are known constant vectors, A_m is a known n × n Hurwitz matrix, and θ ∈ R^n is a vector of unknown parameters, which belongs to a given compact convex set Θ, i.e., θ ∈ Θ.
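To make the problem setup concrete, the uncertain plant (1) can be simulated directly. The following sketch uses forward-Euler integration; the numerical values of A_m, b, c, and θ are illustrative assumptions, not values from the paper, and the ideal controller used at the end anticipates Remark 1 below (with K = 0, so that H_o(s) = (sI − A_m)⁻¹b).

```python
import numpy as np

# Example data for plant (1): x' = A_m x - b theta^T x + b u, y = c^T x.
A_m = np.array([[0.0, 1.0], [-1.0, -1.4]])   # Hurwitz example
b = np.array([0.0, 1.0])
c = np.array([1.0, 0.0])
theta = np.array([2.0, -1.0])                # "unknown" parameter

def simulate(u, x0, dt=1e-3, T=10.0):
    """Forward-Euler integration of the plant; u is a function (t, x) -> R."""
    x = np.array(x0, dtype=float)
    ys = []
    for k in range(int(T / dt)):
        xdot = A_m @ x - b * (theta @ x) + b * u(k * dt, x)
        x = x + dt * xdot
        ys.append(c @ x)
    return x, np.array(ys)

# DC gain matching: k_g = 1 / (c^T H_o(0)) with H_o(0) = (-A_m)^{-1} b.
k_g = 1.0 / (c @ np.linalg.solve(-A_m, b))

# Ideal (non-adaptive) controller u = theta^T x + k_g r cancels the
# uncertainty exactly, reducing the loop to x' = A_m x + b k_g r.
x_final, y = simulate(lambda t, x: theta @ x + k_g * 1.0, x0=[0.0, 0.0])
```

Under this ideal cancellation, the output settles at the constant reference r = 1, which is the behavior the L1 controller recovers adaptively with quantified transient bounds.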
The control objective is to design an adaptive controller that ensures y(t) tracks a given bounded continuous reference signal r(t) both in transient and steady state, while all other error signals remain bounded. Rigorously, the control objective is to ensure that y(s) ≈ D(s) r(s), where y(s), r(s) are the Laplace transformations of y(t), r(t), respectively, and D(s) is a strictly proper stable LTI system that specifies the desired transient and steady-state performance.

III. L1 ADAPTIVE CONTROLLER

In this section, we develop a novel adaptive control architecture that permits complete transient characterization for both system input and output signals. The control structure

  u(t) = u1(t) + u2(t),  u1(t) = −Kᵀx(t),    (2)

where u2(t) is the adaptive controller to be determined later, while K is a nominal design gain that can be set to zero, leads to the following partially closed-loop dynamics:

  ẋ(t) = A_o x(t) − b θᵀx(t) + b u2(t),  x(0) = x_0    (3)
  y(t) = cᵀx(t).

The choice of K needs to ensure that A_o = A_m − b Kᵀ is Hurwitz or, equivalently, that H_o(s) = (sI − A_o)⁻¹ b is stable. One obvious choice is K = 0. For the linearly parameterized system in (3), we consider the following companion model

  d/dt x̂(t) = A_o x̂(t) + b (u2(t) − θ̂ᵀ(t) x(t)),  x̂(0) = x_0    (4)
  ŷ(t) = cᵀx̂(t)

[Proceedings of the 2006 American Control Conference, Minneapolis, Minnesota, USA, June 14-16, 2006, ThB17.6]

along with the adaptive law for θ̂(t):

  d/dt θ̂(t) = Γ Proj(θ̂(t), x(t) x̃ᵀ(t) P_o b),  θ̂(0) = θ̂_0,    (5)

where x̃(t) = x̂(t) − x(t) is the tracking error, Γ = Γc I_{n×n} ∈ R^{n×n}, Γc > 0, is a positive-definite matrix of adaptation gains, and P_o = P_oᵀ > 0 is the solution of the algebraic equation A_oᵀP_o + P_o A_o = −Q_o for arbitrary Q_o > 0. Letting

  r̄(t) = θ̂ᵀ(t) x(t),    (6)

the companion model in (4) can be viewed as a low-pass system with u(t) being the control signal and r̄(t) a time-varying disturbance, which is not prevented from having high-frequency oscillations. Consider the following control design for (4):

  u2(s) = C(s) ( r̄(s) + k_g r(s) ),    (7)

where r̄(s), r(s) are the Laplace transformations of r̄
(t), r(t), respectively, C(s) is a stable and strictly proper system with low-pass gain C(0) = 1, and k_g = 1/(cᵀH_o(0)). The closed-loop companion model in (4) with the control signal in (7) can be viewed as an LTI system with two inputs, r(t) and r̄(t):

  x̂(s) = Ḡ(s) r̄(s) + G(s) r(s)    (8)
  Ḡ(s) = H_o(s) (C(s) − 1)    (9)
  G(s) = k_g H_o(s) C(s),    (10)

where x̂(s) is the Laplace transformation of x̂(t). We note that r̄(t) is related to x̂(t), u(t), and r(t) via nonlinear relationships. Let

  θ_max = max_{θ∈Θ} Σ_{i=1..n} |θ_i|,    (11)

where θ_i is the i-th element of θ and Θ is the compact set in which the unknown parameter lies. We now give the L1 performance requirement for the design of K and of the strictly proper stable system C(s).

L1-gain requirement: Design K and C(s) to satisfy

  λ ≜ ‖Ḡ(s)‖_L1 θ_max < 1,    (12)

where θ_max is defined in (11).

IV. ANALYSIS OF THE L1 ADAPTIVE CONTROLLER

A. Stability and Asymptotic Convergence

Consider the following Lyapunov function candidate:

  V(x̃(t), θ̃(t)) = x̃ᵀ(t) P_o x̃(t) + θ̃ᵀ(t) Γ⁻¹ θ̃(t),    (13)

where P_o and Γ are introduced in (5). It follows from (3) and (4) that

  d/dt x̃(t) = A_o x̃(t) − b θ̃ᵀ(t) x(t),  x̃(0) = 0.    (14)

Hence, it is straightforward to verify from (5) that

  V̇(t) ≤ −x̃ᵀ(t) Q_o x̃(t) ≤ 0.    (15)

Fig. 1  Block diagram of the reference LTI system

Notice that the result in (15) is independent of u2(t); however, one cannot deduce stability from it alone. One needs to prove in addition that, with the L1 adaptive controller, the state of the companion model remains bounded; boundedness of the system state then follows. Similar to Theorem 3 in Part I [3], we have:

Theorem 1: Given the system in (1) and the L1 adaptive controller defined via (2), (4), (5), (7) subject to (12), the tracking error x̃(t) converges to zero asymptotically:

  lim_{t→∞} x̃(t) = 0.    (16)

B. Reference System

Recall that in the conventional MRAC architecture the proof of asymptotic stability implies that the state of the system tracks the state of the reference model. With the L1 adaptive controller, one should question what
is the reference system that the closed-loop system with the L1 adaptive controller tracks. In this section we characterize the reference system that both the system state and the control input of the system (1) track, in transient as well as in steady state, under the L1 adaptive controller in (2), (4), (5), (7). Towards that end, consider the following ideal version of the adaptive controller in (2), (7):

  u_r(s) = C(s) (k_g r(s) + θᵀx_r(s)) − Kᵀx_r(s),    (17)

where x_r(s) denotes the Laplace transformation of the state x_r(t) of the closed-loop system. The block diagram of the system (1) with the controller (17) is shown in Fig. 1.

Remark 1: Notice that when C(s) = 1 and K = 0, one recovers the reference model of MRAC, and the ideal controller in (17) reduces to the conventional ideal controller u(t) = θᵀx(t) + k_g r(t) of MRAC. If C(s) ≠ 1 or K ≠ 0, then the control law in (17) changes its bandwidth.

Under the control action (17), x_r(s) can be expressed as:

  x_r(s) = (I − Ḡ(s)θᵀ)⁻¹ G(s) r(s).    (18)

Lemma 1: If the condition in (12) holds, then

  (i) (I − Ḡ(s)θᵀ)⁻¹ is stable;    (19)
  (ii) (I − Ḡ(s)θᵀ)⁻¹ G(s) is stable.

The proof follows easily from Theorem 1 in [3].

C. System Response and the L1 Adaptive Control Signal

Letting r1(t) = θ̃ᵀ(t) x(t), it follows from (6) that r̄(t) = θᵀ(x̂(t) − x̃(t)) + r1(t), t ≥ 0. Hence, the companion model in (8) can be rewritten as

  x̂(s) = (I − Ḡ(s)θᵀ)⁻¹ ( −Ḡ(s)θᵀx̃(s) + Ḡ(s) r1(s) + G(s) r(s) ),    (20)

where r1(s) is the Laplace transformation of r1(t). It follows from (14) that

  x̃(s) = −H_o(s) r1(s).    (21)

Substituting (9), (21) into (20) leads to

  x̂(s) = (I − Ḡ(s)θᵀ)⁻¹ G(s) r(s) + (I − Ḡ(s)θᵀ)⁻¹ ( −Ḡ(s)θᵀx̃(s) − (C(s) − 1) x̃(s) ).

Using x_r(s) from (18) and recalling the definition x̃(s) = x̂(s) − x(s), one arrives at

  x(s) = x_r(s) − ( I + (I − Ḡ(s)θᵀ)⁻¹ ( Ḡ(s)θᵀ + (C(s) − 1) I ) ) x̃(s).    (22)

It follows from (2), (7), and (17) that

  u(s) = u_r(s) + C(s) r1(s) + (C(s)θᵀ − Kᵀ)(x(s) − x_r(s)).    (23)

D. Asymptotic Performance and Steady-State Error

Theorem 2: Given the system
in (1) and the L1 adaptive controller defined via (2), (4), (5), (7), subject to (12), we have

    lim_{t→∞} ( x(t) − x_r(t) ) = 0,   (24)
    lim_{t→∞} |u(t) − u_r(t)| = 0.   (25)

Lemma 2: Given the system in (1) and the L1 adaptive controller defined via (2), (4), (5), (7), subject to (12), if r(t) is constant, we have

    lim_{t→∞} y(t) = r.

The closed-loop response with the L1 controller to a time-varying input r(t) is given in the next section.

E. Transient Performance

We note that (A_m − bKᵀ, b) is a state-space realization of H_o(s). Since (A_m, b) is controllable, it can easily be proved that (A_m − bKᵀ, b) is also controllable. It follows from Lemma 4 in [3] that there exists c_o ∈ Rⁿ such that

    c_oᵀ H_o(s) = N_n(s)/N_d(s),   (26)

where the order of N_d(s) is one more than the order of N_n(s), and both N_n(s) and N_d(s) are stable polynomials.

Theorem 3: Given the system in (1) and the L1 adaptive controller defined via (2), (4), (5), (7), subject to (12), we have:

    ‖x − x_r‖_{L∞} ≤ γ_1/Γ_c,   (27)
    ‖y − y_r‖_{L∞} ≤ γ_1 ‖cᵀ‖_{L1}/Γ_c,   (28)
    ‖u − u_r‖_{L∞} ≤ γ_2/Γ_c,   (29)

where ‖cᵀ‖_{L1} is the L1 gain of cᵀ, H_2(s) is defined in (47), and

    γ_1 = ‖H_2(s)‖_{L1} θ̄_max / λ_max(P_o),   (30)
    γ_2 = ‖C(s) (1/(c_oᵀH_o(s))) c_oᵀ‖_{L1} θ̄_max / λ_max(P_o) + ‖C(s)θᵀ − Kᵀ‖_{L1} γ_1.   (31)

Corollary 1: Given the system in (1) and the L1 adaptive controller defined via (2), (4), (5), (7), subject to (12), we have:

    lim_{Γ_c→∞} ( x(t) − x_r(t) ) = 0,  ∀ t ≥ 0,   (32)
    lim_{Γ_c→∞} ( y(t) − y_r(t) ) = 0,  ∀ t ≥ 0,   (33)
    lim_{Γ_c→∞} ( u(t) − u_r(t) ) = 0,  ∀ t ≥ 0.   (34)

Corollary 1 states that x(t), y(t) and u(t) follow x_r(t), y_r(t) and u_r(t) not only asymptotically but also during the transient, provided that the adaptive gain is selected sufficiently large. Thus, the control objective is reduced to designing K and C(s) to ensure that the reference LTI system has the desired response D(s).

Remark 2: Notice that if we set C(s) = 1, the L1 adaptive controller is equivalent to MRAC. In that case ‖C(s)(1/(c_oᵀH_o(s)))c_oᵀ‖_{L1} cannot be finite, since H_o(s) is strictly proper. Therefore, from (31) it follows that γ_2 → ∞, and hence for the control signal in MRAC one
cannot reduce the bound in (29) by increasing the adaptive gain.

V. DESIGN OF THE L1 ADAPTIVE CONTROLLER

We proved that the error between the state and control signal of the closed-loop system with the L1 adaptive controller in (1), (2), (4), (5), (7) and those of the closed-loop reference system in (17), (18) can be rendered arbitrarily small by choosing a large adaptive gain. Therefore, the control objective is reduced to determining K and C(s) to ensure that the reference system in (17), (18) (Fig. 1) has the desired response D(s) from r(t) to y_r(t). Notice that the reference system in Fig. 1 depends upon the unknown parameter θ. Consider the following signals:

    y_d(s) = cᵀG(s)r(s) = C(s) k_g cᵀH_o(s) r(s),   (35)
    u_d(s) = k_g C(s) ( 1 + C(s)θᵀH_o(s) − KᵀH_o(s) ) r(s).   (36)

We note that u_d(t) depends on the unknown parameter θ, while y_d(t) does not.

Lemma 3: For the LTI system in Fig. 1, subject to (12), the following upper bounds hold:

    ‖y_r − y_d‖_{L∞} ≤ (λ/(1 − λ)) ‖cᵀ‖_{L1} ‖G(s)‖_{L1} ‖r‖_{L∞},   (37)
    ‖y_r − y_d‖_{L∞} ≤ ‖cᵀ‖_{L1} ‖h_3‖_{L∞} / (1 − λ),   (38)
    ‖u_r − u_d‖_{L∞} ≤ λ ‖C(s)θᵀ − Kᵀ‖_{L1} ‖G(s)‖_{L1} ‖r‖_{L∞} / (1 − λ),   (39)
    ‖u_r − u_d‖_{L∞} ≤ ‖C(s)θᵀ − Kᵀ‖_{L1} ‖h_3‖_{L∞} / (1 − λ),   (40)

where h_3(t) is the inverse Laplace transform of

    H_3(s) = (C(s) − 1) C(s) r(s) k_g H_o(s)θᵀH_o(s).   (41)

Thus, we need to determine K and C(s) such that

    (i)  λ or ‖h_3‖_{L∞} is sufficiently small,   (42)
    (ii) y_d(s) ≈ D(s)r(s),   (43)

where D(s) is the desired LTI system. This in turn implies that the output y(t) of the system in (1) and its L1 adaptive control signal u(t) will follow y_d(t) and u_d(t), both in transient and steady state, with the quantifiable bounds given in (25), (29) and (38)-(40). Thus, for given desired specifications, one needs to ensure that (42) and (43) are satisfied, which in turn implies that the L1 adaptive controller controls a partially known system with satisfactory performance. We note that the minimization of λ in (42) is consistent with the stability requirement in (12), while the other requirements in (42) and
(43) can be achieved via two different design methods: i) fix C(s) and minimize ‖H_o(s)‖_{L1}; ii) fix H_o(s) and minimize the L1 gain of one of the cascaded systems, ‖H_o(s)(C(s) − 1)‖_{L1}, ‖(C(s) − 1)r(s)‖_{L1} or ‖C(s)(C(s) − 1)‖_{L1}, via the choice of C(s). The important point to emphasize is that the requirements in (42) and (43) are not in conflict with each other.

Design Method 1. Set C(s) = D(s). Then minimization of ‖H_o(s)‖_{L1} can be achieved via high-gain feedback by choosing K sufficiently large. However, minimizing ‖H_o(s)‖_{L1} via large K leads to large poles of H_o(s), which is typical of high-gain design methods. Since C(s) is a strictly proper system containing the dominant poles of the closed-loop system in k_g cᵀH_o(s)C(s), and k_g cᵀH_o(0) = 1, we have k_g cᵀH_o(s)C(s) ≈ C(s) = D(s). Hence, the system response is y_r(s) ≈ D(s)r(s). We note that with large feedback K the L1 adaptive controller degenerates into a high-gain robust one. The shortcoming of this design is that the high-gain feedback K reduces the phase margin and hurts robustness.

Design Method 2. As in MRAC, assume that we can select A_m to ensure k_g cᵀ(sI − A_m)⁻¹b ≈ D(s). Then we can set K = 0. Alternatively, one can choose K to ensure k_g cᵀH_o(s) ≈ D(s). Let C(s) = ω/(s + ω).

Lemma 4: For any single-input, n-output, strictly proper stable system H(s), the following is true:

    lim_{ω→∞} ‖(C(s) − 1)H(s)‖_{L1} = 0.

The proof is straightforward and is therefore omitted. Lemma 4 states that if one chooses k_g cᵀH_o(s)r(s) ≈ D(s)r(s), then by increasing the bandwidth of the low-pass system C(s) it is possible to render ‖Ḡ(s)‖_{L1} arbitrarily small. With large ω, the pole at −ω due to C(s) is negligible, and H_o(s) dominates the reference system, leading to y_r(s) ≈ k_g cᵀH_o(s)r(s) ≈ D(s)r(s). We note that k_g cᵀH_o(s) is exactly the reference model of the MRAC design. Therefore this approach is equivalent to mimicking MRAC, and hence high-gain feedback can be completely avoided.
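Lemma 4 can be illustrated numerically. The sketch below is built on assumptions, not on data from the paper itself: it borrows the A_m and b of the simulation section with K = 0, so that H_o(s) = (sI − A_m)⁻¹b, takes the first-order filter C(s) = ω/(s + ω), and estimates the L1 gain of each output channel of (C(s) − 1)H_o(s) as a Riemann sum over a sampled impulse response.

```python
import numpy as np
from scipy.linalg import expm

# Assumed data (simulation section, K = 0): H_o(s) = (sI - A_m)^{-1} b
A_m = np.array([[0.0, 1.0], [-1.0, -1.4]])
b = np.array([[0.0], [1.0]])

def l1_gain_Gbar(omega, T=40.0, dt=2e-4):
    """Riemann-sum estimate of the L1 gain (max over the two output channels)
    of Gbar(s) = H_o(s)(C(s) - 1), with C(s) = omega/(s + omega).
    Realization of C(s) - 1 = -s/(s + omega):
        xf' = -omega*xf + u,   v = omega*xf - u,
    with the feedthrough -u folded into the input matrix of the cascade."""
    A = np.block([[np.full((1, 1), -omega), np.zeros((1, 2))],
                  [omega * b,               A_m]])
    B = np.vstack([np.ones((1, 1)), -b])
    C = np.hstack([np.zeros((2, 1)), np.eye(2)])
    Phi = expm(A * dt)          # exact one-step propagator of the impulse response
    x, acc = B.copy(), np.zeros(2)
    for _ in range(int(T / dt)):
        acc += np.abs(C @ x)[:, 0] * dt
        x = Phi @ x
    return acc.max()

gains = {w: l1_gain_Gbar(w) for w in (10.0, 40.0, 160.0)}
for w, g in gains.items():
    print(f"omega = {w:6.1f}   ||(C-1)H_o||_L1 ~ {g:.4f}")
```

Per Lemma 4 the gain tends to zero as ω grows; the step size and horizon above are coarse, but sufficient to see the trend.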
However, increasing the bandwidth of C(s) is not the only way to minimize ‖Ḡ(s)‖_{L1}. Since C(s) is a low-pass filter, its complement 1 − C(s) is a high-pass filter whose cutoff frequency approximates the bandwidth of C(s). Since both H_o(s) and C(s) are strictly proper systems, Ḡ(s) = H_o(s)(C(s) − 1) amounts to cascading the low-pass system H_o(s) with the high-pass system C(s) − 1. If one chooses the cutoff frequency of C(s) − 1 larger than the bandwidth of H_o(s), then Ḡ(s) is a "no-pass" system, and hence its L1 gain can be rendered suitably small. This can be done via higher-order filter design methods.

Next, consider the minimization of ‖h_3‖_{L∞}. We note that ‖h_3‖_{L∞} can be upper bounded in two ways:

    (i)  ‖h_3‖_{L∞} ≤ ‖(C(s) − 1)r(s)‖_{L1} ‖h_4‖_{L∞}, where h_4(t) is the inverse Laplace transform of H_4(s) = C(s) k_g H_o(s)θᵀH_o(s);
    (ii) ‖h_3‖_{L∞} ≤ ‖(C(s) − 1)C(s)‖_{L1} ‖h_5‖_{L∞}, where h_5(t) is the inverse Laplace transform of H_5(s) = r(s) k_g H_o(s)θᵀH_o(s).

Since r(t) is a bounded signal and C(s), H_o(s) are stable proper systems, ‖h_4‖_{L∞} and ‖h_5‖_{L∞} are finite. Therefore ‖h_3‖_{L∞} can be minimized by minimizing ‖(C(s) − 1)r(s)‖_{L1} or ‖(C(s) − 1)C(s)‖_{L1}.

First, consider minimizing ‖(C(s) − 1)r(s)‖_{L1}. Since r(t) usually lies in the low-frequency range, one can choose the cutoff frequency of C(s) − 1 larger than the bandwidth of the reference signal r(t).

Second, consider minimizing ‖C(s)(C(s) − 1)‖_{L1}. If C(s) were an ideal low-pass filter, one could easily check that C(s)(C(s) − 1) = 0 and hence ‖h_3‖_{L∞} = 0. Although an ideal low-pass filter is not physically implementable, one can still minimize ‖C(s)(C(s) − 1)‖_{L1} via the choice of the low-pass filter C(s).

The approaches presented above ensure that C(s) ≈ 1 over the bandwidths of r(s) and H_o(s). Therefore it follows from (35) that y_d(s) = C(s) k_g cᵀH_o(s) r(s) ≈ k_g cᵀH_o(s) r(s), which consequently implies that y_d(s) ≈ D(s)r(s).
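The "no-pass" cascade argument can also be probed numerically for the two filters that appear later in the simulation section. Everything below is an assumption-laden sketch: K = 0, the bound L = max ‖θ‖_1 = 20 for θ_i ∈ [−10, 10], and the L1 gain of the two-output Ḡ(s) taken as the maximum over channels. The paper's exact norm convention may differ, so the λ values printed here need not reproduce the reported 0.1725 and 0.3984.

```python
import numpy as np
from scipy.linalg import expm
from scipy.signal import tf2ss

A_m = np.array([[0.0, 1.0], [-1.0, -1.4]])   # assumed plant data, K = 0
b = np.array([[0.0], [1.0]])

def lam(num, den, L=20.0, T=40.0, dt=2e-4):
    """lambda = L * ||H_o(s) F(s)||_L1, where the SISO filter F(s) = C(s) - 1
    is given as num/den; the L1 gain is a Riemann sum over the sampled
    impulse response of the filter-plant cascade."""
    Af, Bf, Cf, Df = tf2ss(num, den)
    k = Af.shape[0]
    A = np.block([[Af, np.zeros((k, 2))], [b @ Cf, A_m]])
    B = np.vstack([Bf, b * float(np.ravel(Df)[0])])
    C = np.hstack([np.zeros((2, k)), np.eye(2)])
    Phi = expm(A * dt)
    x, acc = B.copy(), np.zeros(2)
    for _ in range(int(T / dt)):
        acc += np.abs(C @ x)[:, 0] * dt
        x = Phi @ x
    return L * acc.max()

w1, w3 = 160.0, 50.0
lam1 = lam([-1.0, 0.0], [1.0, w1])              # first order:  C - 1 = -s/(s + w1)
lam3 = lam([-1.0, -3.0 * w3, 0.0, 0.0],         # third order:  C - 1 = -s^2 (s + 3w)/(s + w)^3
           [1.0, 3.0 * w3, 3.0 * w3**2, w3**3])
print(lam1, lam3)
```

Note how the first-order filter needs the much larger bandwidth (160 vs. 50) to keep its λ small, which is consistent with the trade-off discussed above.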
Remark 3: From Corollary 1 and Lemma 3 it follows that the L1 adaptive controller can generate a system response that tracks (35) and (36) both in transient and steady state, provided that we set the adaptive gain large and minimize λ or ‖h_3‖_{L∞}. Notice that u_d(t) in (36) depends upon the unknown parameter θ, while y_d(t) in (35) does not. This implies that for different values of θ the L1 adaptive controller generates different control signals (dependent on θ) to ensure a uniform system response (independent of θ). This is natural, since different unknown parameters imply different systems, and to obtain similar responses from different systems the control signals have to differ. Herein lies the advantage of the L1 adaptive controller: it controls a partially known system as an LTI feedback controller would have done had the unknown parameters been known. Finally, we note that if the term k_g C(s)C(s)θᵀH_o(s) is dominated by k_g C(s)KᵀH_o(s), then the controller in (36) turns into a robust one, and the L1 adaptive controller degenerates into a robust design.

VI. DISCUSSION

We use a scalar system to compare the performance of the L1 and high-gain controllers. Towards that end, consider ẋ(t) = θx(t) + u(t), where x ∈ R is the measurable system state, u ∈ R is the control signal, and θ ∈ R is unknown but belongs to a given compact set [θ_min, θ_max]. Let u(t) = −kx(t) + kr(t), leading to the following closed-loop system:

    ẋ(t) = (θ − k)x(t) + kr(t).

We need to choose k > θ_max to guarantee stability. We note that both the steady-state error and the transient performance depend on the unknown parameter θ. By further introducing a proportional-integral controller, we can achieve zero steady-state error. When we choose k ≫ max{|θ_min|, |θ_max|}, we have

    x(s) = ( k/(s − (θ − k)) ) r(s) ≈ ( k/(s + k) ) r(s),

which leads to a high-gain system. To apply the L1 adaptive controller, consider the following desired reference system: D(s) = 2/(s + 2). Let u_1 = −2x and k_g = 2, leading to H_o(s) = 1/(s + 2). Choose C(s) = ω_n/(s + ω_n) with large ω_n, and set the adaptive gain Γ_c large.
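The scalar comparison can be made concrete with a short simulation of the ideal (reference) loop. The θ values and the unit-step r below are illustrative assumptions; the point is only that the ideal control, which cancels θ, produces the same state trajectory for every θ while the control signal itself differs.

```python
import numpy as np

# Plant: xdot = theta*x + u, desired D(s) = 2/(s+2). The ideal control
# u_r = -(2 + theta)*x + 2*r cancels theta, so xdot = -2*x + 2*r for every theta.
def simulate(theta, r=1.0, T=5.0, dt=1e-3):
    n = int(T / dt)
    x, u = np.zeros(n + 1), np.zeros(n)
    for k in range(n):
        u[k] = -(2.0 + theta) * x[k] + 2.0 * r        # ideal control (theta known)
        x[k + 1] = x[k] + dt * (theta * x[k] + u[k])  # forward-Euler plant step
    return x, u

xs, us = zip(*(simulate(th) for th in (-4.0, 0.0, 6.0)))
print(max(float(np.max(np.abs(xs[0] - xi))) for xi in xs[1:]))  # state responses coincide
print([float(u[-1]) for u in us])                               # control signals differ
```

The identical state trajectories with θ-dependent control signals are exactly the behavior described in Remark 3.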
Then it follows from Theorem 3 that

    x(s) ≈ x_r(s) = ( 2/(s + 2) ) r(s),   (44)
    u(s) ≈ u_r(s) = −(2 + θ)x_r(s) + 2r(s).   (45)

The relationship in (44) implies that the control objective is met, while the relationship in (45) states that the L1 adaptive controller approximates u_r(t), which cancels the unknown θ.

VII. SIMULATIONS

Consider the following system parameters:

    A_m = [ 0  1 ; −1  −1.4 ],   b = [0  1]ᵀ,   c = [1  0]ᵀ,

where the unknown parameter θ = [4  −4.5]ᵀ belongs to the known compact set θ_i ∈ [−10, 10], i = 1, 2. We set K = 0 to avoid linear feedback completely and present two simulation scenarios for different choices of C(s). The simulation results are shown in Figs. 2-5.

[Fig. 2. Performance of the L1 adaptive controller with C(s) = 160/(s + 160), Γ_c = 40000, for r = 25, 100, 400, with uniform bound ‖y_ref − y_des‖_{L∞} ≤ 0.0946 ‖r‖_{L∞}, while λ = 0.1725: (a) y(t) (solid) and r(t) (dashed); (b) control signal.]

[Fig. 3. Performance of the L1 adaptive controller with C(s) = (3ω²s + ω³)/(s + ω)³, ω = 50, Γ_c = 400, for r = 25, 100, 400, with uniform bound ‖y_ref − y_des‖_{L∞} ≤ 0.0721 ‖r‖_{L∞}, while λ = 0.3984: (a) y(t) (solid) and r(t) (dashed); (b) control signal.]

[Fig. 4. C(s) = 160/(s + 160), Γ_c = 40000, for r = 100 cos(0.2t): (a) y(t) (solid) and r(t) (dashed); (b) time history of u(t).]

[Fig. 5. C(s) = (3·50²s + 50³)/(s + 50)³, Γ_c = 400, for r = 100 cos(0.2t): (a) y(t) (solid) and r(t) (dashed); (b) time history of u(t).]

It can be seen that the L1 adaptive controller yields a scaled control signal and a scaled system response for scaled reference inputs. We notice that in the second case the higher-order C(s) leads to an improved bound with a smaller bandwidth and a smaller adaptive gain. While a rigorous relationship between the choice of the adaptive gain and the bandwidth of the low-pass filter has not been derived at this stage, an insight can be gained from the following analysis. It follows from (3), (2) and (7) that

    x(s) = k_g H_o(s)C(s)r(s) − H_o(s)θᵀx(s) + H_o(s)C(s)r̄(s),

while the companion model in (4) can be rewritten as

    x̂(s) = k_g H_o(s)C(s)r(s) + H_o(s)(C(s) − 1)r̄(s).

We note that r̄(t) is divided into two parts.
Its low-frequency component C(s)r̄(s) is what the system in (3) receives, while the complementary high-frequency component (C(s) − 1)r̄(s) goes into the companion model. We recall that higher frequencies appear in r̄(t) in the presence of a large adaptive gain. Therefore a first-order C(s) with large bandwidth achieves the desired performance with a large adaptive gain, while a higher-order filter with a smaller bandwidth and reduced tailing effects obtains similar performance with a smaller adaptive gain. Figs. 4(a)-4(b) and 5(a)-5(b) show the system response and the control signal for the reference input r(t) = 100 cos(0.2t), without any retuning of the controller.

VIII. CONCLUSION

A novel L1 adaptive controller is developed that has guaranteed transient response in addition to stable tracking. The new low-pass control architecture tolerates high adaptation gains without generating high-frequency oscillations in the control signal and guarantees the desired transient performance for both the system's input and output signals. In [4], [5] the methodology is extended to systems with unknown time-varying parameters and bounded disturbances in the presence of unknown high-frequency gain, and stability margins are derived. These arguments enable the development of theoretically justified tools for verification and validation of adaptive controllers.

REFERENCES

[1] K. Narendra and A. Annaswamy, Stable Adaptive Systems. Prentice Hall, 1989.
[2] J.-J. Slotine and W. Li, Applied Nonlinear Control. Prentice Hall, 1991.
[3] C. Cao and N. Hovakimyan, "Design and analysis of a novel L1 adaptive controller, Part I: Control signal and asymptotic stability," in Proc. American Control Conference, 2006.
[4] C. Cao and N. Hovakimyan, "Guaranteed transient performance with L1 adaptive controller for systems with unknown time-varying parameters: Part I," Conf. on Decision and Control, submitted, 2006.
[5] C. Cao and N. Hovakimyan, "Stability margins of L1 adaptive controller: Part II," Conf. on Decision and Control, submitted, 2006.

APPENDIX

Proof of Theorem 2: Let

    r_2(s) = ( I + (I − Ḡ(s)θᵀ)⁻¹ ( Ḡ(s)θᵀ
+ (C(s) − 1)I ) ) x̃(s).   (46)

It follows from (22) that r_2(t) = x_r(t) − x(t). The signal r_2(t) can be viewed as the response of the LTI system

    H_2(s) = I + (I − Ḡ(s)θᵀ)⁻¹ ( Ḡ(s)θᵀ + (C(s) − 1)I )   (47)

to the bounded error signal x̃(t). It follows from (19) that (I − Ḡ(s)θᵀ)⁻¹, Ḡ(s) and C(s) are stable, and therefore H_2(s) is stable. Hence, from (16) we have lim_{t→∞} r_2(t) = 0, which confirms (24). Let

    r_3(s) = C(s)r_1(s) + (C(s)θᵀ − Kᵀ)( x(s) − x_r(s) ).   (48)

It follows from (23) that r_3(t) = u(t) − u_r(t). Since θ̃(t) is bounded, it follows from (14) and (16) that

    lim_{t→∞} r_1(t) = 0.   (49)

Since C(s) is a stable proper system, it follows from (24), (48) and (49) that lim_{t→∞} r_3(t) = 0, which confirms (25).

Proof of Lemma 2: Since y_r(t) = cᵀx_r(t), it follows from (24) that lim_{t→∞}( y(t) − y_r(t) ) = 0. It follows from (18) that y_r(s) = cᵀ(I − Ḡ(s)θᵀ)⁻¹G(s)r(s), and hence for constant r the final value theorem ensures

    lim_{t→∞} y_r(t) = lim_{s→0} cᵀ(I − Ḡ(s)θᵀ)⁻¹G(s)r = cᵀH_o(0)C(0)k_g r.

The definition of k_g and the fact that C(0) = 1 lead to lim_{t→∞} y(t) = r.

Proof of Theorem 3: It follows from (46) and Corollary 1 in [3] that ‖r_2‖_{L∞} ≤ ‖H_2(s)‖_{L1} ‖x̃‖_{L∞}. It follows from (13), (14), (15) that

    ‖x̃‖_{L∞} ≤ θ̄_max / ( λ_max(P_o) Γ_c ).   (50)

Therefore,

    ‖r_2‖_{L∞} ≤ ‖H_2(s)‖_{L1} θ̄_max / ( λ_max(P_o) Γ_c ),

which along with (30) leads to (27). The upper bound in (28) follows directly from (27) and Lemma 2 in [3]. From (21) we have

    r_3(s) = C(s) (1/(c_oᵀH_o(s))) c_oᵀH_o(s) r_1(s) + (C(s)θᵀ − Kᵀ)( x(s) − x_r(s) )
           = −C(s) (1/(c_oᵀH_o(s))) c_oᵀ x̃(s) + (C(s)θᵀ − Kᵀ)( x(s) − x_r(s) ),

where c_o is introduced in (26). It follows from (26) that C(s)(1/(c_oᵀH_o(s))) = C(s)N_d(s)/N_n(s), where N_d(s), N_n(s) are stable polynomials and the order of N_n(s) is one less than the order of N_d(s). Since C(s) is stable and strictly proper, the complete system C(s)(1/(c_oᵀH_o(s))) is proper and stable, which implies that its L1 gain exists and is finite. Thus,

    ‖r_3‖_{L∞} ≤ ‖C(s)(1/(c_oᵀH_o(s)))c_oᵀ‖_{L1} ‖x̃‖_{L∞} + ‖C(s)θᵀ − Kᵀ‖_{L1} ‖x − x_r‖_{L∞},

which together with (27) and (50) leads to (29).
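The final-value computation in the proof of Lemma 2 can be spot checked numerically. The numbers below are assumptions taken from the simulation section (K = 0), together with the standard gain choice k_g = −1/(cᵀ A_m⁻¹ b) from [3], which makes the DC gain cᵀH_o(0)C(0)k_g equal to one:

```python
import numpy as np

A_m = np.array([[0.0, 1.0], [-1.0, -1.4]])  # assumed data from the simulation section
b = np.array([[0.0], [1.0]])
c = np.array([[1.0], [0.0]])

k_g = -1.0 / (c.T @ np.linalg.solve(A_m, b))[0, 0]  # makes the DC gain unity
H0 = np.linalg.solve(-A_m, b)                       # H_o(0) = (-A_m)^{-1} b  (K = 0)
dc = (c.T @ H0)[0, 0] * k_g                         # c^T H_o(0) C(0) k_g, with C(0) = 1
print(k_g, dc)
```

For these parameters k_g works out to 1, and the DC gain is exactly one, matching lim y(t) = r for constant r.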
Proof of Lemma 3: It follows from (18) that y_r(s) = cᵀ(I − Ḡ(s)θᵀ)⁻¹G(s)r(s). Following Lemma 1, the condition in (12) ensures stability of the reference system. Since (I − Ḡ(s)θᵀ)⁻¹ is stable, one can expand it into a convergent series:

    y_r(s) = cᵀ( I + Σ_{i=1}^{∞} (Ḡ(s)θᵀ)^i ) G(s)r(s) = y_d(s) + cᵀ( Σ_{i=1}^{∞} (Ḡ(s)θᵀ)^i ) G(s)r(s).   (51)

Let r_4(s) = cᵀ( Σ_{i=1}^{∞} (Ḡ(s)θᵀ)^i ) G(s)r(s). Then r_4(t) = y_r(t) − y_d(t). It follows from Lemma 2 in [3] that

    ‖r_4‖_{L∞} ≤ ( Σ_{i=1}^{∞} λ^i ) ‖cᵀ‖_{L1} ‖G(s)‖_{L1} ‖r‖_{L∞} = (λ/(1 − λ)) ‖cᵀ‖_{L1} ‖G(s)‖_{L1} ‖r‖_{L∞}.   (52)

Using (9), (10) and (41), from (51) one can derive

    y_r(s) = y_d(s) + cᵀ( Σ_{i=1}^{∞} (Ḡ(s)θᵀ)^{i−1} ) H_3(s),

upon which Corollary 1 in [3] leads to (38). Comparing u_d(s) in (36) to u_r(s) in (17), it follows that u_d(s) can be written as u_d(s) = k_g C(s)r(s) + (C(s)θᵀ − Kᵀ)x_d(s), where x_d(s) = C(s)k_g H_o(s)r(s). Therefore

    u_r(s) − u_d(s) = (C(s)θᵀ − Kᵀ)( x_r(s) − x_d(s) ).

Hence, it follows from Lemma 1 in [3] that ‖u_r − u_d‖_{L∞} ≤ ‖C(s)θᵀ − Kᵀ‖_{L1} ‖x_r − x_d‖_{L∞}. Using the same steps as for ‖y_r − y_d‖_{L∞}, we have

    ‖x_r − x_d‖_{L∞} ≤ λ ‖G(s)‖_{L1} ‖r‖_{L∞} / (1 − λ),
    ‖x_r − x_d‖_{L∞} ≤ ‖h_3‖_{L∞} / (1 − λ),

and hence (39) and (40) are proved.
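As a closing check, the reference system (18) itself can be simulated for the Section VII parameters. The state-space realization below is an assumed reconstruction from x_r = H_o(s)[ C(s)(θᵀx_r + k_g r) − θᵀx_r ] with K = 0 and C(s) = 160/(s + 160); Lemma 1 predicts that the loop is stable, and Lemma 2 predicts y_r → r for a step reference.

```python
import numpy as np
from scipy.linalg import expm

A_m = np.array([[0.0, 1.0], [-1.0, -1.4]])
b = np.array([[0.0], [1.0]])
c = np.array([[1.0], [0.0]])
theta = np.array([[4.0], [-4.5]])
omega, k_g = 160.0, 1.0                 # k_g = -1/(c^T A_m^{-1} b) = 1 for this A_m

# states z = [x_c, x_r]: filter  x_c' = -omega*x_c + theta^T x_r + k_g r
#                        plant   x_r' = A_m x_r + b (omega*x_c - theta^T x_r)
A_cl = np.block([[np.full((1, 1), -omega), theta.T],
                 [omega * b,               A_m - b @ theta.T]])
B_cl = np.vstack([np.full((1, 1), k_g), np.zeros((2, 1))])

assert np.all(np.linalg.eigvals(A_cl).real < 0)   # Lemma 1: reference loop stable
z_ss = -np.linalg.solve(A_cl, B_cl)               # steady state for unit step r = 1
y_inf = z_ss[1, 0]                                # y_r = c^T x_r (first plant state)
z_3 = z_ss - expm(A_cl * 3.0) @ z_ss              # z(3), starting from z(0) = 0
y_3s = (c.T @ z_3[1:, :])[0, 0]
print(y_inf, y_3s)
```

The steady-state output equals the reference exactly, and the value at t = 3 s is already close to it, consistent with the dominant dynamics being approximately D(s) = k_g cᵀH_o(s).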
Excerpt from 49 CFR Part 173: Shippers - General Requirements for Shipments and Packagings

Up-and-Down Procedure Peer Panel Report, Appendix Q-7: Regulations. Excerpt from 49 CFR Ch. I (10-1-98 Edition), Part 173, pages 342-348 and 441-443.

The Department of Transportation, in compliance with the Hazardous Materials Regulations, outlines the requirements to be observed in preparing hazardous materials for shipment by air, highway, rail, or water, or any combination thereof. These regulations are based on the Recommendations of the United Nations Committee of Experts on the Transport of Dangerous Goods, the International Civil Aviation Organization, and the International Maritime Organization.

PART 173 - SHIPPERS - GENERAL REQUIREMENTS FOR SHIPMENTS AND PACKAGINGS

Subpart A - General

Sec.
173.1 Purpose and scope.
173.2 Hazardous materials classes and index to hazard class definitions.
173.2a Classification of a material having more than one hazard.
173.3 Packaging and exceptions.
173.4 Small quantity exceptions.
173.5 Agricultural operations.
173.5a Oilfield service vehicles.
173.6 Materials of trade exceptions.
173.7 U.S. Government material.
173.8 Exceptions for non-specification packagings used in intrastate transportation.
173.9 Transport vehicles or freight containers containing lading which has been fumigated.
173.10 Tank car shipments.
173.12 Exceptions for shipment of waste materials.
173.13 Exceptions for Class 3, Divisions 4.1, 4.2, 4.3, 5.1, 6.1, and Classes 8 and 9 materials.

Subpart B - Preparation of Hazardous Materials for Transportation

173.21 Forbidden materials and packages.
173.22 Shipper's responsibility.
173.22a Use of packagings authorized under exemptions.
173.23 Previously authorized packaging.
173.24 General requirements for packagings and packages.
173.24a Additional general requirements for non-bulk packagings and packages.
173.24b Additional general requirements for bulk packagings.
173.25 Authorized packages and overpacks.
173.26 Quantity limitations.
173.27 General requirements for transportation by aircraft.
173.28 Reuse, reconditioning and remanufacture of packagings.
173.29 Empty packagings.
173.30 Loading and unloading of transport vehicles.
173.31 Use of tank cars.
173.32 Qualification, maintenance and use of portable tanks other than Specification IM portable tanks.
173.32a Approval of Specification IM portable tanks.
173.32b Periodic testing and inspection of Specification IM portable tanks.
173.32c Use of Specification IM portable tanks.
173.33 Hazardous materials in cargo tank motor vehicles.
173.34 Qualification, maintenance, and use of cylinders.
173.35 Hazardous materials in intermediate bulk containers.
173.40 General packaging requirements for poisonous materials required to be packaged in cylinders.

Subpart C - Definitions, Classification and Packaging for Class 1

173.50 Class 1 - Definitions.
173.51 Authorization to offer and transport explosives.
173.52 Classification codes and compatibility groups of explosives.
173.53 Provisions for using old classifications of explosives.
173.54 Forbidden explosives.
173.55 [Reserved]
173.56 New explosives - Definition and procedures for classification and approval.
173.57 Acceptance criteria for new explosives.
173.58 Assignment of class and division for new explosives.
173.59 Description of terms for explosives.
173.60 General packaging requirements for explosives.
173.61 Mixed packaging requirements.
173.62 Specific packaging requirements for explosives.
173.63 Packaging exceptions.

Subpart D - Definitions, Classification, Packing Group Assignments and Exceptions for Hazardous Materials Other Than Class 1 and Class 7

173.115 Class 2, Divisions 2.1, 2.2, and 2.3 - Definitions.
173.116 Class 2 - Assignment of hazard zone.
173.117-173.119 [Reserved]
173.120 Class 3 - Definitions.
173.121 Class 3 - Assignment of packing group.
173.124 Class 4, Divisions 4.1, 4.2 and 4.3 - Definitions.
173.125 Class 4 - Assignment of packing group.
173.127 Class 5, Division 5.1 - Definition and assignment of packing groups.
173.128 Class 5, Division 5.2 - Definitions and types.
173.129 Class 5, Division 5.2 - Assignment of packing group.
173.132 Class 6, Division 6.1 - Definitions.
173.133 Assignment of packing group and hazard zones for Division 6.1 materials.
173.134 Class 6, Division 6.2 - Definitions, exceptions and packing group assignments.
173.136 Class 8 - Definitions.
173.137 Class 8 - Assignment of packing group.
173.140 Class 9 - Definitions.
173.141 Class 9 - Assignment of packing group.
173.144 Other Regulated Materials (ORM) - Definitions.
173.145 Other Regulated Materials - Assignment of packing group.
173.150 Exceptions for Class 3 (flammable) and combustible liquids.
173.151 Exceptions for Class 4.
173.152 Exceptions for Division 5.1 (oxidizers) and Division 5.2 (organic peroxides).
173.153 Exceptions for Division 6.1 (poisonous materials).
173.154 Exceptions for Class 8 (corrosive materials).
173.155 Exceptions for Class 9 (miscellaneous hazardous materials).
173.156 Exceptions for ORM materials.
Subpart E - Non-bulk Packaging for Hazardous Materials Other Than Class 1 and Class 7

173.158 Nitric acid.
173.159 Batteries, wet.
173.160 Bombs, smoke, non-explosive (corrosive).
173.161 Chemical kits.
173.162 Gallium.
173.163 Hydrogen fluoride.
173.164 Mercury (metallic and articles containing mercury).
173.166 Air bag inflators, air bag modules and seat-belt pretensioners.
173.170 Black powder for small arms.
173.171 Smokeless powder for small arms.
173.172 Aircraft hydraulic power unit fuel tank.
173.173 Paint, paint-related material, adhesives and ink and resins.
173.174 Refrigerating machines.
173.181 Pyrophoric materials (liquids).
173.182 Barium azide - 50 percent or more water wet.
173.183 Nitrocellulose base film.
173.184 Highway or rail fusee.
173.185 Lithium batteries and cells.
173.186 Matches.
173.187 Pyrophoric solids, metals or alloys, n.o.s.
173.188 White or yellow phosphorous.
173.189 Batteries containing sodium or cells containing sodium.
173.192 Packaging for certain Packing Group I poisonous materials.
173.193 Bromoacetone, methyl bromide, chloropicrin and methyl bromide or methyl chloride mixtures, etc.
173.194 Gas identification sets.
173.195 Hydrogen cyanide, anhydrous, stabilized (hydrocyanic acid, aqueous solution).
173.196 Infectious substances (etiologic agents).
173.197 Regulated medical waste.
173.198 Nickel carbonyl.
173.201 Non-bulk packagings for liquid hazardous materials in Packing Group I.
173.202 Non-bulk packagings for liquid hazardous materials in Packing Group II.
173.203 Non-bulk packagings for liquid hazardous materials in Packing Group III.
173.204 Non-bulk, non-specification packagings for certain hazardous materials.
173.205 Specification cylinders for liquid hazardous materials.
173.211 Non-bulk packagings for solid hazardous materials in Packing Group I.
173.212 Non-bulk packagings for solid hazardous materials in Packing Group II.
173.213 Non-bulk packagings for solid hazardous materials in Packing Group III.
173.214 Packagings which require approval by the Associate Administrator for Hazardous Materials Safety.
173.216 Asbestos, blue, brown, or white.
173.217 Carbon dioxide, solid (dry ice).
173.218 Fish meal or fish scrap.
173.219 Life-saving appliances.
173.220 Internal combustion engines, self-propelled vehicles, and mechanical equipment containing internal combustion engines or wet batteries.
173.221 Polymeric beads, expandable.
173.222 Wheelchairs equipped with wet electric storage batteries.
173.224 Packaging and control and emergency temperatures for self-reactive materials.
173.225 Packaging requirements and other provisions for organic peroxides.
173.226 Materials poisonous by inhalation, Division 6.1, Packing Group I, Hazard Zone A.
173.227 Materials poisonous by inhalation, Division 6.1, Packing Group I, Hazard Zone B.
173.228 Bromine pentafluoride or bromine trifluoride.
173.229 Chloric acid solution or chlorine dioxide hydrate, frozen.

Subpart F - Bulk Packaging for Hazardous Materials Other Than Class 1 and Class 7

173.240 Bulk packaging for certain low hazard solid materials.
173.241 Bulk packagings for certain low hazard liquid and solid materials.
173.242 Bulk packagings for certain medium hazard liquids and solids, including solids with dual hazards.
173.243 Bulk packaging for certain high hazard liquids and dual hazard materials which pose a moderate hazard.
173.244 Bulk packaging for certain pyrophoric liquids (Division 4.2), dangerous when wet (Division 4.3) materials, and poisonous liquids with inhalation hazards (Division 6.1).
173.245 Bulk packaging for extremely hazardous materials such as poisonous gases (Division 2.3).
173.247 Bulk packaging for certain elevated temperature materials (Class 9) and certain flammable elevated temperature materials (Class 3).
173.249 Bromine.

Subpart G - Gases; Preparation and Packaging

173.300 [Reserved]
173.300a Approval of independent inspection agency.
173.300b Approval of non-domestic chemical analyses and tests.
173.300c Termination of approval.
173.301 General requirements for shipment of compressed gases in cylinders and spherical pressure vessels.
173.302 Charging of cylinders with non-liquefied compressed gases.
173.303 Charging of cylinders with compressed gas in solution (acetylene).
173.304 Charging of cylinders with liquefied compressed gas.
173.305 Charging of cylinders with a mixture of compressed gas and other material.
173.306 Limited quantities of compressed gases.
173.307 Exceptions for compressed gases.
173.308 Cigarette lighter or other similar device charged with fuel.
173.309 Fire extinguishers.
173.314 Compressed gases in tank cars and multi-unit tank cars.
173.315 Compressed gases in cargo tanks and portable tanks.
173.316 Cryogenic liquids in cylinders.
173.318 Cryogenic liquids in cargo tanks.
173.319 Cryogenic liquids in tank cars.
173.320 Cryogenic liquids; exceptions.
173.321 Ethylamine.
173.322 Ethyl chloride.
173.323 Ethylene oxide.
173.334 Organic phosphates mixed with compressed gas.
173.335 Gas generator assemblies.
173.336 Nitrogen dioxide, liquefied, or dinitrogen tetroxide, liquefied.
173.337 Nitric oxide.
173.338 Tungsten hexafluoride.
173.340 Tear gas devices.

Subpart H [Reserved]

Subpart I - Class 7 (Radioactive) Materials

173.401 Scope.
173.403 Definitions.
173.410 General design requirements.
173.411 Industrial packagings.
173.412 Additional design requirements for Type A packages.
173.413 Requirements for Type B packages.
173.415 Authorized Type A packages.
173.416 Authorized Type B packages.
173.417 Authorized fissile materials packages.
173.418 Authorized packages - pyrophoric Class 7 (radioactive) materials.
173.419 Authorized packages - oxidizing Class 7 (radioactive) materials.
173.420 Uranium hexafluoride (fissile, fissile excepted and non-fissile).
173.421 Excepted packages for limited quantities of Class 7 (radioactive) materials.
173.422 Additional requirements for excepted packages containing Class 7 (radioactive) materials.
173.423 Requirements for multiple hazard limited quantity Class 7 (radioactive) materials.
173.424 Excepted packages for radioactive instruments and articles.
173.425 Table of activity limits - excepted quantities and articles.
173.426 Excepted packages for articles containing natural uranium or thorium.
173.427 Transport requirements for low specific activity (LSA) Class 7 (radioactive) materials and surface contaminated objects (SCO).
173.428 Empty Class 7 (radioactive) materials packaging.
173.431 Activity limits for Type A and Type B packages.
173.433 Requirements for determining A1 and A2 values for radionuclides and for the listing of radionuclides on shipping papers and labels.
173.434 Activity-mass relationships for uranium and natural thorium.
173.435 Table of A1 and A2 values for radionuclides.
173.441 Radiation level limitations.
173.442 Thermal limitations.
173.443 Contamination control.
173.447 Storage incident to transportation - general requirements.
173.448 General transportation requirements.
173.453 Fissile materials - exceptions.
173.457 Transportation of fissile material, controlled shipments - specific requirements.
173.459 Mixing of fissile material packages.
173.461 Demonstration of compliance with tests.
173.462 Preparation of specimens for testing.
173.465 Type A packaging tests.
173.466 Additional tests for Type A packagings designed for liquids and gases.
173.467 Tests for demonstrating the ability of Type B and fissile materials packagings to withstand accident conditions in transportation.
173.468 Test for LSA-III material.
173.469 Tests for special form Class 7 (radioactive) materials.
173.471 Requirements for U.S. Nuclear Regulatory Commission approved packages.
173.472 Requirements for exporting DOT Specification Type B and fissile packages.
173.473 Requirements for foreign-made packages.
173.474 Quality control for construction of packaging.
173.475 Quality control requirements prior to each shipment of Class 7 (radioactive) materials.
173.476 Approval of special form Class 7 (radioactive) materials.

Subparts J-O [Reserved]

Appendix A to Part 173 [Reserved]
Appendix B to Part 173 - Procedure for Testing Chemical Compatibility and Rate of Permeation in Plastic Packaging and Receptacles
Appendix C to Part 173 - Procedure for Base-Level Vibration Testing
Appendix D to Part 173 - Test Methods for Dynamite (Explosive, Blasting, Type A)
Appendixes E-G to Part 173 [Reserved]
Appendix H to Part 173 - Method of Testing for Sustained Combustibility

AUTHORITY: 49 U.S.C. 5101-5127, 44701; 49 CFR 1.45, 1.53.

Subpart A - General

§ 173.1 Purpose and scope.

(a) This part includes:
(1) Definitions of hazardous materials for transportation purposes;
(2) Requirements to be observed in preparing hazardous materials for shipment by air, highway, rail, or water, or any combination thereof; and
(3) Inspection, testing, and retesting responsibilities for persons who retest, recondition, maintain, repair and rebuild containers used or intended for use in the transportation of hazardous materials.

(b) A shipment of hazardous materials that is not prepared in accordance with this subchapter may not be offered for transportation by air, highway, rail, or water. It is the responsibility of each hazmat employer subject to the requirements of this subchapter to ensure that each hazmat employee is trained in accordance with the requirements prescribed in this subchapter. It is the duty of each person who offers hazardous materials for transportation to instruct each of his officers, agents, and employees having any responsibility for preparing hazardous materials for shipment as to applicable regulations in this subchapter.
(c) When a person other than the person preparing a hazardous material for shipment performs a function required by this part, that person shall perform the function in accordance with this part.

(d) In general, the Hazardous Materials Regulations (HMR) contained in this subchapter are based on the Recommendations of the United Nations Committee of Experts on the Transport of Dangerous Goods and are consistent with international regulations issued by the International Civil Aviation Organization (ICAO Technical Instructions) and the International Maritime Organization (IMDG Code). However, the HMR are not consistent in all respects with the UN Recommendations, the ICAO Technical Instructions or the IMDG Code, and compliance with the HMR will not guarantee acceptance by regulatory bodies outside of the United States.

[Amdt. 173-94, 41 FR 16062, Apr. 15, 1976, as amended by Amdt. 173-100, 41 FR 40476, Sept. 20, 1976; Amdt. 173-161, 48 FR 2655, Jan. 20, 1983; Amdt. 173-224, 55 FR 52606, Dec. 21, 1990; Amdt. 173-231, 57 FR 20953, May 15, 1992]

§ 173.2 Hazardous materials classes and index to hazard class definitions.

The hazard class of a hazardous material is indicated either by its class (or division) number, its class name, or by the letters "ORM-D". The following table lists class numbers, division numbers, class or division names, and those sections of this subchapter which contain definitions for classifying hazardous materials, including forbidden materials.

Class No.  Division No. (if any)  Name of class or division                        49 CFR reference for definitions
None       ....                   Forbidden materials                              173.21
None       ....                   Forbidden explosives                             173.54
1          1.1                    Explosives (with a mass explosion hazard)        173.50
1          1.2                    Explosives (with a projection hazard)            173.50
1          1.3                    Explosives (with predominately a fire hazard)    173.50
1          1.4                    Explosives (with no significant blast hazard)    173.50
1          1.5                    Very insensitive explosives; blasting agents     173.50
1          1.6                    Extremely insensitive detonating substances      173.50
2          2.1                    Flammable gas                                    173.115
2          2.2                    Non-flammable compressed gas                     173.115
2          2.3                    Poisonous gas                                    173.115
3          ....                   Flammable and combustible liquid                 173.120
4          4.1                    Flammable solid                                  173.124
4          4.2                    Spontaneously combustible material               173.124
4          4.3                    Dangerous when wet material                      173.124
5          5.1                    Oxidizer                                         173.127
5          5.2                    Organic peroxide                                 173.128
6          6.1                    Poisonous materials                              173.132
6          6.2                    Infectious substance (Etiologic agent)           173.134
7          ....                   Radioactive material                             173.403
8          ....                   Corrosive material                               173.136
9          ....                   Miscellaneous hazardous material                 173.140
None       ....                   Other regulated material: ORM-D                  173.144

[Amdt. 173-224, 55 FR 52606, Dec. 21, 1990, as amended at 57 FR 45460, Oct. 1, 1992; Amdt. 173-234, 58 FR 51531, Oct. 1, 1993]

§ 173.2a Classification of a material having more than one hazard.

(a) Classification of a material having more than one hazard.
Except as provided in paragraph (c) of this section, a material not specifically listed in the § 172.101 table that meets the definition of more than one hazard class or division as defined in this part, shall be classed according to the highest applicable hazard class of the following hazard classes, which are listed in descending order of hazard:

(1) Class 7 (radioactive materials, other than limited quantities).
(2) Division 2.3 (poisonous gases).
(3) Division 2.1 (flammable gases).
(4) Division 2.2 (nonflammable gases).
(5) Division 6.1 (poisonous liquids), Packing Group I, poisonous-by-inhalation only.
(6) A material that meets the definition of a pyrophoric material in § 173.124(b)(1) of this subchapter (Division 4.2).
(7) A material that meets the definition of a self-reactive material in § 173.124(a)(2) of this subchapter (Division 4.1).
(8) Class 3 (flammable liquids), Class 8 (corrosive materials), Division 4.1 (flammable solids), Division 4.2 (spontaneously combustible materials), Division 4.3 (dangerous when wet materials), Division 5.1 (oxidizers) or Division 6.1 (poisonous liquids or solids other than Packing Group I, poisonous-by-inhalation). The hazard class and packing group for a material meeting more than one of these hazards shall be determined using the precedence table in paragraph (b) of this section.
(9) Combustible liquids.
(10) Class 9 (miscellaneous hazardous materials).

(b) Precedence of hazard table for Classes 3 and 8 and Divisions 4.1, 4.2, 4.3, 5.1 and 6.1. The following table ranks those materials that meet the definition of Classes 3 and 8 and Divisions 4.1, 4.2, 4.3, 5.1 and 6.1:

Precedence of Hazard Table [Hazard class and packing group]

Hazard & PG | 4.2 | 4.3 | 5.1 I(1) | 5.1 II(1) | 5.1 III(1) | 6.1, I dermal | 6.1, I oral | 6.1 II | 6.1 III | 8, I liquid | 8, I solid | 8, II liquid | 8, II solid | 8, III liquid | 8, III solid
3 I | – | – | – | – | – | 3 | 3 | 3 | 3 | 3 | (3) | 3 | (3) | 3 | (3)
3 II | – | – | – | – | – | 3 | 3 | 3 | 3 | 8 | (3) | 3 | (3) | 3 | (3)
3 III | – | – | – | – | – | 6.1 | 6.1 | 6.1 | 3(4) | 8 | (3) | 8 | (3) | 3 | (3)
4.1 II(2) | 4.2 | 4.3 | 5.1 | 4.1 | 4.1 | 6.1 | 6.1 | 4.1 | 4.1 | (3) | 8 | (3) | 4.1 | (3) | 4.1
4.1 III(2) | 4.2 | 4.3 | 5.1 | 4.1 | 4.1 | 6.1 | 6.1 | 6.1 | 4.1 | (3) | 8 | (3) | 8 | (3) | 4.1
4.2 II | – | 4.3 | 5.1 | 4.2 | 4.2 | 6.1 | 6.1 | 4.2 | 4.2 | 8 | 8 | 4.2 | 4.2 | 4.2 | 4.2
4.2 III | – | 4.3 | 5.1 | 5.1 | 4.2 | 6.1 | 6.1 | 6.1 | 4.2 | 8 | 8 | 8 | 8 | 4.2 | 4.2
4.3 I | – | – | 5.1 | 4.3 | 4.3 | 6.1 | 4.3 | 4.3 | 4.3 | 4.3 | 4.3 | 4.3 | 4.3 | 4.3 | 4.3
4.3 II | – | – | 5.1 | 4.3 | 4.3 | 6.1 | 4.3 | 4.3 | 4.3 | 8 | 8 | 8 | 4.3 | 4.3 | 4.3
4.3 III | – | – | 5.1 | 5.1 | 4.3 | 6.1 | 6.1 | 6.1 | 4.3 | 8 | 8 | 8 | 8 | 4.3 | 4.3
5.1 I(1) | – | – | – | – | – | 5.1 | 5.1 | 5.1 | 5.1 | 5.1 | 5.1 | 5.1 | 5.1 | 5.1 | 5.1
5.1 II(1) | – | – | – | – | – | 6.1 | 5.1 | 5.1 | 5.1 | 8 | 8 | 8 | 5.1 | 5.1 | 5.1
5.1 III(1) | – | – | – | – | – | 6.1 | 6.1 | 6.1 | 5.1 | 8 | 8 | 8 | 8 | 5.1 | 5.1
6.1 I, Dermal | – | – | – | – | – | – | – | – | – | 8 | 6.1 | 6.1 | 6.1 | 6.1 | 6.1
6.1 I, Oral | – | – | – | – | – | – | – | – | – | 8 | 6.1 | 6.1 | 6.1 | 6.1 | 6.1
6.1 II, Inhalation | – | – | – | – | – | – | – | – | – | 8 | 6.1 | 6.1 | 6.1 | 6.1 | 6.1
6.1 II, Dermal | – | – | – | – | – | – | – | – | – | 8 | 6.1 | 8 | 6.1 | 6.1 | 6.1
6.1 II, Oral | – | – | – | – | – | – | – | – | – | 8 | 8 | 8 | 6.1 | 6.1 | 6.1
6.1 III | – | – | – | – | – | – | – | – | – | 8 | 8 | 8 | 8 | 8 | 8

(1) There are at present no established criteria for determining Packing Groups for liquids in Division 5.1. For the time being, the degree of hazard is to be assessed by analogy with listed substances, allocating the substances to Packing Group I, great; II, medium; or III, minor danger.
(2) Substances of Division 4.1 other than self-reactive substances.
(3) Denotes an impossible combination.
(4) For pesticides only, where a material has the hazards of Class 3, Packing Group III, and Division 6.1, Packing Group III, the primary hazard is Division 6.1, Packing Group III.

Note 1: The most stringent packing group assigned to a hazard of the material takes precedence over other packing groups; for example, a material meeting Class 3 PG II and Division 6.1 PG I (oral toxicity) is classified as Class 3 PG I.
Note 2: A material which meets the definition of Class 8 and has an inhalation toxicity by dusts and mists which meets criteria for Packing Group I specified in § 173.133(a)(1) must be classed as Division 6.1 if the oral or dermal toxicity meets criteria for Packing Group I or II. If the oral or dermal toxicity meets criteria for Packing Group III or less, the material must be classed as Class 8.

(c) The following materials are not subject to the provisions of paragraph (a) of this section because of their unique properties:

(1) A Class 1 (explosive) material that meets any other hazard class or division as defined in this part shall be assigned a division in Class 1.
Class 1 materials shall be classed and approved in accordance with § 173.56 of this part;

(2) A Division 5.2 (organic peroxide) material that meets the definition of any other hazard class or division as defined in this part, shall be classed as Division 5.2;

(3) A Division 6.2 (infectious substance) material that also meets the definition of another hazard class or division, other than Class 7, or that also is a limited quantity Class 7 material, shall be classed as Division 6.2;

(4) A material that meets the definition of a wetted explosive in § 173.124(a)(1) of this subchapter (Division 4.1). Wetted explosives are either specifically listed in the § 172.101 table or are approved by the Associate Administrator for Hazardous Materials Safety (see § 173.124(a)(1) of this subchapter); and

(5) A limited quantity of a Class 7 (radioactive) material that meets the definition for more than one hazard class or division shall be classed in accordance with § 173.423.

[Amdt. 173–224, 55 FR 52606, Dec. 21, 1990, as amended at 56 FR 66264, Dec. 20, 1991; Amdt. 173–241, 59 FR 67490, Dec. 29, 1994; Amdt. 173–247, 60 FR 48787, Sept. 20, 1995; Amdt. 173–244, 60 FR 50307, Sept. 28, 1995]

§ 173.3 Packaging and exceptions.

(a) The packaging of hazardous materials for transportation by air, highway, rail, or water must be as specified in this part. Methods of manufacture, packing, and storage of hazardous materials, that affect safety in transportation, must be open to inspection by a duly authorized representative of the initial carrier or of the Department. Methods of manufacture and related functions necessary for completion of a DOT specification or U.N. standard packaging must be open to inspection by a representative of the Department.

(b) The regulations setting forth packaging requirements for a specific material apply to all modes of transportation unless otherwise stated, or unless exceptions from packaging requirements are authorized.

(c) Salvage drums. Packages of hazardous materials that are damaged, defective, or found leaking and hazardous materials that have spilled or leaked may be placed in a metal or plastic removable head salvage drum that is compatible with the lading and shipped for repackaging or disposal under the following conditions:

(1) Except as provided in paragraph (c)(7) of this section, the drum must be a UN 1A2, 1B2, 1N2 or 1H2 tested and marked for Packing Group III or higher performance standards for liquids or solids and a leakproofness test of 20 kPa.
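The descending-order rule of § 173.2a(a) is, computationally, a highest-priority-wins lookup over the applicable hazards. The sketch below is an informal illustration only, not regulatory guidance: the hazard labels and the `classify` helper are invented for the example, and the paragraph (b) precedence-table cases (item 8 of the list) are deliberately not modeled.

```python
# Hedged sketch of the 173.2a(a) rule: pick the highest-ranked hazard
# a material meets, scanning the descending-order list top to bottom.
# Labels are illustrative stand-ins, not official identifiers.

PRECEDENCE = [
    "7",                    # (1) Class 7 radioactive (non-limited quantity)
    "2.3",                  # (2) poisonous gas
    "2.1",                  # (3) flammable gas
    "2.2",                  # (4) nonflammable gas
    "6.1-PGI-inhalation",   # (5) poisonous liquid, PG I, inhalation only
    "4.2-pyrophoric",       # (6) pyrophoric material
    "4.1-self-reactive",    # (7) self-reactive material
    # (8) Classes 3/8 and Divisions 4.1-6.1 are resolved by the
    #     paragraph (b) precedence table, omitted from this sketch.
    "combustible liquid",   # (9)
    "9",                    # (10) miscellaneous hazardous material
]

def classify(hazards):
    """Return the highest-precedence listed hazard the material meets."""
    for hazard in PRECEDENCE:
        if hazard in hazards:
            return hazard
    raise ValueError("no listed hazard applies")

print(classify({"9", "2.1"}))  # -> 2.1: flammable gas outranks Class 9
```

The list order, not the order in which hazards were identified, decides the class; that is the whole content of paragraph (a).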
Common English Vocabulary for Automation Engineering (English–Chinese Glossary)

自动化专业英语常用词汇acceleration transducer 加速度传感器acceptance testing 验收测试accessibility 可及性accumulated error 累积误差AC-DC-AC frequency converter 交-直-交变频器AC (alternating current) electric drive 交流电子传动active attitude stabilization 主动姿态稳定actuator 驱动器,执行机构adaline 线性适应元adaptation layer 适应层adaptive telemeter system 适应遥测系统adjoint operator 伴随算子admissible error 容许误差aggregation matrix 集结矩阵AHP (analytic hierarchy process) 层次分析法amplifying element 放大环节analog-digital conversion 模数转换annunciator 信号器antenna pointing control 天线指向控制anti-integral windup 抗积分饱卷aperiodic decomposition 非周期分解a posteriori estimate 后验估计approximate reasoning 近似推理a priori estimate 先验估计articulated robot 关节型机器人assignment problem 配置问题,分配问题associative memory model 联想记忆模型associatron 联想机asymptotic stability 渐进稳定性attained pose drift 实际位姿漂移attitude acquisition 姿态捕获AOCS (attritude and orbit control system) 姿态轨道控制系统attitude angular velocity 姿态角速度attitude disturbance 姿态扰动attitude maneuver 姿态机动attractor 吸引子augment ability 可扩充性augmented system 增广系统automatic manual station 自动-手动操作器automaton 自动机backlash characteristics 间隙特性base coordinate system 基座坐标系Bayes classifier 贝叶斯分类器bearing alignment 方位对准bellows pressure gauge 波纹管压力表benefit-cost analysis 收益成本分析bilinear system 双线性系统biocybernetics 生物控制论biological feedback system 生物反馈系统black box testing approach 黑箱测试法blind search 盲目搜索block diagonalization 块对角化Boltzman machine 玻耳兹曼机bottom-up development 自下而上开发boundary value analysis 边界值分析brainstorming method 头脑风暴法breadth-first search 广度优先搜索butterfly valve 蝶阀CAE (computer aided engineering) 计算机辅助工程CAM (computer aided manufacturing) 计算机辅助制造Camflex valve 偏心旋转阀canonical state variable 规范化状态变量capacitive displacement transducer 电容式位移传感器capsule pressure gauge 膜盒压力表CARD 计算机辅助研究开发Cartesian robot 直角坐标型机器人cascade compensation 串联补偿catastrophe theory 突变论centrality 集中性chained aggregation 链式集结chaos 混沌characteristic locus 特征轨迹chemical propulsion 化学推进calrity 清晰性classical information pattern 经典信息模式classifier 分类器clinical control system 临床控制系统closed loop pole 闭环极点closed 
loop transfer function 闭环传递函数cluster analysis 聚类分析coarse-fine control 粗-精控制cobweb model 蛛网模型coefficient matrix 系数矩阵cognitive science 认知科学cognitron 认知机coherent system 单调关联系统combination decision 组合决策combinatorial explosion 组合爆炸combined pressure and vacuum gauge 压力真空表command pose 指令位姿companion matrix 相伴矩阵compartmental model 房室模型compatibility 相容性,兼容性compensating network 补偿网络compensation 补偿,矫正compliance 柔顺,顺应composite control 组合控制computable general equilibrium model 可计算一般均衡模型conditionally instability 条件不稳定性configuration 组态connectionism 连接机制connectivity 连接性conservative system 守恒系统consistency 一致性constraint condition 约束条件consumption function 消费函数context-free grammar 上下文无关语法continuous discrete event hybrid system simulation 连续离散事件混合系统仿真continuous duty 连续工作制control accuracy 控制精度control cabinet 控制柜controllability index 可控指数controllable canonical form 可控规范型【control】plant 控制对象,被控对象controlling instrument 控制仪表control moment gyro 控制力矩陀螺control panel 控制屏,控制盘control synchro 控制【式】自整角机control system synthesis 控制系统综合control time horizon 控制时程cooperative game 合作对策coordinability condition 可协调条件coordination strategy 协调策略coordinator 协调器corner frequency 转折频率costate variable 共态变量cost-effectiveness analysis 费用效益分析coupling of orbit and attitude 轨道和姿态耦合critical damping 临界阻尼critical stability 临界稳定性cross-over frequency 穿越频率,交越频率current source inverter 电流【源】型逆变器cut-off frequency 截止频率cybernetics 控制论cyclic remote control 循环遥控cylindrical robot 圆柱坐标型机器人damped oscillation 阻尼振荡damper 阻尼器damping ratio 阻尼比data acquisition 数据采集data encryption 数据加密data preprocessing 数据预处理data processor 数据处理器DC generator-motor set drive 直流发电机-电动机组传动D controller 微分控制器decentrality 分散性decentralized stochastic control 分散随机控制decision space 决策空间decision support system 决策支持系统decomposition-aggregation approach 分解集结法decoupling parameter 解耦参数deductive-inductive hybrid modeling method 演绎与归纳混合建模法delayed telemetry 延时遥测derivation tree 导出树derivative feedback 微分反馈describing function 描述函数desired value 希望值despinner 消旋体destination 目的站detector 
检出器deterministic automaton 确定性自动机deviation 偏差舱deviation alarm 偏差报警器DFD 数据流图diagnostic model 诊断模型diagonally dominant matrix 对角主导矩阵diaphragm pressure gauge 膜片压力表difference equation model 差分方程模型differential dynamical system 微分动力学系统differential game 微分对策differential pressure level meter 差压液位计differential pressure transmitter 差压变送器differential transformer displacement transducer 差动变压器式位移传感器differentiation element 微分环节digital filer 数字滤波器digital signal processing 数字信号处理digitization 数字化digitizer 数字化仪dimension transducer 尺度传感器direct coordination 直接协调disaggregation 解裂discoordination 失协调discrete event dynamic system 离散事件动态系统discrete system simulation language 离散系统仿真语言discriminant function 判别函数displacement vibration amplitude transducer 位移振幅传感器dissipative structure 耗散结构distributed parameter control system 分布参数控制系统distrubance 扰动disturbance compensation 扰动补偿diversity 多样性divisibility 可分性domain knowledge 领域知识dominant pole 主导极点dose-response model 剂量反应模型dual modulation telemetering system 双重调制遥测系统dual principle 对偶原理dual spin stabilization 双自旋稳定duty ratio 负载比dynamic braking 能耗制动dynamic characteristics 动态特性dynamic deviation 动态偏差dynamic error coefficient 动态误差系数dynamic exactness 动它吻合性dynamic input-output model 动态投入产出模型econometric model 计量经济模型economic cybernetics 经济控制论economic effectiveness 经济效益economic evaluation 经济评价economic index 经济指数economic indicator 经济指标eddy current thickness meter 电涡流厚度计effectiveness 有效性effectiveness theory 效益理论elasticity of demand 需求弹性electric actuator 电动执行机构electric conductance levelmeter 电导液位计electric drive control gear 电动传动控制设备electric hydraulic converter 电-液转换器electric pneumatic converter 电-气转换器electrohydraulic servo vale 电液伺服阀electromagnetic flow transducer 电磁流量传感器electronic batching scale 电子配料秤electronic belt conveyor scale 电子皮带秤electronic hopper scale 电子料斗秤elevation 仰角emergency stop 异常停止empirical distribution 经验分布endogenous variable 内生变量equilibrium growth 均衡增长equilibrium point 平衡点equivalence partitioning 等价类划分ergonomics 工效学error 误差error-correction parsing 
纠错剖析estimate 估计量estimation theory 估计理论evaluation technique 评价技术event chain 事件链evolutionary system 进化系统exogenous variable 外生变量expected characteristics 希望特性external disturbance 外扰fact base 事实failure diagnosis 故障诊断fast mode 快变模态feasibility study 可行性研究feasible coordination 可行协调feasible region 可行域feature detection 特征检测feature extraction 特征抽取feedback compensation 反馈补偿feedforward path 前馈通路field bus 现场总线finite automaton 有限自动机FIP (factory information protocol) 工厂信息协议first order predicate logic 一阶谓词逻辑fixed sequence manipulator 固定顺序机械手fixed set point control 定值控制FMS (flexible manufacturing system) 柔性制造系统flow sensor/transducer 流量传感器flow transmitter 流量变送器fluctuation 涨落forced oscillation 强迫振荡formal language theory 形式语言理论formal neuron 形式神经元forward path 正向通路forward reasoning 正向推理fractal 分形体,分维体frequency converter 变频器frequency domain model reduction method 频域模型降阶法frequency response 频域响应full order observer 全阶观测器functional decomposition 功能分解FES (functional electrical stimulation) 功能电刺激functional simularity 功能相似fuzzy logic 模糊逻辑game tree 对策树gate valve 闸阀general equilibrium theory 一般均衡理论generalized least squares estimation 广义最小二乘估计generation function 生成函数geomagnetic torque 地磁力矩geometric similarity 几何相似gimbaled wheel 框架轮global asymptotic stability 全局渐进稳定性global optimum 全局最优globe valve 球形阀徢goal coordination method 目标协调法grammatical inference 文法推断graphic search 图搜索gravity gradient torque 重力梯度力矩group technology 成组技术guidance system 制导系统gyro drift rate 陀螺漂移率gyrostat 陀螺体Hall displacement transducer 霍尔式位移传感器hardware-in-the-loop simulation 半实物仿真harmonious deviation 和谐偏差harmonious strategy 和谐策略heuristic inference 启发式推理hidden oscillation 隐蔽振荡hierarchical chart 层次结构图hierarchical planning 递阶规划hierarchical control 递阶控制homeostasis 内稳态homomorphic model 同态系统horizontal decomposition 横向分解hormonal control 内分泌控制hydraulic step motor 液压步进马达hypercycle theory 超循环理论I controller 积分控制器identifiability 可辨识性IDSS (intelligent decision support system) 智能决策支持系统image recognition 图像识别impulse 冲量impulse function 
冲击函数,脉冲函数inching 点动incompatibility principle 不相容原理incremental motion control 增量运动控制index of merit 品质因数inductive force transducer 电感式位移传感器inductive modeling method 归纳建模法industrial automation 工业自动化inertial attitude sensor 惯性姿态敏感器inertial coordinate system 惯性坐标系inertial wheel 惯性轮inference engine 推理机infinite dimensional system 无穷维系统information acquisition 信息采集infrared gas analyzer 红外线气体分析器inherent nonlinearity 固有非线性inherent regulation 固有调节initial deviation 初始偏差initiator 发起站injection attitude 入轨姿势input-output model 投入产出模型instability 不稳定性instruction level language 指令级语言integral of absolute value of error criterion 绝对误差积分准则integral of squared error criterion 平方误差积分准则integral performance criterion 积分性能准则integration instrument 积算仪器integrity 整体性intelligent terminal 智能终端interacted system 互联系统,关联系统interactive prediction approach 互联预估法,关联预估法interconnection 互联intermittent duty 断续工作制internal disturbance 内扰ISM (interpretive structure modeling) 解释结构建模法invariant embedding principle 不变嵌入原理inventory theory 库伦论inverse Nyquist diagram 逆奈奎斯特图inverter 逆变器investment decision 投资决策isomorphic model 同构模型iterative coordination 迭代协调jet propulsion 喷气推进job-lot control 分批控制joint 关节Kalman-Bucy filer 卡尔曼-布西滤波器knowledge accomodation 知识顺应knowledge acquisition 知识获取knowledge assimilation 知识同化KBMS (knowledge base management system) 知识库管理系统瓢knowledge representation 知识表达ladder diagram 梯形图lag-lead compensation 滞后超前补偿Lagrange duality 拉格朗日对偶性Laplace transform 拉普拉斯变换large scale system 大系统lateral inhibition network 侧抑制网络least cost input 最小成本投入least squares criterion 最小二乘准则level switch 物位开关libration damping 天平动阻尼limit cycle 极限环linearization technique 线性化方法linear motion electric drive 直线运动电气传动linear motion valve 直行程阀linear programming 线性规划LQR (linear quadratic regulator problem) 线性二次调节器问题load cell 称重传感器local asymptotic stability 局部渐近稳定性local optimum 局部最优log magnitude-phase diagram 对数幅相图long term memory 长期记忆lumped parameter model 集总参数模型Lyapunov theorem of asymptotic stability 李雅普诺夫渐近稳定性定理macro-economic system 
宏观经济系统magnetic dumping 磁卸载magnetoelastic weighing cell 磁致弹性称重传感器magnitude-frequency characteristic 幅频特性magnitude margin 幅值裕度magnitude scale factor 幅值比例尺manipulator 机械手man-machine coordination 人机协调manual station 手动操作器MAP (manufacturing automation protocol) 制造自动化协议marginal effectiveness 边际效益Mason‘‘s gain formula 梅森增益公式master station 主站matching criterion 匹配准则maximum likelihood estimation 最大似然估计maximum overshoot 最大超调量maximum principle 极大值原理mean-square error criterion 均方误差准则mechanism model 机理模型meta-knowledge 元知识metallurgical automation 冶金自动化minimal realization 最小实现minimum phase system 最小相位系统minimum variance estimation 最小方差估计minor loop 副回路missile-target relative movement simulator 弹体-目标相对运动仿真器modal aggregation 模态集结modal transformation 模态变换MB (model base) 模型库model confidence 模型置信度model fidelity 模型逼真度model reference adaptive control system 模型参考适应控制系统model verification 模型验证modularization 模块化MEC (most economic control) 最经济控制motion space 可动空间MTBF (mean time between failures) 平均故障间隔时间MTTF (mean time to failures) 平均无故障时间multi-attributive utility function 多属性效用函数multicriteria 多重判据multilevel hierarchical structure 多级递阶结构multiloop control 多回路控制multi-objective decision 多目标决策multistate logic 多态逻辑multistratum hierarchical control 多段递阶控制multivariable control system 多变量控制系统myoelectric control 肌电控制Nash optimality 纳什最优性natural language generation 自然语言生成nearest-neighbor 最近邻necessity measure 必然性侧度negative feedback 负反馈neural assembly 神经集合neural network computer 神经网络计算机Nichols chart 尼科尔斯图noetic science 思维科学noncoherent system 非单调关联系统noncooperative game 非合作博弈nonequilibrium state 非平衡态nonlinear element 非线性环节nonmonotonic logic 非单调逻辑nonparametric training 非参数训练nonreversible electric drive 不可逆电气传动nonsingular perturbation 非奇异摄动non-stationary random process 非平稳随机过程nuclear radiation levelmeter 核辐射物位计nutation sensor 章动敏感器Nyquist stability criterion 奈奎斯特稳定判据objective function 目标函数observability index 可观测指数observable canonical form 可观测规范型on-line assistance 在线帮助on-off control 通断控制open loop pole 
开环极点operational research model 运筹学模型optic fiber tachometer 光纤式转速表optimal trajectory 最优轨迹optimization technique 最优化技术orbital rendezvous 轨道交会orbit gyrocompass 轨道陀螺罗盘orbit perturbation 轨道摄动order parameter 序参数orientation control 定向控制originator 始发站oscillating period 振荡周期output prediction method 输出预估法oval wheel flowmeter 椭圆齿轮流量计overall design 总体设计overdamping 过阻尼overlapping decomposition 交叠分解Pade approximation 帕德近似Pareto optimality 帕雷托最优性passive attitude stabilization 被动姿态稳定path repeatability 路径可重复性pattern primitive 模式基元PR (pattern recognition) 模式识别P control 比例控制器peak time 峰值时间penalty function method 罚函数法perceptron 感知器periodic duty 周期工作制perturbation theory 摄动理论pessimistic value 悲观值phase locus 相轨迹phase trajectory 相轨迹phase lead 相位超前photoelectric tachometric transducer 光电式转速传感器phrase-structure grammar 短句结构文法physical symbol system 物理符号系统piezoelectric force transducer 压电式力传感器playback robot 示教再现式机器人PLC (programmable logic controller) 可编程序逻辑控制器plug braking 反接制动plug valve 旋塞阀pneumatic actuator 气动执行机构point-to-point control 点位控制polar robot 极坐标型机器人pole assignment 极点配置pole-zero cancellation 零极点相消polynomial input 多项式输入portfolio theory 投资搭配理论pose overshoot 位姿过调量position measuring instrument 位置测量仪posentiometric displacement transducer 电位器式位移传感器positive feedback 正反馈power system automation 电力系统自动化predicate logic 谓词逻辑pressure gauge with electric contact 电接点压力表pressure transmitter 压力变送器price coordination 价格协调primal coordination 主协调primary frequency zone 主频区PCA (principal component analysis) 主成分分析法principle of turnpike 大道原理priority 优先级process-oriented simulation 面向过程的仿真production budget 生产预算production rule 产生式规则profit forecast 利润预测PERT (program evaluation and review technique) 计划评审技术program set station 程序设定操作器proportional control 比例控制proportional plus derivative controller 比例微分控制器protocol engineering 协议工程prototype 原型pseudo random sequence 伪随机序列pseudo-rate-increment control 伪速率增量控制pulse duration 脉冲持续时间pulse frequency modulation control system 脉冲调频控制系统pulse width modulation control system 
脉冲调宽控制系统PWM inverter 脉宽调制逆变器pushdown automaton 下推自动机QC (quality control) 质量管理quadratic performance index 二次型性能指标qualitative physical model 定性物理模型quantized noise 量化噪声quasilinear characteristics 准线性特性queuing theory 排队论radio frequency sensor 射频敏感器ramp function 斜坡函数random disturbance 随机扰动random process 随机过程rate integrating gyro 速率积分陀螺ratio station 比值操作器reachability 可达性reaction wheel control 反作用轮控制realizability 可实现性,能实现性real time telemetry 实时遥测receptive field 感受野rectangular robot 直角坐标型机器人rectifier 整流器recursive estimation 递推估计reduced order observer 降阶观测器redundant information 冗余信息reentry control 再入控制regenerative braking 回馈制动,再生制动regional planning model 区域规划模型regulating device 调节装载regulation 调节relational algebra 关系代数relay characteristic 继电器特性remote manipulator 遥控操作器remote regulating 遥调remote set point adjuster 远程设定点调整器rendezvous and docking 交会和对接reproducibility 再现性resistance thermometer sensor 热电阻resolution principle 归结原理resource allocation 资源分配response curve 响应曲线return difference matrix 回差矩阵return ratio matrix 回比矩阵reverberation 回响reversible electric drive 可逆电气传动revolute robot 关节型机器人revolution speed transducer 转速传感器rewriting rule 重写规则rigid spacecraft dynamics 刚性航天动力学risk decision 风险分析robotics 机器人学robot programming language 机器人编程语言robust control 鲁棒控制robustness 鲁棒性roll gap measuring instrument 辊缝测量仪root locus 根轨迹roots flowmeter 腰轮流量计rotameter 浮子流量计,转子流量计rotary eccentric plug valve 偏心旋转阀rotary motion valve 角行程阀rotating transformer 旋转变压器Routh approximation method 劳思近似判据routing problem 路径问题sampled-data control system 采样控制系统sampling control system 采样控制系统saturation characteristics 饱和特性scalar Lyapunov function 标量李雅普诺夫函数SCARA (selective compliance assembly robot arm) 平面关节型机器人scenario analysis method 情景分析法scene analysis 物景分析s-domain s域self-operated controller 自力式控制器self-organizing system 自组织系统self-reproducing system 自繁殖系统self-tuning control 自校正控制semantic network 语义网络semi-physical simulation 半实物仿真sensing element 敏感元件sensitivity analysis 灵敏度分析sensory control 感觉控制sequential 
decomposition 顺序分解sequential least squares estimation 序贯最小二乘估计servo control 伺服控制,随动控制servomotor 伺服马达settling time 过渡时间sextant 六分仪short term planning 短期计划short time horizon coordination 短时程协调signal detection and estimation 信号检测和估计signal reconstruction 信号重构similarity 相似性simulated interrupt 仿真中断simulation block diagram 仿真框图simulation experiment 仿真实验simulation velocity 仿真速度simulator 仿真器single axle table 单轴转台single degree of freedom gyro 单自由度陀螺single level process 单级过程single value nonlinearity 单值非线性singular attractor 奇异吸引子singular perturbation 奇异摄动sink 汇点slaved system 受役系统slower-than-real-time simulation 欠实时仿真slow subsystem 慢变子系统socio-cybernetics 社会控制论socioeconomic system 社会经济系统software psychology 软件心理学solar array pointing control 太阳帆板指向控制solenoid valve 电磁阀source 源点specific impulse 比冲speed control system 调速系统spin axis 自旋轴spinner 自旋体stability criterion 稳定性判据stability limit 稳定极限stabilization 镇定,稳定Stackelberg decision theory 施塔克尔贝格决策理论state equation model 状态方程模型state space description 状态空间描述static characteristics curve 静态特性曲线station accuracy 定点精度stationary random process 平稳随机过程statistical analysis 统计分析statistic pattern recognition 统计模式识别steady state deviation 稳态偏差steady state error coefficient 稳态误差系数step-by-step control 步进控制step function 阶跃函数stepwise refinement 逐步精化stochastic finite automaton 随机有限自动机strain gauge load cell 应变式称重传感器strategic function 策略函数strongly coupled system 强耦合系统subjective probability 主观频率suboptimality 次优性supervised training 监督学习supervisory computer control system 计算机监控系统sustained oscillation 自持振荡swirlmeter 旋进流量计switching point 切换点symbolic processing 符号处理synaptic plasticity 突触可塑性synergetics 协同学syntactic analysis 句法分析system assessment 系统评价systematology 系统学system homomorphism 系统同态system isomorphism 系统同构system engineering 系统工程tachometer 转速表target flow transmitter 靶式流量变送器task cycle 作业周期teaching programming 示教编程telemechanics 远动学telemetering system of frequency division type 频分遥测系统telemetry 遥测teleological system 目的系统teleology 目的论temperature transducer 
温度传感器template base 模版库tensiometer 张力计texture 纹理theorem proving 定理证明therapy model 治疗模型thermocouple 热电偶thermometer 温度计thickness meter 厚度计three-axis attitude stabilization 三轴姿态稳定three state controller 三位控制器thrust vector control system 推力矢量控制系统thruster 推力器time constant 时间常数time-invariant system 定常系统,非时变系统time schedule controller 时序控制器time-sharing control 分时控制time-varying parameter 时变参数top-down testing 自上而下测试topological structure 拓扑结构TQC (total quality control) 全面质量管理tracking error 跟踪误差trade-off analysis 权衡分析transfer function matrix 传递函数矩阵transformation grammar 转换文法transient deviation 瞬态偏差transient process 过渡过程transition diagram 转移图transmissible pressure gauge 电远传压力表transmitter 变送器trend analysis 趋势分析triple modulation telemetering system 三重调制遥测系统turbine flowmeter 涡轮流量计Turing machine 图灵机two-time scale system 双时标系统ultrasonic levelmeter 超声物位计unadjustable speed electric drive 非调速电气传动unbiased estimation 无偏估计underdamping 欠阻尼uniformly asymptotic stability 一致渐近稳定性uninterrupted duty 不间断工作制,长期工作制unit circle 单位圆unit testing 单元测试unsupervised learing 非监督学习upper level problem 上级问题urban planning 城市规划utility function 效用函数value engineering 价值工程variable gain 可变增益,可变放大系数variable structure control system 变结构控制vector Lyapunov function 向量李雅普诺夫函数velocity error coefficient 速度误差系数velocity transducer 速度传感器vertical decomposition 纵向分解vibrating wire force transducer 振弦式力传感器vibrometer 振动计viscous damping 粘性阻尼voltage source inverter 电压源型逆变器vortex precession flowmeter 旋进流量计vortex shedding flowmeter 涡街流量计WB (way base) 方法库weighing cell 称重传感器weighting factor 权因子weighting method 加权法Whittaker-Shannon sampling theorem 惠特克-香农采样定理Wiener filtering 维纳滤波work station for computer aided design 计算机辅助设计工作站w-plane w平面zero-based budget 零基预算zero-input response 零输入响应zero-state response 零状态响应zero sum game model 零和对策模型z-transform z变换。
Adaptive Compensation for Unknown Actuator Failures in Strict-Feedback Systems (IJMECS-V3-N5-2)

Index Terms—actuator failure, backstepping, nonlinear system, adaptive control, uncertain system

I. INTRODUCTION

Actuator failures seem inevitable in practice, especially in complex systems. An unknown failure may cause instability and catastrophic accidents during the operation of control systems. Such failures are often uncertain in time, value and pattern; that is, it is not known when actuators fail, by how much, and how many fail, which makes the actuator failure compensation problem difficult. To address it, several design methods have been proposed, such as multiple-model designs with switching and tuning techniques, fault detection and diagnosis-based designs, robust control designs, neural network techniques, and sliding-mode methods. It is well known that adaptive control systems can obtain the desired performance by adjusting controller parameters using the system response errors during operation. Compared with other methods, adaptive compensation therefore avoids the false alarms and delays caused by failure detection.

We consider a class of nonlinear systems with uncertain parameters and m inputs. The system model is given as

Copyright © 2011 MECS
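The adaptive principle invoked above (adjusting controller parameters online from the system response error) can be illustrated on a scalar plant with one unknown parameter. This is a minimal sketch under assumed dynamics, not the strict-feedback backstepping design of this paper; the plant, the gains, and the `simulate` helper are all illustrative.

```python
# Minimal adaptive-regulation sketch (illustrative, not the paper's design):
# scalar plant dx/dt = a*x + u with unknown constant a.
# Certainty-equivalence control u = -(a_hat + k)*x with the Lyapunov-based
# update da_hat/dt = gamma*x^2 drives x to zero without knowing a.

def simulate(a_true=2.0, k=1.0, gamma=5.0, x0=1.0, dt=1e-3, steps=10_000):
    x, a_hat = x0, 0.0
    for _ in range(steps):
        u = -(a_hat + k) * x          # control using the current estimate
        x += dt * (a_true * x + u)    # forward-Euler step of the true plant
        a_hat += dt * gamma * x * x   # adaptation driven by the squared state
    return x, a_hat

x_final, a_hat_final = simulate()
print(abs(x_final))  # regulated close to zero despite the unknown parameter
```

With V = x^2/2 + (a_hat - a)^2/(2*gamma), this update gives dV/dt = -k*x^2 <= 0, which is the standard argument behind the claim that adaptive controllers reach the desired performance by parameter adjustment; failure-compensation schemes extend the same idea by additionally estimating the unknown failure values.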
Automation and Control Engineering: Translated Foreign Literature (English Source Text)

Team-Centered Perspective for Adaptive Automation Design
Lawrence J. Prinzel
Langley Research Center, Hampton, Virginia

Abstract

Automation represents a very active area of human factors research. The journal Human Factors published a special issue on automation in 1985. Since then, hundreds of scientific studies have been published examining the nature of automation and its interaction with human performance. However, despite a dramatic increase in research investigating human factors issues in aviation automation, there remain areas that need further exploration. This NASA Technical Memorandum describes a new area of automation design and research, called "adaptive automation." It discusses the concepts and outlines the human factors issues associated with the new method of adaptive function allocation. The primary focus is on human-centered design, and specifically on ensuring that adaptive automation is approached from a team-centered perspective. The document shows that adaptive automation has many human factors issues common to traditional automation design. Much like the introduction of other new technologies and paradigm shifts, adaptive automation presents an opportunity to remediate current problems but poses new ones for human-automation interaction in aerospace operations. The review here is intended to communicate the philosophical perspective and direction of adaptive automation research conducted under the Aerospace Operations Systems (AOS), Physiological and Psychological Stressors and Factors (PPSF) project.

Key words: Adaptive Automation; Human-Centered Design; Automation; Human Factors

Introduction

"During the 1970s and early 1980s...the concept of automating as much as possible was considered appropriate.
The expected benefit was a reduction in pilot workload and increased safety...Although many of these benefits have been realized, serious questions have arisen and incidents/accidents have occurred which question the underlying assumptions that maximum available automation is ALWAYS appropriate or that we understand how to design automated systems so that they are fully compatible with the capabilities and limitations of the humans in the system."
---- ATA, 1989

The Air Transport Association of America (ATA) Flight Systems Integration Committee (1989) made the above statement in response to the proliferation of automation in aviation. They noted that technology improvements, such as the ground proximity warning system, have had dramatic benefits; others, such as the electronic library system, offer marginal benefits at best. Such observations have led many in the human factors community, most notably Charles Billings (1991; 1997) of NASA, to assert that automation should be approached from a "human-centered design" perspective.

The period from 1970 to the present was marked by an increase in the use of electronic display units (EDUs); a period that Billings (1997) calls "information" and "management automation." The increased use of altitude, heading, power, and navigation displays; alerting and warning systems, such as the traffic alert and collision avoidance system (TCAS) and ground proximity warning system (GPWS; E-GPWS; TAWS); and flight management systems (FMS) and flight guidance (e.g., autopilots; autothrottles) have "been accompanied by certain costs, including an increased cognitive burden on pilots, new information requirements that have required additional training, and more complex, tightly coupled, less observable systems" (Billings, 1997). As a result, human factors research in aviation has focused on the effects of information and management automation.
The issues of interest include over-reliance on automation, "clumsy" automation (e.g., Wiener, 1989), digital versus analog control, skill degradation, crew coordination, and data overload (e.g., Billings, 1997). Furthermore, research has also been directed toward situational awareness (mode & state awareness; Endsley, 1994; Woods & Sarter, 1991) associated with complexity, coupling, autonomy, and inadequate feedback. Finally, human factors research has introduced new automation concepts that will need to be integrated into the existing suite of aviation automation.

Clearly, the human factors issues of automation have significant implications for safety in aviation. However, what exactly do we mean by automation? The way we choose to define automation has considerable meaning for how we see the human role in modern aerospace systems. The next section considers the concept of automation, followed by an examination of human factors issues of human-automation interaction in aviation. Next, a potential remedy to the problems raised is described, called adaptive automation. Finally, the human-centered design philosophy is discussed and proposals are made for how the philosophy can be applied to this advanced form of automation. The perspective is considered in terms of the Physiological/Psychological Stressors & Factors project and directions for research on adaptive automation.

Automation in Modern Aviation

Definition. Automation refers to "...systems or methods in which many of the processes of production are automatically performed or controlled by autonomous machines or electronic devices" (Parsons, 1985). Automation is a tool, or resource, that the human operator can use to perform some task that would be difficult or impossible without machine aiding (Billings, 1997).
Therefore, automation can be thought of as a process of substituting the activity of some device or machine for some human activity; or it can be thought of as a state of technological development (Parsons, 1985). However, some people (e.g., Woods, 1996) have questioned whether automation should be viewed as a substitution of one agent for another (see "apparent simplicity, real complexity" below). Nevertheless, the presence of automation has pervaded almost every aspect of modern lives. From the wheel to the modern jet aircraft, humans have sought to improve the quality of life. We have built machines and systems that not only make work easier, more efficient, and safe, but also give us more leisure time. The advent of automation has further enabled us to achieve this end. With automation, machines can now perform many of the activities that we once had to do. Our automobile transmission will shift gears for us. Our airplanes will fly themselves for us. All we have to do is turn the machine on and off. It has even been suggested that one day there may not be a need for us to do even that. However, the increase in "cognitive" accidents resulting from faulty human-automation interaction has led many in the human factors community to conclude that such a statement may be premature.

Automation Accidents. A number of aviation accidents and incidents have been directly attributed to automation. Examples of such aviation mishaps include (from Billings, 1997):

DC-10 landing in control wheel steering
A330 accident at Toulouse
B-747 upset over Pacific
DC-10 overrun at JFK, New York
B-747 uncommanded roll, Nakina, Ont.
A320 accident at Mulhouse-Habsheim
A320 accident at Strasbourg
A300 accident at Nagoya
B-757 accident at Cali, Colombia
A320 accident at Bangalore
A320 landing at Hong Kong
B-737 wet runway overruns
A320 overrun at Warsaw
B-757 climbout at Manchester
A310 approach at Orly
DC-9 wind shear at Charlotte

Billings (1997) notes that each of these accidents has a different etiology, and that human factors investigation of causes shows the matter to be complex. However, what is clear is that the percentage of accident causes has fundamentally shifted from machine-caused to human-caused (estimations of 60-80% due to human error) etiologies, and the shift is attributable to the change in types of automation that have evolved in aviation.

Types of Automation

There are a number of different types of automation and the descriptions of them vary considerably. Billings (1997) offers the following types of automation:

• Open-Loop Mechanical or Electronic Control. Automation is controlled by gravity or spring motors driving gears and cams that allow continuous and repetitive motion. Positioning, forcing, and timing were dictated by the mechanism and environmental factors (e.g., wind). The automation of factories during the Industrial Revolution would represent this type of automation.

• Classic Linear Feedback Control. Automation is controlled as a function of differences between a reference setting of desired output and the actual output. Changes are made to system parameters to re-set the automation to conformance. An example of this type of automation would be the flyball governor on the steam engine. What engineers call conventional proportional-integral-derivative (PID) control would also fit in this category of automation.

• Optimal Control. A computer-based model of controlled processes is driven by the same control inputs as those used to control the automated process. The model output is used to project future states and is thus used to determine the next control input.
A "Kalman filtering" approach is used to estimate the system state to determine what the best control input should be.

• Adaptive Control. This type of automation actually represents a number of approaches to controlling automation, but usually stands for automation that changes dynamically in response to a change in state. Examples include the use of "crisp" and "fuzzy" controllers, neural networks, dynamic control, and many other nonlinear methods.

Levels of Automation

In addition to "types" of automation, we can also conceptualize different "levels" of automation control that the operator can have. A number of taxonomies have been put forth, but perhaps the best known is the one proposed by Tom Sheridan of the Massachusetts Institute of Technology (MIT). Sheridan (1987) listed 10 levels of automation control:

1. The computer offers no assistance; the human must do it all
2. The computer offers a complete set of action alternatives
3. The computer narrows the selection down to a few
4. The computer suggests a selection, and
5. Executes that suggestion if the human approves, or
6. Allows the human a restricted time to veto before automatic execution, or
7. Executes automatically, then necessarily informs the human, or
8. Informs the human after execution only if he asks, or
9. Informs the human after execution if it, the computer, decides to
10. The computer decides everything and acts autonomously, ignoring the human

The list covers the automation gamut from fully manual to fully automatic. Although different researchers define adaptive automation differently across these levels, the consensus is that adaptive automation can represent anything from Level 3 to Level 9. However, what makes adaptive automation different is the philosophy of the approach taken to initiate adaptive function allocation and how such an approach may address the impact of current automation technology.

Impact of Automation Technology

Advantages of Automation.
Wiener (1980; 1989) noted a number of advantages to automating human-machine systems. These include increased capacity and productivity, reduction of small errors, reduction of manual workload and mental fatigue, relief from routine operations, more precise handling of routine operations, economical use of machines, and decrease of performance variation due to individual differences. Wiener and Curry (1980) listed eight reasons for the increase in flight-deck automation: (a) increase in available technology, such as FMS, Ground Proximity Warning System (GPWS), Traffic Alert and Collision Avoidance System (TCAS), etc.; (b) concern for safety; (c) economy, maintenance, and reliability; (d) workload reduction and two-pilot transport aircraft certification; (e) flight maneuvers and navigation precision; (f) display flexibility; (g) economy of cockpit space; and (h) special requirements for military missions.

Disadvantages of Automation. Automation also has a number of disadvantages that have been noted. Automation increases the burdens and complexities for those responsible for operating, troubleshooting, and managing systems. Woods (1996) stated that automation is "...a wrapped package -- a package that consists of many different dimensions bundled together as a hardware/software system. When new automated systems are introduced into a field of practice, change is precipitated along multiple dimensions."
As Woods (1996) noted, some of these changes include: (a) adds to or changes the task, such as device setup and initialization, configuration control, and operating sequences; (b) changes cognitive demands, such as requirements for increased situational awareness; (c) changes the roles of people in the system, often relegating people to supervisory controllers; (d) automation increases coupling and integration among parts of a system, often resulting in data overload and "transparency"; and (e) the adverse impacts of automation are often not appreciated by those who advocate the technology. These changes can result in lower job satisfaction (automation seen as dehumanizing human roles), lowered vigilance, fault-intolerant systems, silent failures, an increase in cognitive workload, automation-induced failures, over-reliance, complacency, decreased trust, manual skill erosion, false alarms, and a decrease in mode awareness (Wiener, 1989).

Adaptive Automation

Disadvantages of automation have resulted in increased interest in advanced automation concepts. One of these concepts is automation that is dynamic or adaptive in nature (Hancock & Chignell, 1987; Morrison, Gluckman, & Deaton, 1991; Rouse, 1977; 1988). In an aviation context, adaptive automation control of tasks can be passed back and forth between the pilot and automated systems in response to the changing task demands of modern aircraft. Consequently, this allows for the restructuring of the task environment based upon (a) what is automated, (b) when it should be automated, and (c) how it is automated (Rouse, 1988; Scerbo, 1996). Rouse (1988) described criteria for adaptive aiding systems:

The level of aiding, as well as the ways in which human and aid interact, should change as task demands vary. More specifically, the level of aiding should increase as task demands become such that human performance will unacceptably degrade without aiding.
Further, the ways in which human and aid interact should become increasingly streamlined as task demands increase. Finally, it is quite likely that variations in level of aiding and modes of interaction will have to be initiated by the aid rather than by the human whose excess task demands have created a situation requiring aiding. The term adaptive aiding is used to denote aiding concepts that meet [these] requirements.

Adaptive aiding attempts to optimize the allocation of tasks by creating a mechanism for determining when tasks need to be automated (Morrison, Cohen, & Gluckman, 1993). In adaptive automation, the level or mode of automation can be modified in real time. Further, unlike traditional forms of automation, both the system and the pilot share control over changes in the state of automation (Scerbo, 1994; 1996). Parasuraman, Bahri, Deaton, Morrison, and Barnes (1992) have argued that adaptive automation represents the optimal coupling of the level of pilot workload to the level of automation in the tasks. Thus, adaptive automation invokes automation only when task demands exceed the pilot's capabilities. Otherwise, the pilot retains manual control of the system functions. Although concerns have been raised about the dangers of adaptive automation (Billings & Woods, 1994; Wiener, 1989), it promises to regulate workload, bolster situational awareness, enhance vigilance, maintain manual skill levels, increase task involvement, and generally improve pilot performance.

Strategies for Invoking Automation

Perhaps the most critical challenge facing system designers seeking to implement automation concerns how changes among modes or levels of automation will be accomplished (Parasuraman et al., 1992; Scerbo, 1996). Traditional forms of automation usually start with some task or functional analysis and attempt to fit the operational tasks necessary to the abilities of the human or the system.
The approach often takes the form of a functional allocation analysis (e.g., Fitts' List) in which an attempt is made to determine whether the human or the system is better suited to do each task. However, many in the field have pointed out the problem with trying to equate the two in automated systems, as each has special characteristics that impede simple classification taxonomies. Such ideas as these have led some to suggest other ways of determining human-automation mixes. Although certainly not exhaustive, some of these ideas are presented below.

Dynamic Workload Assessment. One approach involves the dynamic assessment of measures that index the operators' state of mental engagement (Parasuraman et al., 1992; Rouse, 1988). The question, however, is what the "trigger" should be for the allocation of functions between the pilot and the automation system. Numerous researchers have suggested that adaptive systems respond to variations in operator workload (Hancock & Chignell, 1987; 1988; Hancock, Chignell & Lowenthal, 1985; Humphrey & Kramer, 1994; Reising, 1985; Riley, 1985; Rouse, 1977), and that measures of workload be used to initiate changes in automation modes. Such measures include primary and secondary-task measures, subjective workload measures, and physiological measures. The question, however, is what adaptive mechanism should be used to determine operator mental workload (Scerbo, 1996).

Performance Measures. One criterion would be to monitor the performance of the operator (Hancock & Chignell, 1987). Some criteria for performance would be specified in the system parameters, and to the degree that the operator deviates from the criteria (i.e., errors), the system would invoke levels of adaptive automation. For example, Kaber, Prinzel, Clammann, & Wright (2002) used secondary-task measures to invoke adaptive automation to help with information processing of air traffic controllers.
As Scerbo (1996) noted, however, "...such an approach would be of limited utility because the system would be entirely reactive."

Psychophysiological Measures. Another criterion would be the cognitive and attentional state of the operator as measured by psychophysiological measures (Byrne & Parasuraman, 1996). An example of such an approach is that by Pope, Bogart, and Bartolome (1996) and Prinzel, Freeman, Scerbo, Mikulka, and Pope (2000), who used a closed-loop system to dynamically regulate the level of "engagement" that the subject had with a tracking task. The system indexes engagement on the basis of EEG brainwave patterns.

Human Performance Modeling. Another approach would be to model the performance of the operator. The approach would allow the system to develop a number of standards for operator performance that are derived from models of the operator. An example is Card, Moran, and Newell's (1987) discussion of a "model human processor." They discussed aspects of the human processor that could be used to model various levels of human performance. Another example is Geddes (1985) and his colleagues (Rouse, Geddes, & Curry, 1987-1988), who provided a model to invoke automation based upon system information, the environment, and expected operator behaviors (Scerbo, 1996).

Mission Analysis. A final strategy would be to monitor the activities of the mission or task (Morrison & Gluckman, 1994). Although this method of adaptive automation may be the most accessible at the current state of technology, Bahri et al. (1992) stated that such monitoring systems lack sophistication and are not well integrated and coupled to monitor operator workload or performance (Scerbo, 1996). An example of a mission analysis approach to adaptive automation is Barnes and Grossman (1985), who developed a system that uses critical events to allocate among automation modes.
In this system, the detection of critical events, such as emergency situations or high workload periods, invoked automation.

Adaptive Automation Human Factors Issues

A number of issues, however, have been raised by the use of adaptive automation, and many of these issues are the same as those raised almost 20 years ago by Curry and Wiener (1980). Therefore, these issues are applicable not only to advanced automation concepts, such as adaptive automation, but to traditional forms of automation already in place in complex systems (e.g., airplanes, trains, process control). Although certainly one can make the case that adaptive automation is "dressed up" automation and therefore has many of the same problems, it is also important to note that the trend towards such forms of automation does have unique issues that accompany it. As Billings & Woods (1994) stated, "[i]n high-risk, dynamic environments...technology-centered automation has tended to decrease human involvement in system tasks, and has thus impaired human situation awareness; both are unwanted consequences of today's system designs, but both are dangerous in high-risk systems. [At its present state of development,] adaptive ("self-adapting") automation represents a potentially serious threat ... to the authority that the human pilot must have to fulfill his or her responsibility for flight safety."

The Need for Human Factors Research. Nevertheless, such concerns should not preclude us from researching the impact that such forms of advanced automation are sure to have on human performance. Consider Hancock's (1996; 1997) examination of the "teleology for technology." He suggests that automation shall continue to impact our lives, requiring humans to co-evolve with the technology; Hancock called this "techneology." What Peter Hancock attempts to communicate to the human factors community is that automation will continue to evolve whether or not human factors chooses to be part of it.
As Wiener and Curry (1980) conclude: "The rapid pace of automation is outstripping one's ability to comprehend all the implications for crew performance. It is unrealistic to call for a halt to cockpit automation until the manifestations are completely understood. We do, however, call for those designing, analyzing, and installing automatic systems in the cockpit to do so carefully; to recognize the behavioral effects of automation; to avail themselves of present and future guidelines; and to be watchful for symptoms that might appear in training and operational settings." The concerns they raised are as valid today as they were 23 years ago. However, this should not be taken to mean that we should capitulate. Instead, because Wiener and Curry's observation suggests that it may be impossible to fully research any new technology before implementation, we need to form a taxonomy and research plan to maximize human factors input for concurrent engineering of adaptive automation.

Classification of Human Factors Issues. Kantowitz and Campbell (1996) identified some of the key human factors issues to be considered in the design of advanced automated systems. These include allocation of function, stimulus-response compatibility, and mental models. Scerbo (1996) further suggested the need for research on teams, communication, and training and practice in adaptive automated systems design. The impact of adaptive automation systems on monitoring behavior, situational awareness, skill degradation, and social dynamics also needs to be investigated. Generally, however, Billings (1997) stated that the problems of automation share one or more of the following characteristics: brittleness, opacity, literalism, clumsiness, monitoring requirement, and data overload. These characteristics should inform design guidelines for the development, analysis, and implementation of adaptive automation technologies.
The characteristics are defined as:

• Brittleness refers to "...an attribute of a system that works well under normal or usual conditions but that does not have desired behavior at or close to some margin of its operating envelope."

• Opacity reflects the degree of understanding of how and why automation functions as it does. The term is closely associated with "mode awareness" (Sarter & Woods, 1994), "transparency," or "virtuality" (Schneiderman, 1992).

• Literalism concerns the "narrow-mindedness" of the automated system; that is, the flexibility of the system to respond to novel events.

• Clumsiness was coined by Wiener (1989) to refer to automation that reduces workload demands when the demands are already low (e.g., transit flight phase), but increases them when attention and resources are needed elsewhere (e.g., descent phase of flight). An example is when the co-pilot needs to re-program the FMS, to change the plane's descent path, at a time when the co-pilot should be scanning for other planes.

• Monitoring requirement refers to the behavioral and cognitive costs associated with increased "supervisory control" (Sheridan, 1987; 1991).

• Data overload points to the increase in information in modern automated contexts (Billings, 1997).

These characteristics of automation have relevance for defining the scope of human factors issues likely to plague adaptive automation design if significant attention is not directed toward ensuring human-centered design.
The human factors research community has noted that these characteristics can lead to human factors issues of allocation of function (i.e., when and how should functions be allocated adaptively); stimulus-response compatibility and new error modes; how adaptive automation will affect mental models, situation models, and representational models; concerns about mode unawareness and the "out-of-the-loop" performance problem; situation awareness decay; manual skill decay; clumsy automation and task/workload management; and issues related to the design of automation. This last issue points to the significant concern in the human factors community of how to design adaptive automation so that it reflects what has been called a "team-centered" approach; that is, successful adaptive automation will likely embody the concept of the "electronic team member." However, past research (e.g., the Pilot's Associate Program) has shown that designing automation to reflect such a role has significantly different requirements than those arising in traditional automation design. The field is currently focused on answering the questions, "what is it that defines one as a team member?" and "how does that definition translate into designing automation to reflect that role?" Unfortunately, the literature also shows that the answer is not transparent and, therefore, adaptive automation must first tackle its own unique and difficult problems before it may be considered a viable prescription to current human-automation interaction problems.
The next section describes the concept of the electronic team member and then discusses the literature with regard to team dynamics, coordination, communication, shared mental models, and the implications of these for adaptive automation design.

Adaptive Automation as Electronic Team Member

Layton, Smith, and McCoy (1994) stated that the design of automated systems should be from a team-centered approach; the design should allow for the coordination between machine agents and human practitioners. However, many researchers have noted that automated systems tend to fail as team players (Billings, 1991; Malin & Schreckenghost, 1992; Malin et al., 1991; Sarter & Woods, 1994; Scerbo, 1994; 1996; Woods, 1996). The reason is what Woods (1996) calls "apparent simplicity, real complexity."

Apparent Simplicity, Real Complexity. Woods (1996) stated that conventional wisdom about automation makes technology change seem simple. Automation can be seen as simply changing the human agent for a machine agent. Automation further provides for more options and methods, frees up operator time to do other things, provides new computer graphics and interfaces, and reduces human error. However, the reality is that technology change has often
The Practical Application of Adaptive Control

The Practical Application of Adaptive Control
=======================
Industrial Technology Research Institute, Mechanical Laboratories, 楊宜學
=======================
Keywords
=======================
Adaptive control
Parameter estimation
Position control
=======================
Abstract
=======================
In system control, there are often many unknown constant or slowly time-varying coefficients.
Adaptive control is an approach for controlling this class of systems.
The basic idea of adaptive control is to estimate the unknown system coefficients on-line from measurable system signals while controlling, and to let the estimated coefficients modify the input to the controlled plant.
An adaptive controller can therefore also be viewed as a control system with real-time parameter estimation.
This article implements an adaptive controller for motor position control: the system parameters are estimated at the same time, the estimates are used to adjust the adaptive controller's coefficients in real time, and the system response and error are recorded in order to observe the controller's performance and the convergence of the parameters.
Many dynamic systems to be controlled have constant or slowly-varying uncertain parameters. Adaptive control is an approach to the control of such systems. The basic idea in adaptive control is to estimate the uncertain plant parameters on-line based on the measured system signals, and to use the estimated parameters in the control input computation. An adaptive control system can thus be regarded as a control system with on-line parameter estimation. This paper implements an adaptive controller for motor position control: the plant parameters are estimated on-line, the estimates are used by the adaptive controller in real time, and the system response and error are recorded to discuss the performance of the adaptive controller and parameter convergence.
=======================
Introduction
=======================
DC motor control is a very common application in industry. Even many complex AC motors can, by transforming the three-phase system into an orthogonal two-axis coordinate frame and with the aid of vector control, be given a system model similar to that of a DC motor.
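As a concrete sketch of "control with on-line parameter estimation," the following toy example identifies a first-order discrete-time velocity model of a motor with recursive least squares (RLS) and uses the estimates in a certainty-equivalence position loop. The model structure, numerical values, and loop gains are illustrative assumptions, not the experimental setup of this article:

```python
import numpy as np

# True (unknown to the controller) discrete-time velocity dynamics:
#   w[k+1] = a*w[k] + b*u[k],   position: th[k+1] = th[k] + dt*w[k]
a_true, b_true, dt = 0.95, 0.3, 0.01

# Recursive least squares estimator for [a, b]
theta_hat = np.array([0.5, 0.1])   # deliberately poor initial guess
P = np.eye(2) * 100.0              # estimator covariance

w, th = 0.0, 0.0                   # motor speed and shaft position
th_ref = 1.0                       # desired position (rad)
for k in range(3000):
    a_h, b_h = theta_hat
    w_ref = 2.0 * (th_ref - th)                # outer position loop
    u = (w_ref - a_h * w) / max(b_h, 1e-3)     # certainty-equivalence velocity law
    u = float(np.clip(u, -10.0, 10.0))         # actuator saturation
    phi = np.array([w, u])                     # regressor
    w_next = a_true * w + b_true * u           # plant response (measurement)
    # RLS update driven by the one-step prediction error
    err = w_next - phi @ theta_hat
    K = P @ phi / (1.0 + phi @ P @ phi)
    theta_hat = theta_hat + K * err
    P = P - np.outer(K, phi @ P)
    th += dt * w
    w = w_next

print("estimated [a, b]:", theta_hat, " final position:", th)
```

Because the data are noise-free, the estimates snap to the true values as soon as the regressor is exciting, after which the certainty-equivalence loop drives the position to the reference; with measurement noise or time-varying parameters, a forgetting factor would normally be added to the RLS update.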
High School English Essay: On Self-Control

Self-control is an essential skill that is important for success in all aspects of life. It refers to our ability to control our actions, emotions, and thoughts in order to achieve a particular goal or to resist temptation. Self-control is not only important for achieving personal goals, but it also plays a crucial role in our relationships with others and in our overall well-being.

One of the key benefits of self-control is that it enables us to make better decisions. When we are able to resist immediate gratification, we are more likely to make choices that will benefit us in the long run. For example, if we have the self-control to resist the temptation to eat unhealthy foods, we are more likely to maintain a healthy diet and avoid the negative consequences of poor nutrition. Similarly, if we have the self-control to resist the impulse to procrastinate, we are more likely to accomplish our tasks and achieve our long-term goals.

Self-control also plays an important role in our relationships with others. When we are able to control our emotions and reactions, we are better able to communicate effectively and resolve conflicts in a constructive manner. Additionally, self-control allows us to consider the feelings and perspectives of others, which is essential for building and maintaining healthy relationships.

Furthermore, self-control is closely linked to our overall well-being. Research has shown that individuals with high levels of self-control are more likely to experience greater psychological and physical health. This is because self-control enables us to engage in behaviors that promote well-being, such as exercising regularly, getting enough sleep, and managing stress effectively.

Developing self-control is not always easy, but there are several strategies that can help. One important strategy is to identify and understand the triggers that lead to impulsive or undesirable behaviors.
By recognizing these triggers, we can take steps to avoid or mitigate them. Another strategy is to practice mindfulness, which can help us become more aware of our thoughts and emotions, making it easier to regulate them. Additionally, setting clear goals and creating a plan to achieve them can help us stay focused and motivated, even when faced with temptation.

In conclusion, self-control is a vital skill that plays a crucial role in our personal and professional lives. It enables us to make better decisions, build strong relationships, and maintain our overall well-being. By developing and practicing self-control, we can improve our lives in countless ways and work towards achieving our goals and aspirations.
如何保护乳品安全英语作文

In the realm of food safety, dairy products hold a significant position due to their widespread consumption and the potential risks associated with contamination. The safety of dairy products is a critical concern for consumers, manufacturers, and regulatory bodies alike. Ensuring the safety of dairy products involves a multifaceted approach that encompasses farmtofork practices, stringent regulations, and consumer education.Dairy farms are the first link in the chain of dairy safety. The health and hygiene of dairy animals are paramount. Regular health checks, vaccinations, and proper nutrition are essential to prevent diseases that could potentially contaminate the milk. Additionally, the cleanliness of the milking environment and equipment is crucial. Modern dairy farms employ automated milking systems that minimize human contact, reducing the risk of contamination.Once milk is collected, it must be promptly cooled and transported under hygienic conditions to the processing facilities. Temperature control is vital as it inhibits the growth of harmful bacteria. During transportation, the milk is often kept in refrigerated trucks to maintain the required low temperatures.At the processing stage, dairy products undergo various treatments to ensure safety. Pasteurization is a widely used method that involves heating the milk to a specific temperature for a set duration to kill harmful microorganisms. Ultrahigh temperature UHT treatment and sterilization are other processes that extend the shelf life of dairy products by eliminating bacteria.Quality control is an integral part of dairy safety. Laboratories test samples of milk and dairy products for the presence of pathogens, antibiotic residues, and other contaminants. Advanced testing methods, such as polymerase chain reaction PCR and mass spectrometry, provide accurate and rapid detection of contaminants.Regulation plays a pivotal role in safeguarding dairy safety. 
Governments and international organizations have established standards and guidelines that dairy manufacturers must adhere to. These regulations cover aspects such as permissible levels of contaminants, hygiene practices, and product labeling. For instance, the U.S. Food and Drug Administration FDA and the European Food Safety Authority EFSA have strict guidelines for dairy production.Traceability is another key component in ensuring dairy safety. It allows for the tracking of products from the farm to the consumer, facilitating the identification of the source of contamination in case of a foodborne illness outbreak. This system is particularly important in the event of a recall, enabling swift action to prevent further distribution of contaminated products.Consumer education is equally important in dairy safety. Consumers should be aware of the proper storage and handling of dairy products to prevent spoilage and contamination. For example, they should be informed about the importance of refrigerating milk and dairy products to inhibit bacterial growth.Moreover, consumers should be vigilant about product recalls and be aware of the signs of spoilage, such as an off smell or unusual texture in dairy products. By being informed, consumers can make safer choices and contribute to the overall safety of the dairy supply chain.In conclusion, protecting the safety of dairy products is a collective responsibility that involves farmers, manufacturers, regulators, and consumers. By implementing best practices in dairy farming, adhering to stringent processing and quality control measures, enforcing regulations, ensuring traceability, and promoting consumer education, the safety of dairy products can be significantly enhanced. This collaborative effort ensures that dairy products remain a nutritious and safe part of the global diet.。
坚韧跨越星河远的英语作文

Resilience is a quality that allows individuals to overcome adversity and navigate through lifes challenges with strength and perseverance.It is the ability to bounce back from setbacks and keep moving forward,even when faced with seemingly insurmountable obstacles.In the vast expanse of the cosmos,where distances are measured in lightyears and the journey is as daunting as it is aweinspiring,resilience is a crucial attribute for those who seek to explore the unknown.The concept of resilience can be likened to the journey of a spacecraft venturing into deep space.Just as a spacecraft must withstand the harsh conditions of space travel, including extreme temperatures,radiation,and the vacuum of space,so too must individuals develop the resilience to face lifes trials and tribulations.One of the key components of resilience is adaptability.Just as a spacecraft must be able to adapt to the everchanging conditions of its environment,individuals must learn to adapt to the various challenges they encounter in life.This may involve developing new skills,adjusting to new situations,or finding creative solutions to problems.Another important aspect of resilience is the ability to maintain a positive outlook,even in the face of adversity.Just as a spacecraft relies on its navigational systems to stay on course,individuals must rely on their inner strength and optimism to guide them through difficult times.By focusing on the potential for growth and learning from setbacks, individuals can cultivate a resilient mindset that allows them to persevere in the face of challenges.In addition to adaptability and a positive outlook,resilience also requires a strong support system.Just as a spacecraft relies on a team of engineers,scientists,and mission control personnel to ensure its success,individuals benefit from the support of friends,family, and mentors who can provide encouragement,guidance,and assistance when needed. 
Furthermore,resilience is often built through experience.Just as a spacecraft undergoes rigorous testing and simulation before embarking on its mission,individuals can develop their resilience by facing and overcoming smaller challenges in their daily lives.By learning from these experiences and building on their successes,individuals can gradually increase their capacity to handle larger and more complex challenges.In the context of space exploration,resilience is not just a personal attribute but also a collective endeavor.The history of space travel is filled with stories of teams working together to overcome setbacks and achieve remarkable feats.From the Apollo13mission, which successfully returned to Earth despite a critical systems failure,to the Mars Rover missions,which have overcome numerous technical challenges to explore the Martiansurface,resilience has been a key factor in the success of these endeavors.In conclusion,resilience is a vital quality for anyone embarking on a journey,whether it be through the vastness of space or the ups and downs of life.By cultivating adaptability, maintaining a positive outlook,relying on a strong support system,and learning from experience,individuals can develop the resilience needed to navigate the challenges they face and achieve their goals,just as a spacecraft navigates the stars.。
- 1、下载文档前请自行甄别文档内容的完整性,平台不提供额外的编辑、内容补充、找答案等附加服务。
- 2、"仅部分预览"的文档,不可在线预览部分如存在完整性等问题,可反馈申请退款(可完整预览的文档不适用该条件!)。
- 3、如文档侵犯您的权益,请联系客服反馈,我们会尽快为您处理(人工客服工作时间:9:00-18:30)。
Adaptive Control with a Nested Saturation Reference Model
Suresh K. Kannan∗ and Eric N. Johnson†
School of Aerospace Engineering, Georgia Institute of Technology, Atlanta, GA 30332
This paper introduces a neural network based model reference adaptive control architecture that allows adaptation in the presence of saturation. The given plant is approximately feedback linearized, with adaptation used to cancel any matched uncertainty. A reference model based on a nested saturation control law is used; this law allows the incorporation of actuator magnitude saturation and has useful small-gain properties. Depending on the bandwidth and saturation limits, the reference model eases off on the aggressiveness of the desired trajectory, thus avoiding saturation. However, actuator saturation may still occur due to uncertainty or external disturbances. To protect the adaptive element from such plant input characteristics, the nested saturation reference model is augmented with a pseudo-control hedging signal that removes these characteristics from the adaptive element's training signal.
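As an informal illustration of the hedging idea summarized above, the sketch below (illustrative Python, not code from the paper; the names `pch_hedge` and `ghat`, and the identity actuator model, are assumptions for this example) computes the hedge as the portion of the commanded pseudo-control that the saturated actuator cannot deliver:

```python
def sat(u, lim):
    """Symmetric magnitude saturation."""
    return max(-lim, min(lim, u))

def pch_hedge(ghat, x, delta_cmd, lim):
    """Pseudo-control hedge: the pseudo-control the controller asked for
    minus an estimate of what the saturated actuator actually delivers."""
    delta = sat(delta_cmd, lim)           # actuator position after saturation
    return ghat(x, delta_cmd) - ghat(x, delta)

# Toy actuator model for illustration: pseudo-control equals deflection.
ghat = lambda x, delta: delta

print(pch_hedge(ghat, 0.0, 2.5, 1.0))  # command exceeds the limit -> 1.5
print(pch_hedge(ghat, 0.0, 0.5, 1.0))  # command within the limit -> 0.0
```

The reference model is then propagated with the hedge subtracted from its pseudo-control, so the model "moves back" by exactly the deficit the actuator could not achieve, and that deficit never appears in the adaptive element's training signal.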
Nomenclature

bv, bw    neural network biases
∆         function approximation error
δ         control vector / actuator deflections
e         tracking error, xr − x
ecr       reference model command tracking error, xc − xr
ec        command tracking error, xc − x
K         linear compensator gains
f, f̂      actual, estimated plant dynamics
g, ĝ      actual, estimated actuator dynamics
NN        neural network
ν         pseudo-control vector
PCH       pseudo-control hedging
V, W      neural network input, output weights
x         state vector

Subscripts
ad        adaptive signal
c         commanded
des       desired
h         hedge
lc        linear compensator
r         reference model

1 of 11 American Institute of Aeronautics and Astronautics Paper

flight vehicles such as the X-36 tailless fighter [4], JDAM guided munitions [5], and an unmanned helicopter [6]. In implementing this architecture, initial applications assumed no actuator (input) saturation. If input saturation is encountered, the adaptive element (a neural network) incorrectly adapts to these input characteristics. To overcome this problem, the reference model is modified in a specific way to remove input characteristics from the training signal of the neural network. This method, called pseudo-control hedging (PCH), was initially developed to adaptively control the attitude dynamics of the X-33 [7]. Beyond input saturation, pseudo-control hedging may also be used to remove any input characteristics the control designer does not want the adaptive element to see; such characteristics include actuator magnitude saturation, actuator rate limits, latency, and others. In addition to enabling continued adaptation in the presence of input dynamics, it was shown that a proven domain of attraction for the closed-loop system is at least as large as that without PCH. It has also been shown that, as long as the external command and the isolated nonadaptive system states are close, boundedness of the reference model, plant, and neural network states can be established for certain amounts of saturation and, consequently, certain amounts of hedging [7]. Present work that uses this architecture [1,4] employs linear reference models. In general, PCH modifies the reference model dynamics and thus poses a problem when large external commands are applied to a linear reference model. For example, in position control of a vehicle, a large position command causes a linear reference model to immediately saturate the controls until the plant state is close to the command. This problem was partially addressed by introducing a nonlinear reference model containing limits on the maximum speed that could be used to achieve a large position command [6]. That method, however, has the drawback that the poles of the reference model change when these limits are active.

In this paper, the use of a nested saturation based reference model [8] is proposed. For a certain class of systems in feedforward form [9], saturation elements may be used to stabilize the system with bounded control, guaranteeing global asymptotic stability (GAS) and local exponential stability (LES). This nested saturation law was first introduced to stabilize a chain of integrators [8] and was generalized by Sontag [10]. A perfectly feedback linearized system is a set of n integrators. A natural choice of reference model is therefore a set of n integrators controlled using the nested saturation law, which accounts for magnitude-bounded actuator input and reflects the structure of a feedback linearized plant with magnitude saturation. Adaptation is used to cancel any matched uncertainty in the approximately linearized system, and the nested saturation based reference model is used to generate the trajectory. PCH is used to protect the neural network from incorrect adaptation when large uncertainty or external disturbances cause actuator magnitude saturation. The combined approach is expected to eliminate the need to avoid actuator saturation or large external commands.

First, the adaptive control architecture is introduced along with PCH. The choice of reference model is then discussed by comparing linear and nonlinear reference models, followed by the nested saturation law. Finally, the architecture is applied to a 4th-order plant and simulation results are presented.