2010 Second International Conference on Computational Intelligence and Natural Computing (CINC)
New robust stability criteria for neutral-type neural networks with multiple mixed delays
Li Jin, Department of Mathematics, Dalian Jiaotong University, 116028, China (email: chd4211853@163.com)

Abstract: The global exponential stability is analyzed for a class of uncertain neutral-type neural networks with multiple time-varying and distributed delays. By applying the Jensen integral inequality, the free-weighting matrix method and linear matrix inequality (LMI) techniques, some less conservative delay-dependent stability criteria are obtained, which generalize some previous results in the literature. Furthermore, the obtained results can be generalized to uncertain neural networks and bidirectional associative memory (BAM) neural networks.

Keywords: Global robust exponential stability; neutral type; linear matrix inequality (LMI); Jensen integral inequality; free-weighting matrix method; bidirectional associative memory (BAM) neural networks

1 PROBLEM DESCRIPTION AND PRELIMINARIES

The dynamic behavior of uncertain neutral-type neural networks with multiple time-varying delays and distributed time-varying delays can be described by the following state equation:

ẋ(t) = −C̄x(t) + Āf̃(x(t)) + ∑_{k=1}^{m} B̄_k f̃(x(t − τ_k(t))) + D̄ ∫_{t−σ(t)}^{t} g̃(x(s)) ds + Ū ẋ(t − τ(t)) + J,  (1)

where x(t) = (x_1(t), x_2(t), ..., x_n(t))^T ∈ R^n is the neural state vector; C̄ = C + ΔC(t), Ā = A + ΔA(t), B̄_k = B_k + ΔB_k(t), D̄ = D + ΔD(t), Ū = U + ΔU(t); C = diag{c_1, c_2, ..., c_n} is a positive diagonal matrix; A = (a_{ij})_{n×n}, B_k = (b_{ij}^{(k)})_{n×n}, D = (d_{ij})_{n×n}, U = (u_{ij})_{n×n} are known constant matrices; ΔC(t), ΔA(t), ΔB_k(t), ΔD(t), ΔU(t) are parametric uncertainties; 0 ≤ τ_k(t) ≤ τ̄_k, 0 ≤ τ(t) ≤ τ̄ and 0 ≤ σ(t) ≤ σ̄ are the time-varying delays, where τ̄_k, τ̄, σ̄ > 0 are constants, k = 1, 2, ..., m; J = (J_1, J_2, ..., J_n)^T is the constant external input vector; and f̃(x(t)) = (f̃_1(x_1(t)), f̃_2(x_2(t)), ..., f̃_n(x_n(t)))^T ∈ R^n and g̃(x(t)) = (g̃_1(x_1(t)), g̃_2(x_2(t)), ..., g̃_n(x_n(t)))^T ∈ R^n denote the neural activation functions. It is assumed that f̃_j(x_j(t)) and g̃_j(x_j(t)) are bounded and satisfy the following condition.

Assumption 1. There exist constants l_{1j}, l_{2j}, l_{3j}, l_{4j} with l_{1j} < l_{2j} and l_{3j} < l_{4j} such that

l_{1j} ≤ (f̃_j(s_1) − f̃_j(s_2))/(s_1 − s_2) ≤ l_{2j},
l_{3j} ≤ (g̃_j(s_1) − g̃_j(s_2))/(s_1 − s_2) ≤ l_{4j},  j = 1, 2, ..., n,

for any s_1, s_2 ∈ R, s_1 ≠ s_2.
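As a concrete numerical illustration of the structure of (1) (not part of the analysis below), the following sketch integrates a nominal, single-delay special case of (1) with a fixed-step Euler scheme: the uncertainty, distributed-delay and neutral terms are dropped, the activation f = tanh satisfies Assumption 1 with l_{1j} = 0 and l_{2j} = 1, and all matrix values are chosen arbitrarily.

```python
import numpy as np

# Fixed-step Euler sketch of a nominal, single-delay special case of (1):
#   x'(t) = -C x(t) + A f(x(t)) + B f(x(t - tau)) + J.
# Uncertainties, the distributed delay and the neutral term are omitted;
# all numerical values below are illustrative only.
C = np.diag([1.0, 1.2])                 # positive diagonal matrix
A = np.array([[0.2, -0.1], [0.1, 0.3]])
B = np.array([[-0.3, 0.2], [0.1, -0.2]])
J = np.array([0.5, -0.5])
f = np.tanh                             # satisfies Assumption 1 (l1j = 0, l2j = 1)

tau, h, T = 1.0, 1e-3, 20.0             # delay, step size, horizon
d = int(tau / h)                        # delay measured in steps
steps = int(T / h)
x = np.zeros((steps + 1, 2))
x[: d + 1] = [0.8, -0.6]                # constant initial history phi on [-tau, 0]

for i in range(d, steps):
    x_delay = x[i - d]                  # x(t - tau) read from the history buffer
    x[i + 1] = x[i] + h * (-C @ x[i] + A @ f(x[i]) + B @ f(x_delay) + J)

print("x(T) =", x[-1])                  # settles near an equilibrium point x*
```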
For notational simplicity, we denote L_i = diag{l_{i1}, l_{i2}, ..., l_{in}}, i = 1, 2, 3, 4. By the well-known Brouwer fixed-point theorem, system (1) always has an equilibrium point x*. Moreover, we assume that the initial condition of neural network (1) has the form

x_i(t) = φ_i(t),  t ∈ [−max{τ̄_M, τ̄, σ̄}, 0],  (2)
where τ̄_M = max_{1≤k≤m}{τ̄_k} and the φ_i(t) (i = 1, 2, ..., n) are continuous functions. Throughout this paper, let ||y|| denote the Euclidean norm of a vector y ∈ R^n, and let W^T, W^{−1}, λ_M(W), λ_m(W) and ||W|| = √(λ_M(W^T W)) denote the transpose, the inverse, the largest eigenvalue, the smallest eigenvalue and the spectral norm of a square matrix W, respectively. Let W > 0 (W ≤ 0) denote a symmetric positive definite (negative semidefinite) matrix, and let I denote the identity matrix of compatible dimension. The time-varying uncertain matrices ΔC(t), ΔA(t), ΔB_k(t) (k = 1, 2, ..., m), ΔD(t) and ΔU(t) are defined by

[ΔC(t)  ΔA(t)  ΔB_k(t)  ΔD(t)  ΔU(t)] = U_0 F(t) [G  G_0  G_k  G_+  G_−],  (3)

where U_0, G, G_0, G_k, G_+, G_− are known constant real matrices with appropriate dimensions and F(t) is an unknown time-varying matrix satisfying

F^T(t)F(t) ≤ I.  (4)
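As a simple instance of (3) and (4), any F(t) = diag{sin(ω_1 t), ..., sin(ω_n t)} satisfies F^T(t)F(t) ≤ I, so perturbations such as ΔA(t) = U_0 F(t) G_0, whose entries oscillate within fixed bounds, are covered by this uncertainty description.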
In order to obtain our results, we need the following lemmas.

Lemma 1 (see [3]). Assume that the function g_j(s) satisfies 0 ≤ g_j(s)/s ≤ ρ_j (ρ_j > 0). Then the following inequality holds:

∫_ς^ξ (g_j(s) − g_j(ς)) ds ≤ (ξ − ς)(g_j(ξ) − g_j(ς)).
Lemma 2 (see [2]). For any symmetric positive definite constant matrix M ∈ R^{n×n}, scalars r_1 < r_2 and vector function ω : [r_1, r_2] → R^n such that the integrals concerned are well defined,

(∫_{r_1}^{r_2} ω(s) ds)^T M (∫_{r_1}^{r_2} ω(s) ds) ≤ (r_2 − r_1) ∫_{r_1}^{r_2} ω^T(s) M ω(s) ds.
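Lemma 2 is the Jensen integral inequality. The following small sketch spot-checks it numerically on a made-up example, with the integrals approximated by a Riemann sum.

```python
import numpy as np

# Numerical spot check of Lemma 2 (Jensen's integral inequality) for a
# made-up M > 0 and omega(s) = (sin s, cos 2s) on [r1, r2].
M = np.array([[2.0, 0.5], [0.5, 1.0]])    # symmetric positive definite
r1, r2, N = 0.0, 3.0, 100_000
s = np.linspace(r1, r2, N)
ds = s[1] - s[0]
w = np.stack([np.sin(s), np.cos(2 * s)])  # omega: [r1, r2] -> R^2, shape (2, N)

Iw = w.sum(axis=1) * ds                   # approximate integral of omega
lhs = Iw @ M @ Iw
rhs = (r2 - r1) * np.einsum('in,ij,jn->n', w, M, w).sum() * ds
print(f"lhs = {lhs:.4f} <= rhs = {rhs:.4f}:", lhs <= rhs)
```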
Lemma 3 (see [1]). Let X, Y and P be real matrices of appropriate dimensions with P > 0. Then for any positive scalar ε the following matrix inequality holds:

X^T Y + Y^T X ≤ ε^{−1} X^T P^{−1} X + ε Y^T P Y.
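Lemma 3 is the standard completion-of-squares bound: with P^{1/2} the symmetric square root of P,

0 ≤ (ε^{−1/2}P^{−1/2}X − ε^{1/2}P^{1/2}Y)^T (ε^{−1/2}P^{−1/2}X − ε^{1/2}P^{1/2}Y) = ε^{−1}X^T P^{−1}X + εY^T P Y − X^T Y − Y^T X,

and rearranging gives the stated inequality.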
2 ROBUST EXPONENTIAL STABILITY RESULT

In order to prove the robust exponential stability of the equilibrium point x* of neural network (1), we first rewrite (1) in a simplified form. Let u(·) = x(·) − x*; then

u̇(t) = −C̄u(t) + Āf(u(t)) + ∑_{k=1}^{m} B̄_k f(u(t − τ_k(t))) + D̄ ∫_{t−σ(t)}^{t} g(u(s)) ds + Ū u̇(t − τ(t)),

where f(u(·)) = f̃(u(·) + x*) − f̃(x*) and g(u(·)) = g̃(u(·) + x*) − g̃(x*). Note that f and g satisfy Assumption 1 with the same constants and that f(0) = g(0) = 0, so the robust exponential stability of x* for (1) is equivalent to that of the origin for the transformed system.
The delay-dependent criteria in this section are stated in terms of the following matrices:

Φ = [P  X_0^T  0  Σ_1 − Σ_2  0  ...  0]_{1×(3m+6)},

Ξ = diag{τ̄_1 U_1, ..., τ̄_m U_m, τ̄_1 U_1, ..., τ̄_m U_m},

Ψ_{1,1} = −PC − CP + ∑_{k=1}^{m} {e^{2rτ̄_k}(P_k + R_k) − S_k + X_k + X_k^T} + 2rP_0 + 4r(L_2 − L_1)(Σ_1 + Σ_2) − 2L_2 T L_1 − 2L_4 T_0 L_3,

Ψ_{1,2} = −CX_0^T,  Ψ_{1,3} = PU,  Ψ_{1,4} = PA − C(Σ_1 − Σ_2) + T(L_2 + L_1),  Ψ_{1,5} = T_0(L_3 + L_4),  Ψ_{1,6} = PD,  Ψ_{1,k+6} = −Z_k,  Ψ_{1,m+6+k} = S_k − X_k + Y_k + Z_k^T,  Ψ_{1,2m+6+k} = PB_k,

Ψ_{2,2} = ∑_{k=1}^{m} τ̄_k e^{2rτ̄_k}(U_k + τ̄_k S_k) + e^{2rτ̄} R_0 − X_0 − X_0^T.
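Criteria of this LMI type are checked numerically with semidefinite programming. As a minimal, self-contained illustration of that workflow (not the LMI of this paper), the sketch below uses the cvxpy package to test feasibility of the classical Lyapunov inequality, find P > 0 with A_0^T P + P A_0 < 0, for a made-up test matrix A_0.

```python
import cvxpy as cp
import numpy as np

# Minimal LMI feasibility check: find P > 0 with A0^T P + P A0 < 0.
# This is a toy Lyapunov LMI, not the paper's criterion; A0 is a made-up
# Hurwitz matrix used only to demonstrate the solver workflow.
A0 = np.array([[-2.0, 0.5],
               [0.3, -1.5]])
n = A0.shape[0]
eps = 1e-6                                            # enforce strict inequalities

P = cp.Variable((n, n), symmetric=True)
constraints = [P >> eps * np.eye(n),                  # P positive definite
               A0.T @ P + P @ A0 << -eps * np.eye(n)] # Lyapunov inequality
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)

print("LMI feasible:", prob.status == cp.OPTIMAL)
print("P =\n", P.value)
```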