Neural Networks and Fuzzy Systems
At equilibrium: $E[m_j] = \bar{x}_j$
ENCODING: Unsupervised
- Feedforward: VECTOR QUANTIZATION, SELF-ORGANIZING MAPS, COMPETITIVE LEARNING, COUNTER-PROPAGATION
- Feedback: BROWNIAN ANNEALING, BOLTZMANN LEARNING, ABAM, ART-2, BAM, COHEN-GROSSBERG MODEL, HOPFIELD CIRCUIT, BRAIN-STATE-IN-A-BOX, MASKING FIELD
The decision classes partition the pattern space: $R^n = D_1 \cup \cdots \cup D_k$, with $D_i \cap D_j = \varnothing$ if $i \neq j$.

The random indicator functions $I_{D_1}, \ldots, I_{D_k}$:

$I_{D_j}(x) = \begin{cases} 1 & \text{if } x \in D_j \\ 0 & \text{if } x \notin D_j \end{cases}$
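As a quick illustration (a sketch, not from the text: the decision classes here are illustratively taken as nearest-prototype regions, which do partition $R^n$ up to ties), the indicator functions can be computed as:

```python
import numpy as np

def indicator(x, j, prototypes):
    """I_{D_j}(x): 1 if x lies in decision class D_j, else 0.
    Here D_j is (illustratively) the set of points closest to
    prototype m_j, giving a partition of R^n (ties ignored)."""
    dists = np.linalg.norm(prototypes - x, axis=1)
    return int(np.argmin(dists) == j)

prototypes = np.array([[0.0, 0.0], [10.0, 10.0]])
x = np.array([1.0, 1.0])
print(indicator(x, 0, prototypes), indicator(x, 1, prototypes))  # -> 1 0
```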
Image processing: centroid localization
Centroid of $D_j$:
$\|m_j(t) - x(t)\| = \min_i \|m_i(t) - x(t)\|$
3. Update the winning synaptic vector(s) $m_j(t)$ by the UCL, SCL, or DCL learning algorithm.

Unsupervised Competitive Learning (UCL)
CHAPTER 6 ARCHITECTURE AND EQUILIBRIA
Student:
Advisor:
PREFACE
Lyapunov functions and system stability: for a given system, construct a Lyapunov function L.
If $\dot{L} \leq 0$, the system is stable; if $\dot{L} < 0$, the system is asymptotically stable.
If such an L can be constructed, the system is stable.
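A one-line example of the construction, for the scalar system $\dot{x} = -x$ with candidate function $L(x) = x^2$:

```latex
\dot{L}(x) = 2x\,\dot{x} = -2x^{2} < 0 \quad \text{for } x \neq 0
```

so $\dot{L} < 0$ away from the origin, and the system is asymptotically stable there.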
$0 = \int_{D_j} (x - m_j)\,p(x)\,dx = \int_{D_j} x\,p(x)\,dx - m_j \int_{D_j} p(x)\,dx$

so that

$m_j = \dfrac{\int_{D_j} x\,p(x)\,dx}{\int_{D_j} p(x)\,dx} = \bar{x}_j$
In general the AVQ centroid theorem concludes that at equilibrium: $E[m_j] = \bar{x}_j$. Q.E.D.
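A small numerical check of the centroid equilibrium (a sketch assuming a uniform density on a one-dimensional class $D_j$; the interval and sample count are arbitrary):

```python
import numpy as np

# Samples standing in for p(x) restricted to a decision class D_j.
rng = np.random.default_rng(0)
samples = rng.uniform(2.0, 6.0, size=100_000)
centroid = samples.mean()  # the class centroid x̄_j

# The averaged learning law has drift E[x - m_j] over D_j:
# it vanishes exactly when m_j sits at the centroid.
drift_at_centroid = np.mean(samples - centroid)
drift_elsewhere = np.mean(samples - 3.0)
print(abs(drift_at_centroid) < 1e-9, drift_elsewhere > 0)  # -> True True
```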
Competitive AVQ Algorithms
1. Initialize synaptic vectors: $m_i(0) = x(i)$, $i = 1, \ldots, m$.
2. For random sample $x(t)$, find the closest ("winning") synaptic vector $m_j(t)$: $\|m_j(t) - x(t)\| = \min_i \|m_i(t) - x(t)\|$
In practice, only the sign of this signal difference, $\mathrm{sgn}[\dot{y}_j]$, is used.
The fixed competition matrix W defines a symmetric lateral-inhibition topology within $F_Y$.
Stochastic Equilibrium and Convergence
Competitive synaptic vectors $m_j$ converge to decision-class centroids.
So $I_{D_j}(x) = 1$ iff $S_j = 1$, by $S_j(y_j) = I_{D_j}(x)$.
Suppose $\dot{m}_j = O$. The competitive law: $\dot{m}_j = I_{D_j}(x)\,(x - m_j) + n_j$.
Take expectation: $O = E[\dot{m}_j] = \int I_{D_j}(x)\,(x - m_j)\,p(x)\,dx + E[n_j]$
Supervised
a. Training: training data → features → model (trained with labels)
b. Testing: test data → features → model → label
Unsupervised
a. Training: training data → features → model (trained without labels)
b. Testing: test data → features → model → result
The competitive learning law: $\dot{m}_j = S_j(y_j)\,(x - m_j) + n_j$
Equilibrium: $m_j = \bar{x}_j$, or $E[m_j] = \bar{x}_j$
As discussed in Chapter 4: $S_j(y_j) = I_{D_j}(x)$
The linear stochastic competitive learning law:
$\dot{m}_j = I_{D_j}(x)\,(x - m_j) + n_j$
The reinforcement function: $r_j(x) = I_{D_j}(x) - \sum_{i \neq j} I_{D_i}(x)$
The linear supervised competitive learning law:
$\dot{m}_j = r_j(x)\,I_{D_j}(x)\,(x - m_j) + n_j$
ADAPTIVE RESONANCE
ART-1 ART-2
6.2 Global Equilibria: convergence and stability
Three dynamical systems in a neural network:
1) synaptic dynamical system $\dot{M}$
Example: $c_t = 0.1\,(1 - t/10000)$
Supervised Competitive Learning (SCL)
$m_j(t+1) = m_j(t) + c_t\, r_j(x(t))\,[x(t) - m_j(t)] = \begin{cases} m_j(t) + c_t\,[x(t) - m_j(t)] & \text{if } x \in D_j \\ m_j(t) - c_t\,[x(t) - m_j(t)] & \text{if } x \notin D_j \end{cases}$
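A minimal discrete-time SCL step in Python (a sketch; the ±1 reinforcement and the class encoding are illustrative):

```python
import numpy as np

def scl_step(m, x, x_class, winner_class, c):
    """One supervised competitive learning update of the winning
    synaptic vector m: move toward x if x belongs to the winner's
    decision class (r = +1), away from it otherwise (r = -1)."""
    r = 1.0 if x_class == winner_class else -1.0  # r_j(x)
    return m + c * r * (x - m)

m = np.array([0.0, 0.0])
x = np.array([1.0, 1.0])
print(scl_step(m, x, x_class=0, winner_class=0, c=0.1))  # moves toward x
print(scl_step(m, x, x_class=1, winner_class=0, c=0.1))  # moves away from x
```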
Global stability: $\dot{x} = O$, $\dot{M} = O$
Stochastic global stability: $\dot{x} = \mathbf{n}_x$, $\dot{M} = \mathbf{N}$ (equilibrium up to additive noise)
Stability-Convergence dilemma Neurons fluctuate faster than synapses fluctuate. Learning tends to destroy the neuronal patterns being learned. Convergence undermines stability.
$m_j(t+1) = m_j(t) + c_t\,[x(t) - m_j(t)]$ if the jth neuron wins
$m_i(t+1) = m_i(t)$ if $i \neq j$
ct defines a slowly decreasing sequence of learning coefficients.
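The UCL steps above can be sketched end-to-end (the data and seeds are illustrative; the decreasing gain follows the $c_t$ example in the text):

```python
import numpy as np

def ucl(samples, n_protos, t_max=10_000):
    """Unsupervised competitive learning: per sample, only the winning
    (closest) synaptic vector moves toward x(t); losers stay put."""
    rng = np.random.default_rng(0)
    m = samples[:n_protos].astype(float)      # m_i(0) = x(i)
    for t in range(t_max):
        x = samples[rng.integers(len(samples))]
        c_t = 0.1 * (1 - t / t_max)           # slowly decreasing gain
        j = int(np.argmin(np.linalg.norm(m - x, axis=1)))  # winner
        m[j] += c_t * (x - m[j])
    return m

# Two well-separated clusters; prototypes should settle near centroids.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.1, (200, 2)),
                  rng.normal(5.0, 0.1, (200, 2))])
rng.shuffle(data)
print(ucl(data, n_protos=2))
```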
6.3 Synaptic convergence to centroids: AVQ Algorithms
Competitive learning adaptively quantizes the input pattern space $R^n$. The probability density function $p(x)$ characterizes the continuous distribution of patterns in $R^n$.

Competitive AVQ Stochastic Differential Equations: the decision classes $D_1, \ldots, D_k$ partition $R^n$ into $k$ classes:
The St. Petersburg school of mathematics:
Chebyshev
Lyapunov
Markov
6.1 Neural Network as a Stochastic Gradient System
1. synaptic connection topologies
Feedforward
Feedback
2. how learning modifies their connection topologies
Differential Competitive Learning (DCL)

$m_j(t+1) = m_j(t) + c_t\,\Delta S_j(y_j(t))\,[x(t) - m_j(t)]$
$m_i(t+1) = m_i(t)$ if $i \neq j$

$\Delta S_j(y_j(t)) = S_j(y_j(t+1)) - S_j(y_j(t))$ denotes the time change of the jth neuron's competitive signal $S_j(y_j)$ in the competition field $F_Y$:

$y_j(t+1) = y_j(t) + \sum_{i=1}^{n} S_i(x_i)\,m_{ij}(t) + \sum_{k=1}^{m} S_k(y_k)\,w_{kj}$
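A single DCL step as a sketch (the signal values and learning gain are made up; as the text notes, in practice only the sign of the signal difference is used):

```python
import numpy as np

def dcl_step(m, x, y_prev, y_now, j, c):
    """Differential competitive learning: the winner learns only in
    proportion to the sign of the change of its competitive signal."""
    delta_sign = np.sign(y_now[j] - y_prev[j])  # sgn[ΔS_j(y_j(t))]
    m = m.copy()
    m[j] += c * delta_sign * (x - m[j])
    return m

m = np.array([[0.0, 0.0], [5.0, 5.0]])
x = np.array([1.0, 1.0])
y_prev = np.array([0.2, 0.8])
y_now = np.array([0.9, 0.1])  # neuron 0's signal rose, so it learns
print(dcl_step(m, x, y_prev, y_now, j=0, c=0.5))  # m[0] moves to (0.5, 0.5)
```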
K-means (the batch counterpart of unsupervised competitive learning)
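For comparison, a batch k-means sketch (the standard algorithm, not code from the text; data and seeds are illustrative):

```python
import numpy as np

def kmeans(data, k, iters=50, seed=0):
    """Batch k-means: alternate nearest-centroid assignment and
    centroid recomputation over the whole data set."""
    rng = np.random.default_rng(seed)
    centroids = data[rng.choice(len(data), k, replace=False)].astype(float)
    for _ in range(iters):
        dists = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):           # skip empty clusters
                centroids[j] = data[labels == j].mean(axis=0)
    return centroids

rng = np.random.default_rng(2)
pts = np.vstack([rng.normal(0.0, 0.1, (100, 2)),
                 rng.normal(5.0, 0.1, (100, 2))])
print(kmeans(pts, k=2))  # one centroid near each cluster
```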
NEURAL NETWORK TAXONOMY: DECODING (Feedforward / Feedback) vs. ENCODING (Supervised / Unsupervised)
ENCODING: Supervised
- Feedforward: GRADIENT DESCENT, LMS, BACKPROPAGATION, REINFORCEMENT LEARNING
- Feedback: RECURRENT BACKPROPAGATION, BABAM
The centroids may correspond to local maxima of the sampled but unknown probability density function $p(x)$.

AVQ centroid theorem: If a competitive AVQ system converges, it converges to the centroid of the sampled decision class: $\mathrm{Prob}(m_j = \bar{x}_j) = 1$ at equilibrium.

Proof. Suppose the jth neuron in $F_Y$ wins the competition, and suppose the jth synaptic vector $m_j$ codes for decision class $D_j$.
$\bar{x}_j = \dfrac{\int_{D_j} x\,p(x)\,dx}{\int_{D_j} p(x)\,dx}$
Gray-scale centroid method (灰度质心法):

$x_0 = \dfrac{\sum_{i=1}^{m} \sum_{j=1}^{n} x_i\, f(x_i, y_j)}{\sum_{i=1}^{m} \sum_{j=1}^{n} f(x_i, y_j)}$
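A sketch of the gray-scale centroid computation on an image array (the toy image is made up):

```python
import numpy as np

def gray_centroid(img):
    """Intensity-weighted centroid of an image:
    x0 = sum(x_i * f(x_i, y_j)) / sum(f(x_i, y_j)), likewise for y0."""
    f = img.astype(float)
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = f.sum()
    return (xs * f).sum() / total, (ys * f).sum() / total

img = np.zeros((5, 5))
img[2, 3] = 10.0  # single bright pixel at row 2, column 3
print(gray_centroid(img))  # -> (3.0, 2.0)
```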
The stochastic unsupervised competitive learning law: $\dot{m}_j = S_j(y_j)\,(x - m_j) + n_j$

The linear differential competitive learning law: $\dot{m}_j = \dot{S}_j(y_j)\,(x - m_j) + n_j$

In practice: $\dot{m}_j = \mathrm{sgn}[\dot{y}_j]\,(x - m_j) + n_j$, where $\mathrm{sgn}[z] = 1$ if $z > 0$, $0$ if $z = 0$, $-1$ if $z < 0$.
2) neuronal dynamical system $\dot{x}$
3) joint neuronal-synaptic dynamical system $(\dot{x}, \dot{M})$

Equilibrium is steady state (for fixed-point attractors).
Convergence is synaptic equilibrium: $\dot{M} = O$
Stability is neuronal equilibrium: $\dot{x} = O$
More generally, neural signals reach steady state even though the activations still change. Steady state of the field $F_X$: $\dot{S}(F_X) = O$