
[Figure 2: the algorithm iterates until the total error E = Σ_k E_k = Σ_k (1/2) e_k² satisfies E < E_max; in each pass the error signals ε^(r)_{p,k} (r = 0, 1, 2) are back-propagated and the weights are updated by ω_k ← ω_{k−1} + ηΔω_k and v_j ← v_{j−1} + ηΔv_j.]
Figure 2 Flow chart of the standard BP neural network algorithm
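The loop in Figure 2 can be sketched in miniature. The following is our own illustration, not the paper's code: it assumes a single linear neuron and one training sample, but it follows the same structure (compute the error, test it against E_max, otherwise adjust the weight and iterate).

```python
# Minimal sketch of the Figure 2 loop (our own illustration, not the
# paper's code): compute the error E = (1/2)e^2, stop once E < E_max,
# otherwise adjust the weight with learning rate eta and iterate.

def train(x, target, eta=0.1, e_max=1e-6, max_iters=10_000):
    w = 0.0                      # initial connection weight
    for n in range(max_iters):
        y = w * x                # forward pass of a single linear neuron
        e = target - y           # output error
        E = 0.5 * e * e          # error of this iteration
        if E < e_max:            # stopping test from the flow chart
            return w, n
        w += eta * e * x         # gradient step, since dE/dw = -e*x
    return w, max_iters

w, iters = train(x=2.0, target=1.0)
print(w, iters)  # w converges toward 0.5
```

With these toy values the loop terminates after a handful of iterations; a full BP network runs the same loop over all layers and samples.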
978-1-4244-5046-6/10/$26.00 ©2010 IEEE
Xinmin Wang
School of Automation Northwestern Polytechnical
University, Xi’an, China; wxmin@nwpu.edu.cn
Abstract—Because the standard BP neural network algorithm requires many iterations and adjusts the weights slowly, improvements are made to it. The momentum term of the weight-adjustment rule is improved, making the weight adjustment faster and the adjustment process smoother. Simulation of a concrete example allows the iteration counts of the improved and the standard BP algorithms to be calculated and compared. Finally, taking a certain type of airplane as the controlled object, the improved BP neural network algorithm is used to design the control law for control-command tracking; the simulation results show that the improved algorithm achieves a faster convergence rate and better tracking accuracy.
University, Xi’an, China; zhaokairui@nwpu.edu.cn
II. STRUCTURE AND ALGORITHM OF THE STANDARD BP NEURAL NETWORK
A. Structure of the BP neural network
The standard structure of a typical three-layer feed-forward
ΔW(n) = −η ∂E/∂W(n) + αΔW(n−1)    (2)
In formula (2), αΔW(n−1) represents the momentum term; ΔW(n−1) represents the weight adjustment value generated by the (n−1)th iteration; α represents the
From formula (1), the learning rate η influences the
weight adjustment value ΔW(n), and thus the convergence rate of the network. If the learning rate η is too
smoothing coefficient, whose value lies between 0 and 1.
Formula (2) is an improvement on formula (1); it can raise the convergence rate of the neural network to a certain degree, but the effect is not obvious.
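The momentum update of formula (2) can be sketched as follows. The variable names and the toy quadratic error surface are our own assumptions, chosen only so the update rule is visible in isolation:

```python
# Momentum update of formula (2): dW(n) = -eta * dE/dW + alpha * dW(n-1).
# Toy error surface E(w) = (w - 3)^2, so dE/dw = 2*(w - 3); the minimum
# is at w = 3. (Illustrative values, not from the paper.)

def grad(w):
    return 2.0 * (w - 3.0)

def momentum_descent(eta=0.05, alpha=0.9, iters=200):
    w, dw_prev = 0.0, 0.0
    for _ in range(iters):
        dw = -eta * grad(w) + alpha * dw_prev  # formula (2)
        w += dw
        dw_prev = dw                           # remember dW(n-1)
    return w

print(momentum_descent())  # approaches the minimum at w = 3
```

The αΔW(n−1) term carries over part of the previous step, so consecutive adjustments in the same direction reinforce each other while sign flips are damped, which is exactly the smoothing effect the text describes.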
represents the weight adjustment value of the nth iteration; E(n) represents the error of the nth iteration; W(n) represents the connection weight of the nth iteration.
According to the type of neuron connection, neural networks can be divided into several classes. This paper studies the feed-forward neural network; because the feed-forward network uses error back-propagation in its weight-training process, it is also known as the back-propagation neural network, or BP network for short [2,3]. The BP neural network is a core member of the feed-forward family; it realizes a special non-linear transformation that maps the input space to the output space.
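The input-space-to-output-space transformation just described can be sketched for a three-layer network. The layer sizes, weights, and the sigmoid activation below are arbitrary illustrations of ours, not values from the paper:

```python
import math

def sigmoid(net):
    # Common S-shaped activation used in BP networks
    return 1.0 / (1.0 + math.exp(-net))

def layer(inputs, weights):
    # weights[j][i] is the connection from input i to unit j
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

def forward(x, w_hidden, w_out):
    # Three-layer feed-forward pass: input -> hidden (y) -> output (o)
    y = layer(x, w_hidden)
    return layer(y, w_out)

# 2 inputs, 3 hidden units, 2 outputs (sizes chosen for illustration)
w_hidden = [[0.1, -0.2], [0.4, 0.3], [-0.5, 0.2]]
w_out = [[0.3, -0.1, 0.2], [0.6, 0.1, -0.4]]
print(forward([1.0, 0.5], w_hidden, w_out))  # two values in (0, 1)
```

Each layer applies a weighted sum followed by the non-linearity, so stacking two such layers produces the non-linear input-to-output mapping the text refers to.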
III. IMPROVEMENT OF THE STANDARD BP NEURAL NETWORK ALGORITHM
The convergence rate of the standard BP algorithm is slow and the number of iterations is large; both have a negative influence on the rapidity of the control system. In this paper, an improvement is made to the learning rate of the standard BP algorithm to accelerate the training of the neural network.
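The exact learning-rate improvement is not spelled out at this point in the text. As one common heuristic in this spirit (entirely our assumption, not the paper's specific rule), η can be enlarged while the error keeps falling and cut back when it rises:

```python
def adapt_learning_rate(eta, prev_error, error, up=1.05, down=0.7):
    # Common heuristic: if the error decreased, cautiously enlarge eta;
    # if it increased, shrink eta to damp oscillation.
    # (Illustrative only; not the paper's specific improvement.)
    return eta * up if error < prev_error else eta * down

eta = 0.1
eta = adapt_learning_rate(eta, prev_error=0.50, error=0.40)
print(eta)  # about 0.105
```

Such a rule lets the network take large steps on smooth stretches of the error surface while automatically backing off near the minimum, which addresses the slow-convergence problem described above.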
B. Algorithm of the BP neural network
The flow chart of the standard BP neural network
algorithm is as follows[4]:
[Flow-chart quantities: connection weights w_ki and v_jk; layer outputs o_i = g(net_i) = g(ω_i g(net_{i−1})).]
small, the convergence rate becomes very slow; if the learning rate η is too large, the excessive weight adjustment
causes the convergence process to oscillate around the minimum point. To solve this problem, a momentum term is added to formula (1):
Research and Application on Improved BP Neural Network Algorithm
Rong Xie
School of Automation Northwestern Polytechnical
University, Xi’an, China; xierong2005@tom.com
Although the BP neural network has a mature theory and wide application, it still has many problems: the convergence rate is slow, many iterations are required, and the real-time performance is poor. It is necessary to improve the standard BP neural network algorithm to solve these problems and achieve optimal performance.
In order to accelerate the convergence speed of the neural networks, the weight adjustment formula needs to be further improved.
network is shown as follows:
[Figure 1: inputs x_1 … x_{n1}, hidden units y_1 … y_m, and outputs o_1 … o_{n2}, fully connected by weights w (input-to-hidden) and v (hidden-to-output).]
Figure 1 The standard structure of a typical three-layer feed-forward network
Yan Li
School of Automation Northwestern Polytechnical
University, Xi’an, China; liyan@nwpu.edu.cn
Kairui Zhao
School of Automation Northwestern Polytechnical
For the standard BP algorithm, the formula to calculate the weight adjustment is as follows:
ΔW(n) = −η ∂E/∂W(n)    (1)
In formula (1), η represents the learning rate; ΔW(n)
Keywords—improved BP neural network; weight adjustment; learning rate; convergence rate; momentum term
I. INTRODUCTION
Artificial neural networks (ANN) were developed from research on complex biological neural networks. The human brain consists of about 10^11 highly interconnected units called neurons, and each neuron has about 10^4 connections [1]. By imitating biological neurons, a neuron can be expressed mathematically; this yields the concept of the artificial neural network, and different network types can be defined by different interconnections of neurons. The use of artificial neural networks is an important area of intelligent control.
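The mathematical expression of a neuron mentioned above can be sketched as a weighted sum passed through an activation function. The function names, the sigmoid choice, and the sample values are our own illustration:

```python
import math

def sigmoid(net):
    # Common S-shaped activation used in neural networks
    return 1.0 / (1.0 + math.exp(-net))

def neuron_output(inputs, weights, bias=0.0):
    # A neuron computes a weighted sum of its inputs, then
    # passes it through a non-linear activation function.
    net = sum(x * w for x, w in zip(inputs, weights)) + bias
    return sigmoid(net)

print(neuron_output([1.0, 0.5], [0.4, -0.2]))  # a value in (0, 1)
```

Networks of such units, wired together in different patterns, give the different ANN types the paragraph refers to.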