Foreign Literature Translation --- Artificial Neural Networks


Artificial Neural Networks (354-page document)


Model construction: in the conventional approach a person builds a model from a known environment; a neural network extracts the underlying content from sample data automatically (it adapts to the application environment by itself).
Suitable domains: exact computation (symbol processing, numerical computation) for the former; inexact computation (analog processing, perception, massively parallel processing of large data sets) for the latter.
Object being simulated: the left brain (logical thinking) versus the right brain (imagery-based thinking).
1.2 Characteristics of Artificial Neural Networks
• Distributed representation of information
• Globally parallel computation with local operations
• Nonlinear processing
– Traditional artificial-intelligence techniques: simulation from the psychological point of view
– Techniques based on artificial neural networks: simulation from the physiological point of view
1.1 The Emergence of Artificial Neural Networks
• An artificial neural network (Artificial Neural Networks, ANN) is a description of the first-order characteristics of the human brain system. Simply put, it is a mathematical model that can be implemented with electronic circuits or simulated by a computer program, and it is one approach to artificial intelligence research.
Course Objectives and Basic Requirements
• Understand the research ideas behind artificial neural networks and learn some of the problem-solving methods of the field's pioneers.
• Gain initial experience with the usage and performance of the models through experiments.
• Consult suitable references and combine what is learned with your own future research topics (including graduate thesis research), so as both to enrich the course and to serve research and application goals.
• Master the software implementation methods.
1.1 The Emergence of Artificial Neural Networks
• Difficulties (of the symbolic approach):
– Abstraction: some properties are discarded while others are retained.
– Formalization: the existence and operation of a physical system are expressed with physical symbols and the corresponding rules.
• Limitations:
– Global judgment, fuzzy-information processing, and multi-granularity visual information processing are very difficult for it.

Artificial Intelligence 9: Foundations of Artificial Neural Networks


Chapter 9: Foundations of Artificial Neural Networks. An artificial neural network (Artificial Neural Network, ANN) is a route to artificial intelligence based on simulating the human brain's nervous system; understanding the structure and function of that nervous system is therefore the basis for building artificial neural networks.

Existing research shows that the human brain is formed by a vast number of biological neurons that are extensively interconnected. Accordingly, people first model the biological neuron to obtain an artificial neuron, and then connect artificial neurons together to form an artificial neural network.

For this reason, AI researchers often call this line of research "connectionism."

Because an artificial neural network starts by simulating the structure of the brain and tries to reach functional simulation through structural simulation, it is the opposite of symbolic AI, which first focuses on the functions of human intelligence and then realizes them through algorithms. To distinguish these two opposite routes, symbolic AI is called the "top-down" approach, and artificial neural networks the "bottom-up" approach.

There are two basic problems in artificial neural networks.

The first is the structural problem: how to model the biological neurons in the brain and the way they are interconnected.

Once the artificial neuron model and the way artificial neurons are interconnected have been determined, the network structure is fixed.

The second is how to realize the desired function on the chosen structure. This is generally, one might even say necessarily, achieved by having the artificial neural network learn, so it is essentially the learning problem of artificial neural networks.

Concretely, it is the problem of how to use learning to determine automatically, from training data, the connection weights between the neurons in the network.

This is the core problem of artificial neural networks: their degree of intelligence is reflected mainly in the learning algorithm, and progress in the field has come mainly from advances in learning algorithms.

Of course, the learning algorithm and the network structure are closely linked; the structure largely determines which learning algorithm can be used.
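As a concrete illustration of what "determining the connection weights from training data" means, here is a minimal sketch in Python; the data, the learning rate and the use of the delta (LMS) rule are illustrative assumptions, not part of the original text:

```python
import numpy as np

# Minimal sketch: learn the weights of a single linear neuron y = w.x + b
# from input-output samples with the delta (LMS) rule.
def train_linear_neuron(X, t, lr=0.05, epochs=200):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(X, t):
            y = np.dot(w, x) + b      # neuron output
            error = target - y        # teacher signal minus output
            w += lr * error * x       # delta-rule weight update
            b += lr * error
    return w, b

# Toy data: targets generated by t = 2*x1 - 3*x2 + 1
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
t = 2 * X[:, 0] - 3 * X[:, 1] + 1
w, b = train_linear_neuron(X, t)
print(w, b)   # should approach [2, -3] and 1
```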

This chapter first describes the human brain's nervous system, then explains the artificial neuron model, and finally introduces the basic structural types and learning modes of artificial neural networks.

9.1 The Human Brain's Nervous System. An artificial neural network is a simplification and simulation of the human brain at the level of nerve cells, and its core is the artificial neuron.

The form of the artificial neuron derives from the study of biological neurons in neurophysiology.

Therefore, before describing the artificial neuron, we first review current understanding of the composition and working mechanism of biological neurons.

Chapter 7: Artificial Neural Networks


A Three-Layer Feedforward Neural Network
Consider a three-layer feedforward network. Let the input layer have n1 nodes, the middle (hidden) layer n2 nodes, and the output layer m nodes. Let Yi1 be the output of input-layer node i, Yj2 the output of middle-layer node j, and Yk3 the output of output-layer node k. Wij is the connection weight between node i and node j, and Wjk the connection weight between middle-layer node j and output-layer node k; θj and θk are the thresholds of middle-layer node j and output-layer node k, and Tk is the teacher signal corresponding to output-layer node k. A single unit computes
y = sgn(Σi Wi·Ii)
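The following is a minimal, hypothetical sketch of the forward pass of such a three-layer network; the tanh activation and the random weights are assumptions for illustration only:

```python
import numpy as np

# Minimal sketch (notation assumed from the slide): forward pass of a
# three-layer feedforward network with n1 inputs, n2 hidden nodes and m outputs.
def forward(x, W_ih, theta_h, W_ho, theta_o, f=np.tanh):
    y1 = x                                  # input-layer outputs Y^1
    y2 = f(W_ih.T @ y1 - theta_h)           # middle-layer outputs Y^2
    y3 = f(W_ho.T @ y2 - theta_o)           # output-layer outputs Y^3
    return y3

rng = np.random.default_rng(0)
n1, n2, m = 3, 4, 2
W_ih = rng.normal(size=(n1, n2))   # W_ij: input node i -> hidden node j
W_ho = rng.normal(size=(n2, m))    # W_jk: hidden node j -> output node k
theta_h = np.zeros(n2)             # hidden thresholds θ_j
theta_o = np.zeros(m)              # output thresholds θ_k
print(forward(np.array([0.5, -1.0, 2.0]), W_ih, theta_h, W_ho, theta_o))
```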
A Formal Description of the Artificial Neuron
The mathematical model of the artificial neuron is shown in the figure. (Figure: inputs x1, x2, …, xn converge on a single unit with internal state u.)
A Formal Description of the Artificial Neuron (continued)
Here ui is the internal state of the i-th neuron, θi is the neuron's threshold, xj is an input signal, wji is the weight of the connection from neuron j to neuron i, and si is the external input signal to neuron i. Under these assumptions the model can be written as:
ui = f(Σj xj·wji + si − θi)
yi = g(ui) = h(Σj xj·wji + si − θi), where h is the composition g∘f.
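A minimal sketch of this neuron model in Python, assuming (since the slide leaves them generic) that f is the identity and g is a logistic function:

```python
import numpy as np

# Minimal sketch of the neuron above: u_i = f(sum_j x_j*w_ji + s_i - theta_i),
# y_i = g(u_i). The choices f = identity and g = logistic are assumptions.
def neuron_output(x, w, s_i=0.0, theta_i=0.0,
                  f=lambda u: u,
                  g=lambda u: 1.0 / (1.0 + np.exp(-u))):
    u_i = f(np.dot(x, w) + s_i - theta_i)   # internal state
    return g(u_i)                           # external output y_i

x = np.array([0.2, -0.5, 1.0])   # inputs x_j
w = np.array([0.8, 0.1, -0.3])   # weights w_ji
print(neuron_output(x, w, s_i=0.1, theta_i=0.05))
```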
Commonly Used Neuron State-Transition Functions
Step function:
y = f(x) = 1 if x ≥ 0; 0 if x < 0
Piecewise-linear (quasi-linear) function:
y = f(x) = 0 if x ≤ 0; x if 0 < x < 1; 1 if x ≥ 1
Sigmoid function:
f(x) = 1 / (1 + e^(−x))
Hyperbolic tangent function: f(x) = th(x)
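For reference, minimal Python implementations of the four state-transition functions above (straightforward transcriptions of the formulas):

```python
import numpy as np

# Minimal sketches of the four state-transition functions listed above.
def step(x):
    return np.where(x >= 0, 1.0, 0.0)

def ramp(x):                      # piecewise-linear ("quasi-linear") function
    return np.clip(x, 0.0, 1.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tanh_fn(x):                   # hyperbolic tangent, f(x) = th(x)
    return np.tanh(x)

xs = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
for fn in (step, ramp, sigmoid, tanh_fn):
    print(fn.__name__, fn(xs))
```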
…and outputs the network's prediction for the input sample. (Figure: a feedforward neural network with two hidden layers.)

An Introduction to Artificial Neural Networks


Contents:
1 Concept, characteristics and principles of artificial neural networks
1.1 The concept of artificial neural networks
1.2 Characteristics and uses of artificial neural networks
1.3 Basic principles of artificial neural networks
2 Classification and operation of artificial neural networks
2.1 Classification of artificial neural network models
2.2 How an artificial neural network operates
3 Basic artificial neural network models
3.1 The perceptron
3.2 Linear neural networks
3.3 The BP (Back Propagation) network
3.4 Radial basis function networks
3.5 Feedback (recurrent) neural networks
3.6 Competitive neural networks

1 Concept, characteristics and principles of artificial neural networks
An artificial neural network (Artificial Neural Networks, ANN) is a description of the first-order characteristics of the human brain system.

Simply put, it is a mathematical model that can be implemented with electronic circuits or simulated by a computer program; it is one approach to artificial intelligence research.

1.1 The concept of artificial neural networks. Using machines to imitate human intelligence has long been an ideal of humankind in understanding and transforming nature.

Since the advent of electronic computers capable of storing information and performing numerical and logical operations, their functions and performance have developed continuously, and research and development on machine intelligence has received ever more attention.

In 1956, J. McCarthy and others put forward the concept of artificial intelligence, giving rise to an interdisciplinary field closely related to neurophysiology, cognitive science, mathematics, information theory and computer science.

Artificial neural networks are a part of artificial intelligence; they were proposed in the 1950s, rose to prominence in the mid-1980s, and in recent years have become a research hotspot pursued by scientists in many fields.

An artificial neural network is a theoretical mathematical model of the human brain and its activity; it is built from a large number of processing elements interconnected in an appropriate way and is a large-scale nonlinear adaptive system. In 1988, Hecht-Nielsen gave the following definition: an artificial neural network is a parallel, distributed structure formed of processing elements interconnected by unidirectional signal channels called connections.

These processing elements (PE, Processing Element) have local memory and can carry out localized operations.

Each processing element has a single output connection; this output can be branched into as many parallel connections as desired, and all of these parallel connections carry the same signal, namely the signal of that processing element.

Artificial Neural Networks (ANN)


Stimulus: ui = Σj wij·xj
Response: yi = f(urest + ui)
"Hard" threshold:
f(z) = ON if z ≥ θ (the threshold), OFF otherwise
• ex: Perceptrons, Hopfield NNs, Boltzmann Machines
Smooth (sigmoid-type) activation:
f(z) = 2 / (1 + e^(−z)) − 1
• ex: MLPs, Recurrent NNs, RBF NNs...
• Main drawbacks: difficult to process time patterns, biologically implausible.
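A minimal sketch contrasting the two unit types listed above, with illustrative weights and inputs (assumed, not from the slides):

```python
import numpy as np

# Minimal sketch: a "hard" threshold unit versus a smooth bipolar-sigmoid unit,
# both driven by u_i = sum_j w_ij * x_j.
def hard_threshold_unit(x, w, theta=0.0):
    u = np.dot(w, x)
    return 1.0 if u >= theta else -1.0        # ON / OFF

def bipolar_sigmoid_unit(x, w):
    u = np.dot(w, x)
    return 2.0 / (1.0 + np.exp(-u)) - 1.0     # f(z) = 2/(1+e^-z) - 1, in (-1, 1)

x = np.array([1.0, -0.5, 0.3])
w = np.array([0.4, 0.2, -0.7])
print(hard_threshold_unit(x, w), bipolar_sigmoid_unit(x, w))
```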
The Complexity of Neural Networks
• Neural networks are complex and diverse not only because of the huge numbers of neurons and synapses, their complicated patterns of combination and their extensive interconnections, but also because the mechanisms of synaptic transmission are complex. The transmission mechanisms discovered and elucidated so far include postsynaptic excitation, postsynaptic inhibition, presynaptic inhibition, presynaptic excitation, and "remote" inhibition. In synaptic transmission, the release of neurotransmitters is the central step, and different neurotransmitters act in different ways with different characteristics.
• 10 billion neurons in human brain
• Summation of input stimuli
– Spatial (signals)
– Temporal (pulses)

An Introduction to Artificial Neural Networks


This article describes the fundamentals of artificial neural networks, covering their concept, development, characteristics, structure and models.

It is a popular-science piece compiled from material available online.

1. The concept of artificial neural networks. An artificial neural network (Artificial Neural Network, ANN), or simply neural network (NN), is a mathematical model that, building on the basic principles of biological neural networks and on an understanding and abstraction of the brain's structure and of its response to external stimuli, uses network topology as its theoretical foundation to simulate how the brain's nervous system processes complex information.

The model is characterized by parallel, distributed processing, high fault tolerance, intelligence and the ability to learn on its own; it combines information processing and storage, and its distinctive way of representing knowledge and its intelligent, adaptive learning ability have attracted attention across many disciplines.

It is in fact a complex network formed by interconnecting a large number of simple elements; it is highly nonlinear and is a system capable of complicated logical operations and of realizing nonlinear relationships.

A neural network is a computational model made up of a large number of interconnected nodes (also called neurons).

Each node represents a particular output function, called the activation function.

Every connection between two nodes carries a weighting applied to the signal passing through it, called the weight; it is in this way that a neural network simulates human memory.

The output of the network depends on the network's structure, its pattern of connections, the weights and the activation functions.

The network itself is usually an approximation to some algorithm or function found in nature, or possibly the expression of a logical strategy.
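To make this dependence concrete, here is a small hypothetical sketch in which the structure (the connection list), the weights and the activation functions together determine the output; all names and numbers are illustrative:

```python
import math

# Minimal sketch: the network output as a function of structure, weights and
# activation. Hypothetical tiny network: two inputs -> one hidden node -> one output.
def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

# structure and weights: (source node, target node, weight)
connections = [("x1", "h1", 0.5), ("x2", "h1", -1.0), ("h1", "o1", 2.0)]
activation = {"h1": logistic, "o1": logistic}

def run(inputs):
    values = dict(inputs)
    for node in ("h1", "o1"):                    # evaluation order
        net = sum(w * values[src] for src, dst, w in connections if dst == node)
        values[node] = activation[node](net)
    return values["o1"]

print(run({"x1": 1.0, "x2": 0.5}))
```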

The idea behind constructing neural networks was inspired by the operation of biological neural networks.

Artificial neural networks combine our understanding of biological neural networks with mathematical and statistical models, and are realized with the help of statistical tools.

On the other hand, in the artificial-perception area of artificial intelligence, statistical methods enable neural networks to exhibit decision-making and simple judgment abilities similar to those of humans; this approach is a further extension of traditional logical calculus.

In an artificial neural network, the neuron processing elements can represent different kinds of objects, such as features, letters, concepts, or meaningful abstract patterns.

The processing elements in a network fall into three types: input units, output units and hidden units.

Input units receive signals and data from the outside world; output units deliver the system's processing results; hidden units lie between the input and output units and cannot be observed from outside the system.

An Overview of Artificial Neural Network Knowledge


Artificial neural network (Artificial Neural Networks, ANN) systems emerged after the 1940s.

They are formed by connecting numerous neurons through adjustable connection weights, and feature massive parallel processing, distributed information storage, and good self-organization and self-learning abilities.

The BP (Back Propagation) algorithm, also called the error back-propagation algorithm, is a supervised learning algorithm for artificial neural networks.

In theory, a BP neural network can approximate any function; its basic structure is composed of nonlinear transformation units, giving it strong nonlinear mapping ability.

Moreover, parameters such as the number of hidden layers, the number of processing units per layer and the learning rate can be set according to the situation, giving the method great flexibility; it has broad application prospects in optimization, signal processing and pattern recognition, intelligent control, fault diagnosis and many other fields.
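As an illustration of the BP idea, here is a minimal, hypothetical sketch of error back-propagation for a network with one hidden layer of sigmoid units, trained by plain gradient descent on the XOR problem (the architecture, data and learning rate are assumptions, not from the original text):

```python
import numpy as np

# Minimal BP sketch: one hidden layer of sigmoid units, squared-error loss,
# plain gradient descent. The XOR data and layer sizes are illustrative.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)          # XOR targets

W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)
lr = 0.5

for _ in range(5000):
    # forward pass
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    # backward pass: propagate the output error toward the input layer
    dY = (Y - T) * Y * (1 - Y)
    dH = (dY @ W2.T) * H * (1 - H)
    # gradient-descent weight updates
    W2 -= lr * H.T @ dY;  b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH;  b1 -= lr * dH.sum(axis=0)

print(np.round(Y, 2))   # should approach [[0], [1], [1], [0]]
```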

Research on artificial neurons originated from the neuron doctrine of the brain.

At the end of the 19th century, in biology and physiology, Waldeyer and others established the neuron doctrine.

It came to be recognized that the complex nervous system is composed of an enormous number of neurons.

The cerebral cortex contains more than 10 billion neurons, tens of thousands per cubic millimetre. They interconnect to form neural networks that receive all kinds of information from inside and outside the body through the sense organs and nerves, pass it to the central nervous system, analyze and integrate it, and then send out control information through the motor nerves, thereby linking the organism with its internal and external environment and coordinating the functional activities of the whole body.

Like other types of cells, a neuron has a cell membrane, cytoplasm and a nucleus.

But nerve cells have a distinctive shape with many protrusions, and are therefore divided into three parts: the cell body, the axon and the dendrites.

The nucleus lies in the cell body, and the protrusions serve to transmit information.

The dendrites are the protrusions that bring in input signals, while the axon is the protrusion that acts as the output terminal; there is only one axon.

Dendrites are extensions of the cell body; after emerging from it they gradually taper, and along their whole length they can form contacts with the axon terminals of other neurons, the so-called "synapses."

At a synapse the two neurons are not actually joined; it is merely the junction where information transfer takes place, and the gap between the contacting surfaces is about (15~50)×10^-9 m.

Synapses can be divided into excitatory and inhibitory types, corresponding to the polarity of the coupling between neurons.

Each neuron normally carries a very large number of synapses, up to about 10^4.

The strength and polarity of the connections differ from neuron to neuron, and all of them can be adjusted; it is on this property that the brain's ability to store information is based.

The Definition of Artificial Neural Networks


Artificial neural networks (Artificial Neural Networks, abbreviated ANNs), also called neural networks or connectionist models, are an abstraction and simulation of several basic properties of the human brain or of natural neural networks.

Artificial neural networks are based on the results of physiological research on the brain; their purpose is to simulate some of the brain's mechanisms so as to realize certain of its functions.

Hecht-Nielsen, the internationally renowned neural network researcher and founder and leader of the first neurocomputer company, defined an artificial neural network as "a dynamical system, artificially constructed with a directed graph as its topology, that processes information by responding in state to continuous or intermittent input." This definition is apt.

Research on artificial neural networks can be traced back to the perceptron model proposed by Rosenblatt in 1957.

It started almost at the same time as artificial intelligence (AI), but for more than thirty years it did not achieve the kind of great success AI did, and went through a long period of depression in between.

Only in the 1980s, after practical, workable algorithms for artificial neural networks were obtained and traditional algorithms resting on the von Neumann architecture increasingly showed their inadequacy for knowledge processing, did people become interested in artificial neural networks again, leading to the revival of the field.

Several schools of thought have now formed in neural network research; the most fruitful work includes the BP algorithm for multilayer networks, the Hopfield network model, adaptive resonance theory, and self-organizing feature maps.

Artificial neural networks were proposed on the basis of modern neuroscience.

Although they reflect basic features of brain function, they are far from being a faithful description of natural neural networks; they are only a simplified abstraction and simulation of them.

Basic Principles of Artificial Neural Networks


An artificial neural network (Artificial Neural Networks, ANN) is an algorithmic, mathematical model that imitates the behavioral characteristics of animal neural networks and performs distributed, parallel information processing.

Relying on the complexity of the system, such a network processes information by adjusting the interconnections among its large number of internal nodes.

Artificial neural networks have self-learning and adaptive abilities: given a batch of corresponding input-output data in advance, they can analyze and grasp the underlying regularities between the two and finally use those regularities to infer outputs for new inputs. This process of learning and analysis is called "training."

(Quoted from Huanqiu Kexue (Global Science), 2007, Issue 1, "Neural Language: The Secret Beneath a Rat's Whiskers.") Concept: a nonlinear, adaptive information-processing system composed of a large number of interconnected processing elements.

It was proposed on the basis of modern neuroscience research, and attempts to process information by simulating the way the brain's neural networks process and memorize information.

Artificial neural networks have four basic characteristics: (1) Nonlinearity. Nonlinear relationships are a universal property of nature.

The intelligence of the brain is itself a nonlinear phenomenon.

An artificial neuron can be in one of two states, activated or inhibited, and mathematically this behavior is a nonlinear relationship.

Networks built from neurons with thresholds perform better, with improved fault tolerance and storage capacity.

(2) Non-locality. A neural network is usually formed by the extensive connection of many neurons.

The overall behavior of the system depends not only on the characteristics of individual neurons but may be determined mainly by the interactions and interconnections among the units.

The non-locality of the brain is simulated through the large number of connections among units.

Associative memory is a typical example of non-locality.

(3) Non-stationarity. Artificial neural networks are adaptive, self-organizing and capable of learning.

Not only can the information a neural network processes vary in many ways, but while processing information the nonlinear dynamical system itself keeps changing.

Iterative processes are often used to describe the evolution of the dynamical system.

(4) Non-convexity. Under certain conditions the direction in which a system evolves depends on a particular state function.

An energy function, for example, whose extrema correspond to relatively stable states of the system.

Non-convexity means that such a function has multiple extrema, so the system has multiple relatively stable equilibrium states, which leads to diversity in how the system evolves.
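For illustration, the hypothetical sketch below computes a Hopfield-style energy function for a tiny network and lets it settle by asynchronous updates; different starting states can settle into different minima, which is exactly the multiplicity of stable equilibria described above (weights and states are assumptions, not from the original text):

```python
import numpy as np

# Hypothetical Hopfield-style example: energy E(s) = -1/2 * s^T W s + theta . s,
# bipolar states s_i in {-1, +1}, symmetric weights, zero self-connections.
W = np.array([[0.0, 1.0, -1.0],
              [1.0, 0.0, 1.0],
              [-1.0, 1.0, 0.0]])
theta = np.zeros(3)

def energy(s):
    return -0.5 * s @ W @ s + theta @ s

def settle(s):
    s = s.copy()
    for _ in range(10):                       # a few asynchronous sweeps
        for i in range(len(s)):
            s[i] = 1.0 if W[i] @ s - theta[i] >= 0 else -1.0
    return s

for start in (np.array([1.0, -1.0, 1.0]), np.array([-1.0, -1.0, 1.0])):
    final = settle(start)
    print(start, "->", final, "energy:", energy(final))
```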

In an artificial neural network, the neuron processing elements can represent different kinds of objects, such as features, letters, concepts, or meaningful abstract patterns.

Introduction to Neural Networks: Foreign Literature Translation (English and Chinese)


Foreign literature translation (including the English original and the Chinese translation). English original:
Neural Network Introduction
1 Objectives
As you read these words you are using a complex biological neural network. You have a highly interconnected set of some 10^11 neurons to facilitate your reading, breathing, motion and thinking. Each of your biological neurons, a rich assembly of tissue and chemistry, has the complexity, if not the speed, of a microprocessor. Some of your neural structure was with you at birth. Other parts have been established by experience.
Scientists have only just begun to understand how biological neural networks operate. It is generally understood that all biological neural functions, including memory, are stored in the neurons and in the connections between them. Learning is viewed as the establishment of new connections between neurons or the modification of existing connections. This leads to the following question: although we have only a rudimentary understanding of biological neural networks, is it possible to construct a small set of simple artificial "neurons" and perhaps train them to serve a useful function? The answer is "yes." This book, then, is about artificial neural networks.
The neurons that we consider here are not biological. They are extremely simple abstractions of biological neurons, realized as elements in a program or perhaps as circuits made of silicon. Networks of these artificial neurons do not have a fraction of the power of the human brain, but they can be trained to perform useful functions. This book is about such neurons, the networks that contain them and their training.
2 History
The history of artificial neural networks is filled with colorful, creative individuals from many different fields, many of whom struggled for decades to develop concepts that we now take for granted. This history has been documented by various authors. One particularly interesting book is Neurocomputing: Foundations of Research by John Anderson and Edward Rosenfeld. They have collected and edited a set of some 43 papers of special historical interest. Each paper is preceded by an introduction that puts the paper in historical perspective.
Histories of some of the main neural network contributors are included at the beginning of various chapters throughout this text and will not be repeated here. However, it seems appropriate to give a brief overview, a sample of the major developments.
At least two ingredients are necessary for the advancement of a technology: concept and implementation. First, one must have a concept, a way of thinking about a topic, some view of it that gives clarity not there before. This may involve a simple idea, or it may be more specific and include a mathematical description. To illustrate this point, consider the history of the heart. It was thought to be, at various times, the center of the soul or a source of heat. In the 17th century medical practitioners finally began to view the heart as a pump, and they designed experiments to study its pumping action. These experiments revolutionized our view of the circulatory system. Without the pump concept, an understanding of the heart was out of grasp.
Concepts and their accompanying mathematics are not sufficient for a technology to mature unless there is some way to implement the system.
For instance, the mathematics necessary for the reconstruction of images from computer-aided tomography (CAT) scans was known many years before the availability of high-speed computers and efficient algorithms finally made it practical to implement a useful CAT system.
The history of neural networks has progressed through both conceptual innovations and implementation developments. These advancements, however, seem to have occurred in fits and starts rather than by steady evolution.
Some of the background work for the field of neural networks occurred in the late 19th and early 20th centuries. This consisted primarily of interdisciplinary work in physics, psychology and neurophysiology by such scientists as Hermann von Helmholtz, Ernst Mach and Ivan Pavlov. This early work emphasized general theories of learning, vision, conditioning, etc., and did not include specific mathematical models of neuron operation.
The modern view of neural networks began in the 1940s with the work of Warren McCulloch and Walter Pitts [McPi43], who showed that networks of artificial neurons could, in principle, compute any arithmetic or logical function. Their work is often acknowledged as the origin of the neural network field.
McCulloch and Pitts were followed by Donald Hebb [Hebb49], who proposed that classical conditioning (as discovered by Pavlov) is present because of the properties of individual neurons. He proposed a mechanism for learning in biological neurons.
The first practical application of artificial neural networks came in the late 1950s, with the invention of the perceptron network and associated learning rule by Frank Rosenblatt [Rose58]. Rosenblatt and his colleagues built a perceptron network and demonstrated its ability to perform pattern recognition. This early success generated a great deal of interest in neural network research. Unfortunately, it was later shown that the basic perceptron network could solve only a limited class of problems. (See Chapter 4 for more on Rosenblatt and the perceptron learning rule.) At about the same time, Bernard Widrow and Ted Hoff [WiHo60] introduced a new learning algorithm and used it to train adaptive linear neural networks, which were similar in structure and capability to Rosenblatt's perceptron. The Widrow-Hoff learning rule is still in use today. (See Chapter 10 for more on Widrow-Hoff learning.) Unfortunately, both Rosenblatt's and Widrow's networks suffered from the same inherent limitations, which were widely publicized in a book by Marvin Minsky and Seymour Papert [MiPa69]. Rosenblatt and Widrow were aware of these limitations and proposed new networks that would overcome them. However, they were not able to successfully modify their learning algorithms to train the more complex networks.
Many people, influenced by Minsky and Papert, believed that further research on neural networks was a dead end. This, combined with the fact that there were no powerful digital computers on which to experiment, caused many researchers to leave the field. For a decade neural network research was largely suspended. Some important work, however, did continue during the 1970s. In 1972 Teuvo Kohonen [Koho72] and James Anderson [Ande72] independently and separately developed new neural networks that could act as memories.
Stephen Grossberg [Gros76] was also very active during this period in the investigation of self-organizing networks.
Interest in neural networks had faltered during the late 1960s because of the lack of new ideas and powerful computers with which to experiment. During the 1980s both of these impediments were overcome, and research in neural networks increased dramatically. New personal computers and workstations, which rapidly grew in capability, became widely available. In addition, important new concepts were introduced.
Two new concepts were most responsible for the rebirth of neural networks. The first was the use of statistical mechanics to explain the operation of a certain class of recurrent network, which could be used as an associative memory. This was described in a seminal paper by physicist John Hopfield [Hopf82]. The second key development of the 1980s was the backpropagation algorithm for training multilayer perceptron networks, which was discovered independently by several different researchers. The most influential publication of the backpropagation algorithm was by David Rumelhart and James McClelland [RuMc86]. This algorithm was the answer to the criticisms Minsky and Papert had made in the 1960s. (See Chapters 11 and 12 for a development of the backpropagation algorithm.) These new developments reinvigorated the field of neural networks. In the last ten years, thousands of papers have been written, and neural networks have found many applications. The field is buzzing with new theoretical and practical work. As noted below, it is not clear where all of this will lead us.
The brief historical account given above is not intended to identify all of the major contributors, but is simply to give the reader some feel for how knowledge in the neural network field has progressed. As one might note, the progress has not always been "slow but sure." There have been periods of dramatic progress and periods when relatively little has been accomplished.
Many of the advances in neural networks have had to do with new concepts, such as innovative architectures and training. Just as important has been the availability of powerful new computers on which to test these new concepts.
Well, so much for the history of neural networks to this date. The real question is, "What will happen in the next ten to twenty years?" Will neural networks take a permanent place as a mathematical/engineering tool, or will they fade away as have so many promising technologies? At present, the answer seems to be that neural networks will not only have their day but will have a permanent place, not as a solution to every problem, but as a tool to be used in appropriate situations. In addition, remember that we still know very little about how the brain works. The most important advances in neural networks almost certainly lie in the future.
Although it is difficult to predict the future success of neural networks, the large number and wide variety of applications of this new technology are very encouraging. The next section describes some of these applications.
3 Applications
A recent newspaper article described the use of neural networks in literature research by Aston University. It stated that "the network can be taught to recognize individual writing styles, and the researchers used it to compare works attributed to Shakespeare and his contemporaries." A popular science television program recently documented the use of neural networks by an Italian research institute to test the purity of olive oil.
These examples are indicative of the broad range of applications that can be found for neural networks. The applications are expanding because neural networks are good at solving problems, not just in engineering, science and mathematics, but in medicine, business, finance and literature as well. Their application to a wide variety of problems in many fields makes them very attractive. Also, faster computers and faster algorithms have made it possible to use neural networks to solve complex industrial problems that formerly required too much computation.
The following note and Table of Neural Network Applications are reproduced here from the Neural Network Toolbox for MATLAB with the permission of The MathWorks, Inc.
The 1988 DARPA Neural Network Study [DARP88] lists various neural network applications, beginning with the adaptive channel equalizer in about 1984. This device, which is an outstanding commercial success, is a single-neuron network used in long distance telephone systems to stabilize voice signals. The DARPA report goes on to list other commercial applications, including a small word recognizer, a process monitor, a sonar classifier and a risk analysis system.
Neural networks have been applied in many fields since the DARPA report was written. A list of some applications mentioned in the literature follows.
Aerospace: High performance aircraft autopilots, flight path simulations, aircraft control systems, autopilot enhancements, aircraft component simulations, aircraft component fault detectors
Automotive: Automobile automatic guidance systems, warranty activity analyzers
Banking: Check and other document readers, credit application evaluators
Defense: Weapon steering, target tracking, object discrimination, facial recognition, new kinds of sensors, sonar, radar and image signal processing including data compression, feature extraction and noise suppression, signal/image identification
Electronics: Code sequence prediction, integrated circuit chip layout, process control, chip failure analysis, machine vision, voice synthesis, nonlinear modeling
Entertainment: Animation, special effects, market forecasting
Financial: Real estate appraisal, loan advisor, mortgage screening, corporate bond rating, credit line use analysis, portfolio trading program, corporate financial analysis, currency price prediction
Insurance: Policy application evaluation, product optimization
Manufacturing: Manufacturing process control, product design and analysis, process and machine diagnosis, real-time particle identification, visual quality inspection systems, beer testing, welding quality analysis, paper quality prediction, computer chip quality analysis, analysis of grinding operations, chemical product design analysis, machine maintenance analysis, project bidding, planning and management, dynamic modeling of chemical process systems
Medical: Breast cancer cell analysis, EEG and ECG analysis, prosthesis design, optimization of transplant times, hospital expense reduction, hospital quality improvement, emergency room test advisement
Oil and Gas: Exploration
Robotics: Trajectory control, forklift robot, manipulator controllers, vision systems
Speech: Speech recognition, speech compression, vowel classification, text to speech synthesis
Securities: Market analysis, automatic bond rating, stock trading advisory systems
Telecommunications: Image and data compression, automated information services, real-time translation of spoken language, customer payment processing systems
Transportation: Truck brake diagnosis systems, vehicle scheduling, routing systems
Conclusion
The number of neural network applications, the money that has been invested in neural network software and hardware, and the depth and breadth of interest in these devices have been growing rapidly.
4 Biological Inspiration
The artificial neural networks discussed in this text are only remotely related to their biological counterparts. In this section we will briefly describe those characteristics of brain function that have inspired the development of artificial neural networks.
The brain consists of a large number (approximately 10^11) of highly connected elements (approximately 10^4 connections per element) called neurons. For our purposes these neurons have three principal components: the dendrites, the cell body and the axon. The dendrites are tree-like receptive networks of nerve fibers that carry electrical signals into the cell body. The cell body effectively sums and thresholds these incoming signals. The axon is a single long fiber that carries the signal from the cell body out to other neurons. The point of contact between an axon of one cell and a dendrite of another cell is called a synapse. It is the arrangement of neurons and the strengths of the individual synapses, determined by a complex chemical process, that establishes the function of the neural network. Some of the neural structure is defined at birth. Other parts are developed through learning, as new connections are made and others waste away. This development is most noticeable in the early stages of life. For example, it has been shown that if a young cat is denied use of one eye during a critical window of time, it will never develop normal vision in that eye.
Neural structures continue to change throughout life. These later changes tend to consist mainly of strengthening or weakening of synaptic junctions. For instance, it is believed that new memories are formed by modification of these synaptic strengths. Thus, the process of learning a new friend's face consists of altering various synapses.
Artificial neural networks do not approach the complexity of the brain. There are, however, two key similarities between biological and artificial neural networks. First, the building blocks of both networks are simple computational devices (although artificial neurons are much simpler than biological neurons) that are highly interconnected. Second, the connections between neurons determine the function of the network. The primary objective of this book will be to determine the appropriate connections to solve particular problems.
It is worth noting that even though biological neurons are very slow when compared to electrical circuits, the brain is able to perform many tasks much faster than any conventional computer. This is in part because of the massively parallel structure of biological neural networks; all of the neurons are operating at the same time. Artificial neural networks share this parallel structure. Even though most artificial neural networks are currently implemented on conventional digital computers, their parallel structure makes them ideally suited to implementation using VLSI, optical devices and parallel processors.
In the following chapter we will introduce our basic artificial neuron and will explain how we can combine such neurons to form networks. This will provide a background for Chapter 3, where we take our first look at neural networks in action.
Chinese translation: An Overview of Neural Networks. 1. Objectives. As you read this book, you are using a complex biological neural network.

Foreign Literature Translation --- Artificial Neural Networks


English original:
Artificial neural networks (ANNs), also referred to simply as neural networks (NNs) or as connectionist models, are algorithmic, mathematical models that imitate the behavioral characteristics of animal neural networks and perform distributed, parallel information processing. Such a network relies on the complexity of the system and processes information by adjusting the interconnections among a large number of internal nodes. An artificial neural network has self-learning and adaptive abilities: provided in advance with a batch of corresponding input-output data, it can analyze and master the underlying regularities between them and then use those regularities to compute outputs for new input data; this process of learning and analysis is called "training." An artificial neural network is an adaptive information-processing system made up of a large number of interconnected nonlinear processing units. It was proposed on the basis of modern neuroscience research and attempts to process information by simulating the way the brain's neural networks process and memorize information. Artificial neural networks have four basic characteristics:
(1) Nonlinearity. Nonlinear relationships are a universal property of nature, and the intelligence of the brain is itself a nonlinear phenomenon. An artificial neuron is in one of two states, activated or inhibited, and mathematically this behavior is a nonlinear relationship. Networks built from neurons with thresholds perform better and can improve fault tolerance and storage capacity.
(2) Non-locality. A neural network is usually formed by the wide interconnection of many neurons. The overall behavior of the system depends not only on the characteristics of individual neurons but may be determined mainly by the interactions and connections among the units. The non-locality of the brain is simulated by the large number of connections among units; associative memory is a typical example of non-locality.
(3) Non-stationarity. Artificial neural networks are adaptive, self-organizing and capable of learning. Not only can the information they process vary in many ways, but while processing information the nonlinear dynamical system itself keeps changing; iterative processes are often used to describe its evolution.
(4) Non-convexity. Under certain conditions the direction in which a system evolves depends on a particular state function, for example an energy function, whose extrema correspond to relatively stable states of the system. Non-convexity means that such a function has multiple extrema, so the system has multiple relatively stable equilibrium states, which leads to diversity in how the system evolves.
In an artificial neural network, the processing units can represent different objects, such as features, letters, concepts, or meaningful abstract patterns. The processing units fall into three types: input units, output units and hidden units. Input units receive signals and data from the outside world; output units deliver the system's processing results; hidden units lie between the input and output units and cannot be observed from outside the system. The connection weights between neurons reflect the strength of the connections between units, and the representation and processing of information are embodied in the connections of the network.
An artificial neural network is a non-procedural, adaptive, brain-style form of information processing. Its essence is to obtain a parallel, distributed information-processing capability through the transformations and dynamical behavior of the network, and to imitate, at various levels and to various degrees, the information-processing functions of the human nervous system. It is an interdisciplinary subject involving neuroscience, the science of thinking, artificial intelligence, computer science and other fields.
An artificial neural network is a parallel, distributed system whose mechanism is completely different from that of traditional artificial intelligence and information-processing techniques. It overcomes the shortcomings of traditional, logic-symbol-based artificial intelligence in handling intuitive and unstructured information, and it is adaptive, self-organizing and capable of learning in real time.
Development history
In 1943, the psychologist W. S. McCulloch and the mathematical logician W. Pitts established a mathematical model of the neural network, called the MP model. With the MP model they proposed a formal mathematical description of the neuron and of the structure of networks of neurons, and proved that a single neuron can carry out logical functions, thereby opening the era of artificial neural network research. In 1949, the psychologist D. O. Hebb put forward the idea that the strength of synaptic contacts is variable. In the 1950s and 1960s, artificial neural networks developed further and more complete network models were proposed, including the perceptron and the adaptive linear element. In 1969, M. Minsky and others, after carefully analyzing the perceptron as representative of neural network systems, published the book Perceptrons, pointing out its functions and limitations and that the perceptron cannot solve higher-order predicate problems. Their arguments greatly influenced research on neural networks; together with the achievements of serial computers and symbolic artificial intelligence at the time, which obscured the necessity and urgency of developing new computers and new approaches to artificial intelligence, this pushed research on artificial neural networks into a low period. During this time some researchers remained committed to the field, proposing adaptive resonance theory (ART networks), self-organizing maps and the cognitron, and continuing the mathematical study of neural network theory. This work laid the foundation for the later research and development of neural networks. In 1982, the physicist J. J. Hopfield of the California Institute of Technology proposed the Hopfield neural network model, introduced the concept of "computational energy," and gave a criterion for network stability. In 1984, he further proposed the continuous-time Hopfield network model, pioneering work for neural computers that opened new routes for using neural networks in associative memory and optimization and gave a strong impetus to neural network research. In 1985, other scholars proposed the Boltzmann machine model, whose learning uses the simulated-annealing technique of statistical thermodynamics to guarantee that the whole system tends toward a globally stable point. In 1986, research on the microstructure of cognition put forward the theory of parallel distributed processing. Research on artificial neural networks has received attention in all the developed countries; the U.S. Congress passed a resolution designating the ten years beginning January 5, 1990 as the "Decade of the Brain," and international research organizations called on their members to make the Decade of the Brain a global undertaking. In Japan's "Real World Computing (RWC)" program, research on intelligence also forms an important component.
Network models
Artificial neural network models are mainly concerned with the topology of the network's connections, the characteristics of the neurons, and the learning rules. At present there are nearly 40 kinds of neural network models, including the back-propagation network, the perceptron, self-organizing maps, the Hopfield network, the Boltzmann machine and adaptive resonance theory. According to the topology of the connections, neural network models can be divided into:
(1) Feedforward networks. Each neuron receives input from the previous layer and sends its output to the next layer; there is no feedback in the network, so it can be represented by an acyclic directed graph. Such a network realizes a transformation of signals from the input space to the output space, and its information-processing power comes from the repeated composition of simple nonlinear functions. The structure is simple and easy to implement. The back-propagation network is a typical feedforward network.
(2) Feedback networks. There is feedback between the neurons, and the network can be represented by an undirected complete graph. The information processing of such a network is a transformation of states, and it can be treated with the theory of dynamical systems. The stability of the system is closely related to its associative-memory capability. The Hopfield network and the Boltzmann machine belong to this type.
Learning
Learning is an important part of neural network research; the network's adaptability is realized through learning. Weights are adjusted according to changes in the environment, improving the behavior of the system. The Hebb learning rule laid the foundation for neural network learning algorithms. The Hebb rule holds that learning ultimately takes place at the synapses between neurons, and that the strength of a synapse changes with the activity of the neurons on either side of it. On this basis, people have put forward various learning rules and algorithms to meet the needs of different network models. Effective learning algorithms enable a neural network, by adjusting its connection weights, to build an internal representation of the structure of the external world and to form a distinctive method of information processing in which the storage and processing of information are both embodied in the network's connections. Depending on the learning environment, neural network learning can be divided into supervised learning and unsupervised learning. In supervised learning, the training sample data are applied to the network input, the corresponding expected output is compared with the network's output to obtain an error signal, and this controls the adjustment of the connection weights, which converge to definite values after repeated training. When the sample situation changes, the weights can be modified through learning to adapt to the new environment. Models that use supervised learning include the back-propagation network and the perceptron. In unsupervised learning, the network is placed directly in the environment without a given sample, and the learning and working stages merge into one. Here the evolution of the weights obeys the learning rule's evolution equation. The simplest example of unsupervised learning is the Hebb learning rule. Competitive learning is a more complex example of unsupervised learning; it adjusts the weights according to established clustering. Self-organizing maps and adaptive resonance theory networks are typical models related to competitive learning.
Analysis methods
The study of the nonlinear dynamical properties of neural networks mainly uses dynamical-systems theory, nonlinear programming theory and statistical theory to analyze the evolution of the network and the nature of its attractors, to explore the cooperative behavior and collective computing capability of neural networks, and to understand the mechanisms of neural information processing. In order to discuss how neural networks may handle information comprehensively together with fuzziness, the concepts and methods of chaos theory also play a role. Chaos is a mathematical concept that is rather hard to define precisely. In general, "chaos" refers to uncertain behavior exhibited by a deterministic system described by dynamical equations, or what may be called deterministic randomness. "Deterministic" because it is produced by intrinsic causes rather than external noise or disturbance, and "random" because the behavior is irregular and unpredictable and can only be described statistically. The main feature of a chaotic dynamical system is the sensitive dependence of its state on the initial conditions; chaos reflects its intrinsic randomness. Chaos theory refers to the basic concepts and methods for describing nonlinear dynamical behavior with chaotic features; it understands the complex behavior of a dynamical system as behavior arising from the internal structure of the system itself in its exchange of matter, energy and information with the outside world, not as external or accidental behavior, and chaos is a kind of steady state. The steady states of a chaotic dynamical system include fixed points, stable limit sets, periodicity, quasi-periodicity and chaos. A chaotic trajectory is the combined result of overall stability and local instability, and is called a strange attractor. A strange attractor has the following features: (1) it is an attractor, but it is neither a fixed point nor a periodic solution; (2) it is indivisible, that is, it cannot be divided into two or more sub-attractors; (3) it is very sensitive to initial values, and different initial values can lead to very different behavior.
Advantages
The characteristics and advantages of artificial neural networks show mainly in three respects. First, self-learning. For example, for image recognition one need only feed many different image templates and the corresponding recognition results into an artificial neural network, and through its self-learning function the network will gradually learn to recognize similar images. The self-learning function is especially significant for prediction: artificial-neural-network computers are expected to provide economic, market and benefit forecasting for humankind, and their application prospects are very great. Second, associative storage. A feedback-type artificial neural network can realize this kind of association. Third, the ability to search for optimal solutions at high speed. Searching for the optimal solution to a complex problem often requires a huge amount of computation; with a feedback-type artificial neural network designed for the problem and the high-speed computing power of computers, the optimal solution may be found quickly.
Research directions
Research on neural networks can be divided into theoretical research and applied research. Theoretical research falls into two categories: 1. using neurophysiology and cognitive science to study the mechanisms of human thinking and intelligence; 2. using the results of basic neural-science theory to explore, with mathematical methods, neural network models that are more complete in function and superior in performance, and to study network algorithms and properties in depth, such as stability, convergence, fault tolerance and robustness, as well as to develop new mathematical theories of networks such as neural network dynamics and nonlinear neural fields. Applied research also falls into two categories: 1. research on the software simulation and hardware realization of neural networks; 2. research on applying neural networks in various fields, including pattern recognition, signal processing, knowledge engineering, expert systems, combinatorial optimization and robot control. As neural network theory and the related theories and technologies develop, the applications of neural networks will go further.
Development trends and research hotspots
The nonlinear, adaptive information-processing capability of artificial neural networks overcomes the shortcomings of traditional artificial-intelligence methods for intuitive tasks such as pattern and speech recognition and unstructured information processing, and it has been applied successfully in neural expert systems, pattern recognition, intelligent control, combinatorial optimization and forecasting. Combining artificial neural networks with other traditional methods will push forward the development of artificial intelligence and information processing. In recent years, artificial neural networks have been developing further along the path of simulating human cognition, combining with fuzzy systems, genetic algorithms and evolutionary mechanisms to form computational intelligence, which has become an important direction of artificial intelligence and will be developed in practical applications. Information geometry has been applied to neural network research, opening a new way for theoretical studies of artificial neural networks. The development of neural computers is advancing rapidly, and products are entering the market. Optoelectronic neural computers provide good conditions for the development of artificial neural networks. Neural networks have been applied very well in many fields, but much remains to be studied. Among the research hotspots is the combination of neural networks, with their advantages of distributed storage, parallel processing, self-learning, self-organization and nonlinear mapping, with other techniques, and the hybrid methods and hybrid systems that result. Since other methods have their own respective merits, combining them with neural networks so that their strengths complement one another can produce better application results. At present such work is under way on the fusion of neural networks with fuzzy logic, expert systems, genetic algorithms, wavelet analysis, chaos, rough set theory, fractal theory, evidence theory and grey system theory.
Chinese translation: Artificial neural networks (ANNs), also simply called neural networks (NNs) or connectionist models, are algorithmic, mathematical models that imitate the behavioral characteristics of animal neural networks and perform distributed, parallel information processing.
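As a small illustration of the unsupervised competitive learning mentioned above, here is a hypothetical winner-take-all sketch; the data, cluster centres and learning rate are assumptions:

```python
import numpy as np

# Minimal sketch of unsupervised competitive ("winner-take-all") learning:
# each weight vector drifts toward the cluster of inputs it wins most often.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal([0, 0], 0.1, size=(50, 2)),
                  rng.normal([1, 1], 0.1, size=(50, 2))])   # two assumed clusters
W = rng.uniform(0, 1, size=(2, 2))   # one weight vector per competing unit
lr = 0.1

for _ in range(20):
    for x in rng.permutation(data):
        winner = np.argmin(np.linalg.norm(W - x, axis=1))   # closest unit wins
        W[winner] += lr * (x - W[winner])                    # only the winner learns

print(np.round(W, 2))   # weight vectors should settle near (0,0) and (1,1)
```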

Artificial Neural Networks: Foreign Literature Translation


人工神经网络外文翻译附录二:外文翻译原文The Kahnawake Survival School (KSS), located in the Kahnawake Mohawk Territory on the south shore of the St. Lawrence River across from Montreal, is more than a high school. It is a teaching tool for students, a community gathering place and a shelter in case of disaster. The 5500 m2 (60,000 ft2) building is composed of a central block with two wings, spread partially on two floors. The school opened in August 2008.After a thorough inspection of the land owned by the Kahnawake Education Center (KEC), the design team chose a location in an existing clearing where the building would be adjacent to a forest but still accessible from the main highway of the reserve. The proposed location was analyzed by all building professionals to optimize the orientation/layout of the building.Considering the age and history of the site, the decision was made to design a building and make it fit the site, preventing major site modifications and unnecessary cutting down of trees. This consideration was even more important since an endangered tree species (Butternut) was present. Also, all soil removed from the site for excavation was piled near the highway to create a natural wall of dirt, grass and trees, to reduce the noise from the highway.Although building the facility in a clearing was a major construction challenge, it had many advantages for mechanical and electrical engineering.About the AuthorNicolas Lemire, P.Eng., is a principal at Pageau Morel and Associates, Montreal, QC, Canada.Table 1: Average energy consumption for teaching facilities in 2006 (per Natural Resources Canada, OEE).Average Teaching Facility in Quebec 1.90 GJ/m2/yrAverage Teaching Facility in Canada 1.70 GJ/m2/yrTable 2: Energy comparison of schools in Quebec, Ontario and Canada名称GJ/m2/yr Reduction %Average Teaching Facility in Quebec 1.90 —Average Teaching Facility in Canada 1.70 —1.30 0Reference Building (School) as PerMNECB of Canada for Quebec)1.272.4Reference Building (School) as PerMNECB of Canada for OntarioKSS (Actual Data for 2008 – 2009) 0.63 51.4Table 3: Construction costs in Canadian dollars.cost$ $/m2 $/ft2 Mechanical 1459555 265.37 24.33 Electrical 1059676 192.67 17.66 Whole Building 8591279 1562.05 143.19For example, trees surrounding the path provide shade for the south fade of the facility during summer. With leaves shed in the winter, the resulting solar gains are used as an auxiliary heating source.Table 1 shows data published by Natural Resources Canada on the average energy consumption of teaching facilities in 2006.Table 2 shows an energy comparison between the average facilities presented in Table 1, the reference building as per the Model National Energy Code for Buildings (MNECB) of Canada for a teaching facility including geothermal energy in the Province of Quebec1 and in the Province of Ontario,2 and the new Kahnawake Survival School.Table 2 shows that KSS performs well at 66.8% more efficient than the average teaching facility in the Province of Quebec and 51.4% more efficient than the reference building of the MNECB of Canada for a teaching facility in Quebec using geothermal energy.Heat recovery is applied to fresh air and exhaust using an enthalpy wheel. Coils and filters in the systems are selected at 1.8 m/s (350 fpm) to reduce static pressure loss. Variable frequency drives are installed on all fans and motors and are directly coupled to avoid belt losses and reduce maintenance. Motors are high efficiency. 
A closed loop geothermal heat exchanger (15 vertical boreholes of 137 m2 [450 ft]) is used to supply tempered glycol to local geothermal water-to-air heat pumps. A water-to-water heat pump produces hot or cold glycolto supply a coil in the main fresh air ventilation unit that provides outside air to all building areas.Additional energy efficient measures implemented are high efficiency lighting fixtures and fresh air control using CO2 sensors for the gym.Ducting was designed to group the classrooms in four zones, with a fifth zone for the offices. As soon as all classrooms in one zone are unoccupied, that ventilation zone is shut down. When all zones are closed, the main system is turned off.Physical separations are present between classrooms and offices, giving a means of controlling IAQ and energy consumption depending on occupation modes (offices are used in summer, while classes are not; classes can be used at night throughout the year while offices are empty at night, etc.). Commissioning was performed with emphasis on performances of primary air-handling units (AHUs) and ventilation strategies. To deal with the amount of fresh air injected into the gymnasium, CO2 sensors were installed in the return duct leading to the air-handling units, analyzing CO2 quantities contained in return air. Fresh air is injected in the mixing box of the AHU to maintain CO2 levels at 800 ppm.The special variable air volume (V A V) system with predetermined outside air rate and terminal reheat is an efficient way of providing effective indoor environmental quality to the users. All zones in the building can maintain effective temperature within the ASHRAE comfort zone as defined in ASHRAE Standard 55. A minimum humidity level of 30% is maintained during winter using electrical steam generator humidifiers installed in the air-Image: . 2009 DigitalGlobeImage: . 2009 DigitalGlobe technology award case studiesBuilding at a GlanceName: Kahnawake Survival SchoolLocation: Kahnawake, QC, CanadaOwner: Kahnawake Education CenterPrincipal Use: SchoolIncludes: High School, CommunityCenter, Public AssemblyEmployees/Occupants: 450 studentsGross Square Footage: 60,000 ft2Substantial Completion/Occupancy: August 2008Occupancy: 100%handling units. During summer, a maximum of 60% is allowed (design criteria for offices).Considering the many types of activity occurring in the building (teaching, administration, community activities, shows, sporting events, community meetings and shelter), two basic options were analyzed: dedicated systems and centralized systems. After analysis, it was decided to combine both strategies. The use of a centralized system to condition the amount of outside air required and send it in all zones was the best solution. The capability to operate at variable flow was a major aspect of this system since classrooms and teaching areas are not used 24/7 while offices and administrative areas operate throughout the year.Because classrooms are not used during the hotter months of the year (from mid-June to the end of August), cooling the classrooms and the gym was questioned. It was decided to use local geothermal water-to-air heat pumps into the administrative and office areas and in the cafeteria/student lounge as those areas were more likely to be used throughout the year or during summer for events. 
The fresh air system was equipped with a geothermal water-to-water heat pump to allow heating/cooling/dehumidification of fresh air supplied to all areas includingclassrooms and the gym.For the gym, a provision has been made to allow installation of cooling capacity in the system in the future by adding a water-water heat pump to supply a coil. Natural ventilation is available for all classrooms and the gym using operable windows. The main central corridor is open on two stories and continues higher (almost three floors) to act as a natural chimney. All classrooms are opened to the central corridor (using operable panels). When the outside conditions are adequate, aspecial green light shows teachers/users it is a good day to use natural ventilation. Operable windows located at the top of the natural chimney are opened, and teachers/users can decide if they want to open them.If very hot days occur during the academic year, a provision for two propeller fans, located at each end of the main corridor, was planned (at the top of thenatural chimney). This would force air movement through the building (using natural ventilation openings in classrooms but closing the windows at the top of the natural chimney). The same strategy was applied to the gym, allowing it to be naturally cooled. Also, a dedicated air-handling unit was installed into the gym for ventilation and heating purposes. Variable frequency drives were installed on each fan. Heat recovery was implemented on exhaust fans (washrooms, janitor rooms, etc).A centralized building automation system (BAS) links all mechanical components through a centralized DDC network. A central panel is located in the main mechanical room and is simple to use so that occupants who are present outside of normal business hours can start/stop different features of the building (natural ventilation, forced natural ventilation fans, primary fresh air system, gym ventilation system and gym forced natural ventilation fans). Commissioning was done on the BAS, which helped improve energy efficiency. This process continued after delivery of thebuilding and will continue for a few years to perfectly tune the building to the desired operation.Capital costs were controlled by providing simple systems that rely on well-established, low-cost technologies and by optimizing equipment selection for dependability, low maintenance and maximum efficiency. A major advantage of the V A V systems with terminal reheat is that, despite different load requirements, a comfortable environment can be maintained in all rooms. This makes the systems flexible enough to adapt, at low cost, to any layout modification. Designing complicated systems is not always a guarantee for energy efficiency. In fact, the guiding principle is that simpler systems (as long as energy efficiency is not compromised) are understood better by maintenance personnel, which lowers operation costs.The decrease of energy consumption led to a reduction of CO2 emissions by about 192 Mg (212 tons) per year. The total energy consumption reduction per year corresponds approximately to the energy consumption of 92 average houses.外文翻译中文本文研究人工神经网络(ANN)的适用性,如汽车空调系统(AAC)的适用性能预测。

Artificial Neural Networks


3. Threshold function (step function)
(Figure: the output o is 0 for net ≤ θ and jumps to β for net > θ.)

4. S-shaped functions
Also called squashing functions or logistic functions.
f(net) = a + b/(1 + exp(-d*net)), where a, b, d are constants. Its saturation values are a and a+b. The simplest form is: f(net) = 1/(1 + exp(-d*net)).
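A quick numerical check of the squashing function, with illustrative constants a = -1, b = 2, d = 1 (so that the saturation values are -1 and +1):

```python
import math

# Quick check of f(net) = a + b/(1 + exp(-d*net)) with assumed constants.
def squash(net, a=-1.0, b=2.0, d=1.0):
    return a + b / (1.0 + math.exp(-d * net))

for net in (-10.0, -1.0, 0.0, 1.0, 10.0):
    # approaches a for large negative net, a+b for large positive net
    print(net, round(squash(net), 4))
```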
– 5) The cumulative effect of the signals a neuron receives determines that neuron's state;
– 6) Each neuron may have a "threshold."
The Artificial Neuron
• The artificial neuron model should possess the six basic properties of biological neurons.
Basic structure of an artificial neuron (figure): inputs x1, x2, …, xn with weights w1, w2, …, wn feed a summation unit, net = XW = Σi xi·wi.
Connection Patterns
• 2. Recurrent connections: feedback signals.
• 3. Inter-layer (inter-field) connections: connections between neurons in different layers, used to pass signals between layers.
Network Structure of an ANN
The layered structure of a network
• Simple single-layer network (figure): inputs x1, x2, …, xn connect to output nodes o1, o2, …, om through the weights w11, …, wnm (input layer and output layer).
• Single-layer network with lateral feedback (figure): the same structure, plus lateral connections V among the output-layer nodes.
– V = (vij)
– NET = XW + OV
– O = F(NET)
– Time parameter: the states of the neurons change synchronously under the control of a master clock.
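A minimal sketch of this single-layer network with lateral feedback, updating all neuron states synchronously as the slide describes; the weights and the tanh activation are illustrative assumptions:

```python
import numpy as np

# Minimal sketch: NET = XW + OV, O = F(NET), synchronous state updates.
def run(x, W, V, steps=10, F=np.tanh):
    o = np.zeros(W.shape[1])            # initial output-layer state
    for _ in range(steps):              # one synchronous update per "clock tick"
        net = x @ W + o @ V
        o = F(net)
    return o

rng = np.random.default_rng(0)
n, m = 3, 4
W = rng.normal(scale=0.5, size=(n, m))  # input-to-output weights W
V = rng.normal(scale=0.1, size=(m, m))  # lateral feedback weights V = (v_ij)
x = np.array([1.0, 0.5, -0.5])
print(np.round(run(x, W, V), 3))
```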

英文文献英文资料:Artificial neural networks (ANNs) to ArtificialNeuralNetworks, abbreviations also referred to as the neural network (NNs) or called connection model (ConnectionistModel), it is a kind of model animals neural network behavior characteristic, distributed parallel information processing algorithm mathematical model. This network rely on the complexity of the system, through the adjustment of mutual connection between nodes internal relations, so as to achieve the purpose of processing information. Artificial neural network has since learning and adaptive ability, can provide in advance of a batch of through mutual correspond of the input/output data, analyze master the law of potential between, according to the final rule, with a new input data to calculate, this study analyzed the output of the process is called the "training". Artificial neural network is made of a number of nonlinear interconnected processing unit, adaptive information processing system. It is in the modern neuroscience research results is proposed on the basis of, trying to simulate brain neural network processing, memory information way information processing. Artificial neural network has four basic characteristics:(1) the nonlinear relationship is the nature of the nonlinear common characteristics. The wisdom of the brain is a kind of non-linear phenomena. Artificial neurons in the activation or inhibit the two different state, this kind of behavior in mathematics performance for a nonlinear relationship. Has the threshold of neurons in the network formed by the has better properties, can improve the fault tolerance and storage capacity.(2) the limitations a neural network by DuoGe neurons widely usually connected to. A system of the overall behavior depends not only on the characteristics of single neurons, and may mainly by the unit the interaction between the, connected to the. Through a large number of connection between units simulation of the brain limitations. Associative memory is a typical example of limitations.(3) very qualitative artificial neural network is adaptive, self-organizing, learning ability. Neural network not only handling information can have all sorts of change, and in the treatment of the information at the same time, the nonlinear dynamic system itself is changing. Often by iterative process description of the power system evolution.(4) the convexity a system evolution direction, in certain conditions will depend on a particular state function. For example energy function, it is corresponding to the extreme value of the system stable state. The convexity refers to the function extreme value, it has DuoGe DuoGe system has a stable equilibrium state, this will cause the system to the diversity of evolution.Artificial neural network, the unit can mean different neurons process of the object, such as characteristics, letters, concept, or some meaningful abstract model. The type of network processing unit is divided into three categories: input unit, output unit and hidden units. Input unit accept outside the world of signal and data; Output unit of output system processing results; Hidden unit is in input and output unit, not between by external observation unit. The system The connections between neurons right value reflect the connection between the unit strength, information processing and embodied in the network said the processing unit in the connections. 
Artificial neural network is a kind of the procedures, and adaptability, brain style of information processing, its essence is through the network of transformation and dynamic behaviors have akind of parallel distributed information processing function, and in different levels and imitate people cranial nerve system level of information processing function. It is involved in neuroscience, thinking science, artificial intelligence, computer science, etc DuoGe field cross discipline.Artificial neural network is used the parallel distributed system, with the traditional artificial intelligence and information processing technology completely different mechanism, overcome traditional based on logic of the symbols of the artificial intelligence in the processing of intuition and unstructured information of defects, with the adaptive, self-organization and real-time characteristic of the study.Development historyIn 1943, psychologists W.S.M cCulloch and mathematical logic W.P home its established the neural network and the math model, called MP model. They put forward by MP model of the neuron network structure and formal mathematical description method, and prove the individual neurons can perform the logic function, so as to create artificial neural network research era. In 1949, the psychologist put forward the idea of synaptic contact strength variable. In the s, the artificial neural network to further development, a more perfect neural network model was put forward, including perceptron and adaptive linear elements etc. M.M insky, analyzed carefully to Perceptron as a representative of the neural network system function and limitations in 1969 after the publication of the book "Perceptron, and points out that the sensor can't solve problems high order predicate. Their arguments greatly influenced the research into the neural network, and at that time serial computer and the achievement of the artificial intelligence, covering up development new computer and new ways of artificial intelligence and the necessity and urgency, make artificial neural network of research at a low. During this time, some of the artificial neural network of the researchers remains committed to this study, presented to meet resonance theory (ART nets), self-organizing mapping, cognitive machine network, but the neural network theory study mathematics. The research for neural network of research and development has laid a foundation. In 1982, the California institute of J.J.H physicists opfield Hopfield neural grid model proposed, and introduces "calculation energy" concept, gives the network stability judgment. In 1984, he again put forward the continuous time Hopfield neural network model for the neural computers, the study of the pioneering work, creating a neural network for associative memory and optimization calculation, the new way of a powerful impetus to the research into the neural network, in 1985, and scholars have proposed a wave ears, the study boltzmann model using statistical thermodynamics simulated annealing technology, guaranteed that the whole system tends to the stability of the points. In 1986 the cognitive microstructure study, puts forward the parallel distributed processing theory. Artificial neural network of research by each developed country, the congress of the United States to the attention of the resolution will be on jan. 
Network models

A model of an artificial neural network is mainly determined by the topology of the network connections, the characteristics of the nodes (neurons) and the learning rule. At present there are nearly 40 kinds of neural network models, among them the back-propagation (BP) network, the perceptron, the self-organizing map, the Hopfield network, the Boltzmann machine and adaptive resonance theory. According to the topology of the connections, neural network models can be divided into:

(1) Feedforward networks. Each neuron receives input from the previous layer and sends its output to the next layer; there is no feedback in the network, so it can be represented by a directed acyclic graph. Such a network realizes a transformation of signals from the input space to the output space, and its information-processing power comes from the repeated composition of simple nonlinear functions. The structure is simple and easy to implement. The back-propagation network is a typical feedforward network.

(2) Feedback networks. There is feedback among the neurons, so the network can be represented by an undirected complete graph. Information processing in such a network is a transformation of states and can be treated with the theory of dynamical systems; the stability of the system is closely related to its associative-memory function. The Hopfield network and the Boltzmann machine belong to this type.

Learning types

Learning is an important topic in neural network research, and it is realized through the network's adaptivity: the weights are adjusted according to changes of the environment so as to improve the behavior of the system. The Hebb learning rule, proposed by Hebb, laid the foundation for neural network learning algorithms. The Hebb rule holds that the learning process ultimately takes place at the synapses between neurons, and that the strength of a synaptic connection changes with the activity of the neurons before and after the synapse. On this basis, various learning rules and algorithms have been proposed to meet the needs of different network models. Effective learning algorithms enable the network, through the adjustment of its connection weights, to build internal representations of the objective world and to form its own characteristic way of processing information; information storage and processing are reflected in the connections of the network. According to the learning environment, the learning methods of neural networks can be divided into supervised learning and unsupervised learning. In supervised learning, training sample data are applied to the network input, the network output is compared with the corresponding expected output to obtain an error signal, and this signal controls the adjustment of the connection weights; after repeated training the weights converge to definite values. When the sample conditions change, learning can modify the weights to adapt to the new environment. Models that use supervised learning include the back-propagation network and the perceptron. In unsupervised learning, no samples are given in advance; the network is placed directly in the environment, and the learning stage and the working stage merge into one. In this case the evolution of the weights obeys the evolution equation of the learning rule. The simplest example of unsupervised learning is the Hebb learning rule.
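The Hebb rule described above can be written down in a few lines. The sketch below is an illustrative assumption (the function name, the learning-rate value and the toy activity values are chosen by the editor, not taken from the source); it implements the basic form Δw_ij = η · x_i · y_j, so a connection is strengthened only when the presynaptic and postsynaptic units are active together.

```python
def hebb_update(weights, pre, post, lr=0.05):
    """Hebbian rule: increase w[i][j] in proportion to the joint activity pre[i] * post[j]."""
    return [[w_ij + lr * x_i * y_j for w_ij, y_j in zip(row, post)]
            for row, x_i in zip(weights, pre)]

# Two presynaptic and two postsynaptic units; only the first presynaptic unit fires
w = [[0.0, 0.0], [0.0, 0.0]]
w = hebb_update(w, pre=[1.0, 0.0], post=[1.0, 1.0])
print(w)  # only the weights leaving the active presynaptic unit have grown
```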
Competitive learning is a more complex example of unsupervised learning than the Hebb rule: it adjusts the weights according to an established clustering. The self-organizing map and the adaptive resonance theory network are typical models that use competitive learning.

Analysis methods

The study of the nonlinear dynamical properties of neural networks mainly uses the theory of dynamical systems, nonlinear programming theory and statistical theory to analyze the evolution process of a neural network and the nature of its attractors, to explore the cooperative behavior and collective computing functions of neural networks, and to understand the mechanisms of neural information processing. In order to discuss how neural networks may deal with fuzzy or incomplete information comprehensively, the concepts and methods of chaos theory will also play a role. Chaos is a mathematical concept that is rather difficult to define precisely. In general, "chaos" refers to uncertain behavior exhibited by a deterministic system described by dynamical equations — in other words, deterministic randomness. It is "deterministic" because it is produced by intrinsic causes rather than by external noise or interference, and "random" refers to irregular, unpredictable behavior that can only be described by statistical methods. The main feature of a chaotic dynamical system is that its state depends sensitively on the initial conditions; chaos reflects the system's intrinsic randomness. Chaos theory refers to the basic concepts and methods for describing nonlinear dynamical behavior that possesses chaotic characteristics. It understands the complex behavior of a dynamical system as behavior arising from the system's own internal structure in the course of exchanging matter, energy and information with the outside world, rather than as the effect of external or accidental causes; a chaotic state is one kind of stationary regime. The stationary regimes of a chaotic dynamical system include rest (fixed points), stationary quantities, periodic solutions, quasi-periodic solutions and chaotic solutions. A chaotic trajectory is the result of combining overall stability with local instability, and it is called a strange attractor.

A strange attractor has the following features: (1) it is an attractor, but it is neither a fixed point nor a periodic solution; (2) it is indivisible, that is, it cannot be divided into two or more sub-attractors; (3) it is extremely sensitive to the initial values — different initial values can lead to very different behavior.

Advantages

The characteristics and advantages of artificial neural networks show mainly in three aspects. First, self-learning ability. For example, to implement image recognition, one only needs to feed many different image templates and the corresponding recognition results into the artificial neural network, and through its self-learning function the network will gradually learn to recognize similar images. This self-learning function is of special significance for prediction: artificial neural network computers are expected to provide economic forecasts, market forecasts and benefit forecasts, and the application prospects are very broad. Second, associative memory. Feedback networks of artificial neural networks can realize this kind of association. Third, the ability to find optimal solutions at high speed. Finding an optimal solution to a complex problem often requires a large amount of computation; by using a feedback-type artificial neural network designed for the problem and exploiting the high-speed computing power of the computer, an optimal solution may be found quickly.
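The associative-memory advantage just mentioned is exactly what a feedback network of the Hopfield type provides, so a brief sketch may be useful. The code below is an illustrative assumption rather than code from the source (pattern values, helper names and the number of update steps are all chosen by the editor): bipolar patterns are stored with a Hebbian outer-product rule, and repeated asynchronous threshold updates — which never increase the network's "computational energy" — typically drive a corrupted input back to the stored pattern it is closest to.

```python
import random

def store(patterns):
    """Hebbian outer-product storage for bipolar (+1/-1) patterns; zero self-connections."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def recall(w, state, steps=200):
    """Asynchronous sign updates; each update never raises the network energy."""
    state = list(state)
    for _ in range(steps):
        i = random.randrange(len(state))
        field = sum(w[i][j] * s_j for j, s_j in enumerate(state))
        state[i] = 1 if field >= 0 else -1
    return state

stored = [[1, 1, 1, 1, -1, -1, -1, -1],
          [1, -1, 1, -1, 1, -1, 1, -1]]
w = store(stored)
noisy = [-1, 1, 1, 1, -1, -1, -1, -1]   # first stored pattern with one bit flipped
print(recall(w, noisy))                  # typically recovers [1, 1, 1, 1, -1, -1, -1, -1]
```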
Research directions

Research on neural networks can be divided into theoretical research and applied research. Theoretical research falls into the following two categories:

1. Research, based on neurophysiology and cognitive science, on the mechanisms of human thinking and intelligence.

2. Using the results of research on the foundations of neurobiology, exploring with mathematical methods neural network models that are more complete in function and superior in performance; studying network algorithms and performance in depth, such as stability, convergence, fault tolerance and robustness; and developing new mathematical theories of networks, such as neural network dynamics and nonlinear neural fields.

Applied research falls into the following two categories:

1. Research on the software simulation and hardware realization of neural networks.

2. Research on the application of neural networks in various fields, including pattern recognition, signal processing, knowledge engineering, expert systems, combinatorial optimization and robot control. Along with the development of neural network theory itself and of related theories and technologies, the applications of neural networks will deepen further.

Development trends and research hot spots

The nonlinear, adaptive information-processing capability of artificial neural networks overcomes the defects of traditional artificial intelligence methods in intuitive tasks such as pattern and speech recognition and in processing unstructured information, and it has been applied successfully in neural expert systems, pattern recognition, intelligent control, combinatorial optimization, prediction and other fields. Combining artificial neural networks with other traditional methods will promote the development of artificial intelligence and information-processing technology. In recent years, artificial neural networks have been developing further along the path of simulating human cognition, combining with fuzzy systems, genetic algorithms and evolutionary mechanisms to form computational intelligence, which has become an important direction of artificial intelligence and will be developed in practical applications. The application of information geometry to artificial neural network research opens a new way for the theoretical study of artificial neural networks. Research on neural computers is developing rapidly, and products have already entered the market. Electronic neural computers provide good conditions for the development of artificial neural networks.

Neural networks have already been applied very successfully in many fields, but much remains to be studied. Among the topics, the hybrid methods and hybrid systems that arise from integrating the advantages of neural networks — distributed storage, parallel processing, self-learning, self-organization and nonlinear mapping — with other technologies have become a research hot spot. Since other methods have their own respective advantages, combining them with neural networks so that their strengths complement one another can yield better application results.
At present such fusion is being pursued between neural networks and fuzzy logic, expert systems, genetic algorithms, wavelet analysis, chaos, rough set theory, fractal theory, evidence theory and grey system theory.

Chinese translation: Artificial Neural Networks (ANNs), also abbreviated as neural networks (NNs) or called the connectionist model (Connectionist Model), are algorithmic mathematical models that imitate the behavioral characteristics of animal neural networks and carry out distributed, parallel information processing.
