English Literature with Translation (Automatic Design of Fuzzy Systems Based on Neural Networks and Genetic Algorithms)


Artificial Intelligence: English Source Text and Translation


Appendix IV: English Source Text

Artificial Intelligence

The term "artificial intelligence" was first put forward at the Dartmouth conference in 1956. Since then, researchers have developed many theories and principles, and the concept of artificial intelligence has expanded accordingly. Artificial intelligence is a challenging science: whoever works on it must understand computer science, psychology, and philosophy. It spans a wide range of fields, such as machine learning and computer vision. On the whole, one of the main goals of artificial intelligence research is to enable machines to perform complex tasks that would normally require human intelligence. What counts as "complex", however, differs across eras and among people. Heavy scientific and engineering calculation, for example, was once thought to require the human brain; today computers not only complete such calculations but do so faster and more accurately than people, so this work is no longer regarded as a "complex task requiring human intelligence". The concrete goals of the discipline thus change and develop with the times and with technical progress: on the one hand it keeps making new advances, and on the other it keeps turning toward more meaningful and more difficult targets. At present, the principal material means of studying artificial intelligence, and the machine that realizes AI techniques, is the computer, so the history of artificial intelligence is bound up with the history of computer science and technology. Besides computer science, artificial intelligence also involves information theory, cybernetics, automation, bionics, biology, psychology, mathematical logic, linguistics, medicine, philosophy, and other disciplines.
Artificial intelligence research includes: knowledge representation, automated reasoning and search methods, machine learning, knowledge acquisition, knowledge-processing systems, natural language processing, computer vision, intelligent robots, automatic programming, and so on. Practical applications of machine vision include fingerprint recognition, face recognition, retina recognition, iris recognition, and palm recognition, alongside expert systems, intelligent search, theorem proving, game playing, automatic programming, and aerospace applications. As a subject, artificial intelligence is a frontier discipline on the border between natural science and social science, drawing on the philosophy of science, cognitive science, mathematics, neurophysiology, psychology, computer science, information theory, cybernetics, uncertainty theory, and bionics. Its research categories include natural language processing, knowledge representation, intelligent search, reasoning, planning, machine learning, knowledge acquisition, combinatorial scheduling problems, perception, pattern recognition, logic program design, soft computing, the management of imprecision and uncertainty, artificial life, neural networks, complex systems, and genetic algorithms as a model of human thinking. Its applications include intelligent control, robotics, language and image understanding, and genetic programming.

Safety problems

Artificial intelligence is still under study, but some scholars believe that giving computers intelligence is very dangerous and might turn against humanity; such hidden dangers have been depicted in many films.

The definition of artificial intelligence

The definition of artificial intelligence can be divided into two parts, "artificial" and "intelligent". "Artificial" is the easier to understand, though still somewhat controversial: we may ask what humans can make, or whether creating artificial intelligence requires a high degree of human intelligence, and so on.
Generally speaking, though, an "artificial system" is simply an artificial system in the ordinary sense. What "intelligence" is raises many more problems, touching on consciousness, the self, and thought (including unconscious thought). The only intelligence we know is human intelligence, which is our universal frame of reference, but our understanding of our own intelligence is very limited, and we do not know which of its constituent elements are necessary; it is therefore hard to define what "artificially" manufactured "intelligence" would be. For this reason, research on artificial intelligence often involves studying intelligence itself, and the study of intelligence in animals or in other artificial systems is also widely considered relevant to artificial intelligence. Artificial intelligence currently receives wide attention in the computer field and is applied in robotics, economic and political decision making, control systems, and simulation systems; in other areas it also plays an indispensable role. Professor Nilsson of the Stanford University Artificial Intelligence Research Center gives artificial intelligence the following definition: "Artificial intelligence is the discipline concerned with knowledge: how to represent knowledge, how to acquire knowledge, and how to use knowledge." Professor Winston of MIT holds instead that "artificial intelligence is the study of how to make computers do intelligent work that, for now, only people can do." These statements reflect the basic ideas and basic content of the discipline.
Artificial intelligence, in other words, studies the laws of human intelligent activity and builds artificial systems endowed with a certain degree of intelligence; it asks how to make computers carry out work that previously required human intelligence, that is, it studies the basic theory, methods, and techniques for using computer hardware and software to simulate certain intelligent human behaviors. Artificial intelligence is a branch of computer science. Since the 1970s it has been called one of the three frontier technologies (alongside space technology and energy technology), and it is also counted among the three leading technologies of the 21st century (with genetic engineering and nanoscience). Over the past three decades it has developed rapidly, been widely applied in many fields, and achieved great results; it has gradually become an independent branch with its own system in both theory and practice. Its research results are gradually being integrated into people's lives, creating greater well-being for mankind. Artificial intelligence studies how computers can simulate certain thought processes and intelligent behaviors (such as learning, reasoning, thinking, and planning), including the principles by which computers realize intelligence, the construction of computers that resemble human intelligence, and the raising of computer applications to a higher level. It involves computer science, philosophy, linguistics, psychology, and more, spanning almost all of natural and social science, and its scope has gone far beyond computer science. The relationship between artificial intelligence and the science of thinking is that of practice to theory: artificial intelligence sits at the technical, applied level of the science of thinking and is one of its applications.
From the standpoint of thinking, artificial intelligence is not limited to logical thinking; thinking in images and inspirational thinking must also be considered if breakthroughs are to be made. Mathematics is often regarded as the foundation of many sciences, and it has entered the fields of language and thought; the discipline of artificial intelligence must likewise borrow mathematical tools. Mathematical logic, fuzzy mathematics, and related branches fall within the scope of artificial intelligence, and mathematics and artificial intelligence will promote each other and develop faster together.

A brief history of artificial intelligence

Artificial intelligence can be traced back to the legends of ancient Egypt, but only with the development of computer technology from 1941 onward did it become possible to create machine intelligence. The term "artificial intelligence" was first proposed at the Dartmouth conference in 1956. Since then, researchers have developed many theories and principles and expanded the concept of artificial intelligence. Over its long history, artificial intelligence has developed more slowly than expected, but it has kept advancing: in the past forty years many AI programs have appeared, and they have influenced the development of other technologies as well. The emergence of AI programs has created immeasurable wealth for society and promoted the development of human civilization.

The computer era

In 1941 an invention appeared that revolutionized every aspect of information storage and processing: the first electronic computers, developed in both the United States and Germany. Those machines required large air-conditioned rooms and were a programmer's nightmare, since running a single program meant setting thousands of wires by hand. The 1949 improvement known as the stored-program computer made entering programs far easier, and advances in the theory of computer science eventually led to computer-based artificial intelligence.
Electronic computers, with their methods of processing data, provided a medium through which artificial intelligence could be created.

The beginning of AI

Although computers provided the technical basis AI needed, it was not until the early 1950s that the link between machines and human intelligence was noticed. Norbert Wiener was an American who studied the theory of feedback. The most familiar example of feedback control is the thermostat: it compares the measured room temperature with the desired temperature and reacts by turning a heater on or off, thereby controlling the environmental temperature. The importance of Wiener's study of feedback loops lies in his claim that, in theory, all intelligent activity is the result of feedback mechanisms, and that feedback mechanisms can be simulated by machines. This finding shaped the early development of AI. In 1955, Newell and Simon developed a program called the "Logic Theorist", considered by many to be the first AI program. It represented each problem as a tree and then chose the branch most likely to yield the correct conclusion. The effect of the Logic Theorist on the public and on the research field made it an important milestone in AI development. In 1956, John McCarthy, regarded as the father of artificial intelligence, organized a meeting that brought many experts and scholars interested in machine intelligence together for a month: the "Dartmouth Summer Research Project on Artificial Intelligence", held at Dartmouth College in New Hampshire. From then on the area was named "artificial intelligence". Although the Dartmouth meeting was not very successful in itself, it brought the founders of AI together and laid a foundation for later AI research. After the Dartmouth meeting, AI research took off over the following seven years. Although the rapidly developing field had not yet pinned down some of its ideas, the themes of the meeting were taken up again, and Carnegie Mellon University and MIT began to build AI research centers and confront new challenges.
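The thermostat feedback loop described earlier can be sketched in a few lines. This is a toy illustration of Wiener's point that a feedback mechanism can be simulated by a machine; the temperatures and heating/cooling rates below are invented for illustration.

```python
# A thermostat as a feedback loop: compare the measured temperature with
# the desired one, switch the heater accordingly, and let the room react.

def thermostat_step(temp, target):
    heater_on = temp < target            # feedback decision
    temp += 0.5 if heater_on else -0.3   # room heats or cools (toy rates)
    return temp, heater_on

temp, heater = 15.0, False
for _ in range(40):
    temp, heater = thermostat_step(temp, 20.0)
# temp now oscillates in a narrow band around the 20.0 degree target
```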
Research needed to establish systems that solve problems more effectively, for example by reducing the search performed by the Logic Theorist, and systems that can learn by themselves. In 1957 the first version of a new program, the General Problem Solver (GPS), was tested. Developed by the same group as the Logic Theorist, GPS extended Wiener's feedback principle and could solve many common-sense problems. Two years later IBM established an AI research group, in which Herbert Gelernter spent three years building a program that solved geometry theorems; the achievement caused a sensation. While more and more programs were emerging, McCarthy was busy with a milestone in AI history. In 1958 McCarthy announced his new result: the LISP language, still in use today. LISP stands for "list processing", and it was quickly adopted by most AI developers. In 1963 MIT received a $2.2 million grant from the United States government to fund research on machine-aided cognition; the grant came from the Defense Advanced Research Projects Agency, which wanted to ensure that the United States stayed ahead of the Soviet Union in technological progress. The project attracted computer scientists from around the world and accelerated the pace of AI research.

Large programs

In the years that followed, a famous program called SHRDLU appeared. SHRDLU was part of the "micro worlds" project, which restricted research and programming to small worlds containing, for example, only a limited number of geometric shapes. Under the leadership of Marvin Minsky at MIT, researchers found that, when confronted with such a small world of objects, computer programs could solve problems of space and logic. Other programs of the late 1960s included STUDENT, which could solve algebra word problems, and SIR, which could understand simple English sentences. These programs advanced the handling of language understanding and logic. The 1970s brought another advance: the expert system.
An expert system is an intelligent computer program that contains a large amount of expert-level knowledge and experience in a particular domain and can apply the knowledge and methods of human experts to handle problems in that domain. In other words, an expert system is a program embodying specialized knowledge and experience. One advance was that expert systems could predict the probability of a solution under given conditions. Because computers already had large storage capacity, expert systems could draw on vast bodies of data, and they were widely used in the market. Over the following decade, expert systems were used to trade stocks, help doctors diagnose diseases, and indicate to miners the position of mineral deposits. All of this became possible because of expert systems' rules and information-storage capacity. In the 1970s many new methods were also developed, famously Minsky's frame theory; and David Marr proposed a new theory of machine vision, for example how to recognize an image from basic information about shadow, shape, color, texture, and edges, inferring from this analysis what the image might be. Another product of the same period was the language PROLOG, in 1972. In the 1980s, AI progressed more rapidly and moved further into business. In 1986, sales of AI-related software and hardware reached $425 million. Expert systems were in particular demand because of their utility: Digital Equipment Corporation used the XCON expert system to configure VAX mainframes; DuPont, General Motors, and Boeing relied heavily on expert systems; and companies such as Teknowledge and Intellicorp were founded to produce software that assists in building expert systems.
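The expert-system idea described above, a program that applies expert-level if-then rules to facts, can be sketched as a tiny forward-chaining rule engine. The rules and facts below are invented for illustration and are not taken from any real system such as XCON.

```python
# A minimal forward-chaining rule engine: repeatedly fire any rule whose
# conditions are all known facts, until no new conclusions appear.

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)   # derive a new fact
                changed = True
    return facts

# Hypothetical diagnostic rules: (set of conditions, conclusion).
rules = [
    ({"fever", "cough"}, "suspect-flu"),
    ({"suspect-flu", "short-breath"}, "order-chest-xray"),
]

derived = forward_chain({"fever", "cough", "short-breath"}, rules)
```

Real expert systems add certainty factors, explanation facilities, and much larger knowledge bases, but the inference loop is essentially this.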
To find and correct mistakes, further expert systems were designed on top of existing ones, such as the TVC expert system, which teaches users how to operate an operating system.

From the lab to daily life

People began to feel the influence of computer technology and artificial intelligence directly. Computing no longer belonged only to a group of researchers in laboratories: personal computers and the technical press brought it before the public, and bodies such as the American Association for Artificial Intelligence were founded. Driven by the need for development, the AI boom drew researchers into private companies; more than 150 companies, among them DEC (which employed more than 700 people in AI research), spent a combined $1 billion on internal AI teams. Other areas of AI entered the market in the 1980s. One was machine vision, building on the achievements of Marr and Minsky: cameras and computers were now used for quality control in production. Although still very modest, these systems could distinguish objects by their different shapes. By 1985 more than 100 companies in America were producing machine vision systems, with total sales of $80 million. The 1980s were not all good years for the AI industry, however. In 1986-87 demand for AI systems fell, and the industry lost nearly five hundred million dollars. Teknowledge and Intellicorp together lost more than $6 million, about one third of their profits, and the huge losses forced many laboratories to cut research funding. Another disappointment was the "smart truck" project supported by the Defense Advanced Research Projects Agency, whose purpose was to develop a robot that could carry out many battlefield tasks; given its defects and the hopelessness of success, the Pentagon stopped funding the project. Despite these setbacks, AI continued slowly to develop new technologies.
Fuzzy logic, developed in the United States and Japan, can make decisions under conditions that are never fully determined; and neural networks came to be regarded as a possible approach to realizing artificial intelligence. In any case, the 1980s brought AI into the market and demonstrated its practical value, and it will surely be a key technology of the 21st century. AI technology passed an acceptance test of sorts in the military equipment deployed during the "Desert Storm" operation, where it was used in missile systems, early-warning systems, and other advanced weapons. AI technology has also entered the family home: intelligent computers attract growing public interest, and networked games enrich people's lives. Application software for the Macintosh and the IBM PC, such as voice and character recognition, can now be bought off the shelf, and fuzzy-logic AI techniques are used to simplify camera equipment. Growing demand for AI-related technology keeps driving new progress. In a word, artificial intelligence has changed our lives, and will inevitably continue to do so.
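The fuzzy logic mentioned above replaces hard thresholds with degrees of membership: an input can be partly "dark" and partly "bright" at once, and the output blends the actions the rules recommend. The membership functions, the camera-exposure framing, and all numbers below are invented for illustration.

```python
# A sketch of fuzzy decision making: graded memberships instead of a
# hard threshold, and a weighted blend of the rules' recommended actions.

def mu_dark(brightness):     # degree to which the scene is "dark", in [0, 1]
    return max(0.0, min(1.0, (0.6 - brightness) / 0.6))

def mu_bright(brightness):   # degree to which the scene is "bright", in [0, 1]
    return max(0.0, min(1.0, (brightness - 0.4) / 0.6))

def exposure(brightness):
    # Rule 1: if dark, use long exposure (1.0).
    # Rule 2: if bright, use short exposure (0.1).
    # Output: activation-weighted average of the two actions.
    d, b = mu_dark(brightness), mu_bright(brightness)
    return (d * 1.0 + b * 0.1) / (d + b)

e = exposure(0.5)   # a mid-brightness scene gets an intermediate exposure
```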

Translation of Foreign Literature: Artificial Neural Networks


English source text:

Artificial neural networks (ANNs), also abbreviated as neural networks (NNs) or called connectionist models, are algorithmic mathematical models that imitate the behavioral characteristics of animal neural networks and perform distributed, parallel information processing. Such a network relies on the complexity of the system, achieving its information-processing purpose by adjusting the interconnections among large numbers of internal nodes. An artificial neural network has the ability to learn and adapt by itself: given in advance a batch of mutually corresponding input/output data, it analyzes and masters the potential laws between them, and finally uses those laws to compute outputs for new input data. This process of learning and analysis is called "training". An artificial neural network is an adaptive information-processing system composed of a large number of interconnected nonlinear processing units. It was proposed on the basis of modern neuroscience, as an attempt to process and memorize information in a way that simulates how networks of neurons in the brain process information. Artificial neural networks have four basic characteristics:

(1) Nonlinearity. Nonlinear relationships are a common characteristic of nature, and the intelligence of the brain is a nonlinear phenomenon. An artificial neuron can be in one of two different states, activated or inhibited, and mathematically this behavior is a nonlinear relationship. Networks formed of neurons with thresholds perform better and can improve fault tolerance and storage capacity.

(2) Non-locality. A neural network is usually formed by connecting many neurons extensively. The overall behavior of the system depends not only on the characteristics of the individual neurons but mainly on the interactions and interconnections among the units. Through the large number of connections between units, the network simulates the non-locality of the brain.
Associative memory is a typical example of non-locality.

(3) Non-stationarity. An artificial neural network is adaptive, self-organizing, and capable of learning. Not only can the information it handles change in all sorts of ways, but the nonlinear dynamical system itself also changes while processing that information; the evolution of the system is often described by an iterative process.

(4) Non-convexity. The direction in which a system evolves depends, under certain conditions, on a particular state function, for example an energy function whose extreme values correspond to stable states of the system. Non-convexity means that this function has multiple extreme values, so the system has multiple stable equilibrium states, which gives rise to diversity in the system's evolution.

In an artificial neural network, a unit can represent different objects, such as features, letters, concepts, or meaningful abstract patterns. The processing units in a network fall into three classes: input units, output units, and hidden units. Input units accept signals and data from the outside world; output units emit the system's processing results; hidden units lie between the input and output units and cannot be observed from outside the system. The connection weights between neurons reflect the strength of the connections between units, and the representation and processing of information are embodied in the connections among the network's processing units. An artificial neural network is a non-procedural, adaptive, brain-style information-processing system: its essence is to obtain a parallel and distributed information-processing capability through the transformations and dynamic behavior of the network, imitating the information-processing functions of the human nervous system at different levels and to different degrees.
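The input, hidden, and output units just described can be sketched in a few lines: each unit forms a weighted sum of its inputs and passes it through a nonlinear (sigmoid) activation, so the information processing lives in the connection weights. All weights and inputs below are made-up illustrative numbers, not from any trained network.

```python
import math

# One processing unit: weighted sum of inputs, then a nonlinear activation.
def unit(inputs, weights, bias):
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-s))   # sigmoid squashes output into (0, 1)

x = [0.5, -1.0]                                            # input units
h = [unit(x, [1.0, 0.5], 0.0), unit(x, [-0.5, 1.0], 0.1)]  # hidden units
y = unit(h, [1.5, -1.0], 0.2)                              # output unit
```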
It is an interdisciplinary subject crossing neuroscience, the science of thinking, artificial intelligence, computer science, and many other fields. An artificial neural network is a parallel, distributed system whose mechanism is completely different from that of traditional artificial intelligence and information-processing technology. It overcomes the defects of traditional, logic-symbol-based artificial intelligence in processing intuitive and unstructured information, and it is adaptive, self-organizing, and capable of learning in real time.

Development history

In 1943, the psychologist W. S. McCulloch and the mathematical logician W. Pitts established the mathematical model of the neural network, called the MP model. With the MP model they proposed a formal mathematical description of the neuron and a method for describing network structure, and proved that an individual neuron can perform logical functions, thereby opening the era of artificial neural network research. In 1949, the psychologist Hebb put forward the idea that the strength of synaptic connections is variable. In the 1960s, artificial neural networks developed further, and more complete neural network models were proposed, including the perceptron and the adaptive linear element (Adaline). In 1969, after carefully analyzing the functions and limitations of neural network systems represented by the perceptron, M. Minsky and colleagues published the book "Perceptrons", pointing out that the perceptron cannot solve high-order predicate problems. Their arguments greatly influenced research on neural networks; combined with the achievements at the time of serial computers and symbolic artificial intelligence, which masked the necessity and urgency of developing new computers and new approaches to artificial intelligence, they brought research on artificial neural networks to a low ebb.
During that time, some researchers remained committed to the study of artificial neural networks and proposed adaptive resonance theory (ART networks), self-organizing maps, and the neocognitron, while the mathematical theory of neural networks also continued to be studied. This work laid a foundation for the later revival of neural network research. In 1982, the California Institute of Technology physicist J. J. Hopfield proposed the Hopfield neural network model, introduced the concept of "computational energy", and gave a criterion for network stability. In 1984, he proposed the continuous-time Hopfield neural network model, pioneering work toward neural computers that opened new ways of using neural networks for associative memory and optimization and powerfully pushed neural network research forward. In 1985, Hinton and colleagues proposed the Boltzmann machine model, whose learning uses the simulated-annealing technique of statistical thermodynamics and guarantees that the whole system tends toward a globally stable point. In 1986, in a study of the microstructure of cognition, the theory of parallel distributed processing was put forward. Artificial neural network research has since been taken up by the developed countries: the United States Congress passed a resolution designating the ten years beginning January 5, 1990 the "Decade of the Brain", and international research organizations called on their members to make the Decade of the Brain a global undertaking. In Japan's "Real World Computing" project, artificial intelligence research is an important component.

Network models

Artificial neural network models are mainly characterized by the topology of the network connections, the characteristics of the neurons, and the learning rules. At present there are nearly 40 kinds of neural network model, including the back-propagation network, the perceptron, the self-organizing map, the Hopfield network, the Boltzmann machine, and adaptive resonance theory.
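Hopfield's model and its "computational energy", described above, can be sketched concretely: store a bipolar pattern with a Hebbian outer-product rule, then recall it from a corrupted cue by threshold updates that never increase the energy. This is a minimal toy, not Hopfield's full formulation; the six-element pattern is invented for illustration.

```python
import numpy as np

# Minimal discrete Hopfield network: Hebbian storage, threshold recall.
def train(patterns):
    n = len(patterns[0])
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)       # Hebbian outer-product rule
    np.fill_diagonal(W, 0)        # no self-connections
    return W

def energy(W, s):
    return -0.5 * s @ W @ s       # Hopfield's "computational energy"

def recall(W, s, sweeps=5):
    s = s.copy()
    for _ in range(sweeps):       # asynchronous threshold updates
        for i in range(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

stored = np.array([1, -1, 1, -1, 1, -1])
W = train([stored])
noisy = stored.copy()
noisy[0] = -1                     # corrupt one element of the cue
result = recall(W, noisy)         # associative recall of the stored pattern
```

Corrupting the cue raises the energy, and recall settles back into the stored low-energy state, which is exactly the associative-memory behavior the text describes.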
According to the topology of the connections, neural network models can be divided into:

(1) Feedforward networks. Each neuron accepts the input of the previous layer and outputs to the next layer; there is no feedback in the network, so it can be represented by a directed acyclic graph. Such a network realizes a transformation of signals from input space to output space, and its information-processing power comes from the repeated composition of simple nonlinear functions. The network structure is simple and easy to implement. The back-propagation network is a typical feedforward network.

(2) Feedback networks. There is feedback among the neurons in the network, which can be represented by an undirected complete graph. The information processing of such a network is a transformation of states, which can be handled with the theory of dynamical systems. The stability of the system is closely related to its associative-memory capability. The Hopfield network and the Boltzmann machine belong to this type.

Learning types

Learning is an important part of neural network research, and a network's adaptability is realized through learning: weights are adjusted according to changes in the environment, improving the behavior of the system. The learning rule proposed by Hebb laid the foundation for neural network learning algorithms. The Hebb rule holds that learning ultimately takes place at the synapses between neurons, and that the strength of a synaptic connection changes with the activity of the neurons on either side of the synapse. On this basis, various learning rules and algorithms have been put forward to meet the needs of different network models. Effective learning algorithms enable a network, by adjusting its weights, to form internal representations characteristic of the objective world and thereby its own style of information processing, with information storage and processing reflected in the connections of the network.
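The Hebb rule just described can be sketched directly: a connection weight grows when the neurons on both sides of the synapse are active together (Δw = η·x·y). The activities and learning rate below are illustrative.

```python
import numpy as np

# Hebbian weight update: connections between co-active units strengthen.
eta = 0.1                        # learning rate
w = np.zeros(3)                  # synaptic weights
x = np.array([1.0, 0.0, 1.0])    # presynaptic activities
y = 1.0                          # postsynaptic activity

for _ in range(5):               # repeated co-activation
    w += eta * x * y             # delta_w = eta * x * y

# Only the weights of the co-active inputs (the 1st and 3rd) have grown.
```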
According to the learning environment, neural network learning methods can be divided into supervised learning and unsupervised learning. In supervised learning, the training-sample data are fed to the network's input, the corresponding expected output is compared with the network's actual output to obtain an error signal, and this signal controls the adjustment of the connection weights; after many rounds of training the weights converge to definite values. When the sample situation changes, further learning can modify the weights to adapt to the new environment. Networks that use supervised learning include the back-propagation network and the perceptron. In unsupervised learning, no samples are given in advance; the network is placed directly in the environment, and the learning and working stages become one. Here the evolution of the weights obeys the evolution equation of the learning rule. The simplest example of unsupervised learning is the Hebb learning rule. Competitive learning is a more complex example, which adjusts the weights according to established clusterings. The self-organizing map and the adaptive resonance theory network are typical models related to competitive learning.

Analysis methods

The study of the nonlinear dynamical properties of neural networks mainly uses dynamical systems theory, nonlinear programming theory, and statistical theory to analyze the evolution of a neural network and the properties of its attractors, to explore the cooperative behavior and collective computational function of neural networks, and to understand the mechanisms of neural information processing. For discussing how neural networks might handle fuzzy, holistic information, the concepts and methods of chaos theory will also play a role. Chaos is a mathematical concept that is rather difficult to define precisely. In general, "chaos" refers to uncertain behavior exhibited by a deterministic system described by dynamical equations, or, one might say, deterministic randomness.
"Authenticity" because it by the intrinsic reason and not outside noise or interference produced, and "random" refers to the irregular, unpredictable behavior, can only use statistics method description. Chaotic dynamics of the main features of the system is the state of the sensitive dependence on the initial conditions, the chaos reflected its inherent randomness. Chaos theory is to point to describe the nonlinear dynamic behavior with chaos theory, the system of basic concept, methods, it dynamics system complex behavior understanding for his own with the outside world and for material, energy and information exchange process of the internal structure of behavior, not foreign and accidental behavior, chaos is a stationary. Chaotic dynamics system of stationary including: still, stable quantity, the periodicity, with sex and chaos of accurate solution... Chaos rail line is overall stability and local unstable combination of results, call it strange attractor.A strange attractor has the following features: (1) some strange attractor is a attractor, but it is not a fixed point, also not periodic solution; (2) strange attractor is indivisible, and that is not divided into two and two or more to attract children. (3) it to the initial value is very sensitive, different initial value can lead to very different behavior.superiorityThe artificial neural network of characteristics and advantages, mainly in three aspects: first, self-learning. For example, only to realize image recognition that the many different image model and the corresponding should be the result of identification input artificial neural network, the network will through the self-learning function, slowly to learn to distinguish similar images. The self-learning function for the forecast has special meaning. The prospect of artificial neural network computer will provide mankind economic forecasts, market forecast, benefit forecast, the application outlook is very great. 
Second, associative memory. Feedback-type artificial neural networks can realize this kind of association. Third, the ability to find optimal solutions at high speed. Finding the optimal solution of a complex problem often requires enormous computation; with a feedback-type artificial neural network designed for the problem, and the high-speed computing power of the computer, the optimal solution may be found quickly.

Research directions

Research on neural networks can be divided into theoretical research and applied research. Theoretical research falls into two categories:

1. Using neurophysiology and cognitive science to study the mechanisms of human thinking and intelligence.

2. Using the results of basic neural theory to explore, with mathematical methods, neural network models with more complete functions and better performance; studying network algorithms and performance in depth, such as stability, convergence, fault tolerance, and robustness; and developing new mathematical theories of networks, such as neural network dynamics and nonlinear neural fields.

Applied research also falls into two categories:

1. Research on software simulation and hardware realization of neural networks.

2. Research on applications of neural networks in various fields, including pattern recognition, signal processing, knowledge engineering, expert systems, combinatorial optimization, and robot control.
As neural network theory itself and the related theories and technologies develop, the applications of neural networks will deepen further.

Development trends and research hotspots

The nonlinear, adaptive information-processing power characteristic of artificial neural networks overcomes the drawbacks of traditional artificial-intelligence methods on intuition-like tasks such as pattern recognition, speech recognition, and unstructured information processing, and has been applied successfully in neural expert systems, pattern recognition, intelligent control, combinatorial optimization, forecasting, and other areas. Unifying artificial neural networks with other traditional methods will promote the development of artificial intelligence and information-processing technology. In recent years, artificial neural networks have been developing further along the path of simulating human cognition, combining with fuzzy systems, genetic algorithms, and evolutionary mechanisms to form computational intelligence, which has become an important direction of artificial intelligence and will be developed in practical applications. The introduction of information geometry into artificial-neural-network research opens a new way for theoretical study of artificial neural networks. Research on neural computers is developing quickly, and existing products are about to enter the market. Optoelectronic neural computers provide good conditions for the development of artificial neural networks.

Neural networks have found very good applications in many fields, but much remains to be researched. Among the topics, hybrid methods and hybrid systems that integrate the advantages of neural networks (distributed storage, parallel processing, self-learning, self-organization, and nonlinear mapping) with other technologies have become a hotspot. Since the other methods have their own respective advantages, combining them with neural networks so that each contributes its strong points can yield better application results.
At present such fusion is under way between neural networks and fuzzy logic, expert systems, genetic algorithms, wavelet analysis, chaos, rough set theory, fractal theory, evidence theory, and grey system theory.

Chinese translation (rendered in English): Artificial Neural Networks (ANNs), also simply called neural networks (NNs) or connectionist models, are algorithmic mathematical models that imitate the behavioral characteristics of animal neural networks and perform distributed, parallel information processing.

Foreign literature translation: draft and original

Translation draft 1: A typical application of the Kalman filter is to predict the coordinates and velocity of an object's position from a finite series of noisy observations of that position (possibly biased).

It appears in many engineering applications (e.g., radar, computer vision).

The Kalman filter is also an important topic in control theory and control systems engineering.

For example, with radar, what is of interest is the ability to track a target.

But measurements of the target's position, velocity, and acceleration are noisy at every moment.

The Kalman filter uses the target's dynamic information to try to remove the influence of the noise and obtain a good estimate of the target's position.

This estimate can be an estimate of the current target position (filtering), an estimate of a future position (prediction), or an estimate of a past position (interpolation or smoothing).

Naming: This filtering method is named after its inventor, Rudolf E. Kalman, although the literature shows that Peter Swerling had actually proposed a similar algorithm even earlier.

Stanley Schmidt was the first to implement a Kalman filter.

While visiting the NASA Ames Research Center, Kalman found that his method was useful for the orbit-prediction problem of the Apollo program, and the navigation computer of the Apollo spacecraft later used this filter.

Papers on this filter were published by Swerling (1958), Kalman (1960), and Kalman and Bucy (1961).

Many different implementations of the Kalman filter now exist.

The form Kalman originally proposed is now generally called the simple Kalman filter.

In addition, there are the Schmidt extended filter, the information filter, and many variants of the square-root filters developed by Bierman and Thornton.

Perhaps the most common Kalman filter is the phase-locked loop, which is found in radios, computers, and almost any video or communications equipment.

The following discussion requires general knowledge of linear algebra and probability theory.

The Kalman filter is built on linear algebra and the hidden Markov model.

Its basic dynamic system can be represented by a Markov chain built on a linear operator perturbed by Gaussian noise (i.e., normally distributed noise).

The state of the system can be represented by a vector of real numbers.
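The state-vector formulation above can be made concrete with a minimal linear Kalman filter for the radar-style example: estimating position and velocity from noisy position readings. The noise levels and trajectory here are invented for illustration, and the 2x2 matrix algebra is written out by hand:

```python
# A minimal constant-velocity Kalman filter: state x = [position, velocity],
# transition F = [[1, dt], [0, 1]], measurement H = [1, 0].
import random

dt, q, r = 1.0, 0.01, 4.0            # time step, process var, measurement var
x = [0.0, 0.0]                       # state estimate [position, velocity]
P = [[100.0, 0.0], [0.0, 100.0]]     # estimate covariance (initially vague)

random.seed(0)
true_pos, true_vel = 0.0, 1.0
estimates = []
for _ in range(60):
    true_pos += true_vel * dt
    z = true_pos + random.gauss(0.0, r ** 0.5)   # noisy position measurement
    # Predict: x <- F x,  P <- F P F^T + Q
    x = [x[0] + dt * x[1], x[1]]
    P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
          P[0][1] + dt * P[1][1]],
         [P[1][0] + dt * P[1][1], P[1][1] + q]]
    # Update with the scalar measurement z
    S = P[0][0] + r                              # innovation variance
    K = [P[0][0] / S, P[1][0] / S]               # Kalman gain
    y = z - x[0]                                 # innovation
    x = [x[0] + K[0] * y, x[1] + K[1] * y]
    P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
         [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
    estimates.append(x[0])
```

Although each individual measurement has a noise standard deviation of 2, the filtered position estimate tracks the true trajectory much more closely, and the velocity is recovered even though it is never measured directly.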

Oil pumping control system based on neural networks and genetic algorithms

Vol. 36, No. 1, Journal of Jilin University (Engineering and Technology Edition), Jan. 2006. Article ID: 1671-5497(2006)01-0082-005. Received: 2005-06-07. Funding: Jilin Province Science and Technology Development Plan Project (20040532). First author: Li Ying (1978-), male, doctoral candidate; research interest: reconfigurable manipulator control. E-mail: liyingsmart2004@yahoo. Corresponding author: Li Yuanchun (1962-), male, professor and doctoral supervisor; research interests: modeling and optimization of complex systems, robot control. E-mail: liyc@email.jlu.edu.cn.

Oil pumping control system based on neural networks and genetic algorithms. Li Ying, Li Yuanchun (College of Communication Engineering, Jilin University, Changchun 130022, China). Abstract: To solve the problems of long-term relative light load and idle pumping of some pumping units, an intermittent pumping control method was used to design the oil-extraction control system.

A nonlinear homotopy BP neural network with a nonlinear normalization method was used to identify the pumping model, and a genetic algorithm was used to optimize the downtime.

Field tests of the control system in an oil field show that, with oil output guaranteed, an energy-saving rate of over 30% was achieved, realizing intelligent control of pumping-unit oil extraction.

Keywords: automatic control technology; nonlinear homotopy; neural network; genetic algorithm; oil pumping control system. CLC number: TP273. Document code: A.

Oil Pumping Control System Based on Neural Network and Genetic Algorithm

Li Ying, Li Yuan-chun (College of Communication Engineering, Jilin University, Changchun 130022, China)

Abstract: In order to solve the problems of long-term light-load and idle pumping that many oil wells face, a control system to operate the oil pump intermittently was developed. It is based on a nonlinear homotopic BP neural network with a nonlinear normalization method to identify the oil pumping model, and a genetic algorithm to optimize the downtime. The spot test of the control system showed that an energy-saving rate of up to 30% was achieved under guaranteed oil output, and intelligent control of the oil pumping was realized.

Key words: automatic control technology; nonlinear homotopy; neural network; genetic algorithm; oil pumping control system

0. Introduction

In recent years, research on intelligent control of oil-field pumping units has become a hot topic.
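The downtime-optimization step described in the abstract can be sketched with a generic genetic algorithm. The fitness function below is a made-up stand-in for the paper's identified BP-network well model (energy saved minus a penalty for lost output), not the authors' actual model:

```python
# Genetic-algorithm sketch: choose a pump downtime (in hours) that trades
# off energy savings against lost oil output. The fitness is illustrative.
import math
import random

def fitness(downtime_h):
    energy_saved = 1.0 - math.exp(-downtime_h / 4.0)  # saturating benefit
    lost_output = 0.02 * downtime_h ** 2              # growing penalty
    return energy_saved - lost_output

random.seed(1)
pop = [random.uniform(0.0, 12.0) for _ in range(30)]  # candidate downtimes
for _ in range(40):                                   # generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                                # truncation selection
    children = []
    while len(children) < 20:
        a, b = random.sample(parents, 2)
        child = (a + b) / 2.0                         # arithmetic crossover
        child += random.gauss(0.0, 0.3)               # Gaussian mutation
        children.append(min(12.0, max(0.0, child)))   # keep in range
    pop = parents + children
best = max(pop, key=fitness)
```

For this particular fitness surface the optimum sits near 3 hours; the GA finds it without needing derivatives, which is why GAs suit black-box models such as a trained neural network.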

Deep Learning paper translation (Nature Deep Review)

Original paper: by Yann LeCun, Yoshua Bengio & Geoffrey Hinton, Nature volume 521, pages 436-444 (28 May 2015). Translator: 零楚L(). This paper is a review of deep learning; originally I only intended to take notes, but none of the translations I found read smoothly.

Since I had to work through the original anyway, I decided to produce a translation myself, striving to be accurate and fluent.

Please credit this source when reprinting; of course, if you do not, there is nothing I can really do about it.

The paper makes heavy use of terms the authors seem to treat as standard but which are hard to render in Chinese without ambiguity. The fixed translations adopted in this text include: representation, rendered as "feature description"; and objective function/objective, rendered as "error function/error". The latter should literally be "objective function/evaluation function", but in practice the objective function used is the cost function plus a regularization term (reference link: ), so this text renders it directly as "error function", which aids understanding without harming correctness.

Writing the following sentence after translating this much may be a little presumptuous, but it should be all right. Of all the modest formulas for accepting criticism I have seen in books, the version by teacher Zhang Yu left the deepest impression: I have no intention of using "limited ability" as an excuse; I sincerely accept criticism and correction.

So, let us begin.

Nature Deep Review. Abstract: Deep learning allows computational models composed of multiple processing layers to learn representations of data at a high level of abstraction.

These methods have dramatically improved the state of the art in several fields, including speech recognition, visual object recognition, object detection, and many other domains such as drug discovery and genomics.

Deep learning uses a backpropagation algorithm (BP algorithm) to indicate how a machine should change the parameters that each processing layer uses to compute its "feature description" from the "feature description" of the preceding layer; in this way, deep learning discovers the intricate structure of a given dataset.

Deep convolutional networks have brought breakthroughs in processing images, video, speech, and audio, whereas recurrent networks have shown promise for sequential data such as text and speech.
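The backpropagation procedure the abstract describes, where each layer adjusts its parameters using gradients passed back from the layer above, can be illustrated numerically with a tiny network. This is generic textbook BP, not the review's notation; the data and architecture are invented:

```python
# A 2-input, 1-hidden-unit, 1-output network trained by backpropagation to
# fit the toy linear target y = 0.25 * (x1 + x2).
import math
import random

random.seed(0)
w = [random.uniform(-0.5, 0.5) for _ in range(3)]  # hidden weights + bias
v = [random.uniform(-0.5, 0.5) for _ in range(2)]  # output weight + bias
data = [((0.0, 0.0), 0.0), ((1.0, 0.0), 0.25),
        ((0.0, 1.0), 0.25), ((1.0, 1.0), 0.5)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

lr = 0.5
for _ in range(5000):
    for (x1, x2), t in data:
        h = sigmoid(w[0] * x1 + w[1] * x2 + w[2])  # forward pass
        y = v[0] * h + v[1]                        # linear output unit
        err = y - t                                # output error
        # Backward pass: chain rule through the output, then hidden layer.
        v[0] -= lr * err * h
        v[1] -= lr * err
        dh = err * v[0] * h * (1.0 - h)            # gradient at hidden unit
        w[0] -= lr * dh * x1
        w[1] -= lr * dh * x2
        w[2] -= lr * dh

loss = sum((v[0] * sigmoid(w[0] * a + w[1] * b + w[2]) + v[1] - t) ** 2
           for (a, b), t in data)
```

The `dh` line is the whole trick: the error signal at the hidden unit is the output error multiplied by the connecting weight and the local derivative of the sigmoid, exactly the layer-by-layer credit assignment the abstract refers to.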

FPGA: English literature translation

Field-programmable gate array. 1. History: The FPGA industry sprouted from programmable read-only memories (PROMs) and programmable logic devices (PLDs).

PROMs and PLDs could both be programmed in batches in the factory or in the field (field-programmable); however, the programmable logic was hard-wired between logic gates.

In the late 1980s, Steve Casselman, funded by the Naval Surface Warfare Center, proposed an experiment to develop a computer that would implement 600,000 reprogrammable gates.

Casselman succeeded, and a patent related to the system was issued in 1992.

In 1985, David W. Page and LuVerne R. Peterson obtained patents covering some of the industry's fundamental concepts and programmable logic-array, gate, and logic-block technologies, and companies began to be founded.

In the same year, Xilinx co-founders Ross Freeman and Bernard Vonderschmitt invented the first commercially viable field-programmable gate array, the XC2064.

The XC2064 offered programmable gates and programmable interconnects between gates, the beginning of a new technology and market.

The XC2064 had 64 configurable logic blocks (CLBs), each with two three-input lookup tables (LUTs).
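A 3-input LUT of the kind just mentioned is, functionally, nothing more than an 8-entry truth table addressed by the input bits. The miniature software model below is an illustration of that idea, not the XC2064's actual circuitry:

```python
# A 3-input lookup table: 8 configuration bits addressed by (a, b, c).
def make_lut3(truth_table):
    assert len(truth_table) == 8
    def lut(a, b, c):
        return truth_table[(a << 2) | (b << 1) | c]  # inputs form the address
    return lut

# "Configure" the LUT as a full-adder sum bit: sum = a XOR b XOR c.
sum_lut = make_lut3([0, 1, 1, 0, 1, 0, 0, 1])
```

Re-"programming" the FPGA amounts to loading different configuration bits: the same hardware cell becomes an AND, an XOR, or any other 3-input function simply by changing the table.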

More than 20 years later, Ross Freeman was inducted into the National Inventors Hall of Fame, which praised his invention highly.

Xilinx continued to be challenged, and it grew rapidly from 1985 to the mid-1990s, when competitors sprang up and eroded a significant share of the market.

By 1993, Actel held roughly 18% of the market.

The 1990s were an explosive period for FPGAs, in both complexity and production volume.

In the early 1990s, FPGAs found their first applications in telecommunications and networking.

By the end of the decade, FPGA industry leaders had made their way into consumer electronics, automotive, and industrial applications.

In 1997, Adrian Thompson, a researcher working at the University of Sussex, merged genetic-algorithm techniques with FPGAs to create a sound-recognition device, demonstrating what FPGAs could do and making their reputation.

Thompson's algorithm configured a 10x10 array of cells on a Xilinx FPGA chip to distinguish between two tones, exploiting the analog features of the digital chip.

Electrical engineering foreign literature (with translation)

Literature 1: Electric power consumption prediction model based on grey theory optimized by genetic algorithms. This paper introduces a power-consumption prediction model based on hybrid grey theory optimized by a genetic algorithm.

The model is built from time-series data and uses grey theory to handle uncertainty in the data.

Optimized by the genetic algorithm, the model predicts power consumption better and achieves excellent prediction results.

The model can be used in large-scale power networks and has high feasibility and reliability.
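The grey-theory core of such a model can be sketched as a plain GM(1,1) prediction. The consumption series below is invented for illustration, and the paper's genetic-algorithm tuning step is omitted:

```python
# GM(1,1) grey prediction: accumulate the series, fit dx1/dt + a*x1 = b by
# least squares on the background values, then difference the fitted curve.
import math

x0 = [102.0, 108.0, 115.0, 123.0, 131.0]           # hypothetical yearly data
x1 = [sum(x0[:i + 1]) for i in range(len(x0))]     # accumulated series (1-AGO)
z1 = [0.5 * (x1[i] + x1[i + 1]) for i in range(len(x1) - 1)]  # backgrounds

# Least-squares solution of x0(k) = -a * z1(k) + b for (a, b).
n = len(z1)
sz, szz = sum(z1), sum(z * z for z in z1)
sy = sum(x0[1:])
szy = sum(z * y for z, y in zip(z1, x0[1:]))
det = n * szz - sz * sz
a = -(n * szy - sz * sy) / det
b = (szz * sy - sz * szy) / det

def predict(k):
    """k = 0 reproduces x0[0]; k >= 1 gives fitted/forecast values."""
    if k == 0:
        return x0[0]
    x1k = (x0[0] - b / a) * math.exp(-a * k) + b / a
    x1k_prev = (x0[0] - b / a) * math.exp(-a * (k - 1)) + b / a
    return x1k - x1k_prev
```

Because the toy series grows near-exponentially, the fitted values land close to the observations and the one-step forecast continues the trend; a GA layered on top (as in the paper) would tune such model parameters further.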

Literature 2: Intelligent control for energy-efficient operation of electric motors. This paper studies an intelligent control method for energy-efficient operation of electric motors.

The method provides a more efficient control strategy that allows motors to run at lower power under different load conditions.

The intelligent controller uses fuzzy-logic methods to determine the best control parameters and a genetic algorithm to optimize them.

Experimental results show that this intelligent control method can significantly reduce the energy consumption of motors and save electric energy.

Literature 3: Fault diagnosis system for power transformers based on dissolved gas analysis. This paper introduces a fault-diagnosis system for power transformers based on dissolved-gas analysis.

By analyzing gas samples from the transformer oil, the types of faults present inside the transformer can be detected and diagnosed.

The system uses an artificial-neural-network model to process and classify the gas-analysis data.

Experimental results show that the system can accurately detect and diagnose transformer faults and helps achieve effective maintenance and management.

Literature 4: Power quality improvement using series active filter based on iterative learning control technique. This paper studies a series active filter based on an iterative learning control technique for power-quality improvement.

References (artificial intelligence)

References (artificial intelligence). Cao Hui. Purpose: to organize the references (including abstracts, reading notes, etc.) for convenient later use.

Classification: roughly divided into papers, tutorials, and digests.

Contents: 0 Introduction; 1 Systems and surveys; 2 Neural networks; 3 Machine learning; 3.1 Analyzing the effectiveness and applicability of co-training; 3.2 Bootstrapping for text learning tasks; 3.3 ★ Building domain-specific search engines with machine learning techniques; 3.4 Combining labeled and unlabeled data with co-training; 3.5 Applying statistical and relational methods in hypertext learning; 3.6 Discovering test-set regularities in relational domains; 3.7 First-order learning for web mining; 3.8 Learning monolingual language models from multilingual text databases; 3.9 Learning from the Internet to construct knowledge bases; 3.10 The role of unlabeled data in supervised learning; 3.11 Using reinforcement learning to spider the web efficiently; 3.12 ★ Text learning and related intelligent agents: a survey; 3.13 ★ Learning approaches for detecting and tracking news events; 3.14 ★ Machine learning in information retrieval: neural networks, symbolic learning, and genetic algorithms; 3.15 Machine learning of user profiles with NLP; 4 Pattern recognition; 4.1 Pattern processing in Java.

0 Introduction. 1 Systems and surveys. 2 Neural networks. 3 Machine learning.

3.1 Analyzing the effectiveness and applicability of co-training. Title: Analyzing the Effectiveness and Applicability of Co-training. Link: Papers\AI\Machine Learning\Analyzing the Effectiveness and Applicability of Co-training.ps. Authors: Kamal Nigam, Rayid Ghani. Notes: Kamal Nigam (School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, knigam@), Rayid Ghani (School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, rayid@). Abstract: Recently there has been significant interest in supervised learning algorithms that combine labeled and unlabeled data for text learning tasks. The co-training setting [1] applies to datasets that have a natural separation of their features into two disjoint sets. We demonstrate that when learning from labeled and unlabeled data, algorithms explicitly leveraging a natural independent split of the features outperform algorithms that do not. When a natural split does not exist, co-training algorithms that manufacture a feature split may outperform algorithms not using a split.
These results help explain why co-training algorithms are both discriminative in nature and robust to the assumptions of their embedded classifiers.

3.2 Bootstrapping for text learning tasks. Title: Bootstrapping for Text Learning Tasks. Link: Papers\AI\Machine Learning\Bootstrap for Text Learning Tasks.ps. Authors: Rosie Jones, Andrew McCallum, Kamal Nigam, Ellen Riloff. Notes: Rosie Jones (rosie@, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213), Andrew McCallum (mccallum@, Just Research, 4616 Henry Street, Pittsburgh, PA 15213), Kamal Nigam (knigam@), Ellen Riloff (riloff@, Department of Computer Science, University of Utah, Salt Lake City, UT 84112). Abstract: When applying text learning algorithms to complex tasks, it is tedious and expensive to hand-label the large amounts of training data necessary for good performance. This paper presents bootstrapping as an alternative approach to learning from large sets of labeled data. Instead of a large quantity of labeled data, this paper advocates using a small amount of seed information and a large collection of easily-obtained unlabeled data. Bootstrapping initializes a learner with the seed information; it then iterates, applying the learner to calculate labels for the unlabeled data, and incorporating some of these labels into the training input for the learner. Two case studies of this approach are presented. Bootstrapping for information extraction provides 76% precision for a 250-word dictionary for extracting locations from web pages, when starting with just a few seed locations. Bootstrapping a text classifier from a few keywords per class and a class hierarchy provides accuracy of 66%, a level close to human agreement, when placing computer science research papers into a topic hierarchy.
The success of these two examples argues for the strength of the general bootstrapping approach for text learning tasks.

3.3 ★ Building domain-specific search engines with machine learning techniques. Title: Building Domain-specific Search Engines with Machine Learning Techniques. Link: Papers\AI\Machine Learning\Building Domain-Specific Search Engines with Machine Learning Techniques.ps. Authors: Andrew McCallum, Kamal Nigam, Jason Rennie, Kristie Seymore. Notes: Andrew McCallum (mccallum@, Just Research, 4616 Henry Street, Pittsburgh, PA 15213), Kamal Nigam (knigam@, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213), Jason Rennie (jr6b@), Kristie Seymore (kseymore@). Abstract: Domain-specific search engines are growing in popularity because they offer increased accuracy and extra functionality not possible with the general, Web-wide search engines. For example, allows complex queries by age-group, size, location and cost over summer camps. Unfortunately these domain-specific search engines are difficult and time-consuming to maintain. This paper proposes the use of machine learning techniques to greatly automate the creation and maintenance of domain-specific search engines. We describe new research in reinforcement learning, information extraction and text classification that enables efficient spidering, identifying informative text segments, and populating topic hierarchies. Using these techniques, we have built a demonstration system: a search engine for computer science research papers. It already contains over 50,000 papers and is publicly available at .... A multinomial Naive Bayes text-classification model is adopted.

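The seed-then-iterate loop the bootstrapping abstract describes can be sketched in miniature. The documents, seed keywords, and scoring below are all invented for illustration, not taken from the paper:

```python
# Toy bootstrapping: start from one seed keyword per class, label the
# unlabeled "documents" by keyword overlap, and absorb each newly labeled
# document's words back into its class, enriching the keyword sets.
seeds = {"sports": {"game"}, "tech": {"code"}}
unlabeled = [
    {"game", "team", "score"},
    {"code", "bug", "compile"},
    {"team", "coach"},        # matches no seed word, only bootstrapped words
    {"compile", "bug"},
]

keywords = {c: set(s) for c, s in seeds.items()}
labels = [None] * len(unlabeled)
for _ in range(3):  # each round: label, then grow the keyword sets
    for i, doc in enumerate(unlabeled):
        scores = {c: len(doc & kw) for c, kw in keywords.items()}
        best = max(scores, key=scores.get)
        if scores[best] > 0:
            labels[i] = best
            keywords[best] |= doc  # bootstrap step: labeled doc teaches the class
```

The third document contains neither seed word, yet it still gets labeled "sports" because an earlier labeled document contributed "team" to that class, which is the essence of the bootstrapping idea.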
Convolutional neural network machine learning: foreign literature translation, Chinese and English, 2020

English original: Prediction of composite microstructure stress-strain curves using convolutional neural networks. Charles Yang, Youngsoo Kim, Seunghwa Ryu, Grace Gu.

Abstract: Stress-strain curves are an important representation of a material's mechanical properties, from which important properties such as elastic modulus, strength, and toughness are defined. However, generating stress-strain curves from numerical methods such as the finite element method (FEM) is computationally intensive, especially when considering the entire failure path for a material. As a result, it is difficult to perform high-throughput computational design of materials with large design spaces, especially when considering mechanical responses beyond the elastic limit. In this work, a combination of principal component analysis (PCA) and convolutional neural networks (CNN) is used to predict the entire stress-strain behavior of binary composites evaluated over the entire failure path, motivated by the significantly faster inference speed of empirical models. We show that PCA transforms the stress-strain curves into an effective latent space by visualizing the eigenbasis of PCA. Despite having a dataset of only 10-27% of possible microstructure configurations, the mean absolute error of the prediction is <10% of the range of values in the dataset, when measuring model performance based on derived material descriptors, such as modulus, strength, and toughness. Our study demonstrates the potential to use machine learning to accelerate material design, characterization, and optimization.

Keywords: Machine learning, Convolutional neural networks, Mechanical properties, Microstructure, Computational mechanics

Introduction

Understanding the relationship between structure and property for materials is a seminal problem in material science, with significant applications for designing next-generation materials. A primary motivating example is designing composite microstructures for load-bearing applications, as composites offer advantageously high specific strength and specific toughness. Recent advancements in additive manufacturing have facilitated the fabrication of complex composite structures, and as a result, a variety of complex designs have been fabricated and tested via 3D-printing methods. While more advanced manufacturing techniques are opening up unprecedented opportunities for advanced materials and novel functionalities, identifying microstructures with desirable properties is a difficult optimization problem.

One method of identifying optimal composite designs is by constructing analytical theories. For conventional particulate/fiber-reinforced composites, a variety of homogenization theories have been developed to predict the mechanical properties of composites as a function of volume fraction, aspect ratio, and orientation distribution of reinforcements. Because many natural composites, synthesized via self-assembly processes, have relatively periodic and regular structures, their mechanical properties can be predicted if the load transfer mechanism of a representative unit cell and the role of the self-similar hierarchical structure are understood. However, the applicability of analytical theories is limited in quantitatively predicting composite properties beyond the elastic limit in the presence of defects, because such theories rely on the concept of the representative volume element (RVE), a statistical representation of material properties, whereas the strength and failure are determined by the weakest defect in the entire sample domain. Numerical modeling based on finite element methods (FEM) can complement analytical methods for predicting inelastic properties such as strength and toughness modulus (referred to as toughness, hereafter), which can only be obtained from full stress-strain curves.

However, numerical schemes capable of modeling the initiation and propagation of curvilinear cracks, such as the crack phase field model, are computationally expensive and time-consuming because a very fine mesh is required to accommodate the highly concentrated stress field near the crack tip and the rapid variation of the damage parameter near the diffusive crack surface. Meanwhile, analytical models require significant human effort and domain expertise and fail to generalize to similar domain problems.

In order to identify high-performing composites in the midst of large design spaces within realistic time-frames, we need models that can rapidly describe the mechanical properties of complex systems and be generalized easily to analogous systems. Machine learning offers the benefit of extremely fast inference times and requires only training data to learn relationships between inputs and outputs, e.g., composite microstructures and their mechanical properties. Machine learning has already been applied to speed up the optimization of several different physical systems, including graphene kirigami cuts, fine-tuning spin qubit parameters, and probe microscopy tuning. Such models do not require significant human intervention or knowledge, learn relationships efficiently relative to the input design space, and can be generalized to different systems.

In this paper, we utilize a combination of principal component analysis (PCA) and convolutional neural networks (CNN) to predict the entire stress-strain curve of composite failures beyond the elastic limit. Stress-strain curves are chosen as the model's target because they are difficult to predict given their high dimensionality. In addition, stress-strain curves are used to derive important material descriptors such as modulus, strength, and toughness. In this sense, predicting stress-strain curves is a more general description of composite properties than any combination of scalar material descriptors. A dataset of 100,000 different composite microstructures and their corresponding stress-strain curves is used to train and evaluate model performance. Due to the high dimensionality of the stress-strain dataset, several dimensionality reduction methods are used, including PCA, featuring a blend of domain understanding and traditional machine learning, to simplify the problem without loss of generality for the model.

We will first describe our modeling methodology and the parameters of our finite-element method (FEM) used to generate data. Visualizations of the learned PCA latent space are then presented, along with model performance results.

CNN implementation and training

A convolutional neural network was trained to predict this lower-dimensional representation of the stress vector. The input to the CNN was a binary matrix representing the composite design, with 0's corresponding to soft blocks and 1's corresponding to stiff blocks. PCA was implemented with the open-source Python package scikit-learn, using the default hyperparameters. The CNN was implemented using Keras with a TensorFlow backend. The batch size for all experiments was set to 16 and the number of epochs to 30; the Adam optimizer was used to update the CNN weights during backpropagation. A train/test split ratio of 95:5 is used; we justify using a smaller ratio than the standard 80:20 because of a relatively large dataset. With a ratio of 95:5 and a dataset with 100,000 instances, the test set still has enough data points, roughly several thousands, for its results to generalize. Each column of the target PCA representation was normalized to have a mean of 0 and a standard deviation of 1 to prevent unstable training.

Finite element method data generation

FEM was used to generate training data for the CNN model. Although obtaining the initial training data is compute-intensive, it takes much less time to train the CNN model and even less time to make high-throughput inferences over thousands of new, randomly generated composites. The crack phase field solver was based on the hybrid formulation for the quasi-static fracture of elastic solids and implemented in the commercial FEM software ABAQUS with a user-element subroutine (UEL).

Visualizing PCA

In order to better understand the role PCA plays in effectively capturing the information contained in stress-strain curves, the principal component representation of stress-strain curves is plotted in 3 dimensions. Specifically, we take the first three principal components, which have a cumulative explained variance of ~85%, plot the stress-strain curves in that basis, and provide several different angles from which to view the 3D plot. Each point represents a stress-strain curve in the PCA latent space and is colored based on the associated modulus value. It seems that the PCA is able to spread out the curves in the latent space based on modulus values, which suggests that this is a useful latent space for the CNN to make predictions in.

CNN model design and performance

Our CNN was a fully convolutional neural network, i.e., the only dense layer was the output layer. All convolution layers used 16 filters with a stride of 1, with a LeakyReLU activation followed by BatchNormalization. The first 3 conv blocks did not have 2D MaxPooling, followed by 9 conv blocks which did have a 2D MaxPooling layer, placed after the BatchNormalization layer. A GlobalAveragePooling layer was used to reduce the dimensionality of the output tensor from the sequential convolution blocks, and the final output layer was a Dense layer with 15 nodes, where each node corresponded to a principal component. In total, our model had 26,319 trainable weights.

Our architecture was motivated by the recent development of, and convergence onto, fully-convolutional architectures for traditional computer vision applications, where convolutions are empirically observed to be more efficient and stable for learning as opposed to dense layers. In addition, in our previous work, we had shown that CNNs were a capable architecture for learning to predict the mechanical properties of 2D composites [30]. The convolution operation is an intuitively good fit for predicting crack propagation because it is a local operation, allowing it to implicitly featurize and learn the local spatial effects of crack propagation.

After applying the PCA transformation to reduce the dimensionality of the target variable, the CNN is used to predict the PCA representation of the stress-strain curve of a given binary composite design. After training the CNN on a training set, its ability to generalize to composite designs it has not seen is evaluated by comparing its predictions on an unseen test set. However, a natural question that emerges is how to evaluate a model's performance at predicting stress-strain curves in a real-world engineering context. While simple scalar metrics such as mean squared error (MSE) and mean absolute error (MAE) generalize easily to vector targets, it is not clear how to interpret these aggregate summaries of performance. It is difficult to use such metrics to ask questions such as "Is this model good enough to use in the real world?" and "On average, how incorrect will a given prediction be relative to some given specification?".

Although being able to predict stress-strain curves is an important application of FEM and a highly desirable property for any machine learning model to learn, it does not easily lend itself to interpretation. Specifically, there is no simple quantitative way to define whether two stress-strain curves are "close" or "similar" in real-world units. Given that stress-strain curves are oftentimes intermediary representations of a composite property that are used to derive more meaningful descriptors such as modulus, strength, and toughness, we decided to evaluate the model in an analogous fashion. The CNN prediction in the PCA latent space representation is transformed back to a stress-strain curve using PCA, and used to derive the predicted modulus, strength, and toughness of the composite. The predicted material descriptors are then compared with the actual material descriptors. In this way, MSE and MAE now have clearly interpretable units and meanings. The average performance of the model, with respect to the error between the actual and predicted material descriptor values derived from stress-strain curves, is presented in the Table. The MAE for material descriptors provides an easily interpretable metric of model performance and can easily be used in any design specification to provide confidence estimates of a model prediction. When comparing the mean absolute error (MAE) to the range of values taken on by the distribution of material descriptors, we can see that the MAE is relatively small compared to the range. The MAE compared to the range is <10% for all material descriptors. Relatively tight confidence intervals on the error indicate that this model architecture is stable, the model performance is not heavily dependent on initialization, and our results are robust to different train-test splits of the data.

Future work

Future work includes combining empirical models with optimization algorithms, such as gradient-based methods, to identify composite designs that yield complementary mechanical properties. The ability of a trained empirical model to make high-throughput predictions over designs it has never seen before allows for large parameter-space optimization that would be computationally infeasible for FEM. In addition, we plan to explore different visualizations of empirical models in an effort to "open up the black-box" of such models. Applying machine learning to finite-element methods is a rapidly growing field with the potential to discover novel next-generation materials tailored for a variety of applications. We also note that the proposed method can be readily applied to predict other physical properties represented in a similar vectorized format, such as electron/phonon density of states, and sound/light absorption spectra.

Conclusion

In conclusion, we applied PCA and CNN to rapidly and accurately predict the stress-strain curves of composites beyond the elastic limit. In doing so, several novel methodological approaches were developed, including using the derived material descriptors from the stress-strain curves as interpretable metrics for model performance, and applying dimensionality reduction techniques to stress-strain curves. This method has the potential to enable composite design with respect to mechanical response beyond the elastic limit, which was previously computationally infeasible, and can generalize easily to related problems outside of microstructural design for enhancing mechanical properties.

Chinese translation (rendered in English): Prediction of composite microstructure stress-strain curves based on convolutional neural networks. Charles Yang, Youngsoo Kim, Seunghwa Ryu, Grace Gu. Abstract: Stress-strain curves are an important representation of a material's mechanical properties, from which important properties such as elastic modulus, strength, and toughness can be defined.

Neural networks and fuzzy systems

Application cases

Control systems

In control systems, neural networks are mainly used for optimization and predictive control strategies.

By training a neural network to learn the dynamic behavior of a system, precise control of the system can be achieved. For example, in fields such as robot control and aerospace control, neural networks are used to improve system stability and response speed.

Data classification

In data classification, fuzzy systems are mainly used to handle uncertainty and imprecision.

... to train an optimal neural network model.

Backpropagation algorithm

Based on the error at the output layer, compute the error gradient of each layer, then update the weights and biases by gradient descent.

Stochastic gradient descent

During training, use only a portion of the data at a time to compute the gradient and then update the weights and biases, to improve training efficiency.

Adaptive learning-rate algorithms

Dynamically adjust the learning rate according to changes in the error gradient, to speed up convergence and avoid getting trapped in local minima.
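The three training ideas just listed (gradient updates from backpropagation, stochastic mini-batches, and an adaptive learning rate) can be combined in one toy loop. The model and data are illustrative, not taken from the slides; the adaptive rule is Adagrad-style:

```python
# Fit y = w * x (true w = 3) with mini-batch SGD and an adaptive step size.
import random

random.seed(0)
data = [(x, 3.0 * x) for x in [random.uniform(-1, 1) for _ in range(200)]]

w, g2, lr = 0.0, 0.0, 0.5
for _ in range(100):
    batch = random.sample(data, 16)               # stochastic mini-batch
    grad = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
    g2 += grad * grad                             # accumulate squared gradients
    w -= lr / ((g2 ** 0.5) + 1e-8) * grad         # adaptive learning rate
```

Early on the accumulated `g2` is small, so steps are large; as gradients shrink near the optimum, the effective learning rate shrinks with them, which is exactly the "adjust the rate as the gradient changes" behavior described above.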
Adaptive neuro-fuzzy systems

An adaptive neuro-fuzzy system adds self-adaptive adjustment capability to a neuro-fuzzy system. According to the system's operating state and the characteristics of the input data, it adaptively adjusts the parameters of the fuzzy rules and membership functions so as to better adapt to changes in the environment and the task.

By introducing online learning algorithms and adaptive adjustment strategies, an adaptive neuro-fuzzy system can continuously optimize its fuzzy rules and parameters based on feedback gathered during operation, improving its real-time performance and accuracy.

Hybrid neuro-fuzzy systems

A hybrid neuro-fuzzy system combines different types of neural networks with fuzzy logic to form a multi-level, multi-modal hybrid intelligent system. It exploits the strengths of the different neural network types and combines multiple fuzzy-logic methods to achieve comprehensive modeling and control of complex systems.

By integrating different types of neural networks and fuzzy-logic methods, a hybrid neuro-fuzzy system can give full play to their respective advantages and improve overall system performance. It can also handle different types of input data and tasks, and has stronger generalization ability and adaptability.

Application prospects

Psychology literature translation: The Role of Autobiographical Memory Networks in the Experience

The role of autobiographical memory networks in the experience of negative emotions: how our remembered past elicits our current feelings. The Role of Autobiographical Memory Networks in the Experience of Negative Emotions: How Our Remembered Past Elicits Our Current Feelings. Frederick L. Philippe and Richard Koestner, McGill University; Serge Lecours, Genevieve Beaulieu-Pelletier, and Katy Bois, Universite de Montreal. Abstract: This research examined the role of autobiographical memory networks in the experience of negative emotions.

Two studies found that the role of autobiographical memories and their associated memory networks in the experience of negative emotions is active and evident.

Moreover, consistent with self-determination theory, thwarting of the psychological needs for competence, autonomy, and relatedness was the key element through which autobiographical memories influenced the experience of negative emotions.

Study 1 revealed that, within autobiographical memory networks related to loss themes, need thwarting was positively associated with depressive mood, but not with other negative emotions.

Study 2, using a prospective design, revealed that anger-related and guilt-related autobiographical memory networks differed more markedly in situational anger than in emotions about unfair treatment.

All results were obtained by measuring autobiographical memory networks while controlling for neuroticism (Studies 1 and 2), self-control (Study 2), valence (Study 1), and emotion (Study 2).

These results demonstrate the persistent, affectively meaningful expression of need thwarting within autobiographical memory networks.

Keywords: autobiographical memory, negative emotions, need thwarting, memory networks, self-determination theory. Although the relation between memory and emotion has long been a topic of interest to researchers, what most researchers have mainly examined is how emotion influences memory.

For example, research in the 1980s and 1990s examined how an individual's current mood influences mood-congruent memory (e.g., Bower & Cohen, 1982; Clore & Parrott, 1991), or the ways in which emotional memories are remembered better than neutral ones (Heuer & Reisberg, 1990).

Chinese and English references

Chinese and English references
Chinese and English references are an indispensable part of academic research; they provide readers with detailed information about the research background, methods, and results.

Below are some examples of Chinese and English references:
Chinese references:
1. 张三. (2019). 机器学习算法在数据挖掘中的应用研究. 中国计算机学会.
2. 李四, 王五, & 赵六. (2018). 人工智能的发展及其应用. 北京: 电子工业出版社.
3. 吕七, 刘八, & 陈九. (2017). 自然语言处理技术的最新进展. 人工智能, 25(3), 28-35.
English references:
1. Zhang, S. (2019). Application of machine learning algorithms in data mining. China Computer Federation.
2. Li, S., Wang, W., & Zhao, L. (2018). The development and applications of artificial intelligence. Beijing: Electronics Industry Press.
3. Lyu, Q., Liu, B., & Chen, J. (2017). The latest advances in natural language processing technology. Artificial Intelligence, 25(3), 28-35.

Paper materials translated from English to Chinese

Navigating a robot through highly cluttered environments using priority-based fuzzy behaviors. Abstract: A key challenge in applications of autonomous ground vehicles (AGVs) is navigating through environments densely cluttered with obstacles.

The robot's task becomes even more complicated when the configuration of the obstacles is not known in advance.

The most popular control approach for such systems is a reactive local navigation scheme based on tight integration of the robot's sensor information.

Because of environmental uncertainty, fuzzy behavior systems have been proposed.

The most difficult problem in applying a navigation control system built on fuzzy-control reactive behaviors is arbitrating, or fusing, the individual reactive behaviors; priority logic is used to solve this problem.

Using a multi-valued logic framework, this paper proposes controlling the navigation of an autonomous vehicle with individually designed fuzzy behaviors.

Simulation and experimental results show that the method allows the robot to navigate smoothly and effectively through cluttered environments, such as dense forests.

Experiments comparing the method with the vector field histogram (VFH) method show that it generally produces smooth paths even toward distant goals.

1. Introduction: Safely maneuvering autonomous ground vehicles (AGVs) through disordered, complex environments densely cluttered with obstacles remains a major challenge for goal-directed autonomous-vehicle applications.

Navigation is a multi-objective control problem: it aims to ensure not only that the robot reaches the goal without hitting obstacles, but also that it travels at a stable, safe speed.

The problem is particularly difficult because some navigation objectives may conflict with one another.

It is important that the navigation control algorithm not be too complex in cluttered environments, because excessive complexity would lead to sluggish reactions.

It has been recognized that traditional planning approaches based on sensed templates are not effective in such environments; instead, a local navigation strategy that controls actions by tightly integrating sensor information will let the robot accomplish its mission successfully.

The complexity of control is overcome by decomposing the navigation control problem into simpler, well-defined subproblems that can be controlled independently and in parallel.

These subproblems and their controllers are called reactive behaviors, an approach that comes from behavior-based mobile robotics.

The technique has attracted the interest of many roboticists and has even been used in industrial process-control applications.

It spread rapidly after being introduced into mobile robotics, leading to the development of fuzzy behavior methods, whose reactive behaviors are fuzzy-logic controllers able to handle the robot's uncertain information.

Fuzzy logic also allows continuous control variables, such as heading angle and speed, to be considered, rather than the discrete values used by earlier behavior approaches.

Moreover, it allows navigation algorithms to be written in an algorithmic language close to the designer's natural way of thinking.
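Priority-based fusion of fuzzy behaviors, as described above, can be sketched with two behaviors and invented membership functions (this is not the paper's actual rule base):

```python
# Two fuzzy behaviors propose steering commands; the obstacle-avoidance
# behavior's degree of applicability, driven by proximity, takes priority
# and suppresses goal seeking as it fires.
def near_deg(d):
    """Membership of 'obstacle is near': 1 below 0.5 m, fading to 0 at 2 m."""
    if d <= 0.5:
        return 1.0
    if d >= 2.0:
        return 0.0
    return (2.0 - d) / 1.5

def steer(obstacle_dist, goal_bearing):
    mu = near_deg(obstacle_dist)
    avoid_cmd = 60.0          # degrees: swerve away from the obstacle
    seek_cmd = goal_bearing   # degrees: turn toward the goal
    # Priority-weighted fusion: blend continuously between the behaviors.
    return mu * avoid_cmd + (1.0 - mu) * seek_cmd
```

With the obstacle far away the robot simply turns toward the goal; close in, avoidance dominates completely; in between, the command blends continuously, which is the smooth arbitration that fixed-priority (winner-take-all) switching lacks.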

Three papers of major significance in NLP

Three papers of major significance in NLP
1. "A Statistical Approach to Machine Translation" by Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer.

This is one of the most classic papers in the machine-translation area of natural language processing.

It was the first to propose statistics-based machine translation, shifting natural language processing from rule-driven to data-driven approaches and opening a new era of machine-translation research.

2. "Mining the Web: Discovering Knowledge from Hypertext Data" by Soumen Chakrabarti.

This is one of the important works in NLP and web mining.

It provides link-analysis-based methods that can extract useful knowledge from large volumes of unstructured text on the Internet, including keyword extraction, entity recognition, and text classification.

3. "Word2Vec" by Tomas Mikolov, Kai Chen, Greg Corrado and Jeffrey Dean.

The Word2Vec algorithm proposed in this work is one of the most popular word-vector representation methods in natural language processing.

By mapping words into a vector space, Word2Vec can represent the semantic information of natural language fairly effectively, and has achieved good results in many NLP tasks.
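Word2Vec's core idea, that words become points in a vector space where closeness tracks semantic similarity, can be illustrated with hand-made toy vectors and cosine similarity. The vectors below are invented for illustration, not trained by the actual Word2Vec procedure:

```python
# Cosine similarity over toy 3-d "embeddings": semantically similar words
# are placed near each other, so "king" is closer to "queen" than "banana".
import math

vec = {
    "king":   [0.90, 0.80, 0.10],
    "queen":  [0.85, 0.82, 0.12],
    "banana": [0.10, 0.05, 0.90],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)
```

Real Word2Vec learns hundreds of dimensions from word co-occurrence statistics, but downstream tasks consume the vectors exactly this way: by comparing directions with cosine similarity.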

Electronic information: foreign literature translation

XXXX学院毕业设计(论文)外文参考文献译文本2012届原文出处A Novel Cross-layer Quality-of-service ModelFor Mobile AD hoc Network毕业设计(论文)题目基于COMNETIII的局域网的规划与设计院(系)电气与电子信息学院专业名称电子信息工程学生姓名学生学号指导教师A Novel Cross-layer Quality-of-service ModelFor Mobile AD hoc NetworkLeichun Wang, Shihong Chen, Kun Xiao, Ruimin Hu National Engineering Research Center of Multimedia Software, Wuhan UniversityWuhan 430072, Hubei, chinaEmail:******************Abstract:The divided-layer protocol architecture for Mobile ad hoc Networks (simply MANETs) can only provide partial stack. This leads to treat difficulties in QoS guarantee of multimedia information transmission in MANETs, this paper proposes Across-layers QoS Model for MANETs, CQMM. In CQMM, a core component was added network status repository (NSR), which was the center of information exchange and share among different protocol layers in the stack. At the same time, CQMM carried out all kinds of unified QoS controls. It is advantageous that CQMM avoids redundancy functions among the different protocol layers in the stack and performs effective QoS controls and overall improvements on the network performances.Keyword: Cross-layers QoS Model, Mobile Ad hoc Networks (MANETs), Network Status Repository (NSR), QoS Controls.1 introductionWith the rapid development of multimedia technologies and the great increase of his bandwidth for personal communication, video and video services begin to be deployed in MANETs. Different from static networks and Internet, multimedia communications in MANETs such as V oice and Video services require strict QoS guarantee, especially the delay guarantee. In addition, communication among different users can be integrated services with different QoS requirements. These lead to great challenges in QoS guarantee of multimedia communication in MANETs. 
There are two main reasons for this: 1) MANETs run in a typical wireless environment with time-varying and unreliable physical links, broadcast channels, and dynamic and limited bandwidth, and so forth. Therefore, they can only provide limited capability for differentiated services with strict QoS requirements [1]. 2) It is difficult to implement traditional flow engineering and access control mechanisms because of the mobility, multiple hops and self-organization of MANETs. At present, most research on QoS based on the traditional divided-layer protocol architecture for MANETs focuses on MAC protocols supporting QoS [2], QoS routing protocols [3] and adaptive application-layer protocols with QoS support [4], and so on. It is unavoidable that there will be some redundancy of functions among the different protocol layers in the stack. This increases the complexity of QoS implementation and causes difficulties in the overall improvement of network performance. Therefore, it is not suitable for MANETs with low processing ability. In recent years, cross-layer designs based on a subset of the protocol layers in MANETs have been put forward. [1] proposed a mechanism with QoS guarantees for heterogeneous flows at the MAC layer. [5,6,7,8] did some research on implementing video communication with QoS guarantees through the exchange and cooperation of information among a few layers in MANETs. These can improve QoS in MANET communication to some extent. However, MANETs are much more complex than wired systems and static networks, and improvements in QoS guarantees depend on full cooperation among all layers in the protocol stack.
Therefore, it is difficult for such designs to provide efficient QoS guarantees for communication and overall improvements of network performance in MANETs. To make good use of limited resources and optimize overall performance in MANETs, this paper proposes a novel cross-layer QoS model, CQMM, in which different layers can exchange information fully and unified QoS management and control can be performed. The rest of the paper is organized as follows. CQMM is described in detail in section 2. In section 3, we analyze CQMM by comparison with DQMM. Section 4 concludes the paper.
2 A CROSS-LAYER QOS MODEL FOR MANETS - CQMM
2.1 Architecture of CQMM
In MANETs, present research on QoS is mostly based on the traditional divided-layer protocol architecture, where signaling and algorithms supporting QoS are designed and implemented in different layers respectively, such as MAC protocols supporting QoS in the data link layer [9], routing protocols with QoS support in the network layer [10,11], and so forth. This can be summarized as a divided-layer QoS model for MANETs, DQMM (see Fig. 1). In DQMM, the different layers in the protocol stack are designed and work independently; there are only static interfaces between different layers that are logically neighboring; and each protocol layer has some QoS controls, such as error control in the logic link layer and congestion control in the network layer. On the one hand, DQMM can greatly simplify the design of MANETs and yield protocols with high reliability and extensibility.
On the other hand, DQMM also has some shortcomings: 1) due to the independent design of the different protocol layers, there are some redundant functions among the different protocol layers in the stack; 2) it is difficult to exchange information among different layers that are not logically neighboring, which leads to problems in unified management, QoS control and overall improvement of network performance.
Fig. 1
Therefore, it is necessary to focus more attention on the cooperation among the physical layer, data link layer, network layer and higher layers when attempting to optimize the performance of each layer in MANETs. For this reason, we combine parameters dispersed in different layers and design a novel cross-layer QoS model, CQMM, to improve QoS guarantees and overall network performance. The architecture of CQMM is provided in Fig. 2. From Fig. 2, it can be seen that CQMM keeps the core functions and relative independence of each protocol layer in the stack and allows direct information exchange between two logically neighboring layers to maintain the advantages of the modular architecture. On the basis of these, a core component is added in CQMM, the network status repository (NSR). NSR is the center by which different layers can exchange and share information fully. On the one hand, each protocol layer can read the status information of other protocol layers from NSR to determine its functions and implementation mechanisms. On the other hand, each protocol layer can write its status information to NSR, where it can be provided to other layers in the protocol stack. In CQMM, protocol layers that are logically neighboring can exchange information directly, or indirectly via NSR, and protocol layers that are not logically neighboring can exchange information in a cross-layer way via NSR.
Therefore, information exchange is flexible in CQMM. The various QoS controls in CQMM, such as management and scheduling of network resources, network lifetime, error control, congestion control and performance optimization, are not carried out independently. On the contrary, CQMM is in charge of unified management, and all QoS controls are performed through cooperation among the different protocol layers in the stack. Each QoS control in MANETs is related to all layers in the protocol stack, and is also constrained by all layers in the stack. The results of all QoS operations and management are fed back to the different layers and written back to NSR, where they become the parameters of all kinds of QoS controls in MANETs.
2.2 Protocol design in CQMM
In CQMM, the protocol design aims at full and free information exchange and cooperation among the different protocol layers to avoid possible redundant functions, while maintaining the relative independence of the different layers and the advantages of the modular architecture.
Physical layer: The physical layer is responsible for the modulation, transmission and receiving of data, and is also key to the size, cost and energy consumption of each node in MANETs. In CQMM, the design of the physical layer is to choose the transmission medium, the frequency range and the modulation algorithm with low cost, power and complexity, large channel capacity and so on, according to the cost of implementation, energy constraints, and the capability and QoS requirements of the higher layers.
Data link layer: This layer is a low layer in the protocol stack and can be divided into two sub-layers: the logic link sub-layer and the MAC sub-layer. Compared with the higher layers, the data link layer can sense the network status in MANETs earlier, such as changes in channel quality, network congestion and so on. Therefore, on the one hand, the data link layer can perform basic QoS controls such as error control and management of the communication channel.
On the other hand, the layer can cooperate with the higher layers to establish, choose and maintain routes faster, prevent network congestion earlier, and choose appropriate transport mechanisms and control strategies for the transport layer.
Network layer: The design and implementation of the network layer protocol in CQMM is to establish, choose and maintain appropriate routes by taking into consideration the power, the cache and the reliability of each node in a route, the QoS requirements of services from the higher layers such as bandwidth and delay, the implementation strategies of error control in the logic link sub-layer, and the channel management approach in the MAC sub-layer.
Transport layer: In CQMM, the protocol design of the transport layer needs to be aware of both the functions and implementation mechanisms of the lower layers, such as the error control approach in the data link layer and the means of establishing, choosing and maintaining routes in the network layer, as well as the QoS requirements of the application layer, in order to determine the corresponding transmission strategies. In addition, the transport layer also needs to analyze events from the lower layers, such as route interruptions and changes and network congestion, and then respond properly to avoid sending useless data.
Application layer: There are two different strategies in the design of the application layer: 1) differentiated services: according to the functions provided by the lower layers, applications are classed into different categories with different priority levels; 2) application-aware design:
analyze the specific requirements of different applications, such as bandwidth, delay and delay jitter, and then assign and implement the functions of each layer in the protocol stack according to those requirements.
2.3 QoS Cooperation and Management in CQMM
In CQMM, the core of QoS cooperation and management is that NSR acts as the exchange and sharing center for status information in the protocol stack; through the full exchange and sharing of network status among the different protocol layers, the management and scheduling of network resources and the overall optimization of network performance can be implemented effectively. These comprise the management and scheduling of network resources, cross-layer QoS cooperation, and the overall optimization of network performance.
Management and scheduling of network resources: Network resources include all kinds of resources, such as the cache, energy and queues in each node, the communication channels among nodes, and so forth. In CQMM, the aim of the management and scheduling of network resources is the unified management and scheduling of those resources and the full utilization of limited resources, in order to increase the QoS of all kinds of communication.
QoS cooperation and control: In CQMM, all kinds of QoS controls and cooperation, such as rate adaptation, delay guarantees and congestion control, are not implemented by each layer alone, but are completed through the cooperation of all layers in the protocol stack.
For example, congestion in MANETs can be prevented and controlled earlier through cooperation among different layers, using ACKs from the MAC sub-layer, routing information and the packet loss rate and delay from the network layer, rate-adaptation information from the transport layer, and so on.
Performance optimization: In CQMM, the optimization of network performance aims to establish a network optimization model constrained by all layers in the protocol architecture and to find the "best" ways, according to the model, to improve overall performance in MANETs.
3 ANALYSIS OF CQMM
Present QoS models for MANETs can mainly be classed as QoS models based on the traditional divided-layer architecture (DQMM) and the cross-layer QoS model proposed by this paper (CQMM). The QoS models used by [1, 5-8] are in essence extensions of DQMM. Here, we only compare CQMM with DQMM.
3.1 Information Exchange
The different protocol architectures and principles of CQMM and DQMM lead to great differences in the means, frequency, timing and requirements of information exchange (see Table 1). From Table 1, it can be seen that compared with DQMM, CQMM has some advantages: 1) more flexible information exchange: neighboring layers can exchange information through the interfaces between layers or through NSR, and non-neighboring layers can exchange information through NSR; 2) simpler transformation of information formats: different layers exchange information via NSR, so each layer only needs to deal with the format transformation between itself and NSR; 3) lower requirements: the protocol layers can, at the proper time, read information from other protocol layers temporarily stored in NSR, so layers exchanging information are not required to be synchronized in time; 4) more accurate control: NSR in CQMM can store information from the different layers over a period of time, which is advantageous for mastering the network status and managing the network more accurately.
However, these require higher information-exchange frequencies among the different layers, more processing time at each node, and more communication among them.
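The NSR described above is essentially a shared blackboard through which layers publish and read status without direct coupling. A minimal sketch of this idea follows; the class, method names and status fields are invented for illustration, since the paper does not specify an API:

```python
import time

class NetworkStatusRepository:
    """Shared store through which protocol layers exchange status (blackboard style)."""
    def __init__(self):
        self._store = {}  # (layer, key) -> (value, timestamp)

    def write(self, layer, key, value):
        # Each entry is timestamped so readers can judge freshness,
        # mirroring the "information of some time" kept in NSR.
        self._store[(layer, key)] = (value, time.time())

    def read(self, layer, key, default=None):
        entry = self._store.get((layer, key))
        return entry[0] if entry else default

nsr = NetworkStatusRepository()
# The MAC sub-layer publishes channel state; the transport layer,
# which is not its logical neighbor, reads it via NSR.
nsr.write("mac", "channel_quality", 0.42)
nsr.write("network", "route_hops", 5)

if nsr.read("mac", "channel_quality") < 0.5:
    transport_strategy = "conservative"   # e.g. lower the sending rate
else:
    transport_strategy = "aggressive"
print(transport_strategy)
```

The design point the model argues for is visible even at this scale: the transport layer never calls into the MAC layer directly, so adding another reader or writer requires no new pairwise interface.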

Introduction to Particle Swarm Optimization (PSO): Chinese-English Translation (Word Version)


Introduction to Particle Swarm Optimization (PSO)
Today, with spectrum resources becoming increasingly scarce, it is very hard to increase system capacity by widening the spectrum; at the same time, further capacity gains in the time, frequency or code domains have also become very difficult.

In this situation, attention has turned to the spatial domain, in the hope of finding new resources there.

As the demands on wireless mobile communication keep growing, especially the urgent need for high-speed multimedia transmission, techniques that can increase system capacity have begun to receive particular attention.

Since the 1990s, research on swarm intelligence has gradually emerged.

Particle swarm optimization (PSO), proposed by Eberhart and Kennedy in 1995, is a simple and effective optimization algorithm that has quickly found wide application in many fields.

The idea of PSO comes from the collective intelligence that bird flocks exhibit while foraging.

A single natural organism is usually not intelligent, but the population as a whole shows the ability to handle complex problems; this is swarm intelligence.

All kinds of organisms gather into populations with their own inherent behavioral rules; humans, as higher organisms, have studied and grasped these rules, and have designed various optimization algorithms that imitate them and applied them to all kinds of problems.

Similar examples are the genetic algorithm, based on the characteristics of biological reproduction, and the ant colony algorithm, produced by simulating the food-gathering process of ant colonies.

PSO is now widely used in function optimization, neural network training, fuzzy system control and the other application areas covered by genetic algorithms.

Compared with other optimization algorithms, PSO is simple to implement, and it does not have many parameters that need tuning.

However, it also suffers from premature convergence and a tendency to converge to local extrema, especially on high-dimensional, complex problems such as array antenna pattern synthesis.

Many improved algorithms have been proposed to raise the performance of PSO.

The inertia weight and the constriction factor are currently the most widely used improvements to the basic particle swarm algorithm; they can improve optimization performance but converge slowly.

The literature has compared the application of the particle swarm algorithm and the genetic algorithm to pattern synthesis; the particle swarm algorithm requires less computation and is easier to implement than the genetic algorithm, but both the basic PSO algorithm and the genetic algorithm converge slowly, or often stagnate too long at some local extremum and find it hard to escape.

Particle swarm optimization (PSO: Particle Swarm Optimization) is an evolutionary computation technique and an effective global optimization technique, invented by Dr. Eberhart and Dr. Kennedy.
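The update rule sketched above — each particle is pulled toward its own best position and the swarm's best position, damped by an inertia weight — fits in a few lines. The version below is a standard inertia-weight PSO; the objective (the 2-D sphere function), swarm size and coefficients are arbitrary illustration choices, not values from the text:

```python
import random

random.seed(0)

def sphere(x):
    """Test objective: global minimum 0 at the origin."""
    return sum(v * v for v in x)

DIM, N, ITERS = 2, 30, 200
W, C1, C2 = 0.7, 1.5, 1.5   # inertia weight, cognitive and social coefficients

pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(N)]
vel = [[0.0] * DIM for _ in range(N)]
pbest = [p[:] for p in pos]                 # each particle's personal best
gbest = min(pbest, key=sphere)[:]           # swarm's global best

for _ in range(ITERS):
    for i in range(N):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])
                         + C2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if sphere(pos[i]) < sphere(pbest[i]):
            pbest[i] = pos[i][:]
            if sphere(pbest[i]) < sphere(gbest):
                gbest = pbest[i][:]

print(sphere(gbest))
```

The inertia weight W is exactly the improvement mentioned in the text: with W near 1 the swarm keeps exploring; with small W it converges faster but more greedily, which is one source of the premature-convergence behavior discussed above.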

English Literature and Translation (Computer Science)


NET-BASED TASK MANAGEMENT SYSTEM
Hector Garcia-Molina, Jeffrey D. Ullman, Jennifer Wisdom
ABSTRACT
In a net-based collaborative design environment, design resources become more and more varied and complex. Besides common information management systems, design resources can be organized in connection with design activities. A set of activities and resources linked by logic relations can form a task. A task has at least one objective and can be broken down into smaller ones. So a design project can be separated into many subtasks forming a hierarchical structure. The Task Management System (TMS) is designed to break down these tasks and assign certain resources to its task nodes. As a result of decomposition, all design resources and activities can be managed via this system.
KEY WORDS: Collaborative Design, Task Management System (TMS), Task Decomposition, Information Management System
1 Introduction
Along with the rapidly growing demand for advanced design methods, more and more design tools have appeared to support new design methods and forms. Design in a web environment with multiple partners involved requires a more powerful and efficient management system. Design partners can be located anywhere over the net with their own organizations. They could be mutually independent experts or teams of tens of employees. This article discusses a task management system (TMS) which manages design activities and resources by breaking down design objectives and re-organizing design resources in connection with the activities. Compared with common information management systems (IMS) like product data management systems and document management systems, TMS can manage the whole design process. It has two tiers, which make it much more flexible in structure. The lower tier consists of traditional common IMSs, and the upper one fulfills logic activity management by controlling a tree-like structure, allocating design resources and making decisions about how to carry out a design project.
Its functioning paradigm varies across projects depending on the project's scale and purpose. As a result of this structure, TMS can separate its data model from its logic model. This can bring about structure optimization and efficiency improvement, especially in a large-scale project.
2 Task Management in a Net-Based Collaborative Design Environment
2.1 Evolution of the Design Environment
During a net-based collaborative design process, designers transform their working environment from a single PC desktop to a LAN, and even extend it to a WAN. Each design partner can be a single expert or a combination of many teams covering several subjects, even if they are geographically far away from each other. In the net-based collaborative design environment, people at every terminal of the net can exchange their information interactively with each other and send data to authorized roles via their design tools. The Co Design Space is such an environment, which provides a set of these tools to help design partners communicate and obtain design information. Co Design Space aims at improving the efficiency of collaborative work, making enterprises more sensitive to markets and optimizing the configuration of resources.
2.2 Management of Resources and Activities in a Net-Based Collaborative Environment
The expansion of the design environment has also caused a new problem: how to organize the resources and design activities in that environment. As the number of design partners increases, resources also increase in direct proportion, but the relations between resources increase in square ratio. Organizing these resources and their relations needs an integrated management system which can recognize them and provide them to designers when they are needed. One solution is to use a special information management system (IMS). An IMS can provide databases, file systems and in/out interfaces to manage a given resource.
For example, there are several IMS tools in Co Design Space, such as the Product Data Management System, the Document Management System and so on. These systems can provide the special information that design users want. But the structure of design activities is much more complicated than these IMSs can manage, because even a simple design project may involve different design resources such as documents, drafts and equipment. Besides product data and documents, design activities also need the support of organizations in design processes. This article puts forward a new design system which attempts to integrate different resources into the related design activities: the task management system (TMS).
3 Task Breakdown Model
3.1 Basis of Task Breakdown
When people set out to accomplish a project, they usually separate it into a sequence of tasks and finish them one by one. Each design project can be regarded as an aggregate of activities, roles and data. Here we define a task as a set of activities and resources that has at least one objective. Because large tasks can be separated into small ones, if we separate a project target into several lower-level objectives, we say that the project is broken down into subtasks and each objective maps to a subtask. Obviously, if every subtask is accomplished, the project is surely finished. So TMS integrates design activities and resources by planning these tasks. Net-based collaborative design mostly aims at product development. Project managers (PM) assign subtasks to designers or design teams who may be located in other cities. The designers and teams execute their own tasks under the constraints which are defined by the PM and negotiated with each other via the collaborative design environment. So the designers and teams are independent collaborative partners and have incompact coupling relationships. They are driven together only by their design tasks.
After the PM has finished decomposing the project, each designer or team leader who has been assigned a subtask becomes a lower-level PM of his own task. And he can do the same thing as his PM did to him: re-breaking down and re-assigning tasks. So we put forward two rules for task breakdown in a net-based environment: incompact coupling and object-driven. Incompact coupling means the less relationship between two tasks, the better. When two subtasks are coupled too tightly, the requirement for communication between their designers will increase a lot. Too much communication will not only waste time and reduce efficiency, but also introduce errors. In this situation it becomes much more difficult than usual to manage the project process. On the other hand, every task has its own objective. From the viewpoint of the PM of a superior task, each subtask can be a black box, and how these subtasks are executed is unknown. The PM is concerned only with the results and constraints of these subtasks, and may never be concerned with what happens inside them.
3.2 Task Breakdown Method
According to the above basis, a project can be separated into several subtasks. And when this separating continues, it will finally be decomposed into a task tree. Except for the root of the tree, which is the project, all leaves and branches are subtasks. Since a design project can be separated into a task tree, all its resources can be added to it depending on their relationships. For example, a Small-Sized-Satellite Design (3SD) project can be broken down into two design objectives: Satellite-Hardware-Design (SHD) and Satellite-Software-Exploit (SSE). And it also has two teams, design team A and design team B, which we regard as design resources. When A is assigned to SSE and B to SHD, we break down the project as shown in Fig. 1. Other resources in a project are managed in a similar way. So when we define a collaborative design project's task model, we should first state the project's targets.
These targets include functional goals, performance goals, quality goals and so on. Then we can confirm how to execute this project. Next we can go on to break it down. The project can be separated into two or more subtasks, since there are at least two partners in a collaborative project. Alternatively, for some more complex projects, we can separate the project into stepwise tasks which have time-sequence relationships, and then break down the stepwise tasks according to their phase-to-phase goals. There is also another difficulty in executing a task breakdown. When a task is broken into several subtasks, it is not merely a simple summation of those subtasks. In most cases the subtasks have more complex relations. To solve this problem we use constraints. There are time sequence constraints (TSC) and logic constraints (LC). A time sequence constraint defines the time relationships among subtasks. The TSC has four different types: FF, FS, SF and SS. F means finish and S means start. If we say the constraint from Ta to Tb is FS with a lag of four days, it means Tb should start no later than four days after Ta is finished. The logic constraint is much more complicated. It defines logic relationships among multiple tasks. Here is an example: "Task TA is separated into three subtasks, Ta, Tb and Tc. But there are two more rules. Tb and Tc cannot be executed until Ta is finished. Tb and Tc cannot both be executed; that means if Tb is executed, Tc should not be executed, and vice versa. This depends on the result of Ta." So we say Tb and Tc have a logic constraint. After finishing breaking down the tasks, we can get a task tree as Fig. 2 illustrates.
4 TMS Realization
4.1 TMS Structure
According to our discussion of the task tree model and the task breakdown basis, we can develop a Task Management System (TMS) based on Co Design Space using the Java language, JSP technology and Microsoft SQL 2000. The task management system's structure is shown in Fig.
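The FS-type time sequence constraint described above can be checked mechanically. A small sketch follows; the class and field names are invented for illustration, since the paper gives no code:

```python
class Task:
    """A task-tree node: at least one objective, optional subtasks."""
    def __init__(self, name, duration):
        self.name = name
        self.duration = duration   # in days
        self.start = 0
        self.subtasks = []         # children in the task tree

    @property
    def finish(self):
        return self.start + self.duration

def satisfies_fs(pred, succ, lag):
    """FS constraint: succ starts after pred finishes, and no later than `lag` days after."""
    return pred.finish <= succ.start <= pred.finish + lag

ta = Task("Ta", duration=10)
tb = Task("Tb", duration=5)
tb.start = ta.finish + 3          # Tb starts 3 days after Ta finishes
print(satisfies_fs(ta, tb, lag=4))
```

The other three types (FF, SF, SS) differ only in which endpoints of the two tasks the inequality compares; a scheduler would walk the task tree and check each constraint the same way.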
3. TMS has four main modules, namely Task Breakdown, Role Management, Statistics and Query, and Data Integration. The Task Breakdown module helps users to work out the task tree. The Role Management module performs the authentication and authorization of access control. The Statistics and Query module is an extra tool for users to find more information about their tasks. The last module, Data Integration, provides the in/out interface between TMS and its peripheral environment.
4.2 Key Points in System Realization
4.2.1 Integration with Co Design Space
Co Design Space is an integrated information management system which stores, shares and processes design data and provides a series of tools to support users. These tools can share all information in the database because they have a universal Data Model, which is defined in an XML (eXtensible Markup Language) file and has a hierarchical structure. Based on this XML structure, the TMS data model definition is organized as follows.
<?xml version="1.0" encoding="UTF-8"?>
<!--comment: Common Resource Definitions Above. The Following are Task Design-->
<!ELEMENT ProductProcessResource (Prcses?, History?, AsBuiltProduct*, ItemsObj?, Changes?, ManufacturerParts?, SupplierParts?, AttachmentsObj?, Contacts?, PartLibrary?, AdditionalAttributes*)>
<!ELEMENT Prcses (Prcs+)>
<!ELEMENT Prcs (Prcses, PrcsNotes?, PrcsArc*, Contacts?, AdditionalAttributes*, Attachments?)>
<!ELEMENT PrcsArc EMPTY>
<!ELEMENT PrcsNotes (PrcsNote*)>
<!ELEMENT PrcsNote EMPTY>
Notes: The element "Prcs" is a task node object, and "Prcses" is a task set object which contains subtask objects and belongs to a higher-level task object. One task object can have no more than one "Prcses" object. According to this definition, "Prcs" objects are organized in a tree-formation process. The other objects are resources, such as the task link object ("PrcsArc"), task notes ("PrcsNotes"), and task documents ("Attachments"). These resources are shared in the Co Design database.
Source: Computer Intelligence Research [J], Vol. 47, 2007: 647-703
Net-Based Task Management System
Abstract: In a net-based collaborative design environment, design resources become more and more varied and complex.

BP Neural Network References (96 Recommended)


The BP (back propagation) neural network is a concept proposed in 1986 by a group of scientists led by Rumelhart and McClelland. It is a multilayer feedforward neural network trained with the error back-propagation algorithm, and is currently the most widely used type of neural network.
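The error back-propagation training loop that defines a BP network can be written compactly. The sketch below uses one hidden layer, sigmoid activations and plain gradient descent; the target function (logical AND), layer sizes and learning rate are arbitrary illustration choices:

```python
import math, random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Tiny 2-2-1 network learning logical AND.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]  # hidden weights (+bias)
w_o = [random.uniform(-1, 1) for _ in range(3)]                      # output weights (+bias)
lr = 0.5

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    o = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    return h, o

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

initial_loss = loss()
for _ in range(2000):
    for x, t in data:
        h, o = forward(x)
        # Backward pass: the output error propagates back to the hidden layer.
        delta_o = (o - t) * o * (1 - o)
        delta_h = [delta_o * w_o[i] * h[i] * (1 - h[i]) for i in range(2)]
        for i in range(2):
            w_o[i] -= lr * delta_o * h[i]
        w_o[2] -= lr * delta_o
        for i in range(2):
            w_h[i][0] -= lr * delta_h[i] * x[0]
            w_h[i][1] -= lr * delta_h[i] * x[1]
            w_h[i][2] -= lr * delta_h[i]
final_loss = loss()
print(initial_loss, final_loss)
```

The "error back-propagation" in the name is precisely the `delta_o` → `delta_h` step: each layer's error signal is computed from the layer above before the weights are adjusted.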

The following is a curated list of 96 references on BP neural networks, which we hope will be helpful.

bp神经网络参考文献一: [1]唐睿旋,晏鄂川,唐薇. 基于粗糙集和BP神经网络的滑坡易发性评价[J]. 煤田地质与勘探,2017,45(06):129-138. [2]郑贵洲,乐校冬,王红平,花卫华. 基于WorldView-02高分影像的BP和RBF神经网络遥感水深反演[J]. 地球科学,2017,42(12):2345-2353. [3]朱聪聪,朱国维,张庆朝. BP神经网络在高密度电法反演中的改进与应用[J]. 煤炭技术,2017,36(12):90-92. [4]孙家文,黄杰,于永海,尹晶,王浩然. BP神经网络平衡岬湾岸线形态模型及其应用研究[J]. 海洋环境科学,2018,37(01):143-150. [5]李嘉康,李其杰,赵颖,廖洪林. 基于CEEMD-BP神经网络的海温异常预测研究[J]. 数学的实践与认识,2017,47(24):163-171. [6]冯鑫伟,黄领梅,沈冰. BP神经网络组合模型在次洪量预测中的应用[J]. 水土保持通报,2017,37(06):173-177. [7]周德红,冯豪,程乐棋,李文. 遗传算法优化的BP神经网络在地震死亡人数评估中的应用[J]. 安全与环境学报,2017,17(06):2267-2272. [8]王力,周志杰,赵福均. 基于BP神经网络和证据理论的超声检测缺陷识别[J]. 电光与控制,2018,25(01):65-69. [9]王小飞,汪建光,袁于评. BP神经网络在遥感影像波段拟合中的应用[J]. 现代测绘,2018,41(01):44-46. [10]林志东,陈兴伟,张仓荣. 基于灰色关联与BP神经网络的台风非台风暴雨洪水分类模拟[J]. 山地学报,2017,35(06):882-889. [11]宋建国,李赋真,徐维秀,李哲. 改进的神经网络级联相关算法及其在初至拾取中的应用[J]. 石油地球物理勘探,2018,53(01):8-16+4. [12]褚继花. 遗传算法优化BP神经网络水文预报过程模型研究[J]. 水利规划与设计,2018(01):65-66+118. [13]李强,杨天邦,涂公平. GA-BP神经网络模型应用于岩芯扫描仪测定海洋沉积物中多种组分的半定量分析[J]. 分析仪器,2018(01):75-79. [14]曹文洁,肖长来,梁秀娟,韩良跃,胡冰. RBF神经网络在地下水动态预测中的应用[J]. 水利水电技术,2018,49(02):43-48. [15]黄海燕,王叶鹏. 边坡稳定因素的BP网络分析[J]. 安徽建筑,2018,24(01):285-286. [16]王春香,张勇,梁亮,王岩辉. 基于GA-BP神经网络的三维点云孔洞修补研究[J]. 制造技术与机床,2018(03):76-79. [17]孙菊秋,刘向楠. 人工神经网络模型在地下水水位预测中的应用[J]. 陕西水利,2018(02):189-190. [18]潘微,邢建勇,万莉颖. 一种基于BP神经网络方法的HY-2A散射计反演风场偏差订正方案[J]. 海洋预报,2018,35(02):8-18. [19]王鹤,刘梦琳,席振铢,彭星亮,何航. 基于遗传神经网络的大地电磁反演[J]. 地球物理学报,2018,61(04):1563-1575. [20]蔡润,武震,云欢,郭鹏. 基于BP和SOM神经网络相结合的地震预测研究[J]. 四川大学学报(自然科学版),2018,55(02):307-315. [21]侯奇,刘静,管骁. 基于神经网络的微生物生长预测模型[J]. 食品与机械,2018,34(02):120-123. [22]张志勰,虞旦. BP和RBF神经网络在函数逼近上的对比与研究[J]. 工业控制计算机,2018,31(05):119-120. [23]赵学伟,王萍,李新举,刘宁. 基于BP神经网络GPR反演滨海盐渍土含盐量模型构建[J]. 山东农业科学,2018,50(05):152-155. [24]丁书敏,范宏. 基于优化BP神经网络的P2P投资组合定量分析[J]. 中国集体经济,2018(20):95-97. [25]曲娜,杨万昌,刘臻. 基于BP神经网络的分层越浪式波能发电装置越浪量估算研究[J]. 中国水运(下半月),2018,18(07):71-72. bp神经网络参考文献二: [26]刘振华,范宏运,朱宇泽,柳尚. 基于BP神经网络的溶洞规模预测及应用[J]. 中国岩溶,2018,37(01):139-145. [27]李敬明,倪志伟,朱旭辉,许莹. 基于佳点萤火虫算法与BP神经网络并行集成学习的旱情预测模型[J]. 系统工程理论与实践,2018,38(05):1343-1353. [28]赵亚飞,韦广梅. 
基于BP神经网络的有限元应力修匀的研究[J]. 计算机工程与应用,2018,54(13):67-72. [29]熊建宁. 基于BP神经网络算法下的边坡安全预测[J]. 江西水利科技,2018,44(03):176-179. [30]郑威,杨英,惠力,鲁成杰,赵彬,杨立. 基于BP神经网络模型的ADCP倾斜条件下的修正算法研究[J]. 山东科学,2018,31(03):1-7. [31]戴妙林,屈佳乐,刘晓青,李强伟,马永志. 基于GA-BP算法的岩质边坡稳定性和加固效应预测模型及其应用研究[J]. 水利水电技术,2018,49(05):165-171. [32]张丽娟,张文勇. 基于Heston模型和遗传算法优化的混合神经网络期权定价研究[J]. 管理工程学报,2018,32(03):142-149. [33]王珲玮,徐彦,张非非. 基于遗传算法优化BP神经网络的弹体结构载荷识别[J]. 工业控制计算机,2018,31(06):74-76. [34]陈阳,胡伍生,严宇翔,龙凤阳,张良. 基于神经网络模型误差补偿技术的对流层延迟模型研究[J]. 大地测量与地球动力学,2018,38(06):577-580+586. [35]姜宝良,李林晓,李腾超. 基于BP神经网络的新乡百泉逐月泉水流量动态分析[J]. 矿产勘查,2018,9(03):516-521. [36]单鹏,冒晓莉,张加宏,马涛,陈永. 基于PSO-BP神经网络的探空湿度太阳辐射误差修正[J]. 科学技术与工程,2018,18(19):1-8. [37]李志新,赖志琴. 年径流变化的BP神经网络预报模型研究[J]. 水电能源科学,2018,36(07):10-12. [38]肖恭伟,欧吉坤,刘国林,张红星. 基于改进的BP神经网络构建区域精密对流层延迟模型[J]. 地球物理学报,2018,61(08):3139-3148. [39]地力夏提·艾木热拉,丁建丽,穆艾塔尔·赛地,米热古力·艾尼瓦尔,邹杰. 基于T-S 模糊神经网络模型的干旱区土壤盐分预测研究[J]. 西南农业学报,2018,31(07):1418-1424. [40]赖金燕,黄建儒. 水文时间序列的小波神经网络工具箱预测[J]. 科技视界,2018(16):164-165+167. [41]王占武. 神经网络在边坡监测中的应用研究[J]. 计算机产品与流通,2018(04):44+111. [42]范晓东,邱波,刘园园,魏诗雅,段福庆. 一种基于遗传优化的BP神经网络的测光红移估计算法[J]. 光谱学与光谱分析,2018,38(08):2374-2378. [43]邓才林,周芳翊,丁健. BP神经网络在县域GPS高程拟合中的应用[J]. 工程勘察,2018,46(08):51-56. [44]杜言霞,于子敏,温继昌,舒毅,吴勇凯,谢启杰. 基于神经网络技术的天气雷达超折射回波识别[J]. 气象科技,2018,46(04):644-650. [45]卢志宏,刘辛瑶,常书娟,杨胜利,赵薇薇,杨勇,刘爱军. 基于BP神经网络的草原矿区表层土壤N/P高光谱反演模型[J]. 草业科学,2018,35(09):2127-2136. [46]刘强,冯忠伦,刘红利,王维,林洪孝,王刚. 结合RVA法建立天然径流量还原计算的BP神经网络模型[J]. 中国农村水利水电,2018(10):54-59. [47]陈有利,朱宪春,胡波,顾小丽. 基于BP神经网络的宁波市台风灾情预估模型研究[J]. 大气科学学报,2018,41(05):668-675. [48]刘毅聪,刘祚秋. 基于人工神经网络及优化方法的岩体力学参数反分析法综述[J]. 广东土木与建筑,2018,25(09):31-35. [49]林焰,杨建辉. 考虑投资者情绪的GARCH-改进神经网络期权定价模型[J]. 系统管理学报,2018,27(05):863-871+880. [50]朱智慧,曹庆,徐杰. 神经网络方法在上海沿海海浪预报中的应用[J]. 海洋预报,2018,35(05):25-33. bp神经网络参考文献三: [51]张彬. 融合遗传算法和BP神经网络对基坑地表沉降预测的应用研究[J]. 北京测绘,2018,32(10):1152-1155. [52]陈文雄,朱咏,陈学林. 基于LM-BP神经网络的黑河龙电渠流量推测研究[J]. 地下水,2018,40(05):115-117+130. [53]张春露,白艳萍. ARIMA时间序列模型和BP神经网络组合预测在铁路客座率中的应用[J]. 数学的实践与认识,2018,48(21):105-113. [54]王帅,黄海鸿,韩刚,刘志峰. 基于PCA与GA-BP神经网络的磁记忆信号定量评价[J]. 

Appendix 1: Automatic Design of Fuzzy Systems Based on Neural Networks and Genetic Algorithms

Abstract: This paper describes the design of fuzzy systems based on neural networks and genetic algorithms, with the aim of shortening development time and improving the performance of the resulting systems.

We present a method that uses neural networks to realize multidimensional nonlinear membership functions and to tune membership-function parameters.

We also describe a design platform, based on genetic algorithms, that integrates and automates three stages of fuzzy-system design.

1 Introduction

Fuzzy systems are usually designed by hand. This causes two problems: first, because manual design is time-consuming, development costs are high; second, there is no guarantee that an optimal solution will be found.

Two independent approaches can shorten development time and improve the performance of fuzzy systems: support tools and automatic design methods. The former includes development environments that assist fuzzy-system design; many such environments are already commercially available. The latter refers to automatic design techniques. Although automatic design cannot guarantee an optimal solution, it is preferable to manual techniques because the design is guided toward a solution that is optimal with respect to some criterion.

There are three main design decisions in fuzzy control system design: (1) determining the number of fuzzy rules; (2) determining the shapes of the membership functions; and (3) determining the consequent parameters. Two further decisions must also be made: (4) determining the number of input variables, and (5) determining the reasoning method.

Decisions (1) and (2) together determine how the input space is covered, and they are highly interdependent. Decision (3) determines the coefficients of the linear equations in the TSK (Takagi-Sugeno-Kang) model [1], or the consequent membership functions in the Mamdani model [2]. Decision (4) amounts to finding the minimal set of input variables relevant to computing the desired decision or control values; techniques such as backward elimination and information criteria are often used for this decision. Decision (5) amounts to choosing which fuzzy operators and which defuzzification method to use. Although several operators and fuzzy inference methods have been proposed, there is still no criterion for choosing among them; [5] showed that an inference method that changes dynamically according to the inference environment outperforms any fixed inference method in both performance and fault tolerance.

Neural networks (most commonly gradient-based) and genetic algorithms are used for the automatic design of fuzzy systems. Neural-network-based methods are mainly used to design fuzzy membership functions, and there are two main approaches.

(a) Direct design of multidimensional membership functions: the number of rules is first determined by clustering the given data, and the shapes of the membership functions are then determined by training on the membership grade of each cluster. Details are given in Section 2.

(b) Indirect design of multidimensional membership functions: multidimensional membership functions are constructed by combining one-dimensional membership functions, and gradient techniques are used to tune them so as to reduce the total error between the desired and actual outputs of the fuzzy system.

The advantage of the first approach is that it generates nonlinear multidimensional membership functions directly; there is no need to build them from one-dimensional membership functions. The advantage of the second approach is that tuning is driven by monitoring the final performance of the fuzzy system. Both approaches are described in Section 2.

Many genetic-algorithm-based methods are essentially the same as approach (b): the shapes of one-dimensional membership functions are tuned automatically by a genetic algorithm. Most of these methods consider only one or two of the design decisions mentioned above. In Section 3, we describe a method that considers three design decisions simultaneously.

2 Neural-Network Approaches

2.1 Direct fuzzy partition of a multidimensional input space

This approach uses a neural network to realize multidimensional nonlinear membership functions and is called NN-driven fuzzy reasoning. Its advantage is that it can generate nonlinear multidimensional membership functions directly.

In conventional fuzzy systems, the one-dimensional membership functions of the antecedent part are designed independently and then combined, realizing multidimensional membership functions only indirectly. In this sense, NN-driven fuzzy reasoning is a more general form of the conventional fuzzy system, with the combination operation absorbed into the neural network.

The conventional indirect design method runs into trouble when the input variables are not mutually independent. For example, consider an air-conditioning controller based on a fuzzy system that takes temperature and humidity as inputs. In the conventional design method, the membership functions for temperature and humidity are designed independently, and the resulting fuzzy partition of the input space is the grid shown in Fig. 1(a). However, when the input variables interact, as temperature and humidity do, a fuzzy partition like that of Fig. 1(b) is more appropriate. Such a nonlinear partition is hard to construct from one-dimensional membership functions. Since NN-driven fuzzy reasoning constructs multidimensional nonlinear membership functions directly, it can realize a nonlinear partition like Fig. 1(b).

The design of NN-driven fuzzy reasoning has three steps: clustering the given training data, partitioning the input space with a neural network, and designing the consequent part of each partitioned region.

The first step is to cluster the training data and thereby determine the number of rules. Before this step, irrelevant input variables are eliminated using information criteria or backward elimination. Backward elimination removes one of the n input variables and trains a neural network on the remaining n-1 inputs; the performance of the n-input network is then compared with that of the (n-1)-input network. If the (n-1)-input network performs as well as or better than the n-input network, the removed input variable is judged irrelevant.
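The elimination loop just described can be sketched as follows. This is a hedged illustration: the `score_fn` callback, assumed to retrain a network on the given subset of inputs and return a validation score, and the `tolerance` threshold are names of our own, not from the paper.

```python
def backward_eliminate(score_fn, inputs, tolerance=0.0):
    """Greedily drop input variables whose removal does not hurt the score.

    score_fn(subset) -> validation score of a network retrained on `subset`
    (higher is better). Both the callback and `tolerance` are illustrative.
    """
    selected = list(inputs)
    dropped_one = True
    while dropped_one and len(selected) > 1:
        dropped_one = False
        base = score_fn(tuple(selected))
        for var in list(selected):
            trial = [v for v in selected if v != var]
            # The (n-1)-input network matches the n-input one: var is irrelevant.
            if score_fn(tuple(trial)) >= base - tolerance:
                selected = trial
                dropped_one = True
                break
    return selected
```

In practice `score_fn` would train and evaluate the neural network described in the text; for illustration, any callable that scores an input subset works.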

The data are then clustered to obtain their distribution, and the number of clusters becomes the number of rules.

The second step is to determine the boundaries of the clusters obtained in step 1, that is, to partition the input space and determine the multidimensional membership functions of the antecedents. The supervision data are the cluster membership grades of the input data obtained in step 1.

First, a neural network with n inputs and c outputs is prepared, where n is the number of input variables and c is the number of clusters obtained in step 1. The training data for this network (the NN in Fig. 2) are generated from the cluster information provided by step 1. In general, each input data point is assigned to one of the clusters, and this cluster assignment is combined with the input vector to form a training pattern. For example, with four clusters and an input vector belonging to cluster 2, the supervised output pattern would be (0, 1, 0, 0). In some cases the user may intervene and construct the supervised part by hand, if he or she believes an input data point should be clustered differently; for instance, if the user considers a data point to belong equally to two of the four classes, an appropriate supervised output pattern might be (0.5, 0.5, 0, 0).

After training on these data, the neural network computes, for any given input vector, the degree to which that vector belongs to each cluster. We can therefore regard the network as having learned, through training, the characteristic membership functions of all the rules: it generates an appropriate membership grade for an arbitrary input vector. A fuzzy system that uses such a neural network as a membership-function generator is NN-driven fuzzy reasoning.

The third step is the design of the consequent part. Since we know which cluster each input data point belongs to, we can train the consequent part on the input data and the desired outputs. Neural networks can be used here, as in [3, 4], but other representations, such as mathematical equations or fuzzy variables, can be used instead. The essential feature of this model is the fuzzy partition of the input space by a neural network.

Fig. 2 shows an example of an NN-driven fuzzy reasoning system. It is a model whose output is a single value computed by a neural network or a TSK model; the multiplications and additions in the figure compute a weighted average. If the consequent part outputs fuzzy values, appropriate t-conorm and/or defuzzification operations should be used.
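The membership-generating network of step 2 can be sketched as follows. This is a deliberately minimal stand-in: a single linear softmax layer rather than a full multilayer network, trained on the (0, 1, 0, 0)-style cluster patterns described above, and the toy data in the test are invented.

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def train_membership_net(X, cluster_of, n_clusters, lr=0.5, epochs=200):
    """Train an n-input, c-output net on cluster labels; the returned
    function maps an input vector to its membership grade in each cluster
    (outputs sum to 1). A linear softmax layer stands in for the MLP."""
    n = len(X[0])
    W = [[0.0] * n for _ in range(n_clusters)]
    b = [0.0] * n_clusters

    def degrees(x):
        return softmax([sum(w * xi for w, xi in zip(W[k], x)) + b[k]
                        for k in range(n_clusters)])

    for _ in range(epochs):
        for x, y in zip(X, cluster_of):
            p = degrees(x)
            for k in range(n_clusters):
                g = p[k] - (1.0 if k == y else 0.0)  # cross-entropy gradient
                for j in range(n):
                    W[k][j] -= lr * g * x[j]
                b[k] -= lr * g
    return degrees
```

For two well-separated 2-D clusters, the returned `degrees` function behaves like a pair of multidimensional membership functions: each input vector receives a grade per cluster, and the grades sum to one.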

Fig. 1. Fuzzy partitions: (a) conventional; (b) desired.

Fig. 2. Example structure of NN-driven fuzzy reasoning.

2.2 Tuning the parameters of a fuzzy system

This approach tunes the parameters that define the shapes of the membership functions so as to reduce the error between the fuzzy system's output and the supervision data. Two kinds of methods are used to modify these parameters: gradient-based methods and genetic algorithms. Genetic-algorithm methods are described in the next chapter; gradient-based methods are explained in this section.

The procedure of the gradient-based methods is: (1) decide which parameters determine the shapes of the membership functions; (2) tune those parameters by a gradient method, usually steepest descent, to reduce the error between the actual and desired outputs of the fuzzy system. The center position and the width of a membership function are typically used as its shape-defining parameters. Ichihashi et al. [6] and Nomura et al. [7, 8], Horikawa et al. [9][10], Ichihashi et al. [11] and Wang et al. [12], and Jang [13][14] have used triangular, combined sigmoidal, Gaussian, and bell-shaped membership functions, respectively, and tuned the membership-function parameters by steepest descent.

Fig. 3. A neural network tuning the parameters of a fuzzy system.

Fig. 4. A neural network that tunes the fuzzy system.

Fig. 3 shows this method, which is isomorphic to Fig. 4. In the figures, u_ij denotes the parameters of the membership function of input x_j in the i-th rule; it actually stands for a vector of parameters describing the shape of that membership function. In other words, this approach treats the fuzzy system as a neural network, with the fuzzy membership functions and rules implemented by nodes and weights. Any network learning algorithm, such as backpropagation, can be used to design this structure.

3 Genetic-Algorithm Approaches

3.1 Genetic algorithms and fuzzy control

Genetic algorithms are biologically inspired optimization techniques that operate on binary string representations and apply reproduction, crossover, and mutation. The right to reproduce offspring is granted according to a fitness value supplied by the application. Genetic algorithms are attractive because they do not require derivatives to exist, their parallel search is robust, and they can avoid becoming trapped in local minima.

Several papers have proposed automatic fuzzy-system design methods based on genetic algorithms. Much of this work has concentrated on tuning the fuzzy membership functions [17]-[25]. Other methods use genetic algorithms to determine the number of fuzzy rules [18, 26]. In [26], a set of rules is formulated by experts, and a genetic algorithm finds the best combination of them. In [18], Karr developed a method for determining both the fuzzy membership functions and the number of fuzzy rules: the method first uses a genetic algorithm to determine the number of rules from a predefined rule base, and after this stage uses a genetic algorithm to tune the membership functions.

Although the systems designed by these methods perform better than hand-designed ones, they may still be suboptimal, because each handles only one or two of the three major design stages at a time. Since these design stages may not be independent, it is important to consider them simultaneously in order to find a globally optimal solution. In the next section, we present an automatic design method that integrates the three major design stages.
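The GA machinery these methods share can be sketched minimally as below. This is a hedged illustration with binary strings, tournament selection, one-point crossover, bit-flip mutation, and elitism; the population size, rates, and the bit-counting usage are arbitrary choices of ours, not taken from the cited papers.

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=30, generations=60,
                      p_cross=0.9, p_mut=0.02, seed=0):
    """Minimal binary GA: tournament selection, one-point crossover,
    bit-flip mutation, and elitism (the best string always survives)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        def pick():
            a, b = rng.sample(pop, 2)        # tournament of size 2
            return a if fitness(a) >= fitness(b) else b
        nxt = [best[:]]                       # elitism
        while len(nxt) < pop_size:
            child = pick()[:]
            if rng.random() < p_cross:        # one-point crossover
                mate = pick()
                cut = rng.randrange(1, n_bits)
                child = child[:cut] + mate[cut:]
            for i in range(n_bits):           # bit-flip mutation
                if rng.random() < p_mut:
                    child[i] ^= 1
            nxt.append(child)
        pop = nxt
        best = max(pop, key=fitness)
    return best
```

With `fitness=sum` (maximize the number of 1-bits), the population converges toward the all-ones string; in the fuzzy-system setting, the bit string would instead encode membership-function and consequent parameters and the fitness would be the controller's performance.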

3.2 Integrated design of fuzzy systems using genetic algorithms

This section presents an automatic fuzzy-system design method based on genetic algorithms that integrates the three major design stages: the shapes of the membership functions, the number of fuzzy rules, and the rule consequents are determined at the same time [27].

When applying a genetic algorithm to a problem, there are two major steps: (a) selecting an appropriate genetic representation, and (b) designing an evaluation function to rank the population. In the following paragraphs we discuss our fuzzy-system representation and its genetic coding; an evaluation function, and a method of embedding a priori knowledge into it, are described in the sections that follow.

Fuzzy system and genetic representation. We use a TSK-model fuzzy system, which is widely applied to control problems and maps system states to control values. The TSK model differs from conventional fuzzy systems in that the consequents of its fuzzy rules are linear equations rather than fuzzy linguistic expressions. For example, a TSK rule has the form: IF X1 is A and X2 is B, THEN y = w1*X1 + w2*X2 + w3, where the wn are constants. The final control value is computed from the outputs of all the rules, weighted by each rule's firing strength.

We parameterize a triangular membership function by its left base, its right base, and the distance of its center from the previous center point (the first center is an absolute position). Other parameterized shapes, such as sigmoidal, Gaussian, bell-shaped, or trapezoidal, could be used instead. Unlike most approaches, our system places no overlap restrictions on the membership functions, and complete overlap is allowed (see Fig. 5).
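A sketch of how this relative encoding can be decoded and evaluated. The function names are ours, and we read "left base" and "right base" as the distances from the center at which membership falls to zero; that reading is an assumption about the parameterization, not something the text states explicitly.

```python
def decode_centers(genes):
    """First gene is an absolute center; each later gene is the distance
    from the previous center."""
    centers = [genes[0]]
    for d in genes[1:]:
        centers.append(centers[-1] + d)
    return centers

def triangular(x, center, left_base, right_base):
    """Asymmetric triangular membership grade of x (zero outside the base)."""
    if x < center:
        return max(0.0, 1.0 - (center - x) / left_base)
    return max(0.0, 1.0 - (x - center) / right_base)
```

Because centers are encoded relative to their predecessors, mutating one center gene shifts all later membership functions, which keeps their ordering stable under crossover.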

Fig. 5. (a) Membership-function representation; (b) possible membership functions.

In general, the numbers of fuzzy sets of the input variables jointly determine the number of fuzzy rules. For example, a TSK model with m input variables and n fuzzy sets per variable will have n^m fuzzy rules. Because the number of rules depends directly on the number of membership functions, eliminating a membership function has a direct effect on eliminating rules. Each membership function requires three parameters, and each fuzzy rule requires three parameters, so an m-input, one-output system with n fuzzy sets per input variable requires 3(mn + n^m) parameters.
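The 3(mn + n^m) count can be checked with one line of arithmetic; for instance, two inputs with three fuzzy sets each give 3*(2*3 + 3^2) = 45 genes.

```python
def tsk_parameter_count(m, n):
    """3 parameters per membership function (m*n of them) plus
    3 per rule consequent (n**m rules): 3 * (m*n + n**m)."""
    return 3 * (m * n + n ** m)
```

The n^m term dominates quickly, which is why eliminating membership functions (and with them whole rules) matters so much for keeping the chromosome short.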

The genetic representation explicitly encodes the three parameters of each membership function and the consequent parameters described above. The number of rules, however, is encoded implicitly, through boundary conditions and the positioning of the membership functions: we can control the number of rules by eliminating any membership function whose center lies outside the range of its input variable, together with the rules that contain it. For example, in the inverted-pendulum application, membership functions for θ with center positions beyond 90° can be eliminated in this way.
