Automatic Design of Fuzzy Systems Based on Neural Networks and Genetic Algorithms: Bilingual (Chinese-English) Translation of Foreign Literature
Sensor Technology: Bilingual (Chinese-English) Translation of Foreign Literature

Development of New Sensor Technologies

Sensors are devices that convert physical, chemical, biological, and other quantities into electrical signals. The output signals can take different forms, such as voltage, current, frequency, and pulse, and can meet the requirements of information transmission, processing, recording, display, and control. Sensors are indispensable components in automatic detection systems and automatic control systems. If computers are compared to brains, then sensors are like the five senses. A sensor must correctly sense the measured quantity and convert it into a corresponding output, and it therefore plays a decisive role in the quality of the system. The higher the degree of automation, the higher the requirements placed on sensors. In today's information age, the information industry consists of three parts: sensing technology, communication technology, and computer technology.
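As a minimal illustration of the conversion chain described above, the sketch below scales a raw ADC count to a voltage and then to a temperature. The 12-bit resolution, 3.3 V reference, and the linear sensor model (0.5 V at 0 °C, 10 mV per °C) are hypothetical values, not from the text:

```python
def adc_to_voltage(count, vref=3.3, resolution_bits=12):
    """Convert a raw ADC count to a voltage, assuming an ideal linear ADC."""
    return count * vref / (2 ** resolution_bits - 1)

def voltage_to_celsius(voltage, v_at_0c=0.5, volts_per_degree=0.01):
    """Map the output voltage of a hypothetical linear temperature sensor
    (0.5 V at 0 °C, 10 mV per °C) to degrees Celsius."""
    return (voltage - v_at_0c) / volts_per_degree

raw = 1241                      # an example 12-bit ADC reading
temp = voltage_to_celsius(adc_to_voltage(raw))
```

The same two-stage pattern (electrical signal in, calibrated physical quantity out) applies whatever the measured quantity is; only the calibration constants change.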
Course Paper for "Neural Networks and Fuzzy Systems"

Title: Image Feature Extraction Based on Deep Learning
School: School of Electronic Engineering
Student ID: xxx
Major: Intelligent Information Processing
Year: xxx
Student: xxx
Advisor: xxxx
December 31, 2014

Image Feature Extraction Based on Deep Learning

Abstract: The arrival of the big data era has created favorable conditions for the development of deep learning theory.
This paper introduces the background of deep learning, focuses on the autoencoder method within deep learning, and implements a simulation of the autoencoder method, with a view to applying it later to automatic feature extraction from SAR images; finally, it discusses the difficulties the theory currently faces.
Keywords: deep learning, autoencoder, convolution, pooling

1. Introduction

Deep learning is a new area of machine learning research. Its core idea is to imitate the hierarchical abstraction structure of the human brain, analyzing large-scale data in an unsupervised manner to uncover the valuable information hidden in big data.
Deep learning was born of big data and gives big data a brain for deep thinking. Since 2006, interest in deep learning has risen steadily in academia, and Stanford University, New York University, and the University of Montreal in Canada have become major centers of deep learning research. In 2010, the U.S. Department of Defense's DARPA program funded deep learning projects for the first time, with Stanford University, New York University, and NEC Laboratories America as participants.
An important piece of evidence supporting deep learning is that the brain's nervous system does have a rich hierarchical structure. The best-known example is the Hubel-Wiesel model, which earned a Nobel Prize in Physiology or Medicine for revealing the mechanism of the visual nervous system. Beyond this bionic motivation, theoretical research on deep learning is still essentially in its infancy, but the approach has already shown enormous power in applications.
Since 2011, speech recognition researchers at Microsoft Research and Google have used DNN technology to cut speech recognition error rates by 20%-30%, the biggest breakthrough in that field in more than a decade. In 2012, DNN technology achieved astonishing results in image recognition, lowering the error rate from 26% to 15% on the ImageNet evaluation. In the same year, DNNs were applied to a pharmaceutical company's drug activity problem and achieved the world's best result, an achievement reported by The New York Times. Today, well-known high-tech companies that hold big data, such as Google, Microsoft, and Baidu, are racing to invest resources and seize the technological high ground of deep learning, precisely because they all see that, in the big data era, more complex and more powerful deep models can deeply reveal the complex and rich information carried in massive data and make more accurate predictions about future or unknown events.
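The autoencoder method discussed in the abstract can be sketched as a small network trained to reconstruct its own input, with the hidden layer acting as the extracted feature code. The layer sizes, learning rate, iteration count, and synthetic data below are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data standing in for image features: 200 samples that actually lie
# on a 3-dimensional subspace of an 8-dimensional space.
X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 8))

n_in, n_hidden = 8, 3                  # compress 8 inputs into a 3-unit code
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_in)); b2 = np.zeros(n_in)
lr = 0.5

def forward(X):
    h = np.tanh(X @ W1 + b1)           # encoder: the learned feature code
    return h, h @ W2 + b2              # linear decoder reconstructs the input

initial_mse = ((forward(X)[1] - X) ** 2).mean()

for _ in range(500):                   # full-batch gradient descent on MSE
    h, out = forward(X)
    err = (out - X) / len(X)           # gradient of the reconstruction error
    dh = err @ W2.T * (1 - h ** 2)     # backpropagate through tanh
    W2 -= lr * (h.T @ err); b2 -= lr * err.sum(axis=0)
    W1 -= lr * (X.T @ dh);  b1 -= lr * dh.sum(axis=0)

final_mse = ((forward(X)[1] - X) ** 2).mean()
```

After training, the hidden activations `h` are the unsupervised features; stacking such layers (and adding convolution and pooling) gives the deep architectures the paper refers to.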
Fuzzy Logic: Bilingual (Chinese-English) Translation of Foreign Literature

(The document contains the English original and a Chinese translation.)

Translation: Fuzzy Logic

Welcome to the exciting world of fuzzy logic, a new science you can use to accomplish real things. Add fuzzy-logic-based analysis and control to your technical and managerial skills, and you can achieve what others, using other tools, cannot. Here is the foundation of fuzzy logic: as the complexity of a system increases, precise statements about the system become harder and harder to make, and finally impossible. Eventually a level of complexity is reached that only the human invention of fuzzy logic can handle. Fuzzy logic is used in system analysis and control design because it can shorten engineering development time; for some highly complex systems, it is sometimes the only way to solve the problem.
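To make the idea concrete, here is a minimal sketch of the kind of fuzzy controller described above. The temperature sets, rule table, and fan-speed outputs are invented for illustration and come from no particular product:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at 1 when x = b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fan_speed(temp_c):
    # Fuzzify: degrees of membership in three hypothetical temperature sets.
    cold = tri(temp_c, -10, 0, 18)
    warm = tri(temp_c, 10, 22, 32)
    hot  = tri(temp_c, 26, 40, 60)
    # Rules: cold -> speed 0, warm -> speed 50, hot -> speed 100.
    # Defuzzify with a weighted average of the rule outputs.
    num = cold * 0 + warm * 50 + hot * 100
    den = cold + warm + hot
    return num / den if den else 0.0
```

A temperature that is partly "warm" and partly "hot" yields a fan speed between the two rule outputs, which is exactly the graded, non-binary behavior that makes fuzzy control useful for systems too complex to state precisely.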
Although we usually think of control in connection with controlling a physical system, that was not Dr. Zadeh's original intent when he conceived the concept. In fact, fuzzy logic applies to biology, economics, marketing, and other large, complex systems. The word "fuzzy" first appeared in a paper Dr. Zadeh published in a leading engineering journal in 1962. In 1963, Dr. Zadeh became chair of the Department of Electrical Engineering at the University of California, Berkeley, which meant reaching the top of the electrical engineering field. Dr. Zadeh held that fuzzy control was a topic for that time, not for some later time, and certainly not one to be slighted. There are now thousands of fuzzy-logic-based products, from auto-focus cameras to washing machines that adjust their wash cycle to how dirty the clothes are. If you are in the United States, you can easily find fuzzy-based systems. Imagine the effect on sales when General Motors tells the public that the anti-lock brakes in its cars are based on fuzzy logic. The following chapters: 1) introduce people in business and other fields to the benefits that have grown out of fuzzy logic, and help them understand how it works; and 2) provide a guide to how fuzzy logic works, since only those who understand it can use it to their advantage. This book is such a guide, so you can apply fuzzy logic even if you are not an expert in the electrical field. It should be pointed out that there are opposing views of and criticisms directed at fuzzy logic. One should learn to weigh the opposing views and form one's own opinion. Personally, as an author who has been praised and commended for writing about fuzzy logic, I find some of the criticism in this field excessive.
Genetic Algorithm: Bilingual (Chinese-English) Translation of Foreign Literature
(The document contains the English original and a Chinese translation.)

Improved Genetic Algorithm and Its Performance Analysis

Abstract: The genetic algorithm has become well known for its global searching, parallel computing, better robustness, and freedom from the need for differential information during evolution. However, it also has some demerits, such as slow convergence speed. In this paper, based on several general theorems, an improved genetic algorithm using variant chromosome length and variant probabilities of crossover and mutation is proposed. Its main idea is as follows: at the beginning of evolution, a shorter chromosome length and higher probabilities of crossover and mutation are used, while in the vicinity of the global optimum, a longer chromosome length and lower probabilities of crossover and mutation are used. Finally, testing with some critical functions shows that our approach can improve the convergence speed of the genetic algorithm significantly, and that its comprehensive performance is better than that of the genetic algorithm that only reserves the best individual.

The genetic algorithm is an adaptive searching technique based on the selection and reproduction mechanism found in natural evolution, and it was pioneered by Holland in the 1970s. It has become well known for its global searching, parallel computing, better robustness, and freedom from the need for differential information during evolution. However, it also has some demerits, such as poor local searching, premature convergence, and slow convergence speed. In recent years, these problems have been studied. In this paper, an improved genetic algorithm with variant chromosome length and variant probability is proposed.
Testing with some critical functions shows that it can improve the convergence speed significantly, and that its comprehensive performance is better than that of the genetic algorithm that only reserves the best individual. In Section 1, our new approach is proposed. Through optimization examples, Section 2 compares the efficiency of our algorithm with the genetic algorithm that only reserves the best individual, and Section 3 gives the conclusions. Finally, proofs of the relevant theorems are collected in the appendix.

1 Description of the algorithm

1.1 Some theorems

Before proposing our approach, we give some general theorems (see the appendix). Assume there is just one variable (a multivariable problem can be divided into many sections, one section per variable) x ∈ [a, b], x ∈ R, and that the chromosome length under binary encoding is l.

Theorem 1: The minimal resolution of the chromosome is (b − a)/(2^l − 1).

Theorem 2: The weight value of the i-th bit of the chromosome is w_i = (b − a) · 2^(i−1)/(2^l − 1), i = 1, 2, ..., l.

Theorem 3: The mathematical expectation E_c(x) of the chromosome searching step with one-point crossover is E_c(x) = (b − a) · P_c/(2l), where P_c is the probability of crossover.

Theorem 4: The mathematical expectation E_m(x) of the chromosome searching step with bit mutation is E_m(x) = (b − a) · P_m, where P_m is the probability of mutation.

1.2 Mechanism of the algorithm

During the evolutionary process, we presume that the value domain of the variable is fixed and that the probability of crossover is a constant. From Theorems 1 and 3, we know that the longer the chromosome, the smaller the searching step of the chromosome and the higher the resolution, and vice versa; meanwhile, the crossover probability is in direct proportion to the searching step.
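The quantities in Theorems 1-4 can be checked numerically. The domain [0, 10], chromosome length l = 8, and the probability values below are arbitrary illustrative choices:

```python
# Resolution, bit weights, and expected searching steps from Theorems 1-4.
a, b = 0.0, 10.0     # variable domain [a, b]
l = 8                # chromosome length in bits
Pc, Pm = 0.3, 0.1    # probabilities of crossover and mutation

resolution = (b - a) / (2 ** l - 1)                 # Theorem 1
weights = [(b - a) * 2 ** (i - 1) / (2 ** l - 1)    # Theorem 2
           for i in range(1, l + 1)]
Ec = (b - a) * Pc / (2 * l)                         # Theorem 3
Em = (b - a) * Pm                                   # Theorem 4

# The bit weights sum to the whole interval width (b - a), which is why
# E_m(x) = Pm * sum(w_i) collapses to (b - a) * Pm in Theorem 4.
```

Doubling l halves nothing in E_m but sharply shrinks the resolution, which is the asymmetry the mechanism in Section 1.2 exploits.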
From Theorem 4, changing the chromosome length does not affect the searching step of mutation, while the mutation probability is also in direct proportion to the searching step.

At the beginning of evolution, a shorter chromosome (it cannot be too short, otherwise population diversity suffers) and higher probabilities of crossover and mutation increase the searching step, which allows searching over a greater domain and avoids falling into local optima. In the vicinity of the global optimum, a longer chromosome and lower probabilities of crossover and mutation decrease the searching step, and the longer chromosome also improves the resolution of mutation, which avoids wandering near the global optimum and speeds up convergence. Finally, it should be pointed out that changing the chromosome length keeps individual fitness unchanged, so it does not affect selection (with roulette-wheel selection).

1.3 Description of the algorithm

Since the basic genetic algorithm does not converge on the global optimum, while the genetic algorithm that reserves the best individual of the current generation does, our approach adopts the latter policy. During the evolutionary process, we track the cumulative average of the individual average fitness up to the current generation, written as

f̄(G) = (1/G) · Σ_{t=1}^{G} f_avg(t)

where G is the current evolutionary generation and f_avg is the individual average fitness. When the cumulative average fitness increases to k times (k > 1, k ∈ R) the initial individual average fitness, we change the chromosome length to m times (m a positive integer) its original length and reduce the probabilities of crossover and mutation, which improves individual resolution, reduces the searching step, and speeds up convergence. The procedure is as follows:

Step 1: Initialize the population, calculate the individual average fitness f_avg0, and set the change flag Flag = 1.

Step 2: On the basis of reserving the best individual of the current generation, carry out selection, regeneration, crossover, and mutation, and calculate the cumulative average f̄ of the individual average fitness up to the current generation.

Step 3: If f̄ / f_avg0 ≥ k and Flag = 1, increase the chromosome length to m times its original length, reduce the probabilities of crossover and mutation, and set Flag = 0; otherwise continue evolving.

Step 4: If the end condition is satisfied, stop; otherwise go to Step 2.

2 Test and analysis

We adopt the following two critical functions to test our approach and compare it with the genetic algorithm that only reserves the best individual:

f1(x, y) = [sin²(√(x² + y²)) − 0.5] / [1 + 0.01(x² + y²)]²,  x, y ∈ [−5, 5]

f2(x, y) = 4 − (x² + 2y² − 0.3cos(3πx) − 0.4cos(4πy)),  x, y ∈ [−1, 1]

2.1 Analysis of convergence

During function testing, we use the following policies: roulette-wheel selection, one-point crossover, and bit mutation; the population size is 60; l is the chromosome length; P_c and P_m are the probabilities of crossover and mutation, respectively. We randomly select four genetic algorithms reserving the best individual, with various fixed chromosome lengths and probabilities of crossover and mutation, to compare with our approach. Table 1 gives the average converging generation in 100 tests. In our approach, we adopt the initial parameters l0 = 10, Pc0 = 0.3, Pm0 = 0.1, and k = 1.2; when the change condition is satisfied, we adjust the parameters to l = 30, Pc = 0.1, Pm = 0.01. From Table 1, we know that our approach improves the convergence speed of the genetic algorithm significantly, which accords with the analysis above.

2.2 Analysis of online and offline performance

Quantitative evaluation methods for genetic algorithms, including online and offline performance, were proposed by De Jong. The former tests dynamic performance, while the latter evaluates convergence performance.
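Steps 1-4 above can be sketched as a small program. The test function f2 and the parameter values (population 60, l0 = 10, Pc0 = 0.3, Pm0 = 0.1, k = 1.2, switching to l = 30, Pc = 0.1, Pm = 0.01) are taken from the text; using l bits per variable and re-encoding each individual at the finer resolution when the length changes are implementation assumptions:

```python
import math
import random

random.seed(1)

LOW, HIGH = -1.0, 1.0        # domain of the test function f2
POP, K, M = 60, 1.2, 3       # population size, change threshold k, length factor m

def f2(x, y):
    # Second test function from the paper; its maximum is 4.7 at (0, 0).
    return 4 - (x * x + 2 * y * y
                - 0.3 * math.cos(3 * math.pi * x)
                - 0.4 * math.cos(4 * math.pi * y))

def decode(bits, l):
    """Map a 2*l-bit chromosome to (x, y), l bits per variable."""
    vals = []
    for s in (bits[:l], bits[l:]):
        n = int("".join(map(str, s)), 2)
        vals.append(LOW + (HIGH - LOW) * n / (2 ** l - 1))
    return vals

def encode(v, l):
    """Encode one value at resolution l; used when chromosomes are lengthened."""
    n = round((v - LOW) / (HIGH - LOW) * (2 ** l - 1))
    return [int(b) for b in format(n, f"0{l}b")]

def roulette(pop, fits):
    """Roulette-wheel selection (f2 is always positive on the domain)."""
    r = random.uniform(0, sum(fits))
    acc = 0.0
    for ind, f in zip(pop, fits):
        acc += f
        if acc >= r:
            return ind
    return pop[-1]

l, pc, pm = 10, 0.3, 0.1     # initial l0, Pc0, Pm0
pop = [[random.randint(0, 1) for _ in range(2 * l)] for _ in range(POP)]
favg0 = sum(f2(*decode(i, l)) for i in pop) / POP
gen_avgs, flag = [], True

for _ in range(200):
    fits = [f2(*decode(i, l)) for i in pop]
    gen_avgs.append(sum(fits) / POP)
    # Step 3: once the cumulative average fitness reaches K * favg0,
    # lengthen every chromosome (re-encoding each variable at the finer
    # resolution leaves fitness essentially unchanged) and shrink pc, pm.
    if flag and sum(gen_avgs) / len(gen_avgs) >= K * favg0:
        pop = [encode(decode(i, l)[0], M * l) + encode(decode(i, l)[1], M * l)
               for i in pop]
        l, pc, pm, flag = M * l, 0.1, 0.01, False
        fits = [f2(*decode(i, l)) for i in pop]
    # Step 2: elitism plus selection, one-point crossover, and bit mutation.
    best = max(range(POP), key=fits.__getitem__)
    nxt = [pop[best][:]]                       # reserve the best individual
    while len(nxt) < POP:
        child = roulette(pop, fits)[:]
        if random.random() < pc:               # one-point crossover
            mate = roulette(pop, fits)
            cut = random.randrange(1, 2 * l)
            child = child[:cut] + mate[cut:]
        for j in range(2 * l):                 # bit mutation
            if random.random() < pm:
                child[j] ^= 1
        nxt.append(child)
    pop = nxt

best_val = max(f2(*decode(i, l)) for i in pop)
```

Because the best individual is copied into the next generation unchanged, the best fitness is non-decreasing, which is the convergence property the paper relies on.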
To better analyze the online and offline performance on the testing functions, we multiply the fitness of each individual by 10, and we give curves over 4000 generations for f1 and 1000 generations for f2.

[Fig. 1: Online and offline performance of f1]
[Fig. 2: Online and offline performance of f2]

From Fig. 1 and Fig. 2, we know that the online performance of our approach is only slightly worse than that of the fourth case, but much better than that of the second, third, and fifth cases, whose online performances are nearly the same. At the same time, the offline performance of our approach is better than that of the other four cases.

3 Conclusion

In this paper, based on some general theorems, an improved genetic algorithm using variant chromosome length and variant probabilities of crossover and mutation is proposed. Testing with some critical functions shows that it can improve the convergence speed of the genetic algorithm significantly, and that its comprehensive performance is better than that of the genetic algorithm that only reserves the best individual.

Appendix

Under the assumptions of Section 1, the validity of Theorem 1 and Theorem 2 is obvious.

Theorem 3: The mathematical expectation E_c(x) of the chromosome searching step with one-point crossover is E_c(x) = (b − a) · P_c/(2l), where P_c is the probability of crossover.

Proof: As shown in Fig. A1 (one-point crossover), assume the crossover point falls on the k-th locus, so that the genes at loci 1 through k − 1 are exchanged while the genes at loci k through l do not change. During crossover, the change probability of each exchanged gene is 1/2 ("1" to "0" or "0" to "1"). So, after a crossover at locus k, the mathematical expectation of the chromosome searching step is

E_ck(x) = Σ_{j=1}^{k−1} (1/2) · w_j = (1/2) · (b − a)/(2^l − 1) · (2^(k−1) − 1)    (A1)

Furthermore, the probability of the crossover point falling on each locus is equal, namely P_c/l, so the mathematical expectation of the chromosome searching step under crossover is

E_c(x) = Σ_{k=1}^{l} (P_c/l) · E_ck(x)    (A2)

Substituting Eq. (A1) into Eq. (A2), we obtain

E_c(x) = (P_c/l) · (b − a)/(2(2^l − 1)) · Σ_{k=1}^{l} (2^(k−1) − 1) = (P_c/l) · (b − a)/(2(2^l − 1)) · [(2^l − 1) − l] = (b − a) · P_c/(2l) · (1 − l/(2^l − 1))

When l is large, l/(2^l − 1) ≈ 0, so E_c(x) ≈ (b − a) · P_c/(2l).

Theorem 4: The mathematical expectation E_m(x) of the chromosome searching step with bit mutation is E_m(x) = (b − a) · P_m, where P_m is the probability of mutation.

Proof: The mutation probability of the gene at each locus of the chromosome is the same, namely P_m, so the mathematical expectation of the mutation searching step is

E_m(x) = Σ_{i=1}^{l} P_m · w_i = P_m · (b − a)/(2^l − 1) · Σ_{i=1}^{l} 2^(i−1) = P_m · (b − a)/(2^l − 1) · (2^l − 1) = (b − a) · P_m.
Handoff in Cellular Systems: Bilingual (Chinese-English) Translation of Foreign Literature
I. English Original

Handoff in Cellular Systems
Nishith D. Tripathi, Nortel
Jeffrey H. Reed and Hugh F. VanLandingham, MPRG, Virginia Tech

Cellular System Deployment Scenarios

The radio propagation environment and the related handoff challenges differ across cellular structures. A handoff algorithm with fixed parameters cannot perform well in different system environments, so the specific characteristics of the communication system should be taken into account when designing handoff algorithms. Several basic cellular structures (e.g., macrocells, microcells, and overlay systems) and special architectures (e.g., underlays, multichannel-bandwidth systems, and evolutionary architectures) are described next. Integrated cordless and cellular systems, integrated cellular systems, and integrated terrestrial and satellite systems are also described.

Macrocells

Macrocell radii are on the order of several kilometers. Due to the low cell-crossing rate, centralized handoff is possible despite the large number of MSs the MSC has to manage. The signal quality in the uplink and downlink is approximately the same. The transition region between BSs is large, so handoff schemes should allow some delay to avoid flip-flopping. However, the delay should be short enough to preserve signal quality, because interference increases as the MS penetrates the new cell; this cell penetration is called cell dragging. Macrocells have relatively gentle path loss characteristics. The averaging interval (the time period used to average the signal strength variations) should be long enough to remove fading fluctuations.
First- and second-generation cellular systems provide wide-area coverage, even in cities, using macrocells. Typically, a BS transceiver in a macrocell transmits high output power, with the antenna mounted several meters high on a tower to illuminate a large area.

Microcells

Some capacity improvement techniques (e.g., larger bandwidths and improved methods for speech coding, channel coding, and modulation) will not be sufficient to satisfy the required service demand. The use of microcells is considered the single most effective means of increasing the capacity of cellular systems. Microcells increase capacity, but radio resource management becomes more difficult. Microcells can be classified as one-, two-, or three-dimensional, depending on whether they are along a road or highway, cover an area such as a number of adjacent roads, or are located in multilevel buildings, respectively. Microcells can also be classified as hot spots (service areas with a higher traffic density, or areas that are covered poorly), downtown clustered microcells (contiguous areas serving pedestrians and mobiles), and in-building 3-D cells (serving office buildings and pedestrians). Typically, a BS transceiver in a microcell transmits low output power, with the antenna mounted at lamppost level (approximately 5 m above ground). The MS also transmits low power, which leads to longer battery life. Since the BS antennas are lower than the surrounding buildings, RF signals propagate mostly along the streets. The antenna may cover 100-200 m in each street direction, serving a few city blocks. This propagation environment has low time dispersion, which allows high data rates. Microcells are more sensitive to traffic and interference than macrocells due to short-term variations (e.g., traffic and interference variations), medium/long-term alterations (e.g., new buildings), and incremental growth of the radio network (e.g., new BSs).

The number of handoffs per cell is increased by an order of magnitude, and the time available to make a handoff is decreased. Using an umbrella cell is one way to reduce the handoff rate. Due to the increase in microcell boundary crossings and the expected high traffic loads, a higher degree of decentralization of the handoff process becomes necessary. Microcells encounter a propagation phenomenon called the corner effect, characterized by a sudden large drop (e.g., 20-30 dB) in signal strength (e.g., over a 10-20 m distance) when a mobile turns a corner. The corner effect is due to the loss of the line-of-sight (LOS) component from the serving BS to the MS. It demands a faster handoff, can change the signal quality very quickly, and is hard to predict; a long measurement averaging interval is therefore not desirable. Moving obstacles can temporarily block the path between a BS and an MS, which resembles the corner effect. The reference studies the properties of symmetrical cell plans in a Manhattan-type environment. Cell plans affect signal-to-interference ratio (SIR) performance in the uplink and downlink significantly. Symmetrical cell plans have four nearest co-channel BSs located at the same distance. Such cell plans can be classified into half-square (HS), full-square (FS), and rectangular (R) cell plans, described next.

Half-Square Cell Plan: This cell plan places BSs with omnidirectional antennas at each intersection, and each BS covers half a block in all four directions. This cell plan avoids the street corner effect and provides the highest capacity; it has only LOS handoffs. Figure 2 shows an example of a half-square cell plan in a microcellular system.

Full-Square Cell Plan: A BS with an omnidirectional antenna is located at every other intersection, and each BS covers a block in all four directions. It is possible for an MS to experience the street corner effect in this cell plan. The FS cell plan can have LOS or NLOS handoffs. Figure 3 shows an example of a full-square cell plan in a microcellular system.

Rectangular Cell Plan: Each BS covers a fraction of either a horizontal or a vertical street, with the BS located in the middle of the cell. This cell plan can easily be adapted to market penetration: fewer BSs with high transmit power can be used initially, and as user density increases, new BSs can be added while transmit power is reduced at the appropriate BSs. The street corner effect is possible in this cell plan. The R cell plan can have LOS or NLOS handoffs. Figure 4 shows an example of a rectangular cell plan in a microcellular system.

Macrocell/Microcell Overlays

Congestion of certain microcells, the lack of microcell coverage in some areas, and the high speed of some users are some reasons for higher handoff rates and signaling load in microcells. To alleviate these problems, a mixed-cell architecture (called an overlay/underlay system) consisting of large macrocells (called umbrella or overlay cells) and small microcells (called underlay cells) can be used. Figure 5 illustrates an overlay system. The macrocell/microcell overlay architecture provides a balance between maximizing the number of users per unit area and minimizing the network control load associated with handoff. Macrocells provide wide-area coverage beyond microcell service areas and ensure better intercell handoff. Microcells provide capacity through greater frequency reuse and cover areas with high traffic density (called hot spots), such as an airport, a railway station, or a parking lot. In less congested areas (e.g., areas beyond a city center or outside the main streets of a city), traffic demand is not very high, and macrocells can provide adequate coverage.
Macrocells also serve high-speed MSs and the areas not covered by microcells (e.g., due to a lack of channels or the MS being out of microcell range). Also, after the microcellular system is used to its fullest extent, the overflow traffic can be routed to macrocells. One of the important issues for the overlay/underlay system is determining the optimum distribution of channels between the macrocells and microcells. The reference evaluates four approaches to sharing the available spectrum between the two tiers: approach 1 uses TDMA for the microcells and CDMA for the macrocells; approach 2 uses CDMA for the microcells and TDMA for the macrocells; approach 3 uses TDMA in both tiers; and approach 4 uses orthogonal frequency channels in both tiers. The overlay/underlay system has several advantages over a pure microcell system:

• BSs are required only in high-traffic areas. Since it is not necessary to cover the whole service area with microcells, infrastructure costs are saved.
• The number of handoffs in an overlay system is much smaller than in a microcell system, because fast-moving vehicles can be connected to the overlay macrocell.
• Both calling from an MS and location registration can easily be done through the microcell system.

There are several classes of umbrella cells. In one class, orthogonal channels are distributed between microcells and macrocells. In another class, microcells use channels that are temporarily unused by macrocells. In yet another class, microcells reuse the channels already assigned to macrocells and use slightly higher transmit power levels to counteract the interference from the macrocells. Within the overlay/underlay system environment, four types of handover need to be managed [19]: microcell to microcell, microcell to macrocell, macrocell to macrocell, and macrocell to microcell.

The reference describes combined cell splitting and overlaying. Reuse of channels in the two cells is achieved by establishing an overlaid small cell served by the same cell site as the large cell. Small cells reuse the split cell's channels because of the large distance between the split cell and the small inner cell, while the large cell cannot reuse these channels. Overlaid cells are approximately 50 percent more spectrally efficient than segmenting (distributing the channels among the small and large cells to avoid interference). A practical approach to implementing a microcell system overlaid on an existing macrocell system is also proposed in the literature. It introduces channel segregation (a self-organized dynamic channel assignment) and automatic transmit power control to obviate the need to design channel assignment and transmit power control for the microcell system. The available channels are reused automatically between microcells and macrocells, and a slight increase in transmit power for the microcell system compensates for the macrocell-to-microcell interference. Simulation results indicate that local traffic is accommodated by the microcells laid under the macrocells without any significant channel management effort. The methodology of the Global System for Mobile Communications (GSM)-based system has also been extended to the macrocell/microcell overlay system. The use of random frequency hopping and adaptive frequency planning is recommended, and different issues related to handoff and frequency planning for an overlay system are discussed. Four strategies are designed to determine a suitable cell for a user in an overlay system: two are based on dwell time (the time for which a call can be maintained in a cell without handoff), and the other two are based on user speed estimation. A speed estimation technique based on dwell times is also proposed.

A CDMA cellular system can provide full connectivity between the microcells and the overlaying macrocells without capacity degradation. The reference analyzes several factors that determine the cell size, the soft handoff (SHO) zone, and the capacity of the cell clusters. Several techniques for overlay-underlay cell clustering are also outlined. Applying CDMA to a microcell/macrocell overlay has the following major advantages:

• A heterogeneous environment can be illuminated uniformly by using a distributed antenna (a series of radiators with different propagation delays) while still maintaining a high-quality signal.
• SHO obviates the need for complex frequency planning.

Another reference studies the feasibility of a CDMA overlay that can share the 1850-1990 MHz personal communications services (PCS) band with existing microwave signals (transmitted by utility companies and state agencies); the results of several field tests demonstrate the application of such an overlay in the PCS band. The use of a CDMA microcell underlay with an existing analog macrocell is the focus of another study, which shows that high capacity can be achieved in a microcell at the expense of a slight degradation in macrocell performance. It finds that transmit and receive notch filters should be used at the microcell BSs, and it shows that the key parameters for such an overlay are the powers of the CDMA BS and MS transmitters relative to the macrocell BSs and the MSs served by the macrocells. Reference [25] studies spectrum management in an overlay system: a new cell selection method is proposed that uses the history of microcell sojourn times, a procedure to determine an optimum velocity threshold for the proposed method is outlined, and a systematic approach to optimal frequency spectrum management is described.

Special Architectures

There are several special cellular architectures that try to improve spectral efficiency without a large increase in infrastructure costs. Some of these structures, discussed here, include an underlay/overlay system (which is different from the overlay/underlay system described earlier) and a multichannel-bandwidth system. Many cellular systems are expected to evolve from a macrocellular system to an overlay/underlay system; a study focusing on such evolution is described in [26].

A Multiple-Channel-Bandwidth System: Multiple channel bandwidths can be used within a cell to improve spectral efficiency. In a multiple-channel-bandwidth system (MCBS), a cell has two or three ring-shaped regions with different-bandwidth channels [28]. Figure 7 shows an MCBS. Assume that 30 kHz is the normal bandwidth for a signal. For a three-ring MCBS, 30 kHz channels can be used in the outermost ring, 15 kHz channels in the middle ring, and 7.5 kHz channels in the innermost ring. The areas of these rings can be determined from the expected traffic conditions. Thus, instead of using 30 kHz channels throughout the cell, different bandwidth channels (e.g., 15 kHz and 7.5 kHz) can be used to increase the number of channels in a cell. The MCBS exploits the fact that a wide-bandwidth channel requires a lower carrier-to-interference ratio (C/I) than a narrow-bandwidth channel for the same voice quality. For example, the C/I requirements for 30 kHz, 15 kHz, and 7.5 kHz channel bandwidths are 18 dB, 24 dB, and 30 dB, respectively, based on subjective voice quality tests [28]. If the transmit power at a cell site is the same for all bandwidths, a wide channel can serve a large cell while a narrow channel serves a relatively small cell. Moreover, since a wide channel can tolerate a higher level of co-channel interference (CCI), it can afford a smaller D/R ratio (the ratio of co-channel distance to cell radius).
Thus, in the MCBS, more channels become available due to the multiple-bandwidth signals, and frequencies can be reused more closely in a given service region due to the different C/I requirements.

Integrated Wireless Systems

Integrated wireless systems are exemplified by integrated cordless and cellular systems, integrated cellular systems, and integrated terrestrial and satellite systems. Such integrated systems combine the features of individual wireless systems to achieve improved mobility at low cost.

Integrated Terrestrial Systems: Terrestrial intersystem handoff may occur between two cellular systems or between a cellular system and a cordless telephone system. Examples of systems that need intersystem handoffs include GSM with the Digital European Cordless Telephone (DECT) system, and CDMA in macrocells with TDMA in microcells. When a call initiated in a cellular system controlled by one MSC enters a system controlled by another MSC, intersystem handoff is required to continue the call [29]; in this case one MSC makes a handoff request to the other MSC to save the call. The MSCs need intersystem handoff software if intersystem handoff is to be implemented, and compatibility between the MSCs concerned must be considered too. There are several possible outcomes of an intersystem handoff [29]:

• A long-distance call becomes a local call when an MS becomes a roamer.
• A long-distance call becomes a local call when a roamer becomes a home mobile unit.
• A local call becomes a long-distance call when a home mobile unit becomes a roamer.
• A local call becomes a long-distance call when a roamer becomes a home mobile unit.

There is a growing trend toward service portability across dissimilar systems such as GSM and DECT [30]. For example, it is desirable to have intersystem handoff between cordless and cellular coverage, and cost-effective handoff algorithms for such scenarios represent a significant research area. This article outlines different approaches to achieving intersystem handoff. Simulation results are presented for handoff between GSM and the DECT/Wide Access Communications System (WACS). The paper shows that a minor adjustment to the DECT specification can greatly simplify the implementation of an MS capable of intersystem handoff between GSM and DECT.

Integrated Terrestrial and Satellite Systems: In an integrated cellular/satellite system, the advantages of satellites and cellular systems can be combined. Satellites can provide wide-area coverage, completion of coverage, immediate service, and additional capacity (by handling overflow traffic), while a cellular system can provide a high-capacity, economical system. Some of the issues involved in an integrated system are discussed in [31]; in particular, the procedures of GSM are examined for their application to integrated systems. The future public land mobile telecommunication system (FPLMTS) will provide a personal telephone system that enables a person with a handheld terminal to be reached anywhere in the world [32]. The FPLMTS will include low Earth orbit (LEO) or geostationary Earth orbit (GEO) satellites as well as terrestrial cellular systems. When an MS is inside the coverage area of a terrestrial cellular system, the BS will act as a relay station and provide a link between the MS and the satellite; when an MS is outside the terrestrial coverage area, it will have a direct communication link with the satellite. Different issues such as system architecture, call handling, performance analysis of the access, and transmission protocols are discussed in [32]. The two handoff scenarios in an integrated system are described below.

Handoff from the Land Mobile Satellite System to the Terrestrial System: While operating, the MS monitors the satellite link and evaluates the link performance. The received signal strengths (RSSs) are averaged (e.g., over a 30 s period) to minimize signal strength variations. If the averaged RSS falls below a certain threshold N consecutive times (e.g., N = 3), the MS begins measuring the RSS from the terrestrial cellular system. If the terrestrial signals are strong enough, handoff is made to the terrestrial system, provided that the terrestrial system can serve the MS.

Handoff from the Terrestrial System to the Land Mobile Satellite System: When an MS is getting service from the terrestrial system, the BS sends an acknowledgment request (called a page) at predefined intervals to ensure that the MS is still inside the coverage area. If an acknowledgment signal from the MS (called a page response) is not received at the BS N consecutive times, the MS is handed off to the land mobile satellite system (LMSS). Reference [33] focuses on personal communication systems with hierarchical overlays that incorporate terrestrial and satellite systems. The lowest level in the hierarchy is formed by microcells; macrocells overlay microcells and form the middle level; satellite beams overlay macrocells and constitute the topmost level. Two types of subscribers are considered: satellite-only and dual cellular/satellite. Call attempts from satellite-only subscribers are served by satellite systems, while call attempts from dual subscribers are first directed to the serving terrestrial systems, with the satellites taking care of the overflow traffic. An analytical model for teletraffic performance is developed, and performance measures such as traffic distribution, blocking probability, and forced termination probability are evaluated for low-speed and high-speed users.

Handoff Evaluation Mechanisms

Three basic mechanisms used to evaluate the performance of handoff algorithms are the analytical, simulation, and emulation approaches; they are described here.

The Analytical Approach

This approach can quickly give a preliminary idea of the performance of some handoff algorithms in simplified handoff scenarios.
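The satellite-to-terrestrial trigger described above (a windowed RSS average falling below a threshold N consecutive times) can be sketched as follows; N = 3 and a 30-sample window come from the text's example (one sample per second standing in for the 30 s average), while the -100 dBm threshold and the sample values are assumptions for illustration:

```python
from collections import deque

def should_probe_terrestrial(rss_samples, threshold_dbm=-100.0,
                             window=30, n_consecutive=3):
    """Return True once the windowed average RSS of the satellite link
    has fallen below the threshold N consecutive times."""
    recent = deque(maxlen=window)   # sliding averaging window (e.g., 30 s)
    below = 0
    for rss in rss_samples:
        recent.append(rss)
        avg = sum(recent) / len(recent)
        if avg < threshold_dbm:
            below += 1
            if below >= n_consecutive:
                return True         # start measuring the terrestrial system
        else:
            below = 0               # any good average resets the counter
    return False
```

The averaging window suppresses fading fluctuations, and the N-consecutive rule adds hysteresis, the same flip-flop protection discussed for macrocell handoff earlier in the article.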
This approach is valid only under specified constraints (e.g., assumptions about the RSS profiles). Actual handoff procedures are quite complicated and are not memoryless, which makes the analytical approach less realistic. For real-world situations, this approach is complex and mathematically intractable. Some of the analytical approaches appearing in the literature are briefly touched on below.

二、中文翻译：蜂窝系统切换技术

蜂窝系统部署方案的难点在于，不同的单元结构对应着不同的无线电传播环境和相应的切换问题。
遗传算法中英文对照外文翻译文献
（文档含英文原文和中文翻译）

Improved Genetic Algorithm and Its Performance Analysis

Abstract: The genetic algorithm has become famous for its global searching, parallel computing, robustness, and freedom from the need for differential information during evolution. However, it also has some demerits, such as slow convergence. In this paper, based on several general theorems, an improved genetic algorithm using variant chromosome length and variant probabilities of crossover and mutation is proposed. Its main idea is as follows: at the beginning of evolution, the algorithm evolves with a shorter chromosome and higher probabilities of crossover and mutation; in the vicinity of the global optimum, it evolves with a longer chromosome and lower probabilities of crossover and mutation. Finally, testing with some critical functions shows that our solution can improve the convergence speed of the genetic algorithm significantly, and that its comprehensive performance is better than that of a genetic algorithm which only reserves the best individual.

The genetic algorithm is an adaptive searching technique based on the selection and reproduction mechanisms found in natural evolution, and it was pioneered by Holland in the 1970s. It has become famous for its global searching, parallel computing, robustness, and freedom from the need for differential information during evolution. However, it also has some demerits, such as poor local searching, premature convergence, and slow convergence speed. In recent years, these problems have been studied.

In this paper, an improved genetic algorithm with variant chromosome length and variant probability is proposed. Testing with some critical functions shows that it can improve the convergence speed significantly, and that its comprehensive performance is better than that of a genetic algorithm which only reserves the best individual.

In section 1, our new approach is proposed.
Through optimization examples, in section 2, the efficiency of our algorithm is compared with that of the genetic algorithm which only reserves the best individual. Section 3 gives the conclusions. Finally, proofs of the relevant theorems are collected in the appendix.

1 Description of the algorithm

1.1 Some theorems

Before proposing our approach, we give some general theorems (see appendix). Assume there is just one variable (a multivariable problem can be divided into many sections, one section per variable) x ∈ [a, b], x ∈ R, and the chromosome length with binary encoding is l.

Theorem 1 The minimal resolution of the chromosome is

s = (b − a) / (2^l − 1)

Theorem 2 The weight value of the i-th bit of the chromosome is

w_i = ((b − a) / (2^l − 1)) · 2^(i−1),  i = 1, 2, …, l

Theorem 3 The mathematical expectation E_c(x) of the chromosome searching step with one-point crossover is

E_c(x) = ((b − a) / (2l)) · P_c

where P_c is the probability of crossover.

Theorem 4 The mathematical expectation E_m(x) of the chromosome searching step with bit mutation is

E_m(x) = (b − a) · P_m

where P_m is the probability of mutation.

1.2 Mechanism of the algorithm

During the evolutionary process, we presume that the value domain of the variable is fixed and the probability of crossover is a constant. From Theorems 1 and 3, the longer the chromosome is, the smaller the searching step and the higher the resolution, and vice versa. Meanwhile, the crossover searching step is in direct proportion to the crossover probability. From Theorem 4, changing the length of the chromosome does not affect the searching step of mutation, while the mutation searching step is in direct proportion to the mutation probability.

At the beginning of evolution, a shorter chromosome (it cannot be too short, otherwise population diversity suffers) and higher probabilities of crossover and mutation increase the searching step, which allows searching over a greater domain and avoids falling into local optima.
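The closed forms in Theorems 1–4 are easy to check numerically; the interval and parameter values below are arbitrary illustrations.

```python
def resolution(a, b, l):
    """Theorem 1: minimal resolution of an l-bit chromosome on [a, b]."""
    return (b - a) / (2 ** l - 1)

def bit_weight(a, b, l, i):
    """Theorem 2: weight of the i-th bit, i = 1..l."""
    return (b - a) / (2 ** l - 1) * 2 ** (i - 1)

def crossover_step(a, b, l, pc):
    """Theorem 3: expected searching step of one-point crossover."""
    return (b - a) / (2 * l) * pc

def mutation_step(a, b, pm):
    """Theorem 4: expected searching step of bit mutation."""
    return (b - a) * pm

# Example: x in [0, 10] with a 10-bit encoding
res = resolution(0, 10, 10)            # 10/1023, the finest step size
step_c = crossover_step(0, 10, 10, 0.3)
step_m = mutation_step(0, 10, 0.1)
```

Note also that the bit weights of Theorem 2 sum to exactly b − a, which is what makes the simple closed form of Theorem 4 come out.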
In the vicinity of the global optimum, a longer chromosome and lower probabilities of crossover and mutation decrease the searching step; the longer chromosome also improves the resolution of mutation, which avoids wandering near the global optimum and speeds up convergence.

Finally, it should be pointed out that changing the chromosome length keeps individual fitness unchanged, hence it does not affect selection (with roulette wheel selection).

1.3 Description of the algorithm

Since the basic genetic algorithm does not converge on the global optimum, while the genetic algorithm which reserves the best individual of the current generation does, our approach adopts this policy. During the evolutionary process, we track the cumulative average of the individual average fitness up to the current generation. It is written as

X(G) = (1/G) · Σ_{t=1}^{G} f_avg(t)

where G is the current evolutionary generation and f_avg is the individual average fitness. When the cumulative average fitness increases to k times (k > 1, k ∈ R) the initial individual average fitness, we change the chromosome length to m times (m is a positive integer) its original length and reduce the probabilities of crossover and mutation, which improves individual resolution, reduces the searching step, and speeds up convergence. The procedure is as follows:

Step 1 Initialize the population, calculate the initial individual average fitness f_avg0, and set the change-parameter flag.
Set Flag equal to 1.

Step 2 Based on reserving the best individual of the current generation, carry out selection, regeneration, crossover, and mutation, and calculate the cumulative average X of the individual average fitness up to the current generation.

Step 3 If X / f_avg0 ≥ k and Flag equals 1, increase the chromosome length to m times its original length, reduce the probabilities of crossover and mutation, and set Flag equal to 0; otherwise continue evolving.

Step 4 If the end condition is satisfied, stop; otherwise go to Step 2.

2 Test and analysis

We adopt the following two critical functions to test our approach and compare it with the genetic algorithm which only reserves the best individual:

f1(x, y) = 0.5 − (sin²√(x² + y²) − 0.5) / [1 + 0.01(x² + y²)]²,  x, y ∈ [−5, 5]

f2(x, y) = 4 − (x² + 2y² − 0.3cos(3πx) − 0.4cos(4πy)),  x, y ∈ [−1, 1]

2.1 Analysis of convergence

During function testing, we use the following policies: roulette wheel selection, one-point crossover, bit mutation, and a population size of 60; l is the chromosome length, and Pc and Pm are the probabilities of crossover and mutation, respectively. We randomly select four genetic algorithms reserving the best individual, with various fixed chromosome lengths and probabilities of crossover and mutation, to compare with our approach. Tab. 1 gives the average converging generation over 100 tests.

In our approach, we adopt the initial parameters l0 = 10, Pc0 = 0.3, Pm0 = 0.1, and k = 1.2; when the change-parameter condition is satisfied, we adjust the parameters to l = 30, Pc = 0.1, Pm = 0.01.

From Tab. 1, we know that our approach improves the convergence speed of the genetic algorithm significantly, which accords with the above analysis.

2.2 Analysis of online and offline performance

Quantitative evaluation methods for genetic algorithms were proposed by De Jong, including online and offline performance. The former tests dynamic performance; the latter evaluates convergence performance.
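A minimal sketch of the procedure in Steps 1–4 on the first test function, using the paper's parameters (l0 = 10, Pc0 = 0.3, Pm0 = 0.1, k = 1.2, switching to l = 30, Pc = 0.1, Pm = 0.01). The population handling and encoding details are assumptions for illustration, not the paper's exact experimental code.

```python
import math
import random

random.seed(0)
A, B = -5.0, 5.0                      # variable domain for f1

def f1(x, y):
    """Schaffer-type test function; global maximum f1(0, 0) = 1."""
    r2 = x * x + y * y
    return 0.5 - (math.sin(math.sqrt(r2)) ** 2 - 0.5) / (1 + 0.01 * r2) ** 2

def decode(bits, l):
    """Two variables, l bits each, mapped onto [A, B]."""
    vals = []
    for k in range(2):
        n = int("".join(map(str, bits[k * l:(k + 1) * l])), 2)
        vals.append(A + (B - A) * n / (2 ** l - 1))
    return vals

def lengthen(bits, l, m):
    """Grow each variable's field from l to m*l bits, keeping its value
    (2^(ml)-1 is divisible by 2^l-1, so the rescaling is exact)."""
    out = []
    for k in range(2):
        n = int("".join(map(str, bits[k * l:(k + 1) * l])), 2)
        n *= (2 ** (m * l) - 1) // (2 ** l - 1)
        out += [int(c) for c in format(n, "0%db" % (m * l))]
    return out

def roulette(pop, fits):
    """Roulette wheel selection (f1 is non-negative on the domain)."""
    r = random.uniform(0, sum(fits))
    acc = 0.0
    for ind, f in zip(pop, fits):
        acc += f
        if acc >= r:
            return ind
    return pop[-1]

def evolve(gens=200, size=60, l=10, pc=0.3, pm=0.1, k=1.2, m=3):
    pop = [[random.randint(0, 1) for _ in range(2 * l)] for _ in range(size)]
    f_avg0 = sum(f1(*decode(i, l)) for i in pop) / size   # Step 1
    cum, flag = 0.0, True
    for g in range(1, gens + 1):
        fits = [f1(*decode(i, l)) for i in pop]
        cum += sum(fits) / size                   # cumulative average X
        if flag and cum / g >= k * f_avg0:        # Step 3: switch parameters
            pop = [lengthen(i, l, m) for i in pop]
            l, pc, pm, flag = m * l, 0.1, 0.01, False
        best = pop[fits.index(max(fits))][:]      # elitism (fitness unchanged)
        nxt = [best]
        while len(nxt) < size:                    # Step 2
            p1, p2 = roulette(pop, fits), roulette(pop, fits)
            c1, c2 = p1[:], p2[:]
            if random.random() < pc:              # one-point crossover
                pt = random.randint(1, len(c1) - 1)
                c1, c2 = p1[:pt] + p2[pt:], p2[:pt] + p1[pt:]
            for c in (c1, c2):
                for i in range(len(c)):
                    if random.random() < pm:      # bit mutation
                        c[i] ^= 1
            nxt += [c1, c2]
        pop = nxt[:size]
    return max(f1(*decode(i, l)) for i in pop)

best = evolve()
```

Because fitness depends only on the decoded value and `lengthen` preserves that value exactly, the length switch leaves selection pressure untouched, as the paper requires.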
To better analyze the online and offline performance on the testing functions, we multiply the fitness of each individual by 10, and we give curves over 4000 and 1000 generations for f1 and f2, respectively.

Fig. 1 Online and offline performance of f1
Fig. 2 Online and offline performance of f2

From Fig. 1 and Fig. 2, we know that the online performance of our approach is only a little worse than that of the fourth case, but it is much better than that of the second, third, and fifth cases, whose online performances are nearly the same. At the same time, the offline performance of our approach is better than that of the other four cases.

3 Conclusion

In this paper, based on some general theorems, an improved genetic algorithm using variant chromosome length and variant probabilities of crossover and mutation is proposed. Testing with some critical functions shows that it can improve the convergence speed of the genetic algorithm significantly, and that its comprehensive performance is better than that of the genetic algorithm which only reserves the best individual.

Appendix

With the assumed conditions of section 1, the validity of Theorem 1 and Theorem 2 is obvious.

Theorem 3 The mathematical expectation E_c(x) of the chromosome searching step with one-point crossover is

E_c(x) = ((b − a) / (2l)) · P_c

where P_c is the probability of crossover.

Proof As shown in Fig. A1, assume that crossover happens at the k-th locus, i.e., the parent's loci from k+1 to l do not change, and the genes at loci 1 to k are exchanged.

During crossover, the change probability of the genes at loci 1 to k is 1/2 ("1" to "0" or "0" to "1"). So, after crossover, the mathematical expectation of the chromosome searching step over loci 1 to k is

E_ck(x) = Σ_{j=1}^{k} (1/2) · w_j = (1/2) · ((b − a)/(2^l − 1)) · Σ_{j=1}^{k} 2^(j−1) = ((b − a)/(2(2^l − 1))) · (2^k − 1)

Furthermore, the probability of crossover taking place at each locus of the chromosome is equal, namely (1/l)·P_c.
Therefore, after crossover, the mathematical expectation of the chromosome searching step is

E_c(x) = Σ_{k=1}^{l−1} (1/l) · P_c · E_ck(x)   (A2)

Substituting Eq. (A1) into Eq. (A2), we obtain

E_c(x) = (P_c/l) · ((b − a)/(2(2^l − 1))) · Σ_{k=1}^{l−1} (2^k − 1) = ((b − a)/(2l)) · P_c · (1 − l/(2^l − 1))

When l is large, l/(2^l − 1) ≈ 0, so

E_c(x) ≈ ((b − a)/(2l)) · P_c

Fig. A1 One-point crossover

Theorem 4 The mathematical expectation E_m(x) of the chromosome searching step with bit mutation is E_m(x) = (b − a) · P_m, where P_m is the probability of mutation.

Proof The mutation probability of the gene at each locus of the chromosome is equal, say P_m. Therefore, the mathematical expectation of the mutation searching step is

E_m(x) = Σ_{i=1}^{l} P_m · w_i = Σ_{i=1}^{l} P_m · ((b − a)/(2^l − 1)) · 2^(i−1) = (b − a) · P_m

一种新的改进遗传算法及其性能分析

摘要：虽然遗传算法以其全局搜索、并行计算、更好的健壮性以及在进化过程中不需要求导而著称，但是它仍然有一定的缺陷，比如收敛速度慢。
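Theorem 4 can also be checked empirically: averaging the summed weights of the flipped bits over many random mutations should approach (b − a)·P_m. The interval, chromosome length, and mutation rate below are illustrative.

```python
import random

random.seed(1)

def expected_mutation_step(a, b, l, pm, trials=20000):
    """Monte-Carlo estimate of the mutation searching step: the sum of the
    bit weights (Theorem 2) of the bits flipped in one mutation pass."""
    w = [(b - a) / (2 ** l - 1) * 2 ** i for i in range(l)]  # bit weights
    total = 0.0
    for _ in range(trials):
        total += sum(wi for wi in w if random.random() < pm)
    return total / trials

a, b, l, pm = 0.0, 1.0, 12, 0.05
est = expected_mutation_step(a, b, l, pm)
theory = (b - a) * pm        # Theorem 4: E_m(x) = (b - a) * P_m
```

The estimate converges because each bit flips independently with probability P_m, and the weights sum to b − a, exactly the linearity-of-expectation argument in the proof above.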
外文翻译---人工神经网络
外文翻译---人工神经网络英文文献英文资料:Artificial neural networks (ANNs) to ArtificialNeuralNetworks, abbreviations also referred to as the neural network (NNs) or called connection model (ConnectionistModel), it is a kind of model animals neural network behavior characteristic, distributed parallel information processing algorithm mathematical model. This network rely on the complexity of the system, through the adjustment of mutual connection between nodes internal relations, so as to achieve the purpose of processing information. Artificial neural network has since learning and adaptive ability, can provide in advance of a batch of through mutual correspond of the input/output data, analyze master the law of potential between, according to the final rule, with a new input data to calculate, this study analyzed the output of the process is called the "training". Artificial neural network is made of a number of nonlinear interconnected processing unit, adaptive information processing system. It is in the modern neuroscience research results is proposed on the basis of, trying to simulate brain neural network processing, memory information way information processing. Artificial neural network has four basic characteristics:(1) the nonlinear relationship is the nature of the nonlinear common characteristics. The wisdom of the brain is a kind of non-linear phenomena. Artificial neurons in the activation or inhibit the two different state, this kind of behavior in mathematics performance for a nonlinear relationship. Has the threshold of neurons in the network formed by the has betterproperties, can improve the fault tolerance and storage capacity.(2) the limitations a neural network by DuoGe neurons widely usually connected to. A system of the overall behavior depends not only on the characteristics of single neurons, and may mainly by the unit the interaction between the, connected to the. Through a large number of connection between units simulation of the brain limitations. 
Associative memory is a typical example of limitations.(3) very qualitative artificial neural network is adaptive, self-organizing, learning ability. Neural network not only handling information can have all sorts of change, and in the treatment of the information at the same time, the nonlinear dynamic system itself is changing. Often by iterative process description of the power system evolution.(4) the convexity a system evolution direction, in certain conditions will depend on a particular state function. For example energy function, it is corresponding to the extreme value of the system stable state. The convexity refers to the function extreme value, it has DuoGe DuoGe system has a stable equilibrium state, this will cause the system to the diversity of evolution.Artificial neural network, the unit can mean different neurons process of the object, such as characteristics, letters, concept, or some meaningful abstract model. The type of network processing unit is divided into three categories: input unit, output unit and hidden units. Input unit accept outside the world of signal and data; Output unit of output system processing results; Hidden unit is in input and output unit, not between by external observation unit. The system The connections between neurons right value reflect the connection between the unit strength, information processing and embodied in the network said theprocessing unit in the connections. Artificial neural network is a kind of the procedures, and adaptability, brain style of information processing, its essence is through the network of transformation and dynamic behaviors have akind of parallel distributed information processing function, and in different levels and imitate people cranial nerve system level of information processing function. 
It is involved in neuroscience, thinking science, artificial intelligence, computer science, etc DuoGe field cross discipline.Artificial neural network is used the parallel distributed system, with the traditional artificial intelligence and information processing technology completely different mechanism, overcome traditional based on logic of the symbols of the artificial intelligence in the processing of intuition and unstructured information of defects, with the adaptive, self-organization and real-time characteristic of the study.Development historyIn 1943, psychologists W.S.M cCulloch and mathematical logic W.P home its established the neural network and the math model, called MP model. They put forward by MP model of the neuron network structure and formal mathematical description method, and prove the individual neurons can perform the logic function, so as to create artificial neural network research era. In 1949, the psychologist put forward the idea of synaptic contact strength variable. In the s, the artificial neural network to further development, a more perfect neural network model was put forward, including perceptron and adaptive linear elements etc. M.M insky, analyzed carefully to Perceptron as a representative of the neural network system function and limitations in 1969 after the publication of the book "Perceptron, and points out thatthe sensor can't solve problems high order predicate. Their arguments greatly influenced the research into the neural network, and at that time serial computer and the achievement of the artificial intelligence, covering up development new computer and new ways of artificial intelligence and the necessity and urgency, make artificial neural network of research at a low. 
During this time, some of the artificial neural network of the researchers remains committed to this study, presented to meet resonance theory (ART nets), self-organizing mapping, cognitive machine network, but the neural network theory study mathematics. The research for neural network of research and development has laid a foundation. In 1982, the California institute of J.J.H physicists opfield Hopfield neural grid model proposed, and introduces "calculation energy" concept, gives the network stability judgment. In 1984, he again put forward the continuous time Hopfield neural network model for the neural computers, the study of the pioneering work, creating a neural network for associative memory and optimization calculation, the new way of a powerful impetus to the research into the neural network, in 1985, and scholars have proposed a wave ears, the study boltzmann model using statistical thermodynamics simulated annealing technology, guaranteed that the whole system tends to the stability of the points. In 1986 the cognitive microstructure study, puts forward the parallel distributed processing theory. Artificial neural network of research by each developed country, the congress of the United States to the attention of the resolution will be on jan. 5, 1990 started ten years as the decade of the brain, the international research organization called on its members will the decade of the brain into global behavior. In Japan's "real world computing(springboks claiming)" project, artificial intelligence research into an important component.Network modelArtificial neural network model of the main consideration network connection topological structure, the characteristics, the learning rule neurons. At present, nearly 40 kinds of neural network model, with back propagation network, sensor, self-organizing mapping, the Hopfieldnetwork.the computer, wave boltzmann machine, adapt to the ear resonance theory. 
According to the topology of the connection, the neural network model can be divided into:(1) prior to the network before each neuron accept input and output level to the next level, the network without feedback, can use a loop to no graph. This network realization from the input space to the output signal of the space transformation, it information processing power comes from simple nonlinear function of DuoCi compound. The network structure is simple, easy to realize. Against the network is a kind of typical prior to the network.(2) the feedback network between neurons in the network has feedback, can use a no to complete the graph. This neural network information processing is state of transformations, can use the dynamics system theory processing. The stability of the system with associative memory function has close relationship. The Hopfield network.the computer, wave ear boltzmann machine all belong to this type.Learning typeNeural network learning is an important content, it is through the adaptability of the realization of learning. According to the change of environment, adjust to weights, improve thebehavior of the system. The proposed by the Hebb Hebb learning rules for neural network learning algorithm to lay the foundation. Hebb rules say that learning process finally happened between neurons in the synapse, the contact strength synapses parts with before and after the activity and synaptic neuron changes. Based on this, people put forward various learning rules and algorithm, in order to adapt to the needs of different network model. Effective learning algorithm, and makes the godThe network can through the weights between adjustment, the structure of the objective world, said the formation of inner characteristics of information processing method, information storage and processing reflected in the network connection. 
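As a concrete instance of a feedback network used as an associative memory (the Hopfield model mentioned above), here is a minimal sketch with Hebbian outer-product storage and ±1 sign updates; the patterns and sizes are illustrative.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product storage; zero self-connections."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / n

def recall(W, state, steps=20):
    """Iterate sign updates until a fixed point (or the step limit)."""
    s = state.copy()
    for _ in range(steps):
        nxt = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(nxt, s):
            break
        s = nxt
    return s

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
W = train_hopfield(patterns)
noisy = np.array([1, -1, 1, -1, 1, 1])    # first pattern, one bit flipped
restored = recall(W, noisy)               # state settles back to pattern 0
```

The stored patterns become attractors of the dynamics: starting from a corrupted state, the network relaxes to the nearest stored pattern, which is exactly the "state transformation" view of feedback networks described above.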
According to the learning environment is different, the study method of the neural network can be divided into learning supervision and unsupervised learning. In the supervision and study, will the training sample data added to the network input, and the corresponding expected output and network output, in comparison to get error signal control value connection strength adjustment, the DuoCi after training to a certain convergence weights. While the sample conditions change, the study can modify weights to adapt to the new environment. Use of neural network learning supervision model is the network, the sensor etc. The learning supervision, in a given sample, in the environment of the network directly, learning and working stages become one. At this time, the change of the rules of learning to obey the weights between evolution equation of. Unsupervised learning the most simple example is Hebb learning rules. Competition rules is a learning more complex than learning supervision example, it is according to established clustering on weights adjustment. Self-organizing mapping, adapt to theresonance theory is the network and competitive learning about the typical model.Analysis methodStudy of the neural network nonlinear dynamic properties, mainly USES the dynamics system theory and nonlinear programming theory and statistical theory to analysis of the evolution process of the neural network and the nature of the attractor, explore the synergy of neural network behavior and collective computing functions, understand neural information processing mechanism. In order to discuss the neural network and fuzzy comprehensive deal of information may, the concept of chaos theory and method will play a role. The chaos is a rather difficult toprecise definition of the math concepts. In general, "chaos" it is to point to by the dynamic system of equations describe deterministic performance of the uncertain behavior, or call it sure the randomness. 
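The Hebb rule mentioned above, in its simplest form, makes a weight grow in proportion to correlated pre- and post-synaptic activity; the activity vectors and learning rate below are illustrative.

```python
import numpy as np

def hebb_update(W, pre, post, lr=0.1):
    """Hebb rule: dW_ij = lr * post_i * pre_j (co-activity strengthens)."""
    return W + lr * np.outer(post, pre)

pre = np.array([1.0, 0.0, 1.0])    # presynaptic activities
post = np.array([1.0, 0.0])        # postsynaptic activities
W = np.zeros((2, 3))
for _ in range(5):
    W = hebb_update(W, pre, post)  # only co-active pairs accumulate weight
```

After repeated presentations, weights between simultaneously active units grow while all others stay at zero, which is the unsupervised correlation-capturing behavior the text describes.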
"Authenticity" because it by the intrinsic reason and not outside noise or interference produced, and "random" refers to the irregular, unpredictable behavior, can only use statistics method description. Chaotic dynamics of the main features of the system is the state of the sensitive dependence on the initial conditions, the chaos reflected its inherent randomness. Chaos theory is to point to describe the nonlinear dynamic behavior with chaos theory, the system of basic concept, methods, it dynamics system complex behavior understanding for his own with the outside world and for material, energy and information exchange process of the internal structure of behavior, not foreign and accidental behavior, chaos is a stationary. Chaotic dynamics system of stationary including: still, stable quantity, the periodicity, with sex and chaos of accuratesolution... Chaos rail line is overall stability and local unstable combination of results, call it strange attractor.A strange attractor has the following features: (1) some strange attractor is a attractor, but it is not a fixed point, also not periodic solution; (2) strange attractor is indivisible, and that is not divided into two and two or more to attract children. (3) it to the initial value is very sensitive, different initial value can lead to very different behavior.superiorityThe artificial neural network of characteristics and advantages, mainly in three aspects: first, self-learning. For example, only to realize image recognition that the many different image model and the corresponding should be the result of identification input artificial neural network, the network will through the self-learning function, slowly to learn to distinguish similar images. The self-learning function for the forecast has special meaning. The prospect of artificial neural network computer will provide mankind economic forecasts, market forecast, benefit forecast, the application outlook is very great. 
The second, with lenovo storage function. With the artificial neural network of feedback network can implement this association. Third, with high-speed looking for the optimal solution ability. Looking for a complex problem of the optimal solution, often require a lot of calculation, the use of a problem in some of the design of feedback type and artificial neural network, use the computer high-speed operation ability, may soon find the optimal solution.Research directionThe research into the neural network can be divided into the theory research and application of the two aspects of research.Theory study can be divided into the following two categories: 1, neural physiological and cognitive science research on human thinking and intelligent mechanism.2, by using the neural basis theory of research results, with mathematical method to explore more functional perfect, performance more superior neural network model, the thorough research network algorithm and performance, such as: stability and convergence, fault tolerance, robustness, etc.; The development of new network mathematical theory, such as: neural network dynamics, nonlinear neural field, etc.Application study can be divided into the following two categories:1, neural network software simulation and hardware realization of research.2, the neural network in various applications in the field of research. These areas include: pattern recognition, signal processing, knowledge engineering, expert system, optimize the combination, robot control, etc. 
Along with the neural network theory itself and related theory, related to the development of technology, the application of neural network will further.Development trend and research hot spotArtificial neural network characteristic of nonlinear adaptive information processing power, overcome traditional artificial intelligence method for intuitive, such as mode, speech recognition, unstructured information processing of the defects in the nerve of expert system, pattern recognition and intelligent control, combinatorial optimization, and forecast areas to be successful application. Artificial neural network and other traditional method unifies, will promote the artificial intelligence and information processing technology development. In recentyears, the artificial neural network is on the path of human cognitive simulation further development, and fuzzy system, genetic algorithm, evolution mechanism combined to form a computational intelligence, artificial intelligence is an important direction in practical application, will be developed. Information geometry will used in artificial neural network of research, to the study of the theory of the artificial neural network opens a new way. The development of the study neural computers soon, existing product to enter the market. With electronics neural computers for the development of artificial neural network to provide good conditions.Neural network in many fields has got a very good application, but the need to research is a lot. Among them, are distributed storage, parallel processing, since learning, the organization and nonlinear mapping the advantages of neural network and other technology and the integration of it follows that the hybrid method and hybrid systems, has become a hotspot. Since the other way have their respective advantages, so will the neural network with other method, and the combination of strong points, and then can get better application effect. 
At present this in a neural network and fuzzy logic, expert system, genetic algorithm, wavelet analysis, chaos, the rough set theory, fractal theory, theory of evidence and grey system and fusion.汉语翻译人工神经网络(ArtificialNeuralNetworks,简写为ANNs)也简称为神经网络(NNs)或称作连接模型(ConnectionistModel),它是一种模范动物神经网络行为特征,进行分布式并行信息处理的算法数学模型。
网络设计与规划中英文对照外文翻译文献
网络设计与规划中英文对照外文翻译文献现代企业面临的挑战尽管企业进行了大量的IT资本投资,但许多公司发现,大部分关键网络资源和信息资产仍处于自由状态。
实际上,许多"孤立"的应用程序和数据库无法相互通信,这是一种常见的商业现象。
2. The Solution: Service-Oriented Network Architecture (SONA)

___'s Service-Oriented Network Architecture (SONA) is a comprehensive framework that helps enterprises overcome the challenges of network design and planning. SONA is based on a service-oriented architecture (SOA) approach, which enables enterprises to integrate disparate applications and databases into a unified network.

解决方案：面向服务的网络架构（SONA）

___的面向服务的网络架构（SONA）是一个全面的框架，帮助企业克服网络设计和规划的挑战。SONA基于面向服务的架构（SOA）方法，使企业能够将不同的应用程序和数据库集成到一个统一的网络中。

3. The Benefits of SONA

By implementing SONA, businesses can realize a number of benefits, including improved network agility, increased security, and reduced costs. SONA enables businesses to adapt quickly to changing business requirements by providing a flexible and scalable network architecture. In addition, SONA offers enhanced security features, such as identity and access management, to protect critical information assets. Finally, SONA helps businesses reduce costs by simplifying network management and reducing the need for additional hardware and software.

SONA的好处

通过实施SONA，企业可以获得许多好处，包括提高网络敏捷性、增加安全性和降低成本。SONA通过提供灵活和可扩展的网络架构，使企业能够快速适应不断变化的业务需求。此外，SONA提供了增强的安全功能，如身份和访问管理，以保护关键信息资产。最后，SONA通过简化网络管理和减少对额外硬件和软件的需求，帮助企业降低成本。

4. Conclusion

In today's fast-paced business environment, it is essential for businesses to have a network infrastructure that can adapt quickly to changing business needs, with a flexible, secure, and cost-effective network architecture.

结论

在今天快节奏的商业环境中，企业必须拥有一个可以快速适应不断变化的业务需求的网络基础设施。
神经网络概论外文文献翻译中英文
外文文献翻译（含：英文原文及中文译文）

英文原文

Neural Network Introduction

1 Objectives

As you read these words you are using a complex biological neural network. You have a highly interconnected set of some 10^11 neurons to facilitate your reading, breathing, motion and thinking. Each of your biological neurons, a rich assembly of tissue and chemistry, has the complexity, if not the speed, of a microprocessor. Some of your neural structure was with you at birth. Other parts have been established by experience.

Scientists have only just begun to understand how biological neural networks operate. It is generally understood that all biological neural functions, including memory, are stored in the neurons and in the connections between them. Learning is viewed as the establishment of new connections between neurons or the modification of existing connections. This leads to the following question: although we have only a rudimentary understanding of biological neural networks, is it possible to construct a small set of simple artificial "neurons" and perhaps train them to serve a useful function? The answer is "yes." This book, then, is about artificial neural networks.

The neurons that we consider here are not biological. They are extremely simple abstractions of biological neurons, realized as elements in a program or perhaps as circuits made of silicon. Networks of these artificial neurons do not have a fraction of the power of the human brain, but they can be trained to perform useful functions. This book is about such neurons, the networks that contain them and their training.

2 History

The history of artificial neural networks is filled with colorful, creative individuals from many different fields, many of whom struggled for decades to develop concepts that we now take for granted. This history has been documented by various authors. One particularly interesting book is Neurocomputing: Foundations of Research by John Anderson and Edward Rosenfeld.
They have collected and edited a set of some 43 papers of special historical interest. Each paper is preceded by an introduction that puts the paper in historical perspective.

Histories of some of the main neural network contributors are included at the beginning of various chapters throughout this text and will not be repeated here. However, it seems appropriate to give a brief overview, a sample of the major developments.

At least two ingredients are necessary for the advancement of a technology: concept and implementation. First, one must have a concept, a way of thinking about a topic, some view of it that gives a clarity not there before. This may involve a simple idea, or it may be more specific and include a mathematical description. To illustrate this point, consider the history of the heart. It was thought to be, at various times, the center of the soul or a source of heat. In the 17th century medical practitioners finally began to view the heart as a pump, and they designed experiments to study its pumping action. These experiments revolutionized our view of the circulatory system. Without the pump concept, an understanding of the heart was out of grasp.

Concepts and their accompanying mathematics are not sufficient for a technology to mature unless there is some way to implement the system. For instance, the mathematics necessary for the reconstruction of images from computer-aided tomography (CAT) scans was known many years before the availability of high-speed computers and efficient algorithms finally made it practical to implement a useful CAT system.

The history of neural networks has progressed through both conceptual innovations and implementation developments. These advancements, however, seem to have occurred in fits and starts rather than by steady evolution.

Some of the background work for the field of neural networks occurred in the late 19th and early 20th centuries.
This consisted primarily of interdisciplinary work in physics, psychology and neurophysiology by such scientists as Hermann von Helmholtz, Ernst Mach and Ivan Pavlov. This early work emphasized general theories of learning, vision, conditioning, etc., and did not include specific mathematical models of neuron operation.

The modern view of neural networks began in the 1940s with the work of Warren McCulloch and Walter Pitts [McPi43], who showed that networks of artificial neurons could, in principle, compute any arithmetic or logical function. Their work is often acknowledged as the origin of the neural network field.

McCulloch and Pitts were followed by Donald Hebb [Hebb49], who proposed that classical conditioning (as discovered by Pavlov) is present because of the properties of individual neurons. He proposed a mechanism for learning in biological neurons.

The first practical application of artificial neural networks came in the late 1950s, with the invention of the perceptron network and associated learning rule by Frank Rosenblatt [Rose58]. Rosenblatt and his colleagues built a perceptron network and demonstrated its ability to perform pattern recognition. This early success generated a great deal of interest in neural network research. Unfortunately, it was later shown that the basic perceptron network could solve only a limited class of problems. (See Chapter 4 for more on Rosenblatt and the perceptron learning rule.) At about the same time, Bernard Widrow and Ted Hoff [WiHo60] introduced a new learning algorithm and used it to train adaptive linear neural networks, which were similar in structure and capability to Rosenblatt's perceptron. The Widrow-Hoff learning rule is still in use today. (See Chapter 10 for more on Widrow-Hoff learning.) Unfortunately, both Rosenblatt's and Widrow's networks suffered from the same inherent limitations, which were widely publicized in a book by Marvin Minsky and Seymour Papert [MiPa69].
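The perceptron learning rule mentioned above can be sketched in a few lines. The following is a minimal illustration (not from the original text; all names and values are illustrative): a single hard-limit neuron trained on the linearly separable AND function, for which the rule is guaranteed to converge.

```python
# Minimal sketch of Rosenblatt's perceptron learning rule (illustrative only).
# A single neuron: output = 1 if w.x + b > 0, else 0.
# Rule: on error, w <- w + (target - output) * x and b <- b + (target - output).

def train_perceptron(samples, epochs=20):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            output = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            error = target - output
            w[0] += error * x[0]
            w[1] += error * x[1]
            b += error
    return w, b

# The AND function is linearly separable, so the rule converges.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
predictions = [1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0 for x, _ in and_data]
print(predictions)  # matches the AND targets: [0, 0, 0, 1]
```

On a problem that is not linearly separable (such as XOR), the same rule never converges, which is the limitation Minsky and Papert publicized.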
Rosenblatt and Widrow were aware of these limitations and proposed new networks that would overcome them. However, they were not able to successfully modify their learning algorithms to train the more complex networks.

Many people, influenced by Minsky and Papert, believed that further research on neural networks was a dead end. This, combined with the fact that there were no powerful digital computers on which to experiment, caused many researchers to leave the field. For a decade neural network research was largely suspended. Some important work, however, did continue during the 1970s. In 1972 Teuvo Kohonen [Koho72] and James Anderson [Ande72] independently and separately developed new neural networks that could act as memories. Stephen Grossberg [Gros76] was also very active during this period in the investigation of self-organizing networks.

Interest in neural networks had faltered during the late 1960s because of the lack of new ideas and powerful computers with which to experiment. During the 1980s both of these impediments were overcome, and research in neural networks increased dramatically. New personal computers and workstations, which rapidly grew in capability, became widely available. In addition, important new concepts were introduced.

Two new concepts were most responsible for the rebirth of neural networks. The first was the use of statistical mechanics to explain the operation of a certain class of recurrent network, which could be used as an associative memory. This was described in a seminal paper by physicist John Hopfield [Hopf82].

The second key development of the 1980s was the backpropagation algorithm for training multilayer perceptron networks, which was discovered independently by several different researchers. The most influential publication of the backpropagation algorithm was by David Rumelhart and James McClelland [RuMc86]. This algorithm was the answer to the criticisms Minsky and Papert had made in the 1960s.
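The backpropagation idea can be illustrated with a tiny multilayer perceptron. The sketch below is purely illustrative (it is not the book's notation or code): a 2-2-1 sigmoid network trained on XOR by stochastic gradient descent, where the error derivative is propagated from the output layer back through the hidden layer.

```python
import math
import random

# Illustrative backpropagation sketch for a 2-2-1 multilayer perceptron.
random.seed(0)
sig = lambda z: 1.0 / (1.0 + math.exp(-z))

W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]  # hidden weights
b1 = [0.0, 0.0]
W2 = [random.uniform(-1, 1) for _ in range(2)]                      # output weights
b2 = 0.0
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]         # XOR

def loss():
    total = 0.0
    for x, t in data:
        h = [sig(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(2)]
        y = sig(W2[0] * h[0] + W2[1] * h[1] + b2)
        total += (t - y) ** 2
    return total

initial = loss()
lr = 0.5
for _ in range(2000):
    for x, t in data:
        # forward pass
        h = [sig(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(2)]
        y = sig(W2[0] * h[0] + W2[1] * h[1] + b2)
        # backward pass: propagate the error derivative layer by layer
        dy = (y - t) * y * (1 - y)
        for j in range(2):
            dh = dy * W2[j] * h[j] * (1 - h[j])   # hidden-layer delta
            W2[j] -= lr * dy * h[j]
            W1[j][0] -= lr * dh * x[0]
            W1[j][1] -= lr * dh * x[1]
            b1[j] -= lr * dh
        b2 -= lr * dy

final = loss()
print(final < initial)  # gradient descent reduces the squared error
```

The point of the example is only the mechanics of propagating the gradient backwards; a single-layer perceptron cannot learn XOR at all, which is why this algorithm answered the Minsky-Papert criticisms.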
(See Chapters 11 and 12 for a development of the backpropagation algorithm.) These new developments reinvigorated the field of neural networks. In the last ten years, thousands of papers have been written, and neural networks have found many applications. The field is buzzing with new theoretical and practical work. As noted below, it is not clear where all of this will lead us.

The brief historical account given above is not intended to identify all of the major contributors, but is simply to give the reader some feel for how knowledge in the neural network field has progressed. As one might note, the progress has not always been "slow but sure." There have been periods of dramatic progress and periods when relatively little has been accomplished.

Many of the advances in neural networks have had to do with new concepts, such as innovative architectures and training. Just as important has been the availability of powerful new computers on which to test these new concepts.

Well, so much for the history of neural networks to this date. The real question is, "What will happen in the next ten to twenty years?" Will neural networks take a permanent place as a mathematical/engineering tool, or will they fade away as have so many promising technologies? At present, the answer seems to be that neural networks will not only have their day but will have a permanent place, not as a solution to every problem, but as a tool to be used in appropriate situations. In addition, remember that we still know very little about how the brain works. The most important advances in neural networks almost certainly lie in the future.

Although it is difficult to predict the future success of neural networks, the large number and wide variety of applications of this new technology are very encouraging. The next section describes some of these applications.

3 Applications

A recent newspaper article described the use of neural networks in literature research by Aston University.
It stated that "the network can be taught to recognize individual writing styles, and the researchers used it to compare works attributed to Shakespeare and his contemporaries." A popular science television program recently documented the use of neural networks by an Italian research institute to test the purity of olive oil. These examples are indicative of the broad range of applications that can be found for neural networks. The applications are expanding because neural networks are good at solving problems, not just in engineering, science and mathematics, but in medicine, business, finance and literature as well. Their application to a wide variety of problems in many fields makes them very attractive. Also, faster computers and faster algorithms have made it possible to use neural networks to solve complex industrial problems that formerly required too much computation.

The following note and Table of Neural Network Applications are reproduced here from the Neural Network Toolbox for MATLAB with the permission of the MathWorks, Inc.

The 1988 DARPA Neural Network Study [DARP88] lists various neural network applications, beginning with the adaptive channel equalizer in about 1984. This device, which is an outstanding commercial success, is a single-neuron network used in long distance telephone systems to stabilize voice signals. The DARPA report goes on to list other commercial applications, including a small word recognizer, a process monitor, a sonar classifier and a risk analysis system.

Neural networks have been applied in many fields since the DARPA report was written.
A list of some applications mentioned in the literature follows.

Aerospace: High performance aircraft autopilots, flight path simulations, aircraft control systems, autopilot enhancements, aircraft component simulations, aircraft component fault detectors

Automotive: Automobile automatic guidance systems, warranty activity analyzers

Banking: Check and other document readers, credit application evaluators

Defense: Weapon steering, target tracking, object discrimination, facial recognition, new kinds of sensors, sonar, radar and image signal processing including data compression, feature extraction and noise suppression, signal/image identification

Electronics: Code sequence prediction, integrated circuit chip layout, process control, chip failure analysis, machine vision, voice synthesis, nonlinear modeling

Entertainment: Animation, special effects, market forecasting

Financial: Real estate appraisal, loan advisor, mortgage screening, corporate bond rating, credit line use analysis, portfolio trading program, corporate financial analysis, currency price prediction

Insurance: Policy application evaluation, product optimization

Manufacturing: Manufacturing process control, product design and analysis, process and machine diagnosis, real-time particle identification, visual quality inspection systems, beer testing, welding quality analysis, paper quality prediction, computer chip quality analysis, analysis of grinding operations, chemical product design analysis, machine maintenance analysis, project bidding, planning and management, dynamic modeling of chemical process systems

Medical: Breast cancer cell analysis, EEG and ECG analysis, prosthesis design, optimization of transplant times, hospital expense reduction, hospital quality improvement, emergency room test advisement

Oil and Gas: Exploration

Robotics: Trajectory control, forklift robot, manipulator controllers, vision systems

Speech: Speech recognition, speech compression, vowel classification, text to speech synthesis

Securities: Market analysis, automatic bond rating, stock trading advisory systems

Telecommunications: Image and data compression, automated information services, real-time translation of spoken language, customer payment processing systems

Transportation: Truck brake diagnosis systems, vehicle scheduling, routing systems

Conclusion

The number of neural network applications, the money that has been invested in neural network software and hardware, and the depth and breadth of interest in these devices have been growing rapidly.

4 Biological Inspiration

The artificial neural networks discussed in this text are only remotely related to their biological counterparts. In this section we will briefly describe those characteristics of brain function that have inspired the development of artificial neural networks.

The brain consists of a large number (approximately 10^11) of highly connected elements (approximately 10^4 connections per element) called neurons. For our purposes these neurons have three principal components: the dendrites, the cell body and the axon. The dendrites are tree-like receptive networks of nerve fibers that carry electrical signals into the cell body. The cell body effectively sums and thresholds these incoming signals. The axon is a single long fiber that carries the signal from the cell body out to other neurons. The point of contact between an axon of one cell and a dendrite of another cell is called a synapse. It is the arrangement of neurons and the strengths of the individual synapses, determined by a complex chemical process, that establishes the function of the neural network. Some of the neural structure is defined at birth. Other parts are developed through learning, as new connections are made and others waste away. This development is most noticeable in the early stages of life.
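The cell-body behaviour just described, summing weighted incoming signals and thresholding the result, is exactly what the simplest artificial neuron abstracts. A minimal sketch (illustrative names and values, not the book's code):

```python
# A McCulloch-Pitts style artificial neuron (illustrative sketch):
# a weighted sum of inputs (synaptic strengths) passed through a hard threshold.

def neuron(inputs, weights, bias):
    # "cell body": sum the weighted "dendrite" signals
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    # "axon": fire (1) only if the activation exceeds the threshold
    return 1 if activation > 0 else 0

# With weights (1, 1) and bias -0.5 this neuron computes logical OR.
outputs = [neuron((a, b), (1.0, 1.0), -0.5) for a in (0, 1) for b in (0, 1)]
print(outputs)  # [0, 1, 1, 1]
```

Changing the weights and bias changes the function the neuron computes, which is the sense in which synaptic strengths establish the function of the network.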
For example, it has been shown that if a young cat is denied use of one eye during a critical window of time, it will never develop normal vision in that eye.

Neural structures continue to change throughout life. These later changes tend to consist mainly of strengthening or weakening of synaptic junctions. For instance, it is believed that new memories are formed by modification of these synaptic strengths. Thus, the process of learning a new friend's face consists of altering various synapses.

Artificial neural networks do not approach the complexity of the brain. There are, however, two key similarities between biological and artificial neural networks. First, the building blocks of both networks are simple computational devices (although artificial neurons are much simpler than biological neurons) that are highly interconnected. Second, the connections between neurons determine the function of the network. The primary objective of this book will be to determine the appropriate connections to solve particular problems.

It is worth noting that even though biological neurons are very slow when compared to electrical circuits, the brain is able to perform many tasks much faster than any conventional computer. This is in part because of the massively parallel structure of biological neural networks; all of the neurons are operating at the same time. Artificial neural networks share this parallel structure. Even though most artificial neural networks are currently implemented on conventional digital computers, their parallel structure makes them ideally suited to implementation using VLSI, optical devices and parallel processors.

In the following chapter we will introduce our basic artificial neuron and will explain how we can combine such neurons to form networks. This will provide a background for Chapter 3, where we take our first look at neural networks in action.

中文译文

神经网络概述

1. 目的

当你现在看这本书的时候,就正在使用一个复杂的生物神经网络。
人形机器人论文中英文资料对照外文翻译
人形机器人论文中英文资料对照外文翻译| |在获取信息和感觉器官的非结构化动态环境中,后续的决策和对自身不确定性的控制在很大程度上共存。
也可以设想采用软计算方法。
在机器人领域,关键问题之一是从感觉数据中提取有用的知识,并将信息和感觉的不确定性划分为不同的层次。本文提出了一种基于广义融合混合分类(人工神经网络与FFA)的观察模型,该模型已经制定并应用于验证,同时给出了一种从实际硬件机器人生成合成数据的模型。选择这种融合时,主要目标是根据内部(关节传感器)和外部(视觉摄像机)的感觉信息,最小化机器人操作任务的不确定性。目前,一种被广泛有效使用的方法是研究具有5个自由度的实验室机器人和具有模型模拟视觉控制的机械手。
最近研究的处理不确定性的主要方法包括选择加权参数(几何融合),并且指出在标准机械手控制器设计中训练的神经网络是不可用的。
这些方法大大降低了机械手控制的不确定性,在不同层次的混合配置中更快更准确。
这些方法通过了严格的模拟和实验。
关键词:传感器融合、FDD、FFA、人工神经网络、软计算、操纵器、重复性、准确性、协方差矩阵、不确定性、不确定性椭球

1 简介

越来越多的产品出现在各种机器人应用中(工业、军事、科学、医学、社会福利、家庭和娱乐)。
它们的运行范围很广,其中许多在非结构化环境中运行。在大多数情况下,了解环境是如何变化的,以及如何在每一瞬间最佳地控制机器人的动作,是非常重要的。
移动机器人基本上也有能力定位和操作非常大的非结构化动态环境,并处理重大的不确定性。
对于机器人运动的最佳控制来说,了解周围环境在每一瞬间的变化是至关重要的。
移动机器人本质上还必须在非常大的非结构化动态环境中导航和操作,并处理显著的不确定性。
当机器人在自然的不确定环境中工作时,给定工作的完成条件总是存在一定程度的不确定性。
在执行给定的操作时,这些条件有时会发生变化。
导致不确定性的主要原因是机器人运动参数和各种任务定义信息中出现的差异。
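上文提到的加权参数(几何)融合,可以用协方差加权的方式把内部(关节传感器)与外部(视觉摄像机)两路估计结合起来:不确定性越小的传感器权重越大,融合后的不确定性椭球随之收缩。下面给出一个最小示例(仅为示意,数值与变量名均为假设,并非原文实现):

```python
import numpy as np

# Covariance-weighted (geometric) fusion of two sensor estimates of the same
# 2-D quantity: each estimate is weighted by the inverse of its covariance.

def fuse(x1, P1, x2, P2):
    W1, W2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(W1 + W2)          # fused covariance (smaller than either)
    x = P @ (W1 @ x1 + W2 @ x2)         # fused estimate
    return x, P

# hypothetical joint-sensor estimate (more certain) and camera estimate
x_joint = np.array([1.0, 2.0]); P_joint = np.diag([0.04, 0.04])
x_cam   = np.array([1.2, 1.8]); P_cam   = np.diag([0.16, 0.16])

x_f, P_f = fuse(x_joint, P_joint, x_cam, P_cam)
print(x_f)   # lies between the two estimates, closer to the more certain one
print(np.trace(P_f) < min(np.trace(P_joint), np.trace(P_cam)))  # uncertainty shrinks
```

这正是"最小化机器人操作不确定性"的基本机制:融合后的协方差矩阵(不确定性椭球)严格小于任一单独传感器的协方差。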
神经网络和遗传算法在模糊系统设计中的自动化优化方法研究
神经网络和遗传算法在模糊系统设计中的
自动化优化方法研究
本文研究了神经网络和遗传算法在模糊系统设计中的自动化优化方法。
模糊系统是一种有效的工程工具,它对于可以用模糊语言描述而难以用精确语言描述的问题有着很好的应用效果。
然而,由于模糊系统参数的确定问题,使得模糊系统的设计变得十分复杂。
因此,本文提出了基于神经网络和遗传算法的自动化优化方法以简化设计流程。
神经网络是一种模拟人类神经系统的计算模型。
利用神经网络可以进行非线性的函数逼近和模式分类。
神经网络的主要结构包括输入层、隐含层和输出层。
本文采用BP神经网络进行优化,将模糊系统参数设置为BP神经网络的输入,将模糊系统的表现作为输出,利用反向传播算法来实现模糊系统参数的自动化调整。
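上述"用反向传播(梯度下降)自动调整模糊系统参数"的思路,可以用一个极简例子来演示。以下代码仅为示意(隶属函数、目标函数与学习率均为假设,并非原文系统):固定两个高斯隶属函数作为前件,用梯度下降学习每条规则的后件常数,使模糊系统输出逼近目标数据。

```python
import math

# Gradient-descent (the core of BP) tuning of a minimal TSK fuzzy system:
# antecedents are two fixed Gaussian membership functions; the rule
# consequents (constants) are learned to fit target data y = 2x + 1.
gauss = lambda x, c, s: math.exp(-((x - c) / s) ** 2)

centers, sigma = [0.0, 1.0], 0.5          # fixed antecedent parameters
theta = [0.0, 0.0]                        # consequent constants to learn
data = [(i / 10.0, 2.0 * (i / 10.0) + 1.0) for i in range(11)]

def predict(x):
    w = [gauss(x, c, sigma) for c in centers]
    return sum(wi * ti for wi, ti in zip(w, theta)) / sum(w)

def sse():
    return sum((predict(x) - y) ** 2 for x, y in data)

before = sse()
lr = 0.1
for _ in range(500):
    for x, y in data:
        w = [gauss(x, c, sigma) for c in centers]
        s = sum(w)
        err = predict(x) - y
        for i in range(2):                # gradient step on each consequent
            theta[i] -= lr * err * w[i] / s

after = sse()
print(after < before)  # 误差随训练下降
```

实际系统中,隶属函数的中心与宽度同样可以作为可微参数一起用反向传播调整,原理与此相同。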
遗传算法是一种群体智能算法,它通过模拟生物进化的过程来搜索最优解。
通过遗传算法,可以在种群中不断地迭代搜索,通过选择、交叉、变异等基本操作,逐步逼近最优解。
本文将模糊控制
器的参数作为遗传算法的优化目标,将模糊系统的表现作为适应度
函数,通过遗传算法来实现模糊控制器参数的自动化调整。
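上述选择、交叉、变异的迭代过程可以用一个极简遗传算法来演示。以下代码仅为示意(适应度函数是假设的性能指标,并非原文的模糊控制器模型):目标是找到使 (a-3)^2 + (b+1)^2 最小的一组参数 (a, b),作为"模糊控制器参数寻优"的替身。

```python
import random

# Minimal genetic algorithm sketch: selection, crossover, mutation.
# The fitness function here is a hypothetical stand-in performance index.
random.seed(1)

def fitness(ind):
    a, b = ind
    return -((a - 3.0) ** 2 + (b + 1.0) ** 2)    # higher is better

pop = [[random.uniform(-10, 10), random.uniform(-10, 10)] for _ in range(30)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                           # selection: keep the fittest
    children = []
    while len(children) < 20:
        p1, p2 = random.sample(parents, 2)
        alpha = random.random()                  # crossover: convex combination
        child = [alpha * p1[i] + (1 - alpha) * p2[i] for i in range(2)]
        if random.random() < 0.2:                # mutation: small random nudge
            child[random.randrange(2)] += random.gauss(0, 0.5)
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print(best)  # 应接近最优参数 (3, -1)
```

把 fitness 换成"模糊控制器在仿真中的控制性能",即得到文中所述的自动调参流程。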
通过对比实验,本文证明了基于神经网络和遗传算法的自动化
优化方法相比于一般方法能够更好地提高模糊系统的性能。
优化后
的模糊系统参数能够更好地适应不同的控制需求,并且具有更高的
控制精度和鲁棒性。
因此,本文的研究成果在模糊控制领域具有很
好的应用前景,对于推动模糊控制技术的发展具有一定的指导意义。
模糊神经网络外文翻译文献
模糊神经网络外文翻译文献(文档含中英文对照即英文原文和中文翻译)

原文:

Neuro-fuzzy generalized predictive control of boiler steam temperature

Xiangjie LIU, Jizhen LIU, Ping GUAN

ABSTRACT

Power plants are nonlinear and uncertain complex systems. Reliable control of superheated steam temperature is necessary to ensure high efficiency and high load-following capability in the operation of modern power plants. A nonlinear generalized predictive controller based on a neuro-fuzzy network (NFGPC) is proposed in this paper. The proposed nonlinear controller is applied to control the superheated steam temperature of a 200MW power plant. Experiments on the plant and simulations of the plant show much better performance than the traditional controller.

Keywords: Neuro-fuzzy networks; Generalized predictive control; Superheated steam temperature

1. Introduction

Continuous processes in power plants and power stations are complex systems characterized by nonlinearity, uncertainty and load disturbance. The superheater is an important part of the steam generation process in the boiler-turbine system, where steam is superheated before entering the turbine that drives the generator. Controlling superheated steam temperature is not only technically challenging, but also economically important.

From Fig.1, the steam generated from the boiler drum passes through the low-temperature superheater before it enters the radiant-type platen superheater. Water is sprayed onto the steam to control the superheated steam temperature in both the low and high temperature superheaters. Proper control of the superheated steam temperature is extremely important to ensure the overall efficiency and safety of the power plant. It is undesirable that the steam temperature is too high, as it can damage the superheater and the high pressure turbine, or too low, as it will lower the efficiency of the power plant.
It is also important to reduce the temperature fluctuations inside the superheater, as it helps to minimize mechanical stress that causes micro-cracks in the unit, in order to prolong the life of the unit and to reduce maintenance costs. As the GPC is derived by minimizing these fluctuations, it is amongst the controllers that are most suitable for achieving this goal.

The multivariable multi-step adaptive regulator has been applied to control the superheated steam temperature in a 150 t/h boiler, and generalized predictive control was proposed to control the steam temperature. A nonlinear long-range predictive controller based on neural networks was developed to control the main steam temperature and pressure, and the reheated steam temperature at several operating levels. The control of the main steam pressure and temperature based on a nonlinear model that consists of nonlinear static constants and linear dynamics is presented in that work.

Fig.1 The boiler and superheater steam generation process

Fuzzy logic is capable of incorporating human experiences via the fuzzy rules. Nevertheless, the design of fuzzy logic controllers is somewhat time consuming, as the fuzzy rules are often obtained by trials and errors. In contrast, neural networks not only have the ability to approximate nonlinear functions with arbitrary accuracy, they can also be trained from experimental data. The neuro-fuzzy networks developed recently have the advantages of model transparency of fuzzy logic and learning capability of neural networks. NFNs have been used to develop self-tuning control, and are therefore a useful tool for developing nonlinear predictive control. Since an NFN can be considered as a network that consists of several local regions, each of which contains a local linear model, nonlinear predictive control based on the NFN can be devised with the network incorporating all the local generalized predictive controllers (GPC) designed using the respective local linear models.
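Since each local model is linear, the corresponding local GPC reduces to a quadratic minimization: the control increments minimize J = sum (y_hat - w)^2 + lambda * sum (du)^2, giving du = (G'G + lambda*I)^-1 G'(w - f), where G is the step-response (dynamic) matrix and f the free response. The following is an illustrative sketch only (a hypothetical first-order plant and horizons, not the paper's superheater model):

```python
import numpy as np

# Generalized predictive control sketch for a first-order plant
# y(t+1) = a*y(t) + b*u(t)  (illustrative model, not the paper's plant).

a_p, b_p = 0.9, 0.1
N, lam = 10, 0.01            # prediction horizon and control weighting

# step-response coefficients s_k and the lower-triangular dynamic matrix G
s = np.array([b_p * (1 - a_p**k) / (1 - a_p) for k in range(1, N + 1)])
G = np.zeros((N, N))
for i in range(N):
    G[i, : i + 1] = s[i::-1]

y, u = 0.0, 0.0
w = np.ones(N)               # constant setpoint of 1
for _ in range(50):          # receding-horizon loop
    # free response: future outputs if the input were held constant
    f = np.array([a_p**k * y + s[k - 1] * u for k in range(1, N + 1)])
    du = np.linalg.solve(G.T @ G + lam * np.eye(N), G.T @ (w - f))
    u += du[0]               # apply only the first increment
    y = a_p * y + b_p * u    # simulate the plant one step

print(abs(y - 1.0) < 1e-3)   # the output settles at the setpoint
```

The increment formulation gives the controller integral action, which is why the output reaches the setpoint with zero steady-state error; the NFGPC blends several such local controllers through the network's membership functions.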
Following this approach, the nonlinear generalized predictive controllers based on the NFN, or simply, the neuro-fuzzy generalized predictive controllers (NFGPCs), are derived here. The proposed controller is then applied to control the superheated steam temperature of the 200MW power unit. Experimental data obtained from the plant are used to train the NFN model, from which the local GPCs that form part of the NFGPC are then designed. The proposed controller is tested first on the simulation of the process, before applying it to control the power plant.

2. Neuro-fuzzy network modelling

Consider the following general single-input single-output nonlinear dynamic system:

y(t) = f[y(t-1), \ldots, y(t-n'_y), u(t-d), \ldots, u(t-d-n'_u), e(t-1), \ldots, e(t-n'_e)] + e(t)/\Delta   (1)

where f[.] is a smooth nonlinear function such that a Taylor series expansion exists, e(t) is a zero mean white noise and \Delta is the differencing operator; n'_y, n'_u, n'_e and d are respectively the known orders and time delay of the system. Let the local linear model of the nonlinear system (1) at the operating point o(t) be given by the following Controlled Auto-Regressive Integrated Moving Average (CARIMA) model:

A(z^{-1}) y(t) = z^{-d} B(z^{-1}) \Delta u(t) + C(z^{-1}) e(t)   (2)

where A(z^{-1}), B(z^{-1}) and C(z^{-1}) are polynomials in z^{-1}, the backward shift operator. Note that the coefficients of these polynomials are a function of the operating point o(t). The nonlinear system (1) is partitioned into several operating regions, such that each region can be approximated by a local linear model. Since NFNs are a class of associative memory networks with knowledge stored locally, they can be applied to model this class of nonlinear systems. A schematic diagram of the NFN is shown in Fig.2.

B-spline functions are used as the membership functions in the NFN for the following reasons. First, B-spline functions can be readily specified by the order of the basis function and the number of inner knots.
Second, they are defined on a bounded support, and the output of the basis function is always positive, i.e., \mu_{j,k}(x) = 0 for x \notin [\lambda_{j-k}, \lambda_j] and \mu_{j,k}(x) > 0 for x \in (\lambda_{j-k}, \lambda_j). Third, the basis functions form a partition of unity, i.e.,

\sum_j \mu_{j,k}(x) \equiv 1, \quad x \in [x_{\min}, x_{\max}]   (3)

And fourth, the output of the basis functions can be obtained by a recurrence equation.

Fig. 2 neuro-fuzzy network

The membership functions of the fuzzy variables derived from the fuzzy rules can be obtained by the tensor product of the univariate basis functions. As an example, consider the NFN shown in Fig.2, which consists of the following fuzzy rules:

IF operating condition i (x_1 is positive small, ..., and x_n is negative large), THEN the output is given by the local CARIMA model i:

\hat{y}_i(t) = a_1^i \hat{y}_i(t-1) + \cdots + a_{n_a}^i \hat{y}_i(t-n_a) + b_0^i \Delta u_i(t-d) + \cdots + b_{n_b}^i \Delta u_i(t-d-n_b) + e_i(t) + c_1^i e_i(t-1) + \cdots + c_{n_c}^i e_i(t-n_c)   (4)

or

A_i(z^{-1}) \hat{y}_i(t) = z^{-d} B_i(z^{-1}) \Delta u_i(t) + C_i(z^{-1}) e_i(t)   (5)

where A_i(z^{-1}), B_i(z^{-1}) and C_i(z^{-1}) are polynomials in the backward shift operator z^{-1}, d is the dead time of the plant, u_i(t) is the control, and e_i(t) is a zero mean independent random variable with a variance of \delta^2. The multivariate basis function a_i(x) is obtained by the tensor product of the univariate basis functions,

a_i = \prod_{k=1}^{n} \mu_{A_k^i}(x_k), \quad i = 1, 2, \ldots, p   (6)

where n is the dimension of the input vector x, and p, the total number of weights in the NFN, is given by

p = \prod_{i=1}^{n} (R_i + k_i)   (7)

where k_i and R_i are the order of the basis function and the number of inner knots respectively. The properties of the univariate B-spline basis functions described previously also apply to the multivariate basis function, which is defined on the hyper-rectangles. The output of the NFN is

\hat{y} = \frac{\sum_{i=1}^{p} \hat{y}_i a_i}{\sum_{i=1}^{p} a_i} = \sum_{i=1}^{p} \hat{y}_i a_i   (8)

译文:

锅炉蒸汽温度模糊神经网络的广义预测控制

Xiangjie LIU, Jizhen LIU, Ping GUAN

摘要

发电厂是非线性和不确定性的复杂系统。
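The recurrence equation referred to in the fourth B-spline property above is the Cox-de Boor recursion: order-1 basis functions are indicators on the knot intervals, and higher orders are blended from two lower-order neighbours. A minimal sketch (illustrative knot vector and order, not the paper's network), which also checks the partition-of-unity property:

```python
# Cox-de Boor recurrence for B-spline basis functions (illustrative sketch).

def bspline(j, k, knots, x):
    """Order-k B-spline basis function on the given knot vector."""
    if k == 1:
        return 1.0 if knots[j] <= x < knots[j + 1] else 0.0
    left = (x - knots[j]) / (knots[j + k - 1] - knots[j])
    right = (knots[j + k] - x) / (knots[j + k] - knots[j + 1])
    return left * bspline(j, k - 1, knots, x) + right * bspline(j + 1, k - 1, knots, x)

# uniform knots; quadratic (order-3) basis functions evaluated at one point
knots = [0, 1, 2, 3, 4, 5, 6]
x = 2.5
values = [bspline(j, 3, knots, x) for j in range(len(knots) - 3)]
print(sum(values))  # partition of unity: the basis values sum to 1.0
```

Bounded support, positivity and the partition of unity are exactly the properties (2) and (3) above, which make these basis functions suitable as fuzzy membership functions.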
计算机网络技术中英文对照外文翻译文献
中英文资料外文翻译

网站建设技术

1. 介绍

网络技术的发展,为今天全球性的信息交流与资源共享和交往提供了更多的途径和可能。
足不出户便可以知晓天下大事,按几下键盘或点几下鼠标可以与远在千里之外的朋友交流,网上通信、网上浏览、网上交互、网上电子商务已成为现代人们生活的一部分。
Internet 时代, 造就了人们新的工作和生活方式,其互联性、开放性和共享信息的模式,打破了传统信息传播方式的重重壁垒,为人们带来了新的机遇。
随着计算机和信息时代的到来,人类社会前进的脚步在逐渐加快。
近几年网页设计发展,快得人目不暇接。
随着网页设计技术的发展,丰富多彩的网页成为网上一道亮丽的风景线。
要想设计美观实用的网页就应该深入掌握网站建设技术。
在建立网站时,我们分析了网站建立的目的、内容、功能、结构,应用了更多的网页设计技术。
2、网站的定义2.1 如何定义网站确定网站的任务和目标,是建设网站所面临的最重要的问题。
为什么人们会来到你的网站? 你有独特的服务吗? 人们第一次到你的网站是为了什么? 他们还会再来吗? 这些问题都是定义网站时必须考虑的问题。
要定义网站,首先,必须对整个网站有一个清晰认识,弄清到底要设计什么、主要的目的与任务、如何对任务进行组织与规划。
其次,保持网站的高品质。
在众多网站的激烈竞争中,高品质的产品是长期竞争的最大优势。
一个优秀的网站应具备:(1)用户访问网站的速度要快;(2)注意反馈与更新。
及时更新网站内容、及时反馈用户的要求;(3)首页设计要合理。
首页给访问者留下的第一印象很重要,设计务必精美,以求产生良好的视觉效果。
2.2 网站的内容和功能 在网站的内容方面,就是要做到新、快、全三个方面。
网站内容的类型包括静态的、动态的、功能的和事物处理的。
确定网站的内容是根据网站的性质决定的,在设计政府网站、商业网站、科普性网站、公司介绍网站、教学交流网站等的内容和风格时各有不同。
我们建立的网站同这些类型的网站性质均不相同。
英文文献加翻译(基于神经网络和遗传算法的模糊系统的自动设计)
附录1基于神经网络和遗传算法的模糊系统的自动设计摘要本文介绍了基于神经网络和遗传算法的模糊系统的设计,其目的在于缩短开发时间并提高该系统的性能。
介绍一种利用神经网络来描绘的多维非线性隶属函数和调整隶属函数参数的方法。
还提及了基于遗传算法的集成并自动化三个模糊系统的设计平台。
1 前言模糊系统往往是人工手动设计。
这引起了两个问题:一是由于人工手动设计是费时间的,所以开发费用很高;二是无法保证获得最佳的解决方案。
为了缩短开发时间并提高模糊系统的性能,有两种独立的途径:开发支持工具和自动设计方法。
前者包括辅助模糊系统设计的开发环境。
许多环境已具有商业用途。
后者介绍了自动设计的技术。
尽管自动设计不能保证获得最优解,它们仍比手工技巧可取,因为设计会依照某些标准被引导走向最优解。
模糊控制系统设计中有三种主要的设计决策:(1)确定模糊规则数;(2)确定隶属度函数的形式;
(3)确定变化参数。再者,必须作出另外两个决定:(4)确定输入变量的数量;(5)确定推理方法。(1)和(2)相互协调,确定如何覆盖输入空间。
他们之间有高度的相互依赖性。
(3)用以确定TSK(Takagi-Sugeno-Kang)模式【1】中的线性方程式的系数,或确定隶属度函数以及部分的Mamdani模型【2】。
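上文提到的 TSK(Takagi-Sugeno-Kang)模型,其后件是线性方程,推理输出就是各规则触发强度对后件的加权平均。下面是一个极简示意(规则、隶属函数与参数均为假设,并非文中系统):

```python
# Minimal TSK (Takagi-Sugeno-Kang) fuzzy inference sketch:
# rule consequents are linear equations; the output is the firing-strength
# weighted average of the consequents (all rules/parameters hypothetical).

def tri(x, a, b, c):
    # triangular membership function with corners a, b, c
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def tsk(x):
    # Rule 1: IF x is "low"  THEN y = 2x + 1
    # Rule 2: IF x is "high" THEN y = -x + 10
    w1 = tri(x, -1.0, 0.0, 1.0)
    w2 = tri(x, 0.0, 1.0, 2.0)
    y1 = 2.0 * x + 1.0
    y2 = -x + 10.0
    return (w1 * y1 + w2 * y2) / (w1 + w2)

print(tsk(0.5))  # 两条规则各触发 0.5,输出为两后件值的平均
```

设计决策(2)、(3)在这个例子里分别对应三角隶属函数的形状参数和后件线性方程的系数;与之相对,Mamdani 模型的后件是隶属函数而非线性方程,还需要额外的解模糊化步骤。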
(4)相当于确定计算目标决策值或控制值所需的最小相关输入变量集。
像逆向消除(4)和信息标准的技术在此设计中经常被利用。
(5)相当于决定使用哪一个模糊算子和解模糊化的方法。
虽然数种模糊推理的算法和方法已被提出,仍没有选择它们的标准。
[5]表明,根据推理环境动态改变推理方法,其结果在性能和容错性上高于任何固定的推理方法。
神经网络(最常见的是基于梯度的方法)和遗传算法被用于模糊系统的自动设计。
基于神经网络的方法主要是用来设计模糊隶属度函数。
这有两种主要的方法;(一)直接的多维的模糊隶属度函数的设计:该方法首先通过数据库确定规则的数目。
网络设计与规划中英文对照外文翻译文献
网络设计与规划中英文对照外文翻译文献(文档含英文原文和中文翻译)Service-Oriented Network Architecture (SONA)1.T he challenges facing businessesAlthough a large number of IT capital investment, but many companies have found that most of the critical network resources and information assets remain in the free state. In fact, can not have hundreds of "orphaned" applications and databases communicate with each other is a common business phenomenon.This is partly due to growing internal and external customers, but due to unpredictable demand. Many companies have been forced to rapidly deploy new technologies, often leading to the deployment of a plurality of discrete systems, and thus can not effectively share information across the organization. For example, if you do not create the applications and information together various overlapping networks, sales, customer service or purchasing department will not be able to easily access customer records. Many companies have found that the blind expansion brought them multiple underutilized and irreconcilable separation systems and resources. These disparate systems while also difficult to manage and expensive to administer.2. Intelligent Information Network - The Cisco AdvantageCisco Systems, Inc. ® With the Intelligent Information Network (IIN) program, is helping global IT organizations solve these problems and meet new challenges, such as the deployment of service-oriented architecture, Web services and virtualization. IIN elaborated network in terms of promoting the development of integrated hardware and software, which will enable organizations to better align IT resources with business priorities. By intelligent built into the existing network infrastructure, IIN will help organizations achieve lower infrastructure complexity and cost advantages.3. Power NetworksInnovative IT environment focused on by traditional server-based system to distributenew business applications. 
However, the network remains the platform that provides transparent connectivity and supports all IT infrastructure components. With the Cisco® Service-Oriented Network Architecture (SONA), enterprises can optimize applications, processes and resources to achieve greater business benefits. By providing better network capabilities and intelligence, companies can improve the efficiency of network-related activities and free more funds for new strategic investments and innovation.

Standardization reduces the amount of assets needed to support the same operations, thereby improving asset efficiency. Virtualization optimizes the use of assets: physical resources can be divided logically for use by dispersed departments. Improving the efficiency of the entire network enhances flexibility and scalability, which in turn has a huge impact on business development, customer loyalty and profits, thereby enhancing competitive advantage.

4. Use architecture to succeed

The Cisco SONA framework illustrates how companies should evolve toward the intelligent information network to accelerate applications, business processes and resources, and to let IT provide enterprises with better service.

Cisco SONA draws on the industry solutions, services and experience of Cisco and Cisco partners to provide proven, scalable business solutions.

The Cisco SONA framework illustrates how to build an integrated system on a fully integrated intelligent network, in order to greatly improve flexibility and efficiency.

Enterprises can deploy this integrated intelligence across the entire network, including data centers, branch offices and campus environments.

4-1 Cisco Service-Oriented Network Architecture
Application layer: business applications, collaborative applications
Interactive services layer: application networking services, adaptive management services, infrastructure services, network infrastructure virtualization
Network infrastructure layer: campus, branch office, data center, WAN/MAN, teleworkers; client, server,
storage (the Intelligent Information Network)

5. Three levels of Cisco SONA

The network infrastructure layer, where all IT resources interoperate on a converged network platform.
The interactive services layer, which uses the network infrastructure to allocate resources efficiently to applications and business processes.
The application layer, which contains business applications and collaboration applications and takes advantage of the efficiency of the interactive services.

At the network infrastructure layer, Cisco's proven enterprise architectures provide comprehensive design guides that offer complete, integrated end-system design guidelines for your entire network.

At the interactive services layer, Cisco integrates a full set of intelligent services to optimize the delivery of business and collaboration applications, thereby providing more predictable, more reliable performance while reducing operating costs.

At the application layer, through deep integration with the network fabric, Cisco application networking solutions deliver applications without requiring client installation or application changes, while maintaining application visibility and security.

6. Build business advantage with Cisco SONA

A simpler, more flexible, integrated infrastructure provides greater flexibility and adaptability, and thus higher business returns at lower cost. With Cisco SONA, you will be able to improve overall IT efficiency and utilization, thereby enhancing the effectiveness of IT, which we call the network multiplier effect.

7. The network multiplier effect

The network multiplier effect refers to the way the network helps enterprises increase the contribution of IT across the enterprise through Cisco SONA.
Optimal efficiency and utilization of IT resources produce a higher business impact at lower cost, so that your network becomes a value-adding, profitable resource.

The network multiplier effect is calculated as follows:

Efficiency = IT asset cost ÷ (IT asset cost + operating cost)
Utilization = assets used ÷ total assets (such as the percentage of available storage being used)
Effectiveness = Efficiency × Utilization
Network multiplier effect = asset effectiveness when using Cisco SONA ÷ asset effectiveness when not using Cisco SONA

8. Investment returns

The Cisco advantage of intelligent systems in Cisco SONA is not only improved efficiency and reduced cost. Through Cisco SONA, the power of your network can deliver:

Increased revenue and opportunity
Improved customer relations
Improved business resiliency and flexibility
Increased productivity and efficiency, and reduced costs

9. Real-Time Development

By evolving toward a more intelligent integrated network with Cisco SONA, enterprises can proceed in phases: integration, standardization, virtualization and automation. Working with a Cisco channel partner or customer team, you can use the Cisco SONA framework to develop a blueprint for the evolution of the enterprise. With rich experience in Cisco Lifecycle Management Services, a leading position in standardization, mature enterprise architectures and targeted industry solutions, the Cisco account team can help you meet business requirements in real time.

10. The development of the Intelligent Information Network

The role of the network is evolving. Tomorrow's intelligent network will provide more than basic connectivity, bandwidth and application access; it will provide end-to-end functionality and centralized control, to achieve true enterprise transparency and flexibility.

Cisco SONA enables enterprises to extend their existing infrastructure and evolve toward the intelligent network to accelerate applications and improve business processes. Cisco provides design, support and financing services to maximize your return on investment.
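The formulas above can be combined into a small calculator. The numbers below are hypothetical, purely to illustrate how standardization (lower operating cost) and virtualization (higher utilization) compound into the multiplier:

```python
# Calculator for the network multiplier effect described above
# (all cost and utilization figures are hypothetical, for illustration only).

def effectiveness(asset_cost, operating_cost, assets_used, total_assets):
    efficiency = asset_cost / (asset_cost + operating_cost)
    utilization = assets_used / total_assets
    return efficiency * utilization

# before SONA: high operating overhead, half the storage sitting idle
before = effectiveness(asset_cost=100, operating_cost=150,
                       assets_used=50, total_assets=100)
# after SONA: standardization cuts operating cost, virtualization lifts utilization
after = effectiveness(asset_cost=100, operating_cost=100,
                      assets_used=80, total_assets=100)

multiplier = after / before
print(multiplier)  # 2.0: asset effectiveness doubles in this example
```

Because the two factors multiply, even modest improvements in each compound into a larger overall effect.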
Cisco provides design, support and financing services to maximize your return on investment.服务导向网络架构(SONA)1.企业面临的挑战尽管投入大量IT资金,但许多企业发现大多数的关键网络资源和信息资产仍处于游离状态。
Graduation thesis foreign-literature translation: Intelligent Traffic Light Control (Chinese-English comparison)
英语原文Intelligent Traffic Light Controlby Marco Wiering The topic I picked for our community project was traffic lights. In a community, people need stop signs and traffic lights to slow down drivers from going too fast. If there were no traffic lights or stop signs, people’s lives would be in danger from drivers going too fast.The urban traffic trends towards the saturation, the rate of increase of the road of big city far lags behind rate of increase of the car.The urban passenger traffic has already become the main part of city traffic day by day and it has used about 80% of the area of road of center district. With the increase of population and industry activity, people's traffic is more and more frequent, which is unavoidable. What means of transportation people adopt produces pressure completely different to city traffic. According to calculating, if it is 1 to adopt the area of road that the public transport needs, bike needs 5-7, car needs 15-25, even to walk is 3 times more than to take public transits. So only by building road can't solve the city traffic problem finally yet. Every large city of the world increases the traffic policy to the first place of the question.For example,according to calculating, when the automobile owning amount of Shanghai reaches 800,000 (outside cars count separately ), if it distributes still as now for example: center district accounts for great proportion, even when several loop-lines and arterial highways have been built up , the traffic cannot be improved more than before and the situation might be even worse. So the traffic policy Shanghai must adopt , or called traffic strategy is that have priority to develop public passenger traffic of city, narrow the scope of using of the bicycle progressively , control the scale of growth of the car traffic in the center district, limit the development of the motorcycle strictly.There are more municipals project under construction in big city. 
the influence on the traffic is greater.Municipal infrastructure construction is originally a good thing of alleviating the traffic, but in the course of constructing, it unavoidably influence the local traffic. Some road sections are blocked, some change into an one-way lane, thus the vehicle can only take a devious route . The construction makes the road very narrow, forming the bottleneck, which seriously influence the car flow.When having stop signs and traffic lights, people have a tendency to drive slower andlook out for people walking in the middle of streets. To put a traffic light or a stop sign in a community, it takes a lot of work and planning from the community and the city to put one in. It is not cheap to do it either. The community first needs to take a petition around to everyone in the community and have them sign so they can take it to the board when the next city council meeting is. A couple residents will present it to the board, and they will decide weather or not to put it in or not. If not put in a lot of residents might be mad and bad things could happened to that part of the city.When the planning of putting traffic lights and stop signs, you should look at the subdivision plan and figure out where all the buildings and schools are for the protection of students walking and riding home from school. In our plan that we have made, we will need traffic lights next to the school, so people will look out for the students going home. We will need a stop sign next to the park incase kids run out in the street. This will help the protection of the kids having fun. Will need a traffic light separating the mall and the store. This will be the busiest part of the town with people going to the mall and the store. And finally there will need to be a stop sign at the end of the streets so people don’t drive too fast and get in a big accident. 
If this is down everyone will be safe driving, walking, or riding their bikes.In putting in a traffic light, it takes a lot of planning and money to complete it. A traffic light cost around $40,000 to $125,000 and sometimes more depending on the location. If a business goes in and a traffic light needs to go in, the business or businesses will have to pay some money to pay for it to make sure everyone is safe going from and to that business. Also if there is too many accidents in one particular place in a city, a traffic light will go in to safe people from getting a severe accident and ending their life and maybe someone else’s.The reason I picked this part of our community development report was that traffic is a very important part of a city. If not for traffic lights and stop signs, people’s lives would be in danger every time they walked out their doors. People will be driving extremely fast and people will be hit just trying to have fun with their friends. So having traffic lights and stop signs this will prevent all this from happening.Traffic in a city is very much affected by traffic light controllers. When waiting for a traffic light, the driver looses time and the car uses fuel. Hence, reducing waiting times before traffic lights can save our European society billions of Euros annually. To make traffic light controllers more intelligent, we exploit the emergence of novel technologies such as communication networks and sensor networks, as well as the use of more sophisticated algorithms for setting traffic lights. Intelligent traffic light control does not only mean thattraffic lights are set in order to minimize waiting times of road users, but also that road users receive information about how to drive through a city in order to minimize their waiting times. This means that we are coping with a complex multi-agent system, where communication and coordination play essential roles. 
Our research has led to a novel system in which traffic light controllers and the behaviour of car drivers are optimized using machine-learning methods.Our idea of setting a traffic light is as follows. Suppose there are a number of cars with their destination address standing before a crossing. All cars communicate to the traffic light their specific place in the queue and their destination address. Now the traffic light has to decide which option (ie, which lanes are to be put on green) is optimal to minimize the long-term average waiting time until all cars have arrived at their destination address. The learning traffic light controllers solve this problem by estimating how long it would take for a car to arrive at its destination address (for which the car may need to pass many different traffic lights) when currently the light would be put on green, and how long it would take if the light would be put on red. The difference between the waiting time for red and the waiting time for green is the gain for the car. Now the traffic light controllers set the lights in such a way to maximize the average gain of all cars standing before the crossing. To estimate the waiting times, we use 'reinforcement learning' which keeps track of the waiting times of individual cars and uses a smart way to compute the long term average waiting times using dynamic programming algorithms. One nice feature is that the system is very fair; it never lets one car wait for a very long time, since then its gain of setting its own light to green becomes very large, and the optimal decision of the traffic light will set his light to green. Furthermore, since we estimate waiting times before traffic lights until the destination of the road user has been reached, the road user can use this information to choose to which next traffic light to go, thereby improving its driving behaviour through a city. 
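The gain-based decision rule described above can be sketched as follows. In the real system the waiting-time estimates come from reinforcement learning over many crossings; here they are hypothetical fixed numbers per car, and the option/lane data structures are illustrative assumptions.

```python
# Minimal sketch of the voting rule described above: for each car,
# gain = (estimated wait if its light stays red)
#      - (estimated wait if its light turns green),
# and the controller picks the option maximizing the total gain of all
# cars standing before the crossing.

def best_option(options, wait_red, wait_green):
    """options: candidate settings, each a tuple of lanes turned green;
    a lane is a list of car identifiers queued in it."""
    def total_gain(green_lanes):
        return sum(
            wait_red[car] - wait_green[car]
            for lane in green_lanes
            for car in lane
        )
    return max(options, key=total_gain)

# Two lanes queued at a crossing; each option turns exactly one lane green.
lane_a, lane_b = ["a1", "a2"], ["b1"]
wait_red = {"a1": 30.0, "a2": 45.0, "b1": 50.0}
wait_green = {"a1": 5.0, "a2": 12.0, "b1": 4.0}
chosen = best_option([(lane_a,), (lane_b,)], wait_red, wait_green)
```

Note the fairness property mentioned in the text: a car that has waited very long has a very large red/green gap, so its lane's total gain eventually dominates the vote.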
Note that we solve the traffic light control problem by using a distributed multi-agent system, where cooperation and coordination are done by communication, learning, and voting mechanisms. To allow for green waves during extremely busy situations, we combine our algorithm with a special bucket algorithm which propagates gains from one traffic light to the next one, inducing stronger voting on the next traffic controller option.We have implemented the 'Green Light District', a traffic simulator in Java in which infrastructures can be edited easily by using the mouse, and different levels of road usage can be simulated. A large number of fixed and learning traffic light controllers have already been tested in the simulator and the resulting average waiting times of cars have been plotted and compared. The results indicate that the learning controllers can reduce average waiting timeswith at least 10% in semi-busy traffic situations, and even much more when high congestion of the traffic occurs.We are currently studying the behaviour of the learning traffic light controllers on many different infrastructures in our simulator. We are also planning to cooperate with other institutes and companies in the Netherlands to apply our system to real world traffic situations. For this, modern technologies such as communicating networks can be brought to use on a very large scale, making the necessary communication between road users and traffic lights possible.中文翻译:智能交通信号灯控制马克·威宁我所选择的社区项目主题是交通灯。
English literature with translation: automatic design of fuzzy systems based on neural networks and genetic algorithms
Research on an intelligent autonomous mobile robot system: Chinese-English foreign literature translation
本科毕业设计(论文)中英文对照翻译(此文档为word格式,下载后您可任意修改编辑!)原文The investigation of an autonomous intelligent mobile robot systemfor indoor environment navigationS KarelinAbstractThe autonomous mobile robotics system designed and implemented for indoor environment navigation is a nonholonomic differential drive system with two driving wheels mounted on the same axis driven by two PID controlled motors and two caster wheels mounted in the front andback respectively. It is furnished with multiple kinds of sensors such as IR detectors ,ultrasonic sensors ,laser line generators and cameras,constituting a perceiving system for exploring its surroundings. Its computation source is a simultaneously running system composed of multiprocessor with multitask and multiprocessing programming. Hybrid control architecture is employed on the rmbile robot to perform complex tasks. The mobile robot system is implemented at the Center for Intelligent Design , Automation and Manufacturfing of City University of Hong Kong.Key words:mobile robot ; intelligent control ; sensors ; navigation IntroductionWith increasing interest in application of autonomous mobile robots in the factory and in service environments,many investigations have been done in areas such as design,sensing,control and navigation,etc. Autonomousreaction to the real wand,exploring the environment,follownng the planned path wnthout collisions and carrying out desired tasks are the main requirements of intelligent mobile robots. As humans,we can conduct these actions easily. For robots however,it is tremendously difficult. An autonomous mobile robot should make use of various sensors to sense the environment and interpret and organize the sensed information to plan a safe motion path using some appropriate algorithms while executing its tasks. 
Many different kinds of senors havebeen utilized on mobile robots,such as range sensors,light sensors,force sensors,sound sensors,shaft encoders,gyro scope s,for obstacle awidance,localizatio n,rmtion sensing,navigation and internal rmnitoring respectively. Many people use infrared and ultrasonic range sensors to detect obstacles in its reaching ser range finders are also employed in obstacle awidance behavior of mobile robots in cluttered space.Cameras are often introduced into the vision system for mobile robot navigation. Although many kinds of sensors are available,sensing doesn’t mean perceiving. The mechanical shape and driving type are commonly first taken into consideration while implementing a rmbile robot. A robot’s shape can have a strong impact on how robust it is,and DC serve rmtors or stepOper motors are often the two choices to employ as actuators. The shape of a robot may affect its configurations of components,ae sthetics,and even the movement behaviors of the robot. An improper shape can make robot run a greater risk of being trapped in a cluttered room or of failing to find its way through a narrow space. We choose an octahedral shape that has both advantages of rectangular and circular shapes,and overcomes their drawbacks. The framework of the octahedral shaped robot is easy to make,components inside are easily arrange and can pass through narrow places and rotate wrath corners and nearby objects,and is more aesthetic in appearance. The perception subsystem accomplishes the task of getting various data from thesurroundings,including distance of the robot from obstacles,landmarks,etc.Infrared and ultrasonic range sen}rs,laser rangefinders and cameras are utilized and mounted on the rmbile robot to achieve perception of the environment. These sensors are controlled independently by some synchronously running microprocessors that are arranged wrath distributive manner,and activated by the main processor on which a supervising program runs. 
At present,infrared and ultranic sensors,laser rangefinders are programmed to detect obstacles and measure distance of the robot from objects in the environment,and cameras are programmed for the purpose of localization and navigation.The decision-making subsystem is the most important part of an intelligent mobile robot that organizes and utilizes the information obtained from the perception subsystem. It obtains reasonable results by some intelligent control algorithm and guides the rmbile robot. On our mobile robotic system intelligence is realized based on behaviourism and classical planning principles. The decision-making system is composed of twa levels global task planning based on knowledge base and map of working enviro nment,reactive control to deal with the dynamic real world. Reaction tasks in the decision-making system are decomposed into classes of behaviors that the robot exhibits to accomplish the task. Fuzzy logic is used to implement some basic behaviors. A state machine mechanism is applied to coordinate different behaviors. Because manykinds of electronic components such as range sensors,cameras,frame grabbers,laser line generators,microprocessors,DC motors,encoders,are employed on the mobile robot,a power source must supply various voltage levels which should are stable and have sufficient power. As the most common solution to power source of mobile robots,two sealed lead acid batteries in series writh 24 V output are employed in our mobile robot for the rmtor drive components and electronic components which require 24 V,15V,士12V,+9V,士5V,variously. For the conversion and regulation of the voltage,swritching DC DC converters are used because of their high efficiency,low output ripple and noise,and wride input voltage range. Three main processors are Motorola MC68040 based single board computers on which some supervisory programs and decision-making programs run. These MC68040 boards run in parallel and share memory using a VMEbus. 
Three motorola MC68HC11 based controllers act as the lower level controllers of the infrared and ultranic range senors,which communicate with the main processors through serial ports. The multi-processor system is organized into a hierarchical and distributive structure to implement fast gathering of information and rapid reaction. Harmony,a multiprocessing and multitasking operating system for real-time control,runs on the main processors to implement multiprocessing and multitasking programming. Harmony is a runtime only environment and program executions are performed by downloadingcrosscompiled executable images into target processors. The hardware architecture of the mobile robot is shown in Fig. Robots control For robots,the three rmst comrmn drive systems are wheels,tracks and legs. Wheeled robots are mechanically simpler and easier to construct than legged and tracked systems that generally require more complex and heavier hardware,so our mobile robot is designed as a wheeled robot. For a wheeled robot,appropriate arrangements of driving and steering wheels should be chosen from differential,synchro,tricycle,and automotive type drive mechanisms. Differential drives use twa caster wheels and two driven wheels on a common axis driven independently,which enable the robot to move straight,in an arc and turn in place. All wheels are rotate simultaneously in the synchro drive;tricycle drive includes two driven wheels and one steering wheel;automobile type drive rotates the front twa wheels together like a car. It is obvious that differential drive is the simplest locomotion system for both programming and construction.However,a difficult problem for differentially driven robots is how to make the robot go straight,especially when the motors of the two wheels encounter different loads. To follow a desired path,the rmtor velocity must be controlled dynamically. 
In our mobile robot system a semv motor controller is used which implements PID control.Ibwer amplifiers that drive the motors amplify the signals from each channel of serwcontroller. Feedback is provided by shaft encoders on the wheels.The block diagram of the motor control electronic components are shown in Fig. 2,and the strategy of two wheel speed control based PID principle is illustrated in Fig.3. Top loop is for tracking the desired left motor velocity;bottom loop for tracking right motor velocity;Integral loop ensures the robot to go straight as desired and controls the steering of the robot. This is a simple PI control that can satisfy the general requirements.Sensing subsystemSensor based planning makes use of sensor information reflecting the current state of the environment,in contrast to classical planning,which assumes full knowledge of the environment prior to planning. The perceptive subsystem integrates the visual and proximity senors for the reaction of the robot. It plays an important role in the robot behavioral decision-making processes and motion control. Field of view of perceptive subsystem is the first consideration in the design of the sensing system. Fneld of view should be wide enough with sufficient depth of field to understand well the robot’s surroundings. Multiple sensors can provide information that is difficult to extract from single sensor systems. Multiple sensors are complementary to each other,providing a better understanding of the work environment. Omnidirectional sensory capability is endowed on our mobile robot. When attempting to utilize multiple senors,it must be decided how many different kinds of sensorsare to be used in order to achieve the desired motion task,both accurately and economically.Ultrasonic range sensing is an attractive sensing rmdalityfor mobile robots because it is relatively simple to implement and process,has low cost and energy consumption. 
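The two-loop speed control with an integral straightening term (Fig. 3) can be sketched as below. This is a minimal simulation, not the robot's actual controller: the gains, the first-order wheel model, and the constant drag disturbance on the right wheel are all illustrative assumptions.

```python
# Sketch of the differential-drive control outlined above: one PI loop per
# wheel plus an integral cross-coupling term that drives the accumulated
# left/right displacement difference to zero, so the robot goes straight
# even when one motor sees a heavier load. Gains are illustrative.

class StraightLinePI:
    def __init__(self, kp=0.8, ki=0.3, k_cross=0.5):
        self.kp, self.ki, self.k_cross = kp, ki, k_cross
        self.int_l = self.int_r = self.heading_err = 0.0

    def step(self, target, v_left, v_right, dt):
        # Integral loop: accumulated displacement difference between wheels.
        self.heading_err += (v_left - v_right) * dt
        trim = self.k_cross * self.heading_err
        e_l = target - v_left - trim   # slow the wheel that ran ahead
        e_r = target - v_right + trim
        self.int_l += e_l * dt
        self.int_r += e_r * dt
        u_l = self.kp * e_l + self.ki * self.int_l
        u_r = self.kp * e_r + self.ki * self.int_r
        return u_l, u_r

# Crude first-order wheel model; the right wheel carries a constant drag.
ctrl = StraightLinePI()
v_l = v_r = 0.0
for _ in range(3000):                      # 30 s at dt = 0.01 s
    u_l, u_r = ctrl.step(1.0, v_l, v_r, dt=0.01)
    v_l += (u_l - v_l) * 0.01
    v_r += (u_r - v_r) * 0.01 - 0.002      # extra load on the right wheel
```

The integral terms absorb the asymmetric load in steady state, which is the role the text assigns to the integral loop in Fig. 3.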
In addition,high frequencies can be used to minimize interference from the surrounding environment. A special purpose built infrared ranging system operates similar to sonar,determining the obstacle’s presence or absence and also the distance to an object. For detecting smaller obstacles a laser rangefinder can be used. It can be titled down to the ground to detect the small objects near the robot. Identifying robot self position and orientation is a basic behavior that can be part of high level complex behaviors. For localizing a dead reckoning method is adopted using the output of shaft encoders. This method can have accumulated error on the position and orientation. Many external sensors can be used for identification of position and orientation. Cameras are the most popular sensor for this purpose,because of naturally occurring features of a mom as landmarks,such as air conditioning system,fluorescent lamps,and suspended ceiling frames.Any type of sensor has inherent disadvantages that need to be taken into consideration. For infrared range senors,if there is a sharply defined boundary on the target betweendifferent materials,colors,etc.,the sensor may not be able to calculate distance accurately. Some of these problemscan be avoided if due care is taken when installing and setting up the sensor. Crosstalk and specular reflection are the two main problems for ultrasonic sensors. The firing rates,blanking intervals,firing order,and timeouts of the ultrasonic sensor system can configured to improve performance. Laser ranging systems can fail to detect objects made of transparent materials or with poor light reflectivity. In this work,we have chosen range sensors and imaging sensors as the primary source of information. The range sensors employed include ultrasonic sensors and short and long range infrared sensors with features above mentioned. The imaging sensors comprise gray scale video cameras and laser rangefinders. 
Twenty-four ultrasonic sensors are arranged in a ring with a separation angle of 15 degrees on our mobile robot to detect the objects in a 3600 field of view. This will allow the robot to navigatearound an unstructured environment and to construct ac curate sonar maps by using environmental objects as naturally occurring beacons. With the sonar system we can detect objects from a minimum range of 15 cm to a maximum range of 10. 0 m. Infrared range sensors use triangulation,emitting an infrared spot from an emitter,and measuring the position of the imaged spot with a PSD (position sensitive detector).Since these devices use triangulation,object color,orientation,and ambient light have greater effect on sensitivity rather than accuracy. Since the transmission signal is light instead of sound,we may expect a dramatically shortercycle time for obtaining all infrared sensor measurements. A getup of 16 short and a group of 16 long infrared sensors are mounted in twa rings with equal angular Generally speaking,the robot motion closed control loops comprising sensing,planning,and acting should take very short cycle times,so a parallel computation mechanism is employed in our mobile robot based on multiprocessor. Usually we can make events run in parallel on single microprocessor or multiprocessor by twa methods,multitasking and multiprocessing. Well known multitasking OS is like Microsoft window' 95 and UNIX OS that can make multitask run in parallel on a sequential machine by giving a fraction of time to each behavior looply. In fact,multitask mechanism just simulates the effect of all events running simultaneously. Running all events on multiprocessor can realize true parallelism. In our mobile robot,using Harmony OS both multitasking and multiprocessing programming is implemented on multiprocessor (MC68040 processors) which share memories and communicate each other by VMEbus. 
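The 24-sensor sonar ring described above (one sensor every 15 degrees, valid range 0.15 m to 10.0 m) can be turned into Cartesian obstacle points for map building with a short geometric conversion. The sensor indexing and the "None means no echo" convention are illustrative assumptions, not the robot's actual interface.

```python
import math

# Sketch: convert readings from the 24-sensor ultrasonic ring into obstacle
# points in the robot frame. Sensor i is assumed to face i * 15 degrees from
# the robot's forward axis; readings outside [0.15 m, 10.0 m] are discarded.

SENSOR_COUNT, MIN_RANGE, MAX_RANGE = 24, 0.15, 10.0

def sonar_to_points(ranges_m):
    points = []
    for i, r in enumerate(ranges_m):
        if r is None or not (MIN_RANGE <= r <= MAX_RANGE):
            continue  # no echo, or outside the sensor's valid window
        angle = math.radians(i * 360.0 / SENSOR_COUNT)
        points.append((r * math.cos(angle), r * math.sin(angle)))
    return points

# One obstacle dead ahead at 2 m, one at 90 degrees at 1 m.
readings = [None] * SENSOR_COUNT
readings[0], readings[6] = 2.0, 1.0
pts = sonar_to_points(readings)
```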
Harmony allows creating many tasks as desired which can be map toseveral microprocesors and run in parallel .In addition,tasks written in C run on MC68040 can activate the assembly code in the MC68HC11 SBC which control infrared and ultrasonic sensors and get distances dates. These SBC run simultaneously with MC68040 processors. An instance of an implemented task structure is shown in Fng. 5.Some experiments,such as following lines,avoiding obstacles and area filling have been carried out on the rmbile system to demonstrates its real-time reactions to the working surroundings and robustness of the system. ConclusionWe have described the implementation of a intelligent mobile robot testbed for autonomous navigation in indoor environments and for investigation of relative theories and technologies of intelligent systems. The robot is furnished with range sensors,laser line generators and vision system to perceive its surroundings. Parallel computation based on multiprocessor is employed in the mobile robot to improve its power of reasoning and response. Low level processing and sensor control is carried out with low cost dedicated microcontrollers. A task based real-time operating system supports a variety of different control structures,allowing us to experiment with different approaches. The experiments indicate the effectiveness of the mobile robot system .The platform has been used for experimenu and research such as sensor data fusion,area filling,feedback control,as well as artificial intelligence.译文基于室内环境导航的智能自动移动机器人系统研究卡若琳摘要这种为室内境导航条件下设计生产的自主移动机器人系统是一个不完整的差速传动系统,它有两个安装在同一轴上通过两个PID控制的电机驱动的驱动轮和两个分别安装在前部和后部的脚轮。
Textile engineering, artificial neural networks: Chinese-English foreign literature, original and translation
Textile Research Journal Article Use of Artificial Neural Networks for Determining the LevelingAction Point at the Auto-leveling Draw FrameAssad Farooq1and Chokri CherifInstitute of Textile and Clothing Technology, TechnischeUniversität Dresden. Dresden, GermanyAbstractArtificial neural networks with their ability of learning from data have been successfully applied in the textile industry. The leveling action point is one of the important auto-leveling parameters of the drawing frame and strongly influences the quality of the manufactured yarn. This paper reports a method of predicting the leveling action point using artificial neural networks. Various leveling action point affecting variables were selected as inputs for training the artificial neural networks with the aim to optimize the auto-leveling by limiting the leveling action point search range. The Levenberg Marquardt algorithm is incorporated into the back-propagation to accelerate the training and Bayesian regularization is applied to improve the generalization of the networks. The results obtained are quite promising.Key words:artificial neural networks, auto-lev-eling, draw frame, leveling action point。
Intelligent control system graduation thesis: Chinese-English foreign literature translation
The controlled object in the intelligent load monitoring test is switched by relays driven from the single-chip microcomputer's I/O port output signals, and the load is then controlled or monitored. Like any single-chip control system, the structure is often simplified into an input section, an output section, and an electronic control unit (ECU).
information, which can more effectively assist security personnel in dealing with a crisis and minimize damage and loss; this has great practical significance. For hazardous tasks, or operations that cannot be completed manually, intelligent devices can be used, solving many problems that manual work cannot. With the development of society, intelligent loads will play an important role in all aspects of social life.
基于神经网络和遗传算法的模糊系统的自动设计摘要本文介绍了基于神经网络和遗传算法的模糊系统的设计,其目的在于缩短开发时间并提高该系统的性能。
介绍一种利用神经网络来描绘的多维非线性隶属函数和调整隶属函数参数的方法。
还提及了基于遗传算法的集成并自动化三个模糊系统的设计平台。
1 前言模糊系统往往是人工手动设计。
这引起了两个问题:一是由于人工手动设计是费时间的,所以开发费用很高;二是无法保证获得最佳的解决方案。
为了缩短开发时间并提高模糊系统的性能,有两种独立的途径:开发支持工具和自动设计方法。
前者包括辅助模糊系统设计的开发环境。
许多环境已具有商业用途。
后者介绍了自动设计的技术。
尽管自动设计不能保证获得最优解,他们仍是可取的手工技巧,因为设计是引导走向和依某些标准的最优解。
有三种主要的设计决策模糊控制系统设计:(1)确定模糊规则数,(2)确定隶属度函数的形式。
(3)确定后件参数。再者,必须作出另外两个决定:(4)确定输入变量的数量;(5)确定推理方法。(1)和(2)相互协调,确定如何覆盖输入空间。
他们之间有高度的相互依赖性。
(3)用以确定TSK(Takagi-Sugeno-Kang)模式【1】中的线性方程式的系数,或确定隶属度函数以及部分的Mamdani模型【2】。
(4)相当于确定计算目标决策值或控制值所需的最小相关输入变量集合。
像逆向消除(4)和信息标准的技术在此设计中经常被利用。
(5)相当于决定使用哪一个模糊算子和解模糊化的方法。
虽然由数种算法和模糊推理的方法已被提出,仍没有选择他们标准。
[5]表明动态变化的推理方法,他依据这个推理环境的结果在性能和容错性高于任何固定的推理的方法。
神经网络模型(以更普遍的梯度)和基于遗传算法的神经网络(最常见的梯度的基础)和遗传算法被用于模糊系统的自动设计。
基于神经网络的方法主要是用来设计模糊隶属度函数。
这有两种主要的方法;(一)直接的多维的模糊隶属度函数的设计:该方法首先通过数据库确定规则的数目。
然后通过每个簇的等级的训练来确定隶属函数的形式。
更多细节将在第二章给出。
(二)间接的多维的模糊隶属度函数的设计:这种方法通过结合一维模糊隶属函数构建多维的模糊隶属度函数。
隶属度函数梯度技术被用于调节试图减少模糊系统的期望产量和实际生产所需的产出总量的误差。
第一种方法的优点在于它可以直接产生非线性多维的模糊隶属度函数;没有必要通过结合一维模糊隶属函数构建多维的模糊隶属度函数。
第二种方法的优点在于可通过监测模糊系统的最后性能来调整。
这两种方法都将在第二章介绍。
许多基于遗传算法的方法与方法二在本质上一样;一维隶属函数的形式利用遗传算法自动的调整。
这些方法中很多只考虑了一个或两个前面提及的设计问题。
在第三章中,我们将介绍一种三个设计问题同时考虑的方法。
2 神经网络方法2.1多维输入空间的直接的模糊分区该方法利用神经网络来实现多维的非线性隶属度函数,被称为基于NN的模糊推理。
该方法的优点在于它可以产生非线性多维的模糊隶属度函数。
在传统的模糊系统中,用于前期部分的一维隶属度函数是独立设计的,然后结合起来间接实现多维的模糊隶属度函数。
可以说,神经网络方法在由神经网络吸收的结合操作方面是传统模糊系统的一种更普遍的形式。
当输入变量是独立的时传统的间接设计方法就有问题。
例如,设计一个基于将温度和湿度作为输入的模糊系统的空调控制系统。
在模糊系统的传统设计方法中,隶属函数的温度和湿度是独立设计的。
输入空间所产生的模糊分区如图1(a)。
然而,当输入变量是独立的,如温度、湿度,模糊分区如图1(b)比较合适。
很难构建这样来自一维模糊隶属度函数的非线性分区。
由于NN-driven模糊推理直接构建多维的非线性的模糊隶属度函数,很有可能使线性分区如图1(b)。
NN-driven模糊推理的设计的有三个步骤:聚集给出的训练数据,利用神经网络的模糊分区输入空间,和设计各分区空间的随机部分。
第一步是要聚集培训资料,确定规则的数目。
这一步之前,不恰当的输入变量已经利用信息或淘汰落后指标的方法消除掉了。
逆向消除方法的任意消除n个输入变量和训练神经网络的n - 1个输入变量。
然后比较n个和n-1个变量的神经网络的性能。
如果n-1个变量的神经网络的性能与n个变量的性能相似或者更好,那么消除输入变量就被认为是无关紧要的。
然后这些数据被聚集,得到了数据的分布。
集群数量是规则的数目。
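Step 1 above (cluster the training data; the number of clusters becomes the number of fuzzy rules) can be sketched with any clustering algorithm. The paper does not prescribe one, so a fixed-k k-means is used here purely for illustration, with hypothetical two-dimensional data.

```python
# Sketch of step 1: cluster the training data; each resulting cluster
# corresponds to one fuzzy rule. Plain k-means with fixed initial centers
# is an illustrative stand-in for whatever clustering method is used.

def kmeans(points, centers, iters=20):
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:
            j = min(range(len(centers)),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[j].append(p)
        centers = [
            tuple(sum(xs) / len(g) for xs in zip(*g)) if g else centers[j]
            for j, g in enumerate(groups)
        ]
    return centers, groups

# Two well-separated blobs -> two clusters -> two fuzzy rules.
data = [(0.0, 0.1), (0.1, 0.0), (0.1, 0.1), (5.0, 5.1), (5.1, 5.0), (5.0, 5.0)]
centers, groups = kmeans(data, centers=[(0.0, 0.0), (1.0, 1.0)])
num_rules = len(centers)
```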
第二步是决定在第一步中得到的集群资料的簇边界;输入空间的划分并确定多维输入的隶属函数。
监督数据是由在第1步中获得的隶属度的输入数据聚类提供的。
第一个带有n输入和c输出的神经网络被准备好,其中n是输入变量的数量,c是在第一步中得到的集群数量。
为了神经网络的数据,图2中NN数量,产生于第一步提供的集群信息。
一般来说,每个输入变量被分配到其中的一个集群。
集群任务就是将输入变量和培训模式相结合。
例如,在属于集群2的四个集群和输入向量的案例中,监督的培训模式将是(0,1,0,0)。
在某些情况下,如果他/她相信一个输入的数据点应按不同的聚类,用户不得非法干预和手动建造部分监督。
举例来说,如果用户认为一个数据点同样属于一个两个班,适当的监管输出模式可能(0.5,0.5,0,0)。
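The supervised patterns described above can be sketched directly: each input is labeled with a target vector over the clusters, one-hot in the usual case (cluster 2 of 4 gives (0, 1, 0, 0)) and split when the user assigns partial membership. The helper name and the (index, weight) encoding are illustrative.

```python
# Sketch of step 2's supervised targets: the NN that partitions the input
# space is trained against per-cluster membership vectors like these.

def membership_target(cluster_weights, num_clusters):
    """cluster_weights: list of (cluster_index, weight) pairs for one input."""
    target = [0.0] * num_clusters
    for idx, weight in cluster_weights:
        target[idx] = weight
    return tuple(target)

one_hot = membership_target([(1, 1.0)], 4)          # input belongs to cluster 2
split = membership_target([(0, 0.5), (1, 0.5)], 4)  # manual half/half label
```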
这个神经网络在关于该培训资料的训练结束后,神经网络计算特定输入属于各集群向量。
因此,我们认为该神经网络通过学习获得特征的隶属度函数所有的规则,可以产生与隶属度相适应的任意的输入向量。
将该神经网络用作隶属度函数发生器的模糊系统就是NN-driven模糊推理。
第三步是随机的设计。
因为我们知道哪个集群能举出一个输入数据,我们可以使用输入数据和期望的结果训练随机的部分。
神经网络的表达可在这里,如[3,4]中所言,但是其他的方法,如数学方程或模糊变量,可以用来代替。
该模型的最关键的是神经网络的输入空间分割模糊聚类。
图2所示的一个例子NN-driven模糊推理系统。
这是一个输出由神经网络或TSK模型计算的一个单独的价值的模型。
图中的乘法和加法运算用于计算加权平均值。
如果后续的部分输出模糊值,适当的t-conorm和/或解模糊化操作应该被使用。
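The multiply-and-add stage of Fig. 2 is a firing-strength-weighted average of the per-rule consequent outputs. A minimal sketch, with hypothetical membership values as the NN would produce them:

```python
# Sketch of the weighted-average (multiply/add) stage in Fig. 2: the final
# crisp output is the membership-weighted mean of the rule outputs.

def weighted_average(memberships, rule_outputs):
    total = sum(memberships)
    return sum(m * y for m, y in zip(memberships, rule_outputs)) / total

# Memberships for one input (one value per rule), one consequent per rule.
y = weighted_average([0.2, 0.8], [10.0, 20.0])
```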
图1.模糊划分:(a)常规(b)期望图2.NN-driven结构模糊推理实例2.2调整参数的模糊系统这个定义隶属度函数形式的参数来减少模糊系统输出和监督的数据之间的误差。
有两种方法可用于修改这些参数:基于梯度的方法和遗传算法。
遗传算法的方法将在下一章节讲述,基于梯度的方法将在这部分解释。
这个基于梯度的方法的程序是:(1)决定如何确定的隶属度函数的形式(2)利用梯度方法调整降低模糊系统的实际输出与期望输出的参数,通常最速下降。
隶属函数的中心的位置和宽度通常用来定义参数的形状。
Ichihashi et al. [6]and Nomura et al. [7, 8], Horikawa et al.[9][10], Ichihashi et al.[ll] and Wang et al. [12], Jang [13][14] 已经分别用三角形,结合sigmoidal、高斯,钟型隶属度函数。
他们利用最速下降法来调整模糊隶属函数参数。
图3. 神经网络调整模糊系统的参数图4. 调整模糊系统的神经网络图3显示了此方法和同构于图4. 图中的u ij在i-th 规则下输入模糊隶属函数的参数x j,而它实际上是代表一个描述隶属度函数的形式的参数向量。
也就是说,这个方法使模糊系统作为神经网络的模糊隶属度函数和通过节点执行重量和规则一样。
任何网络学习算法,例如反向传播算法,可以用来设计这种结构。
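The steepest-descent tuning described above can be sketched for a one-input, zero-order TSK system with Gaussian membership functions (centers c_i, widths s_i) and constant consequents w_i. The learning rate, the toy data set, and the choice of Gaussian shape are illustrative assumptions; the cited papers use triangular, sigmoidal, and bell shapes as well.

```python
import math

# Sketch of steepest-descent tuning of membership-function parameters:
# minimize the squared error between the fuzzy system's output and the
# supervised data by adjusting Gaussian centers, widths, and consequents.

def infer(x, c, s, w):
    mu = [math.exp(-((x - ci) / si) ** 2) for ci, si in zip(c, s)]
    return sum(m * wi for m, wi in zip(mu, w)) / sum(mu), mu

def train_step(data, c, s, w, lr=0.05):
    for x, t in data:
        y, mu = infer(x, c, s, w)
        err = y - t
        total = sum(mu)
        for i in range(len(c)):
            dy_dmu = (w[i] - y) / total                     # chain rule terms
            dmu_dc = mu[i] * 2 * (x - c[i]) / s[i] ** 2
            dmu_ds = mu[i] * 2 * (x - c[i]) ** 2 / s[i] ** 3
            w[i] -= lr * err * mu[i] / total
            c[i] -= lr * err * dy_dmu * dmu_dc
            s[i] -= lr * err * dy_dmu * dmu_ds

def sse(data, c, s, w):
    return sum((infer(x, c, s, w)[0] - t) ** 2 for x, t in data)

data = [(0.0, 0.0), (0.5, 0.25), (1.0, 1.0)]   # toy target: y = x^2
c, s, w = [0.0, 1.0], [0.5, 0.5], [0.0, 0.0]
before = sse(data, c, s, w)
for _ in range(200):
    train_step(data, c, s, w)
after = sse(data, c, s, w)
```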
3遗传算法方法3.1遗传算法与模糊控制遗传算法是进行优化、生物激励的技术,他的运行用二进制表示,并进行繁殖,交叉和变异。
繁殖后代的权利是通过应用程序提供的一种健身价值。
遗传算法吸引人是因为他们不需要存在的衍生物,他们的同时搜索的鲁棒性很强,并能避免陷入局部最小。
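The operators named above (binary representation, fitness-proportional reproduction, crossover, mutation) can be sketched in a minimal genetic algorithm. The onemax fitness function is only a stand-in for a real design objective such as fuzzy-controller error; all rates and sizes are illustrative.

```python
import random

# Minimal GA sketch: binary strings, roulette-wheel (fitness-proportional)
# selection, one-point crossover, bit-flip mutation. "Count the 1 bits"
# stands in for an application-supplied fitness value.

random.seed(0)
BITS, POP, GENS = 16, 20, 60

def fitness(bits):
    return sum(bits)

def select(pop):
    total = sum(fitness(ind) for ind in pop)
    r = random.uniform(0, total)
    for ind in pop:
        r -= fitness(ind)
        if r <= 0:
            return ind
    return pop[-1]

def crossover(a, b):
    cut = random.randrange(1, BITS)        # one-point crossover
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.02):
    return [1 - b if random.random() < rate else b for b in bits]

pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
for _ in range(GENS):
    pop = [mutate(crossover(select(pop), select(pop))) for _ in range(POP)]
best = max(fitness(ind) for ind in pop)
```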
一些论文提出了利用遗传算法自动设计模糊系统的方法。
大量的工作主要集中在调整的模糊隶属度函数[17]-[25]。
其他的方法使用遗传算法来确定模糊规则数[18,26]。
在[26]中,通过专家制定了一系列规则,并且遗传算法找到他们的最佳的组合。
在[18],卡尔已经开发出一种方法用于测定模糊隶属度函数和模糊规则数。
在这篇文章中,卡尔的方法首先用遗传算法按照预先定义的规则库确定规则的数目。
这个阶段后,利用遗传算法来调整模糊隶属度函数。
虽然这些方法设计的系统表现的比手工设计的系统好,它们仍可能会欠佳,因为他们一次只有一个或两个三大设计阶段。
因为这些设计阶段可能不会是独立的,所以重要的是要考虑它们同时找到全局最优解。
在下一节里,我们提出一个结合的三个主要设计阶段自动设计方法。
3.2基于遗传算法的模糊系统的综合设计这部分提出了一种利用遗传算法的自动模糊系统的设计方法, 并将三个主要设计阶段一体化:隶属函数的形式, 模糊规则数, 和规则后件同一时间内确定[27]。
当将遗传算法应用于程序上时,有两个主要步骤;(a)选择合适的基因表达,(b) 设计一个评价函数的人口排名。
在接下来的段落里,我们讨论我们的模糊系统表现及基因表达。
一个嵌入验前知识的评价函数和方法将会在接下来的章节中提及模糊系统和基因表现我们用TSK模型的模糊系统,它被广泛应用于控制问题,对地图系统的状态来控制的价值。
TSK模型在随之而来的模糊模型中的线性方程与模糊语言表达方面有别于传统的模糊系统。
例如,一个TSK模型规则的形式:如果X1是A,X2是B,那么y=w1X1+w2X2+w3;是常数其中wn最后的控制价值通过每个规则的输出和依据规则的加权的射击力量来计算。
我们利用左基地,右基地,和以往的中心点距离(第一个中心是一个绝对的位置)对三角形隶属度函数进行参数化,。
其他参数化的形状,如双曲形、高斯,钟形,或梯形可以代替。
不同于大多数方法、重叠限制不是放在在我们的系统和完整的重叠存在(见图5)。
图5. (a)隶属度函数表示法(b)可能的隶属函数一般来说,每个输入变量模糊集理论的数量与确定的模糊规则的数目相结合。
例如,一个带有m输入变量的TSK模型,每n个模糊集,将会产生n m模糊规则。
因为规则的数目直接取决于这个数目的隶属函数,消除隶属度函数对消除规则有直接的影响。
每一隶属函数需要三个参数并且每一个模糊规则需要三个参数。
因此,每个变量需要n个模糊输入的m-input-one-output系统要求3(mn+n m)参量。
这个基因表达明确包含三个成员函数的参数及前述的随机参数。
然而,规则的数,通过应用编码含蓄的边界条件和隶属函数的定位。
我们可以隐含的控制规则的数目,消除隶属函数的中心位置的范围之外的相应的包含这些内容的输入变量和规则。
例如,在单摆应用隶属度函数使用θ的中心位置大于90°都是可以避免的。