Genetic Algorithms: Chinese-English Bilingual Translations of Foreign Literature (Collection)


Foreign-Language Translation - Genetic Algorithms (complete Word version)


What is a genetic algorithm?
● Methods of representation
● Methods of selection
● Methods of change
● Other problem-solving techniques

Concisely stated, a genetic algorithm (or GA for short) is a programming technique that mimics biological evolution as a problem-solving strategy.

Given a specific problem to solve, the input to the GA is a set of potential solutions to that problem, encoded in some fashion, and a metric called a fitness function that allows each candidate to be quantitatively evaluated. These candidates may be solutions already known to work, with the aim of the GA being to improve them, but more often they are generated at random. The GA then evaluates each candidate according to the fitness function. In a pool of randomly generated candidates, of course, most will not work at all, and these will be deleted. However, purely by chance, a few may hold promise - they may show activity, even if only weak and imperfect activity, toward solving the problem. These promising candidates are kept and allowed to reproduce. Multiple copies are made of them, but the copies are not perfect; random changes are introduced during the copying process. These digital offspring then go on to the next generation, forming a new pool of candidate solutions, and are subjected to a second round of fitness evaluation.
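As a concrete illustration of the evaluate, cull, and copy-with-random-changes cycle just described, here is a minimal Python sketch; the bit-string encoding, the ones-counting fitness function, and all numeric settings are arbitrary stand-ins chosen for the example, not anything specified in the text.

import random

def fitness(candidate):
    # Stand-in metric: count the 1-bits; any problem-specific measure works here.
    return sum(candidate)

def evolve(pop_size=30, length=20, generations=40, mutation_rate=0.05):
    # Start from randomly generated candidates, as described above.
    population = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Evaluate every candidate and delete the unpromising half.
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        # Make imperfect copies: random changes are introduced while copying.
        offspring = [[1 - g if random.random() < mutation_rate else g for g in parent]
                     for parent in survivors]
        population = survivors + offspring
    return max(population, key=fitness)

print(fitness(evolve()))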

Genetic Algorithms: Chinese-English Translated Foreign Literature


(Document contains the English original and the Chinese translation)

Improved Genetic Algorithm and Its Performance Analysis

Abstract: The genetic algorithm has become very well known for its global search, parallel computation, robustness, and the fact that it needs no derivative information during evolution. However, it also has some drawbacks, such as slow convergence. In this paper, based on several general theorems, an improved genetic algorithm using variant chromosome length and variant probabilities of crossover and mutation is proposed. Its main idea is as follows: at the beginning of evolution, a shorter chromosome length and higher probabilities of crossover and mutation are used; in the vicinity of the global optimum, a longer chromosome length and lower probabilities of crossover and mutation are used. Finally, tests on some critical functions show that our approach improves the convergence speed of the genetic algorithm significantly, and that its comprehensive performance is better than that of the genetic algorithm which only reserves the best individual.

The genetic algorithm is an adaptive search technique based on the selection and reproduction mechanism found in natural evolution, and it was pioneered by Holland in the 1970s. It has become well known for its global search, parallel computation, robustness, and freedom from derivative information during evolution. However, it also has some drawbacks, such as poor local search, premature convergence, and slow convergence speed. In recent years, these problems have been studied.

In this paper, an improved genetic algorithm with variant chromosome length and variant probabilities is proposed. Tests on some critical functions show that it improves the convergence speed significantly, and that its comprehensive performance is better than that of the genetic algorithm which only reserves the best individual.

In Section 1, the new approach is proposed. In Section 2, the efficiency of the algorithm is compared, through optimization examples, with that of the genetic algorithm which only reserves the best individual. Section 3 gives the conclusions. Finally, proofs of the relevant theorems are collected and presented in the appendix.

1 Description of the algorithm

1.1 Some theorems

Before proposing our approach, we state some general theorems (see the appendix). Assume there is just one variable (a multivariable problem can be divided into sections, one section per variable), x ∈ [a, b], x ∈ R, and that the chromosome length under binary encoding is l.

Theorem 1. The minimal resolution of the chromosome is

    s = (b - a) / (2^l - 1).

Theorem 2. The weight of the i-th bit of the chromosome is

    w_i = (b - a) / (2^l - 1) * 2^(i-1),   i = 1, 2, ..., l.

Theorem 3. The mathematical expectation E_c(x) of the chromosome search step with one-point crossover is

    E_c(x) = (b - a) / (2l) * P_c,

where P_c is the probability of crossover.

Theorem 4. The mathematical expectation E_m(x) of the chromosome search step with bit mutation is

    E_m(x) = (b - a) * P_m,

where P_m is the probability of mutation.
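As a worked instance of these formulas (using the interval [-5, 5] and the parameter values l = 10, P_c = 0.3, P_m = 0.1 and, after the switch, l = 30, P_c = 0.1, P_m = 0.01 that are quoted in Section 2.1 below; the arithmetic here is added for illustration):

\begin{aligned}
l=10:\quad & s = \tfrac{10}{2^{10}-1} \approx 9.8\times10^{-3}, & E_c(x) &= \tfrac{10}{20}\cdot 0.3 = 0.15, & E_m(x) &= 10\cdot 0.1 = 1.0,\\
l=30:\quad & s = \tfrac{10}{2^{30}-1} \approx 9.3\times10^{-9}, & E_c(x) &= \tfrac{10}{60}\cdot 0.1 \approx 0.017, & E_m(x) &= 10\cdot 0.01 = 0.1.
\end{aligned}

Lengthening the chromosome and lowering P_c and P_m therefore shrinks both expected search steps sharply, which is exactly the behaviour the mechanism in Section 1.2 relies on.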
1.2 Mechanism of the algorithm

During the evolutionary process, we assume that the value domains of the variables are fixed and that the probability of crossover is a constant. From Theorems 1 and 3, we therefore know that the longer the chromosome is, the smaller the chromosome search step and the higher the resolution, and vice versa; meanwhile, the crossover probability is directly proportional to the search step. From Theorem 4, changing the chromosome length does not affect the mutation search step, while the mutation probability is also directly proportional to the search step.

At the beginning of evolution, a shorter chromosome (it cannot be too short, otherwise population diversity suffers) and higher probabilities of crossover and mutation increase the search step, which allows a wider region to be searched and helps avoid falling into a local optimum. In the vicinity of the global optimum, a longer chromosome and lower probabilities of crossover and mutation decrease the search step, and the longer chromosome also improves the resolution of mutation, which prevents wandering near the global optimum and speeds up convergence.

Finally, it should be pointed out that changing the chromosome length keeps individual fitness unchanged, so it does not affect selection (with roulette-wheel selection).

1.3 Description of the algorithm

Since the basic genetic algorithm does not converge to the global optimum, while the genetic algorithm which reserves the best individual of the current generation does, our approach adopts this policy. During the evolutionary process, we track the cumulative average of the individual average fitness up to the current generation. It is written as

    f_bar(G) = (1/G) * sum_{t=1}^{G} f_avg(t),

where G is the current evolutionary generation and f_avg(t) is the individual average fitness at generation t.

When the cumulative average fitness increases to k times (k > 1, k ∈ R) the initial individual average fitness, we change the chromosome length to m times (m is a positive integer) its current value and reduce the probabilities of crossover and mutation, which improves individual resolution, reduces the search step, and speeds up convergence. The procedure is as follows:

Step 1. Initialize the population, calculate the individual average fitness f_avg0, and set the change flag Flag = 1.

Step 2. Based on reserving the best individual of the current generation, carry out selection, reproduction, crossover, and mutation, and calculate the cumulative average of the individual average fitness f_bar up to the current generation.

Step 3. If f_bar / f_avg0 >= k and Flag equals 1, increase the chromosome length to m times its current value, reduce the probabilities of crossover and mutation, and set Flag = 0; otherwise continue evolving.

Step 4. If the end condition is satisfied, stop; otherwise go to Step 2.
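The following Python sketch is one possible reading of Steps 1-4 above, not the authors' implementation: it reserves the best individual, tracks the cumulative average fitness, and switches to a longer chromosome with lower crossover and mutation probabilities once the ratio reaches k. The parameter values (l0 = 10, Pc0 = 0.3, Pm0 = 0.1, k = 1.2, then l = 30, Pc = 0.1, Pm = 0.01) are the ones quoted in Section 2.1 below; the objective function and the zero-padding used to lengthen chromosomes are simplifications introduced here.

import random

A, B = -5.0, 5.0                       # variable domain [a, b]
K, M_FACTOR = 1.2, 3                   # change threshold k and length multiplier m

def decode(bits):
    # Map a binary chromosome to x in [A, B]; resolution is (B - A) / (2^l - 1).
    value = int("".join(map(str, bits)), 2)
    return A + (B - A) * value / (2 ** len(bits) - 1)

def fitness(bits):
    x = decode(bits)
    return 25.0 - x * x                # placeholder objective, maximum at x = 0

def evolve(pop_size=60, generations=200):
    l, pc, pm = 10, 0.3, 0.1           # initial l0, Pc0, Pm0
    pop = [[random.randint(0, 1) for _ in range(l)] for _ in range(pop_size)]
    f_avg0 = sum(map(fitness, pop)) / pop_size
    cumulative, flag = 0.0, True
    for g in range(1, generations + 1):
        pop.sort(key=fitness, reverse=True)
        children = [pop[0][:]]                            # reserve the best individual
        while len(children) < pop_size:
            p1, p2 = random.sample(pop[:pop_size // 2], 2)
            cut = random.randint(1, l - 1)
            child = p1[:cut] + p2[cut:] if random.random() < pc else p1[:]
            child = [1 - b if random.random() < pm else b for b in child]
            children.append(child)
        pop = children
        cumulative += sum(map(fitness, pop)) / pop_size   # running sum of average fitness
        if flag and cumulative / g >= K * f_avg0:         # Step 3: change condition
            pad = l * (M_FACTOR - 1)
            pop = [ind + [0] * pad for ind in pop]        # crude length extension
            l, pc, pm = l * M_FACTOR, 0.1, 0.01
            flag = False
    return decode(max(pop, key=fitness))

print(evolve())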
2 Test and analysis

We adopt the following two critical functions to test our approach and compare it with the genetic algorithm which only reserves the best individual:

    f1(x, y) = [sin^2(sqrt(x^2 + y^2)) - 0.5] / [1 + 0.01(x^2 + y^2)]^2,   x, y ∈ [-5, 5]

    f2(x, y) = 4 - (x^2 + 2y^2 - 0.3cos(3πx) - 0.4cos(4πy)),   x, y ∈ [-1, 1]

2.1 Analysis of convergence

During function testing, we use the following policies: roulette-wheel selection, one-point crossover, and bit mutation; the population size is 60, l is the chromosome length, and Pc and Pm are the probabilities of crossover and mutation, respectively. We randomly select four genetic algorithms reserving the best individual, with various fixed chromosome lengths and probabilities of crossover and mutation, to compare with our approach. Tab. 1 gives the average converging generation over 100 tests.

In our approach, we adopt the initial parameters l0 = 10, Pc0 = 0.3, Pm0 = 0.1 and k = 1.2; when the parameter-changing condition is satisfied, we adjust the parameters to l = 30, Pc = 0.1, Pm = 0.01. From Tab. 1, we can see that our approach improves the convergence speed of the genetic algorithm significantly, which accords with the above analysis.

2.2 Analysis of online and offline performance

Quantitative evaluation methods for genetic algorithms were proposed by De Jong, including online and offline performance. The former tests dynamic performance; the latter evaluates convergence performance. To better analyze the online and offline performance on the test functions, we multiply the fitness of each individual by 10, and we give curves over 4000 and 1000 generations for f1 and f2, respectively.

Fig. 1 Online and offline performance of f1

Fig. 2 Online and offline performance of f2

From Fig. 1 and Fig. 2, we can see that the online performance of our approach is only slightly worse than that of the fourth case, but much better than that of the second, third, and fifth cases, whose online performances are nearly the same. At the same time, the offline performance of our approach is better than that of the other four cases.

3 Conclusion

In this paper, based on some general theorems, an improved genetic algorithm using variant chromosome length and variant probabilities of crossover and mutation is proposed. Tests on some critical functions show that it improves the convergence speed of the genetic algorithm significantly, and that its comprehensive performance is better than that of the genetic algorithm which only reserves the best individual.

Appendix

Under the assumptions of Section 1, the validity of Theorem 1 and Theorem 2 is obvious.

Theorem 3. The mathematical expectation E_c(x) of the chromosome search step with one-point crossover is E_c(x) = (b - a) / (2l) * P_c, where P_c is the probability of crossover.

Proof. As shown in Fig. A1, assume that crossover happens at the k-th locus, i.e., the parent's loci from k+1 to l do not change, and the genes on the loci from 1 to k are exchanged. During crossover, the change probability of each gene on the loci from 1 to k is 1/2 ("1" to "0" or "0" to "1"). So, after crossover, the mathematical expectation of the chromosome search step on the loci from 1 to k is

    E_ck(x) = sum_{j=1}^{k} (1/2) * w_j = (1/2) * (b - a) / (2^l - 1) * (2^k - 1).        (A1)

Furthermore, the probability of crossover taking place at each locus of the chromosome is equal, namely P_c / l. Therefore, after crossover, the mathematical expectation of the chromosome search step is

    E_c(x) = sum_{k=1}^{l-1} (1/l) * P_c * E_ck(x).        (A2)

Substituting Eq. (A1) into Eq. (A2), we obtain

    E_c(x) = (P_c / (2l)) * (b - a) / (2^l - 1) * [(2^l - 1) - l] = ((b - a) / (2l)) * P_c * (1 - l / (2^l - 1)).

When l is large, l / (2^l - 1) ≈ 0, so E_c(x) ≈ ((b - a) / (2l)) * P_c.

Fig. A1 One-point crossover

Theorem 4. The mathematical expectation E_m(x) of the chromosome search step with bit mutation is E_m(x) = (b - a) * P_m, where P_m is the probability of mutation.

Proof. The mutation probability of the gene on each locus of the chromosome is equal, say P_m; therefore, the mathematical expectation of the mutation search step is

    E_m(x) = P_m * sum_{i=1}^{l} w_i = P_m * (b - a) / (2^l - 1) * sum_{i=1}^{l} 2^(i-1) = P_m * (b - a) / (2^l - 1) * (2^l - 1) = (b - a) * P_m.

A New Improved Genetic Algorithm and Its Performance Analysis

Abstract: Although the genetic algorithm is well known for its global search, parallel computation, robustness, and the fact that it needs no derivative information during evolution, it still has certain drawbacks, such as slow convergence.

English-Language Literature on Genetics


Genetics has been a field of study that has captivated the minds of scientists and laypeople alike for centuries. The intricacies of the genetic code and its influence on the development and behavior of living organisms have been the subject of extensive research and literature. In the realm of English literature, the topic of genetics has been explored in various forms, from scientific treatises to fictional narratives.

One of the seminal works in the field of genetics is Charles Darwin's "On the Origin of Species," published in 1859. This groundbreaking publication laid the foundation for the theory of evolution through natural selection, which has had a profound impact on our understanding of genetics and the diversity of life on Earth. Darwin's work not only presented his scientific findings but also engaged in a broader philosophical discourse on the implications of his theory, sparking debates and conversations that continue to this day.

Another notable contribution to the literature on genetics is the work of Gregor Mendel, an Augustinian friar whose experiments with pea plants in the mid-19th century laid the groundwork for our understanding of heredity. Mendel's laws of inheritance, which describe the patterns of genetic inheritance, have become a cornerstone of modern genetics. While Mendel's work was not widely recognized during his lifetime, it has since been celebrated as a pivotal moment in the history of science.

In the realm of fiction, genetics has been a recurring theme, often used as a tool to explore the ethical and social implications of scientific advancements. One such example is Aldous Huxley's "Brave New World," published in 1932, which presents a dystopian future where human beings are genetically engineered and society is strictly controlled. Huxley's novel raises questions about the potential consequences of genetic manipulation and the impact it could have on individual autonomy and societal structures.

Similarly, Mary Shelley's "Frankenstein," published in 1818, can be interpreted as an exploration of the ethical boundaries of scientific experimentation, particularly in the realm of creating life. The story of Victor Frankenstein's creation of a sentient being, and the subsequent consequences of his actions, has become a classic in the science fiction genre and continues to be analyzed and discussed in the context of genetics and the limits of scientific inquiry.

In more recent years, the field of genetics has been further explored in popular fiction, such as Michael Crichton's "Jurassic Park," which explores the potential of genetic engineering to resurrect extinct species. This novel, and the subsequent film adaptations, have captured the public's imagination and sparked discussions about the ethical and practical implications of such advancements.

Beyond fiction, the field of genetics has also been the subject of various scientific texts and scholarly works, which have helped to advance our understanding of the genetic mechanisms that govern the development and function of living organisms. These works range from textbooks and research papers to more accessible popular science books, which aim to bridge the gap between the scientific community and the general public.

One such example is James Watson's "The Double Helix," a firsthand account of his and Francis Crick's groundbreaking discovery of the structure of DNA, which revolutionized our understanding of the genetic code.
This book not only presents the scientific findings but also provides insights into the personalities and dynamics of the scientists involved in the research, offering a glimpse into the human side of scientific discovery.

Another notable work in the field of genetics literature is "The Selfish Gene" by Richard Dawkins, published in 1976. This book presents a gene-centric view of evolution, which has had a significant impact on our understanding of the mechanisms of natural selection and the role of genetics in shaping the natural world. Dawkins' engaging writing style and thought-provoking ideas have made this book a classic in the field of evolutionary biology and genetics.

In conclusion, the field of genetics has been the subject of a rich and diverse body of English literature, spanning from scientific treatises to imaginative works of fiction. These literary contributions have not only advanced our understanding of the genetic mechanisms that govern living organisms but have also explored the ethical, social, and philosophical implications of our growing knowledge in this field. As the field of genetics continues to evolve, it is likely that we will see new and innovative perspectives emerge in the literature, further enriching our understanding of this captivating and ever-expanding area of study.

[Mechanical Engineering Literature Translation] Optimization of Machining Fixture Locating and Clamping Positions Using Genetic Algorithms


Appendix

Machining fixture locating and clamping position optimization using genetic algorithms

Necmettin Kaya*
Department of Mechanical Engineering, Uludag University, Görükle, Bursa 16059, Turkey
Received 8 July 2004; accepted 26 May 2005
Available online 6 September 2005

Abstract

Deformation of the workpiece may cause dimensional problems in machining. Supports and locators are used in order to reduce the error caused by elastic deformation of the workpiece. The optimization of support, locator and clamp locations is a critical problem to minimize the geometric error in workpiece machining. In this paper, the application of genetic algorithms (GAs) to fixture layout optimization is presented to handle the fixture layout optimization problem. A genetic algorithm based approach is developed to optimise the fixture layout by integrating a finite element code running in batch mode to compute the objective function values for each generation. Case studies are given to illustrate the application of the proposed approach. A chromosome library approach is used to decrease the total solution time. The developed GA keeps track of previously analyzed designs; therefore the number of function evaluations is decreased by about 93%. The results of this approach show that fixture layout optimization problems are multi-modal problems. Optimized designs do not have any apparent similarities, although they provide very similar performances.

Keywords: Fixture design; Genetic algorithms; Optimization

1. Introduction

Fixtures are used to locate and constrain a workpiece during a machining operation. Minimizing workpiece and fixture tooling deflections due to clamping and cutting forces is critical to ensuring the accuracy of the machining operation. Traditionally, machining fixtures are designed and manufactured through trial-and-error, which proves to be both expensive and time-consuming for the manufacturing process. To ensure a workpiece is manufactured according to specified dimensions and tolerances, it must be appropriately located and clamped, making it imperative to develop tools that will eliminate costly and time-consuming trial-and-error designs. Proper workpiece location and fixture design are crucial to product quality in terms of precision, accuracy and finish of the machined part.

Theoretically, the 3-2-1 locating principle can satisfactorily locate all prismatic shaped workpieces. This method provides the maximum rigidity with the minimum number of fixture elements. To position a part from a kinematic point of view means constraining the six degrees of freedom of a free moving body (three translations and three rotations). Three supports are positioned below the part to establish the location of the workpiece on its vertical axis. Locators are placed on two peripheral edges and are intended to establish the location of the workpiece on the x and y horizontal axes. Properly locating the workpiece in the fixture is vital to the overall accuracy and repeatability of the manufacturing process. Locators should be positioned as far apart as possible and should be placed on machined surfaces wherever possible. Supports are usually placed to encompass the center of gravity of a workpiece and positioned as far apart as possible to maintain its stability. The primary responsibility of a clamp in the fixture is to secure the part against the locators and supports.
Clamps should not be expected to resist the cutting forces generated in the machining operation.

For a given number of fixture elements, the machining fixture synthesis problem is to find the optimal layout or positions of the fixture elements around the workpiece. In this paper, a method for fixture layout optimization using genetic algorithms is presented. The optimization objective is to search for a 2D fixture layout that minimizes the maximum elastic deformation at different locations of the workpiece. The ANSYS program has been used for calculating the deflection of the part under clamping and cutting forces. Two case studies are given to illustrate the proposed approach.

2. Review of related works

Fixture design has received considerable attention in recent years. However, little attention has been focused on optimum fixture layout design. Menassa and DeVries [1] used FEA for calculating deflections, using the minimization of the workpiece deflection at selected points as the design criterion. The design problem was to determine the position of supports. Meyer and Liou [2] presented an approach that uses linear programming techniques to synthesize fixtures for dynamic machining conditions. A solution for the minimum clamping forces and locator forces is given. Li and Melkote [3] used a nonlinear programming method to solve the layout optimization problem. The method minimizes workpiece location errors due to localized elastic deformation of the workpiece. Roy and Liao [4] developed a heuristic method to plan for the best supporting and clamping positions. Tao et al. [5] presented a geometrical reasoning methodology for determining the optimal clamping points and clamping sequence for arbitrarily shaped workpieces. Liao and Hu [6] presented a system for fixture configuration analysis based on a dynamic model which analyses the fixture-workpiece system subject to time-varying machining loads. The influence of clamping placement is also investigated. Li and Melkote [7] presented a fixture layout and clamping force optimal synthesis approach that accounts for workpiece dynamics during machining. A combined fixture layout and clamping force optimization procedure is presented. They used the contact elasticity modeling method that accounts for the influence of workpiece rigid body dynamics during machining. Amaral et al. [8] used ANSYS to verify fixture design integrity. They employed the 3-2-1 method. The optimization analysis is performed in ANSYS. Tan et al. [9] described the modeling, analysis and verification of optimal fixturing configurations by the methods of force closure, optimization and finite element modeling.

Most of the above studies use linear or nonlinear programming methods, which often do not give a global optimum solution. All of the fixture layout optimization procedures start with an initial feasible layout. Solutions from these methods depend on the initial fixture layout. They do not consider the fixture layout optimization on overall workpiece deformation.

The GA has been proven to be a useful technique for solving optimization problems in engineering [10-12]. Fixture design has a large solution space and requires a search tool to find the best design. Few researchers have used GAs for fixture design and fixture layout problems. Kumar et al. [13] have applied both GAs and neural networks for designing a fixture. Marcelin [14] has used GAs for the optimization of support positions. Vallapuzha et al.
[15] presented a GA based optimization method that uses spatial coordinates to represent the locations of fixture elements. The fixture layout optimization procedure was implemented using MATLAB and the genetic algorithm toolbox. HYPERMESH and MSC/NASTRAN were used for the FE model. Vallapuzha et al. [16] presented results of an extensive investigation into the relative effectiveness of various optimization methods. They showed that continuous GA yielded the best quality solutions. Li and Shiu [17] determined the optimal fixture configuration design for sheet metal assembly using a GA. MSC/NASTRAN has been used for fitness evaluation. Liao [18] presented a method to automatically select the optimal numbers of locators and clamps as well as their optimal positions in sheet metal assembly fixtures. Krishnakumar and Melkote [19] developed a fixture layout optimization technique that uses the GA to find the fixture layout that minimizes the deformation of the machined surface due to clamping and machining forces over the entire tool path. Locator and clamp positions are specified by node numbers. A built-in finite element solver was developed.

Some of the studies do not consider the optimization of the layout for the entire tool path, and chip removal is not taken into account. Some of the studies used node numbers as design parameters.

In this study, a GA tool has been developed to find the optimal locator and clamp positions for a 2D workpiece. Distances from the reference edges are used as design parameters rather than FEA node numbers. Fitness values of real-encoded GA chromosomes are obtained from the results of FEA. ANSYS has been used for the FEA calculations. A chromosome library approach is used in order to decrease the solution time. The developed GA tool is tested on two test problems. Two case studies are given to illustrate the developed approach. The main contributions of this paper can be summarized as follows:

(1) a GA code integrated with a commercial finite element solver has been developed;
(2) the GA uses a chromosome library in order to decrease the computation time;
(3) real design parameters are used rather than FEA node numbers;
(4) chip removal is taken into account as the tool forces move over the workpiece.

3. Genetic algorithm concepts

Genetic algorithms were first developed by John Holland. Goldberg [10] published a book explaining the theory and application examples of genetic algorithms in detail. A genetic algorithm is a random search technique that mimics some mechanisms of natural evolution. The algorithm works on a population of designs. The population evolves from generation to generation, gradually improving its adaptation to the environment through natural selection; fitter individuals have better chances of transmitting their characteristics to later generations.

In the algorithm, the selection of the natural environment is replaced by artificial selection based on a computed fitness for each design. The term fitness is used to designate the chromosome's chances of survival, and it is essentially the objective function of the optimization problem. The chromosomes that define characteristics of biological beings are replaced by strings of numerical values representing the design variables.

GAs are recognized to be different from traditional gradient-based optimization techniques in the following four major ways [10]:

1. GAs work with a coding of the design variables and parameters of the problem, rather than with the actual parameters themselves.
2. GAs make use of population-type search.
Many different design points are evaluated during each iteration instead of sequentially moving from one point to the next.
3. GAs need only a fitness or objective function value. No derivatives or gradients are necessary.
4. GAs use probabilistic transition rules to find new design points for exploration, rather than using deterministic rules based on gradient information to find these new points.

4. Approach

4.1. Fixture positioning principles

In the machining process, fixtures are used to keep workpieces in a desirable position for operations. The most important criteria for fixturing are workpiece position accuracy and workpiece deformation. A good fixture design minimizes workpiece geometric and machining accuracy errors. Another fixturing requirement is that the fixture must limit deformation of the workpiece. It is important to consider the cutting forces as well as the clamping forces. Without adequate fixture support, machining operations do not conform to designed tolerances. Finite element analysis is a powerful tool in the resolution of some of these problems [22].

The common locating method for prismatic parts is the 3-2-1 method. This method provides the maximum rigidity with the minimum number of fixture elements. A workpiece in 3D may be positively located by means of six points positioned so that they restrict nine degrees of freedom of the workpiece. The other three degrees of freedom are removed by clamp elements. An example layout for a 2D workpiece based on the 3-2-1 locating principle is shown in Fig. 4.

Fig. 4. 3-2-1 locating layout for a 2D prismatic workpiece

The number of locating faces must not exceed two so as to avoid a redundant location. Based on the 3-2-1 fixturing principle, there are two locating planes for accurate location, containing two and one locators respectively. Therefore, there is a maximum of two side clampings against each locating plane. Clamping forces are always directed towards the locators in order to force the workpiece to contact all locators. The clamping point should be positioned opposite the positioning points to prevent the workpiece from being distorted by the clamping force.

Since the machining forces travel along the machining area, it is necessary to ensure that the reaction forces at the locators are positive at all times. Any negative reaction force indicates that the workpiece is free from the fixture elements. In other words, loss of contact or separation between the workpiece and a fixture element might happen when the reaction force is negative. Positive reaction forces at the locators ensure that the workpiece maintains contact with all the locators from the beginning of the cut to the end. The clamping forces should be just sufficient to constrain and locate the workpiece without causing distortion or damage to the workpiece. Clamping force optimization is not considered in this paper.

4.2. Genetic algorithm based fixture layout optimization approach

In real design problems, the number of design parameters can be very large and their influence on the objective function can be very complicated. The objective function must be smooth, and a procedure is needed to compute gradients. Genetic algorithms strongly differ in conception from other search methods, including traditional optimization methods and other stochastic methods [23]. By applying GAs to fixture layout optimization, an optimal or a group of sub-optimal solutions can be obtained.

In this study, optimum locator and clamp positions are determined using genetic algorithms.
They are ideally suited for the fixture layout optimization problem, since no direct analytical relationship exists between the machining error and the fixture layout. Since the GA deals only with the design variables and the objective function value for a particular fixture layout, no gradient or auxiliary information is needed [19].

The flowchart of the proposed approach is given in Fig. 5.

Fixture layout optimization is implemented using developed software written in the Delphi language, named GenFix. Displacement values are calculated in the ANSYS software [24]. The execution of ANSYS in GenFix is simply done by the WinExec function in Delphi. The interaction between GenFix and ANSYS is implemented in four steps:

(1) Locator and clamp positions are extracted from the binary string as real parameters.
(2) These parameters and the ANSYS input batch file (modeling, solution and post-processing commands) are sent to ANSYS using the WinExec function.
(3) Displacement values are written to a text file after the solution.
(4) GenFix reads this file and computes the fitness value for the current locator and clamp positions.

In order to reduce the computation time, chromosomes and fitness values are stored in a library for further evaluation. GenFix first checks whether the current chromosome's fitness value has been calculated before. If not, the locator positions are sent to ANSYS; otherwise the fitness value is taken from the library. During generation of the initial population, every chromosome is checked for feasibility. If the constraint is violated, it is eliminated and a new chromosome is created. This process creates an entirely feasible initial population, which ensures that the workpiece is stable under the action of clamping and cutting forces for every chromosome in the initial population.

The written GA program was validated using two test cases. The first test case uses the Himmelblau function [21]. In the second test case, the GA program was used to optimise the support positions of a beam under uniform loading.
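The loop below is a schematic Python analogue (not the authors' Delphi/GenFix code) of the evaluation scheme just described: fitness values come from an external finite element run, and a chromosome library, here a plain dictionary, caches results so that a layout analysed once is never sent to the solver again. run_fea() is a placeholder standing in for the ANSYS batch call; the population size, gene count, and dummy objective are invented for the example.

import random

POP_SIZE, N_GENES, GENERATIONS = 20, 5, 50      # invented, illustrative settings

def run_fea(layout):
    # Placeholder for the batch FE run: in the paper, locator/clamp positions are
    # written into an ANSYS input file, ANSYS is executed in batch mode, and the
    # maximum deformation is read back from a text file.
    return sum((g - 0.5) ** 2 for g in layout)  # dummy "deformation"

library = {}                                     # chromosome library (cache)

def fitness(layout):
    key = tuple(round(g, 4) for g in layout)
    if key not in library:                       # call the FE solver once per distinct layout
        library[key] = -run_fea(key)             # smaller deformation -> higher fitness
    return library[key]

population = [[random.random() for _ in range(N_GENES)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    children = []
    while len(children) < POP_SIZE - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randint(1, N_GENES - 1)     # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.1:                # mutation
            child[random.randrange(N_GENES)] = random.random()
        children.append(child)
    population = parents + children

print("best layout:", max(population, key=fitness), "FE calls:", len(library))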
5. Fixture layout optimization case studies

The fixture layout optimization problem is defined as finding the positions of the locators and clamps so that workpiece deformation at a specific region is minimized. Note that the numbers of locators and clamps are not design parameters, since they are known and fixed for the 3-2-1 locating scheme. Hence, the design parameters are selected as the locator and clamp positions. Friction is not considered in this paper. Two case studies are given to illustrate the proposed approach.

6. Conclusion

In this paper, an evolutionary optimization technique for fixture layout optimization is presented. ANSYS has been used for the FE calculation of fitness values. It is seen that the combined genetic algorithm and FE method approach is a powerful approach for problems of this type. The GA approach is particularly suited for problems where there does not exist a well-defined mathematical relationship between the objective function and the design variables. The results prove the success of the application of GAs to fixture layout optimization problems.

In this study, the major obstacle to GA application in fixture layout optimization is the high computation cost. Re-meshing of the workpiece is required for every chromosome in the population. However, by using the chromosome library, the number of FE evaluations is decreased from 6000 to 415. This results in a tremendous gain in computational efficiency. The other way to decrease the solution time is to use distributed computation in a local area network.

The results of this approach show that fixture layout optimization problems are multi-modal problems. Optimized designs do not have any apparent similarities, although they provide very similar performances. It is shown that fixture layout problems are multi-modal; therefore heuristic rules for fixture design should be used in the GA to select the best design among the others.

Fig. 5. The flowchart of the proposed methodology and ANSYS interface.

Optimization of Machining Fixture Locating and Clamping Positions Using Genetic Algorithms

Abstract: Deformation of the workpiece may cause dimensional problems in machining.

Introduction to Genetic Algorithms (English version)


Method of steepest gradient: simple to implement and guaranteed to converge, but you must know something about the derivative, and it can easily get stuck in a local minimum.
Difficult Problems
Appeared in the Jan/Feb 2002 SIAM News in the $100, 100-Digit Challenge.

exp(sin(50*x)) + sin(60*exp(y)) + sin(70*sin(x)) + sin(sin(80*y)) - sin(10*(x+y)) + 0.25*(x^2 + y^2)
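To get a feel for why this function defeats gradient-based methods, the short Python sketch below evaluates it directly and runs a crude random search; the search box [-1, 1] x [-1, 1] is an assumption made here for the example (the slide does not state a domain), and no claim is made about the true minimum.

import math, random

def f(x, y):
    return (math.exp(math.sin(50 * x)) + math.sin(60 * math.exp(y))
            + math.sin(70 * math.sin(x)) + math.sin(math.sin(80 * y))
            - math.sin(10 * (x + y)) + 0.25 * (x ** 2 + y ** 2))

# Crude random search over an assumed box; the landscape is riddled with local minima.
best = min(f(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200000))
print("best value found by random search:", best)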
Time Share Evaluation Function
function [assignment, soln] = local_min(assignment, options)
global family_info
cost = 0;
occupancy = zeros(16,1);
for i = 1:64
    if ceil(assignment(i)/4) == family_info(i,2)        % first choice
        cost = cost + 0;
    elseif ceil(assignment(i)/4) == family_info(i,3)    % second choice
        cost = cost + 2*family_info(i,1);
    elseif ceil(assignment(i)/4) == family_info(i,4)    % third choice
        cost = cost + 4*family_info(i,1);
    else                                                % didn't get any choice
        cost = cost + 50 + 7*family_info(i,1);
    end
    building = ceil(assignment(i)/4);
    occupancy(building) = occupancy(building) + family_info(i,1);
end
for i = 1:16
    if occupancy(i) > 22                                % capacity penalty per building
        cost = cost + 1000;
    end
end
soln = -cost;                                           % higher is better: negate the cost

Foreign-Language Translation: A Fast Multi-Objective Genetic Algorithm Based on a Tree Structure


Appendix 4: A Fast Multi-Objective Genetic Algorithm Based on a Tree Structure

Introduction: Generally speaking, solving multi-objective scientific and engineering problems is a very difficult task. In these multi-objective optimization problems (MOPs), the objectives often conflict with one another in a high-dimensional problem space, and multi-objective optimization also requires more computational resources. Some classical optimization methods convert the multi-objective problem into a single-objective one, in which case many runs are required to find multiple solutions. This makes an algorithm that returns a set of candidate solutions preferable to one that returns only a single solution based on a weighting of the objectives. For this reason, interest in applying evolutionary algorithms (EAs) to multi-objective optimization has grown steadily over the past 20 years.

Many multi-objective evolutionary algorithms (MOEAs) have been proposed; they use the concept of Pareto dominance to guide the search and return a set of non-dominated solutions as the result. Unlike single-objective optimization, where the optimum is taken as the final solution, multi-objective optimization has two goals: (1) convergence to the Pareto-optimal set, and (2) maintenance of solution diversity within the Pareto-optimal set. Many strategies and methods have been proposed to handle these two, sometimes conflicting, tasks. A common problem with these methods is that they tend to be intricate: to obtain better solutions for the two tasks, complex strategies are usually employed, and many parameters must be tuned according to experience and the problem information already obtained. In addition, many MOEAs have a computational complexity as high as O(GMN^2) or require even more processing time (G is the number of generations, M the number of objective functions, and N the population size; these symbols keep the same meaning in what follows).

In this paper, we propose a fast multi-objective genetic algorithm based on a tree structure. This data structure is a binary tree that stores the three-valued dominance relations among solutions in multi-objective optimization (i.e., dominating, dominated, and non-dominated); we therefore name it the dominance tree (DT). Owing to some of its unique properties, the dominance tree implicitly contains density information about the individuals in the population and markedly reduces the number of comparisons between individuals. Experiments on computational complexity also show that the dominance tree is an efficient tool for handling populations. The dominance-tree-based evolutionary algorithm (DTEA) unifies the convergence and diversity strategies within the dominance tree, i.e., the two goals of multi-objective evolutionary algorithms, and because it has only a few parameters, the algorithm is easy to use.
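To make the notion of Pareto dominance used above concrete, here is a small generic Python sketch (it does not reproduce the paper's dominance tree). For minimization, a solution a dominates b if a is no worse in every objective and strictly better in at least one; the non-dominated subset is what an MOEA returns.

def dominates(a, b):
    # Minimization: a dominates b if it is no worse in every objective and better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    # Keep the points that no other point dominates (a brute-force Pareto filter).
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

objectives = [(1.0, 5.0), (2.0, 3.0), (3.0, 3.5), (4.0, 1.0), (2.5, 2.5)]
print(non_dominated(objectives))   # (3.0, 3.5) is dominated by (2.0, 3.0) and is dropped

This brute-force filter costs on the order of M*N^2 comparisons per call, which is exactly the kind of work the dominance tree described above is designed to reduce.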

Automated Warehouses (AS/RS): Chinese-English Translated Foreign Literature


(Document contains the English original and the Chinese translation)

The Order-Picking Problem in a Multi-Aisle Automated Storage/Retrieval System Served by a Single Storage/Retrieval Machine

Abstract: With the development of modern technology, warehouse storage systems have undergone enormous changes in design and operation. Automated storage/retrieval systems (AS/RS) driven by embedded computers are becoming more and more common, and as the use of AS/RS grows, so does the need for computer control and support. This study addresses the order-picking problem in a multi-aisle AS/RS in which, during storage/retrieval (S/R) operations, each item may be assigned to several storage locations. The goal of the proposed algorithms is to minimize the travel time of the S/R machine when picking the requested items. We develop genetic and heuristic algorithms and compare them with the optimal solutions obtained for a large set of problems.

Keywords: automated warehouse, AS/RS, order picking, genetic algorithm.

1. Introduction

In today's production environment, inventory levels are kept lower than in the past, because smaller storage systems not only reduce inventory but also increase the speed of order picking. An AS/RS achieves high operational efficiency by providing fast response, and it also improves system response time by reducing the total travel time needed to complete picking. It is therefore widely used in manufacturing, warehousing, and distribution facilities.

Order picking is a basic component of the warehouse retrieval function. Its main purpose is to select the appropriate quantities of goods from pre-specified locations to satisfy customer orders. Although picking is only one of the handling operations in a warehouse, it is "the most time-consuming and expensive warehousing function; in many cases, the profitability of a warehouse depends on how well the picking operation is handled" (Bozer and White).

Ratliff and Rosenthal, in their study of the order-picking problem in automated storage/retrieval systems (AS/RS), developed a graph-based algorithm for finding the shortest picking tour in a ladder-type layout. Roodbergen and de Koster extended the Ratliff and Rosenthal algorithm: for the picking problem in parallel aisles that can be crossed at the ends and in the middle, they developed a dynamic programming algorithm. Van den Berg and Gademann, in turn, developed a transportation problem (TP) model as a tool for evaluating assigned storage and retrieval operations.

A New Improved Genetic Algorithm and Its Performance Analysis (foreign literature translation, Chinese-English)


A New Improved Genetic Algorithm and Its Performance Analysis

Abstract: Although the genetic algorithm is well known for its global search, parallel computation, robustness, and the fact that it needs no derivative information during evolution, it still has certain drawbacks, such as slow convergence. Based on several basic theorems, this paper proposes an improved genetic algorithm with variant chromosome length and variant crossover and mutation probabilities. Its main idea is as follows: at the beginning of evolution, a shorter chromosome length and higher crossover and mutation probabilities are used; near the global optimum, a longer chromosome length and lower crossover and mutation probabilities are used. Finally, tests on several critical functions show that our approach significantly improves the convergence speed of the genetic algorithm, and that its overall performance is better than that of the genetic algorithm which only reserves the best individual.

Keywords: variant chromosome length; mutation probability; genetic algorithm; online and offline performance

The genetic algorithm is an adaptive search technique based on the selection and reproduction mechanisms of natural evolution; it was first proposed by Holland in 1975. It is well known for its global search, parallel computation, robustness, and freedom from derivative information during evolution. However, it also has some drawbacks, such as poor local search, premature convergence, and slow convergence speed. In recent years these problems have been studied extensively.

This paper proposes an improved genetic algorithm with variant chromosome length and variant crossover and mutation probabilities. Tests on several critical functions show that our approach significantly improves the convergence speed of the genetic algorithm, and that its overall performance is better than that of the genetic algorithm which only reserves the best individual.

Section 1 presents the new algorithm. Section 2 compares its efficiency with that of the genetic algorithm which only reserves the best individual through several optimization examples. Section 3 gives the conclusions. Finally, proofs of the relevant theorems can be found in the appendix.

1. Description of the algorithm

1.1 Some theorems

Before presenting our algorithm, we state some general theorems (see the appendix). Assume there is a single variable (a multivariable problem can be divided into several sections, one section per variable), x ∈ [a, b], x ∈ R, and the chromosome length under binary encoding is l.

Theorem 1. The minimal resolution of the chromosome is s = (b - a) / (2^l - 1).

Theorem 2. The weight of the i-th bit of the chromosome is w_i = (b - a) / (2^l - 1) * 2^(i-1), i = 1, 2, ..., l.

Theorem 3. The mathematical expectation E_c(x) of the chromosome search step with one-point crossover is E_c(x) = (b - a) / (2l) * P_c, where P_c is the crossover probability.

Theorem 4. The mathematical expectation E_m(x) of the chromosome search step with bit mutation is E_m(x) = (b - a) * P_m, where P_m is the mutation probability.

1.2 Mechanism of the algorithm

During the evolutionary process, we assume that the value domains of the variables are fixed and that the crossover probability is a constant. From Theorems 1 and 3 we therefore know that a longer chromosome has a smaller search step and a higher resolution, and vice versa.

Genetic-Algorithm-Based Vehicle Routing: Chinese-English Foreign Literature Translation


Undergraduate Graduation Project (Thesis), Chinese-English Translation

Translated text: Research on Vehicle Routing Based on Genetic Algorithms (author name transliterated in the source as 克鲁尼·贝克)

1 Introduction

The basic vehicle routing problem (VRP) consists of a number of customers, each requiring the delivery of goods of a specified weight. Vehicles dispatched from a depot must make the deliveries as required. Each vehicle route must start and finish at the depot, all customer demands must be satisfied, and each customer is served by a single vehicle. Vehicle capacity is limited, and each vehicle may also have its own maximum travel distance. In the latter case, the travel-distance limit may be related to the individual customers, since vehicles are scheduled according to customer-specific requirements; serving too many customers with one vehicle would therefore make its total travel distance infeasible. A feasible solution is a set of delivery routes that satisfies these customer requirements at minimum transportation cost; in practice this usually means minimizing the total distance traveled, or minimizing the number of vehicles used and then minimizing the total distance traveled by that fleet. Laporte, for example, gives various mathematical formulations of the vehicle routing problem.

Using heuristics to solve the problem is more practical. There is a large body of research literature on this topic, including the various extensions surveyed by Laporte and Osman. Taillard and Rochat obtained the best results on benchmark vehicle routing problems using tabu search, and other researchers have obtained similar results with tabu search and simulated annealing. However, Renaud observed that such heuristics require considerable computation time and many parameter settings. More recently, a new algorithm, ant colony optimization, has been applied to this hard combinatorial optimization problem, with many reported successes, including applications to the vehicle routing problem; combined with route-improvement heuristics, it gave results only slightly inferior to tabu search.

Today, genetic algorithms (GAs), as one of the modern metaheuristics, are widely used. GAs have been applied to several combinatorial optimization variants of vehicle routing, as well as to school bus routing, and hybrid vehicle routing approaches using GAs have also been widely reported. However, the impact of GAs on the VRP itself has so far been limited. The purpose of this study is to present a conceptually simple genetic algorithm for the vehicle routing problem that is competitive with other modern heuristics in both computation time and solution quality.
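As a deliberately generic illustration of how a GA can encode the VRP described above (this is not the specific algorithm of the translated paper), the sketch below uses a permutation of customers as the chromosome, decodes it greedily into capacity-feasible routes, and minimizes total distance; the coordinates, demands, and capacity are made-up example data.

import math, random

DEPOT = (0.0, 0.0)
CUSTOMERS = [((2, 3), 4), ((5, 1), 3), ((6, 6), 5),
             ((1, 7), 2), ((8, 2), 6), ((3, 9), 4)]   # (coordinates, demand) - example data
CAPACITY = 10

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def decode(perm):
    # Split the customer permutation into routes, opening a new vehicle whenever
    # the next customer would exceed the vehicle capacity.
    routes, route, load = [], [], 0
    for c in perm:
        demand = CUSTOMERS[c][1]
        if load + demand > CAPACITY:
            routes.append(route)
            route, load = [], 0
        route.append(c)
        load += demand
    routes.append(route)
    return routes

def total_distance(perm):
    total = 0.0
    for route in decode(perm):
        stops = [DEPOT] + [CUSTOMERS[c][0] for c in route] + [DEPOT]
        total += sum(dist(a, b) for a, b in zip(stops, stops[1:]))
    return total

def order_crossover(p1, p2):
    # Keep a slice of one parent and fill the rest in the order of the other parent.
    i, j = sorted(random.sample(range(len(p1)), 2))
    middle = p1[i:j]
    rest = [c for c in p2 if c not in middle]
    return rest[:i] + middle + rest[i:]

def evolve(pop_size=40, generations=200, pm=0.2):
    n = len(CUSTOMERS)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=total_distance)
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            child = order_crossover(*random.sample(parents, 2))
            if random.random() < pm:                    # swap mutation keeps a valid permutation
                i, j = random.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = parents + children
    best = min(pop, key=total_distance)
    return decode(best), total_distance(best)

print(evolve())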

Principles of Genetic Algorithms (in English)


Soft Computing Lab.
WASEDA UNIVERSITY, IPS
Evolutionary Algorithms and Optimization:
Theory and its Applications
Part 2: Network Design
Network Design Problems
Minimum Spanning Tree
Logistics Network Design
Communication Network and LAN Design
Book Info
Provides a comprehensive survey of selection strategies, penalty techniques, and genetic operators used for constrained and combinatorial problems. Shows how to use genetic algorithms to make production schedules and enhance system reliability.
Part 4: Scheduling
Machine Scheduling and Multi-processor Scheduling
Flow-shop Scheduling and Job-shop Scheduling
Resource-constrained Project Scheduling
Advanced Planning and Scheduling
Multimedia Real-time Task Scheduling

English Literature with Translation (Automatic Design of Fuzzy Systems Using Neural Networks and Genetic Algorithms)


Appendix 1: Automatic Design of Fuzzy Systems Using Neural Networks and Genetic Algorithms

Abstract: This paper describes the design of fuzzy systems based on neural networks and genetic algorithms, with the aim of shortening development time and improving system performance. A method is presented that uses neural networks to represent multi-dimensional nonlinear membership functions and to tune membership function parameters. A genetic-algorithm-based platform that integrates and automates the fuzzy system design decisions is also described.

1 Introduction

Fuzzy systems are usually designed by hand. This causes two problems: first, because manual design is time-consuming, development costs are high; second, there is no guarantee that an optimal solution will be obtained. There are two independent ways to shorten development time and improve the performance of fuzzy systems: development support tools and automatic design methods. The former includes development environments that assist fuzzy system design, many of which are already available commercially. The latter refers to techniques for automatic design. Although automatic design cannot guarantee an optimal solution, it is preferable to manual techniques, because the design is guided towards a solution that is optimal with respect to some criterion.

There are three main design decisions in fuzzy control system design: (1) determining the number of fuzzy rules, (2) determining the shape of the membership functions, and (3) determining the tuning parameters. In addition, two further decisions must be made: (4) determining the number of input variables, and (5) determining the reasoning method. Decisions (1) and (2) together determine how the input space is covered, and they are highly interdependent. Decision (3) determines the coefficients of the linear equations in the TSK (Takagi-Sugeno-Kang) model [1], or the membership functions of the consequent parts in the Mamdani model [2]. Decision (4) corresponds to finding the minimal set of input variables relevant for computing the desired decision or control values; techniques such as backward elimination and information criteria are often used for this decision. Decision (5) amounts to choosing which fuzzy operators and which defuzzification method to use. Although several fuzzy operators and inference methods have been proposed, there is still no criterion for choosing among them. Reference [5] showed that a dynamically changing inference method, which adapts according to the inference environment, outperforms any fixed inference method in performance and fault tolerance.

Neural network models (most commonly gradient-based) and genetic algorithms are used for the automatic design of fuzzy systems. Neural-network-based methods are mainly used to design the fuzzy membership functions. There are two main approaches: (1) direct design of multi-dimensional fuzzy membership functions, in which the number of rules is first determined from the data set.

Genetic Algorithm Paper and Translation


An Adaptive Genetic Algorithm with a Diversity-Based Mutation Operator and Its Global Convergence

Abstract: This paper proposes an adaptive genetic algorithm with a diversity-based mutation operator (AGADM), which combines adaptive crossover and mutation rates with a diversity-based mutation operator. Using homogeneous finite Markov chains, it is proved that, provided the best solution is retained, both AGADM and the genetic algorithm with a diversity-based mutation operator (GADM) converge to the global optimum; the convergence of the adaptive genetic algorithm with adaptive crossover and mutation rates is also studied, and the results of these algorithms on unimodal and multimodal function optimization problems are compared. The results show that, for multimodal functions, the average number of generations needed by AGADM to converge is 900, fewer than required by the adaptive genetic algorithm with adaptive probabilities and by the adaptive genetic algorithm with diversity-based mutation. AGADM can avoid premature convergence and balances the conflict between premature convergence and convergence speed.

1 Introduction

It is well known that the performance of a genetic algorithm depends on the genetic operators it uses, and improving the adaptivity of the genetic operators is an effective way to improve the performance of a genetic algorithm, for example by obtaining better optima and increasing the convergence speed. Therefore, some researchers have tried to adjust the genetic operators adaptively according to the state of the solutions [1-3]. At the same time, premature convergence is a major problem for genetic algorithms, and adaptive genetic algorithms are prone to it [4]. To overcome premature convergence, we introduce the concept of diversity. Diversity has a strong influence on the performance of genetic algorithms, especially with respect to avoiding premature convergence and escaping local optima, and some researchers [5-8] have used population diversity to control the search direction of evolutionary algorithms.

By combining the adaptive crossover and mutation operators proposed by Srinivas with a diversity-based mutation operator, this paper proposes an adaptive genetic algorithm with a diversity-based mutation operator (AGADM), and proves, using homogeneous finite Markov chains, that AGADM and the genetic algorithm with a diversity-based mutation operator (GADM) converge to the global optimum provided the best solution is retained, whereas the adaptive genetic algorithm with adaptive crossover and mutation rates (AGA) does not always converge to the global optimum. Finally, the performance of AGA, GADM, and AGADM is compared.

2 Main content

Genetic algorithms have traditionally been used to solve static optimization problems. Assume the population contains N individuals (candidate solutions), each represented by a binary string of fixed length l: g_{i1} g_{i2} ... g_{il}, with g_{ij} ∈ {0, 1}, j = 1, 2, ..., l, i = 1, 2, ..., N; the fitness values are {f_i | 0 ≤ f_i < ∞, i = 1, 2, ..., N}.

Definition 1. Let z_t be a sequence of random variables representing the best fitness value among the individuals of the population at generation t when the population is in state i, and let f* be the global optimum.
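The sketch below illustrates only the general principle of a diversity-based mutation operator, not the AGADM algorithm itself: population diversity is measured, here by the normalised average pairwise Hamming distance (a common choice assumed for the example, since the paper's exact measure is not reproduced above), and the mutation rate is raised as diversity falls, to counteract premature convergence.

import itertools, random

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def diversity(population):
    # Normalised average pairwise Hamming distance (0 = identical, 1 = maximally spread).
    l = len(population[0])
    pairs = list(itertools.combinations(population, 2))
    return sum(hamming(a, b) for a, b in pairs) / (len(pairs) * l)

def diversity_based_pm(population, pm_min=0.01, pm_max=0.2):
    # Low diversity pushes the mutation rate toward pm_max to counter premature convergence.
    return pm_min + (pm_max - pm_min) * (1.0 - diversity(population))

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
print("diversity:", round(diversity(population), 3),
      "adaptive pm:", round(diversity_based_pm(population), 3))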

Genetic Algorithm Chinese-English Presentation Slides

● An individual, by analogy with an individual organism, is the name given to an object in the problem being solved (usually a candidate solution); an individual is also a point in the search space.
An individual is simply an object (usually a candidate solution) of the problem to be solved; it is also a point in the search space.
Steps

Step 1: Define a fitness function f(x) on the search space U; specify the population size N, the crossover rate Pc, the mutation rate Pm, and the number of generations T.
Step 2: Randomly generate N individuals s1, s2, ..., sN in U to form the initial population S = {s1, s2, ..., sN}; set the generation counter t = 1.
Step 3: Compute the fitness f of each individual in S.
Step 4: If the termination condition is satisfied, take the individual in S with the highest fitness as the result and stop.
The computational formula of P(xi):

    P(xi) = f(xi) / Σ_{j=1}^{N} f(xj)
Crossover means exchanging the genes at certain positions of two chromosomes. There is a chance that the chromosomes of the two parents are copied unmodified as offspring, or randomly recombined (crossover) to form offspring.
● Roulette-wheel selection
Roulette-wheel illustration (selection probabilities): s1 = 0.14, s2 = 0.49, s3 = 0.06, s4 = 0.31.
In the algorithm, roulette-wheel selection can be simulated by the following sub-procedure:
(1) Generate a uniformly distributed random number r in the interval [0, 1].
(2) If r ≤ q1, chromosome x1 is selected.
(3) If q_{k-1} < r ≤ q_k (2 ≤ k ≤ N), chromosome x_k is selected.
Here q_i is called the cumulative probability of chromosome x_i (i = 1, 2, ..., N), computed as q_i = Σ_{j=1}^{i} P(x_j).
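A direct Python transcription of this sub-procedure follows; the fitness values are the example numbers from the wheel above, and the selection logic is exactly the cumulative-probability scheme just described.

import random

def roulette_select(chromosomes, fitnesses):
    total = sum(fitnesses)
    cumulative, running = [], 0.0
    for f in fitnesses:
        running += f / total          # P(xi) = f(xi) / sum_j f(xj)
        cumulative.append(running)    # qi = P(x1) + ... + P(xi)
    r = random.random()               # step (1): r uniform in [0, 1]
    for chromosome, q in zip(chromosomes, cumulative):
        if r <= q:                    # steps (2)-(3): the first k with r <= qk
            return chromosome
    return chromosomes[-1]            # guard against floating-point rounding

chromosomes = ["s1", "s2", "s3", "s4"]
fitnesses = [0.14, 0.49, 0.06, 0.31]  # the slice sizes from the wheel above
print(roulette_select(chromosomes, fitnesses))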

Medicine and Health - Foreign Literature - Genetic Algorithms

There are many advantages of genetic algorithms over traditional optimization algorithms, and the two most noticeable advantages are: the ability to deal with complex problems, and parallelism. Genetic algorithms can deal with various types of optimization, whether the objective (fitness) function is stationary or non-stationary (changes with time), linear or nonlinear, continuous or discontinuous, or subject to random noise. As multiple offspring in a population act like independent agents, the population (or any subgroup) can explore the search space in many directions simultaneously. This feature makes it ideal to parallelize the algorithms for implementation. Different parameters and even different groups of strings can be manipulated at the same time.
However, genetic algorithms also have some disadvantages. The formulation of the fitness function, the use of population size, the choice of important parameters such as the rates of mutation and crossover, and the selection criteria for the new population should be carefully carried out. Any inappropriate choice will make it difficult for the algorithm to converge, or it will simply produce meaningless results.
2.2 Genetic Algorithms
2.2.1 Basic Procedure

Foreign-Language Translation - Genetic Algorithms (complete Word version)


What is a genetic algorithm?
● Methods of representation
● Methods of selection
● Methods of change
● Other problem-solving techniques

Concisely stated, a genetic algorithm (or GA for short) is a programming technique that mimics biological evolution as a problem-solving strategy. Given a specific problem to solve, the input to the GA is a set of potential solutions to that problem, encoded in some fashion, and a metric called a fitness function that allows each candidate to be quantitatively evaluated. These candidates may be solutions already known to work, with the aim of the GA being to improve them, but more often they are generated at random.

The GA then evaluates each candidate according to the fitness function. In a pool of randomly generated candidates, of course, most will not work at all, and these will be deleted. However, purely by chance, a few may hold promise - they may show activity, even if only weak and imperfect activity, toward solving the problem. These promising candidates are kept and allowed to reproduce. Multiple copies are made of them, but the copies are not perfect; random changes are introduced during the copying process. These digital offspring then go on to the next generation, forming a new pool of candidate solutions, and are subjected to a second round of fitness evaluation. Those candidate solutions which were worsened, or made no better, by the changes to their code are again deleted; but again, purely by chance, the random variations introduced into the population may have improved some individuals, making them into better, more complete or more efficient solutions to the problem at hand. Again these winning individuals are selected and copied over into the next generation with random changes, and the process repeats. The expectation is that the average fitness of the population will increase each round, and so by repeating this process for hundreds or thousands of rounds, very good solutions to the problem can be discovered.

As astonishing and counterintuitive as it may seem to some, genetic algorithms have proven to be an enormously powerful and successful problem-solving strategy, dramatically demonstrating the power of evolutionary principles. Genetic algorithms have been used in a wide variety of fields to evolve solutions to problems as difficult as or more difficult than those faced by human designers. Moreover, the solutions they come up with are often more efficient, more elegant, or more complex than anything comparable a human engineer would produce. In some cases, genetic algorithms have come up with solutions that baffle the programmers who wrote the algorithms in the first place!

Methods of representation

Before a genetic algorithm can be put to work on any problem, a method is needed to encode potential solutions to that problem in a form that a computer can process. One common approach is to encode solutions as binary strings: sequences of 1's and 0's, where the digit at each position represents the value of some aspect of the solution. Another, similar approach is to encode solutions as arrays of integers or decimal numbers, with each position again representing some particular aspect of the solution. This approach allows for greater precision and complexity than the comparatively restricted method of using binary numbers only, and often "is intuitively closer to the problem space" (Fleming and Purshouse 2002, p. 1228).
This technique was used, for example, in the work of Steffen Schulze-Kremer, who wrote a genetic algorithm to predict the three-dimensional structure of a protein based on the sequence of amino acids that go into it (Mitchell 1996, p. 62). Schulze-Kremer's GA used real-valued numbers to represent the so-called "torsion angles" between the peptide bonds that connect amino acids. (A protein is made up of a sequence of basic building blocks called amino acids, which are joined together like the links in a chain. Once all the amino acids are linked, the protein folds up into a complex three-dimensional shape based on which amino acids attract each other and which ones repel each other. The shape of a protein determines its function.) Genetic algorithms for training neural networks often use this method of encoding also.

A third approach is to represent individuals in a GA as strings of letters, where each letter again stands for a specific aspect of the solution. One example of this technique is Hiroaki Kitano's "grammatical encoding" approach, where a GA was put to the task of evolving a simple set of rules called a context-free grammar that was in turn used to generate neural networks for a variety of problems (Mitchell 1996, p. 74).

The virtue of all three of these methods is that they make it easy to define operators that cause the random changes in the selected candidates: flip a 0 to a 1 or vice versa, add or subtract from the value of a number by a randomly chosen amount, or change one letter to another. (See the section on Methods of change for more detail about the genetic operators.) Another strategy, developed principally by John Koza of Stanford University and called genetic programming, represents programs as branching data structures called trees (Koza et al. 2003, p. 35). In this approach, random changes can be brought about by changing the operator or altering the value at a given node in the tree, or replacing one subtree with another.

Figure 1: Three simple program trees of the kind normally used in genetic programming. The mathematical expression that each one represents is given underneath.

It is important to note that evolutionary algorithms do not need to represent candidate solutions as data strings of fixed length. Some do represent them in this way, but others do not; for example, Kitano's grammatical encoding discussed above can be efficiently scaled to create large and complex neural networks, and Koza's genetic programming trees can grow arbitrarily large as necessary to solve whatever problem they are applied to.

Methods of selection

There are many different techniques which a genetic algorithm can use to select the individuals to be copied over into the next generation, but listed below are some of the most common methods. Some of these methods are mutually exclusive, but others can be and often are used in combination.

Elitist selection: The most fit members of each generation are guaranteed to be selected. (Most GAs do not use pure elitism, but instead use a modified form where the single best, or a few of the best, individuals from each generation are copied into the next generation just in case nothing better turns up.)

Fitness-proportionate selection: More fit individuals are more likely, but not certain, to be selected.

Roulette-wheel selection: A form of fitness-proportionate selection in which the chance of an individual's being selected is proportional to the amount by which its fitness is greater or less than its competitors' fitness. (Conceptually, this can be represented as a game of roulette - each individual gets a slice of the wheel, but more fit ones get larger slices than less fit ones. The wheel is then spun, and whichever individual "owns" the section on which it lands each time is chosen.)
Scaling selection: As the average fitness of the population increases, the strength of the selective pressure also increases and the fitness function becomes more discriminating. This method can be helpful in making the best selection later on when all individuals have relatively high fitness and only small differences in fitness distinguish one from another.

Tournament selection: Subgroups of individuals are chosen from the larger population, and members of each subgroup compete against each other. Only one individual from each subgroup is chosen to reproduce.

Rank selection: Each individual in the population is assigned a numerical rank based on fitness, and selection is based on this ranking rather than absolute differences in fitness. The advantage of this method is that it can prevent very fit individuals from gaining dominance early at the expense of less fit ones, which would reduce the population's genetic diversity and might hinder attempts to find an acceptable solution.

Generational selection: The offspring of the individuals selected from each generation become the entire next generation. No individuals are retained between generations.

Steady-state selection: The offspring of the individuals selected from each generation go back into the pre-existing gene pool, replacing some of the less fit members of the previous generation. Some individuals are retained between generations.

Hierarchical selection: Individuals go through multiple rounds of selection each generation. Lower-level evaluations are faster and less discriminating, while those that survive to higher levels are evaluated more rigorously. The advantage of this method is that it reduces overall computation time by using faster, less selective evaluation to weed out the majority of individuals that show little or no promise, and only subjecting those who survive this initial test to more rigorous and more computationally expensive fitness evaluation.

Methods of change

Once selection has chosen fit individuals, they must be randomly altered in hopes of improving their fitness for the next generation. There are two basic strategies to accomplish this. The first and simplest is called mutation. Just as mutation in living things changes one gene to another, so mutation in a genetic algorithm causes small alterations at single points in an individual's code.

The second method is called crossover, and entails choosing two individuals to swap segments of their code, producing artificial "offspring" that are combinations of their parents. This process is intended to simulate the analogous process of recombination that occurs to chromosomes during sexual reproduction. Common forms of crossover include single-point crossover, in which a point of exchange is set at a random location in the two individuals' genomes, and one individual contributes all its code from before that point and the other contributes all its code from after that point to produce an offspring, and uniform crossover, in which the value at any given location in the offspring's genome is either the value of one parent's genome at that location or the value of the other parent's genome at that location, chosen with 50/50 probability.
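A small Python sketch of the two operators just described, single-point crossover and point mutation, on 8-bit strings (the same setting as the figure caption that follows); the strings and mutation rate are arbitrary examples.

import random

def single_point_crossover(parent_a, parent_b):
    # One individual contributes everything before the exchange point, the other everything after it.
    point = random.randint(1, len(parent_a) - 1)
    return parent_a[:point] + parent_b[point:]

def mutate(individual, rate=0.05):
    # Small alterations at single points: flip a 0 to a 1 or vice versa.
    return [1 - gene if random.random() < rate else gene for gene in individual]

a = [1, 1, 1, 1, 1, 0, 0, 0]
b = [0, 0, 0, 0, 0, 1, 1, 1]
print(mutate(single_point_crossover(a, b)))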
Figure 2: Crossover and mutation. The above diagrams illustrate the effect of each of these genetic operators on individuals in a population of 8-bit strings. The upper diagram shows two individuals undergoing single-point crossover; the point of exchange is set between the fifth and sixth positions in the genome, producing a new individual that is a hybrid of its progenitors. The second diagram shows an individual undergoing mutation at position 4, changing the 0 at that position in its genome to a 1.

Other problem-solving techniques

With the rise of artificial life computing and the development of heuristic methods, other computerized problem-solving techniques have emerged that are in some ways similar to genetic algorithms. This section explains some of these techniques, in what ways they resemble GAs and in what ways they differ.

• Neural networks

A neural network, or neural net for short, is a problem-solving method based on a computer model of how neurons are connected in the brain. A neural network consists of layers of processing units called nodes joined by directional links: one input layer, one output layer, and zero or more hidden layers in between. An initial pattern of input is presented to the input layer of the neural network, and nodes that are stimulated then transmit a signal to the nodes of the next layer to which they are connected. If the sum of all the inputs entering one of these virtual neurons is higher than that neuron's so-called activation threshold, that neuron itself activates, and passes on its own signal to neurons in the next layer. The pattern of activation therefore spreads forward until it reaches the output layer and is there returned as a solution to the presented input. Just as in the nervous system of biological organisms, neural networks learn and fine-tune their performance over time via repeated rounds of adjusting their thresholds until the actual output matches the desired output for any given input. This process can be supervised by a human experimenter or may run automatically using a learning algorithm (Mitchell 1996, p. 52). Genetic algorithms have been used both to build and to train neural networks.

Figure 3: A simple feedforward neural network, with one input layer consisting of four neurons, one hidden layer consisting of three neurons, and one output layer consisting of four neurons. The number on each neuron represents its activation threshold: it will only fire if it receives at least that many inputs. The diagram shows the neural network being presented with an input string and shows how activation spreads forward through the network to produce an output.

• Hill-climbing

Similar to genetic algorithms, though more systematic and less random, a hill-climbing algorithm begins with one initial solution to the problem at hand, usually chosen at random. The string is then mutated, and if the mutation results in higher fitness for the new solution than for the previous one, the new solution is kept; otherwise, the current solution is retained. The algorithm is then repeated until no mutation can be found that causes an increase in the current solution's fitness, and this solution is returned as the result (Koza et al. 2003, p. 59). (To understand where the name of this technique comes from, imagine that the space of all possible solutions to a given problem is represented as a three-dimensional contour landscape. A given set of coordinates on that landscape represents one particular solution. Those solutions that are better are higher in altitude, forming hills and peaks; those that are worse are lower in altitude, forming valleys. A "hill-climber" is then an algorithm that starts out at a given point on the landscape and moves inexorably uphill.) Hill-climbing is what is known as a greedy algorithm, meaning it always makes the best choice available at each step in the hope that the overall best result can be achieved this way. By contrast, methods such as genetic algorithms and simulated annealing, discussed below, are not greedy; these methods sometimes make suboptimal choices in the hopes that they will lead to better solutions later on.

• Simulated annealing

Another optimization technique similar to evolutionary algorithms is known as simulated annealing. The idea borrows its name from the industrial process of annealing, in which a material is heated to above a critical point to soften it, then gradually cooled in order to erase defects in its crystalline structure, producing a more stable and regular lattice arrangement of atoms (Haupt and Haupt 1998, p. 16). In simulated annealing, as in genetic algorithms, there is a fitness function that defines a fitness landscape; however, rather than a population of candidates as in GAs, there is only one candidate solution. Simulated annealing also adds the concept of "temperature", a global numerical quantity which gradually decreases over time. At each step of the algorithm, the solution mutates (which is equivalent to moving to an adjacent point of the fitness landscape). The fitness of the new solution is then compared to the fitness of the previous solution; if it is higher, the new solution is kept. Otherwise, the algorithm makes a decision whether to keep or discard it based on temperature. If the temperature is high, as it is initially, even changes that cause significant decreases in fitness may be kept and used as the basis for the next round of the algorithm, but as temperature decreases, the algorithm becomes more and more inclined to only accept fitness-increasing changes. Finally, the temperature reaches zero and the system "freezes"; whatever configuration it is in at that point becomes the solution. Simulated annealing is often used for engineering design applications such as determining the physical layout of components on a computer chip (Kirkpatrick, Gelatt and Vecchi 1983).
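For comparison, here is a minimal simulated-annealing sketch in Python following the description above: a single candidate, a temperature that decays over time, and a probabilistic rule for accepting fitness-decreasing moves; the one-dimensional objective and the cooling schedule are arbitrary choices for the example.

import math, random

def objective(x):
    # Arbitrary multimodal function to maximize.
    return math.sin(5 * x) + 0.5 * math.cos(11 * x) - 0.1 * x * x

def simulated_annealing(start=0.0, temperature=1.0, cooling=0.995, steps=5000):
    current = best = start
    for _ in range(steps):
        candidate = current + random.gauss(0, 0.1)    # mutate: step to an adjacent point
        delta = objective(candidate) - objective(current)
        if delta > 0 or random.random() < math.exp(delta / temperature):
            current = candidate                       # keep uphill moves, or downhill ones with prob e^(delta/T)
        if objective(current) > objective(best):
            best = current
        temperature *= cooling                        # the temperature gradually decreases
    return best, objective(best)

print(simulated_annealing())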
Those solutions that are better are higher in altitude, forming hills and peaks; those that are worse are lower in altitude, forming valleys. A "hill-climber" is then an algorithm that starts out at a given point on the landscape and moves inexorably uphill.) Hill-climbing is what is known as a greedy algorithm, meaning it always makes the best choice available at each step in the hope that the overall best result can be achieved this way. By contrast, methods such as genetic algorithms and simulated annealing, discussed below, are not greedy; these methods sometimes make suboptimal choices in the hopes that they will lead to better solutions later on.
• Simulated annealing
Another optimization technique similar to evolutionary algorithms is known as simulated annealing. The idea borrows its name from the industrial process of annealing in which a material is heated to above a critical point to soften it, then gradually cooled in order to erase defects in its crystalline structure, producing a more stable and regular lattice arrangement of atoms (Haupt and Haupt 1998, p. 16). In simulated annealing, as in genetic algorithms, there is a fitness function that defines a fitness landscape; however, rather than a population of candidates as in GAs, there is only one candidate solution. Simulated annealing also adds the concept of "temperature", a global numerical quantity which gradually decreases over time. At each step of the algorithm, the solution mutates (which is equivalent to moving to an adjacent point of the fitness landscape). The fitness of the new solution is then compared to the fitness of the previous solution; if it is higher, the new solution is kept. Otherwise, the algorithm makes a decision whether to keep or discard it based on temperature. If the temperature is high, as it is initially, even changes that cause significant decreases in fitness may be kept and used as the basis for the next round of the algorithm, but as temperature decreases, the algorithm becomes more and more inclined to only accept fitness-increasing changes. Finally, the temperature reaches zero and the system "freezes"; whatever configuration it is in at that point becomes the solution. Simulated annealing is often used for engineering design applications such as determining the physical layout of components on a computer chip (Kirkpatrick, Gelatt and Vecchi 1983).
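As an illustration of the acceptance rule described above, here is a minimal simulated-annealing sketch for a one-dimensional function. The exponential (Metropolis-style) acceptance test, the geometric cooling schedule and the example function are common choices assumed for this sketch rather than details given in the text.

```python
import math
import random

def simulated_annealing(fitness, initial, step=0.1, temp=1.0, cooling=0.95, iters=1000):
    """Keep a single candidate; always accept improvements, and accept
    fitness-decreasing moves with a probability that shrinks as the
    temperature falls toward zero."""
    current = initial
    current_fit = fitness(current)
    for _ in range(iters):
        candidate = current + random.gauss(0, step)   # "mutate" the solution
        candidate_fit = fitness(candidate)
        delta = candidate_fit - current_fit
        if delta >= 0 or random.random() < math.exp(delta / max(temp, 1e-12)):
            current, current_fit = candidate, candidate_fit
        temp *= cooling                               # the system gradually "freezes"
    return current, current_fit

# Example: maximize a bumpy one-dimensional function
best_x, best_f = simulated_annealing(lambda x: math.sin(5 * x) - 0.1 * x * x, initial=0.0)
print(best_x, best_f)
```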

Genetic Algorithms for the Vehicle Routing Problem with Time Windows (translated foreign literature)

Original text
Genetic Algorithms for the Vehicle Routing Problem with Time Windows
Material Source: Special issue on Bioinformatics and Genetic Algorithms
Author: Olli Bräysy
1 Introduction
Vehicle Routing Problems (VRP) are all around us in the sense that many consumer products such as soft drinks, beer, bread, snack foods, gasoline and pharmaceuticals are delivered to retail outlets by a fleet of trucks whose operation fits the vehicle routing model. In practice, the VRP has been recognized as one of the great success stories of operations research and it has been studied widely since the late fifties. Public services can also take advantage of these systems in order to improve their logistics chain. Garbage collection, or town cleaning, takes an ever increasing part of the budget of local authorities.
A typical vehicle routing problem can be described as the problem of designing least cost routes from one depot to a set of geographically scattered points (cities, stores, warehouses, schools, customers etc.). The routes must be designed in such a way that each point is visited only once by exactly one vehicle, all routes start and end at the depot, and the total demands of all points on one route must not exceed the capacity of the vehicle.
The Vehicle Routing Problem with Time Windows (VRPTW) is a generalization of the VRP involving the added complexity that every customer should be served within a given time window. Additional complexities encountered in the VRPTW are the length-of-route constraint arising from depot time windows and the cost of waiting time, which is incurred when a vehicle arrives too early at a customer location. Specific examples of problems with time windows include bank deliveries, postal deliveries, industrial refuse collection, school-bus routing and situations where the customer must provide access, verification, or payment upon delivery of the product or service [Solomon and Desrosiers, 1988].
Besides being one of the most important problems of operations research in practical terms, the vehicle routing problem is also one of the most difficult problems to solve. It is quite close to one of the most famous combinatorial optimization problems, the Traveling Salesperson Problem (TSP), where only one person has to visit all the customers. The TSP is an NP-hard problem. It is believed that one may never find a computational technique that will guarantee optimal solutions to larger instances of such problems. The vehicle routing problem is even more complicated. Even for small fleet sizes and a moderate number of transportation requests, the planning task is highly complex. Hence, it is not surprising that human planners soon get overwhelmed, and must turn to simple, local rules for vehicle routing. Next we will describe basic principles of genetic algorithms and some applications for the vehicle routing problem with time windows.
2 General principles of genetic algorithms
The Genetic Algorithm (GA) is an adaptive heuristic search method based on population genetics. The basic concepts were developed by [Holland, 1975], while the practicality of using the GA to solve complex problems is demonstrated in [De Jong, 1975] and [Goldberg, 1989]. References and details about genetic algorithms can also be found, for example, in [Alander, 2000] and [Mühlenbein, 1997] respectively. The creation of a new generation of individuals involves primarily four major steps or phases: representation, selection, recombination and mutation.
The representation of the solution space consists of encoding significant features of a solution as a chromosome, defining an individual member of a population. Typically pictured by a bit string, a chromosome is made up of a sequence of genes, which capture the basic characteristics of a solution.
The recombination or reproduction process makes use of genes of selected parents to produce offspring that will form the next generation. It combines characteristics of chromosomes to potentially create offspring with better fitness. As for mutation, it consists of randomly modifying the genes of a single individual at a time to further explore the solution space and ensure, or preserve, genetic diversity. The occurrence of mutation is generally associated with a low probability. A new generation is created by repeating the selection, reproduction and mutation processes until all chromosomes in the new population replace those from the old one. A proper balance between genetic quality and diversity is therefore required within the population in order to support efficient search.
Although theoretical results that characterize the behavior of the GA have been obtained for bit-string chromosomes, not all problems lend themselves easily to this representation. This is the case, in particular, for sequencing problems like the vehicle routing problem, where an integer representation is more often appropriate. We are aware of only one approach, by [Thangiah, 1995], that uses a bit string representation in the vehicle routing context. In all other approaches for the vehicle routing problem with time windows the encoding issue is disregarded.
3 Applications for the vehicle routing problem with time windows
[Thangiah, 1995] describes a method called GIDEON that assigns customers to vehicles by partitioning the customers into sectors with a genetic algorithm; customers within each formed sector are routed using the cheapest insertion method of [Golden and Stewart, 1985]. In the next step the routes are improved using λ-exchanges introduced by [Osman, 1993]. The two processes are run iteratively a finite number of times to improve the solution quality. The search begins by clustering customers either according to their polar coordinate angle or randomly. The proposed search strategy also accepts infeasibilities during the search, subject to certain penalty factors. In the GIDEON system each chromosome represents a set of possible clustering schemes and the fitness values are based on the corresponding routing costs. The crossover operator exchanges a randomly selected portion of the bit string between the chromosomes, and mutation is used with a very low probability to randomly change the bit values.
[Potvin and Bengio, 1996] propose a genetic algorithm (GENEROUS) that directly applies genetic operators to solutions, thus avoiding the coding issues. The initial population is created with the cheapest insertion heuristic of [Solomon, 1987], and the fitness values of the proposed approach are based on the number of vehicles and total route time. The selection process is stochastic and biased toward the best solutions. For this purpose a linear ranking scheme is used. During the recombination phase, two parent solutions are merged into a single one, so as to guarantee the feasibility of the new solution. Two types of crossover operators are used to modify a randomly selected route or to insert a route into the other parent solution. A special repair operator is then applied to the offspring to generate a new feasible solution.
Mutation operators are aimed at reducing the number of routes. Finally, in order to locally optimize the solution, a mutation operator based on Or-opt exchanges [Or, 1976] is included.
[Berger et al., 1998] propose a method based on the hybridization of a genetic algorithm with well-known construction heuristics. The authors omit the coding issues and represent a solution by a set of feasible routes. The initial population is created with a nearest neighbor heuristic inspired by [Solomon, 1987]. The fitness values of the individuals are based on the number of routes and total distance of the corresponding solution, and for selection purposes the authors use the so-called roulette-wheel scheme. In this scheme the probability of selecting an individual is proportional to its fitness; for details see [Goldberg, 1989]. The proposed crossover operator iteratively combines various routes r1 of parent solution P1 with a subset of customers, formed by r2 nearest-neighbor routes from parent solution P2. A removal procedure is first carried out to remove some key customer nodes from r1. Then an insertion heuristic inspired by [Solomon, 1987], coupled to a random customer acceptance procedure, is locally applied to build a feasible route, considering the partial route r1 as an initial solution. The mutation operators are aimed at reducing the number of routes of solutions having only a few customers and at locally reordering the routes.
[Bräysy, 1999a] and [Bräysy, 1999b] extended the work of [Berger et al., 1998] by proposing several new crossover and mutation operators, testing different forms of genetic algorithms, selection schemes, scaling schemes and the significance of the initial solutions. When it comes to recombination, an approach where customers within randomly generated segments of parent solution P1 are replaced with some other customers on nearby routes of parent solution P2 is found to perform best. The best-performing mutation operator randomly selects one of the shortest routes and tries to eliminate it by inserting its customers into other, longer routes. Regarding different forms of genetic algorithms, it is concluded that it is important to create many new offspring each generation and that it is enough to maintain only one population. For selection purposes, so-called tournament selection is found to perform best. In the first phase two individuals are selected with a random procedure that is biased towards better fitness scores. In the second phase, the individual with the better fitness is selected. However, the differences between the different schemes were minor. A new scaling scheme based on a weighted combination of the number of routes, total distance and waiting time is found to perform particularly well. Finally, to create the initial population, several strategies, such as the heuristics of [Solomon, 1987] and randomly created routes, were tried, and it was concluded that the best strategy is to create a diverse initial population that also contains some individuals with better fitness scores.
[Homberger and Gehring, 1999] propose two evolutionary metaheuristics for the VRPTW. The proposed algorithms are based on the class of evolutionary algorithms called Evolution Strategies. Differences from GAs exist with regard to the superior role of mutation compared to the recombination operators. Here the individual representation also includes a vector of so-called strategy parameters in addition to the solution vector, and both components are evolved by means of recombination and mutation operators.
In the proposed application for the VRPTW these strategy parameters refer to how often a randomly selected local search operator is applied and to a binary parameter used to alternate the search between minimizing the number of vehicles and the total distance.
Selection of the parents is done randomly and only one offspring is created through the recombination of the parents. In this way a number λ ≥ μ of offspring is created, where μ is the population size. At the end, fitness values are used to select μ offspring for the next population. Because the parents are not involved in the selection process, deteriorations during the search are permitted. The first of the two proposed metaheuristics, evolution strategy ES1, skips the recombination phase. The second evolution strategy, ES2, uses uniform order-based crossover to modify the initially randomly created mutation codes of the two parents and tries to improve the solution vector of a third randomly selected parent using the modified code. The mutation code is used to control a set of removal and insertion operations performed by the Or-opt operator [Or, 1976]. The fitness values are based on the number of routes, total travel distance and a criterion that determines how easily the shortest route of the solution, in terms of the number of customers on the route, can be eliminated. The individuals of the starting population are generated by means of a stochastic approach, which is based on the savings algorithm of [Clarke and Wright, 1964].
[Bräysy et al., 2000] describe a two-phase hybrid evolutionary algorithm based on the hybridization of a genetic algorithm and an evolutionary algorithm consisting of several local search and route construction heuristics inspired by the studies of [Solomon, 1987] and [Taillard et al., 1997]. In the first phase a genetic algorithm based on the studies [Berger et al., 1998] and [Bräysy, 1999a] is used to obtain a feasible solution. The evolutionary algorithm used in the second phase picks every pair of routes in random order and randomly applies one out of four local search operators or route construction heuristics. Finally, offspring routes generated by these crossover operators are mutated according to a user-defined probability by randomly selecting one out of two operators. Selecting each possible pair of routes, the mating and mutation operators are repeatedly applied for a certain number of generations, and finally a feasible solution is returned. To escape from local minima, arcs longer than average are penalized if they appear frequently during the search.
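Several of the approaches above (e.g. [Potvin and Bengio, 1996] and [Bräysy, 1999a]) use mutation operators that try to reduce the number of routes. The sketch below illustrates that idea only: a solution is assumed to be a list of routes (lists of customer indices), and insertion_is_feasible stands in for whatever capacity and time-window check a particular method applies; neither the data layout nor the helper is specified in the article.

```python
import random

def eliminate_shortest_route(solution, insertion_is_feasible):
    """Mutation that tries to remove the route with the fewest customers by
    reinserting its customers, one by one, into the remaining routes."""
    routes = [list(r) for r in solution]               # work on a copy
    victim = min(routes, key=len)
    others = [r for r in routes if r is not victim]
    for customer in list(victim):
        placed = False
        for route in random.sample(others, len(others)):
            for pos in range(len(route) + 1):
                if insertion_is_feasible(route, pos, customer):
                    route.insert(pos, customer)
                    placed = True
                    break
            if placed:
                break
        if not placed:
            return solution                            # give up: keep the original
    return others                                      # one route fewer

# Example with a permissive feasibility check (always True), for illustration only
new_solution = eliminate_shortest_route([[3], [1, 2], [4, 5, 6]],
                                        lambda route, pos, customer: True)
print(new_solution)   # e.g. [[3, 1, 2], [4, 5, 6]]
```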

Introduction to Genetic Algorithms (translated)

Introduction to Genetic Algorithms
Tom M. Mitchell
Contents
Ⅰ. Introduction
Ⅱ. Motivation
Ⅲ. Genetic algorithms
1. Representing hypotheses
2. Genetic operators
3. Fitness function and hypothesis selection
Ⅳ. An illustrative example
· Representation
· Genetic operators
· Fitness function
Ⅴ. Extensions
Ⅰ. Introduction
Genetic algorithms provide a learning method loosely based on simulated evolution. Hypotheses are often described by bit strings whose interpretation depends on the application. However, hypotheses may also be described by symbolic expressions or even computer programs.
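As a concrete (and entirely invented) illustration of the bit-string idea, a hypothesis over a few discrete attributes can be encoded by reserving one bit per allowed attribute value; the attribute names and values below are assumptions made for this sketch only, not an example from the text.

```python
# Hypothetical attributes and their possible values (illustrative only).
ATTRIBUTES = {
    "Sky":  ["Sunny", "Cloudy", "Rainy"],
    "Wind": ["Strong", "Weak"],
}

def decode(bits):
    """Interpret a bit string as a conjunction of attribute constraints:
    a 1 means the corresponding value is allowed by the hypothesis."""
    constraints, i = {}, 0
    for name, values in ATTRIBUTES.items():
        allowed = [v for v, b in zip(values, bits[i:i + len(values)]) if b == "1"]
        constraints[name] = allowed
        i += len(values)
    return constraints

# "101 10" -> Sky may be Sunny or Rainy, Wind must be Strong
print(decode("10110"))
```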

The search for an appropriate hypothesis begins with an initial population, or collection, of hypotheses. Members of the current population give rise to the next-generation members by operations such as random mutation and crossover, a process patterned after biological evolution. At each step, the hypotheses in the current population are evaluated against a given measure of fitness, and the most fit hypotheses are selected as the seeds for producing the next generation.
Genetic algorithms have been applied successfully to a variety of learning tasks and other optimization problems. For example, they have been used to learn collections of rules for robot control and to optimize the topology and learning parameters of artificial neural networks. This paper covers both genetic algorithms, in which hypotheses are described by bit strings, and genetic programming, in which hypotheses are described by computer programs.
Ⅱ. Motivation
Genetic algorithms (GAs) provide a learning method inspired by an analogy to biological evolution. Rather than searching from general to specific hypotheses, or from simple to complex, GAs generate successor hypotheses by repeatedly mutating and recombining parts of the best currently known hypotheses. At each step, the collection of hypotheses called the current population is updated by replacing some fraction of the population by offspring of the most fit current hypotheses. The process forms a generate-and-test beam search of hypotheses, in which variants of the best current hypotheses are the most likely to be considered next.
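The update scheme just described can be sketched as follows. The population size, replacement fraction, mutation rate and the one-max fitness function used in the example are arbitrary illustrative values, not taken from the text.

```python
import random

def evolve(fitness, pop_size=50, length=20, replace_frac=0.6,
           mutation_rate=0.01, generations=100):
    """Generate-and-test beam search in the GA style: each generation,
    the least fit fraction of the population is replaced by offspring
    (crossover + mutation) of the most fit current hypotheses."""
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)            # most fit first
        survivors = pop[:int(pop_size * (1 - replace_frac))]
        offspring = []
        while len(survivors) + len(offspring) < pop_size:
            a, b = random.sample(survivors, 2)          # parents from the fit part
            cut = random.randint(1, length - 1)
            child = a[:cut] + b[cut:]                   # single-point crossover
            child = [1 - g if random.random() < mutation_rate else g for g in child]
            offspring.append(child)
        pop = survivors + offspring
    return max(pop, key=fitness)

# Example: maximize the number of 1s ("one-max")
best = evolve(fitness=sum)
print(sum(best))
```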

The popularity of GAs is motivated by a number of factors, including:
· In biological systems, evolution is known to be a successful and robust method of adaptation.
· GAs can search hypothesis spaces containing complex interacting parts, where the impact of each part on overall hypothesis fitness may be difficult to model.
· GAs are easily parallelized and can take advantage of the decreasing cost of powerful computer hardware.
This paper describes the GA approach, illustrates its use, and examines the nature of its hypothesis space search. We also describe a variant called genetic programming, in which entire computer programs are evolved toward some fitness criterion.
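Since genetic programming is only introduced later in the paper, the following minimal sketch is offered purely as an illustration of the idea: programs are represented as nested expression trees, and recombination swaps randomly chosen subtrees between parents. The function set, terminal set and helper names are all invented for this sketch.

```python
import copy
import random

FUNCS = ["add", "mul"]            # internal nodes, arity 2
TERMS = ["x", 1.0, 2.0]           # leaves

def random_tree(depth=3):
    """A program is a nested list [op, left, right] or a terminal leaf."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    return [random.choice(FUNCS), random_tree(depth - 1), random_tree(depth - 1)]

def evaluate(tree, x):
    if not isinstance(tree, list):
        return x if tree == "x" else tree
    left, right = evaluate(tree[1], x), evaluate(tree[2], x)
    return left + right if tree[0] == "add" else left * right

def random_subtree(tree):
    """Walk down the tree, stopping at a random depth."""
    while isinstance(tree, list) and random.random() < 0.7:
        tree = tree[random.choice([1, 2])]
    return tree

def subtree_crossover(a, b):
    """Copy parent a, then graft a randomly chosen subtree of parent b
    onto a randomly chosen branch of the copy."""
    child = copy.deepcopy(a)
    if not isinstance(child, list):                    # parent a is a single leaf
        return copy.deepcopy(random_subtree(b))
    node, branch = child, random.choice([1, 2])
    while isinstance(node[branch], list) and random.random() < 0.5:
        node, branch = node[branch], random.choice([1, 2])
    node[branch] = copy.deepcopy(random_subtree(b))
    return child

parent1, parent2 = random_tree(), random_tree()
child = subtree_crossover(parent1, parent2)
print(evaluate(child, x=2.0))
```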


Improved Genetic Algorithm and Its Performance Analysis
Abstract: Although the genetic algorithm has become very famous for its global searching, parallel computing, better robustness, and not needing differential information during evolution, it also has some demerits, such as slow convergence speed. In this paper, based on several general theorems, an improved genetic algorithm using variant chromosome length and probability of crossover and mutation is proposed. Its main idea is as follows: at the beginning of evolution, our solution uses a shorter chromosome length and a higher probability of crossover and mutation, and in the vicinity of the global optimum, a longer chromosome length and a lower probability of crossover and mutation. Finally, testing with some critical functions shows that our solution can improve the convergence speed of the genetic algorithm significantly; its comprehensive performance is better than that of the genetic algorithm which only reserves the best individual.
The genetic algorithm is an adaptive searching technique based on a selection and reproduction mechanism found in the natural evolution process, and it was pioneered by Holland in the 1970s. It has become very famous for its global searching, parallel computing, better robustness, and not needing differential information during evolution. However, it also has some demerits, such as poor local searching, premature convergence, as well as slow convergence speed. In recent years, these problems have been studied.
In this paper, an improved genetic algorithm with variant chromosome length and variant probability is proposed. Testing with some critical functions shows that it can improve the convergence speed significantly, and its comprehensive performance is better than that of the genetic algorithm which only reserves the best individual.
In section 1, our new approach is proposed. Through optimization examples, in section 2, the efficiency of our algorithm is compared with the genetic algorithm which only reserves the best individual. Section 3 gives the conclusions. Finally, proofs of the relevant theorems are collected and presented in the appendix.
1 Description of the algorithm
1.1 Some theorems
Before proposing our approach, we give some general theorems (see appendix) as follows. Assume there is just one variable (a multivariable problem can be divided into many sections, one section for one variable) x ∈ [a, b], x ∈ R, and the chromosome length with binary encoding is l.
Theorem 1 The minimal resolution of the chromosome is
$s = \dfrac{b-a}{2^{l}-1}$
Theorem 2 The weight value of the ith bit of the chromosome is
$w_i = \dfrac{b-a}{2^{l}-1}\cdot 2^{\,i-1}$  (i = 1, 2, …, l)
Theorem 3 The mathematical expectation Ec(x) of the chromosome searching step with one-point crossover is
$E_c(x) = \dfrac{b-a}{2l}P_c$
where Pc is the probability of crossover.
Theorem 4 The mathematical expectation Em(x) of the chromosome searching step with bit mutation is
$E_m(x) = (b-a)\,P_m$
where Pm is the probability of mutation.
1.2 Mechanism of the algorithm
During the evolutionary process, we presume that the value domains of the variables are fixed and that the probability of crossover is a constant, so from Theorem 1 and Theorem 3 we know that the longer the chromosome length is, the smaller the searching step of the chromosome and the higher the resolution, and vice versa. Meanwhile, the crossover probability is in direct proportion to the searching step.
From Theorem 4, changing the length of the chromosome does not affect the searching step of mutation, while the mutation probability is also in direct proportion to the searching step.
At the beginning of evolution, a shorter chromosome length (though not too short, otherwise it is harmful to population diversity) and a higher probability of crossover and mutation increase the searching step, which allows searching over a greater domain and avoids falling into a local optimum. In the vicinity of the global optimum, a longer chromosome length and a lower probability of crossover and mutation decrease the searching step, and the longer chromosome also improves the resolution of mutation, which avoids wandering near the global optimum and speeds up convergence.
Finally, it should be pointed out that changing the chromosome length keeps individual fitness unchanged, hence it does not affect selection (with roulette wheel selection).
1.3 Description of the algorithm
Since the basic genetic algorithm does not converge on the global optimum, while the genetic algorithm which reserves the best individual of the current generation can, our approach adopts this policy. During the evolutionary process, we track the cumulative average of the individual average fitness up to the current generation. It is written as
$\bar{f}_{avg}(G) = \dfrac{1}{G}\sum_{t=1}^{G} f_{avg}(t)$
where G is the current evolutionary generation and $f_{avg}(t)$ is the individual average fitness at generation t. When the cumulative average fitness increases to k times (k > 1, k ∈ R) the initial individual average fitness $f_{avg}^{0}$, we change the chromosome length to m times (m is a positive integer) its original length and reduce the probability of crossover and mutation, which improves individual resolution, reduces the searching step, and speeds up convergence. The procedure is as follows:
Step 1 Initialize the population, calculate the initial individual average fitness $f_{avg}^{0}$, and set the change-parameter flag Flag to 1.
Step 2 Based on reserving the best individual of the current generation, carry out selection, regeneration, crossover and mutation, and calculate the cumulative average of individual average fitness up to the current generation, $\bar{f}_{avg}$.
Step 3 If $\bar{f}_{avg}/f_{avg}^{0} \ge k$ and Flag equals 1, increase the chromosome length to m times itself, reduce the probability of crossover and mutation, and set Flag to 0; otherwise continue evolving.
Step 4 If the end condition is satisfied, stop; otherwise go to Step 2.
2 Test and analysis
We adopt the following two critical functions to test our approach and compare it with the genetic algorithm which only reserves the best individual:
$f_1(x, y) = 0.5 - \dfrac{\sin^2\sqrt{x^2+y^2} - 0.5}{[\,1 + 0.01(x^2+y^2)\,]^2}$,  $x, y \in [-5, 5]$
$f_2(x, y) = 4 - \big(x^2 + 2y^2 - 0.3\cos(3\pi x) - 0.4\cos(4\pi y)\big)$,  $x, y \in [-1, 1]$
2.1 Analysis of convergence
During function testing, we adopt the following policies: roulette wheel selection, one-point crossover, and bit mutation; the size of the population is 60, l is the chromosome length, and Pc and Pm are the probabilities of crossover and mutation respectively. We randomly select four genetic algorithms reserving the best individual, with various fixed chromosome lengths and probabilities of crossover and mutation, to compare with our approach. Tab. 1 gives the average converging generation over 100 tests.
In our approach, we adopt the initial parameters $l_0 = 10$, $P_{c0} = 0.3$, $P_{m0} = 0.1$ and k = 1.2; when the parameter-changing condition is satisfied, we adjust the parameters to l = 30, Pc = 0.1, Pm = 0.01.
From Tab. 1, we know that our approach improves the convergence speed of the genetic algorithm significantly, which accords with the above analysis.
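A compact sketch of the Step 1-4 procedure for the one-variable case is given below. The decoding of a binary chromosome onto [a, b] follows the resolution of Theorem 1; the elitism detail, the population size and the one-dimensional example function in the last line are illustrative assumptions rather than the paper's exact settings.

```python
import math
import random

def decode(bits, a, b):
    """Map a binary chromosome onto [a, b] (resolution (b - a) / (2^l - 1))."""
    return a + int("".join(map(str, bits)), 2) * (b - a) / (2 ** len(bits) - 1)

def encode(x, a, b, length):
    """Inverse mapping, used when the chromosome length is enlarged."""
    k = round((x - a) * (2 ** length - 1) / (b - a))
    return [int(c) for c in format(k, "0{}b".format(length))]

def improved_ga(fitness, a, b, pop=60, l0=10, pc0=0.3, pm0=0.1,
                k=1.2, l1=30, pc1=0.1, pm1=0.01, gens=200):
    # Step 1: initialize population, record initial average fitness, set the flag.
    length, pc, pm, flag = l0, pc0, pm0, True
    P = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop)]
    f_avg0 = sum(fitness(decode(c, a, b)) for c in P) / pop
    history = []
    for gen in range(1, gens + 1):
        fits = [fitness(decode(c, a, b)) for c in P]
        best = P[fits.index(max(fits))][:]              # reserve the best individual
        history.append(sum(fits) / pop)
        # Step 3: once the cumulative average fitness reaches k times the initial
        # average, enlarge the chromosomes and lower pc and pm (done only once).
        if flag and sum(history) / gen >= k * f_avg0:
            P = [encode(decode(c, a, b), a, b, l1) for c in P]
            best = encode(decode(best, a, b), a, b, l1)
            length, pc, pm, flag = l1, pc1, pm1, False
        # Step 2: roulette-wheel selection, one-point crossover, bit mutation.
        total = sum(fits)
        def pick():
            r, s = random.uniform(0, total), 0.0
            for c, f in zip(P, fits):
                s += f
                if s >= r:
                    return c
            return P[-1]
        children = [best]                               # keep the best individual
        while len(children) < pop:
            x, y = pick()[:], pick()[:]
            if random.random() < pc:
                cut = random.randint(1, length - 1)
                x = x[:cut] + y[cut:]
            children.append([1 - g if random.random() < pm else g for g in x])
        P = children
    return max((decode(c, a, b) for c in P), key=fitness)

# Illustrative 1-D test (not one of the paper's benchmark functions)
print(improved_ga(lambda x: 4 - (x * x - 0.3 * math.cos(3 * math.pi * x)), a=-1, b=1))
```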
2.2 Analysis of online and offline performance
Quantitative evaluation methods for genetic algorithms were proposed by De Jong, including online and offline performance. The former tests dynamic performance; the latter evaluates convergence performance. To better analyze the online and offline performance on the testing functions, we multiply the fitness of each individual by 10, and we give curves over 4 000 and 1 000 generations for f1 and f2, respectively.
Fig. 1 Online (a) and offline (b) performance of f1
Fig. 2 Online (a) and offline (b) performance of f2
From Fig. 1 and Fig. 2, we know that the online performance of our approach is just a little worse than that of the fourth case, but it is much better than that of the second, third and fifth cases, whose online performances are nearly the same. At the same time, the offline performance of our approach is better than that of the other four cases.
3 Conclusion
In this paper, based on some general theorems, an improved genetic algorithm using variant chromosome length and probability of crossover and mutation is proposed. Testing with some critical functions shows that it can improve the convergence speed of the genetic algorithm significantly, and its comprehensive performance is better than that of the genetic algorithm which only reserves the best individual.
Appendix
Under the conditions assumed in section 1, the validity of Theorem 1 and Theorem 2 is obvious.
Theorem 3 The mathematical expectation Ec(x) of the chromosome searching step with one-point crossover is $E_c(x) = \dfrac{b-a}{2l}P_c$, where Pc is the probability of crossover.
Proof As shown in Fig. A1, we assume that crossover happens at the kth locus, i.e. the parent's loci from k to l do not change, and the genes on loci 1 to k are exchanged. During crossover, the change probability of each gene on loci 1 to k is 1/2 ("1" to "0" or "0" to "1"). So, after crossover, the mathematical expectation of the chromosome searching step on loci 1 to k is
$E_{ck}(x) = \sum_{j=1}^{k} \dfrac{1}{2}\,w_j = \dfrac{1}{2}\sum_{j=1}^{k}\dfrac{b-a}{2^{l}-1}\cdot 2^{\,j-1} = \dfrac{1}{2}\cdot\dfrac{b-a}{2^{l}-1}\cdot(2^{k}-1)$   (A1)
Furthermore, the probability of crossover taking place at each locus of the chromosome is equal, namely $\dfrac{1}{l}P_c$. Therefore, after crossover, the mathematical expectation of the chromosome searching step is
$E_c(x) = \sum_{k=1}^{l-1}\dfrac{1}{l}P_c\cdot E_{ck}(x)$   (A2)
Substituting Eq. (A1) into Eq. (A2), we obtain
$E_c(x) = \sum_{k=1}^{l-1}\dfrac{1}{l}P_c\cdot\dfrac{1}{2}\cdot\dfrac{b-a}{2^{l}-1}\cdot(2^{k}-1) = \dfrac{b-a}{2l}P_c\cdot\dfrac{2^{l}-l-1}{2^{l}-1} = \dfrac{b-a}{2l}P_c\left(1-\dfrac{l}{2^{l}-1}\right)$
When l is large, $l/(2^{l}-1)\approx 0$, so $E_c(x)\approx\dfrac{b-a}{2l}P_c$.
Fig. A1 One-point crossover
Theorem 4 The mathematical expectation $E_m(x)$ of the chromosome searching step with bit mutation is $E_m(x) = (b-a)\,P_m$, where Pm is the probability of mutation.
Proof The mutation probability of the gene on each locus of the chromosome is equal, say Pm; therefore, the mathematical expectation of the mutation searching step is
$E_m(x) = \sum_{i=1}^{l} P_m\,w_i = \sum_{i=1}^{l} P_m\cdot\dfrac{b-a}{2^{l}-1}\cdot 2^{\,i-1} = P_m\cdot\dfrac{b-a}{2^{l}-1}\cdot(2^{l}-1) = (b-a)\,P_m$
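As a quick numerical sanity check of Theorems 3 and 4 (this experiment is not part of the paper; the interval, chromosome length and probabilities are arbitrary), the expected searching step of one-point crossover and bit mutation under the appendix's model can be estimated by simulation and compared with (b-a)Pc/(2l) and (b-a)Pm.

```python
import random

def theorem_check(a=0.0, b=1.0, l=16, pc=0.3, pm=0.1, trials=200_000):
    """Monte Carlo estimate of the expected searching step of one-point
    crossover and bit mutation under the model used in the appendix."""
    w = [(b - a) / (2 ** l - 1) * 2 ** (i - 1) for i in range(1, l + 1)]  # Theorem 2
    cross_step = mut_step = 0.0
    for _ in range(trials):
        # One-point crossover: each cut point k has probability pc / l, and each
        # gene on loci 1..k then differs from the mate's gene with probability 1/2.
        if random.random() < pc * (l - 1) / l:
            k = random.randint(1, l - 1)
            cross_step += sum(w[j] for j in range(k) if random.random() < 0.5)
        # Bit mutation: every locus flips independently with probability pm.
        mut_step += sum(w[i] for i in range(l) if random.random() < pm)
    print("crossover: simulated %.5f  vs  (b-a)Pc/(2l) = %.5f"
          % (cross_step / trials, (b - a) * pc / (2 * l)))
    print("mutation : simulated %.5f  vs  (b-a)Pm     = %.5f"
          % (mut_step / trials, (b - a) * pm))

theorem_check()
```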
