The Wolf Colony Algorithm and Its Application


Algorithmic Tricks for Reducing...(IJIGSP-V6-N10-1)


I.J. Image, Graphics and Signal Processing, 2014, 10, 1-9
Algorithmic Tricks for Reducing the Complexity of FDWT/IDWT Basic Operations Implementation
\tilde{Y}_2^{(l)} = \tilde{F}_{2\times L}\,\tilde{X}_L^{(l)}   (2)

where N is the number of the original signal samples, L is the size of the sliding window that defines the part of the signal processed by the given basic operation, k = 0, 1, ..., K-1 is the decomposition step number indicating the granularity degree of data resolution, and K is the total number of decomposition steps. The vector Y_2^{(l)} = [y_{2l}, y_{2l+1}] is the two-element output data vector of each basic operation (1)-(2), where y_{2l} and y_{2l+1} are, respectively, the detail and approximation components computed as a result of executing the FDWT basic operations. The vector X_L^{(l)} = [x_{2l}, x_{2l+1}, ..., x_{2l+L-1}] in (1) is a column vector of length L whose elements are the corresponding sequence of signal samples, and the vector \tilde{X}_L^{(l)} = [\tilde{x}_{2l}, \tilde{x}_{2l+1}, ..., \tilde{x}_{2l+L-1}] in (2) represents the analogous vector of transformed samples.
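For the special case of window size L = 2, a basic operation of the kind described above reduces to a two-tap filter pair mapping two neighbouring samples to a (detail, approximation) pair. The sketch below uses the standard Haar coefficients as the filter values; these coefficients are an assumption for illustration and are not taken from the paper:

```python
import math

def haar_basic_op(window):
    """One basic operation with L = 2: map two neighbouring input samples
    x_2l, x_2l+1 to the output pair (detail y_2l, approximation y_2l+1)."""
    x0, x1 = window
    s = 1.0 / math.sqrt(2.0)          # standard Haar normalization
    detail = s * (x0 - x1)
    approx = s * (x0 + x1)
    return detail, approx

d, a = haar_basic_op([3.0, 1.0])
```

Sliding this operation over the signal with stride 2 and repeating on the approximation outputs for K steps gives the full decomposition described in the text.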

Foreign Literature: Genetic Algorithms


Appendix I: English Translation

Part I: Original English Text

Source: Chapters 2 and 3 of Nature-Inspired Metaheuristic Algorithms (《自然赋予灵感的元启发示算法》), Luniver Press, UK, 2008.

Chapter 2: Genetic Algorithms

2.1 Introduction

The genetic algorithm (GA), developed by John Holland and his collaborators in the 1960s and 1970s, is a model or abstraction of biological evolution based on Charles Darwin's theory of natural selection. Holland was the first to use crossover and recombination, mutation, and selection in the study of adaptive and artificial systems. These genetic operators form the essential part of the genetic algorithm as a problem-solving strategy. Since then, many variants of genetic algorithms have been developed and applied to a wide range of optimization problems, from graph colouring to pattern recognition, from discrete systems (such as the travelling salesman problem) to continuous systems (e.g., the efficient design of airfoils in aerospace engineering), and from financial markets to multiobjective engineering optimization.

Genetic algorithms have many advantages over traditional optimization algorithms; the two most noticeable are their ability to deal with complex problems and their parallelism. Genetic algorithms can handle various types of optimization, whether the objective (fitness) function is stationary or non-stationary (changes with time), linear or nonlinear, continuous or discontinuous, or subject to random noise. Because the multiple offspring in a population act like independent agents, the population (or any subgroup) can explore the search space in many directions simultaneously. This feature makes it ideal to parallelize the algorithms for implementation, and different parameters and even different groups of strings can be manipulated at the same time.

However, genetic algorithms also have some disadvantages. The formulation of the fitness function, the population size, the choice of important parameters such as the rates of mutation and crossover, and the selection criterion for the new population should all be chosen carefully.
Any inappropriate choice will make it difficult for the algorithm to converge, or will simply produce meaningless results.

2.2 Genetic Algorithms

2.2.1 Basic Procedure

The essence of genetic algorithms involves the encoding of an optimization function as arrays of bits or character strings to represent the chromosomes, the manipulation of these strings by genetic operators, and selection according to fitness, with the aim of finding a solution to the problem concerned. This is often done through the following procedure: 1) encoding of the objectives or optimization functions; 2) defining a fitness function or selection criterion; 3) creating a population of individuals; 4) an evolution cycle or iteration: evaluating the fitness of all the individuals in the population, creating a new population by performing crossover, mutation, fitness-proportionate reproduction, etc., replacing the old population, and iterating again with the new population; 5) decoding the results to obtain the solution to the problem. These steps can be represented schematically as the pseudo code of genetic algorithms shown in Fig. 2.1.

One iteration of creating a new population is called a generation. Fixed-length character strings are used in most genetic algorithms during each generation, although there is substantial research on variable-length strings and coding structures. The coding of the objective function is usually in the form of binary arrays or real-valued arrays in adaptive genetic algorithms. For simplicity, we use binary strings for encoding and decoding. The genetic operators include crossover, mutation, and selection from the population.

The crossover of two parent strings is the main operator, applied with a higher probability; it is carried out by swapping one segment of one chromosome with the corresponding segment of another chromosome at a random position (see Fig. 2.2). Crossover carried out in this way is a single-point crossover.
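The five-step procedure above, together with single-point crossover and bit-flip mutation, can be sketched as a short illustrative Python loop. The function names, the truncation-style selection, and the test objective are our own choices, not taken from the book:

```python
import random

def decode(bits, lo=-10.0, hi=10.0):
    """Step 5: map a bit string back to a real number in [lo, hi]."""
    n = int("".join(map(str, bits)), 2)
    return lo + (hi - lo) * n / (2 ** len(bits) - 1)

def ga(fitness, nbits=16, popsize=20, ngen=100, pc=0.95, pm=0.05):
    # Step 3: create an initial population of random bit strings.
    pop = [[random.randint(0, 1) for _ in range(nbits)] for _ in range(popsize)]
    for _ in range(ngen):  # Step 4: the evolution cycle.
        # Steps 1-2 are implicit here: the encoding is the bit string and
        # the selection criterion is the fitness value itself.
        ranked = sorted(pop, key=lambda c: fitness(decode(c)), reverse=True)
        newpop = []
        while len(newpop) < popsize:
            a, b = random.sample(ranked[:popsize // 2], 2)  # fitter half
            if random.random() < pc:  # single-point crossover
                cut = random.randrange(1, nbits)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            for child in (a, b):
                mutated = [bit ^ 1 if random.random() < pm else bit
                           for bit in child]
                newpop.append(mutated)
        pop = newpop[:popsize]  # replace the old population
    best = max(pop, key=lambda c: fitness(decode(c)))
    return decode(best)

# Maximize f(x) = -(x - 3)^2; the optimum is at x = 3.
random.seed(2)
x = ga(lambda x: -(x - 3.0) ** 2)
```

After 100 generations the best decoded individual lands close to the optimum at x = 3.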
Crossover at multiple points is also used in many genetic algorithms to increase their efficiency. The mutation operation is achieved by flipping randomly selected bits (see Fig. 2.3), and the mutation probability is usually small. The selection of an individual in a population is carried out by the evaluation of its fitness; an individual can remain in the new generation if a certain fitness threshold is reached, or if the reproduction of the population is fitness-proportionate. That is to say, individuals with higher fitness are more likely to reproduce.

2.2.2 Choice of Parameters

An important issue is the formulation or choice of an appropriate fitness function that determines the selection criterion in a particular problem. For the minimization of a function using genetic algorithms, one simple way of constructing a fitness function is to use the simplest form F = A - y, with A a large constant (though A = 0 will do) and y = f(x); the objective is then to maximize the fitness function and thereby minimize the objective function f(x). However, there are many different ways of defining a fitness function. For example, we can use an individual fitness assignment relative to the whole population, F(x_i) = f(ξ_i) / Σ_{j=1}^{N} f(ξ_j), where ξ_i is the phenotypic value of individual i and N is the population size. An appropriate form of the fitness function will make sure that solutions with higher fitness are selected efficiently; a poor fitness function may result in incorrect or meaningless solutions.

Another important issue is the choice of the various parameters. The crossover probability is usually very high, typically in the range 0.7~1.0. On the other hand, the mutation probability is usually small (typically 0.001~0.05). If the crossover probability is too small, then crossover occurs sparsely, which is not efficient for evolution.
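The fitness-proportionate assignment described above is commonly realized as roulette-wheel selection: each individual occupies a slice of a wheel proportional to its relative fitness. A minimal sketch (the helper name is illustrative, and non-negative fitness values are assumed):

```python
import random

def roulette_select(population, fitnesses):
    """Pick one individual with probability proportional to its fitness.
    Assumes all fitness values are non-negative."""
    total = sum(fitnesses)
    r = random.uniform(0, total)
    acc = 0.0
    for individual, fit in zip(population, fitnesses):
        acc += fit
        if acc >= r:
            return individual
    return population[-1]  # numerical safety net

pop = ["a", "b", "c"]
fits = [1.0, 1.0, 8.0]  # "c" holds 80% of the wheel
random.seed(0)
picks = [roulette_select(pop, fits) for _ in range(1000)]
# "c" should be chosen roughly 800 times out of 1000
```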
If the mutation probability is too high, the solutions may keep 'jumping around' even as they approach the optimal solution.

The selection criterion is also important: the current population should be selected so that the best individuals with higher fitness are preserved and passed on to the next generation. This is often carried out in association with some form of elitism. The basic elitism is to select the fittest individual in each generation, which is then carried over to the new generation without being modified by the genetic operators. This ensures that the best solution is reached more quickly.

Other issues include the use of multiple sites for mutation and the population size. Mutation at a single site is not very efficient, so mutation at multiple sites will increase the evolution efficiency. However, too many mutants will make it difficult for the system to converge, or may even lead the system astray to wrong solutions. In reality, if the mutation rate is too high under high selection pressure, the whole population might go extinct.

In addition, the choice of the right population size is very important. If the population size is too small, there is not enough evolution going on and there is a risk of the whole population going extinct. Ecological theory suggests that, in the real world, a species with a small population faces a real danger of extinction. Even if the system carries on, there is still a danger of premature convergence: in a small population, if a significantly fitter individual appears too early, it may reproduce enough offspring to overwhelm the whole (small) population. This will eventually drive the system to a local optimum (not the global optimum).
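The basic elitism described above can be sketched as a small post-processing step on each generation; the function name and the replace-the-worst policy are our own illustrative choices:

```python
def apply_elitism(old_pop, old_fits, new_pop, new_fits):
    """Basic elitism: if the best individual of the old generation beats
    everything in the new one, copy it over the new generation's worst."""
    best_i = max(range(len(old_fits)), key=old_fits.__getitem__)
    worst_j = min(range(len(new_fits)), key=new_fits.__getitem__)
    if old_fits[best_i] > max(new_fits):
        new_pop[worst_j] = old_pop[best_i]
        new_fits[worst_j] = old_fits[best_i]
    return new_pop, new_fits

pop, fits = apply_elitism([[1, 1], [0, 1]], [9.0, 2.0],
                          [[0, 0], [1, 0]], [3.0, 1.0])
# the old best [1, 1] (fitness 9.0) replaces the new worst [1, 0]
```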
On the other hand, if the population is too large, more evaluations of the objective function are needed, which requires extensive computing time. Furthermore, more complex and adaptive genetic algorithms are under active research, and the literature on these topics is vast.

2.3 Implementation

Using the basic procedure described in the above section, we can implement genetic algorithms in any programming language. For simplicity of demonstrating how they work, we have implemented a function optimization using a simple GA in both Matlab and Octave. Consider the generalized De Jong test function f(x) = Σ_{i=1}^{n} x_i^{2α}, where α is a positive integer and r > 0 is the half-length of the domain x_i ∈ [-r, r]. This function has a minimum of f_min = 0 at x = 0. For r = 100 and n = 5, with a population of 40 16-bit strings, the variations of the objective function during a typical run are shown in Fig. 2.4. Any two runs will give slightly different results due to the stochastic nature of genetic algorithms, but better estimates are obtained as the number of generations increases.

The well-known Easom function f(x) = -cos(x) e^{-(x-π)²} has a global maximum f_max = 1 at x = π (see Fig. 2.5). Now we can use the following Matlab/Octave program to find its global maximum. In our implementation, we have used fixed-length 16-bit strings; the probabilities of crossover and mutation are pc = 0.95 and pm = 0.05, respectively. As it is a maximization problem, we can use the simplest fitness function F = f(x). The outputs from a typical run are shown in Fig.
2.6, where the top figure shows the variations of the best estimates as they approach the optimum at x = π, while the lower figure shows the variations of the fitness function.

% Genetic Algorithm (Simple Demo) Matlab/Octave Program
% Written by X S Yang (Cambridge University)
% Usage: gasimple or gasimple('x*exp(-x)');
function [bestsol, bestfun, count] = gasimple(funstr)
global solnew sol pop popnew fitness fitold f range;
if nargin<1,
  % Easom Function with fmax=1 at x=pi
  funstr='-cos(x)*exp(-(x-3.1415926)^2)';
end
range=[-10 10];           % Range/Domain
% Converting to an inline function
f=vectorize(inline(funstr));
% Initializing the parameters
rand('state',0);          % Reset the random generator
popsize=20;               % Population size
MaxGen=100;               % Max number of generations
count=0;                  % counter
nsite=2;                  % number of mutation sites
pc=0.95;                  % Crossover probability
pm=0.05;                  % Mutation probability
nsbit=16;                 % String length (bits)
% Generating the initial population
popnew=init_gen(popsize,nsbit);
fitness=zeros(1,popsize); % fitness array
% Display the shape of the function
x=range(1):0.1:range(2); plot(x,f(x));
% Initialize solution <- initial population
for i=1:popsize,
  solnew(i)=bintodec(popnew(i,:));
end
% Start the evolution loop
for i=1:MaxGen,
  % Record as the history
  fitold=fitness; pop=popnew; sol=solnew;
  for j=1:popsize,
    % Crossover pair
    ii=floor(popsize*rand)+1; jj=floor(popsize*rand)+1;
    % Cross over
    if pc>rand,
      [popnew(ii,:),popnew(jj,:)]=...
        crossover(pop(ii,:),pop(jj,:));
      % Evaluate the new pairs
      count=count+2;
      evolve(ii); evolve(jj);
    end
    % Mutation at n sites
    if pm>rand,
      kk=floor(popsize*rand)+1; count=count+1;
      popnew(kk,:)=mutate(pop(kk,:),nsite);
      evolve(kk);
    end
  end % end for j
  % Record the current best
  bestfun(i)=max(fitness);
  bestsol(i)=mean(sol(bestfun(i)==fitness));
end
% Display results
subplot(2,1,1); plot(bestsol); title('Best estimates');
subplot(2,1,2); plot(bestfun); title('Fitness');

% --------------- All sub functions ---------------
% Generation of the initial population
function pop=init_gen(np,nsbit)
% String length=nsbit+1 with pop(:,1) for the sign
pop=rand(np,nsbit+1)>0.5;

% Evolving the new generation
function evolve(j)
global solnew popnew fitness fitold pop sol f;
solnew(j)=bintodec(popnew(j,:));
fitness(j)=f(solnew(j));
if fitness(j)>fitold(j),
  pop(j,:)=popnew(j,:);
  sol(j)=solnew(j);
end

% Convert a binary string into a decimal number
function [dec]=bintodec(bin)
global range;
% Length of the string without the sign
nn=length(bin)-1;
num=bin(2:end);  % get the binary
% Sign=+1 if bin(1)=0; Sign=-1 if bin(1)=1.
Sign=1-2*bin(1);
dec=0;
% Position of the decimal point in the binary (floating point)
dp=floor(log2(max(abs(range))));
for i=1:nn,
  dec=dec+num(i)*2^(dp-i);
end
dec=dec*Sign;

% Crossover operator
function [c,d]=crossover(a,b)
nn=length(a)-1;
% Generating a random crossover point
cpoint=floor(nn*rand)+1;
c=[a(1:cpoint) b(cpoint+1:end)];
d=[b(1:cpoint) a(cpoint+1:end)];

% Mutation operator
function anew=mutate(a,nsite)
nn=length(a); anew=a;
for i=1:nsite,
  j=floor(rand*nn)+1;
  anew(j)=mod(a(j)+1,2);
end

The above Matlab program can easily be extended to higher dimensions. In fact, there is no need to do any programming at all (if you prefer), because there are many software packages (both freeware and commercial) for genetic algorithms; for example, Matlab itself has an optimization toolbox. Biology-inspired algorithms have many advantages over traditional optimization methods such as steepest descent, hill-climbing, and other calculus-based techniques, owing to their parallelism and their ability to locate very good approximate solutions in extremely large search spaces. Furthermore, more powerful new-generation algorithms can be formulated by combining existing and new evolutionary algorithms with classical optimization methods.

Chapter 3: Ant Algorithms

From the discussion of genetic algorithms, we know that we can improve the search efficiency by using randomness, which also increases the diversity of the solutions and so helps avoid being trapped in local optima. The selection of the best individuals is also equivalent to using memory.
In fact, there are other forms of selection, such as the use of a chemical messenger (pheromone), which is common among ants, honeybees, and many other insects. In this chapter, we will discuss the nature-inspired ant colony optimization (ACO), which is a metaheuristic method.

3.1 Behaviour of Ants

Ants are social insects that live together in organized colonies whose population size can range from about 2 to 25 million. When foraging, a swarm of ants or mobile agents interact and communicate in their local environment. Each ant can lay scent chemicals, or pheromone, to communicate with others, and each ant is also able to follow a route marked with pheromone laid by other ants. When ants find a food source, they mark it with pheromone and also mark the trails to and from it. Starting from the initially random foraging routes, the pheromone concentration varies: ants follow the routes with higher pheromone concentration, and the pheromone is in turn enhanced by the increasing number of ants. As more and more ants follow the same route, it becomes the favoured path, and thus some favourite routes (often the shortest or most efficient) emerge. This is a positive feedback mechanism.

Emergent behaviour exists in an ant colony, and such emergence arises from simple interactions among individual ants. Individual ants act according to simple, local information (such as pheromone concentration) to carry out their activities. Although there is no master ant overseeing the entire colony and broadcasting instructions to the individual ants, organized behaviour still emerges automatically. Such emergent behaviour is therefore similar to other self-organized phenomena that occur in many processes in nature, such as pattern formation in animal skins (tiger and zebra skins).

The foraging pattern of some ant species (such as army ants) can show extraordinary regularity. Army ants search for food along regular routes, with successive directions about 123° apart.
We do not know how they manage to follow such regularity, but studies show that they move into an area, build a bivouac, and start foraging. On the first day, they forage in a random direction, say to the north, travel a few hundred metres, and then branch out to cover a large area. The next day, they choose a different direction, about 123° from the direction of the previous day, and again cover a large area. On the following day, they again choose a direction about 123° from the second day's direction. In this way they cover the whole area over about two weeks, and then they move to a different location to build a bivouac and forage again.

The interesting thing is that they do not use an angle of 120° (this would mean that on the fourth day they would search the empty area already foraged on the first day). The beauty of the 123° angle is that after three days it leaves a small offset (about 9°) from the direction of the first day. This means they cover the whole circle in about 14 days without repeating (or covering a previously foraged area). This is an amazing phenomenon.

3.2 Ant Colony Optimization

Based on these characteristics of ant behaviour, scientists have developed a number of powerful ant colony algorithms, with important progress made in recent years; Marco Dorigo pioneered the research in this area in 1992. In fact, by using only some of the features of ant behaviour and adding some new characteristics, we can devise a class of new algorithms. The basic steps of ant colony optimization (ACO) can be summarized as the pseudo code shown in Fig. 3.1.

Two important issues here are the probability of choosing a route and the evaporation rate of the pheromone. There are a few ways of handling these problems, although this is still an area of active research. Here we introduce the current best method.
For a network routing problem, the probability that an ant at a particular node i chooses the route from node i to node j is given by

p_ij = φ_ij^α d_ij^β / Σ_{i,j} φ_ij^α d_ij^β,

where α > 0 and β > 0 are influence parameters (typical values are α ≈ β ≈ 2), φ_ij is the pheromone concentration on the route between i and j, and d_ij is the desirability of the same route. Some a priori knowledge about the route, such as its length s_ij, is often used by taking d_ij ∝ 1/s_ij; this implies that shorter routes will be selected because of their shorter travelling time, and thus the pheromone concentrations on those routes become higher.

This probability formula reflects the fact that ants normally follow the paths with higher pheromone concentrations. In the simpler case α = 1, β = 0, the probability of choosing a path is directly proportional to the pheromone concentration on the path. The denominator normalizes the probability so that it lies in the range between 0 and 1.

The pheromone concentration can change with time due to the evaporation of pheromone. Furthermore, the advantage of pheromone evaporation is that it helps the system avoid being trapped in local optima: if there were no evaporation, the path randomly chosen by the first ants would become the preferred path, as it attracts other ants through its pheromone. For a constant rate of pheromone decay or evaporation, the pheromone concentration usually varies with time exponentially,

φ(t) = φ_0 e^{-γt},

where φ_0 is the initial concentration of pheromone and t is time. If γt ≪ 1, then e^{-γt} ≈ 1 - γt. For a unit time increment Δt = 1, the evaporation can thus be approximated by (1 - γ)φ^t. Therefore, we have the simplified pheromone update formula

φ_ij^{t+1} = (1 - γ) φ_ij^t + δφ_ij^t,

where γ ∈ [0, 1] is the rate of pheromone evaporation. The increment δφ_ij^t is the amount of pheromone deposited at time t along the route from i to j when an ant travels a distance L; usually δφ_ij^t ∝ 1/L. If there are no ants on a route, the pheromone deposit is zero.

There are other variations to these basic procedures.
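The route-choice rule and the pheromone update above can be exercised numerically on a tiny two-route example. The values of α, β, γ, the route lengths, and the deposit rule are illustrative assumptions, not taken from the text:

```python
def route_probabilities(pheromone, desirability, alpha=2.0, beta=2.0):
    """Route-choice rule: p_ij proportional to phi_ij^alpha * d_ij^beta,
    normalized over the candidate routes."""
    weights = [p ** alpha * d ** beta for p, d in zip(pheromone, desirability)]
    total = sum(weights)
    return [w / total for w in weights]

def update_pheromone(phi, deposit, gamma=0.1):
    """One time step: evaporation by (1 - gamma) plus the new deposits."""
    return [(1 - gamma) * p + dp for p, dp in zip(phi, deposit)]

# Two routes; desirability d = 1/s favours the shorter one (s = route length).
lengths = [2.0, 1.0]
desirability = [1.0 / s for s in lengths]
phi = [1.0, 1.0]
for _ in range(20):
    probs = route_probabilities(phi, desirability)
    # ants deposit pheromone in proportion to the traffic on each route
    phi = update_pheromone(phi, deposit=probs)
probs = route_probabilities(phi, desirability)
# after a few iterations the shorter route attracts nearly all the ants
```

The positive feedback is visible directly: more traffic means more deposit, which means a higher choice probability at the next step.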
A possible acceleration scheme is to use bounds on the pheromone concentration and to allow only the ants holding the current global best solution(s) to deposit pheromone. In addition, a ranking of solution fitness can also be used. These are hot topics of current research.

3.3 Double Bridge Problem

A standard test problem for ant colony optimization is the simplest double bridge problem with two branches (see Fig. 3.2), where route (2) is shorter than route (1). The angles of the two routes are equal at both point A and point B, so that at the initial stage the ants have an equal (50-50) chance of choosing each route randomly at point A. Initially, fifty percent of the ants go along the longer route (1). The pheromone evaporates at a constant rate on both routes, but the pheromone concentration on route (1) becomes smaller because it is longer and thus takes more time to travel through; conversely, the pheromone concentration on the shorter route increases steadily. After some iterations, almost all the ants move along the shorter route. Figure 3.3 shows the initial snapshot of 10 ants (5 on each route initially) and the snapshot after 5 iterations (equivalent to 50 ants having moved along this section). Strictly speaking there are 11 ants, one of which has not yet decided which route to follow, having just arrived near the entrance. Almost all the ants (about 90% in this case) move along the shorter route.

Here we use only two routes at the node, but it is straightforward to extend the approach to multiple routes at a node; it is expected that only the shortest route will ultimately be chosen. As any complex network system is always made up of individual nodes, this algorithm can be extended to solve complex routing problems reasonably efficiently.
In fact, ant colony algorithms have been successfully applied to the Internet routing problem, the travelling salesman problem, combinatorial optimization problems, and other NP-hard problems.

3.4 Virtual Ant Algorithm

Since ant colony optimization has successfully solved NP-hard problems such as the travelling salesman problem, it can also be extended to solve standard optimization problems of multimodal functions. The only question is how the ants should move on an n-dimensional hyper-surface. For simplicity, we will discuss the 2-D case, which can easily be extended to higher dimensions. On a 2-D landscape, ants can in principle move in any direction, but this causes a problem: how do we update the pheromone at a particular point, given that there are infinitely many points? One solution is to track the history of each ant's moves and record the locations consecutively; the other approach is to use a moving neighbourhood or window, so that the ants 'smell' the pheromone concentration of their neighbourhood at their current location.

In addition, we can limit the number of directions the ants can move in by quantizing the directions. For example, the ants may be allowed to move only left, right, up, and down (4 directions). We will use this quantized approach here, which makes the implementation much simpler. Furthermore, the objective function or landscape can be encoded into virtual food, so that the ants move toward the locations of the best food sources. This makes the search process even simpler. This simplified algorithm is called the Virtual Ant Algorithm (VAA); it was developed by Xin-She Yang and his colleagues in 2006 and has been successfully applied to topological optimization problems in engineering.

The Keane function with multiple peaks, f(x, y) = sin²(x - y) sin²(x + y)/√(x² + y²), is a standard test function. Without any constraint it is symmetric and has two highest peaks at (0, 1.39325) and (1.39325, 0).
To make the problem harder, it is usually optimized under two additional constraints. This makes the optimization difficult because the function is then only nearly symmetric about x = y and the peaks occur in pairs, one higher than the other; in addition, the true maximum lies on a constraint boundary.

Figure 3.4 shows the surface variations of the multi-peaked function. If we use 50 roaming ants and let them move around for 25 iterations, the resulting pheromone concentrations (equivalent to the paths of the ants) are also displayed in Fig. 3.4. We can see that the highest pheromone concentration within the constraint boundary corresponds to the optimal solution.

It is worth pointing out that ant colony algorithms are the right tool for combinatorial and discrete optimization, and they have advantages over other stochastic algorithms such as genetic algorithms and simulated annealing in dealing with dynamic network routing problems. For continuous decision variables, their performance is still under active research. For the present example, about 1500 evaluations of the objective function were needed to find the global optima. This is not as efficient as other metaheuristic methods, especially particle swarm optimization; this is partly because the handling of the pheromone takes time. Is it possible to eliminate the pheromone and just use the roaming ants? The answer is yes: particle swarm optimization is just the right kind of algorithm for such further modifications, and it will be discussed later in detail.
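The quantized 4-direction movement of a virtual ant can be sketched as follows. The landscape, the starting point, and the greedy move-to-best-neighbour rule are our own illustrative choices standing in for the food/pheromone attraction, not the published VAA:

```python
import random

MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # right, left, up, down

def landscape(x, y):
    """Illustrative 'virtual food' landscape with its best value at (0, 0)."""
    return -(x * x + y * y)

def roam(steps=200, seed=1):
    """A single greedy ant on the quantized 4-direction grid."""
    random.seed(seed)
    x, y = 5, 5
    for _ in range(steps):
        # Evaluate the four quantized neighbours and move to the best one,
        # breaking ties randomly (a stand-in for food/pheromone attraction).
        dx, dy = max(MOVES, key=lambda m: (landscape(x + m[0], y + m[1]),
                                           random.random()))
        x, y = x + dx, y + dy
    return x, y

pos = roam()
# the greedy ant ends up at, or next to, the peak at (0, 0)
```

A population of such ants, with pheromone accumulated per grid cell, recovers the behaviour shown in Fig. 3.4.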

Sample English Essay for an Algorithm Competition


I love algorithm competitions. The thrill of solving complex problems under time pressure is just so exciting. It's like a mental workout that keeps my brain sharp and focused.

The best part about algorithm competitions is the sense of accomplishment when you finally crack a tough problem. It's like a victory dance in your head, and it feels amazing. Plus, it's a great way to showcase your problem-solving skills and creativity.

When I first started participating in algorithm competitions, I was intimidated by the level of competition. But as I kept practicing and learning from others, I realized that everyone starts somewhere. It's all about the journey of improvement and growth.

The community aspect of algorithm competitions is also something I really enjoy. It's awesome to meet like-minded people who share the same passion for problem-solving. We exchange tips, tricks, and strategies, and it's a great way to learn from each other.

The adrenaline rush during the competition is unbeatable. The clock is ticking, and you're frantically trying to come up with the most efficient solution. It's like a race against time, and the feeling of solving a problem just before the buzzer goes off is indescribable.

Overall, algorithm competitions have been an incredibly rewarding experience for me. It's not just about the competition itself, but the journey of self-improvement, the friendships made, and the sheer joy of problem-solving. I can't wait for the next competition!

Linux Study Notes (7): Recommended Books for Linux Driver Development


In the list below, entries sharing a number may be the Chinese and English editions of the same book; the smaller the number, the stronger the recommendation:

0--Understanding the Linux Kernel
1--鸟哥的Linux私房菜·基础学习篇 (第四版)
1--深入Linux内核架构, Wolfgang Mauerer
1--Linux Kernel Development, 3rd Edition: a thorough guide to the design and implementation of the Linux kernel, by Robert Love
1--Professional Linux Kernel Architecture, by Wolfgang Mauerer
2--精通LINUX设备驱动开发
2--Essential Linux Device Drivers, by Sreekrishnan Venkateswaran
3--Advanced Programming in the UNIX Environment, by W. Richard Stevens and Stephen A. Rago
3--UNIX环境高级编程 (第二版)
3--UNIX环境高级编程 (中文第三版)
4--Linux Device Driver Development Cookbook: develop custom drivers for your embedded Linux applications, by Rodolfo Giometti
5--Linux设备驱动开发详解：基于最新的Linux 4.0内核
6--Linux Device Drivers, 3rd Edition, by Jonathan Corbet, Alessandro Rubini, and Greg Kroah-Hartman
6--Linux For Beginners: The Ultimate Guide to the Linux Operating System, by Adam Vardy
6--The Art of Debugging with GDB, DDD, and Eclipse
7--Managing Projects with GNU Make, 3rd Edition, by Robert Mecklenburg

Unnumbered titles:

编程之美, by the 《编程之美》 group
编程珠玑 (第2版) (Programming Pearls), by Jon Bentley
编码：隐匿在计算机软硬件背后的语言 (Code), by Charles Petzold
计算机网络：自顶向下方法 (第6版) (Computer Networking: A Top-Down Approach), by James F. Kurose and Keith W. Ross
计算机组成与设计：硬件/软件接口 (Computer Organization and Design), by David A. Patterson and John L. Hennessy
剑指Offer：名企面试官精讲典型编程题, by 何海涛
人月神话 (32周年中文纪念版) (The Mythical Man-Month)
深入理解机器学习：从原理到算法 (Understanding Machine Learning: From Theory to Algorithms), by Shai Shalev-Shwartz and Shai Ben-David
深入理解云计算：基本原理和应用程序编程技术 (Mastering Cloud Computing), by Rajkumar Buyya, Christian Vecchiola, and S. T. Selvi
算法导论 (原书第3版, 中文) (Introduction to Algorithms), by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein
算法 (第四版, 中文版, 图灵程序设计丛书) (Algorithms), by Robert Sedgewick and Kevin Wayne, translated by 谢路云
算法图解 (Grokking Algorithms), by Aditya Bhargava
我的第一本算法书, by 石田保辉 and 宫崎修一
Advanced Linux Programming, by Mark Mitchell et al.
Linux Driver Development for Embedded Processors, Second Edition: learn to develop Linux embedded drivers with kernel 4.9 LTS, by Alberto Liberal de los Ríos (2018)
Beginning Linux Programming, by Neil Matthew and Richard Stones
DC1--第一本Docker书 (修订版), by James Turnbull
DC1--The Docker Book, by James Turnbull
How Linux Works: What Every Superuser Should Know, by Brian Ward
Linux Device Drivers Development: develop customized drivers for embedded Linux, by John Madieu (Packt, 2017)
Linux Basics for Hackers: Getting Started with Networking, Scripting, and Security in Kali, by OccupyTheWeb
The Linux Kernel Module Programming Guide, by Peter Jay Salzman, Michael Burian, and Ori Pomerantz
OS0--深入理解计算机系统 (原书第3版), by Randal E. Bryant and David O'Hallaron
OS1--现代操作系统 (Modern Operating Systems), by Andrew S. Tanenbaum
OS1--Modern Operating Systems, by Andrew S. Tanenbaum and Herbert Bos
TCP/IP详解 卷1：协议 (原书第2版), by Kevin R. Fall and W. Richard Stevens
UNIX and Linux System Administration Handbook, by Evi Nemeth et al.

The Sequential Quadratic Programming Method


2 Newton Methods and Local Optimality
In this and subsequent sections we trace the development of Newton methods from the simplest case of nonlinear equations, through to the general case of nonlinear programming with equations and inequalities.
minimize f(x) over x ∈ IR^n
subject to c_i(x) ≥ 0,  i = 1, 2, . . . , m.   (1.1)
In this formulation, equation constraints must be encoded as two opposed inequality constraints, that is c(x) = 0 is replaced by c(x) ≥ 0 and −c(x) ≥ 0, which is usually not convenient. Thus in practice a more detailed formulation is appropriate, admitting also equations, linear constraints and simple bounds. One way to do this is to add slack variables to the constraints, which
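The encodings discussed above can be written out explicitly; the following is a sketch in our own notation, not reproduced from the paper:

```latex
% An equation constraint posed as two opposed inequalities:
c(x) = 0 \quad\Longleftrightarrow\quad c(x) \ge 0 \ \text{ and } \ -c(x) \ge 0 .
% Conversely, an inequality becomes an equation plus a simple bound
% by introducing a slack variable $s_i$:
c_i(x) \ge 0 \quad\Longleftrightarrow\quad c_i(x) - s_i = 0,\quad s_i \ge 0 .
```

The slack-variable form trades general inequalities for equations plus simple bounds, which is often the more convenient structure for an implementation.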

Asymptotic Quantum Search and a Quantum Algorithm for Calculation of a Lower Bound of the P


ASYMPTOTIC QUANTUM SEARCH AND A QUANTUM ALGORITHM FOR CALCULATION OF A LOWER BOUND OF THE PROBABILITY OF FINDING A DIOPHANTINE EQUATION THAT ACCEPTS INTEGER SOLUTIONS

R. V. Ramos and J. L. de Oliveira
rubens@deti.ufc.br  jluzeilton@deti.ufc.br
Department of Teleinformatic Engineering, Federal University of Ceara, Campus do Pici, C. P. 6007, 60455-740, Fortaleza, Brazil.

Several mathematical problems can be modeled as a search in a database; an example is the problem of finding the minimum of a function. Quantum algorithms for solving this problem have been proposed, and all of them use the quantum search algorithm as a subroutine, with several intermediate measurements. In this work, a new quantum algorithm for finding the minimum of a function is proposed in which quantum search is not used as a subroutine and only one measurement is needed; we name this asymptotic quantum search. As an example, we propose a quantum algorithm, based on asymptotic quantum search and quantum counting, that is able to calculate a lower bound of the probability of finding a Diophantine equation with integer solutions.

Keywords: quantum algorithms, quantum search, optimization, Diophantine equations.

1. Introduction

Grover's quantum search algorithm is an important result in quantum computation: it proves that quantum superposition can speed up the task of finding a specific value within an unordered database. Quantum search is proved to use O(N^{1/2}) operations of the oracle (compared with the O(N) operations of the best classical algorithm), a quadratic speed-up [1-3]. Several mathematical problems can be modeled as a search, such as the problem of finding the minimum (or the maximum) of a function. Thus, some algorithms for finding the minimum or maximum using quantum search have been proposed [4,5], using the generalization of quantum search proposed in [6] as a subroutine that is called several times.
Every time the quantum search is called, a measurement is performed at the end. The number of measurements in the algorithm proposed in [4] is O(log₂ N), where N is the number of elements in the database [7]. Aiming to reduce the number of measurements, Kowada et al. [7] proposed a new quantum algorithm for finding the minimum of a function that performs O(log N) measurements. Here we go further, proposing a quantum algorithm for finding the minimum that performs only one measurement, at the end of the algorithm. Furthermore, unlike the previously proposed quantum algorithms for finding the minimum, quantum search is not used as a subroutine in the proposed algorithm; it is part of the algorithm like any other quantum gate. In this work, as an example, we apply asymptotic quantum search together with the quantum counting algorithm [3,8] in order to create a quantum algorithm able to calculate a lower bound of the probability of finding a Diophantine equation that accepts integer solutions. We denote this probability by D.

2. Quantum algorithm for finding the minimum of a function

The circuit for the proposed quantum algorithm is shown in Fig. 1.

Fig. 1 - Circuit for asymptotic quantum search.

In Fig. 1, U_f is the quantum gate that implements the function f,

U_f |x⟩|0⟩ = |x⟩|f(x)⟩,   (1)

and the gate QBSC is a quantum bit string comparator [9],

QBSC |x⟩|y⟩|0⟩ = |x⟩|y⟩|b⟩,   (2)

where b = 0 if x > y and b = 1 if x ≤ y. At last, the oracle in the amplitude amplification recognizes whether quantum registers 4 and 5 are in the total state |0⟩^⊗N |0⟩. In order to understand the quantum circuit in Fig.
1, we first show the operation of the quantum circuit U, following the states at the marked positions. After U_f, a Hadamard transform on the auxiliary register, the comparator QBSC and the uncomputation of the intermediate registers (equations (4.a)-(7)), the output of U decomposes as

U: Σ_x c_x |x⟩ ⟼ Σ_x c_x (n(x)/2^N) |x⟩|f(x)⟩ ⊗ |0⟩^⊗N|0⟩ + |χ⟩,   (8)

where the auxiliary register has been omitted and |χ⟩ collects all terms in which registers 4 and 5 are not in the state |0⟩^⊗N|0⟩. Here N is the number of qubits, hence there are 2^N elements in the database, and 1 ≤ n(x) ≤ 2^N is the number of elements y that obey f(x) ≤ f(y) for a given x and all y's. Thus, the better the solution, the larger the value of n(x); the y_i can assume any y value. The worse the solution, the lower the value of n(x) and the faster the decay of its amplitude. Now, applying the gate U k times together with the multi-controlled CNOTs, the quantum state just before the amplitude amplification is

Σ_x c_x (n(x)/2^N)^k |x⟩|f(x)⟩ ⊗ |0⟩^⊗N|0⟩ + |χ'⟩.   (9)

At the amplitude amplification, the oracle recognizes the state |0⟩^⊗N|0⟩ in the fourth and fifth quantum registers. Therefore, only the first term in (9) will have its amplitude amplified. Looking closer at the first term in (9), one sees that the amplitude of the searched answer is c_{x_min}, since n(x) = 2^N for x = x_min.
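The bookkeeping around n(x) can be checked with a short classical calculation. The sketch below uses an arbitrary toy function f (an assumption for illustration; the paper does not fix f) and computes n(x) together with the relative weight (n(x)/2^N)^k of each term after k applications of U:

```python
# Classical illustration of the n(x) bookkeeping in the text: for a toy
# function f over a 2**N-element domain, n(x) counts the y with f(x) <= f(y),
# and after k applications of U the (unnormalized) weight of term x scales
# as (n(x) / 2**N)**k.  f, N and k are arbitrary illustrative choices.
N = 3
domain = range(2 ** N)

def f(x):
    return (x - 5) ** 2              # toy function with its minimum at x = 5

def n(x):
    return sum(f(x) <= f(y) for y in domain)

k = 10
weights = {x: (n(x) / 2 ** N) ** k for x in domain}
best = max(weights, key=weights.get)
print(best, weights[best])
```

As expected, only x_min keeps full weight (n(x_min) = 2^N gives a factor of exactly 1), while every other term is suppressed exponentially in k.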
The second term with the largest amplitude has n(x) = 2^{N-1}; hence, after the k-th application of U its amplitude will be c_x/2^k. Thus, if k is large enough, only the term corresponding to the searched answer will have considerable amplitude and, after amplitude amplification, the searched answer will be obtained with high probability in a single set of measurements. For example, choosing c_x = 2^{-N/2} for all x, the largest amplitude after k applications of U (before amplitude amplification) is c_{x_min}, while the second largest amplitude will be c_{x_min}/2^k. The oracle in the amplitude amplification is applied π/(4θ) times, where [9] sin θ = √(p_good). If k is large enough, then p_good ≈ 2^{-N}. Obviously, the efficiency of the proposed algorithm is equal to the efficiency of the amplitude amplification.

3. Quantum algorithm for finding a lower bound of D

A Diophantine equation is a polynomial equation with any number of unknowns and with integer coefficients. For example, (x+1)^n + (y+2)^n + (z+3)^n - Cxyz = 0, where C, n ∈ ℕ. Since there is no universal process to determine whether any Diophantine equation accepts integer solutions or not, the best that one can do is to use a brute-force method. In this case, one cannot determine exactly the probability D of finding a Diophantine equation that accepts integer solutions, but it is possible to determine a lower bound for it. In this direction, the proposed quantum algorithm can be used to implement a brute-force attack (using the intrinsic quantum parallelism), and the accuracy of the answer obtained is limited by the amount of qubits used: the larger the amount of qubits, the closer the obtained answer is to D. Since only a finite amount of qubits can be used, the answer obtained is simply a lower bound of D. Since the (infinite) set of Diophantine equations considered is enumerable, we will use integer numbers to enumerate the Diophantine equations. The quantum circuit for the proposed quantum algorithm is shown in Fig. 2.
In this quantum circuit, we assume that the set of Diophantine equations considered has only three unknowns (x, y, z); however, the extension to a larger number of unknowns is straightforward. The register |p⟩ enumerates the Diophantine equations D_p, while three M-qubit registers hold candidate values of the unknowns x, y and z. In complete analogy with Section 2 (equations (10)-(17)), the gate U now computes |D_p(x, y, z)| and compares it, through the QBSC, with |D_p(x', y', z')| for a superposition of primed values; the component of the output state in which registers 5 and 6 are in |0⟩^⊗3M|0⟩ has amplitudes c_xyz n(x, y, z)/2^{3M}, where 1 ≤ n(x, y, z) ≤ 2^{3M} is the number of elements that obey |D_p(x, y, z)| ≤ |D_p(x', y', z')| for a given (x, y, z) and all (x', y', z'). Thus, the better the solution, the larger the value of n(x, y, z). We assume that, for the Diophantine equation number p, there are t_p minima (x_opt^s, y_opt^s, z_opt^s), s = 1, 2, ..., t_p, each with n(x_opt^s, y_opt^s, z_opt^s) = 2^{3M}. The states |χ_i⟩, i = 1, 2, 3, 4, and |χ⟩ are states with undesired terms. Now, applying the gate U k times together with the multi-controlled CNOT gates, the quantum state just before the amplitude amplification is

Σ_p Σ_{s=1}^{t_p} c_{opt_s} |p⟩|x_opt^s, y_opt^s, z_opt^s⟩ ⊗ |0⟩^⊗3M|0⟩ + Σ_p Σ_{(x,y,z)} c_xyz (n(x, y, z)/2^{3M})^k |p⟩|x, y, z⟩ ⊗ |0⟩^⊗3M|0⟩ + |Ψ⟩,   (18)

where the second sum runs over the non-optimal (x, y, z) and the state |Ψ⟩ represents the uninteresting terms. At the amplitude amplification, the oracle recognizes the state |0⟩^⊗3M|0⟩ in the fifth and sixth quantum registers. Therefore, only the first term in (18) will have its amplitude amplified.
Looking closer at the first term in (18), one sees that the amplitude of one of the searched answers is c_{opt_s} (comparing with the single-variable case in Section 2, we have n(x, y, z) = 2^{3M} for (x, y, z) = (x_opt^s, y_opt^s, z_opt^s)). The second term with the largest amplitude has n(x, y, z) = 2^{3M-1}; hence, after the k-th application of U its amplitude will be c_xyz/2^k. Thus, if k is large enough, only the terms corresponding to the searched answers will have considerable amplitude and, after amplitude amplification, they will be obtained with high probability in a single measurement. For example, choosing c_xyz uniform for all (x, y, z), the largest amplitude after k applications of U (before amplitude amplification) is c_{opt_s}, while the second largest amplitude will be c_{opt_s}/2^k. The oracle in the amplitude amplification is applied π/(4θ) times, where sin θ = √(p_good).   (19)

If k is large enough, then p_good ≈ t_p 2^{-3M}. The efficiency of the proposed algorithm is equal to the efficiency of the amplitude amplification. Thus, the final state after the amplitude amplification is

|ψ_end⟩ = Σ_p Σ_{s=1}^{t_p} |p⟩ |x_opt^s, y_opt^s, z_opt^s⟩ |D_p(x_opt^s, y_opt^s, z_opt^s)⟩ + |err⟩.   (20)

In (20) one can see that, for each one of the 2^N Diophantine equations, the algorithm calculates the minima, testing 2^M different values for each unknown. Moreover, the term |err⟩ represents the remaining unwanted states due to the non-ideal amplitude amplification.

In order to calculate the lower bound of D, we use the state (20) as input of a quantum counting algorithm whose oracle recognizes a zero in the last register of (20), that is, D_p(x_opt^p, y_opt^p, z_opt^p) = 0. The quantum counting algorithm returns the number of elements in the database that pass the oracle test. Let us name this value r(N, M). Hence,

D = lim_{N,M→∞} r(N, M)/2^N,   (21)

and a lower bound for D is simply D_est = r(N, M)/2^N.

4. Discussion

The problem of deciding whether a given Diophantine equation has integer solutions or not is, according to the Church-Turing computability thesis, an undecidable problem.
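For comparison, the classical brute-force counterpart of the estimate D_est = r(N, M)/2^N can be sketched as follows. The family of equations D_p and the search bounds below are arbitrary illustrative choices, not the enumeration used in the paper:

```python
from itertools import product

# Brute-force lower-bound estimate of D: enumerate a hypothetical family of
# Diophantine equations D_p(x, y, z) = x**3 + y**3 - z**3 - p = 0, test all
# unknowns in a bounded range, and count the fraction that admit a solution.
N_EQS = 16      # number of equations tested (2**N in the paper's notation)
BOUND = 8       # each unknown tested over 0..BOUND-1 (2**M values each)

def has_solution(p):
    return any(x**3 + y**3 - z**3 == p
               for x, y, z in product(range(BOUND), repeat=3))

r = sum(has_solution(p) for p in range(N_EQS))
d_est = r / N_EQS
print(r, d_est)
```

For a fixed set of equations, enlarging BOUND can only increase r, so the computed fraction is indeed a lower bound on the fraction of solvable equations in this family.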
In fact, this problem is related to the halting problem for Turing machines: the Diophantine equation problem could be solved if and only if the Turing halting problem could be [10]. Since there is no universal process to decide whether a given program will halt a Turing machine or not, the best that one can do is to run the program and wait for a halt. The point is: one can never know whether one has waited long enough to observe the halt. The Diophantine equation problem has a similar statement. Since there is no universal procedure to decide whether a given Diophantine equation accepts an integer solution, the best that one can do in order to check whether it accepts one or not is to test the integer numbers. The point here is: one can never know the answer, because there are infinitely many integers to be tested. Associated with the halting problem is the Chaitin number Ω, the probability that a given program halts the Turing machine [11-17]. The number Ω is an incompressible number and it cannot be calculated. Similarly, one can define the probability D of finding a Diophantine equation with integer solutions. Obviously, D is also a number that cannot be calculated, because knowledge of D implies knowledge of a way to decide whether any given Diophantine equation accepts an integer solution or not which, in turn, implies knowledge of Ω and of the solution of the halting problem. As happens with Ω, it is possible to calculate a lower bound for D classically; however, it must be a brute-force algorithm. Classical algorithms for the calculation of D will differ basically in the method by which the zeros of the Diophantine equations are obtained. The quantum algorithm proposed here is also a brute-force algorithm, but its advantage is the quantum parallelism, since several Diophantine equations are tested simultaneously.

5. Conclusions

We proposed a new quantum algorithm for finding the minimum of a function that requires only one measurement.
This is important since the complete circuit is simplified and measurements of qubits are not noise free. This algorithm is an asymptotic quantum search. As an example of its application, we constructed a quantum algorithm for finding a lower bound of the probability of finding a Diophantine equation that accepts integer solutions. A lower bound can also be calculated using a classical computer; however, our algorithm is more efficient since it can test 2^N (for N qubits) Diophantine equations simultaneously. At last, the algorithm proposed requires only one measurement, after the quantum counting stage.

Acknowledgements
The authors thank the anonymous reviewer for useful comments.

References
1. Grover, L. K.: A fast quantum mechanical algorithm for database search, in Proc. 28th Annual ACM Symposium on the Theory of Computing, New York, p. 212 (1996).
2. Grover, L. K.: Quantum mechanics helps in searching for a needle in a haystack, Phys. Rev. Lett., 79, p. 325 (1997).
3. Nielsen, M. A., and Chuang, I. L.: Quantum Computation and Quantum Information, Cambridge Univ. Press, Cambridge, ch. 4 and 6 (2000).
4. Dürr, C., and Høyer, P.: A quantum algorithm for finding the minimum, arXiv:quant-ph/9607014 (1996).
5. Ahuja, A., and Kapoor, S.: A quantum algorithm for finding the maximum, arXiv:quant-ph/9911082 (1999).
6. Boyer, M., Brassard, G., Høyer, P., and Tapp, A.: Tight bounds on quantum searching, arXiv:quant-ph/9605034 (1996).
7. Kowada, L. A. B., Lavor, C., Portugal, R., and de Figueiredo, C. M. H.: A new quantum algorithm to solve the minimum searching problem, International Journal of Quantum Information, 6(3), 427-436 (2008).
8. Kaye, P., Laflamme, R., and Mosca, M.: An Introduction to Quantum Computing, 1st ed., Oxford University Press (2007).
9. Oliveira, D. S., and Ramos, R. V.: Quantum bit string comparator: circuits and applications, Quantum Computers and Computing, 7(1), 17-26 (2008).
10. Kieu, T. D.: Quantum algorithms for Hilbert's tenth problem, arXiv:quant-ph/0110136 (2003).
11. Calude, C.: Theories of Computational Complexity, North-Holland, Amsterdam (1998).
12. Chaitin, G.: Meta Math!: The Quest for Omega, Pantheon Books (2005).
13. Chaitin, G.: Irreducible complexity in pure mathematics, arXiv:math.HO/0411091 (2004).
14. Chaitin, G.: Information-theoretic computational complexity, IEEE Trans. on Inf. Theory, IT-20, 10 (1974).
15. Calude, C.: Information and Randomness, Springer-Verlag, Berlin (1994).
16. Chaitin, G.: Algorithmic Information Theory, 1st ed., Cambridge University Press (1987).
17. Markowsky, G.: Introduction to algorithmic information theory, J. of Universal Comp. Science, 2(5), 245 (1996).

Essay Material: Algorithms Are a Double-Edged Sword

English answer:

Algorithms are indeed a double-edged sword. On one hand, algorithms have greatly improved our lives and brought us convenience and efficiency. For example, recommendation algorithms on e-commerce platforms can help us find products that we may be interested in, saving us time and effort. Search algorithms enable us to quickly find information on the internet, making research and learning more accessible. Algorithms also play a crucial role in various industries, such as finance, healthcare, and transportation, optimizing processes and improving outcomes.

However, algorithms also have their downsides. One major concern is the issue of algorithmic bias. Algorithms are created by humans and are based on data that may contain inherent biases. This can lead to unfair outcomes, such as discriminatory hiring practices or biased loan approvals. Another concern is the potential for algorithms to invade privacy. With the increasing amount of data being collected and analyzed, algorithms have the power to infer personal information and make predictions about individuals, raising concerns about surveillance and data misuse.

Furthermore, algorithms can also perpetuate echo chambers and filter bubbles. Recommendation algorithms tend to show us content that aligns with our existing preferences and beliefs, limiting our exposure to diverse perspectives. This can reinforce our existing biases and hinder critical thinking and open-mindedness.

Chinese answer (translated): Algorithms are indeed a double-edged sword.

A Dantzig-Wolfe (D-W) Decomposition Algorithm for the Railway Empty-Wagon Adjustment Model

XUE Feng, SUN Zongsheng
(1. School of Transportation and Logistics, Southwest Jiaotong University, Chengdu 611756, China; 2. National United Engineering Laboratory of Integrated and Intelligent Transportation, Chengdu 611756, China)
Abstract: Empty-wagon adjustment mainly refers to the rational optimization of empty railway wagons. Model construction and algorithmic research on the railway empty-wagon adjustment problem can promote higher wagon utilization efficiency and lower the related transportation costs. Building on theoretical research into railway empty-wagon adjustment,
Sichuan Science and Technology Program (2019YJ0211). Author biography: XUE Feng (1981- ), male, from Zoucheng, Shandong; Ph.D. in engineering; associate professor at Southwest Jiaotong University; research interests: transportation organization theory and system optimization. Citation format: XUE Feng, SUN Zongsheng. A Dantzig-Wolfe decomposition algorithm for the railway empty-wagon adjustment model [J]. Journal of Transportation Engineering and Information, 2019, 17(4): 43-48.
Key words: railway transportation; empty railway wagon distribution; Dantzig-Wolfe decomposition algorithm; complexity of the algorithm
Received: 2019-04-28. Foundation items: National Natural Science Foundation of China (61203175, 61403022); Fundamental Research Funds for the Central Universities (2682013CX068, 2682016CX118);
0 Introduction

The rational and effective adjustment of empty wagons is a problem of great concern to many railway enterprises. China's railway freight-wagon fleet numbers around 500,000 vehicles. In the railway network, after a wagon is unloaded, if the number of wagons required at the unloading station does not match the number to be loaded at that station, empty wagons must be redistributed among stations to meet transportation demand. Effective empty-wagon adjustment can improve wagon utilization efficiency, satisfy shippers and market transportation demand, strengthen the competitiveness of railway freight, and increase transportation revenue.

Evolutionary Algorithms

Juan Carlos Granados Narbona
Communication and Operating Systems Group
Berlin University of Technology
jcgranad@cs.tu-berlin.de

Abstract. Evolutionary computation has been a rapidly growing field during the last decade, although its origins can be traced back to the late 1950s. A subset of evolutionary computation are evolutionary algorithms (EA). This paper supplies a brief survey of their early beginnings, design and different implementations like genetic algorithms (GA), evolution strategies (ES) and evolutionary programming (EP). Specific examples and a practical case will also be commented on.

Keywords: evolutionary algorithms, evolution strategies, genetic algorithms, genetic programming

1 Introduction

Why use evolution as a method for solving computational problems? As we know, many computational problems require searching a huge number of possibilities for solutions. Other problems may demand complex solutions that are difficult to program by hand or using traditional algorithms. Often the effective use of parallelism is suggested for exploring different possibilities in an effective way. Or our computer program needs to be adaptive for continuous good performance in a changing environment. Here is the point where the mechanisms of evolution seem to be well suited, contributing new concepts and methods inspired by biological evolution.

In the seminar "Organic Computing - summer semester 2006" we discuss related themes as follows. The next section 2 illustrates the concept of evolution and evolutionary algorithms. In the third section 3 the general structure of EAs, their most important elements and how they work is described. The fourth section 4 provides a short description of well-known EAs, special characteristics and a criteria comparison of them. After that, we comment on a practical case 5 of using evolutionary algorithms to optimize a real search problem. Finally, the last section 6 presents our conclusions.

2 Foundations of evolutionary algorithms

In the following we present some
elementary definitions.

2.1 Evolution

Evolution (Latin evolvere: to unroll, unfold, progress) could be defined as a process that results in heritable changes in a population spread over many generations [1]. Evolution rests on three necessary conditions:

1. the presence of elements able to replicate,
2. a varying copying accuracy, known as variation, as well as
3. a different probability of each variant to enter, as an element, the sample from which the following population is built up: selection.

The term was introduced in 1774 by the Swiss natural scientist Albrecht von Haller (1708-1777) for his conception of the development of humans. The modern understanding of evolution was first presented in 1858 in a joint paper by Charles Darwin (1809-1882) and Alfred Russel Wallace (1823-1913) and popularized in Darwin's book "The Origin of Species" (1859). It is based on the theory of natural selection, a process by which individual organisms with favorable traits are more likely to survive and reproduce. The evolution theory at present is a complex, scientific theory of biology. It states that all life has a common descent and that the variety of species has developed by evolution. Basic statements of the evolution theory are:

- Evolution always takes place.
- Evolution is not reversible (Dollo's law) [2]: individual structures or achievements can mimic old conditions, but the underlying genes no longer have the same structure.
- Evolution is not purposeful in the scientific sense (it does not proceed toward a recognizable goal).
- Evolution affects all levels of the animate world: from the molecule to the ecological system.

2.2 Definition of evolutionary algorithms

Evolutionary algorithm is a generic term used to indicate any population-based optimization algorithm that uses mechanisms inspired by biological evolution, such as reproduction, mutation, recombination (evolution or variation operators), natural selection and survival of the fittest. Candidate solutions to the optimization problem should be seen as individuals in a population, and the fitness (cost) function as the environment within which the solutions take place. Each individual is evaluated by a fitness function. Then the population evolves towards better and better individuals via operators based on natural selection, i.e. survival and reproduction of the fittest, and genetics, e.g. crossover and mutation operators - Goldberg (1989) [3]. After repeated application of the above operators the new population defines a new generation. The evolution process reiterates until a determined objective is reached, such as an appropriate solution (individual), or until a specific number of generations (turns) has been delivered.

3 Theory of evolutionary algorithms

3.1 Structure of an evolutionary algorithm

Evolutionary algorithms are stochastic search methods that follow the natural biological evolution process. At the beginning of the computation process a number of individuals (the population) are randomly initialized. The objective function, which describes our optimization search goal, is then evaluated for these individuals. The first (initial) generation is produced. If the optimization criteria are not met, the algorithm starts the creation of a new generation: it selects individuals according to their fitness for the production of offspring (parent selection mechanism) and recombines them to produce offspring.
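The generational cycle just described can be made concrete in a short runnable sketch. The problem (OneMax: maximize the number of 1-bits in a bit string), the encoding, and all operator choices (tournament parent selection, single-point crossover, bit-flip mutation, full generational replacement) are illustrative assumptions, not prescribed by the text:

```python
import random

# Minimal runnable version of the generic EA loop described above,
# applied to the OneMax toy problem (all parameter values are arbitrary).
GENOME_LEN, POP_SIZE, GENERATIONS, P_MUT = 20, 30, 60, 0.05

def fitness(ind):
    return sum(ind)                                  # evaluation

def tournament(pop):                                 # parent selection
    return max(random.sample(pop, 3), key=fitness)

def crossover(a, b):                                 # single-point recombination
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(ind):                                     # per-bit flip mutation
    return [bit ^ (random.random() < P_MUT) for bit in ind]

random.seed(0)                                       # reproducible run
pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
       for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):                         # generational loop
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(POP_SIZE)]
best = max(pop, key=fitness)
print(fitness(best))
```

Swapping in other selection, recombination, or reinsertion schemes from Section 3.2 changes only the corresponding helper functions; the outer loop stays the same.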
Mutation of all offspring can also be applied with a certain probability. After this, the fitness of the offspring is again computed and the offspring are inserted into the population, replacing the parents and producing a new generation. This cycle is performed (see figure 1) until the optimization criteria are reached.

Fig. 1. Structure of an evolutionary algorithm. Taken from: Introduction to Evolutionary Computing - Springer Verlag [4]

In correlation to this description, a pseudo-algorithm that can be applied to all EAs can be described as follows:

- BEGIN
1. Generation t := 0;
2. initialize the population P(t);
3. while not termination criterion do
   - evaluate (evaluation and parent selection) the population P(t);
   - next P'(t) = variation (recombination/mutation) over P(t);
   - evaluate (evaluation) P'(t);
   - P(t+1) = select (survivor selection) P'(t) ∪ Q, where P is the population and Q the quota of parent individuals that should survive into the next generation, in order to preserve specific criteria;
   - Generation t = t + 1;
4. end while;

3.2 Components of an evolutionary algorithm

These are the main components of an evolutionary algorithm (see figure 2):

Fig. 2. Diagram of the components of an EA.

1. Population. The primary parameters for controlling the genetic algorithm are the population size and the maximum number of generations to be run. Although EAs can handle very large populations, usually a fixed maximum number of individuals is defined.

2. Representation of individuals "links" the real world (original problem context) to the EA world (the problem-solving space where evolution takes place) (figure 3). We also distinguish between objects and their parameters forming possible solutions within the problem context (phenotype) and their encoding (genotype). This encoding/decoding function must also be invertible: to each genotype there has to be at most one corresponding phenotype.

Fig. 3. Phenotype, genotype and encoding/decoding function

Selecting a representation scheme that facilitates solution of the problem by an EA often requires considerable insight into the problem and good judgment.

3. Evaluation function. The evolutionary process is driven by the fitness measure. It assigns a fitness value to each individual, which is the basis for the selection process. This fitness value should be similar for alike individuals with the same properties.

4. Parent selection mechanism. Selection determines which individuals are chosen for mating (recombination) and how many offspring each selected individual produces. The first step is fitness assignment; the actual selection is performed in the next step. Parents are selected according to their fitness by means of one of the following algorithms:

- Rank-based fitness assignment [5]: the population is sorted according to the objective values. The fitness assigned to each individual depends only on its position in the individuals' rank and not on the actual objective value.
- Proportional fitness assignment [6]:
  - Roulette-wheel selection: this method maps the individuals to contiguous segments of a line. Each individual's segment is equal in size to its fitness. A random number is generated and the individual whose segment spans the random number is selected. The process is repeated until the desired number of individuals is obtained. This method is analogous to a roulette wheel with each slice proportional in size to the fitness.
  - Stochastic universal sampling: the individuals are assigned to contiguous sections of a line exactly as in roulette-wheel selection. The difference is that a number of pointers are distributed evenly over the line. The number of individuals to be selected (n) corresponds to the number of pointers; the distance between the pointers is then 1/n, and the position of the first pointer is given by a randomly generated number in the range [0, 1/n]. The individuals whose segments include a pointer are selected.
- Tournament selection: tournaments between several individuals are carried out. The winner of a tournament, and thereby the selected individual, is the best individual with the highest fitness. As many tournaments take place as individuals are to be selected.
- Truncation selection: the individuals are sorted according to their fitness. All individuals with a fitness larger than a fixed threshold are selected; thereby only the best individuals are selected.

5. Variation operators
- Recombination/Crossover produces new individuals by combining the information contained in the parents (mating population). Depending on the representation of the variables of the individuals, the following algorithms [7] can be applied:
  - Real-valued recombination (figure 4):
    - Discrete recombination: performs an exchange of variable values between two individuals.
    - Intermediate recombination: the variable values of the offspring are chosen somewhere around and between the variable values of the parents.
    - Line recombination: generates variable values at any point on the line defined by the parents.
  - Binary-valued recombination (crossover, figure 5):
    - Single-point crossover: one crossover position relative to the number of variables of the individuals is randomly selected, and the variables are interchanged between the individuals about this point.
    - Multi-point crossover: m crossover positions are selected (with no duplicates) and sorted. After that, the variables between successive points are exchanged to create two new offspring.
    - Uniform crossover: makes use of a randomly created mask with the same length as the variables. The parity of the bits in the mask indicates which parent will supply the offspring with which bits.
    - Shuffle crossover: like uniform crossover, but the variables are randomly shuffled in both parents before the exchange and unshuffled in the offspring after crossover.

Fig. 4. Real-valued recombination. After Pohlheim [8]

    - Crossover with reduced surrogate: a crossover point is placed only where the variable values of the parents differ from one another.
- Mutation: the mutation operator simply changes the value of a gene to a new random value. The purpose of the mutation operator is to prevent the genetic population from converging to a local minimum and to introduce new possible solutions into the population. This is also necessary to raise the genetic diversity of individuals in the population (mutation can generate gene values that are not present in the current population). It randomly alters each individual with a (usually) small probability. Both crossover and mutation are stochastic operators, applied with user-defined probabilities. The probability of mutation, depending on the selected EA, is usually much lower than that of crossover.

Fig. 5. Binary-valued recombination. After Pohlheim [8]

6. Survival selection mechanism (reinsertion). After the offspring is produced and evaluated, it must be inserted into the population (figure 6). One should decide how many and which offspring individuals will be inserted into the population, and which individuals of the population should be replaced. A selection among the offspring must only be carried out if not all produced individuals are to be inserted into the population; the parameter for this is the reinsertion (insertion) rate. With elitist reinsertion it is guaranteed that the best individuals of the population remain in the population and are only replaced if better individuals are found.

Fig. 6. Schema of the survival selection mechanism. After Pohlheim [8]

7. Termination criterion. Each run of the EA requires the specification of a termination criterion for deciding when to terminate a run, and a method of result designation. One frequently used method is to designate the best individual obtained in any generation of the population during the run (i.e., the best-so-far individual) as the result of the run.

4 Known evolutionary algorithms

More than sixty years ago, around 1940 and long before the breakthrough of computers, the idea arose of applying the evolution principles introduced by Darwin to the process of solving problems. With the arrival of computers in engineering processes, this idea was revived as "evolutionary systems": using evolution and its principles as a search and optimization tool for a given
problem. So evolutionary computing began to attract interest in the scientific community. Some scientists deploy such evolutionary methods only as a study of how evolution takes place in nature and how these natural operators (genetic variation and selection) could be imported into computer systems. Others just developed evolutionary algorithms to solve their specific problems. Many others deployed algorithms for machine learning or optimization. Today evolutionary algorithms, or evolutionary computing, have become a very interesting field of organic computing, with genetic algorithms, evolution strategies and evolutionary programming as its well-known representatives. In the following we show these evolutionary algorithms and their main characteristic features.

4.1 Genetic Algorithms

"A genetic algorithm (GA) is a mathematical search technique based on the principles of natural selection and genetic recombination [9]". Genetic algorithms were invented by John Holland and developed with the help of his students and colleagues at the University of Michigan in the 1960s and 1970s. Their purpose was to study natural adaptation and import it into computer systems, and the work was presented in his book "Adaptation in Natural and Artificial Systems" from 1975. Genetic algorithms are so far generally the best known and most robust kind of evolutionary algorithms. They are robust search algorithms, efficient when the search space is large or complex, when no mathematical analysis is available, or when traditional search methods fail. GAs also get good results when domain knowledge is marginal or too difficult to encode into the search space. Representation and a meaningful fitness evaluation function are also very important for the successful application of GAs. The genetic algorithm uses a constant-size population (set) of individuals (chromosomes) and applies mainly the crossover variation operator to create a new generation. Individuals, seen as possible solutions to the problem and each associated with a fitness value, are encoded using a fixed-length alphabet, usually as binary strings (bit strings) of 0s and 1s (genes). Mutation, if used, consists of a bit-flip operation over each gene of the individuals separately, and it is applied with a small probability.

4.2 Evolution Strategies

Evolution strategies (ES) were introduced by Rechenberg [10] (1965, 1973) and further developed by Schwefel [11] (1975, 1977) at the Technical University of Berlin. The motivation of their study was, from the beginning, to solve engineering design problems: Rechenberg and Schwefel developed ES in order to conduct successive wind tunnel experiments for aerodynamic shape optimization (e.g. Rechenberg developed a method to optimize real-valued parameters for devices such as airfoils). The field of evolution strategies has remained an active area of research and algorithm design for solving specific problems, mostly developing independently from the field of genetic algorithms (although recently the two communities have begun to interact) [12]. A short review of ES is given by Bäck, Hoffmeister, and Schwefel (1991) [13]. The representation used in evolution strategies is a fixed-length real-valued vector. As with the bit strings of genetic algorithms, each position in the vector corresponds to a feature of the individual. ES often uses a smaller population (e.g. under 20 members) and prioritizes mutation over crossover as the variation operator.

4.3 Evolutionary Programming

Fogel, Owens, and Walsh [14] developed evolutionary programming (EP) actually as algorithms to solve specific problems. They used the concept of finite state machines, which were evolved by randomly mutating state transition diagrams and selecting the fittest, as the representation of candidate solutions to a given task. "The representation used in EP is typically stringently connected to the problem domain, usually as a fixed-length real-valued vector. The primary difference between evolutionary programming and the other approaches (GA, GP, and ES) is that no exchange of material between
individuals in the population is made.Therefore,only mutation operators are used.A typ-ical selection method is to select every individual as parent(n),mutate then to form the offspring(n)and probabilistically select,the half of the2n individuals as survivors and member of the next generation.[..]”[15]4.4Learning Classifier SystemsA Learning Clasifier System[16](figure7),or LCS,is a machine learning system with close links to reinforcement learning and genetic algorithms.They werefirst described by John Holland and consist of a population of binary rules on whicha genetic algorithm altered and selected the best rules.A rule utility is decided by a reinforcement learning technique(don’t usefitness function).The data/information of the environment are coded by the detectors(Detektoren in german)into a(or several)binary message.This messages can apply to the condition section of classifier from the classifier list,which write the message into the message list,which can activate again classifier etc.If this procedure are scheduled,the remaining messages in the message list are converted in actions by the effectors(Effektoren).The classifier is evaluated by one credit assignment algorithm and combined from a genetic algorithm to new classifier.[17]A classifier system is able to”classify”its environment and create rules dynam-ically,therefore able to adapt to differing situations on thefly.(More Information about LCS could be found at:[18])Fig.7.Structure of a classifier Systems4.5Genetic ProgrammingGenetic Programming(GA)is a special case of a genetic algorithm.Instead of by bit strings,the individuals are represented by syntax trees,with rule conditions and/or attirbute values in the leaf nodes and functions(relational,logical or mathematical operators)in the internal nodes.This individuals are maped to programs.Programs can be generated also automatically with the help of genetic programming8.4.6Outlook and comparisonSpecific examples of EAs are given below.Most of 
these techniques are similar in spirit, but differ in the details of their implementation and in the nature of the particular problems to which they have been applied.

Fig. 8. Genetic Programming

"Comparison:
– Genetic algorithm: this is the most popular type of EA. One seeks the solution of a problem in the form of strings of numbers (traditionally binary, although the best representations are usually those that reflect something about the problem being solved - these are not normally binary), virtually always applying recombination operators in addition to selection and mutation;
– Evolutionary programming: like genetic programming, only the structure of the program is fixed and its numerical parameters are allowed to evolve;
– Evolution strategy: works with vectors of real numbers as representations of solutions, and typically uses self-adaptive mutation rates;
– Genetic programming: here the solutions are in the form of computer programs, and their fitness is determined by their ability to solve a computational problem;
– Learning classifier system: instead of using a fitness function, rule utility is decided by a reinforcement learning technique."
Taken from Wikipedia - [19]

     Representation                      Used operators               Population size   Choice mechanism
GA   Bit-string                         Recombination*, Mutation     Fixed             Probabilistic
ES   Vector of floating-point numbers   Mutation*, Recombination     Fixed             Deterministic
EP   Vector of floating-point numbers   Mutation*, Recombination     Fixed             Deterministic
GP   Parse tree                         Recombination*, Mutation     Variable          Probabilistic

Table 1. Criteria comparison of EAs - * marks the most relevant operators

5 A practical example: getting the best with evolutionary algorithms

In the following we discuss the paper by Neri [20] to show how EAs can be applied to optimize searching processes. In his paper, Neri defines the detection of intrusions over computer networks as "the task of detecting anomalous pattern of network traffic". To display this association, he compares two models for the acquisition of traffic data (one based on a distributed genetic algorithm - REGAL - and one based on a "greedy" heuristic - RIPPER -) and shows how the network data representation affects the performance of the traffic models.

Neri uses as a basis:
– traffic data from the Information Exploration Shootout project (IES) [21], provided as a large set of network logs containing intrusion attempts as well as normal traffic; this data can be partitioned;
– two learning systems: REGAL, the optimiser of former studies [22], and RIPPER as a benchmark against which to compare the optimization results.

REGAL [23] (RElational Genetic Algorithm Learner) is "a concept learning system based on a distributed genetic algorithm that learns First Order Logic multi-modal concept descriptions. REGAL uses a relational database to handle the learning examples that are represented as relational tuples" [24]. It was developed by Giordana and Neri at the Università di Torino.

5.1 Advanced options

Neri displays the optimization process using REGAL in analogy to RIPPER, as used by Lee, Stolfo and Mok in their study [25]:

"An iterative procedure that involves pattern mining and comparison, feature construction from patterns, and model building and evaluation is thus employed. In each iteration, a different combination of axis attribute and reference attribute is selected. The choices are limited among the essential attributes, that is, service, dst host, src dst, or src port."

After comparing results, he decides to develop a "more compact representation for the packets that consists in mapping a subset of feature's values into a single value... This compression-based representation is then a valuable way of increasing classification performances without introducing complex features that involve additional processing overhead". [20]

This is the main data-mining process (Figure 9).

Fig. 9. Overview of the data mining process. After Neri and Giordana [22]

6 Conclusions

In this paper the structure of evolutionary algorithms and how they work has been explained and commented on. A short list of EA milestones has also been presented. As seen, the steady
deployment process of evolutionary algorithms began almost 50 years ago. Genetic algorithms, evolution strategies and genetic programming still form the well-known framework of evolutionary algorithms, but newer concepts such as genetic programming and classifier systems are meeting with success and give the impression that this field of evolutionary computation is still gaining in importance.

International conferences take place annually, and new domains and common applications like automatic programming, social systems, optimization, immune systems and machine learning embrace the advantages, adaptability and diversity of evolutionary algorithms.

References

1. What is evolution? by Laurence Moran - copyright 1993-1997 [online, cited 2006/08/10]. Available from: /faqs/evolution-definition.html
2. Thienemann, A.: Die Chironomidengattung Pseudosmittia und das Dollosche Gesetz. Acta Biotheoretica 7(3-4) (1943) 117-134. Available from: http://www./openurl.asp?genre=volume&id=doi:10.1007/BF01578158
3. Goldberg, D.E.: Genetic algorithms in search, optimization, and machine learning. Addison-Wesley, Reading, MA (1989)
4. Eiben, A.E., Smith, J.E.: Introduction to Evolutionary Computing. Springer Verlag (2003)
5. Baker, J.E.: Adaptive selection methods for genetic algorithms. In: Proceedings of the 1st International Conference on Genetic Algorithms, Mahwah, NJ, USA, Lawrence Erlbaum Associates, Inc. (1985) 101-111
6. Goldberg, D.E.: Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA (1989)
7. Mühlenbein, H., Schlierkamp-Voosen, D.: Predictive models for the breeder genetic algorithm: I. Continuous parameter optimization. Evolutionary Computation 1(1) (1993) 25-49
8. Pohlheim, H.: Entwicklung und systemtechnische Anwendung evolutionärer Algorithmen - Abbildungsverzeichnis [online, cited 2006/08/10]. Available from: /diss/text/diss pohlheim ea lof.html
9. Holland, J.H.: Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor, MI (1975)
10. Rechenberg, I.: Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution. Frommann-Holzboog, Stuttgart (1973). In German.
11. Schwefel, H.P.: Evolutionsstrategie und numerische Optimierung. PhD thesis, Technische Universität Berlin, Berlin, Germany (1975). In German.
12. International Society for Genetic and Evolutionary Computation [online, cited 2006/08/10].
13. Bäck, T., Hoffmeister, F., Schwefel, H.P.: A survey of evolution strategies. In Belew, R.K., Booker, L.B., eds.: Proc. of the Fourth Int. Conf. on Genetic Algorithms, San Mateo, CA, Morgan Kaufmann (1991) 2-9
14. Fogel, L.J., Owens, A.J., Walsh, M.J.: Artificial Intelligence Through Simulated Evolution. John Wiley and Sons, New York (1966)
15. Cellular evolutionary algorithms site [online, cited 2006/09/25]. Available from: http://neo.lcc.uma.es/cEA-web/
16. Bull, L.: Applications of Learning Classifier Systems. Springer Verlag (2004)
17. Classifier systems and economic modeling [online, cited 2006/08/10]. Available from: /apl96/mitl.htm
18. Learning classifier systems (LCS) bibliography [online, cited 2006/08/10]. Available from: /~kovacs/lcs/search.html
19. Evolutionary algorithms - Wikipedia [online, cited 2006/09/25]. Available from: /wiki/Evolutionary algorithm
20. Neri, F.: Evolutive modeling of TCP/IP network traffic for intrusion detection. In: Real-World Applications of Evolutionary Computing, EvoWorkshops 2000: EvoIASP, EvoSCONDI, EvoTel, EvoSTIM, EvoROB, and EvoFlight, London, UK, Springer-Verlag (2000) 214-223
21. Information Exploration Shootout project - IES [online, cited 2006/08/10]. Available from: :8080/
22. Neri, F., Giordana, A.: A parallel genetic algorithm for concept learning. In Eshelman, L.J., ed.: Proc. of the Sixth Int. Conf. on Genetic Algorithms, San Francisco, CA, Morgan Kaufmann (1995) 436-443
23. Machine learning programs - REGAL [online, cited 2006/08/10]. Available from: http://www.di.unito.it/~mluser/programs.html
24. Giordana, A., Neri, F.: Search-intensive concept induction. Evolutionary Computation 3(4) (1996) 375-416
25. Lee, W., Stolfo, S.J., Mok, K.W.: Mining in a data-flow environment: experience in network intrusion detection. In: KDD '99: Proceedings of the Fifth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, New York, NY, USA, ACM Press (1999) 114-124. doi:10.1145/312129.312212
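The canonical GA loop described in Section 4.1 (fixed-size bit-string population, crossover as the main operator, occasional bit-flip mutation, probabilistic selection) can be sketched in a few lines. This is an illustrative toy, not code from the paper: the OneMax fitness function, tournament selection, simple elitism and all parameter values are our own assumptions.

```python
import random

random.seed(1)

def one_max(bits):
    # Toy fitness: number of 1-bits (the classic OneMax benchmark).
    return sum(bits)

def tournament(pop, k=3):
    # Probabilistic selection: return the fittest of k randomly sampled individuals.
    return max(random.sample(pop, k), key=one_max)

def crossover(a, b):
    # One-point crossover of two equal-length bit-strings.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(bits, rate=0.01):
    # Flip each gene independently with a small probability.
    return [1 - g if random.random() < rate else g for g in bits]

def genetic_algorithm(n_bits=40, pop_size=30, generations=60):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        elite = max(pop, key=one_max)  # keep the current best (simple elitism)
        children = [mutate(crossover(tournament(pop), tournament(pop)))
                    for _ in range(pop_size - 1)]
        pop = [elite] + children
    return max(pop, key=one_max)

best = genetic_algorithm()
print(one_max(best))
```

With these (arbitrary) parameters the population converges close to the all-ones optimum; swapping in a problem-specific fitness function is all that changes between applications.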

Foundations of Computer Science (3rd Edition), Forouzan (translation)


Chapter 1 Review Questions

1. Define a computer based on the Turing model.

Answer: Turing proposed that all kinds of computation could be performed by a special kind of machine. He based the model on the actions that people perform when involved in computation, and abstracted these actions into a model of a computational machine that has really changed the world. The Turing model separates the process of computation from the computing machine itself.

2. Define a computer based on the von Neumann model.

Answer: The von Neumann model defines the components of a computer: memory, the arithmetic logic unit (ALU), the control unit and the input/output subsystems.

3. What is the role of a program in a computer based on the Turing model?

Answer: In the Turing model, a program is a set of instructions that tells the computer what to do.

4. What is the role of a program in a computer based on the von Neumann model?

Answer: The von Neumann model states that the program must be stored in memory. The memory of modern computers hosts both programs and their corresponding data (the stored-program concept).

Introduction to Algorithms, 4th Edition (English edition)


Introduction to Algorithms, Fourth Edition, by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein is undoubtedly one of the most influential books in the field of computer science. With its comprehensive coverage of various algorithms and their analysis, this book has become a beloved resource for students, researchers, and professionals alike.

The fourth edition builds upon the success of its predecessors, offering updated content and new insights into the world of algorithms. It starts with an introduction to algorithm analysis, providing readers with a solid foundation for understanding the efficiency and effectiveness of different algorithms. The authors skillfully explain the techniques used in algorithm design and analysis, such as divide-and-conquer, dynamic programming, and greedy algorithms.

One of the standout features of this book is its detailed and comprehensive treatment of data structures. From arrays and linked lists to trees and graphs, the authors explore the intricacies of each data structure, discussing their properties, operations, and analysis. This thorough examination ensures that readers gain a deep understanding of the strengths and weaknesses of different data structures, enabling them to make informed decisions when choosing the appropriate structure for their algorithms.

The book also covers a wide range of fundamental algorithms, including sorting, searching, and graph algorithms. The authors present these algorithms in a clear and concise manner, using pseudocode and diagrams to facilitate understanding. Additionally, they provide detailed analysis of these algorithms, discussing their time and space complexity as well as their theoretical limits.

Furthermore, the book delves into advanced topics such as computational geometry, network flow, and NP-completeness. These topics offer readers a glimpse into cutting-edge research and real-world applications of algorithms. The authors' expertise in these areas shines through, making the book a valuable resource for those interested in pushing the boundaries of algorithmic research.

In addition to its comprehensive content, the book stands out for its pedagogical approach. The authors include numerous exercises and problems throughout, encouraging readers to apply the concepts they have learned. These exercises not only reinforce understanding but also give readers an opportunity to sharpen their problem-solving skills.

The fourth edition of Introduction to Algorithms is a must-have for anyone interested in algorithms and their applications. Its clear explanations, comprehensive coverage, and practical exercises make it an invaluable resource for students, researchers, and professionals alike. Whether you are a beginner looking to grasp the basics or an experienced practitioner seeking to expand your knowledge, this book will enhance your understanding of algorithms and their role in computer science.

Essay topic: How to treat algorithms properly


English answer: When it comes to dealing with algorithms, it is important to approach them with a balanced perspective. On one hand, algorithms have greatly improved our lives by providing efficient solutions to complex problems. For example, search engines like Google use algorithms to quickly deliver relevant search results, saving us time and effort. Algorithms also play a crucial role in various industries, such as finance, healthcare, and transportation, where they help optimize processes and make informed decisions.

However, it is equally important to acknowledge the potential drawbacks and ethical concerns associated with algorithms. One major concern is the issue of bias. Algorithms are created by humans and can inadvertently reflect the biases and prejudices of their creators. For instance, facial recognition algorithms have been found to have higher error rates for people with darker skin tones, leading to potential discrimination. Another concern is the lack of transparency and accountability in algorithmic decision-making. When algorithms are used to make important decisions, such as in hiring or loan approvals, it is crucial to ensure that they are fair, unbiased, and explainable.

To address these concerns, it is necessary to have regulations and guidelines in place to govern the development and use of algorithms. Governments and organizations should promote transparency and accountability by requiring algorithmic systems to be auditable and explainable. Additionally, there should be diversity and inclusivity in the teams developing algorithms to minimize biases. Regular audits and evaluations of algorithms should be conducted to identify and rectify any biases or errors.

Moreover, it is essential to educate the public about algorithms and their impact. Many people are unaware of how algorithms work and of the potential consequences of their use. By promoting digital literacy and providing accessible resources, individuals can make informed decisions and actively engage in discussions about algorithmic fairness and ethics.

In conclusion, algorithms have become an integral part of our lives, bringing numerous benefits and conveniences. However, we must approach them with caution and address the potential biases and ethical concerns they may pose. By implementing regulations, promoting transparency, and educating the public, we can ensure that algorithms are developed and used in a responsible and fair manner.

Chinese answer (translated): When it comes to dealing with algorithms, we need to treat them with a balanced attitude.

The Wolf Colony Algorithm


Introduction of the Wolf Colony Algorithm

The Wolf Colony Algorithm (Wolf Algorithm, WA) [reference: The Wolf Colony Algorithm and Its Application] was proposed in 2007 by Yang Chenguang et al., based on the predatory behavior of wolf packs. The whole hunting process of a wolf pack can be abstracted into three behaviors: wandering (scouting), summoning, and besieging.

The rules of the WA include, among others, the "winner is king" head-wolf generation rule and the "survival of the strongest" pack renewal rule. This paper uses the survival-of-the-strongest pack renewal rule.

Most of the wolves in a pack that actually catch prey are the strong ones. After the pack captures prey, the food is distributed according to the strong-first, weak-later principle: the strong wolves get enough food, while the weaker wolves may starve. This ensures that the strong wolves can catch prey again next time, and keeps the whole pack from starving.

On the way to finding food, the paths in the traditional ant colony algorithm carry pheromone left by both the best and the worst ants. The best ants are likely to have found near-optimal paths, while the worst ants are unlikely to have done so. The pheromone left by the worst ants can therefore mislead later ants when they choose paths, causing the search to fall into a local optimum.

To speed up convergence and keep the traditional ant colony algorithm from getting trapped in local optima, this paper borrows the wolf-pack distribution principle to optimize the pheromone update mechanism: in each iteration, the ant on the locally best path is found and the pheromone it releases is increased, while the pheromone released by the ant on the locally worst path is removed. In this way the pheromone left along the ants' paths is used efficiently.

After the wolf-pack principle is incorporated, the pheromone on each path is updated by the following formula:

where L* is the length of the locally best path in this iteration, L** is the length of the locally worst path, and δ and ω denote the numbers of locally best and locally worst ants in this iteration.

The pheromone update scheme based on the wolf-pack distribution principle effectively improves the algorithm's ability to find the global optimum and increases the convergence speed.
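The update rule described above can be sketched in code. Since the exact formula is not reproduced in the text, the following is a hedged sketch built on the standard ACO evaporate-then-deposit update: the iteration-best ant's deposit is reinforced and the iteration-worst ant's deposit is withdrawn. The function name and the parameters `rho` and `Q` are our assumptions, not from the source.

```python
def update_pheromone(tau, ant_paths, rho=0.5, Q=1.0):
    """Standard ACO update plus the wolf-pack principle: the iteration-best
    ant's deposit is doubled ("the strongest wolf eats first") and the
    iteration-worst ant's deposit is withdrawn ("the weakest wolf starves").
    tau maps edges to pheromone; ant_paths is a list of (edge list, length)."""
    lengths = [length for _, length in ant_paths]
    best_len, worst_len = min(lengths), max(lengths)

    for edge in tau:                        # evaporation on every edge
        tau[edge] *= 1.0 - rho

    for path, length in ant_paths:
        for edge in path:
            tau[edge] += Q / length         # ordinary deposit
            if length == best_len:
                tau[edge] += Q / best_len   # extra reward for the best path
            if length == worst_len:         # remove the worst ant's deposit
                tau[edge] = max(tau[edge] - Q / worst_len, 0.0)
    return tau

# Toy instance: two ants, one short path (best) and one long path (worst).
tau = {("A", "B"): 1.0, ("B", "C"): 1.0, ("A", "C"): 1.0}
update_pheromone(tau, [([("A", "B"), ("B", "C")], 2.0), ([("A", "C")], 5.0)])
print(tau)
```

After one update the edges of the short path end up with three times the pheromone of the long path's edge, which is exactly the bias toward locally best routes that the wolf-pack principle is meant to create.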

2024 Grade 12 Nine-Province Joint Exam: English Paper with Answers and Analysis


2024 Grade 12 Nine-Province Joint Exam: English paper with answers (text version)

Notes:
1. Before answering, candidates must fill in their candidate number, name, exam school, room number and seat number on the answer sheet.
2. For multiple-choice questions, after selecting an answer, use a pencil to blacken the label of the corresponding answer on the answer sheet. If you need to change an answer, erase the old mark cleanly before marking another label. For non-multiple-choice questions, write the answers on the answer sheet; answers written on this test paper are invalid.
3. After the exam, hand in both this test paper and the answer sheet.

Part I: Listening Comprehension (two sections, 30 points)
Mark your answers on the test paper first. When the recording ends, you will have two minutes to transfer your answers to the answer sheet.

Section 1 (5 questions; 1.5 points each, 7.5 points in total)
Listen to the following five conversations. Each conversation is followed by one question; choose the best answer from the three options A, B and C. After each conversation you will have 10 seconds to answer the question and to read the next one. Each conversation is played only once.

Example: How much is the shirt?
A. £19.15.  B. £9.18.  C. £9.15.
The answer is C.

1. What will Chris do next?
A. Drink some coffee.  B. Watch the World Cup.  C. Go to sleep.
2. What is the probable relationship between the speakers?
A. Strangers.  B. Classmates.  C. Relatives.
3. What is the woman's attitude to the man's suggestion?
A. Favorable.  B. Tolerant.  C. Negative.
4. What can we learn about Tom?
A. He's smart for his age.  B. He's unwilling to study.  C. He's difficult to get along with.
5. What did Kevin do yesterday?
A. He went swimming.  B. He cleaned up his house.  C. He talked with his grandparents.

Section 2 (15 questions; 1.5 points each, 22.5 points in total)
Listen to the following five conversations or monologues.

Probabilistic analysis of algorithms


Probabilistic Analysis of Algorithms

Alan M. Frieze (Department of Mathematical Sciences, Carnegie-Mellon University, Pittsburgh, PA 15213; supported in part by NSF grant CCR9530974)
Bruce Reed (Equipe Combinatoire, CNRS, Univ. de Paris VI, 4 Place Jussieu, Paris 75005, France)

1 Introduction

Rather than analyzing the worst-case performance of algorithms, one can investigate their performance on typical instances of a given size. This is the approach we investigate in this paper. Of course, the first question we must answer is: what do we mean by a typical instance of a given size?

Sometimes there is a natural answer to this question. For example, in developing an algorithm which is typically efficient for an NP-complete optimization problem on graphs, we might assume that an n-vertex input is equally likely to be any of the labelled graphs with n vertices. This allows us to exploit any property which holds on almost all such graphs when developing the algorithm.

There is no such obvious choice of a typical input to an algorithm which sorts n numbers, for, e.g., it is not clear how big we want to permit the numbers to become. One of many possible approaches is to impose the condition that each number is drawn uniformly from [0,1]. Another is to note that in analyzing our algorithm we may not need to know the values of the variables but simply their relative sizes. We can then perform our analysis assuming that the inputs form a random permutation of 1,...,n with each permutation equally likely.

More generally, we will choose some probability distribution on the inputs of a given size and analyze the performance of our algorithm when applied to a random input drawn from this distribution. Now, in general, probability distributions are complicated objects which must be formally described and analyzed using much messy measure theory. Fortunately, we will be concerned only with relatively simple distributions which will be much easier to deal with.

We often consider finite distributions in which our probability space is a finite set S, and for each x in S there is a p(x) >= 0 such that the p(x) sum to 1 and the probability that the outcome is x is p(x). If all the p(x) are the same then we are choosing a uniform member of S. For example, we discussed above choosing a uniformly random labelled graph on n vertices. We may also consider choosing reals uniformly in [0,1]: the probability that our random real lies between a and b, for 0 <= a <= b <= 1, is b - a.

Alternatively, we may analyze probability distributions by imposing conditions on the random objects chosen without specifying the underlying distribution any further. One example of such a distribution-independent analysis was mentioned earlier when we suggested studying sorting under the assumption that all permutations of the n input numbers are equally likely.

Finally, we may consider combining the above possibilities. For example, we may consider a uniformly chosen graph on n vertices whose edges have been assigned uniform random weights from [0,1], or a set of random vectors each of whose coordinates is an independent uniform element. Focusing on these simple distributions allows us to dispense with the development of a rigorous measure-theoretical foundation of probability theory. It is also quite natural.

One of our goals in this paper is to develop exact algorithms which work efficiently on the overwhelming majority of random inputs. A related goal is to try and find algorithms whose expected running time is small. We examine these approaches in Sections 2 and 3. A different technique is to consider algorithms which are guaranteed to run quickly but do not necessarily find the optimal solution, and to show that they are typically optimal, very close to optimal, or at least reasonably close to optimal. This is the approach taken in Sections 4 and 5. Alternatively, we can show that an algorithm almost always behaves poorly on random instances.
For example, we might prove that an algorithm almost always takes exponential time. This is a much more damning condemnation of its performance than the pathological examples constructed to provide lower bounds on worst-case complexity. We discuss this approach in Section 6.

Finally, we note that how an algorithm performs on a random input depends heavily on the probability distribution we are using. In Section 7, we compare the analysis of various probability distributions for some specific problems.

We stress that we are interested in providing the reader with a gentle introduction to some of the most important topics in this area. Our survey is neither comprehensive nor up to date. Readers may turn to the survey articles [45], and the books [28], [80], [85], for more in-depth discussions of this area.

1.1 Some Basic Notions

We begin with two simple but powerful probabilistic tools.

The Chernoff/Hoeffding Bounds. Suppose S is the sum of n independent random variables X_1,...,X_n where E(X_i) = p_i and 0 <= X_i <= 1 for each i. Let p = (p_1 + ... + p_n)/n, so that E(S) = np. Then for 0 <= epsilon <= 1:

    Pr(|S - np| >= epsilon * np) <= 2 e^(-epsilon^2 np / 3).

This is one of many inequalities which bound the extent to which a variable is concentrated around its expected value. Chapter 7 of this volume is dedicated to the study of such inequalities.
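As a quick numerical illustration (our addition, not part of the paper), the Chernoff bound above can be checked empirically for Bernoulli summands; the parameter values below are arbitrary:

```python
import math
import random

random.seed(3)

n, p, eps, trials = 1000, 0.5, 0.2, 2000
# Chernoff bound on Pr(|S - np| >= eps * np) for S = X_1 + ... + X_n,
# X_i independent Bernoulli(p), so here E(S) = np = 500.
bound = 2 * math.exp(-eps * eps * n * p / 3)

exceed = 0
for _ in range(trials):
    s = sum(random.random() < p for _ in range(n))  # S ~ Binomial(n, p)
    if abs(s - n * p) >= eps * n * p:
        exceed += 1

print(exceed / trials, bound)
```

The observed frequency of large deviations stays below the bound (here a deviation of 100 from the mean 500 is over six standard deviations, so in practice no trial exceeds it), which is what makes the inequality useful for whp arguments.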
The First Moment Method / Markov Inequality. If X is a random non-negative integer-valued variable then

    Pr(X > 0) <= E(X).

(Proof: E(X) = sum_{k>=1} k Pr(X = k) >= sum_{k>=1} Pr(X = k) = Pr(X > 0).) Moreover, E(X) is often easier to compute than Pr(X > 0). If this is the case, then we may compute E(X) and use it as a bound on Pr(X > 0). This technique is known as the first moment method.

We say that a property defined in terms of n holds whp (with high probability) if it holds with probability tending to 1 as n tends to infinity.

By G_{n,p} we mean a random graph with vertex set {1,...,n} where each edge is present with probability p, independently of the presence of the other edges. Thus, for each graph H with vertex set {1,...,n} and m edges, the probability that G_{n,p} = H is p^m (1-p)^(C(n,2)-m). In particular, G_{n,1/2} is a uniformly chosen random graph with vertex set {1,...,n}. We note that the expected number of edges in G_{n,p} is C(n,2) p. Further, the Chernoff bounds can be used to show that, unless p is very small, the number of edges is whp close to this expectation. Thus, if we analyze G_{n,1/2}, then typical graphs have about n^2/4 edges. G_{n,m} is the random graph on n vertices whose edge set is a uniformly chosen random set of m of the C(n,2) unordered pairs contained within {1,...,n}.

Finally, we note that if we have an algorithm A for an optimization problem and we run it on a random instance I of size n drawn from some probability distribution, then the running time of this algorithm on this instance, T_A(I), is a random variable. The expected running time of algorithm A with respect to the specified distribution is the function f such that f(n) = E(T_A(I)).

2 Exact Algorithms for Hard Problems

NP-complete problems are natural candidates for probabilistic analysis, as the traditional worst-case approach has failed to provide efficient algorithms for such problems. In this section, we focus on two such problems, Edge Colouring and Hamilton Cycle. We shall also discuss Graph Isomorphism, another problem which, although not known to be NP-complete, is also not known to be solvable in polynomial time. As we shall see, it makes little sense to speak of approximation algorithms for any of these problems. Thus, the failure to find efficient algorithms to solve them means that from a traditional viewpoint we are completely at sea. Our first step is
tofind efficient algorithms which solve these problems whp on uniform random instances,we then present algorithms which have polynomial expected running time.Some may criticise as unrealistic the assumption that a typical input is a uniformly chosen graph. However,this is no more unrealistic then the belief that studying the pathological examples con-structed in NP-completeness proofs yields information about typical instances.Furthermore,a standard paradigm for constructing algorithms which run in polynomial time whp(though by no means the only one),is to provide an algorithm which works provided that the input graph has a certain structure and then prove that has the required structure whp.Such proofs are valuable because they add to our understanding of what it is that makes the problem difficult. 2.1Algorithms which almost always succeed2.1.1Hamilton CyclesA Hamilton cycle in a graph is one passing through all its vertices.Determining if a graph has a Hamilton cycle was one of thefirst six NP-complete problems reduced to SAT by Karp in his seminal paper[?].In this section we show that has a Hamilton cycle whp and present a polynomial-time algorithm which whp constructs such a cycle.This is not difficult as there are a large number of random edges.Definition:We call a graph,tractable,if the following conditions hold:(i)every vertex has between and neighbours.(ii)for every pair of vertices,we have:(iii)for every triple of vertices,we have:We need:Lemma1is tractable whp.Proof For each pair of vertices of,is the sum of independent random variables each of which is1with probability and0with probability.Thus,we can show(ii)holds whp using the Chernoff bounds.Similar techniques apply for(i) and(iii),we leave the details to the reader.We now present a polynomial-time algorithm for constructing a Hamilton cycle in a tractable graph,which by the above lemma works whp on.The algorithm has three phases.Phase1:Path ConstructionConstruct a path by iteratively applying the 
following two rules,until this is no longer possible.(i)If some vertex not on sees an endpoint of,add the edge to,(ii)if there are vertices,such that and then replace by the pathWe leave it as an exercise for the reader to show that in a tractable graph with vertices, thefinal path has at least vertices.Phase2:Cycle ConstructionConstruct a path by applying one of the following two rules.(i)if there are vertices,such that and then let bethe cycle(ii)if there are vertices,such that and then let be the cycleWe leave it as an exercise for the reader to show that in a tractable graph with vertices, this phase is always possible.We note that.Phase3:Cycle ExtensionWe add the vertices of to,one or two at a time,until,according to the following three rules:(i)If some vertex not on sees two consecutive vertices and of then replace by,(ii)if there are adjacent vertices,and consecutive vertices of such that then replace by the,(iii)if there are vertices and vertices such that andthen replace by the cycle.We leave it as an exercise for the reader to show that in a tractable graph with vertices, this step is always possible(Hint:If is not a stable set then we can apply(i)or(ii)).It is easy to see that each phase of the algorithm can be implemented in time,so it is indeed a polynomial-time algorithm as claimed.Exercise:Show that the above algorithm can actually be implemented in time on tractable graphs(which is linear in the number of edges).2.1.2Edge ColouringAn edge colouring of a graph is an assignment of colours to its edges so that no two edges which share an endpoint receive the same colour.I.e.,each colour class is a matching,that is a graph all of whose vertices have degree one.Clearly,if a graph has maximum degree then it has no edge colouring using fewer than colours.Vizing proved that every such graph has a colouring.So determining the chromatic index of a graph boils down to determining if has a-colouring.Vizing[90]also proved that if the maximum degree vertices of 
form astable set,i.e.no edge of links two vertices of maximum degree,then has a colouring. Berge and Fournier[9]developed a polynomial time algorithm for constructing a colouring of.The algorithm provides a colouring provided the vertices of maximum degree in form a stable set.In contrast Holyer[?]has shown that determining the chromatic number of a graph is NP-complete.In this section,we present the following result due to Erdos and Wilson[38].Theorem1has a unique vertex of maximum degree whp.Thus,we obtain:Corollary1Berge and Fournier’s algorithm is a polynomial-time algorithm which edge colours whp.Proof of Theorem1To prove the theorem,we need to analyze the probability distribution on the degrees of the vertices in.Now,the degree of a vertex in is the sum of variables each of which is with probability and with probability.Thus,the expected degree of a vertex of is and1The probability that has degree is.It follows easily(e.g.from the Chernoff bounds)that if we let be the smallest integer such that Pr then provided is large enough,,so using (1)we obtain:2Pr Pr.Thus,we expect at least vertices of to have degree greater than.So,the following result,which we prove in the next section is not surprising.3Whp there is a vertex of whose degree exceeds.Now,a simple but tedious First Moment calculation,using(1)allows us to show:4Whp there is no such that two vertices of have degree.Combining(3)with(4)yields the theorem,it remains only to prove(4).To do so,we note that,by(1),for between and,we have:PrPrThus,Pr Pr PrSo,we obtain that Pr.We can now bound the expected number of pairs of vertices in both of which have the same degree which exceeds.Let for.ThenPr Pr PrPr PrPrHence,PrPr PrPr PrApplying,the Chernoff bound and our bound on the probability that,we obtainThus,the probability that for some there are two vertices of degree is also,i.e.(4) holds.A similar but messier First Moment computation yields the following result which we state with-out proof as we need it 
later:5For,the probability that there are disjoint pairs of verticessuch that for some,isAs we discuss in Section2.3,Frieze,Jackson,McDiarmid and Reed[44]showed that the proba-bility that does not have a-edge colouring is between and for some two positive constants and(and).2.1.3Graph IsomorphismThe input to the decision problem Graph Isomorphism is two graphs and.The problem is to determine if there is an isomorphism between them.That is,a bijection from to such that is an edge of if and only if is an edge of.This problem is neither known to be in nor known to be-complete.In a probabilistic analysis of Graph Isomorphism,we do not want to consider an input consisting of two random graphs,as they will whp be obviously non-isomorphic because,e.g.,they have a different number of edges or different degree sequences.There are(at least)two ways of dealing with this problem.Thefirst is to assume that that the input consists of a graph drawn from the uniform distribution on the vertex graphs and a second graph about which we have no information(the reader may wish to think of as chosen by an adversary who has seen). 
The second (more studied) approach is to consider canonical labelling algorithms. A canonical labelling algorithm assigns to a graph on vertex set a permutation such that if two graphs and are isomorphic then is an isomorphism from to. That is, a canonical labelling algorithm relabels graphs so that if the original graphs were isomorphic then the relabelled graphs coincide. As an example, a canonical labelling algorithm might choose to order the vertices of the graph so that if then is in more triangles than. We note that if no two vertices of are in the same number of triangles then there is a unique satisfying this condition. Furthermore, if is isomorphic to then there is a unique satisfying this condition, and and are the same graph. Of course, our canonical labelling algorithm must also have a way of dealing with graphs in which some pairs of vertices are in the same number of triangles. We invite the reader to show that there is a canonical labelling algorithm that runs in time. We also discuss canonical labelling algorithms which relabel some but not all graphs. In this case, if the algorithm relabels, it should also relabel all graphs isomorphic to. In this section, we prove a result of Babai, Erdos, and Selkow [?] (for strengthenings see Karp [?]
and Lipton [?]).

Theorem 2. There is a canonical labelling algorithm which labels whp.

One such canonical labelling algorithm is to order the vertices in non-increasing order of degree and to order the vertices of the same degree so that vertices in more triangles come first. We shall not treat this algorithm here (however, the reader is invited to show that it succeeds whp by showing that the expected number of pairs of vertices with the same degree and in the same number of triangles is). Instead, we treat an algorithm which orders the vertices in non-increasing order of degree but chooses the order in the set of vertices of the same degree in a slightly different way. We need:

Definition. We call a degree unique if there is precisely one vertex with this degree. We call a vertex solitary if it has unique degree.

Lemma 2. Whp, the highest degrees of are unique and no two vertices have the same neighbourhood on the vertices of highest degree.

Now, the canonical labelling algorithm we consider orders vertices of the same degree so that if then the highest degree vertex which sees exactly one of sees but not. Lemma 2 ensures that this algorithm succeeds. Thus the lemma implies the theorem. We prove the lemma below.

Proof of Lemma 2. Let. The key to proving the lemma is to show:

(6) Whp the highest degrees in are unique and the difference between two consecutive degrees is at least five.

We prove this result below; combining it with the following result proves the lemma.

(7) The probability that the highest degrees in are unique and differ by at least five and two vertices have the same neighbourhood on the vertices of highest degree is.

To prove (7), we compute the expected number of sets in such that (i) are solitary vertices with the highest degrees, the highest degrees all differ by at least five, and (ii) and have the same neighbourhood on. We show that the expected number of such sets is, hence the probability one exists is, and (7) holds.
Now, there are choices for. For each choice, we determine the edges of. That is, we take a copy of with vertex set. If the vertices of highest degree in are not distinct then (i) cannot hold, for adding and changes each degree by at most two and the difference between two degrees by at most four. If the vertices of highest degree in this graph are unique, then for (i) to hold the vertices with these degrees must be those in, which by symmetry occurs with probability. Given that is the set of high degree vertices in this graph, we see, by considering the edges from and, that the probability that (ii) holds is. Thus, the expected number of such that (i) and (ii) holds is. So, (7) holds as claimed; we turn now to (6).

To prove (6), we consider the defined in our discussion of edge-colouring. As promised in that discussion, we will show that whp has a vertex of degree greater than. In fact, we will prove that whp it has at least such vertices, which combined with (5) for j=1, proves (6). We actually prove a much stronger result which we will need later, to wit:

(8) The probability that there are fewer than vertices of degree greater than is.

To prove this result, we use "the method of deferred decisions" as described in Knuth, Motwani and Pittel [67]. Imagine that we have an assistant, and when we want to know whether an edge exists, he flips a fair coin; if it comes down heads the edge exists, otherwise it does not.
We only do this at most once for each possible pair. The order in which we flip the edges is as described in the following procedure.

(1) Set, choose some vertex, and determine which edges incident to are present.
(2) If stop; otherwise choose the vertex in which has the most neighbours in and determine which edges between and are present.
(3) Increment and return to Step 2.

By analyzing this procedure, we can show:

(9) The probability that there is some such that has fewer than neighbours in is.

Proof. By our choice of, if this occurs, then there are fewer than edges between and. However, we expect edges between the two sets. Using the Chernoff bounds, it is easy to show that the expected number of sets of vertices such that there are fewer than edges between and is (we leave the details to the interested reader). The result follows.

(10) The probability that there are fewer than values of which are less than such that has more than neighbours in is.

Proof. Now, in the first iterations, we flip coins only for edges from. Thus, after we choose, the coins for the edges from to are yet to be flipped, and in fact are those flipped in the next iteration. It follows via the Chernoff bounds that for, the probability of the event that has more than neighbours in is close to and is certainly greater than. Thus, the expected number of for which holds is at least. Furthermore, for distinct and, and are independent, for they are determined by disjoint sets of edges (the coins for which are flipped in different iterations of our procedure for generating). Thus, by applying, e.g., the Chernoff Bounds, we obtain that the number of for which holds is less than with a probability which is.
Combining (9) and (10) yields (8), thereby completing the proof of the lemma. We close this section by remarking that combining (5) and (8) yields the following result, which we shall find useful:

(11) The probability that there are fewer than solitary vertices of with degree bigger than is.

2.2 Polynomial Expected Time

2.2.1 Graph Isomorphism

We now present a polynomial expected time algorithm for graph isomorphism. The input consists of a graph drawn from the uniform distribution on vertex graphs and a graph about which we have no information. As a last resort, our algorithm uses the brute force procedure of testing each of the bijections between and. Our algorithm also uses two sub-algorithms, both of which are reminiscent of the canonical labelling procedure in the last section. In the canonical labelling procedure, we essentially knew the bijection on some subset of (the high degree solitary vertices), and this allowed us to determine the rest of the bijection, simply by considering for each. To ease our discussion of extending partial bijections in this manner, we need some definitions.
Let. We say a vertex in is determined by if there is no with. We let be the set of vertices determined by. We note:

Lemma 3. If and is a bijection from to some subset of, then for any isomorphism extending and for any, we have only one candidate for, and in time we can either (i) determine that there is no isomorphism from to extending, or (ii) find a bijection from to a subset of such that any isomorphism extending corresponds with on.

Proof. We leave this as an exercise for the reader.

We need to take this idea one step further. To this end, we say a vertex in is fixed by if. We let be the set of vertices fixed by. Applying Lemma 3 twice, we obtain:

Lemma 4. If and is a bijection from to some subset of, then for any isomorphism extending and for any, we have only one candidate for, and in time we can either (i) determine that there is no isomorphism from to extending, or (ii) find a bijection from to a subset of such that any isomorphism extending corresponds with on.

The probabilistic results we need are:

Lemma 5. With probability, the solitary vertices fix.

Lemma 6. With probability, every set of vertices fixes all but at most vertices of.

We prove these results in a moment. First, we show that they imply the existence of the desired polynomial expected time algorithm.

We will use an algorithm which computes the degree sequence of and, ensures that these coincide, sets to be the set of solitary vertices of, sets to be the set of solitary vertices of, and lets be the bijection from to such that. It then determines if fixes. If not, it halts. Otherwise, applying the algorithm of Lemma 4, it either determines and outputs that is not isomorphic to, or extends to a bijection from to such that the only possible isomorphism from to is. If it returns such a bijection, it then checks whether or not is in fact an isomorphism. If so, it outputs this isomorphism; otherwise it outputs the fact that and are not isomorphic. By Lemma 4, an answer returned by the algorithm is correct. By Lemma 5, the probability that does not give an answer is.
It is straightforward to verify that the algorithm can be implemented in time. We will also use an algorithm which first chooses an arbitrary set of vertices of. The algorithm then checks if fixes all but at most vertices of. If not, it halts. The algorithm next determines, for each set of vertices of and bijection from to, whether or not there is an isomorphism extending. If it finds for some and that there is an isomorphism extending, it returns with the information that and are isomorphic. If it determines that for each and there is no isomorphism extending, then it outputs that and are not isomorphic. For a given and, applying the procedure of Lemma 4, it either determines and outputs that no isomorphism from to extends, or extends to a bijection from to a subset of such that the only possible isomorphisms from to extending also extend. If it returns such a bijection, it then checks whether or not any of the at most extensions of to bijections from to are isomorphisms. If any of these are isomorphisms, the algorithm returns that there is an isomorphism extending; otherwise it returns that no such isomorphism exists. By Lemma 4, an answer returned by the algorithm is correct. By Lemma 6, the probability that does not give an answer is. It is straightforward to show that the algorithm can be implemented so that it spends time on each pair, and hence takes at most time in total. Now, our global algorithm applies, then applies if terminates without a response, and finally applies our brute force algorithm if fails to provide an answer. By the above remarks, the expected running time of this algorithm is. Since a random graph has edges, clearly this algorithm has optimal expected running time.
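The invariant-refinement idea behind this approach — separate vertices by cheap invariants (degree, then triangle counts), derive the forced bijection when every vertex is solitary, and fall back to brute force only when the invariants fail — can be sketched as follows. This is an illustrative sketch, not the chapter's algorithm: the dict-of-sets graph encoding and the helper names are assumptions.

```python
from itertools import permutations

def invariant_signature(adj):
    """Per-vertex (degree, triangle-count) pairs; isomorphic graphs yield the
    same multiset, and when all pairs are distinct they fix a vertex order."""
    sig = []
    for v in range(len(adj)):
        deg = len(adj[v])
        tri = sum(1 for u in adj[v] for w in adj[v] if u < w and w in adj[u])
        sig.append((deg, tri))
    return sig

def isomorphic(adj_g, adj_h):
    """Invariant-based test with a brute-force 'last resort' fallback."""
    sg, sh = invariant_signature(adj_g), invariant_signature(adj_h)
    if sorted(sg) != sorted(sh):
        return False                       # invariants already separate them
    if len(set(sg)) == len(sg):            # every vertex solitary: map is forced
        order_g = sorted(range(len(sg)), key=lambda v: sg[v])
        order_h = sorted(range(len(sh)), key=lambda v: sh[v])
        pi = dict(zip(order_g, order_h))
        return all(pi[u] in adj_h[pi[v]] for v in adj_g for u in adj_g[v])
    return any(                            # rare case: test all bijections
        all(pi[u] in adj_h[pi[v]] for v in adj_g for u in adj_g[v])
        for pi in (dict(enumerate(p)) for p in permutations(range(len(adj_g))))
    )

# Path 0-1-2-3 versus the relabelled path 2-0-3-1
g = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
h = {2: {0}, 0: {2, 3}, 3: {0, 1}, 1: {3}}
print(isomorphic(g, h))
```

Because both graphs here have tied (degree, triangle) signatures, the sketch exercises the brute-force branch, mirroring how the expected-time analysis charges the factorial cost only to the unlikely event that the invariants fail.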
We can actually create a canonical labelling algorithm whose expected running time is using similar techniques; see Babai and Kucera [?] for a result in this vein. With our description of the algorithm complete, it remains only to prove our two probabilistic lemmas. We need the following auxiliary results, all of which can be proven using simple First Moment calculations:

(12) The probability that there is a set of vertices which determines fewer than vertices is.

(13) The probability that there is a set of vertices which determines fewer than vertices is.

(14) The probability that there is a set of vertices which does not determine is.

Now, Lemma 6 follows from (12) and (13). Lemma 5 follows from (12) and (14), and (11).

2.2.2 Hamilton Cycles

We now present an algorithm DENSEHAM for Hamilton Cycle that has expected running time which is. The algorithm uses two sub-algorithms. One solves Hamilton cycle on any graph in time and actually finds the cycle if it exists; it is the Dynamic Programming algorithm of Held and Karp [55]. The other runs in time; it attempts to construct a Hamilton cycle in the input graph. The probability that it fails to return a Hamilton cycle when applied to is. DENSEHAM first applies and then applies if fails to find a Hamilton cycle. Clearly, DENSEHAM does indeed solve Hamilton Cycle, and in fact outputs a Hamilton cycle if one exists. Furthermore, its expected running time is, as claimed. It remains only to describe and analyse and.

is a simple dynamic programming algorithm which determines for each subset of with, and for each pair of vertices of, whether or not there is a Hamilton path through with endpoints and. To determine if has a Hamilton cycle, we need then only check if for any edge of there is a Hamilton path through with endpoints and. considers the subsets of in increasing order of size. To determine if there is a Hamilton path of with endpoints and, it simply checks whether there is some neighbour of in such that there is a Hamilton path of with endpoints and. Since the algorithm has
already considered, this can be done via a simple table lookup. We spend time on each triple, so the claimed running time bound on holds. With a little extra bookkeeping we can also construct the Hamilton cycle; we omit the details.

is reminiscent of the algorithm for Hamilton Cycle presented in the last section. We show:

Lemma 7. Let be a sufficiently large graph such that (i) a set of at most 5000 vertices such that is tractable, (ii) the minimum degree of is at least 2, and (iii) at most one vertex of has degree less than 15000. Then has a Hamilton cycle. Furthermore, given, we can find the Hamilton cycle in time.

We also show that the probability that satisfies conditions (i)-(iii) of Lemma 7 is. Actually we prove a slightly stronger result which permits us to use a greedy procedure for finding.

Definition. A bad sequence of length l is a sequence of disjoint subsets of such that, letting, we have that for each between and, either (a) is a vertex such that,

Discovery Probability in the Cuckoo Search Algorithm
The term "discovery probability" comes from cuckoo search, a nature-inspired optimization algorithm proposed by Yang and Deb in 2009. The algorithm models the brood parasitism of some cuckoo species: a cuckoo lays its egg (a candidate solution) in the nest of a host bird, and the host may discover the alien egg and either throw it away or abandon the nest. In the algorithm, each nest holds a candidate solution to the optimization problem, and new candidates are generated by Lévy flights, random walks whose heavy-tailed step lengths mix many short local moves with occasional long jumps.

The discovery probability, usually written pa, is the probability that a host bird discovers an alien egg. Common English renderings in the literature include "discovery probability", "probability of discovery", and "fraction of abandoned nests". Algorithmically, at every generation a fraction pa of the worse nests is abandoned and replaced by new, randomly generated solutions; a typical setting is pa = 0.25.

This parameter directly controls the balance between exploration and exploitation. A large pa discards many solutions each generation, injecting fresh random candidates and keeping the population diverse, which helps the search escape local optima. A small pa keeps most nests alive, so the search concentrates on refining the solutions it already has. If pa is too large the algorithm behaves almost like random search; if it is too small the population can stagnate around a local optimum.
Because the replacement step acts on the worst nests while the best nest is typically retained (an elitist strategy), the algorithm never loses its current best solution even at high discovery rates. The fitness values of the nests determine which ones count as "worse" and are therefore eligible for abandonment, so the evaluation function and the discovery probability work together: pa decides how many solutions are replaced, and fitness decides which ones. Several variants make pa adaptive, starting with a large value to encourage early exploration and shrinking it as the run progresses to favour late-stage refinement. In summary, the discovery probability is one of only two main control parameters of standard cuckoo search (the other being the number of nests): it sets the rate at which the population is refreshed and thereby governs the exploration-exploitation trade-off.
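In code, the discovery step is usually implemented by abandoning each nest independently with probability pa and rebuilding it at a random position. The sketch below shows only that step; the objective, the bounds, pa = 0.25, and the elitist guard on the best nest are assumptions for the demo, and the Lévy-flight phase is omitted.

```python
import random

def objective(x):
    """Toy fitness to minimize (assumed for the demo, not part of the algorithm)."""
    return sum(xi * xi for xi in x)

def discovery_step(nests, pa=0.25, low=-5.0, high=5.0):
    """Host-discovery phase of cuckoo search: each nest is discovered with
    probability pa; a discovered nest is abandoned and rebuilt at a random
    position, but the current best nest is always kept (elitism)."""
    best = min(nests, key=objective)
    out = []
    for nest in nests:
        if nest is not best and random.random() < pa:
            out.append([random.uniform(low, high) for _ in nest])  # abandoned
        else:
            out.append(nest)
    return out

random.seed(0)
nests = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(10)]
new_nests = discovery_step(nests)
print(sum(a != b for a, b in zip(nests, new_nests)), "nests abandoned")
```

On average pa·n nests are replaced per call, which is exactly the "fraction of abandoned nests" reading of the parameter.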

The Lanczos Algorithm
The Lanczos Algorithm: A Powerful Tool for Eigenvalue Computation

Introduction:
Eigenvalue computation is a fundamental problem in various scientific and engineering disciplines, including quantum mechanics, computer graphics, data analysis, and many more. The Lanczos algorithm, published by Cornelius Lanczos in 1950, revolutionized the field of numerical linear algebra by providing an efficient and reliable method for approximating the eigenvalues and eigenvectors of large symmetric matrices. This article will explore the essence of the Lanczos algorithm, its applications, and its impact on the scientific community.

1. Understanding the Lanczos Algorithm:
The Lanczos algorithm is an iterative method that projects a symmetric matrix A onto a small tridiagonal matrix T whose eigenvalues (the Ritz values) rapidly approximate the extremal eigenvalues of A; if the iteration is run for the full n steps, T is similar to A and has exactly the same eigenvalues. Starting from an initial vector, the algorithm builds a sequence of orthonormal Lanczos vectors. At each step j it multiplies A by the current Lanczos vector v_j and orthogonalizes the result against v_j and the previous vector v_{j-1}, following the three-term recurrence beta_{j+1} v_{j+1} = A v_j - alpha_j v_j - beta_j v_{j-1}, where alpha_j = v_j^T A v_j and beta_{j+1} is the norm of the remainder. The process is repeated until a stopping condition is met, such as a fixed number of iterations or convergence of the Ritz values.

2. Advantages of the Lanczos Algorithm:
The Lanczos algorithm offers several advantages compared to other eigenvalue computation methods. Firstly, each iteration costs only one matrix-vector product (O(n^2) for a dense matrix, or proportional to the number of nonzeros for a sparse one), and far fewer than n iterations are usually needed for the extremal eigenvalues, whereas a full dense eigendecomposition costs O(n^3). Secondly, it allows for the approximation of a select number of eigenvalues and eigenvectors, which is particularly valuable for problems where only a few eigenpairs are of interest. Additionally, the algorithm touches the matrix only through matrix-vector products, so it can handle sparse matrices efficiently, making it applicable to real-world problems with large and sparse data sets.

3. Applications of the Lanczos Algorithm:
The Lanczos algorithm has found numerous applications across various scientific and engineering domains.
One notable application is in quantum mechanics, where it is used to calculate the electronic structure of atoms and molecules. It is also widely used in computer graphics for rendering three-dimensional objects and simulating physical phenomena. Furthermore, the algorithm has proven valuable in data analysis tasks, such as machine learning, clustering, and dimensionality reduction techniques.

4. The Impact on the Scientific Community:
The Lanczos algorithm has had a significant impact on the scientific community since its introduction. Its efficiency and accuracy in approximating eigenvalues and eigenvectors have enabled researchers to study larger and more complex systems that were previously computationally infeasible. Its ease of implementation and wide availability in numerical software libraries have made it accessible to a broad range of scientists and engineers, democratizing the field of numerical linear algebra.

5. Recent Developments and Future Directions:
In recent years, researchers have made significant advancements in improving the Lanczos algorithm and extending its capabilities. Various variants, such as the thick-restart Lanczos and implicitly restarted Lanczos methods, have been proposed to enhance convergence and accuracy. Additionally, efforts have been made to parallelize the algorithm using modern computational architectures, further reducing the time and resources required for eigenvalue computation.

Conclusion:
The Lanczos algorithm has emerged as a powerful tool for approximating eigenvalues and eigenvectors of large symmetric matrices. Its efficiency, accuracy, and wide applicability have made it an indispensable tool in scientific and engineering fields. With ongoing advancements and future developments, the Lanczos algorithm will continue to play a crucial role in unlocking the potential of numerical linear algebra and empowering researchers to tackle even more complex and challenging problems.
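The three-term recurrence described above can be sketched in a few lines of plain Python. This is an illustrative textbook implementation (with no reorthogonalization, which real codes add to combat rounding-induced loss of orthogonality), not tied to any particular library.

```python
import math

def matvec(A, v):
    """Dense matrix-vector product on nested lists."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def lanczos(A, v0, m):
    """m steps of the Lanczos recurrence for symmetric A:
    beta_{j+1} v_{j+1} = A v_j - alpha_j v_j - beta_j v_{j-1}.
    Returns the diagonal (alphas) and off-diagonal (betas) of T."""
    norm = math.sqrt(dot(v0, v0))
    v = [x / norm for x in v0]
    v_prev = [0.0] * len(v0)
    alphas, betas = [], []
    beta = 0.0
    for _ in range(m):
        w = matvec(A, v)
        alpha = dot(w, v)
        w = [wi - alpha * vi - beta * pi for wi, vi, pi in zip(w, v, v_prev)]
        alphas.append(alpha)
        beta = math.sqrt(dot(w, w))
        if beta < 1e-12:        # invariant subspace found; stop early
            break
        betas.append(beta)
        v_prev, v = v, [wi / beta for wi in w]
    return alphas, betas

# 3x3 symmetric example; running all n steps makes T carry the full spectrum.
A = [[2.0, 1.0, 0.0],
     [1.0, 2.0, 1.0],
     [0.0, 1.0, 2.0]]
alphas, betas = lanczos(A, [1.0, 0.0, 0.0], 3)
print(alphas, betas)
```

Since this A is already tridiagonal and the start vector is the first unit vector, the recurrence reproduces A itself: alphas = [2, 2, 2] and betas = [1, 1].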

Research on Supply-Chain Scheduling Optimization in the Guangdong-Hong Kong-Macao Greater Bay Area under the Cloud Manufacturing Model

0 Introduction

At present, a new global round of science, technology, and industrial development is gathering momentum. The rapid development and wide application of new-generation information technologies such as big data, cloud computing, the Internet of Things, and blockchain have further accelerated the restructuring of global supply chains.

Thakur et al. (2016) point out that, in the face of the servitization of manufacturing, supply chains should be restructured and service elements added.

Manufacturing enterprises need to reconfigure their manufacturing systems and supply chains to meet constantly changing customer demands (Qi Tian et al., 2019).

China has also taken a more active stance in pushing its high-tech industries to integrate deeply into the global supply-chain system, striving to improve the security of its supply chains.

The Outline Development Plan for the Guangdong-Hong Kong-Macao Greater Bay Area issued by the State Council states that, for future development, the three regions should make full use of CEPA and the pilot policies for opening up the service sector, strengthen exchange and cooperation in scientific and technological innovation, develop international and domestic markets, and strive to optimize supply-chain scheduling in the Greater Bay Area so as to drive its innovation-oriented strategic transformation (Huang Zhong et al., 2019).

At the same time, new digital technologies such as big data can strengthen the optimization, adjustment, and innovation of supply chains, help establish a sound cost-control system, and promote data alliances across industries (Lei Chenguang, 2020).

Cloud manufacturing is a new service-oriented technology that integrates advanced manufacturing and information technologies, organizing resources over the network according to user demand and thereby improving the efficiency of resource utilization along the supply chain.

With the development of related technologies, cloud manufacturing has been applied and developed in many enterprises: a manufacturing-capability trading platform in the United States has shortened procurement processes in manufacturing, and cloud manufacturing service platforms developed by large groups such as the Second Academy of China Aerospace Science and Industry Corporation provide full life-cycle services for complex product manufacturing.

Optimizing resource scheduling along the cloud manufacturing supply chain can further promote diversified economic development and open up broader room for growth for enterprises in the Greater Bay Area.

Under the cloud manufacturing model, this paper constructs an optimization model for supply-chain scheduling, with the aim of providing theoretical support for mechanisms such as regional collaboration and optimized resource scheduling and allocation among Guangdong, Hong Kong, and Macao in a cloud manufacturing environment, thereby strengthening the economic ties among the three regions and achieving coordinated development of the supply-chain matching mechanism in the Greater Bay Area.

1 Basic Concepts of Cloud Manufacturing Supply-Chain Scheduling Optimization

Supply-chain scheduling refers to building models to find optimized decisions and plans that maximize the benefit of supply-chain members; it covers optimization decision problems in links such as processing scheduling and transport arrangement.

Supply-chain scheduling can bring significant economic benefits to the enterprises on a supply chain, but the coordination and scheduling of all members' work must be considered and many details must be refined; once scheduling fails, the survival of the entire supply chain is affected.
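As a toy illustration of what such a scheduling model minimizes, consider choosing a factory for each job so that total processing plus transport cost is minimal. The instance data, names, and brute-force solver below are all hypothetical, for illustration only, and are not the model developed in this paper.

```python
from itertools import product

# Hypothetical instance: each job is assigned to one factory, paying that
# factory's processing cost for the job plus a per-job transport cost.
jobs = ["J1", "J2", "J3"]
proc_cost = {  # proc_cost[factory][job]
    "F1": {"J1": 4, "J2": 6, "J3": 5},
    "F2": {"J1": 5, "J2": 4, "J3": 7},
}
trans_cost = {"F1": 2, "F2": 3}  # per-job shipping cost from each factory

def total_cost(assignment):
    """Sum processing + transport cost for a job -> factory mapping."""
    return sum(proc_cost[f][j] + trans_cost[f] for j, f in assignment.items())

# Enumerate every factory choice per job and keep the cheapest plan.
best = min(
    (dict(zip(jobs, choice)) for choice in product(proc_cost, repeat=len(jobs))),
    key=total_cost,
)
print(best, total_cost(best))
```

Real cloud manufacturing scheduling adds capacities, due dates, and multiple supply-chain tiers, which is why the paper turns to a dedicated optimization model rather than enumeration.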

The Levenberg-Marquardt Algorithm
The Levenberg-Marquardt Algorithm
Ananth Ranganathan
8th June 2004

1 Introduction

The Levenberg-Marquardt (LM) algorithm is the most widely used optimization algorithm. It outperforms simple gradient descent and other conjugate gradient methods in a wide variety of problems. This document aims to provide an intuitive explanation for this algorithm. The LM algorithm is first shown to be a blend of vanilla gradient descent and Gauss-Newton iteration. Subsequently, another perspective on the algorithm is provided by considering it as a trust-region method.

2 The Problem

The problem for which the LM algorithm provides a solution is called Nonlinear Least Squares Minimization. This implies that the function to be minimized is of the following special form:

f(x) = (1/2) ||r(x)||^2

where r(x) = (r_1(x), ..., r_m(x)) is the vector of residuals. The derivatives of f can be written using the Jacobian matrix J of r w.r.t. x, defined as J(x)_{ji} = ∂r_j/∂x_i. In the special case where every r_j is linear, r(x) = Jx + r(0) and f(x) = (1/2) ||Jx + r(0)||^2. Writing r for r(0), we also get ∇f(x) = J^T (Jx + r) and ∇^2 f(x) = J^T J. Solving for the minimum by setting ∇f(x) = 0, we obtain x_min = −(J^T J)^{−1} J^T r, which is the solution to the set of normal equations. Returning to the general, non-linear case, we have

∇f(x) = Σ_{j=1}^{m} r_j(x) ∇r_j(x) = J(x)^T r(x)   (1)

∇^2 f(x) = J(x)^T J(x) + Σ_{j=1}^{m} r_j(x) ∇^2 r_j(x)   (2)

The distinctive property of least-squares problems is that, given the Jacobian matrix J, we can essentially get the Hessian (∇^2 f(x)) for free if it is possible to approximate the r_j's by linear functions (the ∇^2 r_j(x) are small) or the residuals (the r_j(x)) themselves are small. The Hessian in this case simply becomes

∇^2 f(x) = J(x)^T J(x)   (3)

which is the same as for the linear case. The common approximation used here is one of near-linearity of the r_j's near the solution, so that the ∇^2 r_j(x) are small. It is also important to note that (3) is only valid if the residuals are small.
Large residual problems cannot be solved using the quadratic approximation, and consequently, the performance of the algorithms presented in this document is poor in such cases.

3 LM as a blend of Gradient descent and Gauss-Newton iteration

Vanilla gradient descent is the simplest, most intuitive technique to find minima in a function. The parameter update is performed by adding the negative of the scaled gradient at each step, i.e.

x_{i+1} = x_i − λ ∇f   (4)

Simple gradient descent suffers from various convergence problems. Logically, we would like to take large steps down the gradient at locations where the gradient is small (the slope is gentle) and, conversely, take small steps when the gradient is large, so as not to rattle out of the minima. With the above update rule, we do just the opposite of this. Another issue is that the curvature of the error surface may not be the same in all directions. For example, if there is a long and narrow valley in the error surface, the component of the gradient in the direction that points along the base of the valley is very small while the component along the valley walls is quite large. This results in motion more in the direction of the walls even though we have to move a long distance along the base and a small distance along the walls. This situation can be improved upon by using curvature as well as gradient information, namely second derivatives. One way to do this is to use Newton's method to solve the equation ∇f(x) = 0.
Expanding the gradient of f using a Taylor series around the current state x_0, we get

∇f(x) = ∇f(x_0) + (x − x_0)^T ∇^2 f(x_0) + higher order terms of (x − x_0)   (5)

If we neglect the higher order terms (assuming f to be quadratic around x_0), and solve for the minimum x by setting the left hand side of (5) to 0, we get the update rule for Newton's method:

x_{i+1} = x_i − (∇^2 f(x_i))^{−1} ∇f(x_i)   (6)

where x_0 has been replaced by x_i and x by x_{i+1}. Since Newton's method implicitly uses a quadratic assumption on f (arising from the neglect of higher order terms in a Taylor series expansion of f), the Hessian need not be evaluated exactly. Rather, the approximation of (3) can be used. The main advantage of this technique is rapid convergence. However, the rate of convergence is sensitive to the starting location (or more precisely, the linearity around the starting location). It can be seen that simple gradient descent and Gauss-Newton iteration are complementary in the advantages they provide. Levenberg proposed an algorithm based on this observation, whose update rule is a blend of the above mentioned algorithms and is given as

x_{i+1} = x_i − (H + λI)^{−1} ∇f(x_i)   (7)

where H is the Hessian matrix evaluated at x_i. This update rule is used as follows. If the error goes down following an update, it implies that our quadratic assumption on f(x) is working and we reduce λ (usually by a factor of 10) to reduce the influence of gradient descent. On the other hand, if the error goes up, we would like to follow the gradient more and so λ is increased by the same factor. The Levenberg algorithm is thus:

1. Do an update as directed by the rule above.
2. Evaluate the error at the new parameter vector.
3. If the error has increased as a result of the update, then retract the step (i.e. reset the weights to their previous values) and increase λ by a factor of 10 or some such significant factor. Then go to (1) and try an update again.
4. If the error has decreased as a result of the update, then accept the step (i.e. keep the weights at their new values) and decrease λ by a factor of 10 or
so.

The above algorithm has the disadvantage that if the value of λ is large, the calculated Hessian matrix is not used at all. We can derive some advantage out of the second derivative even in such cases by scaling each component of the gradient according to the curvature. This should result in larger movement along the directions where the gradient is smaller, so that the classic "error valley" problem does not occur any more. This crucial insight was provided by Marquardt. He replaced the identity matrix in (7) with the diagonal of the Hessian, resulting in the Levenberg-Marquardt update rule:

x_{i+1} = x_i − (H + λ diag[H])^{−1} ∇f(x_i)   (8)

Since the Hessian is proportional to the curvature of f, (8) implies a large step in the direction with low curvature (i.e., an almost flat terrain) and a small step in the direction with high curvature (i.e., a steep incline). It is to be noted that while the LM method is in no way optimal but is just a heuristic, it works extremely well in practice. The only flaw is its need for matrix inversion as part of the update. Even though the inverse is usually implemented using clever pseudo-inverse methods such as singular value decomposition, the cost of the update becomes prohibitive after the model size increases to a few thousand parameters. For moderately sized models (of a few hundred parameters), however, this method is much faster than, say, vanilla gradient descent.

4 LM as a trust-region algorithm

Historically, the LM algorithm was presented by Marquardt as given in the previous section, where the parameter λ was manipulated directly to find the minimum. Subsequently, a trust-region approach to the algorithm has gained ground. Trust-region algorithms work in a fundamentally different manner than those presented in the previous section, which are called line-search methods. In a line search method, we decide on a direction in which to descend the gradient and are then concerned about the step size, i.e. if p^(k) is the direction of descent, and α_k the stepsize, then our step is given by
x^(k+1) = x^(k) + α_k p^(k), and the stepsize is obtained by solving the sub-problem

min_{α_k > 0} f(x^(k) + α_k p^(k))

By contrast, in a trust-region algorithm we build a model m^(k) that approximates the function f in a finite region near x^(k). This region, Δ, where the model is a good approximation of f, is called the trust-region. Trust-region algorithms maintain Δ and update it at each iteration using heuristics. The model m^(k) is most often a quadratic obtained by a Taylor series expansion of f around x^(k), i.e.

m^(k)(p) = f(x^(k)) + ∇f(x^(k))^T p + (1/2) p^T H p   (10)

and the iteration step itself is x^(k+1) = x^(k) + p. A trust-region algorithm can thus be conceived of as a sequence of iterations, in each of which we model the function f by a quadratic and then jump to the minimum of that quadratic. The solution of (10) is given by a theorem which is as follows: p* is a global solution of

min_{||p|| ≤ Δ} f(x^(k)) + ∇f(x^(k))^T p + (1/2) p^T H p

if and only if there exists a λ ≥ 0 such that (H + λI) p* = −∇f(x^(k)), λ(Δ − ||p*||) = 0, and (H + λI) is positive semi-definite. The quality of the model is then judged by the ratio of the actual to the predicted reduction in f,

ρ_k = ( f(x^(k)) − f(x^(k) + p*) ) / ( m^(k)(0) − m^(k)(p*) )

which is used to grow or shrink the trust region Δ.
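The λ-adaptation loop of Eqs. (7)-(8) can be sketched on a small curve-fitting problem as follows. This is an illustrative implementation under stated assumptions (a two-parameter exponential model, synthetic noise-free data, a damping factor of 10 as in the text), not code from this document.

```python
import math

# Fit y = a * exp(b * t) by Levenberg-Marquardt, damping as in Eq. (8):
# solve (J^T J + lam * diag(J^T J)) d = -J^T r for the 2x2 case by Cramer.

ts = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 * math.exp(0.7 * t) for t in ts]  # synthetic noise-free data

def residuals(a, b):
    return [a * math.exp(b * t) - y for t, y in zip(ts, ys)]

def cost(a, b):
    return 0.5 * sum(r * r for r in residuals(a, b))

def lm_fit(a, b, lam=1e-3, iters=50):
    for _ in range(iters):
        r = residuals(a, b)
        # Jacobian rows: dr/da = exp(b t), dr/db = a t exp(b t)
        J = [(math.exp(b * t), a * t * math.exp(b * t)) for t in ts]
        h11 = sum(j[0] * j[0] for j in J)
        h12 = sum(j[0] * j[1] for j in J)
        h22 = sum(j[1] * j[1] for j in J)
        g1 = sum(j[0] * ri for j, ri in zip(J, r))
        g2 = sum(j[1] * ri for j, ri in zip(J, r))
        # Marquardt damping scales the diagonal of H = J^T J
        m11, m22 = h11 * (1 + lam), h22 * (1 + lam)
        det = m11 * m22 - h12 * h12
        da = (-g1 * m22 + g2 * h12) / det
        db = (-g2 * m11 + g1 * h12) / det
        if cost(a + da, b + db) < cost(a, b):
            a, b, lam = a + da, b + db, lam / 10  # accept; trust the quadratic
        else:
            lam *= 10  # reject; lean toward gradient descent
    return a, b

a_hat, b_hat = lm_fit(1.0, 0.1)
print(a_hat, b_hat)
```

Starting from (1.0, 0.1), the fit recovers parameters close to the true (2.0, 0.7), and the accept/reject branch is exactly steps 3-4 of the Levenberg procedure above.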

Essay: Universities and Their Function --- Whitehead


Whitehead’s Style
• Bacon's style
• Forceful
• Aphoristic
• Parallelism
• Antithesis
• Forceful
• A university which fails in this respect has no reason for existence.
• Imagination is not to be divorced from the facts.
• Youth is imaginative.
• Aphoristic
• The tragedy of the world is that those who are imaginative have but slight experience, and those who are experienced have feeble imaginations. Fools act on imagination without knowledge; pedants act on knowledge without imagination. The task of a university is to weld together imagination and experience.
Reading Comprehension
• 1. According to the author, the most important function of a university is
A) to train students in their fields
B) to provide opportunities for research
C) to develop students' imaginations
D) to convey knowledge to students

Chinese Journal of Electronics, Vol.20, No.2, Apr. 2011

The Wolf Colony Algorithm and Its Application*

LIU Changan, YAN Xiaohu, LIU Chunyang and WU Hua
(School of Control and Computer Engineering, North China Electric Power University, Beijing 102206, China)

Abstract — The paper proposes the wolf colony algorithm, which simulates the intelligent predatory behaviors of the wolf colony to solve the optimization problem. The solution in the searching space is the artificial wolf in the algorithm. A few artificial wolves are assigned to searching in the activity range of the quarry. When the searching artificial wolves discover the quarry, they notify the position of the quarry to the other artificial wolves by howl. The other artificial wolves get close to the quarry and besiege it. The wolf colony is updated according to the assignment rule of the wolf colony. The performance of the wolf colony algorithm is discussed according to the function optimization problem. To prove the good generalization, the wolf colony algorithm is used to plan the optimal path for the mobile robot. The results prove that the path planning method based on the wolf colony algorithm is viable and efficient.

Key words — Wolf colony algorithm, Optimization problem, Mobile robot, Path planning.

I. Introduction

Optimization problem is the mathematical programming problem that is frequently encountered in scientific research and engineering application. In recent years, many evolutionary algorithms have been successfully applied to the optimization problem [1]. Genetic algorithm (GA), based on evolutionary concepts of natural selection and genetics, was proposed by J.H. Holland in 1962 [2,3]. The algorithm reflects the natural rule and has strong global searching ability. Particle swarm optimization (PSO), inspired by the social behaviors of bird flocking and fish schooling, was proposed by Kennedy and Eberhart in 1995 [4,5]. The algorithm has many advantages such as easy implementation and good generalization. Ant colony optimization
algorithm(ACO)simulating the swarm intelli-gence of the ant colony behaviors was proposed by M.Dorigo in1996[6,7].The algorithm has the advantages of positive feed-back and parallel computation mechanism.The above-mentioned algorithms have some shortcomings in some applications due to the variety of the optimization problem,such as the slow convergence speed and the local op-timum.The paper proposes the Wolf colony algorithm(WCA) that simulates the intelligent predatory behaviors of the wolf colony to solve the optimization problem.The solution in the searching space is the artificial wolf in the algorithm.A few ar-tificial wolves are assigned to searching in the activity range of the quarry.When searching wolves discover the quarry,they notify the position of the quarry to the other artificial wolves by howl.The other artificial wolves get close to the quarry and besiege it.The wolf colony is updated according to the assignment rule of the wolf colony.The rest of this paper is organized as follows:In Section II, the wolf colony algorithm is introduced.Section III discusses the performance of WCA according to the function optimiza-tion problem.Section IV uses WCA to plan the optimal path for mobile robot to prove the good generalization of WCA. Finally,conclusions are summarized in Section V.II.The Wolf Colony Algorithm It was found that the wolf colony has a rigorous organized system.The wolves divide the task definitely and keep their steps consistent when they are preying.A few artificial wolves are assigned to searching in the activity range of the quarry. When the searching wolves discover the quarry,they notify the position of the quarry to the other artificial wolves by howl. 
The other artificial wolves get close to the quarry and besiege it. The assignment rule of the wolf colony is to assign the food to the strong wolves first and then to the weak ones. The wolf colony algorithm that simulates these behaviors is proposed below.

1. The description of the behaviors

Suppose the dimension of the search space is D and the number of individuals is n. The position of the i-th artificial wolf is X_i:

    X_i = (X_{i1}, ..., X_{id}, ..., X_{iD})    (1)

where 1 ≤ i ≤ n and 1 ≤ d ≤ D.

(1) The searching behavior

To increase the chance of discovering the quarry, q artificial wolves are assigned to search within the activity range of the quarry. The searching behavior is shown in Fig.1.

(Fig.1. The searching behavior)

∗ Manuscript Received May 2010; Accepted Nov. 2010.

When an artificial wolf is at position P0, h searching positions in h directions around P0 are evaluated, and the optimal one is P1. If P1 is better than the current position P0, the artificial wolf moves to P1; P1 is then set as the current position and the wolf keeps moving.

Suppose the q searching wolves are the wolves nearest to the quarry, the maximum searching number is maxdh, and the position of the i-th searching artificial wolf is XX_i. Then h positions are generated around XX_i, and the j-th (1 ≤ j ≤ h) searching position Y_j is

    Y_j = XX_i + randn · stepa    (2)

where randn is a random number uniformly distributed in [−1, 1] and stepa is the searching step. If the searching number exceeds maxdh, or the current position is better than the optimal searching position, the searching behavior ends.

(2) Besieging the quarry

When the searching wolves discover the quarry, they notify the other artificial wolves of its position by howling, and the other artificial wolves approach and besiege it. The position of the quarry is taken to be the position of the searching artificial wolf nearest to the quarry. Let the position of the quarry in the d-th dimension at the k-th iteration be G_d^k, and the position of the i-th artificial wolf be X_{id}^k. Then

    X_{id}^{k+1} = X_{id}^k + rand · stepb · (G_d^k − X_{id}^k)    (3)

where rand is a random number uniformly distributed in [0, 1], stepb is the besieging step, and k is the iteration number. The range of the d-th dimension is [XMIN_d, XMAX_d]; if the value calculated by Eq.(3) exceeds this range, it is set to the boundary value.

Eq.(3) consists of two parts: the first is the present position of the artificial wolf, and the second is the tendency of the colony to move toward the optimal artificial wolf. This tendency represents learning and information sharing within the colony. As the wolf colony besieges the quarry, WCA can search for the global optimum.

(3) Updating the wolf colony

The assignment rule of the wolf colony is to assign the food to the strong wolves first and then to the weak ones. The colony assigns most of the food to the strong wolves, even though the weakest wolves may starve. This ensures that the strong wolves can prey next time, enhancing the adaptability of the colony. Simulating this behavior, WCA removes the worst m artificial wolves in the colony and randomly generates m new artificial wolves.
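The three behaviors described above can be assembled into one iteration loop. A minimal Python sketch follows; the objective function, bounds and parameter values here are illustrative assumptions, not the paper's experimental settings:

```python
import numpy as np

def wca(f, dim, n=30, maxk=100, q=5, h=4, maxdh=10,
        stepa=1.5, stepb=0.9, m=5, xmin=-32.0, xmax=32.0, seed=0):
    """Sketch of WCA: scouting (Eq. 2), besieging (Eq. 3), worst-m renewal."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(xmin, xmax, size=(n, dim))          # initial colony
    for _ in range(maxk):
        fit = np.array([f(x) for x in X])
        for i in np.argsort(fit)[:q]:                   # q best wolves scout
            for _ in range(maxdh):
                cand = X[i] + rng.uniform(-1, 1, (h, dim)) * stepa   # Eq. (2)
                best = cand[int(np.argmin([f(c) for c in cand]))]
                if f(best) >= f(X[i]):                  # no improvement: stop
                    break
                X[i] = best
        fit = np.array([f(x) for x in X])
        G = X[int(np.argmin(fit))].copy()               # quarry = best wolf
        r = rng.random((n, dim))                        # rand in [0, 1]
        X = np.clip(X + r * stepb * (G - X), xmin, xmax)  # Eq. (3), clamped
        fit = np.array([f(x) for x in X])
        X[np.argsort(fit)[-m:]] = rng.uniform(xmin, xmax, (m, dim))  # renew worst m
    fit = np.array([f(x) for x in X])
    i = int(np.argmin(fit))
    return X[i], float(fit[i])
```

For example, `wca(lambda x: float(np.sum(x**2)), dim=2)` minimizes the Sphere function over [−32, 32]^2.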
Therefore, the colony stays diverse and the algorithm can avoid local optima.

2. The steps of the wolf colony algorithm

The wolf colony algorithm, which simulates the intelligent predatory behaviors of the wolf colony, proceeds as follows:

Step 1  Initialization. Initialize the number of individuals n, the maximum iteration number maxk, the number of searching artificial wolves q, the number of searching directions h, the maximum searching number maxdh, the searching step stepa, the besieging step stepb, the number of worst artificial wolves m, and the position X_i of the i-th (1 ≤ i ≤ n) artificial wolf.

Step 2  Select the q best artificial wolves as the searching wolves; each searching wolf moves according to Eq.(2).

Step 3  Select the best position among the searching wolves as the position of the quarry. Update the position of every artificial wolf according to Eq.(3). If X_{id} is less than XMIN_d, set X_{id} = XMIN_d; if X_{id} is larger than XMAX_d, set X_{id} = XMAX_d.

Step 4  Update the colony according to the assignment rule of the wolf colony: remove the worst m artificial wolves and randomly generate m new ones.

Step 5  Judge whether the termination condition is satisfied. If WCA has reached the maximum iteration number, output the position of the optimal artificial wolf as the optimal solution of the problem; otherwise, return to Step 2.

III. The Performance Analysis of WCA

The selection of parameters is crucial to the performance of an optimization algorithm. The important parameters in WCA are the besieging step stepb and the number of worst artificial wolves m. A continuous function optimization problem [8-10] is used to discuss the selection of these parameters. The continuous function is

    f1(x) = −20 exp(−0.2 √((1/n) Σ_{i=1}^{n} x_i^2)) − exp((1/n) Σ_{i=1}^{n} cos(2πx_i)) + 20 + e,
    S = [−32, 32]^n, f_min = 0    (4)

When the dimension of the functions is 30, the
important parameters in WCA are varied while the other parameters are fixed as n = 200, q = 5, h = 4, maxdh = 15 and stepa = 1.5. The average and optimal values of f1(x) obtained by WCA are shown in Table 1.

Table 1. The performance of WCA with different parameters

    stepb   m    The average value   The optimal value
    0.1     5    0.9702              5.8271e−005
    5       5    0.0606              0.0086
    0.9     50   0.1090              2.5731e−005
    0.9     5    0.0373              9.5163e−007

As shown in Table 1, for the artificial wolves to learn effectively, the appropriate value of stepb is about 1. If stepb is too small, the global optimal value can still be obtained, but the average value is too large, which indicates slow convergence. If stepb is too large, the best position detected by an artificial wolf easily exceeds the boundary values and the global search ability decreases. The appropriate value of m should be small; otherwise, individuals that learn within the colony are replaced by random individuals, which harms the convergence and search speed. The number of searching wolves q and the number of searching directions h cannot be too large either; otherwise the searching areas overlap and the computation cost becomes very high. Compared with the size of the colony, the number of searching wolves is small. Assuming h · maxdh is smaller than n, the time complexity of WCA is O(n · maxk).

To test the convergence and global search ability of WCA, the following standard functions are used in the experiments.

Sphere function:
    f2(x) = Σ_{i=1}^{n} x_i^2, S = [−100, 100]^n, f_min = 0    (5)

Schwefel function:
    f3(x) = max(|x_i|, 1 ≤ i ≤ n), S = [−100, 100]^n, f_min = 0    (6)

Rosenbrock function:
    f4(x) = Σ_{i=1}^{n} [x_i^2 − 10 cos(2πx_i) + 10], S = [−5.12, 5.12]^n, f_min = 0    (7)

Rastrigin function:
    f5(x) = (1/4000) Σ_{i=1}^{n} x_i^2 − Π_{i=1}^{n} cos(x_i/√i) + 1, S = [−600, 600]^n, f_min = 0    (8)

(Fig.2. The convergence curves of the three algorithms. (a) Sphere function; (b) Schwefel function; (c) Rosenbrock function; (d) Rastrigin function)

The first two
functions have only one optimal position, while the others have two or more local optima. With the dimension of every function set to 30, PSO and GA are compared with WCA on the standard functions. The maximum iteration number of the three algorithms is 800. The parameters of WCA are set as n = 200, q = 5, h = 4, maxdh = 15, stepa = 1.5, stepb = 0.9 and m = 5. In PSO, the number of particles is 200, the inertia weight is 0.729 and both learning coefficients are 1.496. In GA, the crossover and mutation rates are 0.9 and 0.1, respectively; the population size is 600 and the length of an individual is 900. If an optimal value obtained in an iteration exceeds 30, it is set to 30. The convergence curves of the three algorithms for every function are shown in Fig.2.

As shown in Fig.2, WCA has good convergence and strong global search ability. Every function is optimized by the three algorithms in 20 independent trials, and the best solutions are recorded. The average and optimal values obtained by the three algorithms are shown in Table 2.

Table 2. The comparison of the three algorithms

    Function            Sphere        Schwefel      Rosenbrock    Rastrigin
    The       WCA       3.0815e−009   7.8201e−006   2.4866e−006   1.7545e−012
    average   PSO       0.0628        6.2732        51.6593       0.1408
    value     GA        0.4125        0.3632        0.0011        14.3940
    The       WCA       2.7055e−011   1.8378e−007   5.7362e−008   1.1102e−016
    optimal   PSO       0.0024        3.7711        17.9819       0.0022
    value     GA        0.2443        0.1673        4.2302e−004   6.1172

As shown in Table 2, PSO and GA are easily trapped in local optima when the dimension and the search space are large, whereas WCA can obtain the global optimum and has good convergence.

IV. The Application and Analysis

As an effective global optimization algorithm, WCA can be used in many fields, such as the TSP [11,12] and job-shop scheduling [13,14]. This study demonstrates the generalization ability of WCA through path planning for a mobile robot. Path planning is one of the most important research areas in mobile robotics. It is to find a collision-free path from a
starting point to a target point in an environment with obstacles according to some optimization criterion, such as the shortest distance or the least energy consumption [15,16].

1. The environment model

Suppose the workspace of the robot is a two-dimensional environment. As shown in Fig.3, the black areas are obstacles, the starting point is S and the target point is T. In the coordinate system, divide ST into n + 1 segments with division points x_i (1 ≤ i ≤ n). Through each x_i, draw the line l_i perpendicular to ST and choose a point p_i randomly on l_i. Each p_i must satisfy the following conditions: p_i is not inside an obstacle, and the line segments p_{i−1}p_i and p_i p_{i+1} do not cross obstacles. Linking the points p_i by line segments yields a random path P for the mobile robot:

    P = {S, p_1, p_2, ..., p_i, ..., p_n, T}    (9)

The path planning problem in this research is to find the collision-free path from S to T with the shortest length. The length of the path P can be expressed as

    L_P = Σ_{i=1}^{n−1} L_{p_i p_{i+1}} + L_{S p_1} + L_{T p_n}    (10)

where L_{p_i p_{i+1}} is the distance between p_i and p_{i+1}, L_{S p_1} is the distance between S and p_1, and L_{T p_n} is the distance between T and p_n. With the coordinates of p_i denoted (x_i, y_i),

    L_{p_i p_{i+1}} = √((x_{i+1} − x_i)^2 + (y_{i+1} − y_i)^2)    (11)

(Fig.3. The coordinate system for the working environment of the robot)

2. The path planning based on WCA

The path planning for the mobile robot based on WCA is proposed. As shown in Fig.3, regard p_i on the path P as the position of the artificial wolf in the i-th dimensional space.
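The length criterion of Eqs.(10) and (11) can be sketched directly; the coordinates below are illustrative:

```python
import math

def path_length(points):
    """Eqs.(10)/(11): total Euclidean length of the polyline S, p1, ..., pn, T."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))
```

For example, `path_length([(0, 0), (3, 4), (3, 8)])` sums the two segment lengths 5 and 4, giving 9.0.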
Then the steps of path planning for the mobile robot based on WCA can be summarized as follows:

Step 1  Initialization. Initialize the parameters of WCA, the starting point S and the target point T. Randomly select n paths as the artificial wolves.

Step 2  Select the q best artificial wolves as the searching wolves; each searching wolf moves according to Eq.(2).

Step 3  Besiege the quarry. Select the best position among the searching wolves as the position of the quarry. Update the position of every artificial wolf according to Eq.(3). If an updated value exceeds the range, set it to the boundary value.

Step 4  Update the colony according to the assignment rule of the wolf colony: remove the worst m artificial wolves and randomly generate m new ones.

Step 5  Judge whether the position x_{id} satisfies the following conditions: x_{id} is not inside an obstacle, and the lines linking x_{id} to its adjacent points do not cross obstacles. If the conditions are satisfied, x_{id} is the position of the i-th artificial wolf in the d-th dimensional space; otherwise, replace x_{id} with a randomly generated point that satisfies the conditions.

Step 6  Judge whether the termination condition is satisfied. If WCA has reached the maximum iteration number, output the position of the optimal artificial wolf as the optimal path for the mobile robot; otherwise, return to Step 2.

3. The experiment and analysis

To verify the validity and effectiveness of the path planning based on WCA, the Matlab engine is used to call WCA, written in Matlab, and the path planning experiment is simulated under the VC++ 6.0 environment. The experiment conditions are: CPU, Intel Core 2 Duo 1.6 GHz; memory, 1.00 GB; simulation tools, Matlab 7.0 and VC++ 6.0.

The unit in the simulation experiment is the pixel. The starting point S is (100, 100) and the target point T is (430, 430).
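The feasibility conditions in Step 5 above amount to a segment-obstacle intersection test. A minimal sketch follows; modelling obstacles as circles (cx, cy, r) is an assumption for illustration, whereas the experiments use raster obstacle regions:

```python
import math

def segment_clear(p, q, obstacles):
    """Return True when the segment p-q does not touch any circular obstacle."""
    px, py = q[0] - p[0], q[1] - p[1]
    L2 = px * px + py * py                      # squared segment length
    for cx, cy, r in obstacles:
        # parameter t of the point on p-q closest to the circle centre
        t = 0.0 if L2 == 0 else max(0.0, min(1.0,
                ((cx - p[0]) * px + (cy - p[1]) * py) / L2))
        dx, dy = p[0] + t * px - cx, p[1] + t * py - cy
        if math.hypot(dx, dy) <= r:             # closest point inside the circle
            return False
    return True
```

A candidate point x_{id} would be kept only if both segments to its neighbours pass this test; otherwise it is resampled, as in Step 5.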
The black areas are the obstacles. ST is divided into 33 segments. WCA, PSO and GA are compared in planning the optimal path between S and T. The maximum iteration number of the three algorithms is 500. The parameters of WCA are n = 200, maxk = 200, q = 5, h = 4, maxdh = 15, stepa = 1.5, stepb = 0.9 and m = 4. In PSO, the number of particles is 200, the inertia weight is 0.729 and both learning coefficients are 1.496. In GA, the crossover and mutation rates are 0.9 and 0.1; the population size is 400 and the length of an individual is 600. Suppose the speed of the robot is 20 pixels/s and it takes one second to turn a corner. The length of the optimal path and the walking time of the robot are compared in Table 3.

Table 3. The comparison of the three algorithms

    Algorithm   Length (pixel)   Time (s)
    WCA         486.9351         27.3468
    PSO         490.4596         28.5230
    GA          490.3649         29.5182

As shown in Table 3, PSO and GA are trapped in local optima, while the path planned by WCA is shorter and less time-consuming. WCA can search for the global optimum quickly and has good convergence. The results show that WCA can solve path planning for the mobile robot effectively.

V. Conclusions

In this study, WCA, which simulates the intelligent predatory behaviors of the wolf colony, is proposed. WCA is a global optimization algorithm based on the behaviors of wolves. It searches for the global optimum through coexistence and cooperation within the wolf colony. The performance of WCA is discussed on function optimization problems.
WCA is applied to path planning for the mobile robot, and the results show that the path planning method based on WCA is viable and efficient. The characteristics of WCA can be summarized as follows:

(1) Global search ability. The searching artificial wolves search in many directions for the position nearest to the quarry, so the global search ability of the algorithm is strong.

(2) Simplicity. The algorithm only uses the objective function, and its implementation is simple.

(3) Good generalization. WCA can be applied to many optimization problems with slight modifications.

Therefore, WCA is well suited to solving optimization problems.

References

[1] Jingan Yang, Yanbin Zhuang, "An improved ant colony optimization algorithm for solving a complex combinatorial optimization problem", Applied Soft Computing, Vol.10, No.2, pp.653–660, 2010.
[2] J.H. Holland, "Outline for a logical theory of adaptive systems", Journal of the Association for Computing Machinery, Vol.9, No.3, pp.297–314, 1962.
[3] Luong Duc Long, Ario Ohsato, "A genetic algorithm based method for scheduling repetitive construction projects", Automation in Construction, Vol.18, No.4, pp.499–511, 2009.
[4] J. Kennedy, R.C. Eberhart, "Particle swarm optimization", Proc. of IEEE International Conference on Neural Networks, Piscataway, USA, pp.1942–1948, 1995.
[5] Zhang Changsheng, Sun Jigui, Ouyang Dantong, "A self-adaptive discrete particle swarm optimization algorithm", Acta Electronica Sinica, Vol.32, No.2, pp.209–304, 2009. (in Chinese)
[6] M. Dorigo, V. Maniezzo, A. Colorni, "Ant system: optimization by a colony of cooperating agents", IEEE Transactions on Systems, Man, and Cybernetics, Vol.26, No.1, pp.29–41, 1996.
[7] Frank Neumann, Carsten Witt, "Ant Colony Optimization and the minimum spanning tree problem", Theoretical Computer Science, Vol.411, No.25, pp.2406–2413, 2010.
[8] Maoguo Gong, Licheng Jiao, Dongdong Yang et al., "Research on evolutionary multi-objective optimization algorithms", Journal of Software, Vol.20, No.2, pp.271–289, 2009. (in Chinese)
[9] Hui Pan, Ling Wang, Bo Liu, "Particle swarm optimization for function optimization in noisy environment", Applied Mathematics and Computation, Vol.181, No.2, pp.908–919, 2006.
[10] Xiaomin Hu, Jun Zhang, Yun Li, "Orthogonal methods based ant colony search for solving continuous optimization problems", Journal of Computer Science and Technology, Vol.23, No.1, pp.2–18, 2008.
[11] Klaus Meer, "Simulated annealing versus metropolis for a TSP instance", Information Processing Letters, Vol.104, No.6, pp.216–219, 2007.
[12] T. Lust, A. Jaszkiewicz, "Speed-up techniques for solving large-scale biobjective TSP", Computers & Operations Research, Vol.37, No.3, pp.521–533, 2010.
[13] Shi Qiang Liu, Erhan Kozan, "Scheduling trains as a blocking parallel-machine job shop scheduling problem", Computers & Operations Research, Vol.36, No.10, pp.2840–2852, 2009.
[14] F. Pezzella, G. Morganti, G. Ciaschetti, "A genetic algorithm for the flexible job-shop scheduling problem", Computers & Operations Research, Vol.35, No.10, pp.3202–3212, 2008.
[15] Meijuan Gao, Jin Xu, Jingwen Tian et al., "Path planning for mobile robot based on chaos genetic algorithm", Fourth International Conference on Natural Computation, Jinan, China, pp.409–413, 2008.
[16] Lidia M. Ortega, Antonio J. Rueda, Francisco R. Feito, "A solution to the path planning problem using angle preprocessing", Robotics and Autonomous Systems, Vol.58, No.1, pp.27–36, 2010.
[17] Liu Changan, Chang Jingang, Liu Chunyang, "Path planning for mobile robot based on an improved probabilistic roadmap method", Chinese Journal of Electronics, Vol.18, No.3, pp.395–399, 2009.

LIU Changan was born in Heilongjiang Baiquan in Dec. 1971. Professor, Ph.D. Received the B.S. degree from Northeast Agricultural University in 1995 and the M.S. and Ph.D. degrees from Harbin Institute of Technology in 1997 and 2001, respectively. Joined North China Electric Power University in 2001; director of the Intelligence Robot Research Institute, North China Electric Power University. Research fields: technology of intelligent robots, theory of artificial intelligence. (Email: liuchangan@)

YAN Xiaohu received the B.S. degree from Huazhong Agricultural University in 2008 and entered the School of Computer Science and Technology, North China Electric Power University, as a graduate student working on intelligent robot technology.

LIU Chunyang was born in Tangshan in Oct. 1978. Lecturer. Received the B.S. degree from Hebei Normal University in 2002 and the M.S. degree from North China Electric Power University in 2005; joined North China Electric Power University in 2005. Research fields: technology of intelligent robots, theory of artificial intelligence.

WU Hua was born in Jinzhou in Oct. 1981. Lecturer, Ph.D. Joined North China Electric Power University in 2010. Research fields: technology of intelligent robots, theory of artificial intelligence.
