Ant colony optimization algorithm for learning brain effective connectivity network from fMRI data
Ant Colony Optimization
Ant Colony Optimization with Immigrants Schemes for the Dynamic Vehicle Routing Problem

Michalis Mavrovouniotis (Department of Computer Science, University of Leicester, University Road, Leicester LE1 7RH, United Kingdom, mm251@) and Shengxiang Yang (Department of Information Systems and Computing, Brunel University, Uxbridge, Middlesex UB8 3PH, United Kingdom, shengxiang.yang@)

Abstract. Ant colony optimization (ACO) algorithms have proved to be able to adapt to dynamic optimization problems (DOPs) when they are enhanced to maintain diversity and transfer knowledge. Several approaches have been integrated with ACO to improve its performance for DOPs. Among these integrations, the ACO algorithm with immigrants schemes has shown good results on the dynamic travelling salesman problem. In this paper, we investigate ACO algorithms to solve a more realistic DOP, the dynamic vehicle routing problem (DVRP) with traffic factors. Random immigrants and elitism-based immigrants are applied to ACO algorithms, which are then investigated on different DVRP test cases. The results show that the proposed ACO algorithms achieve promising results, especially when elitism-based immigrants are used.

1 Introduction

In the vehicle routing problem (VRP), a number of vehicles with limited capacity are routed in order to satisfy the demand of all customers at a minimum cost (usually the total travel time). Ant colony optimization (ACO) algorithms have shown good performance for the VRP, where a population of ants cooperate and construct vehicle routes [5]. The cooperation mechanism of ants is achieved via their pheromone trails, where each ant deposits pheromone to its trails and the remaining ants can exploit it [2].

The dynamic VRP (DVRP) is closer to a real-world application since traffic jams in the road system are considered. As a result, the travel time between customers may change depending on the time of the day. In dynamic optimization problems (DOPs) the moving optimum needs to be tracked over time. ACO algorithms can adapt to dynamic
changes since they are inspired by nature, which is a continuous adaptation process [9]. In practice, they can adapt by transferring knowledge from past environments [1]. The challenge for such algorithms is how quickly they can react to dynamic changes in order to maintain high-quality output and avoid premature convergence.

C. Di Chio et al. (Eds.): EvoApplications 2012, LNCS 7248, pp. 519-528, 2012. (c) Springer-Verlag Berlin Heidelberg 2012

Developing strategies for ACO algorithms to deal with premature convergence and address DOPs has attracted a lot of attention, including local and global restart strategies [7], memory-based approaches [6], pheromone manipulation schemes to maintain diversity [4], and immigrants schemes to increase diversity [11,12]. These approaches have been applied to the dynamic travelling salesman problem (DTSP), which is the simplest case of a DVRP, i.e., only one vehicle is used. The ACO algorithms that are integrated with immigrants schemes have shown promising results on the DTSP, where immigrant ants replace the worst ants in the population every iteration [11].

In this paper, we integrate two immigrants schemes, i.e., random immigrants and elitism-based immigrants, into ACO algorithms and apply them to the DVRP with traffic factors. The aim of random immigrants ACO (RIACO) is to increase the diversity in order to adapt well in DOPs, and the aim of elitism-based immigrants ACO (EIACO) is to generate guided diversity to avoid randomization.

The rest of the paper is organized as follows. Section 2 describes the problem we try to solve, i.e., the DVRP with traffic factors. Section 3 describes the ant colony system (ACS), which is one of the best performing algorithms for the VRP. Section 4 describes our proposed approaches, where we incorporate immigrants schemes with ACO. Section 5 describes the experiments carried out by comparing RIACO and EIACO with ACS. Finally, Section 6 concludes this paper with directions for future work.

2 The DVRP with Traffic Jams

The
VRP has become one of the most popular combinatorial optimization problems due to its similarities with many real-world applications. The VRP is classified as NP-hard [10]. The basic VRP can be described as follows: a number of vehicles with a fixed capacity need to satisfy the demand of all the customers, starting from and returning to the depot.

Usually, the VRP is represented by a complete weighted graph G = (V, E) with n + 1 nodes, where V = {u_0, ..., u_n} is a set of vertices corresponding to the customers (or delivery points) u_i (i = 1, ..., n) and the depot u_0, and E = {(u_i, u_j) : i ≠ j} is a set of edges. Each edge (u_i, u_j) is associated with a non-negative value d_ij which represents the distance (or travel time) between u_i and u_j. For each customer u_i, a non-negative demand D_i is given. For the depot u_0, a zero demand is associated, i.e., D_0 = 0.

The aim of the VRP is to find the route (or a set of routes) with the lowest cost without violating the following constraints: (1) every customer is visited exactly once by only one vehicle; (2) every vehicle starts and finishes at the depot; and (3) the total demand of every vehicle route must not exceed the vehicle capacity Q. The number of routes identifies the corresponding number of vehicles used to generate one VRP solution, which is not fixed but chosen by the algorithm.

The VRP becomes more challenging if it is subject to a dynamic environment.
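The three constraints can be checked mechanically against a candidate solution. Below is a minimal sketch in Python (a hypothetical helper, not part of the paper) that validates a solution given as a list of routes over the graph defined above and returns its cost:

```python
def vrp_cost(routes, d, demand, Q, n):
    """Validate a VRP solution (each route starts/ends at depot 0) and return its cost."""
    visited = []
    cost = 0.0
    for route in routes:
        # (2) every vehicle starts and finishes at the depot
        assert route[0] == 0 and route[-1] == 0, "route must start/end at depot"
        # (3) the total demand of the route must not exceed the capacity Q
        assert sum(demand[c] for c in route) <= Q, "capacity exceeded"
        visited += [c for c in route if c != 0]
        cost += sum(d[a][b] for a, b in zip(route, route[1:]))
    # (1) every customer is visited exactly once by only one vehicle
    assert sorted(visited) == list(range(1, n + 1)), "customers not covered exactly once"
    return cost
```

The length of `routes` is the number of vehicles, which, as noted above, is not fixed but chosen by the algorithm.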
There are many variations of the DVRP, such as the DVRP with dynamic demand [14]. In this paper, we generate a DVRP with traffic factors, where each edge (u_i, u_j) is associated with a traffic factor t_ij. Therefore, the cost to travel from u_i to u_j is c_ij = d_ij × t_ij. Furthermore, the cost to travel from u_j to u_i may differ due to a different traffic factor. For example, one road may have more traffic in one direction and less traffic in the opposite direction.

Every f iterations a random number R ∈ [F_L, F_U] is generated to represent potential traffic jams, where F_L and F_U are the lower and upper bounds of the traffic factor, respectively. Each edge has a probability m of receiving a traffic factor t_ij = 1 + R, with a different R generated per edge to represent high and low traffic jams on different roads; the traffic factor of the remaining edges is set to 1 (indicating no traffic). Note that f and m represent the frequency and magnitude of changes in the DVRP, respectively.

3 ACO for the DVRP

The ACO metaheuristic consists of a population of μ ants which construct solutions and share their information with each other via their pheromone trails.
The first ACO algorithm developed is the Ant System (AS) [2]. Many variations of the AS have been developed over the years and applied to difficult optimization problems [3]. The best performing ACO algorithm for the DVRP is the ACS [13]. There is a multi-colony variation of this algorithm applied to the VRP with time windows [5]. However, in this paper we consider the single colony which has been applied to the DVRP [13].

Initially, all the ants are placed on the depot and all pheromone trails are initialized with an equal amount. With a probability 1 − q_0, where 0 ≤ q_0 ≤ 1 is a parameter of the pseudo-random proportional decision rule (usually 0.9 for ACS), an ant k chooses the next customer j from customer i as follows:

    p_ij^k = ([τ_ij]^α [η_ij]^β) / (Σ_{l ∈ N_i^k} [τ_il]^α [η_il]^β), if j ∈ N_i^k; 0, otherwise,   (1)

where τ_ij is the existing pheromone trail between customers i and j, η_ij is the heuristic information available a priori, defined as 1/c_ij, where c_ij is the travel cost (as calculated in Section 2) between customers i and j, N_i^k denotes the neighbourhood of unvisited customers of ant k when its current customer is i, and α and β are the two parameters that determine the relative influence of pheromone trail and heuristic information, respectively. With probability q_0, ant k instead chooses the next customer that maximizes [τ_ij]^α [η_ij]^β, rather than probabilistically as in Eq. (1). However, if the choice of the next customer leads to an infeasible solution, i.e., exceeds the maximum capacity Q of the vehicle, the depot is chosen and a new vehicle route starts.

When all ants have constructed their solutions, the best ant retraces its solution and deposits pheromone globally according to its solution quality on the corresponding trails, as follows:

    τ_ij ← (1 − ρ) τ_ij + ρ Δτ_ij^best, ∀(i, j) ∈ T^best,   (2)

where 0 < ρ ≤ 1 is the pheromone evaporation rate and Δτ_ij^best = 1/C^best, where C^best is the total cost of the tour T^best. Moreover, a local pheromone update is performed every time an ant chooses another
customer j from customer i, as follows:

    τ_ij ← (1 − ρ) τ_ij + ρ τ_0,   (3)

where ρ is defined as in Eq. (2) and τ_0 is the initial pheromone value.

Pheromone evaporation is the mechanism that eliminates areas with a high intensity of pheromone generated by ants due to stagnation behaviour (a term used when all ants follow the same path and construct the same solution), in order to adapt well to the new environment. The recovery time depends on the size of the problem and the magnitude of change.

4 ACO with Immigrants Schemes for the DVRP

4.1 Framework

The framework of the proposed algorithms is based on the ACO algorithms that were used for the DTSP [11,12]. It will be interesting to observe whether the framework based on immigrants schemes is beneficial for more realistic problems, such as the DVRP with traffic factors described in Section 2.

The initial phase of the algorithm and the solution construction of the ants are the same as in the ACS; see Eq. (1). The difference of the proposed framework is that it uses a short-term memory in every iteration t, denoted k_short(t), of limited size K_s, which is associated with the pheromone matrix. Initially, k_short(0) is empty, and at the end of each iteration the K_s best ants are added to k_short(t). Each ant k that enters k_short(t) deposits a constant amount of pheromone to the corresponding trails, as follows:

    τ_ij ← τ_ij + Δτ_ij^k, ∀(i, j) ∈ T^k,   (4)

where Δτ_ij^k = (τ_max − τ_0)/K_s and T^k is the tour of ant k. Here, τ_max and τ_0 are the maximum and initial pheromone values, respectively. Every iteration, when the ants from k_short(t−1) are replaced with the K_s best ants from iteration t, a negative update is performed on their pheromone trails, as follows:

    τ_ij ← τ_ij − Δτ_ij^k, ∀(i, j) ∈ T^k,   (5)

where Δτ_ij^k and T^k are defined as in Eq. (4). This is because no ant can survive for more than one iteration because of the dynamic environment.

In addition, immigrant ants replace the worst ants in k_short(t) every iteration, and further adjustments are performed on the pheromone trails since k_short(t) changes. The main concern when dealing with immigrants schemes is how to
generate immigrant ants that represent feasible solutions.

4.2 Random Immigrants ACO (RIACO)

Traditionally, immigrants are randomly generated and replace other ants in the population to increase diversity. A random immigrant ant for the DVRP is generated as follows. First, the depot is added as the starting point; then, an unvisited customer is randomly selected as the next point. This process is repeated until the first segment (starting from the most recent visit to the depot) of customers does not violate the capacity constraint. When the capacity constraint is violated, the depot is added and another segment of customers starts. When all customers are visited, the solution represents one feasible VRP solution. Considering the proposed framework described above, before the pheromone trails are updated, a set S_ri of r × K_s immigrants is generated to replace the worst ants in k_short(t), where r is the replacement rate.

RIACO has been found to perform better in fast and significantly changing environments for the DTSP [11]. This is because when the changing environments are not similar, it is better to randomly increase the diversity than to transfer knowledge. Moreover, when the environmental changes are fast, there is not enough time to gain useful knowledge in order to transfer it. However, there is a high risk of randomization with RIACO that may disturb the optimization process. A similar behaviour is expected for the DVRP.

4.3 Elitism-Based Immigrants ACO (EIACO)

Differently from RIACO, which generates diversity randomly with the immigrants, EIACO generates guided diversity by the knowledge transferred from the best ant of the previous environment. An elitism-based immigrant ant for the DVRP is generated as follows. The best ant of the previous environment is selected in order to use it as the base to generate elitism-based immigrants. The
depots of the best ant are removed and adaptive inversion is performed based on the inver-over operator [8]. When the inversion operator finishes, the depots are added so that the capacity constraint is satisfied, in order to represent one feasible VRP solution. Considering the proposed framework above, in iteration t the elite ant from k_short(t−1) is used as the base to generate a set S_ei of r × K_s immigrants, where r is the replacement rate. The elitism-based immigrants replace the worst ants in k_short(t) before the pheromone trails are updated.

EIACO has been found to perform better in slowly and slightly changing environments for the DTSP [11]. This is because the knowledge transferred will be more useful when the changing environments are similar. However, there is a risk of transferring too much knowledge, starting the optimization process from a local optimum, and getting stuck there. A similar behaviour is expected for the DVRP.

5 Simulation Experiments

5.1 Experimental Setup

In the experiments, we compare the proposed RIACO and EIACO with the existing ACS, described in Section 3. All the algorithms have been applied to the vrp45, vrp72, and vrp135 problem instances (taken from the Fisher benchmark instances available at http://neo.lcc.uma.es/radi-aeb/WebVRP/). To achieve a good balance between exploration and exploitation, most of the parameters have been obtained from our preliminary experiments, while others have been inspired by the literature [11]. For all algorithms, μ = 50 ants are used, α = 1, β = 5, and τ_0 = 1/n. For ACS, q_0 = 0.9 and ρ = 0.7. Note that a lower evaporation rate, i.e., ρ = 0.1, has been tried for ACS with similar or worse results. For the proposed algorithms, q_0 = 0.0, K_s = 10, τ_max = 1.0, and r = 0.4.

For each algorithm on a DVRP instance, N = 30 independent runs were executed on the same environmental changes. The algorithms were executed for G = 1000 iterations and the overall offline performance is calculated as follows:

    P_offline = (1/G) Σ_{i=1}^{G} ( (1/N) Σ_{j=1}^{N} P*_ij ),   (6)

where P*_ij defines the tour cost of the best ant since the last dynamic change at iteration i of run j [9]. The value of f was set
to 10 and 100, which indicate fast and slowly changing environments, respectively. The value of m was set to 0.1, 0.25, 0.5, and 0.75, which indicate small, medium, and large degrees of environmental change, respectively. The bounds of the traffic factor are set as F_L = 0 and F_U = 5. As a result, eight dynamic environments, i.e., 2 values of f × 4 values of m, were generated from each stationary VRP instance, as described in Section 2, to systematically analyze the adaptation and searching capability of each algorithm on the DVRP.

5.2 Experimental Results and Analysis

The experimental results regarding the offline performance of the algorithms are presented in Table 1, and the corresponding statistical results of the Wilcoxon rank-sum test at the 0.05 level of significance are presented in Table 2. Moreover, to better understand the dynamic behaviour of the algorithms, the results of the largest problem instance, i.e., vrp135, are plotted in Fig. 1 with f = 10, m = 0.1 and m = 0.75, and f = 100, m = 0.1 and m = 0.75, for the first 500 iterations.
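Eq. (6) is a double average: over runs, then over iterations, of the best-since-last-change tour cost. A direct sketch (the data layout is a hypothetical assumption, not the authors' code):

```python
def offline_performance(p_best):
    """P_offline of Eq. (6): p_best[j][i] is the tour cost of the best ant since
    the last dynamic change, at iteration i of run j."""
    N = len(p_best)        # number of independent runs
    G = len(p_best[0])     # number of iterations per run
    per_iter_mean = [sum(p_best[j][i] for j in range(N)) / N for i in range(G)]
    return sum(per_iter_mean) / G
```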
From the experimental results, several observations can be made by comparing the behaviour of the algorithms.

Table 1. Comparison of algorithms regarding the results of the offline performance

                        f = 10                           f = 100
  m =         0.1     0.25    0.5     0.75      0.1     0.25    0.5     0.75
  vrp45
  ACS         897.5   972.5   1205.6  1648.0    883.4   929.1   1120.2  1536.9
  RIACO       841.2   902.4   1089.5  1482.9    834.9   867.5   1016.1  1375.1
  EIACO       840.1   899.8   1083.8  1473.5    839.8   860.6   1009.1  1355.5
  vrp72
  ACS         305.3   338.6   426.2   596.2     297.3   324.6   412.7   547.9
  RIACO       294.4   322.8   401.7   562.5     280.6   303.5   375.2   489.6
  EIACO       289.9   319.4   397.8   557.0     276.2   298.5   366.7   476.5
  vrp135
  ACS         1427.7  1567.3  1967.4  2745.7    1383.7  1519.4  1820.5  2536.2
  RIACO       1417.8  1554.2  1922.1  2676.0    1353.1  1457.2  1698.6  2358.4
  EIACO       1401.3  1542.1  1907.6  2663.1    1329.1  1444.3  1668.5  2293.8

Table 2. Statistical tests of comparing algorithms regarding the offline performance, where "+" or "−" means that the first algorithm or the second algorithm is significantly better, respectively

  Alg. & Inst.        vrp45             vrp72             vrp135
  f = 10, m =     0.1 0.25 0.5 0.75  0.1 0.25 0.5 0.75  0.1 0.25 0.5 0.75
  RIACO ⇔ ACS      +   +   +   +     +   +   +   +      +   +   +   +
  EIACO ⇔ ACS      +   +   +   +     +   +   +   +      +   +   +   +
  EIACO ⇔ RIACO    +   +   +   +     +   +   +   +      +   +   +   +
  f = 100, m =    0.1 0.25 0.5 0.75  0.1 0.25 0.5 0.75  0.1 0.25 0.5 0.75
  RIACO ⇔ ACS      +   +   +   +     +   +   +   +      +   +   +   +
  EIACO ⇔ ACS      +   +   +   +     +   +   +   +      +   +   +   +
  EIACO ⇔ RIACO    −   +   +   +     +   +   +   +      +   +   +   +

First, RIACO outperforms ACS in all the dynamic test cases; see the results of RIACO ⇔ ACS in Table 2. This validates our expectation that ACS needs sufficient time to recover when a dynamic change occurs, which can also be observed from Fig. 1 in the environmental case with f = 100. This is because pheromone evaporation is the only mechanism ACS uses to eliminate pheromone trails that are not useful in the new environment, and it may bias the population towards areas that are not near the new optimum. On the other hand, RIACO uses the proposed framework, in which the pheromone trails exist for only one iteration.

Second, EIACO outperforms ACS in all the dynamic test cases, as does RIACO; see the results of EIACO ⇔ ACS
in Table 2. This is due to the same reasons for which RIACO outperforms the traditional ACS. Moreover, EIACO outperforms RIACO in almost all dynamic test cases; see the results of EIACO ⇔ RIACO in Table 2. In slowly and slightly changing environments, EIACO has sufficient time to gain knowledge from the previous environment, and the knowledge transferred has more chances to help when the changing environments are similar. However, on the smallest problem instance, i.e., vrp45, with f = 100 and m = 0.1, RIACO performs better than EIACO.

Fig. 1. Offline performance of algorithms for different dynamic test problems

Fig. 2. Offline performance of RIACO and EIACO with different replacement rates against the performance of ACS in slowly changing environments

This validates our expectation that too much knowledge transferred does not always mean better results in dynamic environments. On the other hand, RIACO was expected to perform better than EIACO in fast and significantly changing environments, since the random immigrants only increase the diversity, but that is not the case. This is possibly because too much randomization may disturb the optimization process, and it requires further
investigation regarding the effect of the immigrant ants.

Third, in order to investigate the effectiveness of the immigrants schemes, further experiments have been performed on the same problem instances with the same parameters used before but with different immigrant replacement rates, i.e., r ∈ {0.0, 0.2, 0.4, 0.6, 0.8, 1.0}. In Fig. 2 the offline performance of RIACO and EIACO with the varying replacement rates is presented against the ACS performance, where r = 0.0 means that no immigrants are generated to replace ants in k_short(t). (The experimental results of the remaining problem instances and dynamic test cases are similar for EIACO, whereas for RIACO there is an improvement when r > 0.0 on the smallest problem instance.) The results confirm our expectation above: the random immigrants in RIACO sometimes may disturb the optimization and degrade the performance. On the other hand, elitism-based immigrants in EIACO improve the performance, especially in slightly changing environments.

Finally, the proposed framework performs better than ACS even if no immigrants are generated; see Fig. 2. RIACO with r = 1.0 performs worse than ACS, whereas EIACO with r = 1.0 performs better than ACS. This is because RIACO destroys all the knowledge transferred to k_short(t) from the ants of the previous iteration with random immigrants, whereas EIACO destroys that knowledge but transfers new knowledge using the best ant from the previous iteration.

6 Conclusions

Different immigrants schemes have been successfully applied to evolutionary algorithms and ACO algorithms to address different DOPs [11,16]. ACO-based algorithms with immigrants, i.e., RIACO and EIACO, have shown good performance on different variations of the DTSP [11,12]. In this paper, we modify and apply such algorithms to address the DVRP with traffic factors, which is closer to a real-world application. The immigrant ants are generated either randomly or using the previous best ant as the base, and replace the worst ones in the population. The aim is to maintain the diversity of solutions and transfer knowledge from previous environments in order to adapt well in
DOPs.

Comparing RIACO and EIACO with ACS, one of the best performing ACO algorithms for the VRP, on different test cases of DVRPs, the following concluding remarks can be drawn. First, the proposed framework used to integrate ACO with immigrants schemes performs better than the traditional framework, even when immigrant ants are not generated. Second, EIACO is significantly better than RIACO and ACS in almost all dynamic test cases. Third, RIACO is significantly better than ACS in all dynamic test cases. Finally, the random immigrants may disturb the optimization process and thereby degrade the performance, whereas the elitism-based immigrants transfer knowledge and thereby improve the performance for the DVRP with traffic factors.

An obvious direction for future work is to hybridize the two immigrants schemes. However, from our preliminary results the performance of the hybrid scheme is better than RIACO but worse than EIACO in all dynamic test cases. Therefore, finding another way to achieve a good balance between the knowledge transferred and the diversity generated would be interesting for future work. Another future work is to integrate memory-based immigrants with ACO, which have also performed well on the DTSP [12], into the DVRP with traffic factors.
References

1. Bonabeau, E., Dorigo, M., Theraulaz, G.: Swarm Intelligence: From Natural to Artificial Systems. Oxford University Press, New York (1999)
2. Dorigo, M., Maniezzo, V., Colorni, A.: Ant system: optimization by a colony of cooperating agents. IEEE Trans. on Syst., Man and Cybern., Part B: Cybern. 26(1), 29-41 (1996)
3. Dorigo, M., Stützle, T.: Ant Colony Optimization. The MIT Press, London (2004)
4. Eyckelhof, C.J., Snoek, M.: Ant Systems for a Dynamic TSP. In: ANTS 2002: Proc. of the 3rd Int. Workshop on Ant Algorithms, pp. 88-99 (2002)
5. Gambardella, L.M., Taillard, E., Agazzi, G.: MACS-VRPTW: A multiple ant colony system for vehicle routing problems with time windows. In: Corne, D., et al. (eds.) New Ideas in Optimization, pp. 63-76 (1999)
6. Guntsch, M., Middendorf, M.: Applying Population Based ACO to Dynamic Optimization Problems. In: Dorigo, M., Di Caro, G.A., Sampels, M. (eds.) Ant Algorithms 2002. LNCS, vol. 2463, pp. 111-122. Springer, Heidelberg (2002)
7. Guntsch, M., Middendorf, M.: Pheromone Modification Strategies for Ant Algorithms Applied to Dynamic TSP. In: Boers, E.J.W., Gottlieb, J., Lanzi, P.L., Smith, R.E., Cagnoni, S., Hart, E., Raidl, G.R., Tijink, H. (eds.) EvoWorkshops 2001. LNCS, vol. 2037, pp. 213-222. Springer, Heidelberg (2001)
8. Tao, G., Michalewicz, Z.: Inver-over Operator for the TSP. In: Eiben, A.E., Bäck, T., Schoenauer, M., Schwefel, H.-P. (eds.) PPSN 1998. LNCS, vol. 1498, pp. 803-812. Springer, Heidelberg (1998)
9. Jin, Y., Branke, J.: Evolutionary optimization in uncertain environments - a survey. IEEE Trans. on Evol. Comput. 9(3), 303-317 (2005)
10. Labbé, M., Laporte, G., Mercure, H.: Capacitated vehicle routing on trees. Operations Research 39(4), 616-622 (1991)
11. Mavrovouniotis, M., Yang, S.: Ant Colony Optimization with Immigrants Schemes in Dynamic Environments. In: Schaefer, R., Cotta, C., Kolodziej, J., Rudolph, G. (eds.) PPSN XI. LNCS, vol. 6239, pp. 371-380. Springer, Heidelberg (2010)
12. Mavrovouniotis, M., Yang, S.: Memory-Based Immigrants for Ant Colony Optimization in Changing Environments. In: Di Chio, C., et al. (eds.) EvoApplications 2011, Part I. LNCS, vol. 6624, pp. 324-333. Springer, Heidelberg (2011)
13. Montemanni, R., Gambardella, L., Rizzoli, A., Donati, A.: Ant colony system for a dynamic vehicle routing problem. Journal of Combinatorial Optimization 10(4), 327-343 (2005)
14. Psaraftis, H.: Dynamic vehicle routing: status and prospects. Annals of Operations Research 61, 143-164 (1995)
15. Rizzoli, A.E., Montemanni, R., Lucibello, E., Gambardella, L.M.: Ant colony optimization for real-world vehicle routing problems - from theory to applications. Swarm Intelligence 1(2), 135-151 (2007)
16. Yang, S.: Genetic algorithms with memory- and elitism-based immigrants in dynamic environments. Evol. Comput. 16(3), 385-416 (2008)
Ant Colony Algorithm with Constraints

Outline:
I. Introduction to the ant colony algorithm: 1. Origin of the algorithm; 2. Basic principle
II. Reasons for adding constraints: 1. Constraints in real-world problems; 2. The influence of constraints on the algorithm
III. Methods for adding constraints to the ant colony algorithm: 1. Introducing a penalty function; 2. Improving the pheromone update rule; 3. Adopting a local search strategy
IV. Application cases of the constrained ant colony algorithm: 1. The travelling salesman problem; 2. The loading problem; 3. Wireless sensor network deployment
V. Summary and outlook: 1. How constraints improve the ant colony algorithm; 2. Application prospects for other optimization problems

I. Introduction to the Ant Colony Algorithm

The ant colony algorithm (Ant Colony Optimization, ACO) is an optimization algorithm that simulates the foraging behaviour of ants in nature. The algorithm originated in 1992, when it was proposed by the Italian scholar Marco Dorigo and colleagues. Its basic principle is to simulate the pheromone updating, path selection, and local search behaviour of ants during foraging, so as to find the shortest path from the nest to a food source within a limited time.

II. Reasons for Adding Constraints

In real-world problems, many optimization tasks come with constraints. These constraints can restrict the algorithm's search space and cause the search to become trapped in local optima. It is therefore necessary to incorporate constraints into the ant colony algorithm. Constraints can affect the pheromone update, path selection, and other aspects of the algorithm, and thus influence both the search process and the final result.

III. Methods for Adding Constraints to the Ant Colony Algorithm

To make the ant colony algorithm perform better on constrained problems, researchers have proposed many improvements. Three common methods are listed below.
1. Introducing a penalty function. By introducing a penalty function, solutions that violate the constraints are penalized, lowering their priority during the search.
2. Improving the pheromone update rule. The pheromone update rule has a large influence on the search process; by improving it, the algorithm can be made to prefer good solutions that also satisfy the constraints.
3. Adopting a local search strategy. Local search can, to some extent, prevent the algorithm from getting stuck in local optima. By introducing a local search strategy, the algorithm is more likely to find the global optimum while satisfying the constraints.
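Method 1 above can be sketched as a penalty-augmented objective. The function below is an illustrative sketch (the names and the linear penalty form are assumptions, not a standard): infeasible solutions keep a finite but inflated cost, so they lose out during selection and pheromone update rather than being discarded outright.

```python
def penalized_cost(cost, violations, weight=1000.0):
    """Base cost plus a penalty proportional to the total constraint violation.
    `violations` holds one non-negative slack value per constraint (0 = satisfied)."""
    return cost + weight * sum(max(0.0, v) for v in violations)
```

A feasible solution (all violations zero) keeps its original cost, while an infeasible one is ranked behind every feasible solution as long as `weight` dominates the cost scale.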
IV. Application Cases of the Constrained Ant Colony Algorithm

After constraints are added, the ant colony algorithm can effectively solve many practical problems.
Ant Algorithms and Ant Colony Algorithms

The ant algorithm (Ant Colony Algorithm) and the ant colony algorithm (Ant Colony Optimization) are heuristic optimization algorithms inspired by the behaviour of ants when foraging and building paths. Both are based on simulating ant behaviour, using the collective intelligence of simulated ants to solve combinatorial optimization problems. Their basic principles are similar, but their application areas and concrete implementations may differ. A brief introduction to each follows.

Ant algorithm: The ant algorithm is mainly used for shortest-path problems in graph theory, such as the travelling salesman problem (Traveling Salesman Problem, TSP). Its basic idea is to simulate the behaviour of ants searching for food in an environment: ants find good paths through the release and perception of pheromone. The core concepts of the ant algorithm are pheromone and heuristic rules.

Pheromone (Pheromone): a chemical substance that ants release along a path, used to convey information and mark how good the path is. The pheromone concentration on a path depends on the number of ants and the path length. Heuristic rule (Heuristic Rule): ants make decisions based on local information and heuristic rules, which may include the path length, the pheromone concentration on the path, and similar information. By simulating the behaviour of many ants and continually adjusting the pheromone concentration on the paths during the search, the ant algorithm finds good solutions.

Ant colony algorithm: The ant colony algorithm is a more general optimization algorithm, widely applied to combinatorial optimization problems. Besides shortest-path problems, it can be applied to scheduling, resource allocation, network routing, and other areas. Its basic principle is similar to that of the ant algorithm: it also solves problems by simulating the collective behaviour of ants. In the ant colony algorithm, ants select paths using pheromone and heuristic rules, but unlike the ant algorithm, the pheromone update mechanism and the weighting of the heuristic rules are improved.
The ant colony algorithm usually contains the following key steps:
1. Initialization: initialize the positions and paths of the ants.
2. Path selection: select paths according to the pheromone and heuristic rules.
3. Pheromone update: ants deposit pheromone on their paths; the concentration depends on path quality and on the global best solution.
4. Global update: periodically update the pheromone concentration of the global best solution.
5. Termination: when a preset termination condition is reached, stop the algorithm and output the result.
Solving the TSP with the Ant Colony Algorithm (ACO)

1. Basic principle

The ant colony algorithm (Ant Colony Optimization, ACO) is a population-based heuristic search algorithm, first proposed by the Italian scholar M. Dorigo and others in 1991. The algorithm is inspired by the collective foraging behaviour of real ant colonies in nature: through information exchange between individuals and the collective search for the shortest path between nest and food, real colonies exhibit optimization features that can be exploited to solve difficult discrete optimization problems.

Observation shows that ants deposit a chemical substance called pheromone along the paths they travel; pheromone accumulates on a path and gradually evaporates over time. During foraging, other ants of the same colony can perceive this substance and its strength, and subsequent ants choose their direction according to the pheromone concentration, tending to move towards higher concentrations. The pheromone left by travelling ants in turn reinforces the existing concentration, so paths travelled by more ants accumulate stronger pheromone, and subsequent ants choose those paths with higher probability. In a unit of time, shorter paths are visited by more ants, so their pheromone grows stronger and the probability that later ants choose the short path increases. After a period of searching, all ants end up choosing the shortest path; in other words, when multiple paths exist between nest and food, the colony as a whole can find the shortest one by following the pheromone traces left by individual ants.

In the ant colony algorithm, each individual ant represents a feasible solution of the optimization problem. First, an initial population is generated at random, which involves fixing the number of solutions, the pheromone evaporation coefficient, the structure of a solution, and so on. Then the pheromone matrix specific to the ant colony algorithm is constructed. After each ant executes the ant-movement operator, the whole population is evaluated and the best ant is recorded. The algorithm then updates the pheromone matrix according to the pheromone update operator, which completes one iteration of the population. After the colony has executed a given number of iterations, the loop exits and the best solution is output.

2. Terminology

(1) Individual ant: each ant is a single individual and represents one solution of the problem in the algorithm.
(2) Ant colony: a certain number of individual ants combined together form a colony; the ant is the basic unit of the colony.
An Improved Ant Colony Optimization Algorithm for the Discounted {0-1} Knapsack Problem

1. College of Computer and Information Science, Fujian Agriculture and Forestry University, Fuzhou 350002, China
2. Key Laboratory of Smart Agriculture and Forestry (Fujian Agriculture and Forestry University), Fuzhou 350002, China

The knapsack problem (Knapsack Problem, KP) is a classic NP-hard problem with many variants, including the multidimensional knapsack problem, the unbounded knapsack problem, the group knapsack problem, and the discounted {0-1} knapsack problem (Discounted {0-1} Knapsack Problem, DKP). The DKP was first proposed in 2007 in [1]; it abstracts promotional sales in shopping malls and has practical applications in commerce, investment decisions, resource allocation, and cryptography.

The contributions of this work are:
(1) Exploiting the structure of the DKP, the selection probability of items is computed by competition within each group, which lowers the time complexity of the algorithm.
(2) Without reducing the accuracy of the algorithm, the heuristic information is discarded, which reduces the number of parameters the algorithm uses and simplifies parameter setting.
(3) A hybrid optimization operator based on value density and value is adopted to improve the search ability of the algorithm.
(4) The algorithm designed from the above modules performs well in solving the DKP.
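To make the group structure concrete: in the DKP, items come in groups (item A, item B, and a discounted bundle A+B), and at most one member of each group may be packed. The sketch below is a simplified stand-in for the paper's group-internal competition (a hypothetical greedy by value density, not the authors' pheromone-weighted operator):

```python
def greedy_dkp(groups, capacity):
    """groups: list of groups, each a list of (value, weight) options; at most one
    option per group may be chosen. Pick greedily by value density within the
    remaining capacity; skip a group when nothing fits."""
    total_value, remaining = 0, capacity
    for options in groups:
        feasible = [(v, w) for v, w in options if w <= remaining]
        if not feasible:
            continue                               # nothing in this group fits
        v, w = max(feasible, key=lambda vw: vw[0] / vw[1])  # best value density
        total_value += v
        remaining -= w
    return total_value
```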
(2021, 57(13), p. 86)
Ant Colony Algorithm

Outline: 1. Overview of the ant colony algorithm; 2. Principle of the ant colony algorithm; 3. Improvements to the ant colony algorithm; 4. Applications of the ant colony algorithm

Pheromone update rules: the max-min ant system
The max-min ant system (MAX-MIN Ant System, MMAS) makes four improvements on the basic AS algorithm:
(1) Only the iteration-best ant (the ant that constructed the shortest path in the current iteration) or the best-so-far ant is allowed to deposit pheromone. (The iteration-best and best-so-far update rules are used alternately in MMAS.)
p(B) = 0.033 / (0.033 + 0.3 + 0.075) ≈ 0.081
p(C) = 0.3 / (0.033 + 0.3 + 0.075) ≈ 0.74
p(D) = 0.075 / (0.033 + 0.3 + 0.075) ≈ 0.18
The next city is selected by roulette-wheel selection. Suppose the random number q = random(0, 1) = 0.05; then ant 1 will select city B. The same method selects the next city for ants 2 and 3: suppose ant 2 selects city D and ant 3 selects city A.
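The roulette-wheel step in this example can be sketched directly. The helper below (a hypothetical name) normalises the unnormalised attractiveness values 0.033, 0.3, and 0.075 from above and returns the first city whose cumulative probability exceeds q:

```python
def roulette(weights, q, labels):
    """Roulette-wheel selection: pick the first label whose cumulative
    normalised weight exceeds the random draw q in [0, 1)."""
    total = sum(weights)
    acc = 0.0
    for label, w in zip(labels, weights):
        acc += w / total
        if q < acc:
            return label
    return labels[-1]  # guard against floating-point shortfall
```

With q = 0.05 the cumulative probability of B (about 0.081) is already exceeded, so ant 1 selects city B, matching the example above.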
The ant colony algorithm (ant colony optimization, ACO), also called the ant algorithm, is a probabilistic algorithm for finding optimized paths in a graph. It was proposed by Marco Dorigo in his 1992 doctoral thesis, inspired by the path-finding behaviour of ants searching for food.
Example: use the ant colony algorithm to solve a four-city TSP with the symmetric distance matrix W = (d_ij):

        A  B  C  D
    A   0  3  1  2
    B   3  0  5  4
    C   1  5  0  2
    D   2  4  2  0

Suppose the ant population size is m = 3 and the parameters are α = 1, β = 2, ρ = 0.5.
Solution: the construction and update steps are repeated until the termination condition is satisfied.
Ant Colony Algorithm with Constraints

Contents: 1. Overview of the ant colony algorithm; 2. Constraint conditions of the ant colony algorithm; 3. Application examples of the ant colony algorithm; 4. Advantages and disadvantages of the ant colony algorithm

1. Overview of the Ant Colony Algorithm

The ant colony algorithm (Ant Colony Optimization, ACO) is an optimization algorithm that simulates the foraging behaviour of ants in nature. It was proposed by the Italian scholars Dorigo, Gambardella, and others in 1991, and is a population-based stochastic search algorithm. The ant colony algorithm borrows the pheromone update mechanism of foraging ants: by simulating the information sharing and cooperative search strategies ants use while looking for food, it exhibits a strong global search capability on optimization problems.

2. Constraint Conditions of the Ant Colony Algorithm

In the ant colony algorithm, constraint conditions usually involve the following two aspects:
1. Pheromone concentration constraint: the pheromone concentration is limited by pheromone evaporation and by the amount of pheromone the ants deposit on a path. When the concentration exceeds a certain threshold, the algorithm takes corresponding measures, such as lowering the concentration or increasing the evaporation rate.
2. Ant number constraint: the number of ants in the algorithm is fixed; it neither increases nor decreases during execution. When solving practical problems, the number of ants must therefore be chosen reasonably according to the scale and complexity of the problem.

3. Application Examples of the Ant Colony Algorithm

The ant colony algorithm has achieved notable results in many fields, for example:
1. The travelling salesman problem (Traveling Salesman Problem, TSP): the TSP is one of the classic applications of the ant colony algorithm; by simulating ants searching for the shortest path among cities, the TSP can be solved.
2. The loading problem (Loading Problem): the loading problem asks how to arrange cargo within limited vehicle space so as to minimize transport cost. The ant colony algorithm shows good global search ability on this problem.
3. The ant colony algorithm has also been applied with good results in engineering design, production scheduling, supply chain management, and other areas.

4. Advantages and Disadvantages of the Ant Colony Algorithm

As an optimization algorithm, the ant colony algorithm has the following characteristics.
Advantages:
1. Strong global search ability: when solving optimization problems, the algorithm can find near-optimal solutions relatively quickly.
2. Strong adaptability: the algorithm parameters, such as pheromone concentration and evaporation rate, can be adjusted flexibly according to the characteristics and scale of the problem to improve performance.
Ant Colony Optimization and Its Applications in Engineering

Introduction: Ant colony optimization (Ant Colony Optimization, ACO) is a heuristic optimization algorithm based on ant colony behaviour, simulating the process by which ants search for food. ACO is known for its applications to combinatorial optimization problems; in the engineering field in particular, its distinctive optimization ability makes it an effective tool for solving complex problems.

1. Principle and Simulation of Ant Colony Optimization

Ant colony optimization originates from the study of ant foraging behaviour. It simulates the strategies of pheromone deposition and pheromone evaporation that ants use when searching for food. The pheromone released by ants acts as a medium for spreading information: other ants choose paths according to the pheromone concentration. In this way, ant colony optimization exploits the positive feedback mechanism of pheromone to continually refine path selection and thereby find the global optimum.
2. Basic Steps of Ant Colony Optimization

The basic steps of ant colony optimization include initializing the pheromone concentration, initializing the colony, path selection, and pheromone update.

2.1 Initializing the pheromone concentration. In ant colony optimization, the pheromone concentration represents how good a path is; initially it can be set to a constant or a random value. A larger initial concentration can help ants find promising paths, but it may also lead to premature convergence.

2.2 Initializing the colony. Ant initialization includes the random choice of positions and the initialization of paths. Usually, each ant starts at a random position in the search space.

2.3 Path selection. Ants choose paths using pheromone and heuristic information. The pheromone reflects how good a path has proved to be, while the heuristic information reflects how promising it looks. Based on this information, an ant selects the next position with a certain probability and updates its path.

2.4 Pheromone update. After an ant has traversed a path, the pheromone concentration is updated according to the quality of the path. The update also includes pheromone evaporation, which simulates the loss of information over time in the real world.
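The two mechanisms of step 2.4, quality-proportional deposit and evaporation, can be sketched together (hypothetical Python, with an AS-style deposit Δτ = Q / cost as an assumption):

```python
def update_pheromone(tau, tours, costs, rho=0.5, Q=1.0):
    """Evaporate every trail by a factor (1 - rho), then let each ant deposit
    Q / tour_cost on each edge of its tour: shorter tours deposit more."""
    n = len(tau)
    for i in range(n):
        for j in range(n):
            tau[i][j] *= (1.0 - rho)      # evaporation: information fades over time
    for tour, cost in zip(tours, costs):
        for a, b in zip(tour, tour[1:]):
            tau[a][b] += Q / cost         # deposit: reinforce the travelled edges
    return tau
```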
3. Applications of Ant Colony Optimization in Engineering

Ant colony optimization is widely applied in engineering; path planning, traffic scheduling, and power networks are discussed below.

3.1 Path planning. Path planning is one of the most common engineering applications of the ant colony algorithm. In logistics and transportation, ACO can help find the shortest path or the best route. For example, applied to autonomous vehicles, ant colony optimization can find an optimal path-planning scheme by simulating colony behaviour.

3.2 Traffic scheduling. Applying ant colony optimization to traffic scheduling can help optimize traffic flow and reduce congestion and travel time.
Ant Colony Algorithm - Assignment 4

Problem: Taking a travelling salesman problem with 100 cities as an example, write out the basic steps for solving the TSP with 20 artificial ants.

Assignment 4. Name: Zhou Lu. Student ID: 53070711. Class: 7.
A: Core formulas: τ is the pheromone concentration factor; η is the visibility factor, usually taken as the reciprocal of the distance between cities; α and β control the relative importance of pheromone and visibility; N_i^k is the set of cities ant k may still visit; Q is a constant.

Data structures:
- Pheromone matrix τ (100 × 100, recording pheromone concentrations)
- Visibility matrix η (100 × 100, usually the reciprocal of the distance (i, j))
- Tabu list Tabu (20 × 100, recording the cities each ant k has visited)

Initialization: the 20 ants are distributed at random over the 100 cities.

Tour construction: compute the transition probabilities with the first core formula; each ant selects cities in turn until it returns to its starting point.

Pheromone update: compute the pheromone concentrations at time t + 1 with the second core formula.

Stopping conditions: all ants have chosen the same route, or the algorithm has run for the maximum number of iterations.

An outline of the TSP code is roughly as follows:

    % ---- Initialize the ant colony ----
    m = 20;        % number of ants; when m is close to the number of cities n, the
                   % algorithm can find the best solution in the fewest iterations
    n = 100;       % size of the TSP instance, i.e., the number of cities (here 100)
    D = [ ... ];   % weighted adjacency matrix of the complete city graph (distances)
    NC_max = 200;  % maximum number of iterations, i.e., number of waves of ants
                   % (each wave contains 20 ants)
    alpha = 1;     % relative importance of accumulated pheromone in path selection;
                   % if alpha is too large, the algorithm stagnates after some iterations
    beta = 5;      % relative importance of the heuristic factor in path selection
    rho = 0.5;     % pheromone evaporation coefficient (0 < rho < 1);
                   % 1 - rho is the pheromone persistence coefficient
    Q = 100;       % amount of pheromone released by an ant; has little effect on performance

    % ---- Variable initialization ----
    eta = 1 ./ D;             % heuristic factor, here the reciprocal of the distances
    pheromone = ones(n, n);   % initial pheromone between any two cities set to 1
    tabu_list = zeros(m, n);  % tabu list: cities an ant has already visited and may
                              % not visit again in the current cycle
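The first core formula, using the matrices named above (τ, η, and the tabu list), can be sketched in Python (a hypothetical port of the MATLAB-style outline, not the original assignment code):

```python
def transition_probs(i, tau, eta, visited, alpha=1.0, beta=5.0):
    """Probability of an ant moving from city i to each city j not yet in its
    tabu list: p = tau[i][j]^alpha * eta[i][j]^beta, normalised over allowed cities."""
    allowed = [j for j in range(len(tau)) if j not in visited]
    weights = {j: tau[i][j] ** alpha * eta[i][j] ** beta for j in allowed}
    total = sum(weights.values())
    return {j: w / total for j, w in weights.items()}
```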
A Hybrid of the Ant Colony Algorithm and the Genetic Algorithm
The ant colony algorithm (Ant Colony Optimization, ACO) and the genetic algorithm (Genetic Algorithm, GA) both belong to the family of heuristic algorithms; they model and solve problems from different angles. ACO is based on simulated ant foraging, guiding ants through the solution space with pheromone and heuristic rules; GA simulates an evolutionary process, generating new individuals by crossover and mutation and adaptively selecting individuals to breed the next generation. Each has its own limitations, so combining the two into a hybrid algorithm can offset their respective weaknesses and solve problems more effectively.
ACO has strong global search ability, but it is slow and can still get trapped in local optima. GA can converge quickly to a local optimum, but may then be unable to escape it. Combining the two therefore exploits ACO's global search together with GA's fast local refinement.
The basic idea of the hybrid is to use ACO as the global strategy to generate a set of good solutions, and then use GA to optimize locally within that set in search of the optimum. The overall flow is:
1. Initialize the ACO parameters (colony size, pheromone update rate, etc.) and the GA parameters (population size, crossover and mutation probabilities, etc.).
2. Use ACO to generate a set of initial solutions and compute each solution's fitness.
3. With the GA, select the fitter individuals from these solutions as the population.
4. Apply crossover and mutation to the population to generate the next generation.
5. Compute the fitness of the next generation.
6. If a stopping condition is met (e.g. the iteration limit is reached or a satisfactory solution is found), output the result; otherwise return to step 3 and continue optimizing.
In the hybrid, ACO and GA can interact in several ways:
1. Elitism (Elitism): merge the solutions generated by ACO into the GA population and retain some of the best ACO individuals during GA selection, so that the GA does not get stuck in a local optimum.
2. Pheromone-guided operators: apply ACO's pheromone heuristic inside the GA's crossover and mutation operators to bias their direction and improve the GA's global search ability.
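The six steps above can be sketched as a generic loop. Since the text specifies only the flow, this is a hypothetical skeleton: the ACO-generated seed solutions, fitness function, crossover, and mutation are passed in as parameters, and the function name `hybrid_aco_ga` is an illustrative choice:

```python
import random

def hybrid_aco_ga(aco_seed_solutions, fitness, crossover, mutate,
                  pop_size=20, generations=50, cx_prob=0.8, mut_prob=0.1, seed=0):
    """Hybrid flow from the text: ACO supplies starting solutions,
    then a GA refines them by selection, crossover, and mutation."""
    rng = random.Random(seed)
    # steps 2-3: keep the fittest ACO solutions as the GA population
    pop = sorted(aco_seed_solutions, key=fitness, reverse=True)[:pop_size]
    best = pop[0]
    for _ in range(generations):
        nxt = [best]  # elitism: always keep the best-so-far individual
        while len(nxt) < pop_size:
            # fitness-proportional (roulette) parent selection
            p1, p2 = rng.choices(pop, weights=[fitness(s) for s in pop], k=2)
            child = crossover(p1, p2, rng) if rng.random() < cx_prob else list(p1)
            if rng.random() < mut_prob:
                child = mutate(child, rng)
            nxt.append(child)  # step 4: next generation
        pop = nxt
        top = max(pop, key=fitness)  # step 5: evaluate; step 6: track the best
        if fitness(top) > fitness(best):
            best = top
    return best
```

Because the starting population comes from ACO and the best individual is carried forward, the returned solution is never worse than the best ACO seed.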
Swarm Intelligence and Optimization Algorithms
Swarm intelligence (Swarm Intelligence) is a computational approach that imitates collective behavior in nature, borrowing from the intelligence that groups of animals or insects display when cooperating. In a swarm-intelligence system, individuals communicate and cooperate with one another, achieving intelligent collective behavior through simple rules and local information exchange. Optimization algorithms, in turn, are mathematical methods for solving optimization problems by searching large solution spaces for good solutions. In modern computing the two are often combined: by simulating collective behavior in nature, such algorithms search for the best available solution. Several typical swarm-intelligence optimization algorithms are analyzed below.
1. Ant colony algorithm (Ant Colony Optimization): inspired by how ants find paths to food. It simulates the colony's search-and-selection process in its environment to look for good solutions: ants release pheromone while searching, other ants choose paths according to the pheromone concentration, and a best path gradually emerges.
2. Particle swarm algorithm (Particle Swarm Optimization): inspired by the foraging of bird flocks. Each "particle" represents a candidate solution; particles adjust their positions based on their own experience and on the best solutions found by their neighbors, converging toward good solutions.
3. Genetic algorithm (Genetic Algorithm): inspired by biological evolution. It searches for good solutions by simulating natural selection, crossover, and mutation, and is widely applied to optimization problems, finding good solutions in complex search spaces.
4. Bee colony algorithm (Artificial Bee Colony Algorithm): inspired by how honeybee colonies search for food. Bees choose food sources by nectar amount and distance, finding good solutions through repeated probing and selection.
Overall, combining swarm intelligence with optimization provides efficient and robust solvers, particularly suited to large-scale, high-dimensional optimization problems. By simulating the intelligent behavior of biological groups, such algorithms can often find good approximate, sometimes near-optimal, solutions quickly; applications span machine learning, data mining, intelligent optimization, and other fields. The continued development of swarm intelligence and optimization algorithms will further advance computing and provide more effective methods and techniques for practical problems.
Ant Colony Algorithm: Translated Foreign Literature (English/Chinese)
Translated foreign document on the ant colony algorithm (English original with Chinese translation).

English original: The Ant Colony Optimization

Origin. The ant colony algorithm (ant colony optimization, ACO), also known as the ant algorithm, is a probabilistic algorithm for finding optimal paths in graphs. It was proposed by Marco Dorigo in his 1992 doctoral thesis, inspired by the way ants discover paths while searching for food. ACO is a simulated evolutionary algorithm, and preliminary studies showed it to have many desirable properties. Applied to the optimization of PID controller parameters, its results were compared with those of a genetic algorithm, and numerical simulations showed ACO to be an effective and valuable new kind of simulated-evolutionary optimization.

Principle. At first no ant knows where the food is. When an ant finds food it releases a pheromone into the environment, attracting other ants, so that more and more ants find the food. Some ants do not simply repeat the paths of others but try different routes; if a newly explored road turns out shorter, more ants are gradually attracted to it, and after some time the shortest path ends up being the one most ants repeat.

Why can such small ants find food? Are they intelligent? Imagine designing an AI program for an ant: how complex would it be? To avoid obstacles it would have to be programmed for the terrain; to find food it would have to traverse every point in space; to find the shortest path it would have to enumerate all possible paths and compare their lengths; and all of this would have to be programmed very carefully, since a single mistake could ruin the work. It seems an incredibly complicated task.
Too complicated, surely, for anyone to complete such a cumbersome program. Yet the facts are simpler than they look: the core program of each ant takes only about a hundred lines of code. Why can such a simple program make ants do such complicated things? The answer is the emergence of simple rules. Each ant does not need global knowledge of the world; it attends only to immediate information within a very small range and makes decisions from that local information by simple rules. In the collective, complex behavior emerges. This is the explanatory principle of artificial life and complexity science. So what are these simple rules?

1. Range. An ant observes a grid world. Each ant has a perception radius (typically 3), so the range it can observe, and the distance it can move, is a 3x3 neighborhood of cells.

2. Environment. The ant's environment is a virtual world containing obstacles, other ants, and pheromones. There are two kinds of pheromone: a food pheromone laid by ants that have found food, and a nest pheromone laid by ants that have found the nest. Each ant can sense only the environment within its range, and the environment makes pheromone evaporate at a fixed rate.

3. Foraging rule. If there is food within an ant's sensing range, it goes straight to it. Otherwise, if there is pheromone in range, the ant moves toward the cell with the most pheromone; with a small probability it makes a "mistake" and does not move toward the strongest pheromone.
The rule for finding the nest is the same, except that the ant responds to the nest pheromone and ignores the food pheromone.

4. Movement rule. Each ant moves in the direction with the most pheromone; when there is no pheromone to guide it, it keeps moving in its current direction, with a small random perturbation. To avoid circling, the ant remembers the cells it has recently visited and tries to avoid a next step it has just passed through.

5. Obstacle-avoidance rule. If the ant's direction of movement is blocked by an obstacle, it randomly chooses another direction; if pheromone is present, it follows the foraging rule.

6. Pheromone-laying rule. Each ant deposits the most pheromone immediately after first finding food or the nest, and deposits less and less as it walks farther away.

Under these rules there is no direct communication between ants; each ant interacts only with the environment, but through the pheromone the ants become associated with one another. For example, when an ant finds food it does not tell the others directly; it broadcasts pheromone into the environment, and other ants passing nearby sense its presence and follow the pheromone gradient to the food.

Question: given all this, how does an ant find food in the first place? Before any ant has found food there is no useful pheromone in the environment, so why do ants still find food fairly efficiently? The reason lies in the ants' movement rules, especially the rule used when no pheromone is present.
First, an ant keeps a certain inertia, moving forward as much as possible (initially in a randomly fixed direction) rather than spinning or jittering in place. Second, its motion has some randomness: although it has a fixed heading, it does not move in a perfectly straight line like a particle, but with a random perturbation. The ant therefore moves with some purpose, largely keeping its original direction, yet remains open to new possibilities; in particular, when it meets an obstacle it immediately changes direction. This can be viewed as a selection process: the environment's obstacles "correct" the ant into some directions and away from others. This explains why a single ant can still find well-hidden food in a maze-like map. Of course, once one ant finds food, most ants will quickly find it by following the pheromone. But it can happen that, at the start, several ants happen to choose the same path; as they deposit more pheromone on it, still more ants choose it, even though the path is not optimal (i.e. shortest). In that case, after the iterations are finished, the colony has found a suboptimal rather than optimal solution, and the practical value of the result may be limited.

How, then, does the colony find the shortest path? This is due to the pheromone, and to the environment, specifically the passage of time (the computer clock). Where there is more pheromone there are obviously more ants, and thus still more ants gather. Suppose two roads lead from the nest to the food, and the same number of ants initially follows each (or even more ants follow the longer road; it does not matter).
An ant that reaches the food along a road returns immediately, so on the shorter road each round trip is quicker: per unit time more ants traverse it, and more pheromone accumulates there, attracting yet more ants to deposit still more; on the longer road the pheromone, deposited more slowly, evaporates and declines. Thus more and more ants gather on the shortest path, and the shortest path is found. One may ask about locally versus globally shortest paths; in fact the colony gradually approaches the globally shortest path. Why? Because ants make mistakes, that is, with some probability they do not follow the strongest pheromone but take another way. This can be understood as innovation: if the innovation shortens the road, then by the mechanism just described, more ants are attracted to it.

Extension. What do we find by following the ants' trail? From the above principles and from practical experiments it is clear that the ants' intelligent behavior arises entirely from their simple rules of conduct, which combine two features:

1. diversity;
2. positive feedback.

Diversity keeps the ants from walking into dead ends or looping forever while foraging, and positive feedback ensures that relatively good information is preserved. Diversity can be seen as a creative capacity and positive feedback as learning and reinforcement; positive feedback resembles authoritative opinion, while diversity is the creativity that breaks authority.
It is precisely the careful combination of these two that lets intelligent behavior emerge. More broadly, the evolution of nature, the progress of society, and human innovation are all inseparable from these two features: diversity guarantees a system's capacity to innovate, positive feedback ensures that good traits are reinforced, and the two must be combined just right. With excess diversity the system is overly active, equivalent to ants moving at random, and it falls into chaos; with too little diversity and too strong a positive-feedback mechanism, the system becomes a pool of stagnant water, which for an ant colony means behavior so rigid that the colony cannot adjust when the environment changes. Since complex and intelligent behavior emerges from underlying rules that have diversity and positive feedback, one may ask where these rules come from, and where diversity and positive feedback come from. My own view: the rules come from the evolution of nature, which is itself based on a clever combination of diversity and positive feedback. And why is the combination so clever, the world before your eyes so lifelike? Because the environment created all of it: combinations of diversity and positive feedback that could not adapt have long since died out, eliminated by the environment.

Features of the Ant Colony Algorithm

1) The ant colony algorithm is a self-organizing algorithm. In systems theory, self-organization and hetero-organization are the two basic categories of organization; the difference is whether the organizing instructions come from inside or outside the system.
Instructions from inside the system make it self-organizing; from outside, hetero-organized. If a system acquires spatial, temporal, or functional structure without specific external intervention, we say it is self-organizing: in the abstract, self-organization is the process by which a system increases its order without external action (moving from disorder to order). The ant colony algorithm fits this description well. Taking ACO as an example, at the start of the algorithm each artificial ant searches for solutions in a disorderly way; after the algorithm evolves for a while, the artificial ants increasingly converge, through the action of the pheromone, toward near-optimal solutions. This is a process from disorder to order.

2) The ant colony algorithm is an essentially parallel algorithm. Each ant searches independently, communicating only through the pheromone, so the algorithm can be regarded as a distributed multi-agent system. Search begins independently at multiple points of the problem space, which both increases the algorithm's reliability and gives it strong global search capability.

3) The ant colony algorithm is a positive-feedback algorithm. From the foraging of real ants it is clear that they find the shortest path by relying directly on the accumulation of pheromone on that path, and this accumulation is a positive-feedback process. In the algorithm, the environment initially carries exactly the same pheromone everywhere; a slight disturbance of the system makes the trail concentrations unequal, and the solutions constructed by the ants differ in quality.
The feedback mechanism of the algorithm is to leave more pheromone on the paths of better solutions; more pheromone attracts more ants, and this positive feedback continually amplifies the initial differences, guiding the whole system to evolve toward the optimal solution. Positive feedback is therefore an essential feature of the ant algorithm, and it is what keeps the algorithm's evolution going.

4) The ant colony algorithm is strongly robust. Compared with other algorithms, it makes few demands on the initial routes: its results do not depend on the choice of initial route, and no manual adjustment is needed during the search. Moreover, it has few parameters and simple settings, so it is easy to apply to other combinatorial optimization problems.

ACO was originally used to solve the TSP. Over years of development it has spread to other fields such as graph coloring, large-scale integrated-circuit design, routing in communication networks, load balancing, and vehicle scheduling; its most successful applications are in combinatorial optimization.

In network routing, the traffic distribution of the network changes constantly, and links or nodes fail or rejoin at random. The colony's autocatalysis and positive-feedback mechanisms match the solution characteristics of this class of problems exactly, so the ant colony algorithm has been applied in the networking field. The parallel and distributed nature of colony foraging also makes the algorithm particularly suitable for parallel processing.
Implementing the algorithm in parallel therefore holds great potential for solving large, complex practical problems.

When a group of individuals without individual intelligence exhibits intelligent behavior through simple cooperation, this is called swarm intelligence (Swarm Intelligence). Communication on the Internet is, in a sense, nothing more than the interaction of neurons (human brains) through the network; optical cables and routers are merely extensions of axons and synapses. From the perspective of self-organization there is no essential difference between the intelligence of the human brain and that of an ant colony: a single neuron has no intelligence, nor does a single ant, but the system formed by their connections is an intelligent agent.
25 Classic Metaheuristic Algorithms
A metaheuristic algorithm (metaheuristic algorithm) is an algorithm for solving complex optimization problems. Metaheuristics are usually based on natural phenomena or mathematical models, and they can quickly and effectively find the global optimum or a solution close to it. Below are 25 classic metaheuristic algorithms.
1. Ant colony algorithm (Ant Colony Optimization): simulates ants searching for food; used for combinatorial optimization problems.
2. Genetic algorithm (Genetic Algorithm): simulates biological evolution; used for optimization problems.
3. Particle swarm algorithm (Particle Swarm Optimization): simulates flocks of birds foraging; used for continuous optimization problems.
4. Simulated annealing (Simulated Annealing): simulates the annealing of solids; used for combinatorial optimization problems.
5. Ant lion algorithm (Ant Lion Optimizer): simulates antlions catching prey; used for continuous optimization problems.
6. Grasshopper optimization algorithm (Grasshopper Optimization Algorithm): simulates grasshoppers foraging; used for optimization problems.
7. Artificial bee colony algorithm (Artificial Bee Colony Algorithm): simulates honeybees searching for nectar; used for optimization problems.
8. Fly algorithm (Fly Algorithm): simulates flies foraging; used for optimization problems.
9. Skeleton-based optimization algorithm (Skeleton-based Optimization Algorithm): simulates the optimization of skeletal structures; used for optimization problems.
10. Artificial fish swarm algorithm (Artificial Fish Swarm Algorithm): simulates fish schools foraging; used for optimization problems.
11. Gene expression programming algorithm (Gene Expression Programming Algorithm): simulates gene regulation; used for optimization problems.
25 Classic Metaheuristic Algorithms: A Reply
A metaheuristic is an algorithm for solving optimization problems that gradually searches for the optimum by imitating natural evolution or other natural phenomena. Such algorithms are based on a set of criteria or principles and generate solutions through iteration, testing, and improvement. In this article we introduce 25 classic metaheuristic algorithms and explain, one by one, their themes and how they operate.
1. Hill climbing (Hill Climbing): hill climbing uses a greedy strategy, moving at each step to the best neighbor of the current state. Because it considers only local improvements, it easily falls into the trap of a local optimum.
2. Simulated annealing (Simulated Annealing): simulated annealing imitates the annealing of solids, accepting worse solutions to avoid getting stuck in local optima. It accepts a worse solution with a certain probability, and that probability is gradually lowered.
3. Genetic algorithm (Genetic Algorithm): the genetic algorithm imitates natural selection and heredity, optimizing solutions over successive generations. It produces the next generation with crossover and mutation and selects individuals according to a fitness-evaluation function.
4. Particle swarm optimization (Particle Swarm Optimization): PSO imitates the behavior of bird flocks or fish schools, searching for the optimum through group cooperation. Each particle updates its position and velocity by learning from its own experience and that of its neighbors.
5. Ant colony algorithm (Ant Colony Optimization): ACO imitates the pheromone-releasing behavior of ants searching for food. Ants choose paths according to pheromone concentration and distance, and the updated pheromone concentrations guide the choices of the other ants.
6. Artificial fish swarm algorithm (Artificial Fish Swarm Algorithm): imitates fish-school behavior, searching for the optimum through foraging and chasing. Each fish updates its position and velocity from individual and group behavior.
7. Immune algorithm (Immune Algorithm): imitates the information processing and adaptability of the immune system, solving optimization problems by generating, selecting, and evolving antibodies so as to recognize and eliminate harmful factors.
8. Bee algorithm (Bee Algorithm): imitates honeybee behavior, optimizing solutions by searching near food sources and recruiting bees.
Ant Colony Optimization (lecture slides)

1.1 Basic principle: pheromone and the double-bridge experiment

In the double-bridge experiment, two paths of equal length connect the nest and the food source; path choice is an autocatalytic (positive-feedback) process:
1. At first neither branch carries any pheromone, and the ants choose between them with equal probability.
2. Through random fluctuation, one branch may come to carry more ants than the other.
3. Final outcome of the experiment: all the ants choose the same branch.

2.3 Ant system theory: parameter settings

Number of ants:
- With too few ants the algorithm's exploration ability deteriorates and premature convergence appears; in particular, on large problems the global search becomes very poor.
- When solving the TSP with Ant System, Elitist Ant System, rank-based Ant System, or Max-Min Ant System, setting m equal to the number of cities gives good performance.

Pheromone evaporation:
- If the evaporation factor is large, pheromone evaporates quickly and the pheromone on edges never chosen by any ant drops rapidly toward 0, weakening the algorithm's global exploration ability.
- Pheromone evaporates continuously over time.
- Some path exploration is nevertheless necessary; otherwise the search easily falls into local optima.

1.1 Basic theory: the ant-foraging phenomenon and ant colony optimization

Ant-foraging phenomenon        | Ant colony optimization algorithm
-------------------------------|--------------------------------------------------
ant colony                     | a set of valid solutions in the search space (population size m)
foraging space                 | the problem's search space (problem size, solution dimension n)
pheromone                      | pheromone-concentration variables
a path from nest to food       | a valid solution
the shortest path found        | the problem's optimal solution

Table: correspondence between the ant-foraging phenomenon and the basic definitions of ant colony optimization.
3.3 Max-Min Ant System

Background of the Max-Min Ant System:
1. For a large TSP, the number of searching ants is limited and their initial distribution is random. Might the ants explore only a small fraction of all paths and mistake what they find for the best path, while genuinely good paths are never explored at all?
2. When all ants keep rebuilding the same route, the algorithm has entered stagnation. Is there a way to use the iterations after stagnation to search further, so as to find solutions closer to the true optimum?
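The remedy MMAS adopts for these two concerns is to let only the best tour deposit pheromone and to clamp every trail into an interval [tau_min, tau_max], so that no edge's selection probability ever falls to zero. A minimal sketch, with illustrative parameter values:

```python
def mmas_update(tau, best_tour, best_len, rho=0.02, tau_min=0.01, tau_max=5.0):
    """Max-Min Ant System trail update: evaporate, let only the best
    tour deposit 1/best_len, then clamp all trails into
    [tau_min, tau_max] to prevent stagnation."""
    n = len(tau)
    deposit = 1.0 / best_len
    for i in range(n):
        for j in range(n):
            tau[i][j] *= (1.0 - rho)
    for a, b in zip(best_tour, best_tour[1:] + best_tour[:1]):
        tau[a][b] += deposit
        tau[b][a] += deposit
    for i in range(n):
        for j in range(n):
            tau[i][j] = min(tau_max, max(tau_min, tau[i][j]))
    return tau
```

Because tau_min is strictly positive, even a stagnated colony keeps a nonzero chance of trying unexplored edges, which addresses both questions above.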
The Pseudo-Random Proportional Rule: the Path-Selection Rule of the Ant Colony Algorithm
The ant colony algorithm (Ant Colony Optimization Algorithm) is a heuristic widely applied to optimization problems. Its main inspiration comes from observing the behavior of ants searching for food; it solves problems through the colony's pheromone and its path-selection rules, and the path-selection rule is considered one of the algorithm's key components.
The pseudo-random proportional selection rule is one of the commonly used ACO path-selection rules. Its core idea is to choose paths according to both the pheromone concentration on a path and a heuristic factor. Concretely, when choosing its next step, an ant computes selection probabilities from the pheromone and the heuristic factor on each candidate path; with a certain probability it picks the path with the highest pheromone concentration, and with the remaining probability it chooses randomly among the other paths. The ant thus advances along pheromone-reinforced paths while still randomly exploring other, possibly better, paths. The purpose of the rule is to retain diversity even when pheromone dominates: if ants relied on pheromone alone, they would all pick the currently shortest-looking path, which is not necessarily the optimal one.
The pseudo-random proportional rule consists of three parts: pheromone strength, the heuristic factor, and the computation of selection probabilities.
Pheromone strength refers to the pheromone concentration stored on a path. The pheromone is a numerical quantity reflecting how good a path has proved to be for the ants that traversed it. When an ant encounters favorable conditions along a path, it releases pheromone to attract other ants to follow, eventually forming a path used by the whole colony. High pheromone strength on a path indicates that ants have safely reached food along it, so an ant choosing its next step tends to prefer paths with higher pheromone concentration.
The heuristic factor is designed from the characteristics of the problem itself. For example, in the travelling salesman problem (TSP) the heuristic factor can be computed from the distance between two cities. It supplies additional information about path quality and assists the ant's path choice.
The selection-probability computation converts the pheromone strength and the heuristic factor into the probability with which an ant chooses each path; this is usually done with roulette-wheel (Roulette Wheel) selection, picking a path at random in proportion to its probability. Where the pseudo-random proportional rule differs from the traditional proportional rule is that it introduces an explicit randomness factor, giving the search more diversity.
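Putting the three parts together, the rule can be sketched in Python. The exploitation probability q0 and the other parameter values are illustrative; with probability q0 the ant greedily exploits the best-scoring edge, and otherwise it falls back to the roulette wheel described above:

```python
import random

def pseudo_random_proportional(current, unvisited, tau, eta,
                               alpha=1.0, beta=2.0, q0=0.9, rng=random):
    """ACS-style choice: with probability q0 exploit the best edge
    (argmax of tau^alpha * eta^beta); otherwise use roulette-wheel
    selection over the same scores (biased exploration)."""
    score = lambda j: tau[current][j] ** alpha * eta[current][j] ** beta
    if rng.random() < q0:
        return max(unvisited, key=score)       # exploitation
    weights = [score(j) for j in unvisited]    # biased exploration
    return rng.choices(list(unvisited), weights=weights)[0]
```

Larger q0 makes the search greedier and faster to converge; smaller q0 preserves more of the diversity the text emphasizes.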
Steps of the Ant Algorithm
The ant algorithm (Ant Colony Optimization, ACO) is an optimization algorithm inspired by the way ants search for food, widely applied to combinatorial optimization problems. It simulates how ants release pheromone, choose paths, and update pheromone concentrations while foraging, searching for the optimum through repeated iteration. Its main steps are: initializing the pheromone concentrations, the ants' movement rule, the pheromone-update rule, the ants' selection strategy, and updating the best solution.
The steps in detail:
1. Initialize the pheromone concentrations: first initialize the pheromone concentration of every path in the problem space, usually to one common initial value. The pheromone concentration determines the probability that an ant chooses a path, and the size of the initial value affects the algorithm's search efficiency and convergence speed.
2. The ants' movement rule: each ant searches the solution space according to a movement rule. At every step the ant weighs the pheromone concentration against heuristic information (such as distance or path length) and chooses its next move probabilistically.
3. The pheromone-update rule: after the ants have chosen their paths, the pheromone concentrations are updated according to the lengths of the ants' tours and the evaporation rate. In general, each ant deposits pheromone along the path it traversed, and shorter paths receive larger pheromone increases.
4. The ants' selection strategy: when choosing its next path, an ant considers both the pheromone concentration and the heuristic information, normally in a probabilistic fashion; paths with high pheromone concentration and good heuristic values have a higher selection probability.
5. Updating the best solution: the algorithm records the best solution of each generation of ants, and when the search finishes, the best ant tour found is taken as the current optimum. Updating the best solution influences pheromone evaporation and the choices of the next generation of ants.
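One stopping test mentioned earlier in this document is to halt when all ants have constructed the same route. A small sketch of such a stagnation check follows; comparing tours as undirected edge sets is an implementation choice (so that rotations and reversals of one closed tour count as the same route), and the function name is illustrative:

```python
def has_stagnated(tours):
    """Return True when every ant built the same closed tour.
    Each tour is reduced to its set of undirected edges, so two
    rotations or reversals of one cycle compare as equal."""
    def edges(tour):
        return frozenset(frozenset(e) for e in zip(tour, tour[1:] + tour[:1]))
    first = edges(tours[0])
    return all(edges(t) == first for t in tours[1:])
```

In practice this check is combined with an iteration cap, since stagnation may never occur exactly.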
The strengths of the ant algorithm are its adaptive search of the solution space, strong global optimization ability, and ease of implementation and interpretation; its weaknesses are slow convergence and a tendency to get stuck in local optima. When applying the algorithm, the parameters, number of iterations, and number of ants should therefore be chosen carefully in order to obtain good optimization results. The ant algorithm has been applied successfully to optimization problems in many fields, such as path planning and ant-colony clustering.
Four Methods for the Mobius Ring
The Mobius ring (Mobius Ring) is an approach to solving complex problems; because it can effectively decompose complex computational tasks into small ones, it is widely used in artificial-intelligence research. At present, Mobius-ring applications mainly use four methods: the Genetic Algorithm, the Neural Network, Particle Swarm Optimization, and Ant Colony Optimization.
1. Genetic algorithm (Genetic Algorithm): a search algorithm for solving complex planning problems. Based on the principle of "evolution", it simulates biological evolution through iterated mutation, crossover, and selection, repeatedly trying and correcting until a good solution is found. Because it uses a form of randomized search, it can roam the whole search space and discover better solutions.
2. Neural network (Neural Network): a machine-learning technique that learns from large amounts of data to find good solutions automatically. It offers high fault tolerance, fast convergence, and strong fitting ability; it can be used on large, complex optimization problems and is an important tool in artificial intelligence.
3. Particle swarm optimization (Particle Swarm Optimization): an iterative algorithm suited to optimizing complex problems. Through iterated analysis it simulates groups in real societies that share the same social behavior, solving complex optimization problems and converging to better solutions within the search range.
4. Ant colony algorithm (Ant Colony Optimization): a computer algorithm based on the path-searching behavior of ants and a Swarm Intelligence algorithm. By simulating the behavior of real ants it searches for optima, effectively discovering best paths and thereby solving combinatorial optimization problems; it decomposes complex tasks into simple ones, achieving the effect of the Mobius ring.
been widely used to identify the effective connectivity of the human brain [5]. In general, these methods fall into two categories: model-driven approaches and data-driven approaches. Model-driven approaches, such as dynamic causal modeling (DCM) [6] and structural equation modeling (SEM) [7], are widely used for detecting effective connectivity, but they require prior models and are largely limited to constructing relatively small networks. The model-driven approach is therefore unsuitable for resting-state fMRI data or for situations where prior knowledge is insufficient [8]. Data-driven approaches extract causal interactions directly from fMRI data, without any prior knowledge or model assumptions. However, these data-driven methods also have their own limitations. For instance, the linear non-Gaussian acyclic model (LiNGAM) algorithm [9] needs prior assumptions on data generation and data disturbance, which have limited its use [10]. The Granger causality (GC) method [11] requires the data to be wide-sense stationary with zero mean. The generalised synchronization (GS) method [12] has to employ three related measures of nonlinear interdependence to evaluate neural synchrony, and the three measures are not always consistent [10]. Patel's condition-dependence measurement method (Patel) [13] formulates a measure of connection strength κ and a measure of connection directionality τ; however, Patel's κ performs worse than some other methods in connection sensitivity, while Patel's τ performs well at identifying directions [10]. Recently, the Bayesian network method has gradually become one of the most important data-driven approaches for identifying functional connectivity, mainly because it can operate on larger sets of nodes (brain regions) and run searches over a wide range of candidate networks.
However, as noted in [10], although these methods perform well in identifying functional connectivity, they are rarely able to infer causal directions completely and reliably. How to explore new learning methods for identifying effective connectivity from fMRI data therefore remains a challenging research topic [14]. In 2016, Ji et al.
Jinduo Liu∗ , Junzhong Ji∗ , Aidong Zhang† and Peipeng Liang‡ Beijing Municipal Key Laboratory of Multimedia and Intelligent Software the College of Computer Science and Technology, Beijing University of Technology Beijing, China, Email: jjz01@ † Department of Computer Science and Engineering, University at Buffalo The State University of New York, Buffalo, America Email: azhang@ ‡ Beijing Key Lab of MRI and Brain Informatics Department of Radiology, Xuanwu Hospital, Capital Medical University Beijing, China, Email: p.p.liang@
2016 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)
An ant colony optimization algorithm for learning brain effective connectivity network from fMRI data
Abstract—Identifying brain effective connectivity networks from functional magnetic resonance imaging (fMRI) data has become an important subject in neuroinformatics in recent years, where learning methods based on Bayesian networks (BN) are a new hot topic in the field. This paper proposes a new method, named ACOEC, that learns the brain effective connectivity network structure by combining ant colony optimization (ACO) with the BN method. In the proposed algorithm, a brain effective connectivity network is mapped onto an ant; ant colony optimization, which simulates real ants looking for food, is then employed to construct network structures, and the ant with the highest score is finally taken as the optimal solution. Experimental results on simulated and real fMRI data sets show that the new method can not only accurately identify the connections and directions of brain networks, but also quantitatively describe their connection strengths, giving it good prospects for clinical application. Keywords-fMRI; Brain effective connectivity network; Bayesian network; Ant colony optimization; Connection strength.
I. INTRODUCTION
In the last few years there has been growing interest in the use of machine learning and data mining for analyzing fMRI data, where the study of brain networks has gradually become a frontier subject [1, 2]. By means of advanced modern fMRI technologies, one can identify brain functions as well as the functional and effective connectivity between brain regions. These connectivity patterns between specific brain regions greatly help researchers in neuroimaging, cognitive neuroscience, and the life sciences to understand the inner functional mechanisms of the human brain and to find the pathogenesis of cerebral diseases [3, 4]. As an important influence that one neuronal system exerts over another across brain regions, effective connectivity can describe directed networks in the resting state and specific changes of baseline brain activity in some diseases. Therefore, how to accurately identify effective connectivity from fMRI data is becoming a research hotspot in neuroinformatics, where many computational methods have