A replanning algorithm for a reactive agent architecture

合集下载

securerandom.getinstance的取代方法 -回复

securerandom.getinstance的取代方法 -回复

securerandom.getinstance的取代方法-回复Title: Alternative Methods to SecureRandom.GetInstanceIntroduction:In modern cryptography, securing random number generation is essential for cryptographic operations, such as key generation, initialization vectors, and nonces. One such popular method used in many programming languages is SecureRandom.GetInstance. However, recent concerns regarding vulnerabilities in this method have led to the development of alternative techniques. In this article, we will delve into the issues surrounding SecureRandom.GetInstance and explore some viable replacements.1. Understanding SecureRandom.GetInstance: SecureRandom.GetInstance is a method that provides a cryptographically strong pseudorandom number generator (CSPRNG). It allows developers to access a secure and unpredictable stream of random numbers. It is widely used in both public and private key cryptography algorithms due to its reputation for providing high-quality random numbers.2. Vulnerabilities in SecureRandom.GetInstance:Despite its widespread use, SecureRandom.GetInstance has faced scrutiny due to potential vulnerabilities. In 2013, the revelation of the NSA's Bullrun program raised concerns about possible backdoors in cryptographic algorithms. While no concrete evidence existed, doubts were raised about the integrity of SecureRandom.GetInstance in generating random numbers. Additionally, implementation flaws in certain versions of Java have been discovered, resulting in the generation of predictable random numbers.3. Alternative Methods:To address the concerns surrounding SecureRandom.GetInstance, several alternative methods have emerged. These alternatives focus on improving random number generation and ensuring cryptographic strength.a) java.security.SecureRandom:The java.security.SecureRandom class is an alternative to SecureRandom.GetInstance that provides enhanced security. It utilizes a platform-specific native implementation, making it less susceptible to potential vulnerabilities in the underlying Java implementation. The java.security.SecureRandom class followsstandards set by the National Institute of Standards and Technology (NIST) in generating random numbers.b) CryptGenRandom (Windows):For Windows-based applications, CryptGenRandom is an alternative to SecureRandom.GetInstance. It is a cryptographic service provider in the Windows operating system that generates random numbers using hardware and software entropy sources. CryptGenRandom is well-integrated into the Windows cryptographic infrastructure, making it a reliable choice.c) /dev/random and /dev/urandom (Unix-based systems):Unix-based systems offer alternatives toSecureRandom.GetInstance through the use of /dev/random and /dev/urandom devices. These devices provide access to random numbers based on environmental noise and other system events. /dev/random provides cryptographically secure random numbers at the cost of blocking when not enough entropy is available, while /dev/urandom sacrifices blocking for faster random number generation.4. Implementation Examples:To illustrate the usage of alternative methods, let's consider some code examples:a) java.security.SecureRandom:SecureRandom srng = new java.security.SecureRandom();b) CryptGenRandom (Windows):byte[] randomBytes = new byte[16];CryptGenRandom(CRYPT_PROVIDER, randomBytes, randomBytes.length);c) /dev/random and /dev/urandom (Unix-based systems): OpenSSL library in C/C++: RAND_bytes(buffer, length);Conclusion:Secure random number generation is a critical aspect of cryptographic operations. While SecureRandom.GetInstance has traditionally been a popular method, concerns regarding its security have necessitated the exploration of alternative solutions. The alternatives discussed in this article, includingjava.security.SecureRandom, CryptGenRandom, and /dev/randomand /dev/urandom, offer improved security and reliability for generating random numbers. Developers must choose an appropriate alternative based on their specific platform and cryptographic requirements.。

Scheduling flow shops using differential evolution algorithm

Scheduling flow shops using differential evolution algorithm

Discrete OptimizationScheduling flow shops using differential evolution algorithmGodfrey Onwubolu *,Donald DavendraDepartment of Engineering,The University of the South Pacific,P.O.Box 1168,Suva,FijiReceived 17January 2002;accepted 5August 2004Available online 21November 2004AbstractThis paper describes a novel optimization method based on a differential evolution (exploration)algorithm and its applications to solving non-linear programming problems containing integer and discrete variables.The techniques for handling discrete variables are described as well as the techniques needed to handle boundary constraints.In particular,the application of differential evolution algorithm to minimization of makespan ,flowtime and tardiness in a flow shop manufacturing system is given in order to illustrate the capabilities and the practical use of the method.Experiments were carried out to compare results from the differential evolution algorithm and the genetic algorithm,which has a reputation for being very powerful.The results obtained have proven satisfactory in solution quality when compared with genetic algorithm.The novel method requires few control variables,is relatively easy to implement and use,effec-tive,and efficient,which makes it an attractive and widely applicable approach for solving practical engineering prob-lems.Future directions in terms of research and applications are given.Ó2004Elsevier B.V.All rights reserved.Keywords:Scheduling;Flow shops;Differential evolution algorithm;Optimization1.IntroductionIn general,when discussing non-linear programming,the variables of the object function are usually as-sumed to be continuous.However,in practical real-life engineering applications it is common to have the problem variables under consideration being discrete or integer values.Real-life,practical engineering opti-mization problems are commonly integer or discrete because the available values are limited to a set of commercially available standard sizes.For example,the number of automated guided vehicles,the number of unit loads,the number of storage units in a warehouse operation are integer variables,while the size of a pallet,the size of billet for machining operation,etc.,are often limited to a set of commercially available 0377-2217/$-see front matter Ó2004Elsevier B.V.All rights reserved.doi:10.1016/j.ejor.2004.08.043*Corresponding author.Tel.:+679212034;fax:+679302567.E-mail address:onwubolu_g@usp.ac.fj (G.Onwubolu).European Journal of Operational Research 171(2006)674–692/locate/ejorG.Onwubolu,D.Davendra/European Journal of Operational Research171(2006)674–692675 standard sizes.Another class of interesting optimization problem isfinding the best order or sequence in which jobs have to be machined.None of these engineering problems has a continuous objective function; rather each of these engineering problems has either an integer objective function or discrete objective func-tion.In this paper we deal with the scheduling of jobs in aflow shop manufacturing system.Theflow shop scheduling-problem is a production planning-problem in which n jobs have to be pro-cessed in the same sequence on m machines.The assumptions are that there are no machine breakdowns and that all jobs are pre-emptive.This is commonly the case in many manufacturing systems where jobs are transferred from machine to machine by some kind of automated material handling systems.For large problem instances,typical of practical manufacturing settings,most researchers have focused on developing heuristic procedures that yield near optimal-solutions within a reasonable computation time. Most of these heuristic procedures focus on the development of permutation schedules and use makespan as a performance measure.Some of the well-known scheduling heuristics,which have been reported in the literature,include Palmer(1965),Campbell et al.(1970),Gupta(1971),Dannenbring(1977),Hundal and Rajagopal(1988)and Ho and Chang(1991).Cheng and Gupta(1989)and Baker and Scudder(1990)pre-sented a comprehensive survey of research work done inflow shop scheduling.In recent years,a growing body of literature suggests the use of heuristic search procedures for combi-natorial optimization problems.Several search procedures that have been identified as having great poten-tial to address practical optimization problems include simulated annealing(Kirkpatrick et al.,1983), genetic algorithms(Goldberg,1989),tabu search(Glover,1989,1990),and ant colony optimization(Dor-igo,1992).Consequently,over the past few years,several researchers have demonstrated the applicability of these methods,to combinatorial optimization problems such as theflow shop scheduling(see for example, Widmer and Hertz,1989;Ogbu and Smith,1990;Taillard,1990;Chen et al.,1995;Onwubolu,2000).More recently,a novel optimization method based on differential evolution(exploration)algorithm(Storn and Price,1995)has been developed,which originally focused on solving non-linear programming problems containing continuous variables.Since Storn and Price(1995)invented the differential evolution(explora-tion)algorithm,the challenge has been to employ the algorithm to different areas of problems other than those areas that the inventors originally focussed on.Although application of DE to combinatorial optimi-zation problems encountered in engineering is scarce,researchers have used DE to design complex digital filters(Storn,1999),and to design mechanical elements such as gear train,pressure vessels and springs (Lampinen and Zelinka,1999).This paper presents a new approach based on differential evolution algorithm for solving the problem of scheduling n jobs on m machines when all jobs are available for processing and the objective is to minimize the makespan.Other objective functions considered in the present work include meanflowtime and total tardiness.2.Problem formulationAflow shop scheduling is one in which all jobs must visit machines or work centers in the same sequence. Processing of a job must be completed on current machine before processing of the job is started on suc-ceeding machine.This means that initially all jobs are available and that each machine is restricted to pro-cessing only one job at any particular time.Since thefirst machine in the facility arrangement is thefirst to be visited by each job,the other machines are idle and other jobs are queued.Although queuing of jobs is prohibited in just-in-time(JIT)manufacturing environments,flow shop manufacturing continues tofind applications in electronics manufacturing,and space shuttle processing,and has attracted much research work(Onwubolu,2002).Theflow shop can be formatted generally by the sequencing of n jobs on m ma-chines under the precedence condition,with typical objective functions being the minimizing of average flowtime,minimizing the time required to complete all jobs or makespan,minimizing maximum tardiness,and minimizing the number of tardy jobs.If the number of jobs is relatively small,then the problem can be solved without using any generic optimizing algorithm.Every possibility can be checked to obtain results and then sequentially compared to capture the optimum value.But,more often,the number of jobs to be processed is large,which leads to big-O order of n !Consequently,some kind of algorithm is essential in this type of problem to avoid combinatorial explosion.The standard three-field notation (Lawler et al.,1995)used is that for representing a scheduling problem as a j b j F (C ),where a describes the machine environment,b describes the deviations from standard sched-uling assumptions,and F (C )describes the objective C being optimized.In the work reported in this paper,we are solving the n /m /F k F (C max )problem.Other problems solved include F ðC Þ¼F ðP C i Þand F ðC Þ¼F ðP T j Þ.Here a =n /m /F describes the multiple-machines flow shop problem,b =null,and F ðC Þ¼F ðC max ;P C i ;and P T j Þfor makespan,mean flowtime,and total tardiness,respectively.Stating these problem descriptions more elaborately,the minimization of completion time (makespan)for a flow shop schedule is equivalent to minimizing the objective function I :I ¼X n j ¼1C m ;j ;ð1Þs :t :C i ;j ¼max C i À1;j ;C i ;j À1ÀÁþP i ;j ;ð2Þwhere C m ,j =the completion time of job j ,C 1,1=k (any given value),C i ;j ¼P j k ¼1C 1;k ;C j ;i ¼P i k ¼1C k ;1,i )machine number,j )job in sequence,P i ,j )processing time of job j on machine i .For a given sequence,the mean flowtime,MFT =1P m i ¼1P n j ¼1c ij ,while the condition for tardiness is c m ,j >d j .The constraint of Eq.(2)applies to these two problem descriptions.3.Differential evolutionThe differential evolution (exploration)[DE]algorithm introduced by Storn and Price (1995)is a novel parallel direct search method,which utilizes NP parameter vectors as a population for each generation G .DE can be categorized into a class of floating-point encoded,evolutionary optimization algorithms .Currently,there are several variants of DE.The particular variant used throughout this investigation is the DE/rand/1/bin scheme.This scheme will be discussed here and more detailed descriptions are provided (Storn and Price,1995).Since the DE algorithm was originally designed to work with continuous variables,the opti-mization of continuous problems is discussed first.Handling discrete variables is explained later.Generally,the function to be optimized,I ,is of the form I ðX Þ:R D !R .The optimization target is to minimize the value of this objective function I ðX Þ,min ðI ðX ÞÞ;ð3Þby optimizing the values of its parameters X ={x 1,x 2,...,x D },X 2R D ,where X denotes a vector composed of D objective function ually,the parameters of the objective function are also subject to lower and upper boundary constraints,x (L )and x (U ),respectively,x ðL Þj P x j P x ðU Þj8j 2½1;D :ð4Þ3.1.InitializationAs with all evolutionary optimization algorithms,DE works with a population of solutions,not with a sin-gle solution for the optimization problem.Population P of generation G contains NP solution vectors called individuals of the population and each vector represents potential solution for the optimization problem 676G.Onwubolu,D.Davendra /European Journal of Operational Research 171(2006)674–692P ðG Þ¼X ðG Þi ¼x ðG Þj ;i ;i ¼1;...;NP ;j ¼1;...;D ;G ¼1;...;G max :ð5ÞIn order to establish a starting point for optimum seeking,the population must be initialized.Often there is no more knowledge available about the location of a global optimum than the boundaries of the problem variables.In this case,a natural way to initialize the population P (0)(initial population)is to seed it with random values within the given boundary constraints:P ð0Þ¼x ð0Þj ;i ¼x ðL Þj þrand j ½0;1 Âx ðU Þj Àx ðL Þj 8i 2½1;NP ;8j 2½1;D ;ð6Þwhere rand j [0,1]represents a uniformly distributed random value that ranges from zero to one.3.2.MutationThe self-referential population recombination scheme of DE is different from the other evolutionary algorithms.From the first generation onward,the population of the subsequent generation P (G +1)is obtained on the basis of the current population P (G ).First a temporary or trial population of candidate vectors for the subsequent generation,P 0ðG þ1Þ¼V ðG þ1Þ¼v ðG þ1Þj ;i ,is generated as follows:v ðG þ1Þj ;i ¼x ðG Þj ;r 3þF Âx ðG Þj ;r 1Àx ðG Þj ;r 2 ;if rand j ½0;1 <CR _j ¼k ;x ðG Þi ;j ;otherwise ;8<:ð7Þwhere i 2[1,NP];j 2[1,D ],r 1,r 2,r 32[1,NP],randomly selected,except:r 15r 25r 35i ,k =(int(rand i [0,1]·D )+1),and CR 2[0,1],F 2(0,1].Three randomly chosen indexes,r 1,r 2,and r 3refer to three randomly chosen vectors of population.They are mutually different from each other and also different from the running index i .New random values for r 1,r 2,and r 3are assigned for each value of index i (for each vector).A new value for the random num-ber rand[0,1]is assigned for each value of index j (for each vector parameter).3.3.CrossoverThe index k refers to a randomly chosen vector parameter and it is used to ensure that at least one vector parameter of each individual trial vector V (G +1)differs from its counterpart in the previous generation X (G ).A new random integer value is assigned to k for each value of the index i (prior to construction of each trial vector).F and CR are DE control parameters.Both values remain constant during the search process.Both values as well as the third control parameter,NP (population size),remain constant during the search pro-cess.F is a real-valued factor in range [0.0,1.0]that controls the amplification of differential variations.CR is a real-valued crossover factor in the range [0.0,1.0]that controls the probability that a trial vector will be selected form the randomly chosen,mutated vector,V ðG þ1Þj ;i instead of from the current vector,x ðG Þj ;i .Gener-ally,both F and CR affect the convergence rate and robustness of the search process.Their optimal values are dependent both on objective function characteristics and on the population size,ually,suitable values for F ,CR and NP can be found by experimentation after a few tests using different values.Practical advice on how to select control parameters NP,F and CR can be found in Storn and Price (1995,1997).3.4.SelectionThe selection scheme of DE also differs from the other evolutionary algorithms.On the basis of the cur-rent population P (G )and the temporary population P 0(G +1),the population of the next generation P (G +1)is created as follows:G.Onwubolu,D.Davendra /European Journal of Operational Research 171(2006)674–692677XðGþ1Þi ¼VðGþ1Þi;if I VðGþ1Þi6IðXðGÞiÞ;XðGÞi;otherwise:8<:ð8ÞThus,each individual of the temporary or trial population is compared with its counterpart in the current population.The one with the lower value of cost-function IðXÞto be minimized will propagate the pop-ulation of the next generation.As a result,all the individuals of the next generation are as good or better than their counterparts in the current generation.The interesting point concerning the DE selection scheme is that a trial vector is only compared to one individual vector,not to all the individual vectors in the cur-rent population.3.5.Boundary constraintsIt is important to notice that the recombination operation of DE is able to extend the search outside of the initialized range of the search space(Eqs.(6)and(7)).It is also worthwhile to notice that sometimes this is a beneficial property in problems with no boundary constraints because it is possible tofind the optimum that is located outside of the initialized range.However,in boundary-constrained problems,it is essential to ensure that parameter values lie inside their allowed ranges after recombination.A simple way to guarantee this is to replace parameter values that violate boundary constraints with random values generated within the feasible range:uðGþ1Þj;i ¼xðLÞjþrand j½0;1 ÂðxðUÞjÀxðLÞjÞ;if uðGþ1Þj;i<xðLÞj_uðGþ1Þj;i>xðUÞj;uðGþ1Þi;j;otherwise;(ð9Þwhere i2[1,NP];j2[1,D].This is the method that was used for this work.Another simple but less efficient method is to reproduce the boundary constraint violating values according to Eq.(7)as many times as is necessary to satisfy the boundary constraints.Yet another simple method that allows bounds to be approached asymptotically while minimizing the amount of disruption that results from resetting out of bound values(Price,1999) isuðGþ1Þj;i ¼ðxðGÞj;iþxðLÞjÞ=2;if uðGþ1Þj;i<xðLÞj;ðxðGÞj;iþxðUÞjÞ=2;if uðGþ1Þj;i>xðUÞj;uðGþ1Þj;i;otherwise:8>><>>:ð10Þ3.6.Conventional technique for integer and discrete optimization by DESeveral approaches have been used to deal with discrete variable optimization.Most of them round offthe variable to the nearest available value before evaluating each trial vector.To keep the population robust,successful trial vectors must enter the population with all of the precision with which they were generated(Storn and Price,1997).In its canonical form,the differential evolution algorithm is only capable of handling continuous vari-ables.Extending it for optimization of integer variables,however,is rather mpinen and Zelinka (1999)discuss how to modify DE for mixed variable optimization.They suggest that only a couple of sim-ple modifications are required.First,integer values should be used to evaluate the objective function,even though DE itself may still works internally with continuousfloating-point values.Thus, Iðy iÞ;i2½1;D ;ð11Þ678G.Onwubolu,D.Davendra/European Journal of Operational Research171(2006)674–692wherey i ¼x i for continuous variables;INTðx iÞfor integer variables;&wherey i ¼x i;INTðx iÞ: &x i2X:INT()is a function for converting a real-value to an integer value by truncation.Truncation is performed here only for purposes of cost-function value evaluation.Truncated values are not elsewhere assigned. Thus,DE works with a population of continuous variables regardless of the corresponding object variable type.This is essential for maintaining the diversity of the population and the robustness of the algorithm. Second,in case of integer variable,instead of Eq.(6),the population should be initialized as follows: Pð0Þ¼xð0Þj;i¼xðLÞjþrand j½0;1 ÂðxðUÞjÀxðLÞjþ1Þ8i2½1;NP ;8j2½1;D :ð12ÞAdditionally,instead of Eq.(9),the boundary constraint handling integer variables should be performed as follows:uðGþ1Þj;i ¼xðLÞjþrand j½0;1 ÂðxðUÞjÀxðLÞjþ1Þ;if INTðuðGþ1Þj;iÞ<xðLÞj_INTðuðGþ1Þj;iÞ>xðUÞj;uðGþ1Þi;ji;otherwise;(ð13Þwhere i2[1,NP];j2[1,D].They also discuss how discrete values can also be handled in a straightforward manner.Suppose that the subset of discrete variables,X(d),contains l elements that can be assigned to var-iable x:XðdÞ¼xðdÞi;i2½1;l ;ð14Þwhere xðdÞi<xðdÞiþ1.Instead of the discrete value x i itself,we may assign its index,i,to x.Now the discrete variable can be handled as an integer variable that is boundary constrained to range1,...,l.To evaluate the objective func-tion,the discrete value,x i,is used instead of its index i.In other words,instead of optimizing the value of the discrete variable directly,we optimize the value of its index i.Only during evaluation is the indicated discrete value used.Once the discrete problem has been converted into an integer one,the previously de-scribed methods for handling integer variables can be applied(Eqs.(11)–(13)).3.7.Forward transformation and backward transformation techniqueThe problem formulation is already discussed in Section2.Solving theflow shop-scheduling problem and indeed most combinatorial optimization problems requires discrete variables and ordered sequence, rather than relative position indexing.To achieve this,we developed two strategies known as forward and backward transformation techniques respectively.In this paper,we present a forward transformation method for transforming integer variables into continuous variables for the internal representation of vec-tor values since in its canonical form,the DE algorithm is only capable of handling continuous variables.G.Onwubolu,D.Davendra/European Journal of Operational Research171(2006)674–692679We also present a backward transformation method for transforming a population of continuous variablesobtained after mutation back into integer variables for evaluating the objective function(Onwubolu,2001). Both forward and backward transformations are utilized in implementing the DE algorithm used in the present study for theflow shop-scheduling problem.Fig.1shows how to deal with this inherent represen-tational problem in DE.Level0deals with integer numbers(which are used in discrete problems).At this level,initialization andfinal solutions are catered for.In the problem domain areas of scheduling,TSP,etc., we are not only interested in computing the objective function cost,we are also interested in the proper order of jobs or cities respectively.Level1of Fig.1deals withfloating point numbers,which are suited for DE.At this level,the DE operators(mutation,crossover,and selection)take place.To transform the integer at level0intofloating point numbers at level1for DEÕs operators,requires some specific kind of coding.This type of coding is highly used in mathematics and computing science.For the basics of trans-forming an integer number into its real number equivalence,interested readers may refer to Michalewicz (1994),and Onwubolu and Kumalo(2001)for its application to optimizing machining operations using genetic algorithms.3.7.1.Forward transformation(from integer to real number)In integer variable optimization a set of integer number is normally generated randomly as an initial solution.Let this set of integer number be represented asz0i2z0:ð15ÞLet the real number(floating point)equivalence of this integer number be z i.The length of the real number depends on the required precision,which in our case,we have chosen two places after the decimal point. The domain of the variable z i has length equal to5;the precision requirement implies that the range be [0...4].Although0is considered since it is not a feasible solution,the range[0.1,1,2,3,4]is chosen,which gives a range of5.We assign each feasible solution two decimal places and this gives us5·100=500.Accordingly,the equivalent continuous variable for z0iis given as100¼102<5Â1026103¼1000:ð16ÞThe mapping from an integer number to a real number z i for the given range is now straightforward,given asz i¼À1þz0iÂ510À1:ð17Þ680G.Onwubolu,D.Davendra/European Journal of Operational Research171(2006)674–692Eq.(17)results in most conversion values being negative;this does not create any accuracy problem any way.After some studies by Onwubolu(2001),the scaling factor f=100was found to be adequate for con-verting virtually all integer numbers into their equivalent positive real numbers.Applying this scaling factor of f=100givesz i¼À1þz0iÂfÂ510À1¼À1þz0iÂ50010À1:ð18ÞEq.(18)is used to transform any integer variable into an equivalent continuous variable,which is then used for the DE internal representation of the population of vectors.Without this transformation,it is not pos-sible to make useful moves towards the global optimum in the solution space using the mutation mecha-nism of DE,which works better on continuous variables.For example in afive-job scheduling problem, suppose the sequence is given as{2,4,3,1,5}.This sequence is not directly used in DE internal representa-tion.Rather,applying Eq.(18),the sequence is transformed into a continuous form.Thefloating-pointequivalence of thefirst entry of the given sequence,z0i ¼2,is z i¼À1þ2Â500103À1¼0:001001.Other valuesare similarly obtained and the sequence is therefore represented internally in the DE scheme as {0.001001,1.002,0.501502,À0.499499,and1.5025}.3.7.2.Backward transformation(from real number to integer)Integer variables are used to evaluate the objective function.The DE self-referential population muta-tion scheme is quite unique.After the mutation of each vector,the trial vector is evaluated for its objective function in order to decide whether or not to retain it.This means that the objective function values of the current vectors in the population need to be also evaluated.These vector variables are continuous(from the forward transformation scheme)and have to be transformed into their integer number equivalence. The backward transformation technique is used for convertingfloating point numbers to their integer num-ber equivalence.The scheme is given as follows:z0 i ¼ð1þz iÞÂð103À1Þ500:ð19ÞIn this present form the backward transformation function is not able to properly discriminate between variables.To ensure that each number is discrete and unique,some modifications are required as follows: a¼intðz0iþ0:5Þ;ð20Þb¼aÀz0i;ð21ÞzÃi ¼ðaÀ1Þ;if b>0:5;a;if b<0:5:&ð22ÞEq.(22)gives zÃi ,which is the transformed value used for computing the objective function.It should bementioned that the conversion scheme of Eq.(19),which transforms real numbers after DE operations into integer numbers is not sufficient to avoid duplication;hence,the steps highlighted in Eqs.(20)–(22)are important.In our studies,these modifications ensure that after mutation,crossover and selection opera-tions,the convertedfloating numbers into their integer equivalence in the set of jobs for a new scheduling solution,or set of cities for a new TSP solution,etc.,are not duplicated.As an example,we consider a set of trial vector,z i={À0.33,0.67,À0.17,1.5,0.84}obtained after mutation.The integer values corresponding to the trial vector values are obtained using Eq.(22)as follows:G.Onwubolu,D.Davendra/European Journal of Operational Research171(2006)674–692681z0 1¼ð1À0:33ÞÂð103À1Þ=500¼1:33866;z02¼ð1þ0:67ÞÂð103À1Þ=500¼3:3367;z0 3¼ð1À0:17ÞÂð103À1Þ=500¼1:65834;z04¼ð1þ1:50ÞÂð103À1Þ=500¼4:9950;z05¼ð1þ0:84ÞÂð103À1Þ=500¼3:6763;a1¼intð1:333866þ0:5Þ¼2;b1¼2À1:33866¼0:66134>0:5;zÃ1¼2À1¼1;a2¼intð3:3367þ0:5Þ¼4;b2¼4À3:3367¼0:6633>0:5;zÃ2¼4À1¼3;a3¼intð1:65834þ0:5Þ¼2;b3¼2À1:65834¼0:34166<0:5;zÃ3¼2;a4¼intð4:995þ0:5Þ¼5;b4¼5À4:995¼0:005<0:5;zÃ4¼5;a5¼intð3:673þ0:5Þ¼4;b5¼4À3:673¼0:3237<0:5;zÃ5¼4:This can be represented schematically as shown in Fig.2.The set of integer values is given aszÃi ¼f1;3;2;5;4g.This set is used to obtain the objective function values.Like in GA,after mutation,crossover,and boundary checking operations,the trial vector obtained fromthe backward transformation is continuously checked until feasible solution is found.Hence,it is not nec-essary to bother about the ordered sequence,which is crucially important in the type of combinatorial opti-mization problems we are concerned with.Feasible solutions constitute about10–15%of the total trial vectors.3.8.DE strategiesPrice and Storn(2001)have suggested ten different working strategies of DE and some guidelines in applying these strategies for any given problem.Different strategies can be adopted in the DE algorithm depending upon the type of problem for which it is applied.Table1shows the ten different working strat-egies proposed by Price and Storn(2001).The general convention used in Table1is as follows:DE/x/y/z.DE stands for differential evolution algorithm,x represents a string denoting the vector to be perturbed,y is the number of difference vectors considered for perturbation of x,and z is the type of crossover being used(exp:exponential;bin:binomial). Thus,the working algorithm outline by Storn and Price(1997)is the seventh strategy of DE,that is,DE/ rand/1/bin.Hence the perturbation can be either in the best vector of the previous generation or in any ran-domly chosen vector.Similarly for perturbation,either single or two vector differences can be used.For perturbation with a single vector difference,out of the three distinct randomly chosen vectors,the weighted vector differential of any two vectors is added to the third one.Similarly for perturbation with two vector682G.Onwubolu,D.Davendra/European Journal of Operational Research171(2006)674–692。

村田热敏电阻

村田热敏电阻

7
!Note • Please read rating and !CAUTION (for storage, operating, rating, soldering, mounting and handling) in this catalog to prevent smoking and/or burning, etc.
1
2
3
Temperature Sensor and Compensation Chip Type Standard Land Pattern Dimensions Temperature Sensor and Compensation Chip Type Temperature Characteristics (Center Value) Temperature Sensor and Compensation Chip Type !Caution/Notice Temperature Sensor and Compensation Chip Type Package 5 Temperature Sensor Thermo String Type
R44E.pdf
Dec.17,2012
NTC Thermistor for Temperature Sensor Thermo String Type
(Part Number) qProduct ID Product ID NXF NTC Thermistors Sensor Thermo String Type NXF q T w 15 e XH r 103 t F A 2 B 025 !0 uLead Wire Type Code A Lead Wire Type ø0.3 Copper Lead Wire with Polyurethane Coat

reactive定义 -回复

reactive定义 -回复

reactive定义-回复Reactive programming is an innovative approach to software development that is gaining increasing popularity in recent years. In this article, we will delve into the concept of reactive programming, its main principles, and its advantages and applications in various industries.So, what exactly is reactive programming? In simple terms, it is a programming paradigm that focuses on the flow of data and events in a system, allowing for responsive and efficient applications. Reactive programming is based on the concept of reactive systems, which are capable of reacting and adapting promptly to changes in their environment.One fundamental principle of reactive programming is responsiveness. Reactive systems are designed to respond to events and changes in the environment in a timely manner. This means that they can quickly react to user input, network conditions, or system failures, ensuring that the application remains functional and responsive.To achieve this responsiveness, reactive programming relies onasynchronous and non-blocking processing. Traditional programming models often involve blocking operations, which means that the program execution is suspended until the completion of a certain task. In reactive programming, on the other hand, tasks are executed non-blocking, allowing the application to continue processing other tasks while waiting for a response.Reactive systems also emphasize a message-driven architecture. The communication between different components or services is based on the exchange of messages, allowing for loose coupling and enhanced scalability. By decoupling components through messages, reactive systems can handle fluctuations in load efficiently, ensuring that the system remains stable and responsive even under high demand.Another core principle of reactive programming is resilience. Reactive systems are designed to be fault-tolerant and resilient to failures. They achieve this through techniques such as error handling, fault recovery, and isolation. When a failure occurs, a reactive system is capable of handling the error gracefully, recovering from the failure, and continuing its operation without compromising the overall system stability.Now that we understand the main principles of reactive programming, it is important to explore its advantages and applications. Reactive programming has several benefits that make it attractive to developers and organizations. Firstly, it allows for the development of highly responsive and interactive applications, which enhances the user experience. Whether it is a web application, a mobile app, or a real-time analytics system, reactive programming enables developers to build applications that are fast and can react to user input in real-time.Furthermore, reactive programming facilitates scalability. With the increase in big data and distributed systems, scalability is becoming essential. Reactive systems, by nature, can handle large amounts of data and high loads while remaining stable and responsive. This scalability is achieved through the use of message-driven communication and asynchronous processing, which allow for efficient distribution and processing of data across different components or services.Reactive programming also promotes modularity and reusability. By following the principles of loose coupling and message-basedcommunication, reactive systems can be easily composed and integrated with other components or services. This modularity not only simplifies the development process but also encourages code reusability and maintainability.In terms of applications, reactive programming has found its place in various industries. It is particularly useful in domains that require real-time data processing and event-driven systems. For example, in finance, reactive programming is used to build high-frequency trading systems that can make split-second decisions based on market events. In the IoT (Internet of Things) domain, reactive programming enables the development of smart and responsive systems that can handle a vast amount of sensor data and react accordingly.In conclusion, reactive programming is a powerful paradigm that offers numerous benefits in terms of responsiveness, scalability, modularity, and resilience. By focusing on data flow andevent-driven architecture, reactive programming enables the development of highly interactive and efficient applications. Its applications span various industries, from finance to IoT, and itcontinues to gain popularity among developers and organizations looking to build responsive and adaptive systems. With its principles and advantages, reactive programming is undoubtedly shaping the future of software development.。

provide repetitive results -回复

provide repetitive results -回复

provide repetitive results -回复题目:提供重复结果:以中括号内的内容为主题,写一篇1500-2000字文章,一步一步回答引言:在日常生活中,我们经常碰到各种各样需要提供重复结果的情况。

无论是科学研究、商业决策还是个人生活,重复结果的需求都是不可忽视的。

本文将以提供重复结果为主题,为读者一步一步解析如何有效地应对这一需求。

一、重复结果的定义和背景重复结果是指在相同的输入条件下,通过重复操作能够得到预期的输出结果。

为什么要重复获取结果呢?这是因为在一些情况下,我们需要验证某个结果的可靠性、统计某个现象的趋势、或者为未来的决策提供依据。

通过重复结果,我们可以减少随机因素的干扰,增加结果的可信度。

二、重复结果的方法和步骤1. 设定初始条件和参数要进行重复结果的实验,首先需要明确实验的初始条件和参数。

这些条件和参数必须在每次实验中保持一致,以确保结果的可比性。

例如,在研究一个新药物对疾病产生影响时,初始条件包括病人的年龄、病情的严重程度以及服药的剂量等。

2. 确定实验的重复次数实验的重复次数取决于所需的结果的可信度。

一般而言,实验次数越多,结果的可靠性越高。

根据实验的目的和资源的限制,决定进行多少次重复实验。

3. 进行实验并记录结果按照设定的初始条件和参数,进行实验并记录每次实验的结果。

在记录结果时,应尽量客观、准确地描述实验的结果,注意排除主观因素的干扰。

4. 分析结果并得出结论针对记录下来的实验结果,进行统计分析,得出结果的平均值、标准差等统计指标。

根据统计分析的结果,结合实验的目的去分析结果的合理性和可靠性,得出结论。

5. 检验结果的稳定性和可复制性为了验证结果的稳定性和可复制性,需要对实验进行反复验证。

重复实验的结果应该趋于稳定,能够被其他独立的实验所复制。

三、常见应用场景和案例分析1. 科学研究在科学研究中,重复结果是确保研究结论的可靠性的关键步骤。

例如,为了研究某个物质的化学性质,科学家会根据特定的条件和参数进行多次实验,然后对实验结果进行统计分析,以得出该物质的属性和特性。

reactive 方法

reactive 方法

Reactive 方法1. 简介Reactive 方法是一种用于设计和开发响应式系统的方法论。

它强调系统对外部事件的快速响应,以实现高效、可靠和可伸缩的系统。

Reactive 方法广泛应用于现代软件开发中,特别是在构建实时系统、分布式系统和大数据处理系统方面发挥了重要作用。

2. 响应式系统的特点响应式系统具有以下几个重要特点:2.1 弹性弹性是响应式系统的核心概念之一。

它指的是系统能够根据负载的变化自动扩展或缩减资源,以满足系统的需求。

弹性使得系统能够在高负载情况下保持稳定的性能,并能够快速响应变化。

2.2 消息驱动响应式系统使用消息驱动的方式进行通信。

消息可以是事件、命令或查询。

消息驱动的架构可以提供高度解耦的组件,使得系统更容易扩展和维护。

消息驱动还可以通过引入异步处理和流控制来提高系统的性能和可靠性。

2.3 高可用性响应式系统具有高度的可用性,即使在组件故障或网络中断的情况下也能保持正常运行。

系统通过备份、冗余和自动容错来确保高可用性。

当一个组件失败时,系统能够自动切换到备用组件,以保持功能和性能。

2.4 实时性响应式系统需要能够及时处理和响应外部事件。

实时性是许多应用的关键需求,特别是在金融交易、实时分析和通信领域。

响应式系统利用事件驱动的机制和快速响应的架构来实现实时性。

3. Reactive 方法的原则Reactive 方法基于一组原则,使系统能够达到高效、可靠和可扩展的目标。

下面是一些重要的原则:3.1 响应式宣言响应式宣言是 Reactive 方法的核心,它明确了构建响应式系统的目标和价值观。

主要包括:弹性、消息驱动、韧性、可伸缩性和响应式设计。

遵循响应式宣言可以引导我们设计出健壮、灵活和易于维护的系统。

3.2 异步编程在响应式系统中,异步编程是非常重要的。

通过将任务划分为小的和可并行处理的单元,系统能够更高效地利用资源。

异步编程还使系统能够更好地响应外部事件,提高系统的实时性。

Survey of clustering data mining techniques

Survey of clustering data mining techniques

A Survey of Clustering Data Mining TechniquesPavel BerkhinYahoo!,Inc.pberkhin@Summary.Clustering is the division of data into groups of similar objects.It dis-regards some details in exchange for data simplifirmally,clustering can be viewed as data modeling concisely summarizing the data,and,therefore,it re-lates to many disciplines from statistics to numerical analysis.Clustering plays an important role in a broad range of applications,from information retrieval to CRM. Such applications usually deal with large datasets and many attributes.Exploration of such data is a subject of data mining.This survey concentrates on clustering algorithms from a data mining perspective.1IntroductionThe goal of this survey is to provide a comprehensive review of different clus-tering techniques in data mining.Clustering is a division of data into groups of similar objects.Each group,called a cluster,consists of objects that are similar to one another and dissimilar to objects of other groups.When repre-senting data with fewer clusters necessarily loses certainfine details(akin to lossy data compression),but achieves simplification.It represents many data objects by few clusters,and hence,it models data by its clusters.Data mod-eling puts clustering in a historical perspective rooted in mathematics,sta-tistics,and numerical analysis.From a machine learning perspective clusters correspond to hidden patterns,the search for clusters is unsupervised learn-ing,and the resulting system represents a data concept.Therefore,clustering is unsupervised learning of a hidden data concept.Data mining applications add to a general picture three complications:(a)large databases,(b)many attributes,(c)attributes of different types.This imposes on a data analysis se-vere computational requirements.Data mining applications include scientific data exploration,information retrieval,text mining,spatial databases,Web analysis,CRM,marketing,medical diagnostics,computational biology,and many others.They present real challenges to classic clustering algorithms. These challenges led to the emergence of powerful broadly applicable data2Pavel Berkhinmining clustering methods developed on the foundation of classic techniques.They are subject of this survey.1.1NotationsTo fix the context and clarify terminology,consider a dataset X consisting of data points (i.e.,objects ,instances ,cases ,patterns ,tuples ,transactions )x i =(x i 1,···,x id ),i =1:N ,in attribute space A ,where each component x il ∈A l ,l =1:d ,is a numerical or nominal categorical attribute (i.e.,feature ,variable ,dimension ,component ,field ).For a discussion of attribute data types see [106].Such point-by-attribute data format conceptually corresponds to a N ×d matrix and is used by a majority of algorithms reviewed below.However,data of other formats,such as variable length sequences and heterogeneous data,are not uncommon.The simplest subset in an attribute space is a direct Cartesian product of sub-ranges C = C l ⊂A ,C l ⊂A l ,called a segment (i.e.,cube ,cell ,region ).A unit is an elementary segment whose sub-ranges consist of a single category value,or of a small numerical bin.Describing the numbers of data points per every unit represents an extreme case of clustering,a histogram .This is a very expensive representation,and not a very revealing er driven segmentation is another commonly used practice in data exploration that utilizes expert knowledge regarding the importance of certain sub-domains.Unlike segmentation,clustering is assumed to be automatic,and so it is a machine learning technique.The ultimate goal of clustering is to assign points to a finite system of k subsets (clusters).Usually (but not always)subsets do not intersect,and their union is equal to a full dataset with the possible exception of outliersX =C 1 ··· C k C outliers ,C i C j =0,i =j.1.2Clustering Bibliography at GlanceGeneral references regarding clustering include [110],[205],[116],[131],[63],[72],[165],[119],[75],[141],[107],[91].A very good introduction to contem-porary data mining clustering techniques can be found in the textbook [106].There is a close relationship between clustering and many other fields.Clustering has always been used in statistics [10]and science [158].The clas-sic introduction into pattern recognition framework is given in [64].Typical applications include speech and character recognition.Machine learning clus-tering algorithms were applied to image segmentation and computer vision[117].For statistical approaches to pattern recognition see [56]and [85].Clus-tering can be viewed as a density estimation problem.This is the subject of traditional multivariate statistical estimation [197].Clustering is also widelyA Survey of Clustering Data Mining Techniques3 used for data compression in image processing,which is also known as vec-tor quantization[89].Datafitting in numerical analysis provides still another venue in data modeling[53].This survey’s emphasis is on clustering in data mining.Such clustering is characterized by large datasets with many attributes of different types. Though we do not even try to review particular applications,many important ideas are related to the specificfields.Clustering in data mining was brought to life by intense developments in information retrieval and text mining[52], [206],[58],spatial database applications,for example,GIS or astronomical data,[223],[189],[68],sequence and heterogeneous data analysis[43],Web applications[48],[111],[81],DNA analysis in computational biology[23],and many others.They resulted in a large amount of application-specific devel-opments,but also in some general techniques.These techniques and classic clustering algorithms that relate to them are surveyed below.1.3Plan of Further PresentationClassification of clustering algorithms is neither straightforward,nor canoni-cal.In reality,different classes of algorithms overlap.Traditionally clustering techniques are broadly divided in hierarchical and partitioning.Hierarchical clustering is further subdivided into agglomerative and divisive.The basics of hierarchical clustering include Lance-Williams formula,idea of conceptual clustering,now classic algorithms SLINK,COBWEB,as well as newer algo-rithms CURE and CHAMELEON.We survey these algorithms in the section Hierarchical Clustering.While hierarchical algorithms gradually(dis)assemble points into clusters (as crystals grow),partitioning algorithms learn clusters directly.In doing so they try to discover clusters either by iteratively relocating points between subsets,or by identifying areas heavily populated with data.Algorithms of thefirst kind are called Partitioning Relocation Clustering. They are further classified into probabilistic clustering(EM framework,al-gorithms SNOB,AUTOCLASS,MCLUST),k-medoids methods(algorithms PAM,CLARA,CLARANS,and its extension),and k-means methods(differ-ent schemes,initialization,optimization,harmonic means,extensions).Such methods concentrate on how well pointsfit into their clusters and tend to build clusters of proper convex shapes.Partitioning algorithms of the second type are surveyed in the section Density-Based Partitioning.They attempt to discover dense connected com-ponents of data,which areflexible in terms of their shape.Density-based connectivity is used in the algorithms DBSCAN,OPTICS,DBCLASD,while the algorithm DENCLUE exploits space density functions.These algorithms are less sensitive to outliers and can discover clusters of irregular shape.They usually work with low-dimensional numerical data,known as spatial data. Spatial objects could include not only points,but also geometrically extended objects(algorithm GDBSCAN).4Pavel BerkhinSome algorithms work with data indirectly by constructing summaries of data over the attribute space subsets.They perform space segmentation and then aggregate appropriate segments.We discuss them in the section Grid-Based Methods.They frequently use hierarchical agglomeration as one phase of processing.Algorithms BANG,STING,WaveCluster,and FC are discussed in this section.Grid-based methods are fast and handle outliers well.Grid-based methodology is also used as an intermediate step in many other algorithms (for example,CLIQUE,MAFIA).Categorical data is intimately connected with transactional databases.The concept of a similarity alone is not sufficient for clustering such data.The idea of categorical data co-occurrence comes to the rescue.The algorithms ROCK,SNN,and CACTUS are surveyed in the section Co-Occurrence of Categorical Data.The situation gets even more aggravated with the growth of the number of items involved.To help with this problem the effort is shifted from data clustering to pre-clustering of items or categorical attribute values. Development based on hyper-graph partitioning and the algorithm STIRR exemplify this approach.Many other clustering techniques are developed,primarily in machine learning,that either have theoretical significance,are used traditionally out-side the data mining community,or do notfit in previously outlined categories. The boundary is blurred.In the section Other Developments we discuss the emerging direction of constraint-based clustering,the important researchfield of graph partitioning,and the relationship of clustering to supervised learning, gradient descent,artificial neural networks,and evolutionary methods.Data Mining primarily works with large databases.Clustering large datasets presents scalability problems reviewed in the section Scalability and VLDB Extensions.Here we talk about algorithms like DIGNET,about BIRCH and other data squashing techniques,and about Hoffding or Chernoffbounds.Another trait of real-life data is high dimensionality.Corresponding de-velopments are surveyed in the section Clustering High Dimensional Data. The trouble comes from a decrease in metric separation when the dimension grows.One approach to dimensionality reduction uses attributes transforma-tions(DFT,PCA,wavelets).Another way to address the problem is through subspace clustering(algorithms CLIQUE,MAFIA,ENCLUS,OPTIGRID, PROCLUS,ORCLUS).Still another approach clusters attributes in groups and uses their derived proxies to cluster objects.This double clustering is known as co-clustering.Issues common to different clustering methods are overviewed in the sec-tion General Algorithmic Issues.We talk about assessment of results,de-termination of appropriate number of clusters to build,data preprocessing, proximity measures,and handling of outliers.For reader’s convenience we provide a classification of clustering algorithms closely followed by this survey:•Hierarchical MethodsA Survey of Clustering Data Mining Techniques5Agglomerative AlgorithmsDivisive Algorithms•Partitioning Relocation MethodsProbabilistic ClusteringK-medoids MethodsK-means Methods•Density-Based Partitioning MethodsDensity-Based Connectivity ClusteringDensity Functions Clustering•Grid-Based Methods•Methods Based on Co-Occurrence of Categorical Data•Other Clustering TechniquesConstraint-Based ClusteringGraph PartitioningClustering Algorithms and Supervised LearningClustering Algorithms in Machine Learning•Scalable Clustering Algorithms•Algorithms For High Dimensional DataSubspace ClusteringCo-Clustering Techniques1.4Important IssuesThe properties of clustering algorithms we are primarily concerned with in data mining include:•Type of attributes algorithm can handle•Scalability to large datasets•Ability to work with high dimensional data•Ability tofind clusters of irregular shape•Handling outliers•Time complexity(we frequently simply use the term complexity)•Data order dependency•Labeling or assignment(hard or strict vs.soft or fuzzy)•Reliance on a priori knowledge and user defined parameters •Interpretability of resultsRealistically,with every algorithm we discuss only some of these properties. The list is in no way exhaustive.For example,as appropriate,we also discuss algorithms ability to work in pre-defined memory buffer,to restart,and to provide an intermediate solution.6Pavel Berkhin2Hierarchical ClusteringHierarchical clustering builds a cluster hierarchy or a tree of clusters,also known as a dendrogram.Every cluster node contains child clusters;sibling clusters partition the points covered by their common parent.Such an ap-proach allows exploring data on different levels of granularity.Hierarchical clustering methods are categorized into agglomerative(bottom-up)and divi-sive(top-down)[116],[131].An agglomerative clustering starts with one-point (singleton)clusters and recursively merges two or more of the most similar clusters.A divisive clustering starts with a single cluster containing all data points and recursively splits the most appropriate cluster.The process contin-ues until a stopping criterion(frequently,the requested number k of clusters) is achieved.Advantages of hierarchical clustering include:•Flexibility regarding the level of granularity•Ease of handling any form of similarity or distance•Applicability to any attribute typesDisadvantages of hierarchical clustering are related to:•Vagueness of termination criteria•Most hierarchical algorithms do not revisit(intermediate)clusters once constructed.The classic approaches to hierarchical clustering are presented in the sub-section Linkage Metrics.Hierarchical clustering based on linkage metrics re-sults in clusters of proper(convex)shapes.Active contemporary efforts to build cluster systems that incorporate our intuitive concept of clusters as con-nected components of arbitrary shape,including the algorithms CURE and CHAMELEON,are surveyed in the subsection Hierarchical Clusters of Arbi-trary Shapes.Divisive techniques based on binary taxonomies are presented in the subsection Binary Divisive Partitioning.The subsection Other Devel-opments contains information related to incremental learning,model-based clustering,and cluster refinement.In hierarchical clustering our regular point-by-attribute data representa-tion frequently is of secondary importance.Instead,hierarchical clustering frequently deals with the N×N matrix of distances(dissimilarities)or sim-ilarities between training points sometimes called a connectivity matrix.So-called linkage metrics are constructed from elements of this matrix.The re-quirement of keeping a connectivity matrix in memory is unrealistic.To relax this limitation different techniques are used to sparsify(introduce zeros into) the connectivity matrix.This can be done by omitting entries smaller than a certain threshold,by using only a certain subset of data representatives,or by keeping with each point only a certain number of its nearest neighbors(for nearest neighbor chains see[177]).Notice that the way we process the original (dis)similarity matrix and construct a linkage metric reflects our a priori ideas about the data model.A Survey of Clustering Data Mining Techniques7With the(sparsified)connectivity matrix we can associate the weighted connectivity graph G(X,E)whose vertices X are data points,and edges E and their weights are defined by the connectivity matrix.This establishes a connection between hierarchical clustering and graph partitioning.One of the most striking developments in hierarchical clustering is the algorithm BIRCH.It is discussed in the section Scalable VLDB Extensions.Hierarchical clustering initializes a cluster system as a set of singleton clusters(agglomerative case)or a single cluster of all points(divisive case) and proceeds iteratively merging or splitting the most appropriate cluster(s) until the stopping criterion is achieved.The appropriateness of a cluster(s) for merging or splitting depends on the(dis)similarity of cluster(s)elements. This reflects a general presumption that clusters consist of similar points.An important example of dissimilarity between two points is the distance between them.To merge or split subsets of points rather than individual points,the dis-tance between individual points has to be generalized to the distance between subsets.Such a derived proximity measure is called a linkage metric.The type of a linkage metric significantly affects hierarchical algorithms,because it re-flects a particular concept of closeness and connectivity.Major inter-cluster linkage metrics[171],[177]include single link,average link,and complete link. The underlying dissimilarity measure(usually,distance)is computed for every pair of nodes with one node in thefirst set and another node in the second set.A specific operation such as minimum(single link),average(average link),or maximum(complete link)is applied to pair-wise dissimilarity measures:d(C1,C2)=Op{d(x,y),x∈C1,y∈C2}Early examples include the algorithm SLINK[199],which implements single link(Op=min),Voorhees’method[215],which implements average link (Op=Avr),and the algorithm CLINK[55],which implements complete link (Op=max).It is related to the problem offinding the Euclidean minimal spanning tree[224]and has O(N2)complexity.The methods using inter-cluster distances defined in terms of pairs of nodes(one in each respective cluster)are called graph methods.They do not use any cluster representation other than a set of points.This name naturally relates to the connectivity graph G(X,E)introduced above,because every data partition corresponds to a graph partition.Such methods can be augmented by so-called geometric methods in which a cluster is represented by its central point.Under the assumption of numerical attributes,the center point is defined as a centroid or an average of two cluster centroids subject to agglomeration.It results in centroid,median,and minimum variance linkage metrics.All of the above linkage metrics can be derived from the Lance-Williams updating formula[145],d(C iC j,C k)=a(i)d(C i,C k)+a(j)d(C j,C k)+b·d(C i,C j)+c|d(C i,C k)−d(C j,C k)|.8Pavel BerkhinHere a,b,c are coefficients corresponding to a particular linkage.This formula expresses a linkage metric between a union of the two clusters and the third cluster in terms of underlying nodes.The Lance-Williams formula is crucial to making the dis(similarity)computations feasible.Surveys of linkage metrics can be found in [170][54].When distance is used as a base measure,linkage metrics capture inter-cluster proximity.However,a similarity-based view that results in intra-cluster connectivity considerations is also used,for example,in the original average link agglomeration (Group-Average Method)[116].Under reasonable assumptions,such as reducibility condition (graph meth-ods satisfy this condition),linkage metrics methods suffer from O N 2 time complexity [177].Despite the unfavorable time complexity,these algorithms are widely used.As an example,the algorithm AGNES (AGlomerative NESt-ing)[131]is used in S-Plus.When the connectivity N ×N matrix is sparsified,graph methods directly dealing with the connectivity graph G can be used.In particular,hierarchical divisive MST (Minimum Spanning Tree)algorithm is based on graph parti-tioning [116].2.1Hierarchical Clusters of Arbitrary ShapesFor spatial data,linkage metrics based on Euclidean distance naturally gener-ate clusters of convex shapes.Meanwhile,visual inspection of spatial images frequently discovers clusters with curvy appearance.Guha et al.[99]introduced the hierarchical agglomerative clustering algo-rithm CURE (Clustering Using REpresentatives).This algorithm has a num-ber of novel features of general importance.It takes special steps to handle outliers and to provide labeling in assignment stage.It also uses two techniques to achieve scalability:data sampling (section 8),and data partitioning.CURE creates p partitions,so that fine granularity clusters are constructed in parti-tions first.A major feature of CURE is that it represents a cluster by a fixed number,c ,of points scattered around it.The distance between two clusters used in the agglomerative process is the minimum of distances between two scattered representatives.Therefore,CURE takes a middle approach between the graph (all-points)methods and the geometric (one centroid)methods.Single and average link closeness are replaced by representatives’aggregate closeness.Selecting representatives scattered around a cluster makes it pos-sible to cover non-spherical shapes.As before,agglomeration continues until the requested number k of clusters is achieved.CURE employs one additional trick:originally selected scattered points are shrunk to the geometric centroid of the cluster by a user-specified factor α.Shrinkage suppresses the affect of outliers;outliers happen to be located further from the cluster centroid than the other scattered representatives.CURE is capable of finding clusters of different shapes and sizes,and it is insensitive to outliers.Because CURE uses sampling,estimation of its complexity is not straightforward.For low-dimensional data authors provide a complexity estimate of O (N 2sample )definedA Survey of Clustering Data Mining Techniques9 in terms of a sample size.More exact bounds depend on input parameters: shrink factorα,number of representative points c,number of partitions p,and a sample size.Figure1(a)illustrates agglomeration in CURE.Three clusters, each with three representatives,are shown before and after the merge and shrinkage.Two closest representatives are connected.While the algorithm CURE works with numerical attributes(particularly low dimensional spatial data),the algorithm ROCK developed by the same researchers[100]targets hierarchical agglomerative clustering for categorical attributes.It is reviewed in the section Co-Occurrence of Categorical Data.The hierarchical agglomerative algorithm CHAMELEON[127]uses the connectivity graph G corresponding to the K-nearest neighbor model spar-sification of the connectivity matrix:the edges of K most similar points to any given point are preserved,the rest are pruned.CHAMELEON has two stages.In thefirst stage small tight clusters are built to ignite the second stage.This involves a graph partitioning[129].In the second stage agglomer-ative process is performed.It utilizes measures of relative inter-connectivity RI(C i,C j)and relative closeness RC(C i,C j);both are locally normalized by internal interconnectivity and closeness of clusters C i and C j.In this sense the modeling is dynamic:it depends on data locally.Normalization involves certain non-obvious graph operations[129].CHAMELEON relies heavily on graph partitioning implemented in the library HMETIS(see the section6). Agglomerative process depends on user provided thresholds.A decision to merge is made based on the combinationRI(C i,C j)·RC(C i,C j)αof local measures.The algorithm does not depend on assumptions about the data model.It has been proven tofind clusters of different shapes,densities, and sizes in2D(two-dimensional)space.It has a complexity of O(Nm+ Nlog(N)+m2log(m),where m is the number of sub-clusters built during the first initialization phase.Figure1(b)(analogous to the one in[127])clarifies the difference with CURE.It presents a choice of four clusters(a)-(d)for a merge.While CURE would merge clusters(a)and(b),CHAMELEON makes intuitively better choice of merging(c)and(d).2.2Binary Divisive PartitioningIn linguistics,information retrieval,and document clustering applications bi-nary taxonomies are very useful.Linear algebra methods,based on singular value decomposition(SVD)are used for this purpose in collaborativefilter-ing and information retrieval[26].Application of SVD to hierarchical divisive clustering of document collections resulted in the PDDP(Principal Direction Divisive Partitioning)algorithm[31].In our notations,object x is a docu-ment,l th attribute corresponds to a word(index term),and a matrix X entry x il is a measure(e.g.TF-IDF)of l-term frequency in a document x.PDDP constructs SVD decomposition of the matrix10Pavel Berkhin(a)Algorithm CURE (b)Algorithm CHAMELEONFig.1.Agglomeration in Clusters of Arbitrary Shapes(X −e ¯x ),¯x =1Ni =1:N x i ,e =(1,...,1)T .This algorithm bisects data in Euclidean space by a hyperplane that passes through data centroid orthogonal to the eigenvector with the largest singular value.A k -way split is also possible if the k largest singular values are consid-ered.Bisecting is a good way to categorize documents and it yields a binary tree.When k -means (2-means)is used for bisecting,the dividing hyperplane is orthogonal to the line connecting the two centroids.The comparative study of SVD vs.k -means approaches [191]can be used for further references.Hier-archical divisive bisecting k -means was proven [206]to be preferable to PDDP for document clustering.While PDDP or 2-means are concerned with how to split a cluster,the problem of which cluster to split is also important.Simple strategies are:(1)split each node at a given level,(2)split the cluster with highest cardinality,and,(3)split the cluster with the largest intra-cluster variance.All three strategies have problems.For a more detailed analysis of this subject and better strategies,see [192].2.3Other DevelopmentsOne of early agglomerative clustering algorithms,Ward’s method [222],is based not on linkage metric,but on an objective function used in k -means.The merger decision is viewed in terms of its effect on the objective function.The popular hierarchical clustering algorithm for categorical data COB-WEB [77]has two very important qualities.First,it utilizes incremental learn-ing.Instead of following divisive or agglomerative approaches,it dynamically builds a dendrogram by processing one data point at a time.Second,COB-WEB is an example of conceptual or model-based learning.This means that each cluster is considered as a model that can be described intrinsically,rather than as a collection of points assigned to it.COBWEB’s dendrogram is calleda classification tree.Each tree node(cluster)C is associated with the condi-tional probabilities for categorical attribute-values pairs,P r(x l=νlp|C),l=1:d,p=1:|A l|.This easily can be recognized as a C-specific Na¨ıve Bayes classifier.During the classification tree construction,every new point is descended along the tree and the tree is potentially updated(by an insert/split/merge/create op-eration).Decisions are based on the category utility[49]CU{C1,...,C k}=1j=1:kCU(C j)CU(C j)=l,p(P r(x l=νlp|C j)2−(P r(x l=νlp)2.Category utility is similar to the GINI index.It rewards clusters C j for in-creases in predictability of the categorical attribute valuesνlp.Being incre-mental,COBWEB is fast with a complexity of O(tN),though it depends non-linearly on tree characteristics packed into a constant t.There is a similar incremental hierarchical algorithm for all numerical attributes called CLAS-SIT[88].CLASSIT associates normal distributions with cluster nodes.Both algorithms can result in highly unbalanced trees.Chiu et al.[47]proposed another conceptual or model-based approach to hierarchical clustering.This development contains several different use-ful features,such as the extension of scalability preprocessing to categori-cal attributes,outliers handling,and a two-step strategy for monitoring the number of clusters including BIC(defined below).A model associated with a cluster covers both numerical and categorical attributes and constitutes a blend of Gaussian and multinomial models.Denote corresponding multivari-ate parameters byθ.With every cluster C we associate a logarithm of its (classification)likelihoodl C=x i∈Clog(p(x i|θ))The algorithm uses maximum likelihood estimates for parameterθ.The dis-tance between two clusters is defined(instead of linkage metric)as a decrease in log-likelihoodd(C1,C2)=l C1+l C2−l C1∪C2caused by merging of the two clusters under consideration.The agglomerative process continues until the stopping criterion is satisfied.As such,determina-tion of the best k is automatic.This algorithm has the commercial implemen-tation(in SPSS Clementine).The complexity of the algorithm is linear in N for the summarization phase.Traditional hierarchical clustering does not change points membership in once assigned clusters due to its greedy approach:after a merge or a split is selected it is not refined.Though COBWEB does reconsider its decisions,its。

电气安全名词术语GB

电气安全名词术语GB

标准名称:电气安全名词术语GB 4776-84标准编号:GB 4776-84标准正文:国家标准1984-11-30发布1988-07-01实施1基本概念1.1保安性fail-safe为防止产品本身的危险故障而设计的性能。

1.2正常状态nromal condition所有用于防止危险的设施均无损坏的状态。

1.3电气事故electric accident由电流、电磁场、雷电、静电和某些电路故障等直接或间接造成建筑设施、电气设备毁坏、人、动物伤亡,以及引起火灾和爆炸等后果的事件。

1.4触电电击electric shock电流通过人体或动物体而引起的病理、生理效应。

1.5电磁场伤害injury due to electromagnetic field人体在电磁场作用下吸收能量受到的伤害。

1.6破坏性放电介质击穿disruptive discharge dielectric breakdown固体、液体、气体介质及其组合介质在高电压作用下,介质强度丧失的现象。

破坏性放电时,电极间的电压迅速下降到零或接近于零。

1.7短路short circuit通过比较小的电阻或阻抗,偶然地或有意地对一个电路中在正常情况下处于不同电压下的两点或几点之间进行的连接。

1.8绝缘故障insulation fault绝缘电阻的不正常下降。

1.9接地故障earth fault由于导体与地连接或对地绝缘电阻变得小于规定值而引起的故障。

1.10过电流overcurrent超过额定电流的电流。

1.11过电压overvoltage超过额定电压的电压。

1.12过负载overload超过额定负载的负载。

1.13导电部分conductive part能导电,但不一定承载工作电流的部分。

1.14带电部分live part正常使用时被通电的导体或导电部分,它包括中性导体,但按惯例,不包括保护中性导体(PEN导体)。

注:此术语不一定意味着触电危险。

1.15外露导电部分exposed conductive part电气设备能被触及的导电部分。

reactive定义 -回复

reactive定义 -回复

reactive定义-回复什么是"reactive"?"Reactive"(反应式)是一种编程模式,旨在使应用程序能够根据外部事件的发生而自动进行响应。

这种响应性编程模型背后的理念是,应用程序应该能够根据外部环境的变化来自动地进行适应和调整,而不是被动地等待和响应用户的输入。

通过使用"reactive"编程模式,开发人员可以构建出更加灵活、可扩展和可维护的应用程序。

"reactive"编程模式有以下几个关键概念:1. 响应式数据流:在"reactive"编程中,数据被看作是一系列连续的事件。

这些事件可以是用户输入、传感器数据、外部系统的消息等等,而应用程序的状态则是这些事件的累积结果。

数据流通过观察者模式中的Observable(可观察对象)传递给订阅者(观察者),当新事件发生时,订阅者可以对其进行处理。

2. 响应式函数:"reactive"编程中的函数通常被称为响应式函数。

这些函数接收输入(数据流)并产生输出(新的数据流),并且任何时间点上的输出都取决于当前的输入。

这使得函数能够动态地响应输入的变化。

3. 响应式依赖:在"reactive"编程中,依赖关系的管理非常重要。

当一个响应式函数依赖于其他响应式函数的输出时,任何一个依赖项的变化都会触发重计算,以确保函数的输出仍然是最新的。

这种自动的依赖管理使得应用程序能够自动进行适应和调整。

4. 响应式编程框架:为了帮助开发人员更加便捷地应用"reactive"编程模式,许多编程框架和库已经涌现出来。

这些框架提供了一套工具和API,帮助开发人员轻松地实现响应式函数、管理响应式依赖,并在应用程序中处理数据流的传递。

一些常见的响应式编程框架包括RxJava、ReactiveX 和Spring Reactor等。

reactive的使用场景 -回复

reactive的使用场景 -回复

reactive的使用场景-回复什么是reactive?Reactive(响应式编程)是一种编程范式,旨在通过建立一种数据流的机制来处理数据的异步流动。

在响应式编程中,程序可以根据数据的变化实时地响应并处理这些变化,通过自动化地传播数据的改变来提高系统的可维护性和可伸缩性。

响应式编程的核心概念是观察者模式和数据流。

观察者模式是指一个主题对象维护一个观察者列表,并在数据变化时自动通知所有观察者。

数据流是指在程序中通过建立数据的流动链来处理数据的变化,使得数据变化能够在整个系统中自动传播。

响应式编程最早由微软推出的Reactive Extensions(Rx)框架提出,并在后来的几年逐渐流行起来。

Rx框架提供了一套丰富的操作符和API,以简化异步编程和事件驱动编程的复杂性。

除了Rx框架,还有其他的实现响应式编程的库和框架,如Reactive Streams、Reactor等。

Reactive的使用场景:1. 用户界面开发:在用户界面开发中,经常需要根据用户的操作实时地更新界面。

使用响应式编程可以方便地实现数据和界面的绑定,使得界面能够自动更新。

2. 事件驱动系统:在事件驱动系统中,存在大量的事件流和数据流,需要对这些流进行处理和管理。

使用响应式编程可以方便地处理事件和数据的流动,提高系统的可维护性和可伸缩性。

3. 异步编程:异步编程是处理实时数据的重要手段。

使用响应式编程可以简化异步编程的复杂性,使得代码更加清晰和可读。

4. 大数据处理:在大数据处理中,经常需要处理大量的数据流和事件流。

使用响应式编程可以方便地处理这些数据流和事件流,提高处理效率和减少资源消耗。

5. 分布式系统:在分布式系统中,存在大量的节点和消息流传递。

使用响应式编程可以方便地处理节点和消息的流动,提高系统的可伸缩性和可靠性。

基于以上的使用场景,我们可以看到响应式编程具有以下的优点:1. 维护性:响应式编程通过建立数据流的机制,使得程序的数据流动变得可见和可控。

reactiveredistemplate序列化 -回复

reactiveredistemplate序列化 -回复

reactiveredistemplate序列化-回复标题:深度解析RedisTemplate序列化机制引言:Redis是一款缓存和持久化数据库系统,被广泛应用于分布式系统中的数据缓存和数据共享领域。

而在Redis中,序列化是一种常见的数据存储和传输方式。

本文将深入讨论RedisTemplate序列化的机制,以及如何将对象序列化为字节数组并存储在Redis中、如何从Redis中读取并反序列化对象。

一、什么是RedisTemplate序列化RedisTemplate是Spring Data Redis提供的一个操作Redis的模板类,它封装了对Redis的操作,简化了与Redis的交互过程。

而序列化是Redis 中数据存储和传输的一种方式,通过将对象转换成字节数组进行存储,从而实现对象的持久化操作。

二、RedisTemplate序列化的作用1.数据存储:RedisTemplate序列化实现了数据的持久化操作,可以将Java对象转换为字节数组并存储在Redis中,保证数据的可靠性和持久化。

2.数据传输:RedisTemplate序列化可以将数据以字节数组的形式进行传输,实现了分布式系统中的数据共享和通信。

3.性能优化:序列化可以减少网络传输和存储空间的占用,提高系统的性能。

三、RedisTemplate序列化的机制RedisTemplate提供了默认的序列化机制,即JdkSerializationRedisSerializer。

它使用Java的序列化和反序列化机制,将Java对象转换为字节数组,该字节数组可用于存储在Redis中或进行网络传输。

1. JdkSerializationRedisSerializer JdkSerializationRedisSerializer采用Java默认的序列化机制,从对象中获取字节数组并进行存储和传输。

该序列化机制可以确保对象的完整性和一致性,但在性能和空间占用方面存在一定的缺陷。

reactive statement 回复声明

reactive statement 回复声明

"Reactive statement" 可以指代在某个话题或问题上做出反应或回应的声明。

这种声明通常是在别人已经发表观点或立场后,再对其做出回应或反应的表达方式。

这种反应可以是同意、反对、补充或评论等,通常需要在语境中理解其具体含义。

例如:
1. "I react to the statement made by the Prime Minister yesterday."
这句话表示说话者对总理昨天的声明做出了反应。

2. "The Ministry of Foreign Affairs quickly issued a reactive statement in response to the comments made by the US ambassador."
这句话表示外交部针对美国大使的言论迅速发表了一个回应声明。

希望以上信息对你有帮助。

reactive 编程

reactive 编程

reactive 编程什么是[reactive 编程]?Reactive 编程是一种编程模型,旨在帮助开发人员构建高度可响应的和事件驱动的应用程序。

它着力于处理异步数据流,并提供一种简单和一致的方式来处理数据流的变化。

相比传统的命令式编程模型,Reactive 编程更加灵活和高效,可以应对复杂和实时的应用场景。

[reactive 编程]的核心原则是什么?Reactive 编程有四个核心原则,即响应性、弹性、饱和性和消息传递。

这些原则共同确保了应用程序的可靠性和性能。

首先是响应性。

Reactive 编程鼓励开发人员将应用程序看作是一个事件流,对外部事件做出快速响应。

它通过观察者模式实现,开发人员可以订阅数据流中的事件,并使用回调函数处理数据的变化。

其次是弹性。

Reactive 编程假设应用程序将要处理各种各样的数据流情况,包括高负载、错误处理和系统崩溃等。

开发人员需要编写健壮的代码来应对这些情况,以确保应用程序的可用性和稳定性。

饱和性是另一个重要的原则。

Reactive 编程鼓励使用异步和非阻塞操作,以提高应用程序的性能和资源利用率。

通过合理的资源管理和并发控制,开发人员可以更好地运用系统资源,提高应用程序的性能和并发处理能力。

最后是消息传递。

Reactive 编程倡导使用消息传递机制来实现组件之间的通信。

这种方式可以提高应用程序的解耦性,使组件之间的依赖关系更加清晰和可维护。

如何使用[reactive 编程]?使用Reactive 编程需要理解和熟悉一些核心概念和工具。

以下是一些常见的Reactive 编程框架和库:1. ReactiveX:ReactiveX 是一个跨平台的Reactive 编程框架,支持多种编程语言,如Java、JavaScript、C# 等。

它提供了一套丰富的操作符,用于处理异步数据流。

开发人员可以使用这些操作符来过滤、映射和转换数据,以及构建复杂的数据处理流程。

2. Akka:Akka 是一个用于构建高并发和分布式应用程序的工具集。

reactiveredistemplate序列化 -回复

reactiveredistemplate序列化 -回复

reactiveredistemplate序列化-回复什么是React Redux Template序列化?React Redux Template序列化是指将React Redux模板实例转化为一系列字节流或者字符串的操作过程。

在React Redux开发中,我们经常需要对应用状态进行持久化存储或者网络传输,而序列化就是将这些状态数据转化为可传输或可存储的形式。

为什么需要序列化?在React Redux中,应用状态是保存在Redux store中的,它包含了所有组件的状态数据。

在实际开发中,我们可能需要将这些状态数据进行存储,比如保存到浏览器的本地存储、数据库中,或者通过网络传输给其他用户。

由于状态数据是一个JavaScript对象,无法直接进行存储或传输,因此需要将其序列化为可存储或传输的形式。

如何进行序列化?React Redux提供了一种简单的方式来进行序列化,就是使用第三方库,比如JSON.stringify()和JSON.parse()。

JSON.stringify()方法可以将JavaScript对象转换为一个JSON字符串,而JSON.parse()方法则可以将JSON字符串转换为JavaScript对象。

下面是一个简单的示例,展示了如何使用JSON.stringify()和JSON.parse()来进行序列化和反序列化:jsximport { createStore } from 'redux';import { Provider } from 'react-redux';import { serialize, deserialize } from'react-redux-template-serializer';Step 1: 创建React Redux模板实例const initialState = {counter: 0,};function reducer(state = initialState, action) {switch (action.type) {case 'INCREMENT':return { ...state, counter: state.counter + 1 };case 'DECREMENT':return { ...state, counter: state.counter - 1 };default:return state;}}const store = createStore(reducer);Step 2: 序列化React Redux模板实例const serializedState = serialize(store.getState());Step 3: 反序列化React Redux模板实例const deserializedState = deserialize(serializedState);Step 4: 创建新的React Redux模板实例const newStore = createStore(reducer, deserializedState);在上面的示例中,我们首先创建了一个React Redux模板实例,并将其状态数据进行序列化,接着通过反序列化将其转换回可用于创建新的模板实例的状态数据。

reactivefeignclient注解

reactivefeignclient注解

文章标题:深度解析reactivefeignclient注解:从简单到复杂的探讨1. 引言在现代软件开发中,微服务架构已经成为一个流行的选项。

在微服务架构中,服务之间的通信变得至关重要。

而Feign作为一个声明式的HTTP客户端,能够简化服务之间的通信,使得我们的代码更加优雅和简洁。

而在Reactive编程的流行下,ReactiveFeign作为Feign的衍生项目,提供了对响应式编程模式的支持,让我们能够更好地应对大规模的并发请求。

在本文中,我将深入探讨ReactiveFeign中的一个关键注解——reactivefeignclient,以帮助您更好地理解这一概念。

2. reactivefeignclient的定义和作用首先让我们来了解一下reactivefeignclient注解的定义和作用。

在ReactiveFeign中,使用reactivefeignclient注解来标注一个接口,以表示这个接口是一个ReactiveFeign客户端。

通过这个注解,我们能够定义客户端的请求方式、请求路径以及其他相关信息,使得我们能够通过接口的方式去调用远程服务,并且能够自动地将响应转换为Reactive类型,以便我们更好地应对异步流式处理。

3. reactivefeignclient注解的使用方法我们将分析reactivefeignclient注解的使用方法。

在使用reactivefeignclient注解时,我们需要注意以下几个方面:- 如何定义接口的请求方式和路径- 如何定义接口的参数和返回类型- 如何配置客户端的超时和重试策略举例来说,我们可以定义一个简单的reactivefeignclient接口,如下所示:```@reactivefeignclient(name = "example-service")public interface ExampleServiceClient {@requestmapping(method = requestmethod.get, value = "/example")Mono<example> getExample();@requestmapping(method = requestmethod.post, value = "/example")Mono<example> createExample(@requestbodyMono<example> example);}```在上述示例中,我们使用了reactivefeignclient、requestmapping 等注解来定义了一个名为ExampleServiceClient的ReactiveFeign客户端接口。

reactive声明 -回复

reactive声明 -回复

reactive声明-回复【Reactive声明】是指使用React.js等前端框架开发应用程序时,通过声明式编程的方式来构建用户界面的方法。

相比传统的命令式编程,声明式编程更加注重描述应用程序应该具备的性质,而不是指导它应该如何完成任务。

在这篇文章中,我们将一步一步地回答与【Reactive声明】相关的问题,并对其优势和应用进行深入探讨。

一、什么是【Reactive声明】?【Reactive声明】是一种使用React.js等前端框架开发应用程序的方法。

它通过声明式编程来描述应用程序的性质,而不是详细指导实现过程。

在【Reactive声明】中,开发者只需要关注应用程序的状态和数据,框架会负责根据这些声明自动更新用户界面。

二、【Reactive声明】的优势是什么?1. 增强可维护性:【Reactive声明】将应用程序的状态和数据从具体的实现中解耦,使得代码更易于理解和维护。

开发者可以通过声明式的方式表达应用程序的逻辑,而无需关注底层的实现细节。

2. 提高开发效率:使用【Reactive声明】可以提高开发效率。

由于框架会根据声明自动更新用户界面,开发者只需要关注数据的变化,减少了手动更新界面的工作量。

此外,【Reactive声明】还提供了一些方便的工具和库,帮助开发者更快速地构建用户界面。

3. 增强可测试性:【Reactive声明】将应用程序的逻辑和界面分离,使得逻辑部分更容易进行单元测试。

开发者可以更方便地编写测试用例,确保应用程序的逻辑正确性。

三、所需的基本概念是什么?在使用【Reactive声明】开发应用程序之前,理解以下几个基本概念是非常重要的:1. 组件:组件是构建应用程序的基本单元。

每个组件都有自己的状态和属性,并负责渲染对应的用户界面。

2. 状态:状态是应用程序中的数据。

通过修改状态,可以触发界面的更新。

在【Reactive声明】中,状态是可变的,但是只能通过特定的方法进行修改。

3. 属性:属性是从父组件传递给子组件的数据。

reactive原理 -回复

reactive原理 -回复

reactive原理-回复什么是reactive原理?[reactive原理]是一种编程范式,旨在提供一种异步数据流处理的解决方案。

它的核心理念是,数据流是由一系列的事件组成的,我们可以对这些事件进行订阅,并通过回调函数来处理这些事件。

首先,我们需要理解什么是数据流。

在响应式编程中,数据流是一组连续的事件,这些事件可以是用户输入、网络请求、传感器数据、系统事件等等。

这些事件的产生是非常频繁且不可预测的,而我们需要对这些事件进行处理,以便做出相应的响应。

在传统的编程模型中,我们通常会使用命令式编程来处理事件。

这意味着我们需要先定义事件的产生和处理的顺序,然后按照这个顺序执行代码。

这种方式对于简单的应用来说还是可以接受的,但是对于复杂的应用来说,这种方式很容易出现问题。

而基于reactive原理的编程范式,提供了一种更加灵活和高效的处理数据流的方式。

首先,我们需要定义一个数据流,我们可以通过订阅这个数据流来获取其中的事件。

一旦有事件产生,我们可以通过注册回调函数来处理这些事件。

这样就实现了事件的驱动编程。

在reactive原理中,有几个重要的概念需要我们了解。

首先是可观察对象(Observables),它表示一个数据流,可以发出多个事件。

我们可以通过订阅这个可观察对象来获取其中的事件。

另一个重要的概念是观察者(Observers),它表示一个对象,用于处理可观察对象发出的事件。

观察者通过订阅可观察对象,当有事件产生时,观察者会调用事先定义好的回调函数来处理事件。

还有一个重要的概念是操作符(Operators),它用于对可观察对象进行变换和组合。

通过操作符,我们可以对数据流进行过滤、映射、合并等操作,以便根据需求产生所需的数据流。

基于这些概念,我们可以使用reactive原理构建复杂的应用。

我们可以通过订阅多个可观察对象,并使用操作符来对这些数据流进行处理和组合,最终得到我们想要的结果。

而且,由于reactive原理中的操作是异步的,所以我们可以更加高效地处理大量的事件。

zotero重复引用

zotero重复引用

zotero重复引用英文回答:Zotero is a powerful reference management tool that allows users to easily collect, organize, and cite sources for their research papers or projects. However, it is not uncommon to encounter situations where the same reference needs to be cited multiple times within the same document. In such cases, Zotero provides a feature called "duplicate citations" that allows users to handle this scenario efficiently.To use the duplicate citations feature in Zotero,follow these steps:1. Open your document in a word processor that supports Zotero integration, such as Microsoft Word or Google Docs.2. Place your cursor at the location where you want to insert the duplicate citation.3. Click on the "Add/Edit Citation" button in the Zotero toolbar or use the keyboard shortcut (usuallyCtrl+Alt+C).4. In the citation dialog box, search for the reference you want to cite and select it from the search results.5. Instead of clicking the "OK" button, click on the small arrow next to it to reveal additional options.6. From the dropdown menu, select the "Duplicate" option.7. Zotero will insert a duplicate citation at the desired location, using the same reference details as the original citation.It is important to note that the duplicate citation feature in Zotero ensures that any changes made to the original citation, such as adding page numbers or modifying the citation style, will be automatically reflected in allduplicate citations. This saves users from manually updating each occurrence of the citation throughout the document.中文回答:Zotero是一个功能强大的参考文献管理工具,可以帮助用户轻松地收集、整理和引用研究论文或项目所需的来源。

active choices reactive parameter 无效

active choices reactive parameter 无效

active choices reactive parameter 无效在 Jenkins 中,Active Choices Reactive Parameter 是一个非常有用的插件,它允许用户在构建过程中动态选择参数。

然而,在一些情况下,用户可能会遇到 Active Choices Reactive Parameter 无效的问题。

本文将讨论一些可能导致该问题的原因,并提供解决方案。

首先,让我们了解一下 Active Choices Reactive Parameter 的工作原理。

Active Choices Reactive Parameter 允许用户在选择参数时触发一个Groovy 脚本进行响应。

脚本可以根据用户的选择生成并更新其他参数的选项。

该功能主要由两个部分组成:Active Choices Parameter 和 Groovy 脚本。

然而,有时用户可能会发现 Active Choices Reactive Parameter 无法正常工作。

这可能是由于以下原因之一:1. 插件版本不兼容:首先,检查使用的插件版本是否与当前的 Jenkins 版本兼容。

拥有最新的插件版本通常会解决许多已知问题。

2. 脚本错误:检查你的 Groovy 脚本是否存在语法错误或逻辑错误。

特别是,确保脚本正确处理用户的选择,并根据需要生成正确的参数选项。

3. 脚本权限:确保你的 Jenkins 实例中的用户或用户组具有执行脚本所需的权限。

尝试更改用户权限并重新运行构建,看看问题是否得到解决。

4. 依赖项问题:Active Choices Reactive Parameter 通常依赖其他插件来提供所需的功能。

确保这些插件已安装,并检查它们是否存在任何冲突或版本不兼容的问题。

5. 缓存问题:有时 Active Choices Reactive Parameter 可能会受到缓存的影响。

尝试清除 Jenkins 缓存并重新运行构建,看看问题是否得到解决。

  1. 1、下载文档前请自行甄别文档内容的完整性,平台不提供额外的编辑、内容补充、找答案等附加服务。
  2. 2、"仅部分预览"的文档,不可在线预览部分如存在完整性等问题,可反馈申请退款(可完整预览的文档不适用该条件!)。
  3. 3、如文档侵犯您的权益,请联系客服反馈,我们会尽快为您处理(人工客服工作时间:9:00-18:30)。

A Replanning Algorithm for a Reactive AgentArchitectureGuido Boella and Rossana DamianoDipartimento di InformaticaCso Svizzera185Torino ITALYemail:guido@di.unito.itKeywords:Planning,Agent architecturesAbstract.We present an algorithm for replanning in a reactive agent architecturewhich incorporates decision-theoretic notions to drive the planning and meta-deliberation process.The deliberation component relies on a refinement plannerwhich produces plans with optimal expected utility.The replanning algorithmwe propose exploits the planner’s ability to provide an approximate evaluationof partial plans:it starts from a fully refined plan and makes it more partial untilitfinds a more partial plan which subsumes more promising refinements;at thatpoint,the planning process is restarted from the current partial plan.1IntroductionIn this paper we present a replanning algorithm developed for a reactive agent archi-tecture which incorporates decision-theoretic notions to determine the agent’s commit-ment.The agent architecture is based on the planning paradigm proposed by[4],which combines decision-theoretic refinement planning with a sound notion of action abstrac-tion([3]):given a goal and a state of the world,the planner is invoked on a partial plan (i.e.a plan in which some actions are abstract)and iteratively refines it by returning one or more plans which maximize the expected utility according to the agent’s preferences, modelled by a multi-attribute utility function.The decision-theoretic planning paradigm extends the classical goal satisfaction paradigm by allowing partial goal satisfaction and the trade-off of goal satisfaction against resource consumption.Moreover,it accounts for uncertainty and non deter-minism,which provide the conceptual instruments for dealing with uncertain world knowledge and actions having non-deterministic effects.These features make decision-theoretic planning especially suitable for modelling agents who are situated in dynami-cally changing,non deterministic environments,and have incomplete knowledge about the environment.However,decision-theoretic planning frameworks based on plan refinement([4])do not lend themselves to reactive agent architectures,as they do not include any support for reactive replanning.In this paper,we try to overcome this gap,by proposing an algorithm for replanning for a reactive agent architecture based on decision-theoretic notions.Since optimal plans are computed with reference to a certain world state,if the world state changes,the selected plan may not be appropriate anymore.Instead of plan-ning an alternative solution from the scratch,by re-starting the planning process from the goal,the agent tries to perform replanning on its current plan.The replanning algorithm is based on a partialization process:it proceeds by mak-ing the current solution more partial and then starting the refinement process again. This process is repeated until a new feasible plan is found or the partialization process reaches the topmost action in the plan library(in this case,it coincides with the standard planning process).We take advantage of the decision-theoretic approach on which the planner is based not only for improving the quality of the replanned solution,but also for guiding the replanning process.In particular,the planner ability to evaluate the expected utility of partial plans provides a way to decide whether to continue the partialization process or to re-start refinement:for each partial plan produced in the partialization step,it is possible to make an approximate estimate of whether and with what utility the primitive plans it subsumes achieve the agent’s goal.Then,the pruning heuristic used during the standard planning process to discard sub-optimal plans can be used in the same way during the replanning process to reduce its complexity.2The agent architectureThe architecture is composed of a deliberation module,an execution module,and a sensing module,and relies on a meta-deliberation module to evaluate the need for re-deliberation,following[8].The internal state of the agent is defined by its beliefs about the current world,its goals,and the intentions(plans)it has formed in order to achieve a subset of these goals.The agent’s deliberation and redeliberation are based on decision-theoretic notions:the agent is driven by the overall goal of maximizing its utility based on a set of preferences which are encoded in a utility function.The agent is situated in a dynamic environment,i.e.the world can change indepen-dently from the agent’s actions,and actions can have non-deterministic effects,i.e.,an action can result in a set of alternative effects.Moreover,there is no perfect correspon-dence between the environment actual state and the agent’s representation of it.In this architecture,intentions are not static,and can be modified as a result of re-deliberation:if the agent detects a significant mismatch between the initially expected and the currently expected utility brought about by a plan,the agent revises its intentions by performing re-deliberation.As a result,the agent is likely to become committed to different plans along time,each constituted of a different sequence of actions.However, while the intention to execute a certain plan remains the same until it is dropped or sat-isfied,the commitment to execute single actions evolves continuously as a consequence of both execution and re-deliberation.In order to represent dynamic intentions,separate structures for representing plan-level commitment and action-level commitment have been introduced in the architec-ture.So,intentions are stored in two kind of structures:plans,representing goal-level commitment,and action-executions,representing action-level commitment.New in-stances of the plan structure follow one another in time as a consequence of the agent’sre-deliberation;on the contrary,the action-level commitment of an agent is recorded in a unitary instance of the action-execution structure,called execution record ,whose temporal extent coincides with the agent’s commitment to a goal and which is updated at every cycle.METAgoals intentions subjective worldFig.1.The structure of the agent ar-chitecture.Dashed lines represent dataflow,solid lines represent control flow.The behavior of the agent is con-trolled by an execution-sensing loop witha meta-level deliberation step (see figure1).When this loop is first entered,the de-liberation module is invoked on the initial goal;the goal is matched against the planschemata contained in the library,andwhen a plan schema is found,it is passedto the planner for refinement.This planbecomes the agent’s current intention,andthe agent starts executing it.After execut-ing each action in the plan,the sensingmodule monitors the effects of the action execution,and updates the agent’s repre-sentation of the world.Then,the meta-deliberation module evaluates the updated representation by means of an execution-monitoring function:if the world meetsthe agent’s expectations,there is no need for re-deliberation,and the execution is re-sumed;otherwise,if the agent’s intentions are not adequate anymore to the new envi-ronment,then the deliberation module is assigned the task of modifying them.Due to the agent’s uncertainty about the outcome of the plan,the initial plan isassociated to an expected utility interval,but this interval may vary as the execution of the plan proceeds.More specifically,after the execution of a non-deterministic action (or a conditional action,if the agent did not know at deliberation time what conditional effect would apply),the new expected utility interval is either the same as the one that preceded the execution,or a different one.If it is different,the new upper bound of the expected utility can be the same as the previous one,or it can be higher or lower -that is,an effect which is more or less advantageous than expected has taken place.The execution-monitoring function,which constitutes the core of the meta-deliberationmodule,relies on the agent’s subjective expectations about the utility of a certain plan:this function computes the expected utility of the course of action constituted by the re-maining plan steps in the updated representation of the world.The new expected utility is compared to the previously expected one,and the difference is calculated:replanning is performed only if there is a significant difference.If new deliberation is not necessary,the meta-deliberation module simply updatesthe execution record and releases the control to the execution module,which executes the next action.On the contrary,if new deliberation is necessary,the deliberation mod-ule is given the control and invokes its replanning component on the current plan with the task of finding a better plan;the functioning of the replanning component is inspired to the notion of persistence of intentions ([2]),in that it tries to perform the most lo-cal replanning which allows the expected utility to be brought back to an acceptable difference with the previously expected one.3The planning algorithmThe action library is organised along two abstraction hierarchies.The sequential ab-straction hierarchy is a task decomposition hierarchy:an action type in this hierarchy is a macro-operator which the planner can substitute with a sequence of(primitive or non-primitive)action types.The specification hierarchy is composed of abstract action types which subsume more specific ones.In the following,for simplicity,we will refer to sequentially abstract actions as complex actions and to actions in the specification hierarchy as abstract actions.A plan(see section2)is a sequence of action instances and has associated the goal the plan has been planned to achieve.A plan can be partial both in the sense that some steps are complex actions and in the sense that some are abstract actions.Each plan is associated with the derivation tree(including both abstract and complex actions) which has been built during the planning process and that will be used for driving the replanning phase.Before refining a partial plan,the agent does not know which plan(or plans)-among those subsumed by that partial plan-is the most advantageous according to its prefer-ences.Hence,the expected utility of the abstract action is uncertain:it is expressed as an interval having as upper and lower bounds the expected utility of the best and the worst outcomes produced by substituting in the plan the abstract action with all the more specific actions it subsumes.This property is a key one for the planning process as it makes it possible to compare partial plans which contain abstract actions.The planning process starts from the topmost action in the hierarchy which achieves the given goal.If there is no time bound,it proceeds refining the current plan(s)by substituting complex actions with the associated decomposition and abstract actions with all the more specific actions they subsume,until it obtains a set of plans which are composed of primitive actions.At each cycle the planning algorithm re-starts from a less partial plan:at the be-ginning this plan coincides with the topmost action which achieves the goal,in the subsequent refinement phases it is constituted by a sequence of actions;this feature is relevant for replanning,as it make it possible to use the planner for refining any partial plan,no matter how it has been generated.At each refinement step,the expected utility of each plan is computed by projecting it from the current world state.Then,a pruning heuristic is applied by discarding the plans identified as suboptimal,i.e.,plans whose expected utility upper bound is lower than the lower bound of some other plan.The suboptimality of a plan with respect to means that all possible refinements of have an expected utility which dominates the utility of,and,as a consequence,dominates the utility of all refinements of: consequently,suboptimal plans can be discarded without further refining them.On the contrary,plans which have overlapping utilities need further refinement before the agent makes any choice.At each step of refinement the expected utility interval of a plan tends to become narrower,since it subsumes a reduced number of plans(in fact,the plan appears deeper in the hierarchy of plans).procedure plan replan(plan p,world w){/*find the first action which will fail*/action a:=find-focused-action(p,w);mark a;//set a as the FAplan p’:=p;plan p’’:=p;/*while a solution or the root are not found*/while(not(achieve(p’’,w,goal(p’’)))and has-father(a)){/*look for a partial plan with better utility*/while(not(promising(p’,w,p))and has-father(a)){p’:=partialize(p’);project(p’,w);}//evaluate the action in w/*restart planning on the partial plan*/p’’:=refine(p’,w);}return p’’;}Fig.2.The main procedure of the replanning algorithm,replan4The replanning algorithmIf a replanning phase is entered,then it means that the current plan does not reach the agent’s goal,or that it reaches it with a very low utility compared with the initial expec-tations.But it is possible that the current plan is‘close’to a similar feasible solution, where closeness is represented by the fact that both the current solution and a new fea-sible one are subsumed by a common partial plan at some level of the action abstraction hierarchy.The key idea of the replanning algorithm is then to make the current plan more partial by traversing the abstraction hierarchies in a upsidedown manner,until a more promising abstract plan is found.The abstraction and the decomposition hierarchy play complementary roles in the algorithm:the abstraction hierarchy determines the alter-natives for substituting the actions in the plan,while the decomposition hierarchy is exploited to focus the substitution process on a portion of the plan.A partial plan can be identified as promising by observing its expected utility interval, since this interval includes not only the utility of the(unfeasible)current plan but also the utility of the new solution.So,during the replanning process,it is possible to use this estimate in order to compare the new plan with the expected utility of the more specific plan from which it has been obtained:if it is not promising it is discarded.The starting point of the partialization process inside the plan is thefirst plan step whose preconditions do not hold,due to some event which changed the world or to some failure of the preceding actions.In[5]’s planning framework the Strips-like pre-condition/effect relation is not accounted for:instead,an action is described as a set of conditional effects.The representation of an action includes both the action intended effects,which are obtained when its‘preconditions’hold,and the effects obtained when its‘preconditions’do not hold.For this reason,the notation of the action has been aug-function plan partialize(plan p){action a:=marked-action(p);/*a is the FA of p*//*if it is subsumed by a partial action*/if(abstract(father(a))){delete(a,p);/*delete a from the tree*/return p;}/*no more abstract parents:we are in a decomposition*/else if(complex(father(a)){a1:=find-sibling(a,p);if(null(a1)){/*there is no FA in the decomposition*/mark(father(a))//set the FA//delete the decompositiondelete(descendant(father(a)),p);return p;}else{//change the current FAunmark(a);mark(a1);}}}Fig.3.The procedure for making a plan more abstract,partialize.mented with the information about the action intended effect,which makes it possible to identify its preconditions.1The task of identifying the next action whose preconditions do not hold(the‘fo-cused action’)is accomplished by the Find-focused-action function(see the main pro-cedure in Figure2);mark is the function which sets the current focused action of the plan).Then,starting from the focused action(FA),the replanning algorithm partial-izes the plan,following the derivation tree associated with the plan(see the partializes function in Figure3).If the action type of the FA is directly subsumed by an abstract action type in the derivation tree,the focused action is deleted and the abstract action substitutes it in the tree frontier which constitutes the plan.On the contrary,if FA appears in a decomposi-tion(i.e.,its father in the derivation tree is a sequentially abstract action)then two cases are possible(see thefind-sibling function in4):1.There is some action in the plan which is a descendant of a sibling of FA in thedecomposition and which has not been examined yet:this descendant of the sibling becomes the current FA.The order according to which siblings are considered re-flects the assumption that it is better to replan non-executed actions,when possible: so,right siblings(from the focused action on)are given priority on left siblings. 2.All siblings in the decomposition have been already refined(i.e.,no one has anydescendant):all the siblings of FA and FA itself are removed from the derivation1Since it is possible that more than one condition-effect branch lead to the goal(maybe with different satisfaction degrees),different sets of preconditions can be identified by selecting the condition associated to successful effects.function action find-sibling(a,p){/*get the next action to be refined(in the same decomposition as a)*/ action a0:=right-sibling(a,p);action a1:=leftmost(descendant(a0,p));while(not(null(a1))){/*if it can be partialized*/if(not complex(father(a1))){unmark(a);//change FAmark(a1)return a1;}/*move to next action*/a0:=right-sibling(a0,p);a1:=leftmost(descendant(a0,p));}/*do the same on the left side of the plan*/action a1:=left-sibling(a,p);action a1:=rightmost(descendant(a0,p));while(not(null(a1))){if(not complex(father(a1))){unmark(a);mark(a1)return a1;}action a1:=left-sibling(a,p);}Fig.4.The procedure forfinding the new focused action.tree and replaced in the plan by the complex sequential action,which becomes thecurrent FA(see Figure4).2As discussed in the Introduction,the pruning process of the planner is applied inthe refinement process executed during the replanning phase.In this way,the difficultyoffinding a new solution from the current partial plan is alleviated by the fact that suboptimal alternatives are discarded before their refinement.Beside allowing the pruning heuristic,however,the abstraction mechanism has an-other advantage.Remember that,by the definition of abstraction discussed in Section2,it appears that,given a world state,the outcome of an abstract action includes the outcomes of all the actions it subsumes.Each time a plan is partialized,the resulting plan has an expected utility inter-val that includes the utility interval of.However subsumes also other plans whose outcomes are possibly different from the outcome of.At this point,two cases are pos-sible:either the other plans are better than or not.In thefirst case,the utility of willhave an higher higher bound with respect to,since it includes all the outcomes of the subsumed plans.In the second case,the utility of will not have a higher upper boundthan.Hence,is not more promising than the less partial plan.The algorithm exploits this property(see the promising condition in the procedure re-plan)to decide when the iteration of the partialization step must be stopped:when a2Since an action type may occur in multiple decompositions3,in order to understand which decomposition the action instance appears into,it is not sufficient to use the action type library,but it is necessary to use the derivation tree).promising partial plan (i.e.,a plan which subsumes better alternatives than the previous one)is reached,the partialization process ends and the refinement process is restarted on the current partial plan.The abstraction hierarchy has also a further role.The assumption underlying our strategy is that a plan failure can often be resolved locally ,within the subtree the fo-cused action appears into.Not all failures,however,can be resolved locally,but these cases are taken into account by the algorithm as well:after the current subtree has been completely partialized,a wider subtree in the derivation tree will be considered,until the topmost root action is reached:in this case,the root of the derivation tree becomes the FA and the planning process is restarted from scratch.In case of non-local causal dependencies among actions (i.e.,a precondition of the FA is enabled by the effect of an action which does not appear in the local context of FA),the algorithm takes advantage from the fact that the current partial plan is projected onto its final state and its expected utility is computed:provided that the definition of abstract action operators is sufficiently accurate to make casual dependencies explicit,it is likely that invalid dependencies will be reflected in the expected utility of the current partial plan,and,as a consequence,it will be pruned during refinement without being further expanded.Fig.5.A generic action hierarchy.Ab-straction relations are represented by dashed lines.Finally,the movement of the FA is acritical point of the algorithm.Here wepresented find-sibling as a simple pro-cess which follows the local structure ofthe tree.However,some improvementsare possible to take advantage from thecases in which non local dependencies are known.Hence,the find-sibling proce-dure should be modified in order to usein deeper way the structure of the plans and,in particular,the implicit enablementlinks among actions for choosing the next FA.For the sake of brevity,in order to illustrate how the replanning algorithm works,we will resort to a generic action hierarchy (see fig.5),which abstracts outthe details of the domains we used to test the implementation.In the following,we will examine the replanning process that the algorithm would per-form,given the initial plan composed of the steps--(see fig.6).1.Assume that,at the beginning of the replanning process,the focused step is action(1).is examined first,but an alternative instantiation of it cannot be found (as its immediate parent is not an abstract action).The find-siblings function returns the right sibling of ,.2.The planner is given as input the partial plan--.Assuming that a feasible plan is not found (i.e.,--,the only possible alternative to the originalA AC’Fig.6.A graphical representation of the plan replanning process on the generic action library introduced in5.Black nodes represent the siblings of the focused action node,while the grey nodes represent the local decomposition context.(1)-(2)-(3)represent the phases of the replanning process.plan,does not work),the replanning process is started again after collapsing the sub-plan()on its father,the complex node(no siblings left).3.Given the new input plan-,the focused step now is(2).The focused stepis examinedfirst,and the more abstract father node is found;is replaced by in the plan and the planner is invoked on the new partial plan-.4.Again,assuming that a new feasible plan has not been found by refining-,thereplanning process continues by examining,the only sibling of the focused action(3).Before the candidate plan is collapsed on its root(),the replanning processgives the planner as input the plan obtained by substituting the more abstract node for in the current partial plan,obtaining-.5.Finally,if the refinement of the partial plan-does not yield a feasible plan,the plan is collapsed on the its father.If a feasible plan is not found by refining the plan constituted by the root alone,the plan replanning algorithm fails.In the previous version of the algorithm,thefind-sibling step proceeds not only from left to right(towards actions yet to be executed),but also in a backward manner: at a certain point it is possible that the focused point is shifted to an already executed actions.In order to overcome this problem,we propose that the projection rule should is changed to include in the projection the actions that must be executed again(possibly in an alternative way).In this case,the FA would be moved incrementally to the left, and would become the reference point for starting the projection of the current partial plan.5Related Work and Conclusions[6]has proposed a similar algorithm for an SNLP planner.The algorithm searches for a plan similar to known onesfirst by retracting refinements:i.e.,actions,constraints and causal links.In order to remove the refinements in the right order,[6]add to the plan an history of‘reasons’explaining why each new element has been inserted.In a similar way,our algorithm adapts the failed plan to the new situation by retracting refinements,even if in the sense of more specific actions and decompositions.The same role played by‘reasons’is embodied in the derivation tree associated to the plan which explains the structure of the current plan and guides the partialization process.As it has been remarked on by([7]),reusing existing plans raises complexity issues. They show that modifying existing plans is advantageous only under some conditions: in particular,when,as in our proposal,it is employed in a replanning context(instead of a general plan-reuse approach to planning)in which it is crucial to retain as many steps as possible of the plan the agent is committed to.Second,when the complexity of generating plans from the scratch is hard,as in the case of our decision-theoretic planner.For what concerns the complexity issues,it must be noticed that the replanning algorithm works in a similar way as the iterative deepening algorithm.At each stage, the height of the tree of the state space examined increases.The difference with the standard search algorithm is that,instead of starting the search from the tree root and stopping at a certain depth,we start from a leaf of the plan space and,at each step,we select a higher tree which rooted by one of the ancestors of the leaf.In the worst case,the order of complexity of the replanning algorithm is the same as the standard planning algorithm.However,two facts that reduce the actual work performed by the replanning algorithm must be taken into account:first,if the assumption that a feasible solution is“close”to the current plan is true,then the height of the tree which includes both plans is lower than the height of root of the whole state space.Second,the pruning heuristics is used to prevent the refinement of some of the intermediate plans in the search space,reducing the number of refinement runs performed.Finally,it is worth mentioning that the replanning algorithm we propose is complete, in that itfinds the solution if one exists,but it does not necessarilyfinds the optimal solution:the desirability of an optimal solution,in fact,is subordinated to the notions of resource-boundedness and to the persistence of intentions,which tend to privilege conservative options.References1.M.E.Bratman,Intention,Plans,and Practical Reason,Harvard University Press,Cambridge(MA),1987.2.M.E.Bratman,D.J.Israel,and M.E.Pollack,‘Plans and resource-bounded practical reason-ing’,Computational Intelligence,4,349–355,(1988).3.Vu Ha and Peter Haddawy,‘Theoretical foundations for abstraction-based probabilistic plan-ning’,in Twelfth Conference on Uncertainty in Artificial Intelligence,pp.291–298,Portland.4.P.Haddawy and S.Hanks,‘Utility models for goal-directed,decision-theoretic planners’,Computational Intelligence,14,392–429,(1998).5.P.Haddawy and M.Suwandi,‘Decision-theoretic refinement planning using inheritance ab-straction’,in Proc.of2nd AIPS Int.Conf.,pp.266–271,Menlo Park,CA,(1994).6.Steve Hanks and Daniel S.Weld,‘A domain-independent algorithm for plan adaptation’,Jour-nal of Artificial Intelligence Research,2,319–360,(1995).7. B.Nebel and J.Koehler,‘Plan modification versus plan generation:A complexity-theoreticperspective’,in Proceedings of of the13th International Joint Conference on Artificial Intel-ligence,pp.1436–1441,Chambery,France,(1993).8.Mike Wooldridge and Simon Parsons,‘Intention reconsideration reconsidered’,in Proc.ofATAL-98),eds.,J¨o rg M¨u ller,Munindar P.Singh,and Anand S.Rao,volume1555,pp.63–80.Springer-Verlag,(1999).。

相关文档
最新文档