14. Handling the SerialPort DataReceived event in a C# GUI in real time

MSDN: when data is received from a SerialPort object, the DataReceived event is raised on a secondary thread.

Because this event is raised on a secondary thread rather than the main thread, attempting to modify elements owned by the main thread (such as UI elements) from the handler raises a cross-thread exception.

If you need to modify elements of the main Form or a Control, you must post the change request back with Invoke, which executes it on the correct thread. So to display the data read on the secondary thread in a control on the main thread's form, you call the Invoke method: the commands passed to Invoke are executed on the thread that owns the control on which Invoke is called (here, the this.tB_ReceiveDate control).

Here is a code example:

```csharp
private void serialPort1_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    // Read one byte of data from the serial port.
    int SDateTemp = this.serialPort1.ReadByte();

    // Invoke(Delegate) executes the delegate on the thread that owns the
    // control's underlying window handle, i.e. in the form that owns the
    // tB_ReceiveDate control.
    this.tB_ReceiveDate.Invoke(
        // MethodInvoker represents a delegate that can execute any method in
        // managed code that is declared void and takes no parameters. Use it
        // when calling a control's Invoke method, or whenever you need a
        // simple delegate and do not want to define one yourself.
        new MethodInvoker(
            // Anonymous method, new in C# 2.0: a complete block of code passed
            // as a parameter, which lets you wire up the event response
            // directly through a delegate.
            delegate
            {
                // Everything in this block runs on the main (UI) thread.
                // Do not do too much work here: the Windows message loop
                // processes messages serially, so a long-running handler
                // blocks the main UI thread and the form lags or freezes.
                this.tB_ReceiveDate.AppendText(SDateTemp.ToString()); // show the byte in the textbox
            }));
}
```

In practice, when the serial port delivers data quickly (say, above 100 Hz), this approach performs very poorly: Invoke posts work that would otherwise run on the secondary thread into the main thread's message queue, which easily congests the queue and makes the UI unresponsive.

8D Corrective Action Report

1. Experience gained (lessons learned from this incident):
2. Action plan for similar problems (improvement actions for similar problems):
3. Action follow-up results:

Discipline 8: If the same problem happens again, please describe the reason:
1. How to prevent recurrence?
2. Does the corrective action risk causing other problems?
   □ Yes □ No
   - If yes, please make an action plan:
     □ Yes □ No
   - If the improvement results are unsatisfactory, please describe them:
- Closed status: □ □
- Closed date:
Prepared by: / Approved by:
3. Is it necessary to update the related process documents?
   □ Yes □ No
   - Document update date: / Document effective date:
   - Execution effect of the updated sections and confirmation/date:
4. How is this defect detected?
5. How to identify the improved products (which production dates are post-improvement)?
   - Customer-returned goods may be: returned to the company □ / scrapped locally at the customer's site □ / disposed of by the customer on our behalf □

Using the date command to look back in time

Abstract: 1. Introduction 2. Basic concepts and uses of the date command 3. Looking back in time and its characteristics 4. A worked example 5. Summary

[Introduction] On Linux systems, the date command is a common time-handling tool: it can display and set the system time, and format dates and times. Its ability to display moments other than "now" (looking back to earlier times) is a very practical feature that helps us understand and manage time. This article introduces that usage and its characteristics.

[Basic concepts and uses of the date command] The date command is one of the most frequently used time tools on Linux. Its basic syntax is:

```
date [OPTION]... [+FORMAT]
```

Common uses include:

- Display the current date and time: `date`
- Display the date and time in a given format: `date '+%Y-%m-%d %H:%M:%S'`
- Display a moment other than the current one (GNU date): `date -d STRING`
- Set the system time to a specific time (requires root): `date -s STRING`

[Looking back in time and its characteristics] The syntax depends on the implementation. GNU date, the implementation on most Linux distributions, takes a relative expression with the `-d` option:

```
date -d '1 hour ago'
```

BSD and macOS date instead use the `-v` option with an explicit adjustment, for example one hour back:

```
date -v-1H
```

The characteristics of this usage are:

1. It displays a moment offset from the current time; it does not change the system clock.
2. The output uses the usual date format ("Www Mmm dd hh:mm:ss ZZZ yyyy") and depends on the current locale and time zone.
3. Any offset can be expressed, not just one hour: seconds, minutes, hours, days, months or years (e.g. `date -d '2 days ago'` on GNU, `date -v-2d` on BSD).

[Worked example] Below we demonstrate looking back in time with the date command. Suppose the current time is 14:30:00 on 10 August 2022, and we run:

```
date -d '1 hour ago'    # GNU/Linux
date -v-1H              # BSD/macOS
```

The output is:

```
Wed Aug 10 13:30:00 CST 2022
```

that is, the moment exactly one hour before the current time: 13:30:00 on 10 August 2022.
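Because the output of a bare `date` depends on the moment it runs, a reproducible way to experiment with relative expressions is to anchor them to a fixed reference time instead of "now". A minimal sketch using GNU date's `-d` relative-item syntax shown above; `TZ=UTC` pins the time zone so the result does not depend on the local configuration:

```shell
#!/bin/sh
# Anchor relative date calculations to a fixed reference time, not "now",
# and pin the time zone so the output is reproducible.
TZ=UTC date -d '2022-08-10 14:30:00 1 hour ago' '+%F %T'   # 2022-08-10 13:30:00
TZ=UTC date -d '2022-08-10 14:30:00 2 days ago' '+%F %T'   # 2022-08-08 14:30:00
TZ=UTC date -d '2022-08-10 14:30:00 1 month'    '+%F %T'   # 2022-09-10 14:30:00
```

The `'+%F %T'` format (`YYYY-MM-DD HH:MM:SS`) avoids the locale-dependent default output shown earlier.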

Analysis of the MIME protocol and email message format

Email is perhaps the most widespread application on the Internet, and it is the foundation of most of today's networked office workflows. There are many different mail servers, but nearly all of them follow a family of message-format rules based on RFC 822, "Standard for the Format of ARPA Internet Text Messages", published in 1982. RFCs (Requests for Comments) are the documents used to define Internet working standards. We rarely notice these protocols doing their work quietly during mail exchange, but that does not make their role any less important, and a message holds many details that are not obvious at first glance.

RFC 822 specifies that a message consists of a required header, made up of header fields, and an optional body. Everything from the first line of the message up to the first empty line is the header. The header defines the basic elements of the message, its routing information, and so on.

In Outlook Express you can select a message and open its properties: the Details tab shows that message's header. You can also save a message as a .eml file; since it is a plain-text file, opening it in any ordinary editor shows the message contents.

The header is composed of individual header fields. Each header field consists of a field name (field-name) and a field body (field-body), separated by ":". Each header field can be regarded as an independent piece of text made of ASCII characters. Common header fields include "Return-Path", "Received", "Date", "From", "Subject", "Sender", "To", "Cc", and "MIME-Version". No particular ordering of the header fields is required. Just as each field has its own name, each also carries its own specific meaning.
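The header/body split described above can be seen directly with Python's standard `email` module. A minimal sketch; the message text is an invented example, not a real mailbox entry:

```python
# Parse a raw RFC 822-style message and list its header fields.
from email.parser import Parser

raw = (
    "Return-Path: <alice@example.com>\n"
    "Date: Wed, 10 Aug 2022 14:30:00 +0800\n"
    "From: Alice <alice@example.com>\n"
    "To: Bob <bob@example.org>\n"
    "Subject: Hello\n"
    "MIME-Version: 1.0\n"
    "\n"                      # the first empty line ends the header
    "This is the body.\n"
)

msg = Parser().parsestr(raw)
# Each header field is a field-name / field-body pair separated by ":".
for name, body in msg.items():
    print(f"{name}: {body}")
print("--body--")
print(msg.get_payload(), end="")
```

Saving a message from a mail client as a .eml file and reading it with `Parser().parse(open(path))` works the same way, since .eml is exactly this plain-text format.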

Translation of net_due_date

net_due_date translates into Chinese as “净到期日” (net due date).

The net due date is the final deadline, specified in a transaction, by which the amount payable must be paid. It is an agreement between the supplier and the buyer that ensures payment is made on time. Net due dates are typically used in the sales and purchase contracts of commercial transactions: the supplier and the buyer agree in the contract on a deadline for the amount payable. This date is usually defined as a certain period counted from the invoice date, by which the buyer must pay the supplier.

Here are some example sentences showing its usage:

1. Please make sure to pay the invoice within the net due date.
2. The net due date for this purchase order is 30 days from the date of delivery.
3. Failure to pay the invoice by the net due date may result in late payment penalties.
4. The net due date can be negotiated between the buyer and the seller.
5. It is important to track the net due dates of all outstanding invoices to ensure timely payment.

In short, the net due date is the final payment deadline for the amount payable in a commercial transaction. The buyer must pay the supplier by this date to ensure the transaction completes on time.
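The arithmetic behind terms such as "net 30" (example sentence 2 above) is simply invoice date plus the agreed number of days. A small sketch; the function and variable names are ours, not from any accounting system:

```python
# "Net N" payment terms: the net due date is N days after the invoice date.
from datetime import date, timedelta

def net_due_date(invoice_date: date, net_days: int) -> date:
    """Return the last day by which the invoiced amount must be paid."""
    return invoice_date + timedelta(days=net_days)

invoice = date(2022, 8, 10)
print(net_due_date(invoice, 30))   # 2022-09-09
print(net_due_date(invoice, 60))   # 2022-10-09
```

Real contracts may instead count from the delivery date, as in example sentence 2; only the starting date changes, not the calculation.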

Received (received date) Revised (revised date)

Automatic mapping of ASSIST applications using process algebra

Marco Aldinucci
Dept. of Computer Science, University of Pisa
Largo B. Pontecorvo 3, Pisa I-56127, Italy

and

Anne Benoit
LIP, Ecole Normale Superieure de Lyon (ENS)
46 allée d'Italie, 69364 Lyon Cedex 07, France

Received (received date)
Revised (revised date)
Communicated by (Name of Editor)

ABSTRACT

Grid technologies aim to harness the computational capabilities of widely distributed collections of computers. Due to the heterogeneous and dynamic nature of the set of grid resources, the programming and optimisation burden of a low-level approach to grid computing is clearly unacceptable for large-scale, complex applications. The development of grid applications can be simplified by using high-level programming environments. In the present work, we address the problem of the mapping of a high-level grid application onto the computational resources. In order to optimise the mapping of the application, we propose to automatically generate performance models from the application using the process algebra PEPA. We target applications written with the high-level environment ASSIST, since the use of such a structured environment allows us to automate the study of the application more effectively.

Keywords: high-level parallel programming; ASSIST environment; Performance Evaluation Process Algebra (PEPA); automatic model generation.

1. Introduction

A grid system is a geographically distributed collection of possibly parallel, interconnected processing elements, which all run some form of common grid middleware (e.g. Globus) [13]. The key idea behind grid-aware applications is to make use of the aggregate power of distributed resources, thus benefiting from a computing power that falls far beyond the current availability threshold in a single site. However, developing programs able to exploit this potential is highly programming intensive. Programmers must design concurrent programs that can execute on large-scale platforms that cannot be assumed to be
homogeneous, secure, reliable or centrally managed. They must then implement these programs correctly and efficiently. As a result, in order to build efficient grid-aware applications, programmers have to address the classical problems of parallel computing as well as grid-specific ones:

1. Programming: code all the program details, take care about concurrency exploitation, among the others: concurrent activities set up, mapping/scheduling, communication/synchronisation handling and data allocation.

2. Mapping & Deploying: deploy application processes according to a suitable mapping onto grid platforms. These may be highly heterogeneous in architecture and performance and unevenly connected, thus exhibiting different connectivity properties among all pairs of platforms.

3. Dynamic environment: manage resource unreliability and dynamic availability, network topology, latency and bandwidth unsteadiness.

Hence, the number and quality of problems to be resolved in order to draw a given QoS (in terms of performance, robustness, etc.) from grid-aware applications is quite large. The lesson learnt from parallel computing suggests that any low-level approach to grid programming is likely to raise the programmer's burden to an unacceptable level for any real world application. Therefore, we envision a layered, high-level programming model for the grid, which is currently pursued by several research initiatives and programming environments, such as ASSIST [19], eSkel [9], GrADS [17], ProActive [6], Ibis [18]. In such an environment, most of the grid specific efforts are moved from programmers to grid tools and run-time systems.
Thus,the programmers have only the responsibility of organising the application specific code,while the developing tools and their run-time systems deal with the interaction with the grid,through collective protocols and services[12].In such a scenario,the QoS and performance constraints of the application can either be specified at compile time or varying at run-time.In both cases,the run-time system should actively operate in order to fulfil QoS requirements of the application,since any static resource assignment may violate QoS constraints due to the very uneven performance of grid resources over time.As an example,AS-SIST applications exploit an autonomic(self-optimisation)behaviour.They may be equipped with a QoS contract describing the degree of performance the application is required to provide.The ASSIST run-time environment tries to keep the QoS contract valid for the duration of the application run despite possible variations of platforms’performance at the level of grid fabric[5].The autonomic features of an ASSIST application rely heavily on run-time application monitoring,and thus they are not fully effective for application deployment since the application is not yet running.In order to deploy an application onto the grid,a suitable mapping of application processes onto grid platforms should be established,and this process is quite critical for application performance.This problem can be addressed by defining a performance model of an ASSIST application in order to statically optimise the mapping of the application onto a heterogeneous environment.The model is generated from the source code of the application,before the initial mapping.It is expressed with the process algebra PEPA[15],designed for performance evaluation.The use of a stochastic model allows us to take into account aspects of uncertainty which are inherent to grid computing,and to use classical techniques of resolution based on Markov chains toAutomatic mapping of ASSIST applications 
using process algebra obtain performance results.This static analysis of the application is complemen-tary with the autonomic reconfiguration of ASSIST applications,which works on a dynamic basis.In this work we concentrate on the static part to optimise the map-ping,while the dynamic management is done at run-time.It is thus an orthogonal but complementary approach.Structure of the paper.The next section introduces the ASSIST high-level pro-gramming environment and its run-time support.Section3introduces the Per-formance Evaluation Process Algebra PEPA,which can be used to model ASSIST applications.These performance models help to optimise the mapping of the ap-plication.We present our approach in Section4,and give an overview of future working directions.Finally,concluding remarks are given in Section5.2.The ASSIST environment and its run-time supportASSIST(A Software System based on Integrated Skeleton Technology)is a pro-gramming environment aimed at the development of distributed high-performance applications[19,3].ASSIST applications should be compiled in binary packages that can be deployed and run on grids,including those exhibiting heterogeneous platforms.Deployment and run is provided through standard middleware services(e.g.Globus)enriched with the ASSIST run-time support.2.1.The ASSIST coordination languageASSIST applications are described by means of a coordination language,which can express arbitrary graphs of modules,interconnected by typed streams of data. 
Each stream realises a one-way asynchronous channel between two sets of endpoint modules:sources and sinks.Data items injected from sources are broadcast to all sinks.Modules can be either sequential or parallel.A sequential module wraps a sequential function.A parallel module(parmod)can be used to describe the parallel execution of a number of sequential functions that are activated and run as Virtual Processes(VPs)on items arriving from input streams.The VPs may synchronise with the others through barriers.The sequential functions can be programmed by using a standard sequential language(C,C++,Fortran,Java).A parmod may behave in a data-parallel(e.g.SPMD/apply-to-all)or task-parallel(e.g.farm)way and it may exploit a distributed shared state that survives the VPs lifespan.A module can nondeterministically accept from one or more input streams a number of input items,which may be decomposed in parts and used as function parameters to instantiate VPs according to the input and distribution rules specified in the parmod.The VPs may send items or parts of items onto the output streams,and these are gathered according to the output rules.An ASSIST application is sketched in Appendix A.We briefly describe here how to code an ASSIST application and its modules;more details on the particular application in Appendix A are given in Section4.1.In lines4–5four streams with type task t are declared.Lines6–9define endpoints of streams.Overall,Parallel Processing Letters ASSISTcompiler seq P1parmod VP VP VP binary les QoScontract ASSIST programresourcedescription(XML)VP VP VP VP VP VP VP VP VP output section input section binary code+XML (network of processes)ISM OSM P1P2VP VPVP VP VP VPMVP seq P2sourcecode Fig.An ASSIST application QoS contract are compiled in a set of executable codes and its meta-data [3].This information is used to set up a processes network at launch time.lines 3–10define the application graph of modules.In lines 12–16two sequential modules are 
declared:these simply provide a container for a sequential function invocation and the binding between streams and function parameters.In lines 18–52two parmods are declared.Each parmod is characterised by its topology ,input section ,virtual processes ,and output section declarations.The topology declaration specialises the behaviour of the VPs as farm (topol-ogy none ,as in line 41),or SMPD (topology array ).The input section en-ables programmers to declare how VPs receive data items,or parts of items,from streams.A single data item may be distributed (scattered,broadcast or unicast)to many VPs.The input section realises a CSP repetitive command [16].The virtual processes declarations enable the programmer to realise a parametric VP starting from a sequential function (proc ).VPs may be identified by an index and may synchronise and exchange data one with another through the ASSIST lan-guage API.The output section enables programmers to declare how data should be gathered from VPs to be sent onto output streams.More details on the ASSIST coordination language can be found in [19,3,2].2.2.The ASSIST run-time supportThe ASSIST compiler translates a graph of modules into a network of processes.As sketched in Fig.1,sequential modules are translated into sequential processes,while parallel modules are translated into a parametric (w.r.t.the parallelism de-gree)network of processes:one Input Section Manager (ISM),one Output Section Manager (OSM),and a set of Virtual Processes Managers (VPMs,each of them running a set of Virtual Processes).The actual parallelism degree of a parmod instance is given by the number of VPMs.All processes communicate via ASSIST support channels,which can be implemented on top of a number of grid middleware communication mechanisms (e.g.shared memory,TCP/IP,Globus,CORBA-IIOP,SOAP-WS).The suitable communication mechanism between each pair of processes is selected at launch time depending on the mapping of the processes [3].Automatic mapping of 
ASSIST applications using process algebra 2.3.Towards fully grid-aware applicationsASSIST applications can already cope with platform heterogeneity,either in space(various architectures)or in time(varying load)[5,2].These are definite fea-tures of a grid,however they are not the only ones.Grids are usually organised in sites on which processing elements are organised in networks with private ad-dresses allowing only outbound connections.Also,they are often fed through job schedulers.In these cases,setting up a multi-site parallel application onto the grid is a challenge in its own right(irrespectively of its performance).Advance reser-vation,co-allocation,multi-site launching are currently hot topics of research for a large part of the grid community.Nevertheless,many of these problems should be targeted at the middleware layer level and they are largely independent of the logical mapping of application processes on a suitable set of resources,given that the mapping is consistent with deployment constraints.In our work,we assume that the middleware level supplies(or will supply) suitable services for co-allocation,staging and execution.These are actually the minimal requirements in order to imagine the bare existence of any non-trivial, multi-site parallel application.Thus we can analyse how to map an ASSIST ap-plication,assuming that we can exploit middleware tools to deploy and launch applications[11].3.Introduction to performance evaluation and PEPAIn this section,we briefly introduce the Performance Evaluation Process Algebra PEPA[15],with which we can model an ASSIST application.The use of a process algebra allows us to include the aspects of uncertainty relative to both the grid and the application,and to use standard methods to easily and quickly obtain performance results.The PEPA language provides a small set of combinators. 
These allow language terms to be constructed defining the behaviour of components, via the activities they undertake and the interactions between them.We can for instance define constants(def=),express the sequential behavior of a given component (.),a choice between different behaviors(+),and the direct interaction betweencomponents(£¡L ,||).Timing information is associated with each activity.Thus,when enabled,an activity a=(α,r)will delay for a period sampled from the negative exponential distribution which has parameter r.If several activities are enabled concurrently,either in competition or independently,we assume that a race condition exists between them.When an activity is known to be carried out in cooperation with another component,a component may be passive with respect to that activity.This means that the rate of the activity is left unspecified,(denoted ),and is determined upon cooperation by the rate of the activity in the other component.All passive actions must be synchronised in thefinal model.The dynamic behaviour of a PEPA model is represented by the evolution of its components,as governed by the operational semantics of PEPA terms[15].Thus, as in classical process algebra,the semantics of each term is given via a labelledParallel Processing Lettersmulti-transition system(the multiplicity of arcs are significant).In the transition system a state corresponds to each syntactic term of the language,or derivative,and an arc represents the activity which causes one derivative to evolve into another. 
The complete set of reachable states is termed the derivative set and these form the nodes of the derivation graph,which is formed by applying the semantic rules exhaustively.The derivation graph is the basis of the underlying Continuous Time Markov Chain(CTMC)which is used to derive performance measures from a PEPA model.The graph is systematically reduced to a form where it can be treated as the state transition diagram of the underlying CTMC.Each derivative is then a state in the CTMC.The transition rate between two derivatives P and Q in the derivation graph is the rate at which the system changes from behaving as component P to behaving as Q.Examples of derivation graphs can be found in[15].It is important to note that in our models the rates are represented as ran-dom variables,not constant values.These random variables are exponentially dis-tributed.Repeated samples from the distribution will follow the distribution and conform to the mean but individual samples may potentially take any positive value.The use of such distribution is quite realistic and it allows us to use stan-dard methods on CTMCs to readily obtain performance results.There are indeed several methods and tools available for analysing PEPA models.Thus,the PEPA Workbench[14]allows us to generate the state space of a PEPA model and the in-finitesimal generator matrix of the underlying Markov chain.The state space of the model is represented as a sparse matrix.The PEPA Workbench can then compute the steady-state probability distribution of the system,and performance measures such as throughput and utilisation can be directly computed from this.4.Performance models of ASSIST applicationPEPA can easily be used to model an ASSIST application since such applications are based on stream communications,and the graph structure deduced from these streams can be modelled with PEPA.Given the probabilistic information about the performance of each of the ASSIST modules and streams,we then aim tofind 
information about the global behavior of the application,which is expressed by the steady-state of the system.The model thus allows us to predict the run-time behavior of the application in the long time run,taking into account information obtained from a static analysis of the program.This behavior is not known in advance,it is a result of the PEPA model.4.1.The ASSIST applicationAs we have seen in Section2,an ASSIST application consists of a series of modules and streams connecting the modules.The structure of the application is represented by a graph,where the modules are the nodes and the streams the arcs.We illustrate in this paper our modeling process on an example of a graph, but the process can be easily generalised to any ASSIST applications since theAutomatic mapping of ASSIST applications using process algebra M3M4M2M1s1s3s2Figure 2:Graph representation of our example application.information about the graph can be extracted directly from ASSIST source code,and the model can be generated automatically from the graph.A model of a data mining classification algorithm has been presented in [1].For the purpose of our methodology and in order to generalise our approach,we concentrate here only on the graph of an application.The graph of the application that we consider in this paper is similar to the one of [1],consisting of four modules.Figure 2represents the graph of this application.We choose this graph as an application example,since this is a very common workflow pattern.In such a schema,•one module (M1)is generating input,for instance reading from a file or ac-cessing a database;•two modules (M2,M3)are interacting in a client-server way;they can interact one or several times for each input,in order to produce a result;•the result is sent to a last module (M4)which is in charge of the output.4.2.The PEPA modelEach ASSIST module is represented as a PEPA component,and the different components are synchronised through the streams of data to model the overall 
application. The PEPA components of the modules are shown in Table 1. The modules are working in a sequential way: the module MX (X = 1..4) is initially in the state MX1, waiting for data on its input streams. Then, in the state MX2, it processes the piece of data and evolves to its third state MX3. Finally, the module sends the output data on its output streams and goes back into its first state.

The system evolves from one state to another when an activity occurs. The activity sX (X = 1..4) represents the transfer of data through the stream X, with the associated rate λX. The rate reflects the complexity of the communication. The activity pX (X = 1..4) represents the processing of a data item by module MX, which is done at a rate µX. These rates are related to the theoretical complexity of the modules. A discussion on rates is done in Section 4.3.

The overall PEPA model is then obtained by a collaboration of the different modules in their initial states:

(M11 <s1> (M21 <s2,s3> M31)) <s4> M41

4.3. Automatic generation of the model

The PEPA model is automatically generated from the ASSIST source code. This task is simplified thanks to some information provided by the user directly in the source code, and particularly the rates associated to the different activities of the PEPA model.

The rates are directly related to the theoretical complexity of the modules and of the communications. In particular, rates of the communications depend on: a) the speed of the links, and b) data size and communication frequencies. A module may include a parallel computation, thus its rate depends on a) the computing power of the platforms running the module, and b) the parallel computation complexity, its size, its parallel degree, and its speedup. Observe that aspect a) of both module and communication rates strictly depends on the mapping, while aspect b) is much more dependent on the application's logical structure and algorithms.

We are interested in the relative computational and communication costs of the
different parts of the system, but we define numerical values to allow a numerical resolution of the PEPA model. This information is defined directly in the ASSIST source code of the application by calling a rate function, in the body of the main procedure of the application (Appendix A, between lines 9 and 10). This function takes as a parameter the name of the modules and streams, and it should be called once for each module and each stream to fix the rates of the corresponding PEPA activities. We can define several sets of rates in order to compare several PEPA models. The values for each set are defined between brackets, separated with commas, as shown in the example below.

rate(s1) = (10,1000);   rate(s2) = (10,1);
rate(s3) = (10,1);      rate(s4) = (10,1000);
rate(M1) = (100,100);   rate(M2) = (100,100);
rate(M3) = (1,1);       rate(M4) = (100,100);

The PEPA model is generated during a precompilation of the source code of ASSIST. The parser identifies the main procedure and extracts the useful information from it: the modules and streams, the connections between them, and the rates of the different activities. The main difficulty consists in identifying the schemes of input and output behaviour in the case of several streams. This information can be found in the input and output section of the parmod code. Regarding the input section, the parser looks at the guards. Details on the different types of guards can be found in [19,3].

Table 1: PEPA model for the example

M11 def= M12                    M21 def= (s1,⊤).M22 + (s2,⊤).M22
M12 def= (p1,µ1).M13            M22 def= (p2,µ2).M23
M13 def= (s1,λ1).M11            M23 def= (s3,λ3).M21 + (s4,λ4).M21

M31 def= (s3,⊤).M32             M41 def= (s4,⊤).M42
M32 def= (p3,µ3).M33            M42 def= (p4,µ4).M43
M33 def= (s2,λ2).M31            M43 def= M41

As an example, a disjoint guard means that the module takes input from either of the streams when some data arrives. This is translated by a choice in the PEPA model, as illustrated in our example. However, some more complex behaviour may also be expressed, for instance the parmod can
be instructed to start executing only when it has data from both streams. In this case, the PEPA model is changed with some sequential composition to express this behaviour. For example:

M21 def= (s1,⊤).(s2,⊤).M22 + (s2,⊤).(s1,⊤).M22

Currently, we are not supporting variables in guards, since these may change the frequency of accessing data on a stream. Since the variables may depend on the input data, we cannot automatically extract static information from them. We plan to address this problem by asking the programmer to provide the relative frequency of the guard. The considerations for the output section are similar.

The PEPA model generated by the application for a given set of rates is represented below:

mu1 = 100; mu2 = 100; mu3 = 1; mu4 = 100;
la1 = 10; la2 = 10; la3 = 10; la4 = 10;

M11 = M12;
M12 = (p1,mu1).M13;
M13 = (s1,la1).M11;
M21 = (s1,infty).M22 + (s2,infty).M22;
M22 = (p2,mu2).M23;
M23 = (s3,la3).M21 + (s4,infty).M21;
M31 = (s3,infty).M32;
M32 = (p3,mu3).M33;
M33 = (s2,la2).M31;
M41 = (s4,la4).M42;
M42 = (p4,mu4).M43;
M43 = M41;

(M11 <s1> (M21 <s2,s3> M31)) <s4> M41

4.4. Performance results

Once the PEPA models have been generated, performance results can be obtained easily with the PEPA Workbench [14]. The performance results are the probability to be in either of the states of the system. We compute the probability to be waiting for a processing activity pX, or to wait for a transfer activity sX. Some additional information is generated in the PEPA source code (file example.pepa) to specify the performance results that we are interested in. This information is the following:

perf_M1 = 100 * {M12 || ** || ** || **};   perf_M2 = 100 * {** || M22 || ** || **};
perf_M3 = 100 * {** || ** || M32 || **};   perf_M4 = 100 * {** || ** || ** || M42};
perf_s1 = 100 * {M13 || M21 || ** || **};  perf_s2 = 100 * {** || M21 || M33 || **};
perf_s3 = 100 * {** || M23 || M31 || **};  perf_s4 = 100 * {** || M23 || ** || M41};

The expression in brackets describes the states of the PEPA model corresponding to a particular state of the system. For each module MX (X = 1..4), the result perf_MX corresponds to the percentage of time spent waiting to process
this module. The steady-state probability is multiplied by 100 for readability and interpretation reasons. A similar result is obtained for each stream. We expect the complexity of the PEPA model to be quite simple and the resolution straightforward for most of the ASSIST applications. In our example, the PEPA model consists of 36 states and 80 transitions, and it requires less than 0.1 seconds to generate the state space of the model and to compute the steady state solution, using the linear biconjugate gradient method [14].

Experiment 1. For the purpose of our example, we choose the following rates, meaning that the module M3 is computationally more intensive than the other modules. In our case, M3 has an average duration of 1 sec. compared to 0.01 sec. for the others (µ1 = 100; µ2 = 100; µ3 = 1; µ4 = 100). The rates for the streams correspond to an average duration of 0.1 sec (λ1 = 10; λ2 = 10; λ3 = 10; λ4 = 10). The results for this example are shown in Table 2 (row Case 1).

These results confirm the fact that most of the time is spent in module M3, which is the most computationally demanding. Moreover, module M1 (respectively M4) spends most of its time waiting to send data on s1 (respectively waiting to receive data from s4). M2 is computing quickly, and this module is often receiving/sending from stream s2/s3 (little time spent waiting on these streams in comparison with streams s1/s4).

If we study the computational rate, we can thus decide to map M3 alone on a powerful computing site because it has the highest value between the different steady-state probabilities of the modules. One should be careful to map the streams s1 and s4 onto sufficiently fast network links to increase the overall throughput of the network. A mapping that performs well can thus be deduced from this information, by adjusting the reasoning to the architecture of the available system.

Experiment 2. We can reproduce the same experiment but for a different application: one in which there are a lot of data to be transferred inside the loop. Here, for one input on s1, the module M2 makes several calls to the server M3 for computations. In this case, the rates of the streams are different, for instance λ1 = λ4 = 1000 and λ2 = λ3 = 1.

The results for this experiment are shown in Table 2 (row Case 2). In this table, we can see that M3 is quite idle, waiting to receive data 89.4% of the time (i.e. this is the time it is not processing). Moreover, we can see in the stream results that s2 and s3 are busier than the other streams. In this case a good solution might be to map M2 and M3 on to the same cluster, since M3 is no longer the computational bottleneck. We could thus have fast communication links for s2 and s3, which are demanding a lot of network resources.

Table 2: Performance results for the example.

              Modules                       Streams
          M1     M2     M3     M4      s1     s2     s3     s4
Case 1    4.2    5.1   67.0    4.2    47.0    6.7    6.7   47.0
Case 2   52.1   52.2   10.6   52.1     5.2   10.6   10.6    5.2

4.5. Analysis summary

As mentioned in Section 4.3, PEPA rates model both aspects strictly related to the mapping and to the application's logical structure (such as algorithms implemented in the modules, communication patterns and size). The predictive analysis conducted in this work provides performance results which are related only to the application's logical behavior. On the PEPA model this translates into the assumption that all sites include platforms with the same computing power, and all links have a uniform speed. In other words, we assume to deal with a homogeneous grid to obtain the relative requirements of power among links and platforms. This information is used as a hint for the mapping on a heterogeneous grid.

It is of value to have a general idea of a good mapping solution for the application, and this reasoning can be easily refined with new models including the mapping peculiarities, as demonstrated in our previous work [1]. However, the modeling technique exposed in the present paper allows us to highlight individual resource (link and processor) requirements, that are used
to label the application graph. These labels represent the expected relative requirements of each module (stream) with respect to the other modules (streams) during the application run. In the case of a module, the described requirement can be interpreted as the aggregate power of the site on which it will be mapped. In the case of a stream, the requirement can be interpreted as the bandwidth of the network link on which it will be mapped. The relative requirements of parmods and streams may be used to implement mapping heuristics which assign more demanding parmods to more powerful sites, and more demanding streams to links exhibiting higher bandwidths. When a fully automatic application mapping is not required, module and stream requirements can be used to drive a user-assisted mapping process.

Moreover, each parmod exhibits a structured parallelism pattern (a.k.a. skeleton). In many cases, it is thus possible to draw a reliable relationship between the site fabric-level information (number and kind of processors, processor and network benchmarks) and the expected aggregate power of the site running a given parmod exhibiting a parallelism pattern [5, 4, 8]. This may enable the development of a mapping heuristic which needs only the sites' fabric-level information, and can automatically derive the performance of a given parmod on a given site. The use of models taking into account both of the system architecture characteristics can then eventually validate this heuristic, and give expected results about the performance of the application for a specified mapping.

4.6. Future work

The approach described here considers the ASSIST modules as blocks and does not model the internal behavior of each module. A more sophisticated approach might be to consider using known models of individual modules and to integrate these with the global ASSIST model, thus providing a more accurate indication of the performance of the application. At this level of detail, distributed shared
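The greedy assignment described above can be sketched in a few lines. In the sketch below (ours, purely illustrative), the module requirement labels are taken from Table 2 (Case 2), while the site names and power values are invented; the same helper would serve for matching streams to link bandwidths.

```python
# Hypothetical sketch of the mapping heuristic described above: demands
# (module requirement labels) are greedily paired with capacities (site
# powers), most demanding first.  Values for sites are invented.
def greedy_map(requirements, capacities):
    """Pair each demand with a capacity, most demanding first."""
    demands = sorted(requirements.items(), key=lambda kv: kv[1], reverse=True)
    supply = sorted(capacities.items(), key=lambda kv: kv[1], reverse=True)
    return {name: place for (name, _), (place, _) in zip(demands, supply)}

# Module requirement labels as in Table 2, Case 2; site powers are invented.
module_req = {"M1": 52.1, "M2": 52.2, "M3": 10.6, "M4": 52.1}
site_power = {"siteA": 100.0, "siteB": 80.0, "siteC": 75.0, "siteD": 20.0}
mapping = greedy_map(module_req, site_power)
```

With these invented powers, the most demanding parmod (M2) lands on the most powerful site and the nearly idle M3 on the weakest one, matching the intuition drawn from Table 2.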


Segregative Genetic Algorithms (SEGA): A Hybrid Superstructure Upwards Compatible to Genetic Algorithms for Retarding Premature Convergence

Michael Affenzeller
Institute of Systems Science
Systems Theory and Information Technology
Johannes Kepler University
Altenbergerstrasse 69
A-4040 Linz, Austria
e-mail: ma@cast.uni-linz.ac.at

Recommended by: Andries P. Engelbrecht
Received: date
Accepted in the final form: date

Abstract. Many problems of combinatorial optimization belong to the class of NP-complete problems and can be solved efficiently only by heuristics. Both Genetic Algorithms and Evolution Strategies have a number of drawbacks that reduce their applicability to this kind of problem. During the last decades plenty of work has been invested in order to introduce new coding standards and operators, especially for Genetic Algorithms. All these approaches have one thing in common: they are very problem specific, and mostly they do not challenge the basic principle of Genetic Algorithms. In the present paper we take a different approach and look upon the concepts of a Standard Genetic Algorithm (SGA) as an artificial self-organizing process, in order to overcome some of the fundamental problems Genetic Algorithms are concerned with in almost all areas of application. With the purpose of providing concepts which make the algorithm more open for scalability on the one hand, and which fight premature convergence on the other hand, this paper presents an extension of the Standard Genetic Algorithm that does not introduce any problem-specific knowledge. On the basis of an Evolution-Strategy-like selective pressure handling, some further improvements, such as the introduction of a concept of segregation and reunification of smaller subpopulations during the evolutionary process, are considered. The additional aspects introduced within the scope of these variants of Genetic Algorithms are inspired by optimization as well as by the views of bionics.
In contrast to contributions in the field of genetic algorithms that introduce new coding standards and operators for certain problems, the introduced approach should be considered a novel heuristic applicable to multiple problems of combinatorial optimization, using exactly the same coding standards and operators for crossover and mutation as when treating a certain problem with a Standard Genetic Algorithm. Furthermore, the corresponding Genetic Algorithm is unrestrictedly included in all of the newly proposed hybrid variants under special parameter settings. In the present paper the new algorithm is discussed for the traveling salesman problem (TSP) as a well documented instance of a multimodal combinatorial optimization problem. In contrast to all other evolutionary heuristics that do not use any additional problem-specific knowledge, we obtain solutions close to the best known solution for all considered benchmark problems (symmetric as well as asymmetric benchmarks), which represents a new attainment when applying Evolutionary Computation to the TSP.

Keywords: Genetic Algorithms, Evolution Strategies, Selective Pressure, Population Diversity, Premature Convergence.

1 Introduction

Work on what nowadays is called Evolutionary Computation started in the sixties in the United States and Germany. Substantially, there are two basic approaches in computer science that copy evolutionary mechanisms: Evolution Strategies (ES) and Genetic Algorithms (GA). Genetic Algorithms go back to Holland [Hol92], an American computer scientist and psychologist. Holland developed his theory not only under the aspect of solving optimization problems but also in order to observe biological processes.
Last but not least, that is the reason why Genetic Algorithms are much closer to the biological model than Evolution Strategies. The theoretical foundations of Evolution Strategies were laid by Rechenberg and Schwefel ([Rec73], [Sch94]). Their primary goal was optimization. Despite many things in common, the two attempts developed almost independently from each other in the USA (GA) and Germany (ES). Both attempts work with a population model, whereby the genetic information of each individual of a population is in general different. Among other things, this genotype includes the parameter vector which contains all necessary information about the fitness of a certain individual. Before the intrinsic evolutionary process takes place, the initial population is initialized arbitrarily. Evolution, i.e. replacement of the old generation by a new generation, proceeds until a certain termination criterion is fulfilled. The major differences between Evolution Strategies and Genetic Algorithms lie in the form of the genotype, the calculation of the fitness, and the operators (mutation, recombination, selection). In contrast to Genetic Algorithms, where the main role of the mutation operator is simply to avoid stagnation, mutation is the primary operator of Evolution Strategies. As a further difference between the two major representatives of Evolutionary Computation, selection in the case of Evolution Strategies is absolutely deterministic, which is not the case in the context of Genetic Algorithms or in nature.
Therefore, in the case of Evolution Strategies, arbitrarily small differences in fitness can decide about the survival of an individual. The mapping between a bit-string and the real numbers is often a problem when applying Genetic Algorithms, whereas Evolution Strategies refrain from copying nature that exactly. Applied to problems of combinatorial optimization, Evolution Strategies tend to find local optima quite efficiently. But in the case of multimodal test functions, global optima can only be detected by Evolution Strategies if one of the start values is located in the narrower range of a global optimum. Nevertheless, the concept of how Evolution Strategies handle selective pressure has turned out to be very useful in the context of the Segregative Genetic Algorithm (SEGA) as presented in this paper. Furthermore, we have borrowed the cooling mechanism from Simulated Annealing (SA), introduced by Kirkpatrick [KGV83], in order to obtain a variable selective pressure for the enhanced GA model as described in Section 4. This will mainly be required when segregating and reuniting the population as described in Section 5. Whereas Island Models (e.g. in [Whi94]) for Genetic Algorithms are mainly driven by the idea of using simultaneous computer systems, SEGA attempts to utilize migration more precisely in order to achieve superior results in terms of global convergence. The aim of dividing the whole population into a certain number of subpopulations (segregation) that grow together in case of stagnating fitness within those subpopulations is to combat premature convergence, which is the source of GA difficulties. The principle idea of SEGA is to divide the whole population into a certain number of subpopulations at the beginning of the evolutionary process. These subpopulations evolve independently from each other until the fitness increase stagnates because of too similar individuals within the subpopulations. Then a reunification from n to (n−1) subpopulations is done. Roughly speaking, this means that there is a
certain number of villages at the beginning of the evolutionary process that are slowly growing together to bigger cities, ending up in one big town containing the whole population at the end of evolution. By this approach of width-search, building blocks in different regions of the search space are evolved at the beginning of and during the evolutionary process which would disappear early in the case of a conventional Genetic Algorithm, and whose genetic information could not be provided at a later date of evolution, when the search for global optima is of paramount importance. In this context the above-mentioned variable selective pressure is especially important at the time of joining some residents of another village to a certain village, in order to steer the population diversity. Experimental results on some symmetric and asymmetric benchmark problems of the TSP indicate the supremacy of the introduced concept for locating global minima compared to a conventional Genetic Algorithm. Furthermore, the evaluation shows that the results of the Segregative Genetic Algorithm (SEGA) are comparable for symmetric benchmark problems, and even superior for asymmetric benchmark problems, when compared to the results of the Cooperative Simulated Annealing technique (COSA) [Wen95], which has to be considered a problem-specific heuristic for routing problems. This represents a major difference to SEGA, which uses exactly the same operators as a corresponding GA and can therefore be applied to a huge number of problems, namely all problems GAs can be applied to. Moreover, it should be pointed out that a corresponding Genetic Algorithm is unrestrictedly included in SEGA when the number of subpopulations (villages) and the cooling temperature are both constantly set to 1 at the beginning of the evolutionary process.

2 Premature Convergence

When applying Genetic Algorithms to complex problems, one of the most frequent difficulties is premature convergence. Roughly speaking, premature convergence occurs
when the population of a Genetic Algorithm reaches such a suboptimal state that the genetic operators can no longer produce offspring that outperform their parents (e.g. [Fog94]). Several methods have been proposed to combat premature convergence in Genetic Algorithms [CG93], [DeJ75], [Gol89]. These include, for example, the restriction of the selection procedure, the operators and the according probabilities, as well as the modification of fitness assignment. However, all these methods are heuristic in nature. Their effects vary with different problems, and their implementation strategies need ad hoc modifications with respect to different situations. Anyway, mutation is also not suited to overcome premature convergence, because this low-probability event mostly produces individuals with lower fitness which immediately disappear in the next couple of generations due to crossover. A critical problem in studying premature convergence is the identification of its occurrence and the characterization of its extent. Srinivas and Patnaik [SP94], for example, use the difference between the average and maximum fitness as a standard to measure premature convergence, and adaptively vary the crossover and mutation probabilities according to this measurement. On the other hand, as in the present paper, the term 'population diversity' has been used in many papers to study premature convergence (e.g. [SFP93], [YA94]), where the decrease of population diversity is the primary reason for premature convergence. Indeed, a very homogeneous population, i.e. little population diversity, is the major reason for a Genetic Algorithm to prematurely converge. Therefore, this situation represents some kind of analogy to the situation when a neighborhood-based search technique like Simulated Annealing is unable to escape from a local minimum.

3 The Hybrid Structure of SEGA

As SEGA is based upon the variable selective pressure concept, it borrows aspects of Genetic Algorithms, Evolution Strategies, and Simulated Annealing, whereby the main
focus certainly lies on Genetic Algorithms, as illustrated in Fig. 1.

Figure 1: The hybrid structure of SEGA

The mechanisms for crossover and mutation, as well as the selection principle (the first step of selection), are directly adopted from Genetic Algorithms. From Evolution Strategies we have borrowed the handling of selective pressure, in order to obtain a scalability that is easy to handle in terms of solution quality and running time, as described in Section 4. Additionally, we have introduced a Simulated-Annealing-like temperature for keeping the selective pressure variable. Therefore, it becomes possible to steer the genetic diversity of the population. The name-giving strategy of segregation and reunification is of major importance, as its purpose is to avoid premature convergence.

4 The Variable Selective Pressure Model

"It is a matter of fact that in Europe Evolution Strategies and in the USA Genetic Algorithms have survived more than a decade of non-acceptance or neglect. It is also true, however, that so far both strata of ideas have evolved in geographic isolation and thus not led to recombined offspring. Now it is time for a new generation of algorithms which make use of the rich gene pool of ideas on both sides of the Atlantic, and make use too of the favorable environment showing up in the form of massively parallel processor systems [...]. There are not many paradigms so far to help us make good use of the new situation. We have become too familiar with monocausal thinking and linear programming, and there is now a lack of good ideas for using massively parallel computing facilities."¹

¹ Taken from the introduction to the proceedings of the First Workshop on Parallel Problem Solving from Nature [SM90].

4.1 Introduction

The handling of selective pressure in the context of Genetic Algorithms mainly depends on the choice of a certain replacement scheme. The 'generational replacement scheme', for example, replaces the entire population by the next one, whereas the 'elitism replacement scheme', which keeps the best individuals of the last generation and only replaces the rest, usually performs faster. On the other hand, elitism might cause too homogeneous populations, i.e. little population diversity, and might cause unwanted premature convergence. Anyway, there exists no manageable model for controllable selective pressure handling within the theory of Genetic Algorithms. Therefore, we introduce some kind of intermediate step (a 'virtual population') into selection, which provides a handling of selective pressure very similar to that of Evolution Strategies [Aff01b]. As we will exemplarily point out, the most common replacement mechanisms can easily be implemented in this intermediate selection step. Furthermore, this Evolution-Strategy-like variable selective pressure will help us to steer the degree of population diversity on the one hand and, on the other hand, it will act as a basic model for new hybrid metaheuristics based upon Genetic Algorithms as proposed in Section 5. Actually, all modifications that are and will be taken into account use exactly the same operators for crossover and mutation as a corresponding Genetic Algorithm. As no further problem-specific information is used, the new hybrids can be applied to a huge number of problems, namely all problems Genetic Algorithms can be applied to. Moreover, it should be pointed out that under special parameter settings, the corresponding Genetic Algorithm is unrestrictedly included in the presented hybrid variants of a Genetic Algorithm.

4.2 Formal Integration of the Variable Selective Pressure Model

Similar to any other commonly used Genetic Algorithm, we use a population of fixed size that will evolve to a new population of the same size by selection, crossover, and mutation. What we have additionally done is to introduce an intermediate step in terms of a so-called virtual population of variable size, where the size of the virtual population usually has to be greater than the population size. This virtual population is created by
selection, crossover, and mutation in the common sense of Genetic Algorithms. But like in the context of Evolution Strategies, only a certain percentage of this intermediate population will survive, which represents a direct analogy to selective pressure handling in the context of Evolution Strategies. This handling of selective pressure in our context is mainly motivated by (µ,λ)-Evolution Strategies, where µ parents produce λ descendants from which the best µ survive. Within the framework of Evolution Strategies, selective pressure is defined as s = µ/λ, where a small value of s indicates a high selective pressure and vice versa (for details see for instance [SHF94]). Applied to Genetic Algorithms, this means that from |POP| (population size) parents, |POP|·T (size of virtual population > |POP|, i.e. T > 1)² descendants are generated by crossover and mutation, from which the best |POP| survive, as illustrated in Fig. 2. Obviously we define the selective pressure as s = |POP| / (|POP|·T) = 1/T, where a small value of s, i.e. a great value of T, stands for a high selective pressure and vice versa. Equipped with this enhanced GA model it is quite easy to adopt further extensions based upon a controllable selective pressure, i.e.
it becomes possible either to reset the temperature up or down to a certain level, or simply to cool down the temperature in the sense of Simulated Annealing during the evolutionary process, in order to steer the convergence of the algorithm. Bionically interpreting this (µ,λ)-Evolution-Strategy-like selective pressure handling, for Genetic Algorithms this means that some kind of 'infant mortality' has been introduced, in the sense that a certain ratio of the population (|POP|·T − |POP| = |POP|·(T−1)) will never become procreative, i.e. this weaker part of a population will not get the possibility of reproduction. Decreasing this 'infant mortality', i.e. reducing the selective pressure during the evolutionary process, also makes sense in a bionic interpretation, as in nature stronger and higher developed populations also suffer less from infant mortality.

² In principle, temperatures T less than 1 are also imaginable in case of too homogeneous populations, in order to replenish the rest of the population ((1−T)·|POP|) with 'new' individuals for fighting premature convergence.

Figure 2: Evolution of a new population with selective pressure s = 1/T for a virtual population built up in the sense of a (µ,λ)-Evolution Strategy.

From the point of view of optimization, decreasing the temperature during the optimization process means that a greater part of the search space is explored at the beginning of evolution, whereas at a later stage of evolution, when the average fitness is already
indicated in Fig.3.Other replacement mechanisms,like elitism or the goldcage-model for example,are also very easy to handle by just adding the best individuals respectively the best individual of the last generation to the virtual generation.In the following section we are going to discuss new aspects and models built up upon the described variable selective pressure model.Nevertheless,there is nothing against using a Genetic Algorithm equipped with the variable selective pressure model without further extensions.Experiments per-formed on the variable selective pressure model already indicate the supremacy of this approach.In those experiments the temperature is slowly cooled down to zero.If one is aware of an appropriate measure of population diversity,the temperature can be set dynamically in order to steer the the genetic diversity which seems to represent a further very promising approach.5Segregation and ReunificationIn principle,the new SEGA introduces two enhancements to the basic concept of Genetic Algorithms. 
The first is to bring in a variable selective pressure, as described in Section 4, in order to control the diversity of the evolving population. The second concept introduces a separation of the population to increase the broadness of the search process, and joins the subpopulations after their evolution in order to end up with a population including all the genetic information sufficient for locating the region of a global optimum.

Figure 3: Evolution of a new population for a virtual population built up in the sense of a (µ+λ)-Evolution Strategy.

The aim of dividing the whole population into a certain number of subpopulations (segregation) that grow together in case of stagnating fitness within those subpopulations (reunification) is to combat premature convergence, which is the source of GA difficulties. This segregation and reunification approach is a new technique to overcome premature convergence [Aff01a]. The principle idea is to divide the whole population into a certain number of subpopulations at the beginning of the evolutionary process. These subpopulations evolve independently from each other until the fitness increase stagnates because of too similar individuals within the subpopulations. Then a reunification from n to (n−1) subpopulations is done. Roughly speaking, this means that there is a certain number of villages at the beginning of the evolutionary process that are slowly growing together to bigger cities, ending up with one big town containing the whole population at the end of evolution. By this approach of width-search, building blocks in different regions of the search space are evolved at the beginning of and during the evolutionary process, which are likely to disappear early in the case of conventional Genetic Algorithms, and whose genetic information could not be provided at a later date of evolution, when the search for global optima is of paramount importance. In this context the above-mentioned variable selective pressure is especially important at the time of joining some
residents of another village to a certain village, in order to steer the population diversity. Again, like in the context of the variable selective pressure model, which is included in SEGA as well, it should be pointed out that a corresponding Genetic Algorithm is unrestrictedly included in SEGA when the number of subpopulations (villages) and the cooling temperature are both set to 1 at the beginning of the evolutionary process. Moreover, the introduced techniques also do not use any problem-specific information.

6 The New Algorithm

With all strategies described above, the new genetic algorithm is finally stated as described in Alg. 1.

Algorithm 1 Segregative Genetic Algorithm (SEGA)
    Initialize size of population |POP|
    Initialize number of iterations nrOfIterations ∈ N
    Initialize number of villages nrOfVillages ∈ N
    Initialize temperature T ∈ [1, ∞[
    Initialize adaptive cooling factor α ∈ [0, 1]
    Generate initial population POP_0 = (I_1, ..., I_|POP|)
    for i = 1 to nrOfIterations do
        Check if i = dateOfReunification
        if i = dateOfReunification then
            nrOfVillages = nrOfVillages − 1
            reset temperature
        end if
        POP_i = calcNextGeneration(POP_{i−1}, T, nrOfVillages, |POP|)
        T = max(T ∗ α, 1)
    end for

Procedure 2 calcNextGeneration(POP_{i−1}, T, nrOfVillages, |POP|)
    villagePopulation = |POP| / nrOfVillages
    for i = 0 to (nrOfVillages − 1) do
        for j = (i ∗ villagePopulation) to ((i+1) ∗ villagePopulation) do
            Calculate fitness_j of each member of the village population (like in a standard GA)
        end for
        virtualPopulation = villagePopulation ∗ T
        for k = 0 to virtualPopulation do
            Generate individual I_k of the virtual population from the members of the village
            (I_{i∗villagePopulation} ... I_{(i+1)∗villagePopulation} ∈ POP_{i−1}) due to their
            fitnesses, by crossover and mutation
        end for
        Select the best villagePopulation individuals from the virtual population in order to
        obtain the new village population I_{i∗villagePopulation} ... I_{(i+1)∗villagePopulation}
        ∈ POP_i of the next generation
    end for

SEGA uses a fixed number of iterations for termination. Depending on this total number of iterations and the
initial number of subpopulations (villages), the dates of reunification may be calculated statically at the beginning of the evolutionary process, as done in the experimental results section. Further improvements, particularly in running time, are possible if a dynamic criterion for detecting stagnating genetic diversity within the subpopulations is used to determine the dates of reunification.

A further aspect worth mentioning is the choice of the temperature and the cooling factor when merging subpopulations. Experimental research has indicated that the best results are achieved by resetting the temperature within a range of 1.2 to 2.0 and choosing a cooling factor α such that the selective pressure converges to 1 as the subpopulation's genetic diversity stagnates. Convenient choices of the GA-specific parameters can be found in [Mic96], [Cha95a], [Cha95b], or [Cha98].

7 Experimental Results

Empirical studies with different problem classes and instances are possibly the most effective way to analyze the potential of heuristic optimization searches like Genetic Algorithms. Even if a convergence proof similar to that of Simulated Annealing [B.H88] may be possible, we are unfortunately confronted with the drawback that the number of states of the Markov chain blows up from |S| to |S|^|POP|, thus limiting the computational tractability to small problems and very small population sizes.
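Alg. 1 and Procedure 2 can be sketched as follows. This is a minimal illustrative Python rendering on the OneMax toy problem, not the paper's Java implementation: the genome representation, the operator details, the fitness-proportional parent choice, and all parameter values are assumptions made for the sketch.

```python
import random

def sega(pop_size=32, n_iters=60, n_villages=4, temp=1.5, alpha=0.95,
         genome_len=20, seed=0):
    """Minimal sketch of SEGA (Alg. 1 / Proc. 2) on the OneMax toy problem."""
    rng = random.Random(seed)

    def fitness(ind):
        return sum(ind)  # OneMax: number of 1-bits

    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    # Statically spaced dates of reunification: one merge per interval.
    reunify = {n_iters * k // n_villages for k in range(1, n_villages)}
    temp0 = temp
    for it in range(1, n_iters + 1):
        if it in reunify and n_villages > 1:
            n_villages -= 1   # merge two neighbouring villages
            temp = temp0      # reset the temperature after a reunification
        vsize = pop_size // n_villages
        nxt = []
        for v in range(n_villages):
            village = pop[v * vsize:(v + 1) * vsize]
            weights = [fitness(ind) + 1 for ind in village]  # +1 avoids zero weights
            # Procedure 2: breed an enlarged virtual population of size vsize * temp.
            virtual = []
            for _ in range(int(vsize * temp)):
                a, b = rng.choices(village, weights=weights, k=2)
                cut = rng.randrange(1, genome_len)   # one-point crossover
                child = a[:cut] + b[cut:]
                if rng.random() < 0.05:              # mutation probability 0.05
                    pos = rng.randrange(genome_len)
                    child[pos] ^= 1
                virtual.append(child)
            # Truncating the surplus back to vsize realises the adjustable
            # selective pressure temp.
            virtual.sort(key=fitness, reverse=True)
            nxt.extend(virtual[:vsize])
        pop = nxt + pop[len(nxt):]        # carry over any rounding remainder
        temp = max(temp * alpha, 1.0)     # adaptive cooling toward 1
    return max(fitness(ind) for ind in pop)
```

The surplus-then-truncate step is what distinguishes this from a plain generational GA: with temp = 1 and n_villages = 1 the sketch degenerates to an ordinary Genetic Algorithm, mirroring the inclusion property stated above.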
In our experiments, all computations were performed on a Pentium III PC with 256 megabytes of memory. The programs are written in the Java programming language. For the tests we selected the Traveling Salesman Problem (TSP) as a well-documented instance of a typical multimodal combinatorial optimization problem. We tested the new concepts on a selection of symmetric as well as asymmetric TSP benchmark problem instances taken from the TSPLIB [Rei91], using updated results³ for the best, or at least the best-known, solutions. In doing so, we compared the newly developed hybrids with a Genetic Algorithm using exactly the same operators for crossover and mutation, and with the COSA algorithm [WK97] as an established and successful representative of a heuristic especially developed for routing problems. In this connection it should be mentioned that the main motivation during the design and implementation of the test framework was not to attack the best-known solutions, but rather to provide a tool that allows effective comparisons between the new variants of a Genetic Algorithm and the corresponding Genetic Algorithm, using the COSA heuristic as a kind of benchmark. In any case, all problem-specific upgrades that are discussed for Genetic Algorithms in various fields could be implemented as well.

In the following we discuss the influence of the newly introduced measures on speed and solution quality. We set the GA-specific parameters of each hybrid equal to those of the corresponding Genetic Algorithm; the parameters of COSA were set as suggested by the author [Wen95]. Unless otherwise specified, the Genetic Algorithm and the corresponding hybrids use a mutation probability of 0.05 combined with the golden-cage population model, i.e. the entire population is replaced (as in generational replacement) with the exception that the best member of the old population survives until the new
population generates a better one (wild-card strategy). For each problem the algorithms were run three times. The efficiency of each algorithm is quantified in terms of the relative difference of the best individual's fitness, after a given number of iterations, to the best or best-known solution. In the experiments the relative difference is defined as

    relativeDifference = (Fitness / Optimal − 1) · 100%.

Considering the behaviour of Genetic Algorithms, it is obvious that there is a strong correlation between the population size and the trend of population diversity: if one wants to increase the size of the population, it may be reasonable also to increase the selective pressure, which might be done by introducing a different replacement technique (e.g. changing from generational replacement to elitism). In the context of the new GA model with an adjustable selective pressure, this can easily be done by just increasing or decreasing the selective pressure. However, the results obtained with a fixed selective pressure are not very noteworthy, because premature convergence is very likely to occur, especially under higher temperature settings in combination with small populations. Clear improvements over the ordinary GA model can only be achieved with temperature parameter settings in a range of about 0.03–0.10. Furthermore, it can be observed that at a later stage of evolution the population diversity decreases rapidly, which provides a quite natural motivation for introducing a variable selective pressure model.

The purpose of a variable selective pressure is to retard the occurrence of premature convergence. If one were able to define an appropriate measure for the genetic diversity of a population (e.g. a

³ Updates for the best (known) solutions can, for example, be found at ftp://ftp.zib.de/pub/Packages/mp-testdata/tsp/tsplib/index.html
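The relative-difference measure is straightforward to compute; a small helper is shown below. The tour length and optimum in the example are made-up numbers for illustration, not values from the TSPLIB benchmarks.

```python
def relative_difference(fitness, optimal):
    """Relative difference of a solution to the (best-known) optimum, in percent."""
    return (fitness / optimal - 1.0) * 100.0

# A made-up example: a tour of length 7011 against a best-known optimum of 6810.
print(round(relative_difference(7011.0, 6810.0), 2))  # -> 2.95
```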

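The variable selective pressure discussed above is driven by the geometric cooling rule of Alg. 1, T ← max(T · α, 1), which lets the pressure decay and then saturate at 1. A minimal sketch follows; the reset value 1.5 and α = 0.95 are example choices within the ranges reported earlier, not prescribed settings.

```python
def cooling_schedule(t0=1.5, alpha=0.95, steps=50):
    """Temperature sequence T <- max(T * alpha, 1) used in Alg. 1."""
    temps = [t0]
    for _ in range(steps - 1):
        temps.append(max(temps[-1] * alpha, 1.0))
    return temps

temps = cooling_schedule()
# the selective pressure decays geometrically, then saturates at 1
print(temps[0], temps[8], temps[-1])  # -> 1.5 1.0 1.0
```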