Electronic Notes in Theoretical Computer Science

合集下载

Ipate_2009_Electronic-Notes-in-Theoretical-Computer-Science

Ipate_2009_Electronic-Notes-in-Theoretical-Computer-Science

Testing Non-deterministic Stream X-machineModels and P systemsFlorentin Ipate 1Department of Computer Science University of Pitesti Str Targu din Vale 1,110040Pitesti,RomaniaMarian Gheorghe 2Department of Computer Science University of Sheffield Regent Court,Portobello Street,Sheffield S14DP,UKAbstractUnder certain well defined conditions,the stream X-machine testing method can produce a test set that is guaranteed to determine the correctness of an implementation.The testing method has originally assumed that an implementation of each processing function or relation is proven to be correct before the actual testing can take place.Such a limitation has been removed in a subsequent paper,but only for determin-istic X-machines.This paper extends this result to non-deterministic stream X-machines and considers a conformance relationship between a specification and an implementation,rather than mere equivalence.Furthermore,it shows how this method can be applied to test a P system by building a suitable stream X-machine from the derivation tree associated with a partial computation.Keywords:testing,correctness,X-machines,P systems,finite state machines,non-determinism.1IntroductionThe stream X-machine (SXM )[10]is a form of extended finite state machine (FSM).It describes a system as a finite set of states and a number of transitions between states.In addition,a SXM contains an internal store,called memory .A transition is triggered by an input value,produces an output value and may access and/or alter the memory.A transition diagram of a SXM is a finite automaton (called an associated automaton )in which the arcs are labelled by relation names (referred to as processing relations ).Under certain well defined design for test conditions ,it is 1Email:florentin.ipate@ifsoft.ro 2Email:m.gheorghe@Electronic Notes in Theoretical Computer Science 227 (2009) 113–1261571-0661/$ – see front matter © 2008 Published by Elsevier B.V./locate/entcsdoi:10.1016/j.entcs.2008.12.107possible to produce a test set that is guaranteed to determine the correctness of an implementation under test (IUT )[14,10,2,12,16].Traditional extended finite-state machine test generation methods [3,17]rely on the construction of an equivalent finite state machine,where states are the state/memory pairs of the original SXM.For complex specifications,this leads to a known state explosion problem;the SXM testing method does not perform such a construction.Instead,however,the method assumes that the processing functions (relations)are correctly implemented and reduces testing of a SXM to testing of its associated automaton.Therefore,it is fair to say that the method only tests the integration of the implementation of processing functions (relations).In practice,the correctness of an implementation of a processing function (relation)is checked by a separate process [10],using the SXM testing method or alternative functional methods.The method (called in what follows SXM integration testing )was first developed in the context of deterministic SXMs (i.e.those SXMs in which all labels represent partial functions rather than relations and an input can trigger at most one transition in any state and for any memory value).The deterministic SXM (DSXM )integration testing method [14,10]was extended to the non-deterministic case (NSXM integration testing)in paper [15].Conformance testing for SXMs has been previously considered in [8]for a subclass of quasi-non-deterministic SXMs;later work [9]uses a rather general definition of conformance,but requires an im-plementation to compute a function (this paper assumes a non-deterministic IUT).The applicability of the mentioned integration testing methods is limited by the assumption that the implementation of each processing function (relation)can be tested in isolation from the rest of the system.This is not always a realistic assumption.This limitation has been removed [11]in the context of deterministic SXMs,so that testing of functions can be performed along with the integration testing.This paper extends [11]for non-deterministic SXMs.This is called complete NSXM testing .Such an extension is not trivial for the following reasons:(a)in a non-deterministic SXM an input sequence may potentially produce an infinite number of output sequences,while a test set will be applied to an IUT for a limited number of times (as few as possible);(b)in a non-deterministic SXM some paths may never be exercised.Since their introduction in 1998[19],P systems have been intensively studied and developed,in particular with regard to the computational power of different variants and their capability to solve hard problems.In the last years there have also been significant developments in using the P systems paradigm to model,simu-late and formally verify various systems [5].Suitable classes of P systems have been associated with some of these applications and software packages have been devel-oped.Although formal verification has been applied to different models based on P systems [10],testing is completely neglected in this context.Testing is an essential part of software development and all software applications,irrespective of their use and purpose,are tested before being released.Two recent papers provide initial steps towards building a P system testing theory:based on rule coverage [7]and on FSM conformance techniques [13].In this paper we develop a testing methodF .Ipate,M.Gheorghe /Electronic Notes in Theoretical Computer Science 227(2009)113–126114F.Ipate,M.Gheorghe/Electronic Notes in Theoretical Computer Science227(2009)113–126115 for non-deterministic SXMs and show how this can be applied to P systems:a way of deriving a SXM from a P system is provided and the method is applied to the obtained model.This approach significantly extends the previous testing methods for P systems by considering(1)a SXM model of a P system instead of a simple FSM and also(2)non-determinism,a widely spread characteristic of P systems.2PreliminariesThis section introduces the notation and the formalisms used in the paper:finite state machines,stream X-machines and P systems.For afinite alphabet A,A∗denotes the set of allfinite sequences with members in A; denotes an empty sequence.For a,b∈A∗,ab denotes the concatenation of sequences a and b;a n is defined by a0= and a n=a n−1a for n≥1.For U,V⊆A∗, UV={ab|a∈U,b∈V};U n is defined by U0={ }and U n=U n−1U for n≥1. For a relation f:A←→B,dom(f)denotes the domain of f and Im(f)denotes the image of f.If a/∈dom(f),we write f(a)=∅.For U⊆A,f(U)=∪a∈U f(a)and f|U:U←→B denotes the restriction of f to U,i.e.f|U(a)=f(a),∀a∈U.For two relations f,g:A←→B,we use f g to denote that dom(f)=dom(g)and for any a∈dom(f),f(a)⊆g(a).Forφ:M×Σ←→Γ×M and m∈M we define ωφm:Σ←→Γbyωφm(σ)=πΓ(φ(m,σ)),σ∈Σ,whereπΓ:Γ×M−→Γdenotes the projection function.We shall also use the projection functionπM:Γ×M−→M. For afinite set A,#A denotes the number of elements of A.Definition2.1Afinite automaton(FA for short)is a tuple(Σ,Q,F,I),whereΣis afinite input alphabet;Q is afinite set of states;F is a(partial)next state function,F:Q×Σ−→2Q;I is a set of initial states,I⊆Q;all states are assumed terminal.Function F is usually described by a state transition diagram.A FA is called deterministic if there is one initial state(I={q0})and F maps each state/input pair into at most one state(F:Q×Σ−→Q).The next state function can be extended to a partial function F∗:Q×Σ∗−→Q defined by F∗(q, )=q,∀q∈Q; F∗(q,sσ)=F(F∗(q,s),σ)),∀q∈Q,s∈Σ∗,σ∈Σ.For q∈Q,L A(q)={s∈Σ∗|(q,s)∈dom(F∗)}.If q=q0then the set is simply denoted L A and called the language accepted by A.A state q∈Q is called accessible if∃s∈Σ∗such that F∗(q0,s)=q.A is called accessible if∀q∈Q,q is accessible.For U⊆Σ∗,two states q1,q2∈Q are called U-equivalent if L A(q1)∩U=L A(q2)∩U;otherwise, q1and q2are called U-distinguishable.If U=Σ∗then q1and q2are simply called equivalent or distinguishable.A is called reduced if∀q1,q2∈Q,((q1=q2)=⇒(q1 and q2are distinguishable)).A deterministic FA A is called minimal if any other FA that accepts the same language as A has at least the same number of states as A.The reader is assumed to be familiar with basic automata theory,for details see for example[6].Given a FA specification A and a class of implementations C,a test set of A w.r.t.C is a set of input sequences that,when applied to any implementation A in the class C,will detect any response in A that does not conform to theresponse specified by A ,i.e.∀A ∈C ,(L A ∩Y =L A ∩Y =⇒L A =L A ).The class C is identified by the assumptions we can make about an implementation A .If no information is available,a test set may not exist for even very simple FA specifications.There are a number of more or less realistic assumptions that one can make about the form and size of an implementation and these,in turn,give rise to different techniques for generating test sets [17].One of the least restrictive assumptions refers to the number of states of A and is the basis for the W method[4,1]:the difference between the number of states of an implementation and that of a specification has to be at most k ,a non-negative integer estimated by a tester.The W -method involves the selection of two sets of input sequences,a transition cover P and a characterization set W defined as follows:Definition 2.2S ⊆Σ∗is called a state cover of A if ∈S and ∀q ∈Q \{q 0},∃s ∈S such that F ∗(q 0,s )=q .P ⊆Σ∗is called a transition cover of A if S ∪S Φ⊆P for some state cover S of A .W ⊆Σ∗is called a characterisation set of A if any two distinct states q 1,q 2∈Q ,q 1=q 2,are W -distinguishable.Note that a state cover,a transition cover and a characterisation set exist if A is minimal.Theorem 2.3[1]Let A be a deterministic FA having input alphabet Σ,n the num-ber of states of A ,m ≥n and C m the set of deterministic FAs with an input alphabet Σand each with a maximal number of states m .If P is a transition cover and W a characterisation set of A then Y m −n =P (Σm −n ∪Σm −n −1∪...∪{ })(W ∪{ })is a test set of A w.r.t.C m .The above theorem is the theoretical basis for the W -method in the context of partially specified deterministic FA.The reason is included in W is to ensure that if an IUT has ignored the last element σof a sequence of inputs verifying the existence of a transition triggered by σ,this will be detected.A SXM is a form of extended FSM as defined next.Definition 2.4A SXM is a tuple Z =(Σ,Γ,Q,M,Φ,F,I,m 0),where Σand Γare finite sets called an input alphabet an output alphabet respectively;Q is a finite set of states ;M is a (possibly)infinite set called memory ;Φ,called the type of Z ,is a finite set of distinct processing relations that the machine can use.A processing relation is a non-empty relation of the form φ:M ×Σ←→Γ×M .Often Φis a set of (partial)functions.F is the (partial)next state function ,F :Q ×Φ−→2Q .In a similar way to finite automata,F is usually described by a state transition diagram.I is a set of initial states ,I ⊆Q ;m 0is the initial memory value ,m 0∈M .It is sometimes helpful to think of a SXM as a finite automaton with the arcs labelled by relations from the type Φ.The FA A Z =(Φ,Q,F,I,T )over an alphabet Φis called the associated FA of Z .Definition 2.5Given a sequence p ∈Φ∗,p induces the relation |p |:M ×Σ∗←→Γ∗×M defined as follows:(1)(m, )| |( ,m ),∀m ∈M ,(2)∀p ∈Φ∗,φ∈Φ,(m,sσ)|pφ|(gγ,m ),where ∀m,m ∈M,s ∈Σ∗,g ∈Γ∗,σ∈Σ,γ∈Γare such that ∃m ∈M with (m,s )|p |(g,m )and (m ,σ)φ(γ,m ).F .Ipate,M.Gheorghe /Electronic Notes in Theoretical Computer Science 227(2009)113–126116F.Ipate,M.Gheorghe/Electronic Notes in Theoretical Computer Science227(2009)113–126117 A sequence|p|can be considered a relation between a(memory,input string)pairand the(output string,memory)pairs produced by a consecutive application of therelations in p.It is easy to see that ifΦis a set of(partial)functions rather thanrelations then|p|is a also a(partial)function.Definition2.6A SXM Z is called deterministic if the following three conditions hold:(1)the associated FA of the machine Z is deterministic,i.e.Z has onlyone initial state(I={q0})and the next state function maps each pair of(state, processing function)onto at most one state(F:Q×Φ−→Q);(2)Φis a set of(partial)functions rather than relations;(3)any two distinct processing functionsthat label arcs emerging from the same state have disjoint domains,i.e.∀φ1,φ2∈Φ, ((∃q∈Q with(q,φ1),(q,φ2)∈dom(F))=⇒(φ1=φ2or dom(φ1)∩dom(φ2)=∅)). From the above definition,NSXMs can have three types of non-determinism:state non-determinism if#I>1or∃q∈Q,φ∈Φwith#(F(q,φ))>1;oper-ator non-determinism if some elements ofΦare relations but not partial func-tions;domain non-determinism if there exist q∈Q,φ1,φ2∈Φ,φ1=φ2with (q,φ1),(q,φ2)∈dom(F)and dom(φ1)∩dom(φ2)=∅.It is not necessary to consider the case of state non-determinism since it can easily be eliminated by rewriting a NSXM using standard algorithms that take a non-deterministic FA and produce an equivalent deterministic FA;in general,this transformation may introduce domain non-determinism.Further,NSXMs are assumed to be free of state non-determinism and are denoted by a tuple Z=(Σ,Γ,Q,M,Φ,F,q0,m0)where F is a partial func-tion and q0is the initial state.The associated deterministic FA is then a tuple A Z=(Φ,Q,F,q0).In general,a non-deterministic SXM computes a relation since the application of an input sequence may produce more than one output sequence. The exact correspondence between an input sequence and an output produced is defined next.Definition2.7A SXM Z,computes a relation f Z:Σ∗←→Γ∗defined by:s f Z gand(m0,s)|p|(g,m).if there is p∈Φ∗,m∈M such that p∈L AZWhen a path p is important,we will write s f p Z g.Note that for a SXM featuringdomain non-determinism,p may not be uniquely identified by s.If Z is a deter-ministic SXM,then f Z is a(partial)function rather than a relation.We define acomputation of Z as any subset of f Z that associates each input sequence s withoutput sequences produced when a machine exercises every path in Z that can betraversed by s at least once.Definition2.8A relation :Σ∗←→Γ∗is called a computation of Z if the following two conditions hold:(1)∀s∈Σ,g∈Γ,s g=⇒s f Z g;(2)∀s∈Σ, g∈Γ,p∈Φ∗,s f p Z g=⇒(s f p Z g and s g for some g ∈Γ).The set of all computations of Z is denoted by H Z.It is easy to see that f Z∈H Z and∀ ∈H Z, f Z.IfΦis a set of(partial)functions(such as when Z is a DSXM)then H Z={f Z},because in this case for each path through the machine there is only one sequence of possible outputs.The following definition refers to one of the many variants of P systems,namelycell-like P system,which uses non-cooperative transformation and communication rules [20].Since now onwards we will refer to this model as simply P system.Definition 2.9A P system is a tuple Π=(V,μ,w 1,...,w n ,R 1,...,R n ),where •V is a finite set,called alphabet ;•μdefines the membrane structure;a hierarchical arrangement of n compartments called regions delimited by membranes ;these membranes and regions are identi-fied by integers 1to n ;•w i ,1≤i ≤n ,represents the initial multiset occurring in region i ;•R i ,1≤i ≤n ,denotes the set of rules applied in region i .The rules in each region have the form a →(a 1,t 1)...(a m ,t m ),where a,a i ∈V ,t i ∈{in,out,here },1≤i ≤m .When such a rule is applied to a symbol a in the current region,the symbol a is replaced by the symbols a i with t i =here ;symbols a i with t i =out are sent to the outer region (or environment when the current region is the most external)and symbols a i with t i =in are sent into one of the regions contained in the current one,arbitrarily chosen.In the following definitions and examples all the symbols (a i ,here )are used as a i .The rules are applied in maximally parallel mode which means that they are used in all the regions in the same time and in each region all symbols that may be processed,must be.A configuration of the P system Πis a tuple c =(u 1,...,u n ),u i ∈V ∗,1≤i ≤n.A derivation of a configuration c 1to c 2using the maximal parallelism mode is denoted by c 1=⇒c 2.3NSXM Integration TestingThis section presents the theoretical basis for the NSXM integration testing method[15].This method generates a test set from a non-deterministic SXM specification,providing that the system components (i.e.processing relations)are implemented correctly.Therefore,it is assumed that the IUT is a NSXM having the same type (processing relations)as a specification.The concepts and results in this section are largely from [15],the presentation of which has been slightly modified to fit with other published work in the area [8,11].The essence remains unchanged.In order to test non-deterministic implementations,one usually makes a so-called complete-testing assumption [18]:it is possible,by applying a given input sequence s to an implementation for a finite number of times,to exercise all the paths of the implementation that can be traversed by s .Without such an assumption,no test suite can guarantee full fault coverage of non-deterministic implementations.For testing of an IUT Z against a specification Z ,we apply elements of a test set X a number of times,so as to ensure that both a specification and an implementation traverse all paths they can.During a test,only a subset of a (potentially infinite)set of outputs is observed from a non-deterministic implementation;for this reason,a possible computation of Z is observed ( |X ).If all outputs of Z in response to X can be produced by Z ,it means that Z performed a computation andF .Ipate,M.Gheorghe /Electronic Notes in Theoretical Computer Science 227(2009)113–126118F.Ipate,M.Gheorghe/Electronic Notes in Theoretical Computer Science227(2009)113–126119 |X= |X.In this case,we should be able to conclude that for any sequence of inputs Z will produce an output which is allowed by Z.This justifies the following.Definition3.1Let Z be a SXM and C a set of SXMs having the same input alphabet(Σ)and output alphabet(Γ)as Z.Then afinite set X⊆Σ∗is called a test set of Z w.r.t.C if∀Z ∈C, |X= |X for some ∈H Z, ∈H Z implies f Z=f Z .It is assumed that an IUT is a NSXM having the same input alphabet,output alphabet,memory and an initial memory value as a specification;additionally, a SXM specification has to satisfy three conditions:input-completeness,output-distinguishability and observability.Definition3.2Two SXMs Z and Z are called weak testing compatible if they have identical input alphabets,output alphabets,memory sets and initial memory values.Two weak testing compatible SXMs are called testing compatible if they have identical types.Φis called input-complete if∀φ∈Φ,m∈M,∃σ∈Σsuch that(m,σ)∈dom(φ).Φis called output-distinguishable if∀φ1,φ2∈Φ,(there are m∈M,σ∈Σsuch thatωφ1m(σ)∩ωφ2m(σ)=∅),thenφ1=φ2(ωφm was introduced in Sect.1).Φis called observable if∀φ∈Φ,σ∈Σ,γ∈Γ,m,m1,m2∈M,((γ,m1)∈φ(m,σ), (γ,m2)∈φ(m,σ)implies m1=m2).These three conditions(input-completeness,output-distinguishability and ob-servability)are generally known as“design for test conditions”[10,14].3With-out them,it would be extremely difficult to test a system properly.The input-completeness condition ensures that all sequences of processing relations in the associated FA can be attempted using appropriate inputs,so they can be tested against the implementation.The output-distinguishability condition ensures that any processing relation can be identified from the machine computation by examin-ing the outputs produced.WhenΦis observable,the next memory value computed by relations can be determined.Note that ifΦis a set of(partial)functions then it is already observable,so this condition is not explicitly stated when considering such SXMs(such as DSXMs).The basic idea of the testing method is to translate a test set of an associated FA into a test set of an IUT.In order to do this,we need a mechanism,called a test function,that converts sequences of processing relations into sequences of inputs. Definition3.3Let Z=(Σ,Γ,Q,M,Φ,F,q0,m0)be a SXM with an input-complete typeΦ.A test function of Z t:Φ∗−→Σ∗is defined as follows.First of all,t( )= ; for n>0andφ1,...,φn∈Φ,t(φ1...φn)=σ1...σk,whereσ1,...,σk∈Σare such that(m0,σ1...σk)∈dom(|φ1...φk|).In order to determine the value k≤n,two cases are considered,(1)φ1...φn∈L A Z,in which case k=n;(2)φ1...φn/∈L A Z and for0≤i<n there isφ1...φi∈L A Z such thatφ1...φi+1/∈L A Z,in which case k=i+1.3The design for test conditions for a deterministic SXM have been relaxed in[12]and[16]In other words,for any sequence v =φ1...φn of processing relations,t (v )is a sequence of inputs that exercises the longest prefix φ1...φi of v that is a path in a specification NSXM and,if i <n ,also exercises φi +1,the relation that follows after this prefix.Note that since Φis input-complete there always exist σ1,...,σk as above,but these are not necessarily uniquely defined.The result below is the theoretical basis for NSXM integration testing.Theorem 3.4[15]Let Z be a SXM having type Φinput-complete,output-distinguishable and observable and C a set of SXMs testing compatible with Z .If t is a test function of Z and Y ⊆Φ∗a test set of A Z w.r.t.A C ,where A C ={A Z |Z ∈C },then X =t (Y )is a test set of Z w.r.t.C .Since C is a set of (non-deterministic)SXMs testing compatible with Z ,it is assumed that the processing relations are implemented correctly,i.e.an IUT uses the same set of processing relations as a specification.Therefore,the method only tests the integration of processing relations.The correctness of implementation of these relations is checked by separate testing processes,as discussed in [15].We can now use theorems 3.4and 2.3to generate a test set of Z w.r.t.C m ,the set of NSXMs testing compatible with Z whose number of states does not exceed m ≥n .This is X m −n =t (Y m −n ),where Y m −n =P (Φm −n ∪...∪{ })(W ∪{ }),n is the number of states in Z ,P is a transition cover and W a characterisation set of A Z and t is a test function of Z .More details about the applicability of the NSXM integration testing method can be found in reference [15].4Theoretical Basis for Complete NSXM TestingThis section presents the theoretical basis for the complete NSXM testing method.Unlike integration testing,no assumption is made here regarding the correctness of the implementation of processing relations.Therefore,the general case is considered when a specification and an IUT may have different types (i.e.are weak testing compatible).Furthermore,an IUT will be checked for conformance to a specification rather than for equivalence.An IUT Z conforms to Z if Z is defined on all inputs on which Z is defined and the behaviour of Z is a subset of the behaviour of Z that traverses every path of Z at least once.Definition 4.1Let Z and Z be two SXMs having the same input alphabet (Σ)and output alphabet (Γ).We say that Z conforms to Z ,written Z Z ,if f Z f Z and H Z ∩H Z =∅.The definition of a test set is revised to reflect the more general situation.Definition 4.2Let Z be a SXM and C a set of SXMs with the same input alphabet (Σ)and output alphabet (Γ)as Z .Then a finite set X ⊆Σ∗is called a conformance test set of Z w.r.t.C if ∀Z ∈C , |X = |X for some ∈H Z , ∈H Z implies Z Z .Obviously,any test set of Z w.r.t.C is also a conformance test set of Z w.r.t.C .If Z is a deterministic SXM then the notions of a test set and a conformance testF .Ipate,M.Gheorghe /Electronic Notes in Theoretical Computer Science 227(2009)113–126120set coincide.The output-distinguishability condition has to be updated for the situation where a specification and an implementation may have different types.Definition 4.3Let Z =(Σ,Γ,Q,M,Φ,F,q 0,m 0)and Z =(Σ,Γ,Q ,M,Φ ,F ,q 0,m 0)be two weak testing compatible SXMs having types Φand Φ ,respectively.Then Φis called output-distinguishable w.r.t.Φ ifthere exists a bijective function c :Φ−→Φ such that the following holds:∀φ∈Φ,φ ∈Φ ,((∃m ∈M,σ∈Σsuch that ωφm (σ)∩ωφ m (σ)=∅)=⇒φ =c (φ)).This says that,for a processing relation φin Φ,we must be able to identify a corresponding relation φ =c (φ)in Φ by examining outputs.Naturally,if Φis output-distinguishable then Φis also output-distinguishable w.r.t.itself.In the conditions of Def. 4.3we denote by A −c Z =(Φ,Q ,F −c ,q 0)the FA obtained bysubstituting each arc c (φ)in A Z with φ,i.e.F −c (q ,φ)=F (q ,c (φ)).Obviously,for φ1,...,φn ∈Φ,φ1...φn ∈L A −c Ziffc (φ1)...c (φn )∈L A Z .Design for test conditions imply two important properties of computations,(1)the H Z ∩H Z =∅condition of Def. 4.1requires A Z and A Z to be equivalent (reference [9]permits L A Z ⊆L A Z rather than L A Z =L A Z as considered here,but restricts consideration to integration testing of an IUT which computes a function);(2)an equivalent definition to Def.4.1can be written as domf Z =domf Z ∧f Z ∈H Z .Since the method does not assume that processing relations are correctly imple-mented,we will have to test their implementations in addition to their integration.Therefore,a test set has to contain two components:an integration test set (from the previous section)and a set for testing processing relations (called a relation test set ).The latter is defined below.Definition 4.4Let Z =(Σ,Γ,Q,M,Φ,F,q 0,m 0)and Z =(Σ,Γ,Q ,M,Φ ,F ,q 0,m 0)be two weak testing compatible SXMs.Then for φ∈Φand m ∈M ,a finite set Σφm ⊆Σis called an m conformance test set of φw.r.t.Φ if the following holds:∀φ ∈Φ ,(if ψ|Σφm =ψ |Σφm for someψ,ψ :Σ←→Γ,ψ ωφm ,ψ ωφ m then φ φ).Σφm is called a m test set of φw.r.t.Φ if the following holds:∀φ ∈Φ,(if ψ|Σφm =ψ |Σφm for some ψ,ψ :Σ←→Γ,ψ ωφm ,ψ ωφ m then φ =φ).An m (conformance)test set of φw.r.t.Φ is a finite set of inputs that checks φ(for conformance or equivalence)against any processing relation in Φ .Definition 4.5Let Z be a SXM having type Φ.Then a set V ={v 1,...,v k }⊆Φ∗is called a relation cover of Z if Φcan be written as Φ={φ1,...,φk }such that the following hold:(1)v 1= and φ1∈L A Z ;(2)for any 2≤i ≤k ,v i ∈{φ1,...,φi −1}∗and v i φi ∈L A Z .In the above definition,v i is a sequence containing only the relations φ1,...,φi −1that reach a state in a specification from which an arc with φi is defined.Therefore,V reaches every processing relation in A Z using sequences of relations that have already been accessed.¿From Def.4.5,it follows that a relation cover of Z exists if F .Ipate,M.Gheorghe /Electronic Notes in Theoretical Computer Science 227(2009)113–126121for any proper subset Φ0of Φ,L A Z \Φ∗0=∅.This happens if A Z is accessible and all processing relations in Φare actually used as labels in A Z (i.e.πM (dom (F ))=Φ),as can be expected in practice.Definition 4.6Let Z =(Σ,Γ,Q,M,Φ,F,q 0,m 0)and Z =(Σ,Γ,Q ,M,Φ ,F ,q 0,m 0)be two weak testing compatible SXMs,Φ={φ1,...,φk }input-complete,V ={v 1,...,v k }a relation cover of Z and t a test function of Z .Then X Φ=∪k i =1t (v i )Σi m i is called a relation test set of Z w.r.t.Φ if for any 1≤i ≤k ,m i ∈πM (|v i |(m 0,t (v i )))and Σi m i is an m i conformance test set of φi w.r.t.Φ such that {m i }×Σi m i ∩dom (φi )=∅.For simplicity,in the expression of X Φwe used t (v i )instead of {t (v i )}.A relation test set of Z exists if a relation cover of Z exists and Φis input-complete.Due to non-determinism,relations can produce any possible values of memory;in order to test any given relation φi ,it is necessary to attempt all values from Σi m i for some value m i of memory.If t (v i )reaches different values of memory every time it is attempted,this cannot be done.For this reason,we have to strengthen the complete-testing assumption by requiring that there is an output sequence that can always (i.e.eventually)be produced by a NSXM in response to an input sequence.From the observability condition,this ensures the existence of m i which can be reached #Σi m i times by t (v i ).The idea behind the construction of a relation test set is to access and test every relation in A Z using sequences of relations that have already been tested.Therefore,a relation test set is used to test the processing relations of a SXM specification against their implementations,as shown by the result below.Lemma 4.7Let Z and Z be two weak testing compatible SXMs having types Φand Φ ,respectively,such that Φis input-complete,output-distinguishable w.r.t.Φ and observable.If X Φis a relation test set of Z w.r.t.Φ and |X Φ= |X Φfor some ∈H Z , ∈H Z ,then there exists a bijective function c :Φ−→Φ such that for any φ∈Φ,c (φ) φ.Proof.Let Φ={φ1,...,φk },V ={v 1,...,v k }and X Φ=∪k i =1t (v i )Σi m i be as inDef.4.6.Let also c :Φ−→Φ be as in Def.4.3.For simplicity,we use c :Φ∗−→Φ ∗to denote the free-semigroup morphism induced by c .We prove by induction on 1≤i ≤k the following statement:c (φi ) φi and c (v i φi )∈L A Z .For i =1this is c (φ1) φ1and c (φ1)∈L A Z .Let σ1∈Σ1m 0and γ1∈Γ,such that γ1∈ (σ1)∩ωφ1m 0(σ1).Since (σ1)= (σ1),∃φ ∈Φ such that φ ∈L A Z such that γ1∈ (σ1)∩ωφ m 0(σ1),so γ1∈ωφ1m 0(σ1)∩ωφ m 0(σ1).Since Φis output-distinguishable w.r.t.Φ ,we have φ =c (φ1),so c (φ1)∈L A Z .Furthermore,we define o 1:Σ1m 0←→Γby σ1o 1γ1for all σ1and γ1as above and define ψ1,ψ 1:Σ←→Γby ψ1(σ)=o 1(σ),σ∈Σ1m 0,ψ1(σ)=ωφ1m 0(σ),σ∈Σ\Σ1m 0,ψ 1(σ)=o 1(σ),σ∈Σ1m 0,ψ 1(σ)=ωφ 1m 0(σ),σ∈Σ\Σ1m 0.It is easy to verify that ψ1 ωφ1m 0,ψ 1 ωφ 1m 0and ψ1|Σ1m 0=ψ 1|Σ1m 0.Since Σ1m 0is an m 0conformance test set of φ1w.r.t.Φ ,we have c (φ1) φ1.F .Ipate,M.Gheorghe /Electronic Notes in Theoretical Computer Science 227(2009)113–126122。

Abstract

Abstract

A calculus for interaction nets based on thelinear chemical abstract machineShinya SatoHimeji Dokkyo University,Faculty of Econoinformatics,7-2-1Kamiohno,Himeji-shi,Hyogo670-8524,JapanIan MackieLIX,CNRS UMR7161,´Ecole Polytechnique,91128Palaiseau Cedex,France1IntroductionInteraction nets are graphical rewriting systems,defined in a very similar way to term rewriting systems:they are user-defined by giving a signature and a set of rules over the signature.At the origin,interaction nets were inspired by linear logic proof nets[7],specifically the multiplicative part.Since interaction nets were introduced by Lafont in1990[8]there has been a wealth of theory and applications developed(see for instance[3,6,4,10,12,9]for just a sample).In the study of the theory of interaction nets,various textual calculi have been proposed for interaction nets.Although these calculi destroy the graphical advan-tages,they do allow an easy way of writing them.In particular,they can provide a basis of a programming language[11],and they also serve to provide a language that is more familiar to develop proofs of properties.In[5]a calculus of interaction nets was proposed,based in part on the syntax that Lafont gave.The purpose of this current paper is to investigate alternative calculi for inter-action nets,with the aim of providing the following:•a calculus which is close,with as few overheads as possible,to the graphical syntax;This paper is electronically published inElectronic Notes in Theoretical Computer ScienceURL:www.elsevier.nl/locate/entcs•provide afirst analysis of the cost of interaction net reduction,so that we can build up a cost model of interaction nets.With respect to thefirst point,we see this paper as an increment on previous work.With respect to the second point,this paper is afirst step towards the study of cost models for interaction nets,which,in the future,will give the ability to compare different interaction net encodings of languages in a formal setting.The rest of this paper is structured as follows.In the next section we recall some basics on interaction nets.In Section3we review the linear chemical abstract machine.Section4is devoted to representing interaction nets in the linear chem-ical abstract machine.Section5is about cost models.We conclude the paper in Section6.2Interaction netsHere we review the basic notions of interaction nets.We refer the reader to[8]for a more detailed presentation.Interaction nets are specified by the following data:•A setΣof symbols.Elements ofΣserve as agent(node)labels.Each symbol has an associated arity ar that determines the number of its auxiliary ports.If ar(α)=n forα∈Σ,thenαhas n+1ports:n auxiliary ports and a distinguished one called the principal port.•A net built onΣis an undirected graph with agents at the vertices.The edges of the net connect agents together at the ports such that there is only one edge at every port.A port which is not connected is called a free port.•Two agents(α,β)∈Σ×Σconnected via their principal ports form an active pair (analogous to a redex).An interaction rule((α,β)→N)∈R in replaces the pair (α,β)by the net N.All the free ports are preserved during reduction,and there is at most one rule for each pair of agents.The following diagram illustrates the idea,where N is any net built fromΣ.We use the notation N1−→N2for the one step reduction and−→∗for its transitive and reflexive closure.Figure1shows an example of a system of interaction nets,which corresponds to the usual term rewriting system for addition on the natural numbers.Interaction nets have the following property[8]:•Strong Confluence:Let N be a net.If N−→N1and N−→N2with N1=N2, then there is a net N3such that N1−→N3and N2−→N3.2Fig.1.An example of a system of interaction nets3Linear chemical abstract machineIn this section,we introduce a framework which is a generalised linear chemical abstract machine(linear CHAM),introduced by Abramsky[1].This framework is specified by the following data:•We assume a set N of names,ranged over by x,y,z,...,x1,x2,....We use¯x,¯y,... to range over sequences of names.•A setΣof symbols.Elements ofΣserve as constructors of terms.Each symbol has an associated arity ar that determines the number of its arguments.•Terms built onΣhave one of the forms:t::=x|α(t1,...,t n),where t1,...,t n are terms,α∈Σand ar(α)=n.When ar(α)=0,the we omit brackets,i.e. we write justα.We use t,s,u,...to range over terms and¯t,¯s,¯u,...to range over sequences of terms.•Coequations have the form:t⊥s,where t,s are terms.Coequations are molecules, thus,elements of computation.Given¯t=t1,...,t k,¯s=s1,...,s k,we write¯t⊥¯s to denote the list t1⊥s1,...,t k⊥s k.We useΘ,Ξto range over sequences of coequa-tions.•Configurations have the form:Θ;¯t,whereΘis a sequence of coequations and ¯t is a sequence of terms.In a configurationΘ;¯t,we callΘa solution and¯t the main body.Main bodies are used for recording the result of the computation.We use P,Q to range over configurations.•Rewriting rules have the form:α(t1,...,t n)⊥β(s1,...,s k)−→Θ,whereα,β∈Σ, t1,...,t n,s1,...,s k are meta-variables for terms andΘis a sequence of coequations. Moreover,t1,...,t n,s1,...,s k must be distinct,occur once inΘrespectively and there should be at most one rule betweenαandβ.We use R cham for a set of these rules.Definition3.1(free and bound)When a name x occurs twice in a term t,then the occurrences are said to be bound for t and all other occurrences are free for t. When every name occurs twice in a term t,then t is said to be linear.We extend this notion into name occurrences in sequences of terms,coequations and so on.Linear CHAM is a rewriting system for configurations.The rules of linear CHAM are divided into three kinds:structural rules which describe the“magical mix-3Notation:(1)GivenΘ,we writeΘl to denote the results of replacing each occurrence of a bounded name x forΘby fresh names x l respectively.Structural Rules:•Basic Rules:t⊥u⇌u⊥t,t1⊥u1,t2⊥u2⇌t2⊥u2,t1⊥u1•Structural Context Rule:Basic rules can be applied in any context of solutions:Θ⇌ΞP1−→Q1Reaction Rules:•Communication:t⊥x,x⊥u−→t⊥u•Binding:t⊥x,s⊥u−→s⊥u[t/x](s,u are not names and x occurs in u)•Change:α(¯t)⊥β(¯s)−→Θl(α(¯t)⊥β(¯s)−→Θ∈R cham)•Reaction Context Rule:Θ−→Ξ•Strong Confluency:If P−→Q1,P−→Q2and Q1≡Q2,then for some R, Q1−→R and Q2−→R.•Determinacy:If P⇓Q1and P⇓Q2,then Q1≡Q2.4From interaction nets to linear CHAMIn this section we introduce a translation from interaction nets into the linear chem-ical framework.First,we define a translation T N from nets into configurations, where we restrict nets to be deadlock free(consequently,there are no vicious cir-cles).•Free ports:For every free port,we connect a principal port of a fresh agent whose arity is0.We use T,S,U,...to range over these fresh agents.•Agents:For every agent whose symbol isα,we introduce a termα(x1,...,x n) where each x1,...,x n is a fresh name.In this translation,the occurrence of the termα(x1,...,x n)is corresponding to the principal port of the agentα,and each occurrence of x1,...,x n corresponds to the auxiliary ports respectively.•Connections between principal ports:We assume that terms for these prin-cipal ports areα(¯t)andβ(¯s).For this connection,we introduce a coequation α(¯t)⊥β(¯s).•Connections between a principal port and an auxiliary port:We assume that terms for a principal port and an auxiliary port areα(¯t)and x respectively. For this connection,we replace the occurrence of x byα(¯t).•Connections between auxiliary ports:We assume that names for these aux-iliary ports are x and y respectively.For this connection,we introduce a fresh name z and we replace the occurrence of x and y by z.•Finalization:We make a configuration in the following way:·for the solution,collect coequations generated by this translation.·the main body must be empty because all free principal ports are connected to principal ports of fresh agents.For example,each net in the example of reductions in Figure1is represented as follows:add(T,Z)⊥S(Z);,T⊥S(x),add(x,Z)⊥Z;,T⊥S(Z);.We can show that all coequations obtained by using T N have the form t⊥u where t and u are not names because every occurrence of a coequation is caused as a result of a connection between principal ports:Lemma4.1When N has no vicious circle,then all coequations obtained by T N[N] have forms t⊥u where t and u are not names.2 Next we define a translation T R of a rule((α,β)→N)∈R in.•The left hand side:We introduce a coequationα(t1,...,t n)⊥β(s1,...,s k).•The right hand side:Assuming each auxiliary port in the right hand side are connected to principal ports t1,...,t n,s1,...,s k corresponding to the occurrences ofα(t1,...,t n)⊥β(s1,...,s k)respectively,we have a sequence of coequationsΘby5using the translation T N of the nets of the right hand side.•Finalization:We make a ruleα(t1,...,t n)⊥β(s1,...,s k)−→Θ(t1,...,t n,s1,...,s k),whereΘ(t1,...,t n,s1,...,s k)means meta variables t1,...,t n,s1,...s k occur inΘre-spectively.For example,each rule in Figure1is represented as follows:•add(t1,t2)⊥S(s1)−→t1⊥S(x),add(x,t2)⊥s1,•add(t1,t2)⊥Z−→t1⊥t2.4.1CorrectnessNext,we examine a correctness of the translation.In Figure1,thefirst net in the example of reductions is reduced to the last net by using rules obtained from the translation T as follows:add(T,Z)⊥S(Z);−→T⊥S(x),add(x,Z)⊥Z;−→T⊥S(x),x⊥Z;−→T⊥S(Z);. This correspondence shows the correctness of the this translation for the case of Figure1.We can show the correctness for the other cases:Theorem4.2Let R in and N1be linear.When N1−→N2by using a rule R∈R in and N1,N2does not contain any vicious circles,then by using a rule T R[R],either T N[N1]−→T N[N2]or T N[N1]−→·−→Binding T N[N2].Proof.We argue just a simple case such that((α,β)→N)∈R in where ar(α)=1, ar(β)=2,and T R[(α,β)→N]=α(t)⊥β(s1,s2)−→Θ(t,s1,s2).We check connections of free ports of(α,β):No port is connected:N is just a net(α,β).Then,by the construction ofΘ(t,s1,s2),T N[N1]=α(T)⊥β(S1,S2);−→Θ(T,S1,S2);=T N[N2].One port is connected:We assume that the most left free port is connected. To a principal port:A principal port of an agentγis connected as follows:Because the principal port ofγis represented as a term,we can showT N[N1]=α(γ(T))⊥β(S1,S2);−→Θ(γ(T),S1,S2);=T N[N2].To an one auxiliary port:An auxiliary port of an agentγis connected.6Case1:In the right hand side of the rule,the auxiliary port ofγis connected to a principal port:Let T R[(α,β)→N]=α(t)⊥β(s1,s2)−→t⊥σ(u),Θ′(s1,s2).Then,T N[N1]=T⊥γ(x),α(x)⊥β(S1,S2);−→T⊥γ(x),x⊥σ(u),Θ′(S1,S2);−→Binding T⊥γ(σ(u)),Θ′(S1,S2);=T N[N2].Case2:In the right hand side of the rule,the auxiliary port ofγis connected to an auxiliary port:Then,T N[N1]=T⊥γ(x),α(x)⊥β(S1,S2);−→T⊥γ(x),Θ(x,S1,S2);=T N[N2].Case3:In the right hand side of the rule,the auxiliary port ofγis connected to a free port.Let T R[(α,β)→N]=α(t)⊥β(s1,s2)−→t⊥s1,Θ′′(s2).Then,T N[N1]=T⊥γ(x),α(x)⊥β(S);−→T⊥γ(x),x⊥S1,Θ′′(S2);−→Binding T⊥γ(S1),Θ′′(S2);=T N[N2].Two ports are connected:In the case that these ports are connected to distinct agents respectively,we can show the correspondence as same as above.Next,we check the case that these ports are connected to the same agent:Let T R[(α,β)→N]=α(t)⊥β(s1,s2)−→s1⊥α2(u),Θ′(t,s2).Then,T N[N1]=α(γ(x))⊥β(x,S2);−→x⊥α2(u),Θ′(γ(x),S2);−→BindingΘ′(γ(α2(u)),S2);(by using the Lemma4.1)=T N[N2].The other cases are also shown in a similar way.We note that nets such as the following are not treated because there exists some vicious circle in N2:72 4.2How far from computation of interaction netsHere we ask how close our representation of interaction nets are to the graphical ones.The number of reductions may be increased more than the number of reduc-tions in Interaction Nets.For example,in Figure1,the number of reductions is two,but in the linear chemical framework,the total number of reductions is three, even though the number of reductions by using the change rule is two.This is because we have to introduce names for connections between auxiliary ports.Therefore,reductions for these names can be increased according to the number of these connections at most.We introduce a weight for the number which can be increased:•For a net N,a weight|N|is the number of connections between auxiliary ports.•For a rule((α,β)→N)∈R in,a weight|(α,β)→N|is|N|.When N1⇓N2,the sum of|N1|and|r|for each rule r which is applied in order to obtain N2is the number of reductions which can be increased in the case of the translation T.As an example,in Figure1,in the net of the right hand side in the rule between add and S,one name is introduced.Therefore the number of reductions is increased by one.On the other hand,this redundancy is needed for parallel computation.For example,in the following net,there are two active pairs whose auxiliary ports are connected together:When these reductions are performed simultaneously,there becomes no informa-tion about the connection between these auxiliary ports.For this reason,we need some device for preserving this information.In the case of our calculus,this net is represented as add(T,Z)⊥S(x),add(x,Z)⊥Z;,then,after these reductions that can be done simultaneously,we can rewire these auxiliary ports together.Therefore, we can think that this redundancy is the price that we have to pay for parallel computations.4.3Related worksIn[5]a textual calculus for interaction nets has been proposed.The main differ-ence between this calculus and our calculus is a way of representation of rewriting rules.In the textual calculus,fresh names are introduced according to occurrences8of names in a rule,even if the occurrences are not for connections between auxil-iary ports.Therefore,the number of reductions can be increased according to the number of names which is not for connections between auxiliary ports in rules at most.For example,the rules in Figure1are defined as follows:add(S(x),y)1S(add(x,y))add(x,x)1Z.Note that,in these rules,the number of names which are not for connections between auxiliary ports are one respectively.The rewritings are performed as follows: a|add(a,Z)=S(Z) −→ a|a=S(x′),Z=y′,Z=add(x′,y′) −→ S(x′)|Z=y′,Z=add(x′,y′) −→ S(x′)|Z=add(x′,Z)−→ S(x′)|x′=x′′,Z=x′′ −→ S(x′′)|Z=x′′ −→ S(Z)| . Compared with the number of reductions in our calculus,we canfind that the number of reductions is increased by two(see Example3.2).5Cost modelsIn this section we briefly outline how we propose to measure the cost of an interaction net computation using the calculus.The main point is that we can use the calculus to give a precise measure.In the graphical notation,for the rule between add and S in Figure1,we may estimate that the cost should be three times of the cost of a rewire of ports intu-itively:However,each cost of rewires are different.For example,as a result of a rewire, whether a new active pair can be created or not depends on kinds of connections. Therefore,it is difficult to estimate the cost.In our calculus,we can divide rewires into two sorts:one is creation of an active pair,another is just a re-connection.Taking account of agents,we introduce the following constants of costs:•α···the cost of creation of an agent(and a name),•ǫ···the cost of erasing of an agent(and a name),•θ···the cost of creation of an active pair,•ω···the cost of rewire.and we estimate costs of a rule as the sum of creation costs of the right hand side of the rule.As an example,for the rule add(t1,t2)⊥S(u)−→t1⊥S(x),add(x,t2)⊥u, the cost for thefirst coequation t1⊥S(x)can be estimated as the sum of the cost of one creation of a name x,one rewire of x to S and two rewires of both t1and S(x) to a coequation cell⊥,thusα+3ω+θ.For the next coequation add(x,t2)⊥s1,9the cost can be the sum of the cost of one rewires of x to add,one creation of a coequation cell⊥and two rewires of both add(x,t2)and s1to the coequation cell⊥,thus3ω+θ.Therefore,the cost can beα+6ω+2θ.For the another rule add(t1,t2)⊥Z−→t1⊥t2,the cost can be estimated as the sum of the cost of eliminations of agent add and Z,one creation of a coequation cell⊥and two rewires of both t1and t2to the coequation cell⊥,thus2ǫ+θ+2ω.If we can reuse a coequation cell in the left hand side,these costs can be reduced toα+6ω+θand 2ǫ+2ωrespectively.This cost model is suitable for the calculation models that treat an active pairs as a data object of two cells and store each information of connections between auxiliary ports into a buffer.6ConclusionsIn this paper we have given a new calculus for interaction nets that we believe is closer to the graphical framework than extant calculi.We have also made afirst attempt to study the cost of an interaction net computation.Current work is now devoted to building programming languages around this framework,and also using the framework to show properties of existing systems of interaction. References[1]putational interpretations of linear logic.Theoretical Computer Science,111:3–57,1993.[2]G.Berry and G.Boudol.The chemical abstract machine.In Conference Record of the SeventeenthAnnual Symposium on Principles of Programming Languages,pages81–94,San Francisco,California, Jan.1990.[3]M.Fern´a ndez and I.Mackie.Coinductive techniques for operational equivalence of interaction nets.In Proceedings of the13th Annual IEEE Symposium on Logic in Computer Science(LICS’98),pages 321–332.IEEE Computer Society Press,June1998.[4]M.Fern´a ndez and I.Mackie.Interaction nets and term rewriting systems.Theoretical ComputerScience,190(1):3–39,January1998.[5]M.Fern´a ndez and I.Mackie.A calculus for interaction nets.In G.Nadathur,editor,Proceedings of theInternational Conference on Principles and Practice of Declarative Programming(PPDP’99),volume 1702of Lecture Notes in Computer Science,pages170–187.Springer-Verlag,September1999.[6]M.Fern´a ndez and I.Mackie.A theory of operational equivalence for interaction nets.In G.Gonnet,D.Panario,and A.Viola,editors,LATIN2000.Theoretical Informatics.Proceedings of the4th LatinAmerican Symposium,Punta del Este,Uruguay,volume1776of Lecture Notes in Computer Science, pages447–456.Springer-Verlag,April2000.[7]J.-Y.Girard.Linear logic.Theoretical Computer Science,50:1–102,1987.[8]font.Interaction nets.In Seventeenth Annual Symposium on Principles of ProgrammingLanguages,pages95–108,San Francisco,California,1990.ACM Press.[9]S.Lippi.λ-calculus left reduction with interaction nets.Mathematical Structures in Computer Science,12(6),2002.[10]I.Mackie.Interaction nets for linear logic.Theoretical Computer Science,247(1):83–140,September2000.[11]I.Mackie.Towards a programming language for interaction nets.Electronic Notes in TheoreticalComputer Science,127(5):133–151,May2005.[12]J.S.Pinto.Sequential and concurrent abstract machines for interaction nets.In J.Tiuryn,editor,Proceedings of Foundations of Software Science and Computation Structures(FOSSACS),volume1784 of Lecture Notes in Computer Science,pages267–282.Springer-Verlag,2000.10。

Theoretical Computer Science,

Theoretical Computer Science,

Implementing Matching in A L E—First ResultsSebastian Brandt∗Theoretical Computer Science,TU Dresden,Germanyemail:brandt@tcs.inf.tu-dresden.deAbstractMatching problems in Description Logics are theoretically well understood, with a variety of algorithms available for different DLs.Nevertheless,still no im-plementation of a general matching algorithm exists.The present paper presentsan implementation of an existing matching algorithm for the DL A L E and showsfirst results on benchmarks w.r.t.randomly generated matching problems.Theobserved computation times show that the implementation performs well even onrelatively large matching problems.1MotivationMatching in Description Logics(DLs)has beenfirst introduced by Borgida and McGuinness in the context of the Classic system[9]as a means tofilter out irrelevant aspects of large concept descriptions.It has also been mentioned that matching(as well as unification)can be used either tofind redundancies in or to integrate knowl-edge bases[7,10].More recently,matching has been proposed to perform queries on knowledge bases,an application particularly interesting in combination with other non-standard inferences[11].A matching problem(modulo equivalence)consists of a concept description C and a concept pattern D,i.e.,a concept description with variables.Matching D against C meansfinding a substitution of variables in D by concept descriptions such that C is equivalent to the instantiated concept pattern D.Matching algorithms have been developed for the DLs A L N,A L E,and their re-spective sublanguages[4,3].For A L N and its sublangages,algorithms could even be found for an extension of matching problems,namely matching under side condi-tions[1].However,there exists no implementation of an algorithm providing matching in DLs as an explicit inference service.In the present paper,we present an implemen-tation of an A L E-matching algorithm as introduced in[3].It has also been shown in the relevant paper that the algorithm is in expspace.As with other non-standard inferences,the question arises whether or not the actual run-time behavior of an im-plemented algorithm is as adverse as the theoretical upper bound suggests.To cast light on this question,we have performed benchmarks w.r.t.randomly generated matching problems.As we shall see,in our case moderate optimization ∗Supported by the DFG under grant BA1122/4-3strategies suffice to observe practicable run-times.The remainder of the present paper is structured as follows:after introducing relevant basic notions and definitions the existing A L E-matching algorithm is discussed in Section3.In Section4the ideas underlying our implementation will be presented while Section5shows the results of our benchmarks.2PreliminariesConcept descriptions are inductively defined with the help of a set of constructors, starting with a set N C of concept names and a set N R of role names.For the sake of simplicity,we assume N R to be the singleton{r}.However,all definitions and results can easily be generalized to arbitrary sets of role names.In this work,we consider the DL A L E which allows for the top concept( ),bottom concept(⊥), conjunction(C D),existential restrictions(∃r.C),and value restrictions(∀r.C). The semantics of A L E-concept descriptions is defined in the usual model-theoretic way.For every concept description C the -normal form C of C is obtained by exhaustive application of the transformation rule∀r. → to C.In preparation to the following section we also need to introduce concept patterns. These are defined w.r.t.afinite set N X of concept variables distinct from N C.Concept patterns are an extension of concept descriptions in the sense that they allow for primitive concepts A∈N C and concept variables X∈N X as atomic constructors. The only restriction is that primitive negation may not be applied to concept variables. For every concept pattern D,a -pattern of D is obtained by syntactically replacing some variables in D by the top-concept .One of the most important traditional inference services provided by DL systems is computing the subsumption hierarchy of concept descriptions.The concept de-scription C is subsumed by the description D(C D)iffC I⊆D I holds for all interpretations I.The concept descriptions C and D are equivalent(C≡D)iffthey subsume each other.Subsumption of A L E-concept descriptions has been characterized by means of homomorphisms between so-called description trees[6]which are defined as follows.Definition1An A L E-description tree is a tree of the form G=(N,E,n0, )where1.N is afinite set of nodes;2.E⊆N×{∃,∀}×N R×N is afinite set of edges each labeled with a quantorand a role name;3.n0is the root node of G;4. is a labeling function with (n)⊆{⊥}∪N C∪{¬A|A∈N C}∪N X for alln∈N.Description trees correspond to syntax trees of concept descriptions(or concept patterns).It is therefore easy to see that concept descriptions can be translated into description trees and back(See[5]for a formal translation).By tree(C)we denote the description tree of the concept description(or concept pattern)C while con(G)denotes the concept description obtained from the tree G.For every node n in the description tree tree(C)of C we denote by C|n the subdescription obtained by translating the subtree of tree(C)induced by n back into a concept description.Definition2A mappingϕ:N H→N G from an A L E-description tree H:=(N H,E H, m0, H)to an A L E-description tree G:=(N G,E G,n0, G)is called homomorphism if and only if the following conditions hold:1.ϕ(m0)=n0;2.for all nodes n∈N H it holds that H(n)\N X⊆ G(ϕ(n))or⊥∈ G(ϕ(n));3.For all edges(n Qr m)∈E H,either(ϕ(n)Qrϕ(m))∈E G,orϕ(n)=ϕ(m)and⊥∈ G(ϕ(n)).It has been shown in[6]that C D for two concept descriptions C and D iffthere exists a homomorphismϕfrom tree(D )onto tree(C).Note,however,that the above definition includes homomorphisms from a description tree representing a concept pattern onto one representing a concept description.For the A L E-matching algorithm we also need to introduce the least common sub-sumer of A L E-concept descriptions.Definition3(lcs)Given A L E-concept descriptions C1,...,C n,the A L E-concept de-scription C is the least common subsumer(lcs)of C1,...,C n(C=lcs{C1,...,C n} for short)iff(i)C i C for all1≤i≤n,and(ii)C is the least concept description with this property,i.e.,if C satisfies C i C for all1≤i≤n,then C C .It has been shown in[6]that in the DL A L E the lcs of two or more concept descriptions always exists and is uniquely determined up to equivalence.Moreover,it can be computed in exponential time.3Matching in A L EIn order to define matching problems wefirst need to introduce substitutions on concept patterns.A substitutionσis a mapping from N X into the set of all A L E-concept descriptions.Substitutions are extended to concept patterns by induction on the structure of the pattern,thus modifying only the occurrences of variables in the pattern.The notion of subsumption is extended to substitutions in the following way.A substitutionσis subsumed by a substitutionτ(σ τ)iffσ(X) τ(X)for all X∈N X.With these preliminaries we can define matching problems.Definition4Let C be an A L E-concept description and D be an A L E-concept pattern. Then,C≡?D is a A L E-matching problem.A substitutionσis a matcher iffC≡σ(D).A set S of matchers to C≡?D is called s-complete ifffor every matcherτto C≡?D there exists an elementσ∈S withσ τ.In general a solvable matching problem has several matchers.One way to restrict the attention to‘interesting’sets matchers is to compute s-complete sets of matchers as defined above.Figure1shows the relevant A L E-matching algorithm originally presented in[2,3].It has been shown that it in fact computes s-complete sets of matchers,that the number of returned matchers is at most exponential,and that each matcher is of size at most exponential in the size of the matching problem.In[3]it is also shown that the matching algorithm is in expspace.It is still open how‘tight’this upper bound is,and especially,if sets of s-complete matchers can also be computed in exptime—currently the best lower bound for this computation problem.Input:A L E -matching problem C ≡?DOutput:s-complete set C of matchers for C ≡?DC :=∅For all -patterns D of D doFor all homomorphisms ϕfrom H :=tree (D )into tree (C )Define σby σ(X ):= lcs {C |ϕ(m )|m ∈N H ,X ∈ H (m )}if X ∈var (D ) otherwiseIf C σ(D )then C :=C ∪{σ}Figure 1:The A L E -Matching AlgorithmExample 5Let N C :={A }and N R :={r }.Consider the matching problem C ex ≡?D ex with C ex :=∃r.(A ∃r.A )and D ex :=X Y ∃r.(A Y ∀r.X ).The relevant description trees are shown below:In order to apply the matching algorithm shown in Figure 1we have to start bycomputing all -patterns D ex of D ex .Apart from D ex itself,these are Y ∃r.(A Y ∀r.X )=:D ex ,X ∃r.(A ∀r.)=:Dex ,and ∃r.(A ∀r. )=:D ex The next step is to compute the respective -normal forms.It is easyto see that the -normal formof D ex and D ex is equivalent to the original concepts.For D ex and D ex ,however,the valuerestriction ∀r. is removed.The description trees of the relevant normalizedconcepts are shown below.∃r∃rA A∃r ∃r ∃r m m A,Y Y tree (D ex )∃r tree (C ex )n 0n 1n 2X∀r m 1X A m 0m 2X ∀r A,Y X,Y tree (D ex )m m tree (D ex )tree (D ex )A m m m Because of the universal r -edge in tree (D ex )and tree (D ex )which is missing in tree (C ex )it is easy to see that no homomorphism exists from tree (D ex )or tree (D ex )onto tree (C ex ).However,by mapping m 0onto n 0and m 1onto n 1we find a homomor-phism ϕfrom tree (D ex )onto tree (C ex ).Hence,the next step is to construct a substi-tution σaccording to the definition in Figure 1.Since X is no element of var (D ex )we obtain σ(X )= .Moreover,we find that Y occurs in m 0and m 1.Hence,we have tocompute the lcs of C ex |ϕ(m 0)and C ex |ϕ(m 1).Since ϕ(m 0)=n 0and ϕ(m 1)=n 1this means to compute the lcs of C ex and ∃r.A .Thus,we obtain σ(Y )=∃r.A .In the nextstep of the algorithm we find that σ(D )=∃r.A ∃r.(A ∃r.A )which is subsumed by the input concept C ex .Thus,σis added to the list C of solutions.For the -pattern D ex it is easy to see that the only homomorphism ϕfrom tree (D ex )onto tree (C ex )also maps m 0onto n 0and m 1onto m 1.However,since D ex contains no variables,we immediately obtain the substitution σ ={X → ,Y → }.In this case,however,thefinal subsumption test does not hold,i.e.,Cex σ (D).As a result,σ={X→ ,Y→∃r.A}is returned as the only matcher for the matching problem C ex≡?D ex.4ImplementationConsidering the matching algorithm in Figure1we can identify three major tasks to be solved by an implementation.Firstly,all -patterns D of the input pattern D must be generated;secondly,all homomorphismsϕfrom tree(D )onto tree(C)must be found;and thirdly,for every variable X we must compute the lcs of all subconcepts C|ϕ(m)for which X occurs at position m in tree(D ).Thefirst task regards only the input concept pattern and requires only some simple syntactical replacements.Even the computation of the -normal form D of a -pattern D can be done in a straightforward way in polynomial time.As(even optimized)implementations of the lcs algorithm for A L E exist[8]the third task is simple as soon as D andϕare determined.Thefinal subsumption test C σ(D) can also be carried out by a standard reasoner,such as FaCT[13]or Racer[12].The crucial task,however,is the second one.An obvious approach to constructing homomorphisms between two description trees is the usual top-down strategy known from lcs algorithms.Starting at the root nodes of the source and the destination tree in question,one could test for all pairs of edges respecting Condition3whether or not a homomorphism exists between the subtrees induced by the endpoints of these edges.Recursively descending in such a way,all homomorphisms between source and destination tree could be computed.The problem with this approach is that subproblems may be solved several times over—for instance if two homomorphisms are equal w.r.t.some subtrees of the original description tree.To overcome this problem,we have chosen a dynamic-programming strategy to compose homomorphisms in a bottom-up fashion,thereby storing and re-using sets of admissible destination nodes for every source node.As a consequence,only polynomi-ally many subproblems have to be solved for the computation of one homomorphism. The dynamic-programming approach,however,suggests a more sophisticated data structures for the representation of description trees.It proved expedient not to choose an algebraic data structure(as used in the lcs implementations),but to repre-sent a description tree by a set of arrays indexed either by the nodes of the tree,by the role names occurring in the edge labels,or by the occurring variable names.As a result,all aspects important for the computation of homomorphisms can be retrieved instantly.In our implementation,the composition of a homomorphism is done in two steps. In thefirst step—the actual bottom-up computation—a set of admissible destination nodes is computed for every node of the source description tree.The results are then used in the second one to compute the actual homomorphisms.The crucial part in thefirst step is to determine whether or not a certain node is an admissible destination node.This part is shown in further detail in Figure2.The idea is to test for stricter conditions than Definition1suggests in order to detect pairs of nodes which cannot be part of a homomorphism as soon as possible.For instance, according to Definition1,a leaf labeled with⊥is always an admissible destinationInput:description trees G s=:(N s,E s,n s0, s),G d=:(N d,E d,n d0, d),n s∈N s,n d∈N dOutput:n d admissible destination of n s?True iff:•⊥∈ d(n d)and–depth(n s)>depth(n d)or–depth(n s)=depth(n d)and either n s=n s0and n s=n s0or both nodesare successors w.r.t.the same quantor and role name•⊥∈ d(n d)–depth(n s)=depth(n d)–for every successor n s of n s there exists at least one successor n d of n das admissible destination for n s.Figure2:Test for admissible destination nodesnode.However,if its depth exceeds that of the source node then every mapping containing this pair at some node on the path from the root to the source node violates Condition4.Note that in the second case shown in Figure2the test for the successor n s only ends in a recursive call if n s has never been considered beforehand. Note also that the dynamic programming strategy implies that no backtracking is necessary.In comparison to the theoretical algorithm the implemented one contains some mentionable optimizations:•PreprocessingThe input concept pattern and concept description is simplified to keep the relevant description trees as small as possible.•Necessary conditionsLet (D)and⊥(D)denote the concept obtained from the pattern D by replac-ing all variables in D by and⊥,respectively.If C (D)or⊥(D) C then the matching problem C≡?D has no solution.• -patternsIn many cases it is not necessary to generate all top-patterns D of D.This is only promising when replacing variables by leads to a removal of subterms in the -normal form D and hence to a removal of edges in the relevant description tree tree(D ).Moreover,if one -pattern D does admit of a homomorphism then any specialization of D does also,leading only to a solution not minimal w.r.t. .In the following section shows somefirst performance tests for the implemented algorithms with the optimizations discussed above.5BenchmarksAn obvious approach to benchmarking our implementation of A L E-matching is to test the performance on randomly generated matching problems.Nevertheless,if C and D are generated independently of each other then it is unlikely that a matcher forC≡?D exists.In particular,in the second optimization(necessary conditions)is likely to solve such matching problems without even invoking the actual matching algorithm.To overcome this difficulty,we randomly generate a concept C and then construct a concept pattern D from C by randomly replacing subconcepts of C by variables.Note that matching problems obtained in this way are not necessarily solvable because of multiple occurrences of variables.As a simple example,consider C:=∃r.A ∀r.B and D:=∃r.X ∀r.X.The matching problem C≡?D has no solution.As a consequence, the second optimization is not reflected in the results.Our benchmarks were taken on a standard PC with one1.7GHz Pentium-4pro-cessor and512MB of memory.A total of1200matching problems(in10groups, using different parameters for the random generation)was examined.Taking overall averages,the concept description C had an average size of518with a maximum of 992,and the concept pattern D had size185with a maximum of772.The matching algorithm on average took1.2seconds to solve the problem,the observed maximum was58.2seconds.6ConclusionIn the present paper we have presentedfirst experiences with an implementation of the A L E-matching algorithm as proposed by Baader and K¨u sters[3].The algorithm is based on a tree representation of the involved concept description and concept pattern. The main problem for the implementation is posed by that step of the algorithm in which all homomorphisms between the relevant description trees must be generated. Here we have chosen a dynamic programming approach which avoids solving identical subproblems several times.In addition to that,the implementation includes some straightforward optimizations aimed at identifying cases which have no solution as soon as possible.The benchmarks have shown that despite the high theoretical upper bound cur-rently known for the A L E-matching algorithm the implementation performs well even on relatively large randomly generated concepts.Obviously,our next step is to confirm ourfindings by further testing.Firstly,a greater variety of randomly generated matching problems could be considered.Sec-ondly,if available,matching problems resulting from realistic applications might give further insight into the practical benefit of our implementation.In case the current implementation performs well under the above circumstances, the next step could be an extension to matching under side conditions.References[1]F.Baader,S.Brandt,and R.K¨u sters.Matching under side conditions in de-scription logics.In Proc.of IJCAI’01,pages213–218,Seattle,Washington,2001.Morgan Kaufmann.[2]F.Baader and R.K¨u sters.Matching in Description Logics with Existential Re-strictions.In Proc.of DL1999,number22in CEUR-WS,Sweden,1999.[3]F.Baader and R.K¨u sters.Matching in description logics with existential re-strictions.In Proc.of KR2000,pages261–272,Breckenridge,CO,2000.Morgan Kaufmann Publishers.[4]F.Baader,R.K¨u sters,A.Borgida,and D.McGuinness.Matching in DescriptionLogics.Journal of Logic and Computation,9(3):411–447,1999.[5]F.Baader,R.K¨u sters,and puting least common subsumers indescription logics with existential restrictions.LTCS-Report LTCS-98-09,LuFG Theoretical Computer Science,RWTH Aachen,Germany,1998.See rmatik.rwth-aachen.de/Forschung/Papers.html.[6]F.Baader,R.K¨u sters,and puting Least Common Subsumersin Description Logics with Existential Restrictions.In Proc.of IJCAI’99,pages 96–101,Stockholm,Sweden,1999.Morgan Kaufmann Publishers.[7]F.Baader and P.Narendran.Unification of concept terms in description logics.In Proc.of ECAI-1998,pages331–335,Brighton,UK,1998.John Wiley&Sons Ltd.[8]F.Baader and A.-Y.Turhan.On the problem of computing small representationsof least common subsumers.In Proc.of KI2002,Lecture Notes in Artificial Intelligence,Aachen,Germany,2002.Springer–Verlag.[9]A.Borgida,R.J.Brachman,D.L.McGuinness,and L.A.Resnick.CLASSIC:A Structural Data Model for Objects.In Proc.of the1989ACM SIGMODInternational Conference on Management of Data,Portland,Oregon,pages58–67.ACM Press,1989.[10]A.Borgida and R.K¨u sters.What’s not in a name:Some Properties of a PurelyStructural Approach to Integrating Large DL Knowledge Bases.In Proc.of DL2000,number33in CEUR-WS,Aachen,Germany,2000.RWTH Aachen. [11]S.Brandt and ing non-standard inferences in descriptionlogics—what does it buy me?In Proc.of KIDLWS’01,number44in CEUR-WS, Vienna,Austria,September2001.RWTH Aachen.[12]Volker Haarslev and Ralf M¨o ller.RACER system description.Lecture Notes inComputer Science,2083:701–??,2001.[13]I.Horrocks.The FaCT system.In Proc.of Tableaux’98,volume1397of LectureNotes in Artificial Intelligence,pages307–312,Berlin,1998.Springer-Verlag.。

Computerized Medical Imaging and Graphics

Computerized Medical Imaging and Graphics

results 126 - 150Font Size:Journal (998)Journal/Book TitleExpert Systems with Applications (20)Computers & Operations Research (19)Nuclear Instruments and Methods in Physics Rese (16)Electronic Notes in Theoretical Computer Scienc (11)European Journal of Operational Research (11)view moreTopiccomputer science (12)delta (9)support vector (6)theoretical computer (6)genetic algorithm (5)Year2011 (72)2010 (140)2009 (144)2008 (123)2007 (83)view moreOpen all previews126 Systematische Gewinnung von Informationen zu ethischen Aspekten inHTA-Berichten zu medizinischenTechnologien bzw. Interventionen Original Research ArticleZeitschrift für Evidenz, Fortbildung und Qualität im Gesundheitswesen , Volume 102, Issue 5, 31July 2008, Pages 329-341Sigrid DrosteClose preview | Related articles | Related reference workarticlesAbstract | Figures/Tables | ReferencesZusammenfassungHintergrundZiel eines Health Technology Assessment ist die vollständigeund umfassende Beurteilung einer medizinischen Technologiebzw. Intervention. Dies schließt die Berücksichtigung ethischerAspekte ein, die mit der Nutzung einer medizinischenPurchase$ 31.50Technologie oder mit dem Prozess der Technologiebewertung verbunden sind. Ethik – als Philosophie der Moral – umfasst dabei Fragen der Autonomie, des Wohlergehens, der Schadensvermeidung bzw. -minimierung, derVerhältnismäßigkeit und der Gerechtigkeit.ZieleDarstellung der Arbeitschritte zur Gewinnung von Informationen zu ethischen Aspekten, Auflistung und Beschreibung der unterschiedlichen verfügbaren Informationsquellen sowie der an die jeweilige Informationsquelle angepassten Suchbegriffe und -strategien.VorgehensweiseDie Informationsgewinnung, welche analog den Arbeitsschritten zur Informationsgewinnung für die Bewertung des medizinischen Nutzens einer Medizintechnologie erfolgt, bedient sich neben den bekannten internationalen und nationalen Informationsquellen wie HTA-, biomedizinischer (wie z. B. MEDLINE), sozialwissenschaftliche sowie psychologischer Literaturdatenbanken auch bestehender Ethik-Datenbanken. Die Datenbanken erlauben Suchabfragen, welche eine Kombination aus Thesaurus- und Freitextbegriffen darstellen oder solche, die ausschließlich aus Freitextbegriffen bestehen. Eine ergänzende Handsuche in Ethik-Fachzeitschriften, den Internetseiten der HTA-Institutionen und wissenschaftlichen Institutionen/Organisationen mit Arbeitsschwerpunkt Ethik sowie ggf. auch Anfragen bei Ethikexperten vervollständigendas Suchergebnis.FazitDie Gewinnung von Informationen zu ethischen Aspekten medizinischer Technologien erfolgt zwar nach dem gleichen Arbeitsablauf wie die Bewertung des medizinischen Nutzens, bedarf jedoch eigener Suchstrategien und weiterer fachspezifischer Informationsquellen.SummaryBackgroundThe aim of a health technology assessment (HTA) is the complete and comprehensive evaluation of a medical technology or intervention. This includes the consideration of ethical aspects associated with the use of a medical technology or with the technology evaluation process. In this context, ethics, as a moral philosophy, embraces issues of autonomy, beneficence, non-maleficence, and justice.ObjectivesPresentation of the working steps to retrieve information on ethical aspects; documentation and description of the various information sources available, as well as of the search terms and strategies tailored to the respective information sources. ProceduresIn addition to well-known national and international informationsources such as bibliographic databases for HTA, biomedical sciences (e.g., MEDLINE), social sciences, and psychology, ethics databases are also used in the retrieval of information, which is performed analogously to the working steps applied in the retrieval of information for the evaluation of a health technology's clinical benefit. The databases allow search queries that present a combination of thesaurus and free-text terms or those that exclusively consist of free-text terms. The search results are completed by supplementary handsearching of ethics journals, websites of HTA institutions andinstitutions/organisations with key activities involving ethics and, if necessary, requests to ethics experts.ConclusionAlthough the search for information on ethical issues of medical technologies is performed according to the same procedures as those that are followed in clinical benefit assessments, specific search strategies and additional specialist information sources are needed.Article Outline1. Hintergrund2. Ablauf der systematischen Informationsgewinnung3. Quellen der Informationsgewinnung3.1. Literaturdatenbanken3.1.1. HTA-Datenbanken3.1.2. Biomedizinische DatenbankenMEDLINEEMBASE LocatorPlusScience Citation IndexCCMedVerlagsdatenbankenKarlsruher virtueller Katalog3.1.3. Sozialwissenschaftliche und psychologischeDatenbanken3.1.4. Ethik-DatenbankenBELITEthmedETHXWebEUROETHICSLEWI3.2. Zeitschriften3.3. Institutionen3.3.1. HTA-Institutionen3.3.2. Ethik-Institutionen3.4. Experten4. Speicherung der Suchergebnisse5. Dokumentation der Informationsgewinnung6. Nutzung der beschriebenen Informationsquellen undSuchbegriffeReferences127Locating and computing in parallel all the simple rootsof special functions using PVM Original Research ArticleJournal of Computational and Applied Mathematics,Volume 133, Issues 1-2, 1 August 2001, Pages 545-554V. P. Plagianakos, N. K. Nousis, M. N. VrahatisClose preview | Related articles | Related reference work articlesPurchase$ 31.50Abstract | Figures/Tables | ReferencesAbstractAn algorithm is proposed for locating and computing in parallel and with certainty all the simple roots of any twice continuously differentiable function in any specific interval. To compute with certainty all the roots, the proposed method is heavily based on the knowledge of the total number of roots within the given interval. To obtain this information we use results from topological degree theory and, in particular, the Kronecker–Picard approach. This theory gives a formula for the computation of the total number of roots of a system of equations within a given region, which can be computed in parallel. With this tool in hand, we construct a parallel procedure for the localization and isolation of all the roots by dividing the given region successively and applying the above formula to these subregions until the final domains contain at the most one root. The subregions with no roots are discarded, while for the rest a modification of the well-known bisection method is employed for the computation of the contained root. The new aspect of the present contribution is that the computation of the total number of zeros using theKronecker–Picard integral as well as the localization and computation of all the roots is performed in parallel using the parallel virtual machine (PVM). PVM is an integrated set of software tools and libraries that emulates ageneral-purpose, flexible, heterogeneous concurrent computing framework on interconnected computers of varied architectures. The proposed algorithm has large granularity and low synchronization, and is robust. It has been implemented and tested and our experience is that it can massively compute with certainty all the roots in a certain interval. Performance information from massive computations related to a recently proposed conjecture due to Elbert (this issue, J. Comput. Appl. Math. 133 (2001) 65–83) is reported.Article Outline1. Introduction2. The topological degree for the localization of zeros3. Computing the number of simple roots4. The modified bisection method5. The algorithms5.1. The algorithm of the master5.2. The algorithm of the slaves6. Experimental results7. ConclusionAcknowledgementsReferences128Maximizing production rate and workload smoothingin assembly lines using particle swarmoptimization Original Research ArticleInternational Journal of Production Economics, Volume129, Issue 2, February 2011, Pages 242-250Andreas C. NearchouClose preview| Related articles | Related reference work articlesAbstract | Figures/Tables | ReferencesAbstractParticle swarm optimization (PSO) one of the latest developed populationheuristics has rarely been applied in production and operations management(POM) optimization problems. A possible reason for this absence is that, PSOwas introduced as global optimizer over continuous spaces, while a large setof POM problems are of combinatorial nature with discrete decision variables.PSO evolves floating-point vectors (called particles) and thus, its application toPOM problems whose solutions are usually presented by permutations ofPurchase$ 39.95integers is not straightforward. This paper presents a novel method based on PSO for the simple assembly line balancing problem (SALBP), a well-known NP-hard POM problem. Two criteria are simultaneously considered for optimization: to maximize the production rate of the line (equivalently to minimize the cycle time), and to maximize the workload smoothing (i.e. to distribute the workload evenly as possible to the workstations of the assembly line). Emphasis is given on seeking a set of diverse Pareto optimal solutions for the bi-criteria SALBP. Extensive experiments carried out on multipletest-beds problems taken from the open literature are reported and discussed. Comparisons between the proposed PSO algorithm and two existingmulti-objective population heuristics show a quite promising higher performance for the proposed approach.Article Outline1. Introduction2. The bi-criteria simple assembly line balancing problem (SALBP)3. The particle swarm optimization (PSO) algorithm4. A particle swarm optimizer for multi-objective optimization in assembly lines 4.1. Mapping the ‗particles‘ to feasible ALB solutions4.2. Assigning the tasks to the workstations4.3. An implementation of PSO algorithm for multi-objective ALBPs4.4. The evaluation mechanism5. Experimental results5.1. Experimental setup5.2. The control parameters of the PSO algorithm5.3. Comparative results over benchmarks problems6. Conclusions7. AppendixReferences129Recent developments in pedestrian flow theory andresearch in Russia Original Research ArticleFire Safety Journal, Volume 43, Issue 2, February 2008,Pages 108-118V.V. Kholshevnikov, T.J. Shields, K.E. Boyce, D.A.SamoshinClose preview | Related articles |Related reference work articlesAbstract | Figures/Tables | ReferencesAbstractPredtechenskii and Milinskii's seminal work [Predtechenskii VM, Milinskii AI.Planning for foot traffic flow in buildings. Revised and updated edition.Moscow: Stoiizdat; 1969] in relation to pedestrian flows is well known.However, analysis of the experimental results and observations obtained fromthis series of experimental studies revealed the inherent statisticalnon-homogeneity of pedestrian flow speeds [Kholshevnikov VV. The study ofhuman flows and methodology of evacuation standardisation. Moscow: MIFS;1999]. As such, the results of these individual experiments cannot beintegrated to produce a valid general expression V=f(D) for each type ofpedestrian flow path, where V is the flow velocity and D is the flow density.This paper presents further pedestrian flow research conducted in Russia post1969.In this paper pedestrian flow is treated as a stochastic process, i.e., that whichmight be observed in a series of experiments as a manifestation of the randomfunction V=f(D). A fundamentally new random methodology to mathematicallydescribe this function is presented. Corresponding computer simulationmodels ―Analysis of Foot Traffic Flow Probability‖ (known by the Russianacronym ADLPV) [Kholshevnikov VV. Human flows in buildings, structuresand on their adjoining territories. Doctor of science thesis. Moscow: MISI;Purchase$ 31.501983; Kholshevnikov VV, Nikonov SA, Levin YP. Human flows modelling and computations. In: The study of architecture design issues. Tomsk: TGU; 1983; Kholschevnikov VV, Nikonov SA, Shamgunov RN. Modelling and analysis of motion of foot traffic flows in buildings of different usage. Moscow Civil Engineering Institute; 1986; Kholshevnikov VV, Nikonov SA, Shamgunov RN. Modelling and analysis of pedestrian of pedestrian flow movement in various facilities. CIB W14/87/41987; Bradley D, Drysdale D, Molkov V, editors. Retrospective review of research on pedestrian flows modelling in Russia and perspectives for its development. In: Proceedings of the fourth international seminar ―fire and explosion hazards‖, Londonderry, UK, 8–12 September 2003. p. 907–16; Nikonov SA. A development of an arrangements concerning fire evacuation in public buildings on the basic of foot traffic flow modelling. PhD thesis, (Supervisor V.V. Kholshevnikov). Moscow: HFSETS; 1985; Isaevich II. A development of multi-variative analysis of design solution for subway stations and transfer knots based on foot traffic flow modelling. PhD thesis, (Supervisor V.V. Kholshevn ikov). Moscow: MISI; 1990] and ―Free Foot Traffic Flow‖ (known by the Russian acronym SDLP) [Kholshevnikov VV. Human flows in buildings, structures and on their adjoining territories. Doctor of science thesis. Moscow: MISI; 1983; Aibuev ZS-A. The formation of foot traffic flows on large industrial territories. PhD thesis, Moscow: Moscow Civil Engineering Institute; 1989; Nikonov SA. A development of an arrangements concerning fire evacuation in public buildings on the basic of foot traffic flow modelling. PhD thesis, (Supervisor V.V. Kholshevnikov). Moscow: HFSETS; 1985; Kholshevnikov VV, Shields TJ, Samoshyn DA. Foot traffic flows: background for modelling. Proceedings of the second international conference on pedestrian and evacuation dynamics, University of Greenwich, 2003, p. 420] are described. The high degree of correspondence between observed pedestrian flows and the output from these models has been sufficient for the models to be accepted by statutory authorities and used in building design andregulation in Russia [Building regulations. Fire safety of buildings and structures. SNiP II-2-80. Moscow: Stroizdat; 1981.[12]; State Standard12.1.0004—91 (GOST). Fire Safety. General requirements. Moscow, 1992; Building Regulations. Building accessibility for disabled people. SNiP 35-01-2000. Moscow: Stroizdat; 2000] for many years.Article OutlineNomenclature1. Introduction2. Fundamental laws of pedestrian flow3. Potential emotional impact on travel speed4.Development of relation between travel speed, density, emotional state and type of route5. Psychological impact on pedestrian travel speed6. Pedestrian flow modelling and validation7. ConclusionsReferences 130The persistence of memory: Forensic identification and extraction of cryptographic keys Original ResearchArticleDigital Investigation , Volume 6, Supplement 1, September 2009, Pages S132-S140Carsten Maartmann-Moe, Steffen E. Thorkildsen, André ÅrnesClose preview | Related articles | Related reference work articles Abstract | Figures/Tables | ReferencesAbstractThe increasing popularity of cryptography poses a great challenge in the field of digital forensics. Digital evidence protected by strong encryption may be impossible to decrypt without the correct key. We propose novel methods for Purchase$ 31.50cryptographic key identification and present a new proof of concept tool named Interrogate that searches through volatile memory and recovers cryptographic keys used by the ciphers AES, Serpent and Twofish. By using the tool in a virtual digital crime scene, we simulate and examine the different states of systems where well known and popular cryptosystems are installed. Our experiments show that the chances of uncovering cryptographic keys are high when the digital crime scene are in certain well-defined states. Finally, we argue that the consequence of this and other recent results regarding memory acquisition require that the current practices of digital forensics should be guided towards a more forensically sound way of handling live analysis in a digital crime scene.Article Outline1. Introduction2. Related work3. Cryptographic key identification3.1. Proof of concept tool: Interrogate3.2. AES key representation in memory3.2.1. AES keys3.3. Serpent key representation in memory3.3.1. Identifying serpent keys3.4. Twofish key representation in memory3.4.1. Notes on the Twofish key schedule3.4.2. Identifying TrueCrypt Twofish keys3.4.3. A less implementation-dependent search4. Leveraging memory structure5. Experiments5.1. Simulation of digital crime scenes5.2. Classes of cryptographic software 5.2.1. Whole-disk encryption5.2.2. Virtual disk encryption5.2.3. Session-based encryption5.3. Definition of system states5.4. Virtualized case generation procedure5.5. Real-world testing6. Results7. Towards forensically sound cryptographic memory forensics8. Conclusions and future workAcknowledgementsReferences131A comparative study of the finite-sample performanceof some portmanteau tests for randomness of a timeseries Original Research ArticleComputational Statistics & Data Analysis, Volume 48,Issue 2, 1 February 2005, Pages 391-413Andy C.C. Kwan, Ah-Boon Sim, Yangru WuClose preview | Related articles | Related reference work articlesAbstract | Figures/Tables | ReferencesAbstractTesting for the randomness of a time series has been one of the most widelyresearched topics in time-series analysis. The present paper carries out acomparative study of the finite-sample performance of some well-knownportmanteau tests in this area. Using Monte Carlo simulation experiments, wefind that (i) the empirical sizes of some oft-used parametric portmanteau testsare severely undersized when the data generating process is skewed, (ii) thenon-parametric portmanteau test possesses proper sizes only when thenumber of rank autocorrelations is chosen to be small relative to the samplesize, (iii) the non-parametric portmanteau test is more powerful than thePurchase$ 31.50parametric portmanteau tests in the case of skewed distributions, and (iv) the choice of the number of sample autocorrelations (or rank autocorrelations) can significantly affect the size as well as the power of the tests considered. Article Outline 1. Introduction2. Portmanteau tests for randomness3. Simulation experiments and main results4. Concluding remarksAcknowledgementsAppendix A. Empirical means, variances and variance/mean ratios of theportmanteau testsReferences132Analysis of risk factors for abdominal aortic aneurysm in a cohort of more than 3 million individuals OriginalResearch ArticleJournal of Vascular Surgery , Volume 52, Issue 3,September 2010, Pages 539-548K. Craig Kent, Robert M. Zwolak, Natalia N. Egorova, Thomas S. Riles, Andrew Manganaro, Alan J. Moskowitz, Annetine C. Gelijns, Giampaolo GrecoClose preview | Related articles | Related reference work articles Abstract | Figures/Tables | ReferencesBackgroundAbdominal aortic aneurysm (AAA) disease is an insidious condition with an 85% chance of death after rupture. Ultrasound screening can reduce mortality, but its use is advocated only for a limited subset of the population at risk. MethodsWe used data from a retrospective cohort of 3.1 million patients who Purchase $ 31.50completed a medical and lifestyle questionnaire and were evaluated by ultrasound imaging for the presence of AAA by Life Line Screening in 2003 to 2008. Risk factors associated with AAA were identified using multivariable logistic regression analysis.ResultsWe observed a positive association with increasing years of smoking and cigarettes smoked and a negative association with smoking cessation. Excess weight was associated with increased risk, whereas exercise and consumption of nuts, vegetables, and fruits were associated with reduced risk. Blacks, Hispanics, and Asians had lower risk of AAA than whites and Native Americans. Well-known risk factors were reaffirmed, including male gender, age, family history, and cardiovascular disease. A predictive scoring system was created that identifies aneurysms more efficiently than current criteria and includes women, nonsmokers, and individuals aged <65 years. Using this model on national statistics of risk factors prevalence, we estimated 1.1 million AAAs in the United States, of which 569,000 are among women, nonsmokers, and individuals aged <65 years.ConclusionsSmoking cessation and a healthy lifestyle are associated with lower risk of AAA. We estimated that about half of the patients with AAA disease are not eligible for screening under current guidelines. We have created a high-yield screening algorithm that expands the target population for screening by including at-risk individuals not identified with existing screening criteria. Article OutlineMethodsSource of dataStatistical analysisEstimation of national AAA prevalence ResultsCharacteristics of the study population Risk of AAAPrevalence of AAA in the United States DiscussionConclusionsAuthor contributions AcknowledgementsReferences。

计算机的历史2019

计算机的历史2019
一台装配有微处理器的计算机,从此掀开了个人电 脑的序幕。
1976年:4月1日,斯蒂夫.沃兹尼亚克(Stephen Wozinak) 和斯蒂夫.乔布斯(Stephen Jobs)共同创立了苹果公司,并 推出了自己的第一款计算机:Apple-Ⅰ。2014年拍卖90.5万
• 1981年8月,IBM唐.埃 斯特奇(D.Estridge) 领导的开发团队完成 了IBM个人电脑PC5150的研发,IBM宣 布了IBM PC的诞生, 由此掀开了改变世界 历史的一页。
双屏计算机和可穿戴计算机
生物计算机
• 生物计算机也称仿生计算机,主要原材料是生物工程技术 产生的蛋白质分子,并以此作为生物芯片来替代半导体硅 片,利用有机化合物存储数据。信息以波的形式传播,当 波沿着蛋白质分子链传播时,会引起蛋白质分子链中单键、 双键结构顺序的变化。运算速度要比当今最新一代计算机 快10万倍,它具有很强的抗电磁干扰能力,并能彻底消除 电路间的干扰。能量消耗仅相当于普通计算机的十亿分之 一,且具有巨大的存储能力。生物计算机具有生物体的一 些特点,如能发挥生物本身的调节机能,自动修复芯片上 发生的故障,还能模仿人脑的机制等
你心中未来的计算机是什么样的?
ENIAC长30.48米,宽6米,高2.4米,占地面积约170平方米, 30个操作台,重达30英吨,耗电量150千瓦/小时,造价48万 美元。它包含了18000根真空电子管,10000个电容器,1500 个继电器,计算速度是每秒5000次加法或400次乘法,是使 用继电器运转的机电式计算机的1000倍、手工计算的20万倍。
• 1958年,美国的IBM公司制成了第一台全部使用晶体管的计算 机RCA501型。
• 1964年,中国制成了第一台全晶体管电子计算机441—B型。

Electronic Notes in Theoretical Computer Science

Electronic Notes in Theoretical Computer Science

DCM2005Preliminary VersionAbstract Effective ModelsUdi Boker1,2School of Computer ScienceTel Aviv UniversityTel Aviv69978,IsraelNachum Dershowitz3School of Computer ScienceTel Aviv UniversityTel Aviv69978,IsraelAbstractWe modify Gurevich’s notion of abstract machine so as to encompass computational models,that is,sets of machines that share the same domain.We also add an effec-tiveness requirement.The resultant class of“Effective Models”includes all known Turing-complete state-transition models,operating over any countable domain.Key words:Computational models,Turing machines,ASM,Abstract State Machines,Effectiveness1Sequential ProceduresWefirst define“sequential procedures”,along the lines of the“sequential algorithms”of[3].These are abstract state transition systems,whose states are algebras.Definition1.1(States)•A state is a structure(algebra)s over a(finite-arity)vocabulary F,that is, a domain(nonempty set of elements)D together with interpretations[[f]]s over D of the function names f∈F.•A location of vocabulary F over a domain D is a pair,denoted f(a),where f is a k-ary function name in F and a∈D k.1This work was carried out in partial fulfillment of the requirements for the Ph.D.degree of thefirst author.2Email:udiboker@tau.ac.il3Email:nachumd@tau.ac.ilThis is a preliminary version.Thefinal version will be published inElectronic Notes in Theoretical Computer ScienceURL:www.elsevier.nl/locate/entcs•The value of a location f(a)in a state s,denoted[[f(a)]]s,is the domain element[[f]]s(a).•We sometimes use a term f(t1,...,t k)to refer to the location f([[t1]]s,...,[[t k]]s).•Two states s and s over vocabulary F with the same domain coincide over a set T of F-terms if[[t]]s=[[t]]s for all terms t∈T.•An update of location l over domain D is a pair,denoted l:=v,where v∈D.•The modification of a state s into another state s over the same vocabulary and domain is∆(s,s )={l:=v |[[l]]s=[[l]]s =v }.•A mappingρ(s)of state s over vocabulary F and domain D via injection ρ:D→D is a state s of vocabulary F over D ,such thatρ([[f(a)]]s)= [[f(ρ(a))]]s for every location f(a)of s.•Two states s and s over the same vocabulary with domains D and D , respectively,are isomorphic if there is a bijectionπ:D↔D ,such that s =π(s).A“sequential procedure”is like Gurevich’s[3]“sequential algorithm”,with two modifications for computing a specific function,rather than expressing an abstract algorithm:the procedure vocabulary includes special constants“In”and“Out”;there is a single initial state,up to changes in In.Definition1.2(Sequential Procedures)•A sequential procedure A is a tuple F,In,Out,D,S,S0,τ ,where:F is afinite vocabulary;In and Out are nullary function names in F;D,the procedure domain,is a domain;S,its states,is a collection of structures of vocabulary F,closed under isomorphism;S0,the initial states,is a subset of S over the domain D,containing equal states up to changes in the value of In(often referred to as a single state s0);andτ:S→S,the transition function,such that:·Domain invariance.The domain of s andτ(s)is the same for every state s∈S.·Isomorphism preservation.The transition function preserves isomor-phism.Meaning,if states s and s are isomorphic via a bijectionπ,then τ(s)andτ(s )are also isomorphic viaπ.That is,τ(π(s))=π(τ(s)).·Bounded exploration.There exists afinite set T of“critical”terms, such that∆(s,τ(s))=∆(s ,τ(s ))if s and s coincide over T.Tuple elements of a procedure A are indexed F A,τA,etc.•A run of a procedure A is afinite or infinite sequence s0;τs1;τs2;τ···,where s0is an initial state and every s i+1=τA(s i).•A run s0;τs1;τs2;τ···terminates if it isfinite or if s i=s i+1from some point on.•The terminating state of a terminating run s0;τs1;τs2;τ···is2its last state if it is finite,or its stable state if it is infinite.If there is a terminating run beginning with state s and terminating in state s ,we write s ;!τs .•The extensionality of a sequential procedure A over domain D is the partial function f :D →D ,such that f (x )=[[Out ]]s whenever there’s a run s ;!τs with [[In ]]s =x ,and is undefined otherwise.Domain invariance simply ensures that a specific “run”of the procedure is over a specific domain.The isomorphism preservation reflects the fact that we are working at a fixed level of abstraction.See [3,p.89].The bounded-exploration constraint is required to ensure that the behavior of the procedure is effective.This reflects the informal assumption that the program of an algorithm can be given by a finite text [3,p.90].2Programmable MachinesThe transition function of a “programmable machine”is given by a finite “flat program”:Definition 2.1(Programmable Machines)•A flat program P of vocabulary F has the following syntax:if x 11.=y 11and x 12.=y 12and ...x 1k 1.=y 1k 1then l 1:=v 1if x 21.=y 21and x 22.=y 22and ...x 2k 2.=y 2k 2then l 2:=v 2...if x n 1.=y n 1and x n 2.=y n 2and ...x nk n .=y nk nthen l n :=v n where each .=is either ‘=’or ‘=’,n,k 1,...,k n ∈N ,and all the x ij ,y ij ,l i ,and v i are F -terms.•Each line of the program is called a rule .•The activation of a flat program P on an F -structure s ,denoted P (s ),is a set of updates {l :=v |if p then l :=v ∈P,[[p ]]s }(under the standard interpretation of =,=,and conjunction),or the empty set ∅if the above set includes two values for the same location.•A programmable machine is a tuple F ,In ,Out ,D,S ,S 0,P ,where all but the last component is as in a sequential procedure (Definition 1.2),and P is a flat program of F .•The run of a programmable machine and its extensionality are defined as for sequential procedures (Definition 1.2),where the transition function τis given by τ(s )=s ∈S such that ∆(s,s )=P (s ).To make flat programs more readable,we combine rules,as in3%commentif cond-1stat-1stat-2elsestat-3Analogous to the the main lemma of[3],one can show that every program-mable machine is a sequential procedure,and every sequential procedure is a programmable machine.In contradistinction to Abstract Sequential Machines(ASMs),we do not have built in equality,booleans,or an undefined in the definition of procedures: The equality notion is not presumed in the procedure’s initial state,nor can it be a part of the initial state of an“effective procedure”,as defined below. Rather,the transition function must be programmed to perform any needed equality checks.Boolean constants and connectives may be defined like any other constant or function.Instead of a special term for undefined values,a default domain value may be used explicitly.3Effective ModelsWe define an“effective procedure”as a sequential procedure satisfying an “initial-data”postulate(Axiom3.3below).This postulate states that the procedures may have onlyfinite initial data in addition to the domain repre-sentation(“base structure”).An“effective model”is,then,any set of effective procedures that share the same domain representation.We formalize thefiniteness of the initial data by allowing the initial state to contain an“almost-constant structure”.Since we are heading for a char-acterization of effectiveness,the domain over which the procedure actually operates should have countably many elements,which have to be nameable. Hence,without loss of generality,one may assume that naming is via terms. Definition3.1(Almost-Constant and Base Structures)•A structure S is almost constant if all but afinite number of locations have the same value.•A structure S offinite vocabulary F over a domain D is a base structure if all the domain elements are the value of a unique F-term.That is,for every element e∈D there exists a unique F-term t such that[[t]]S=e.•A structure S of vocabulary F over domain D is the union of structures S and S of vocabularies F and F ,respectively,over D,denoted S=S S , if F=F F ,[[l]]S=[[l]]S for every location l of S ,and[[l]]S=[[l]]S for every location l of S .A base structure is isomorphic to the standard free term algebra(Herbrand universe)of its vocabulary.4Proposition3.2Let S be a base structure over vocabulary G and domain D. Then:•Vocabulary G has at least one nullary function.•Domain D is countable.•Every domain element is the value of a unique location of S.Axiom3.3(Initial Data)The procedure’s initial states consist of an infi-nite base structure and an almost-constant structure.That is,for some infinite base structure BS and almost-constant structure AS,and for every initial state s0,we have s0=BS AS {In}for some In.Definition3.4(Effective Procedures and Models)•An effective procedure A is a sequential procedure satisfying the initial-data postulate.An effective procedure is,accordingly,a tuple F,In,Out,D,S,S0,τ,BS,AS ,adding a base structure BS and an almost-constant structure AS to the sequential procedure tuple,defined in Defini-tion1.2.•An effective model E is a set of effective procedures that share the same base structure.That is,BS A=BS B for all effective procedures A,B∈E.A computational model might have some predefined complex operations,as in a RAM model with built-in integer multiplication.Viewing such a model as a sequential algorithm allows the initial state to include these complex functions as oracles[3].Since we are demanding effectiveness,we cannot allow arbitrary functions as oracles,and force the initial state to include only finite data over and above the domain representation(Axiom3.3).Hence,the view of the model at the required abstraction level is accomplished by“big steps”,which may employ complex functions,while these complex functions are implemented by afinite sequence of“small steps”behind the scenes.That is,(the extensionality of)an effective procedure may be included(as an oracle) in the initial states of other effective procedures.(Cf.the“turbo”steps of[2].) 4Effective Includes ComputableTuring machines,and other computational methods,can be shown to be ef-fective.We demonstrate below how Turing machines and counter machines can be described by effective models.4.1Turing Machines.We consider Turing machines(TM)with two-way infinite tapes.The tape alphabet is{0,1}.The two edges of the tape are marked by a special$ sign.As usual,the state(instantaneous description)of a Turing machine is Left,q,Right ,where Left is afinite string containing the tape section left of the reading head,q is the internal state of the machine,and Right is afinite5string with the tape section to the right to the read head.The read head points to thefirst character of the Right string.TMs can be described by the following effective model E:Domain:Finite strings ending with a$sign.That is the domain D= {0,1}∗$.Base structure:Constructors for thefinite strings(name/arity):$/0, Cons0/1,and Cons1/1.Almost-constant structure:•Input and Output(nullary functions):In,Out.The value of In at the initial state is the content of the tape,as a string over{0,1}∗ending with a$sign.•Constants for the alphabet characters and TM-states(nullary):0,1,q0, q1,...,q k.Their initial value is irrelevant,as long it is a different value for each constant.•Variables to keep the current status of the Turing machine(nullary):Left, Right,and q.Their initial values are:Left=$,Right=$,and q=q0.•Functions to examine the tape(unary functions):Head and Tail.Their initial value,at all locations,is$.Transition function:For each Turing machine m∈TM,define an effective procedure m ∈E via aflat program looking like this:if q=q_0%TM’s state q_0if Head(Right)=0%write1,move right,switch to q_3Left:=Cons_1(Left)Right:=Tail(Right)q:=q_3%Internal operationsTail(Cons_1(Left)):=LeftHead(Cons_1(Left)):=1if Head(Right)=1%write0,move left,switch to q_1Left:=Tail(Left)Right:=Cons_0(Right)q:=q_1%Internal operationsTail(Cons_0(Right)):=RightHead(Cons_0(Left)):=0if q=q_1%TM’s state q_1...if q=q_k%the halting stateOut:=Right6The updates for Head and Tail are bookkeeping operations that are really part of the“behind-the-scenes”small steps.The procedure also requires some initialization,in order tofill the internal functions Head and Tail with their values for all strings up to the given input string.It sequentially enumerates all strings,assigning their Head and Tail values,until encountering the input string.The following internal variables (nullary functions)are used in the initialization(Name=initial value):New= $,Backward=0,Forward=1;AddDigit=0,and Direction=$.%Sequentially constructing the Left variable%until it equals to the input In,for filling%the values of Head and Tail.%The enumeration is$,0$,1$,00$,01$,...if Left=In%FinishedRight:=LeftLeft:=$else%Keep enumeratingif Direction=New%default valif Head(Left)=$%$->0$Left:=Cons_0(Left)Head(Cons_0(Left)):=0Tail(Cons_0(Left)):=Leftif Head(Left)=0%e.g.110$->111$Left:=Cons_1(Tail(Left))Head(Cons_1(Tail(Left)):=1Tail(Cons_1(Tail(Left)):=Tail(Left)if Head(Left)=1%01$->10$;11$->000$Direction:=BackwardLeft:=Tail(Left)Right:=Cons_0(Right)if Direction=Backwardif Head(Left)=$%add rightmost digitDirection:=ForwardAddDigit:=Trueif Head(Left)=0%change to1Left:=Cons_1(Tail(Left))Direction:=Forwardif Head(Left)=1%keep backwardsLeft:=Tail(Left)Right:=Cons_0(Right)if Direction=Forward%Gather right0sif Head(Right)=$%finished gatheringDirection:=Newif AddDigit=1Left:=Cons_0(Left)Head(Cons_0(Left)):=0Tail(Cons_0(Left)):=Left7AddDigit=0elseLeft:=Cons_0(Left)Right:=Tail(Right)Head(Cons_0(Left)):=0Tail(Cons_0(Left)):=Left4.2Counter Machines.Counter machines(CM)can be described by the following effective model E:The domain is the natural numbers N.The base structure consists of a nullary function Zero and a unary function Succ,interpreted as the reg-ular successor over N.The almost-constant structure has the vocabulary (name/arity):Out/0,CurrentLine/0,P red/1,Next/1,Reg0,...,Reg n/0, and Line1,...,Line k/0.Its initial data are T rue=1,Line i=i,and all other locations are0.The same structure applies to all machines,except for the number of registers(Reg i)and the number of lines(Line i).For every counter machine m∈CM,define an effective procedure m ∈E with the followingflat program:%Initialization:fill the values of the%predecessor function up to the value%of the inputif CurrentLine=Zeroif Next=Succ(In)CurrentLine:=Line_1elsePred(Succ(Next)):=NextNext:=Succ(Next)%Simulate the counter-machine program.%The values of a,b,c and d are as in%the CM-program lines.if CurrentLine=Line_1Reg_a:=Succ(Reg_a)%or Pred(Reg_a)Pred(Succ(Reg_a)):=Reg_aif Reg_b=ZeroCurrentLine:=celseCurrentLine:=dif CurrentLine=Line_2...%Always:Out:=Reg_085DiscussionIn[3],Gurevich proved that any algorithm satisfying his postulates can be represented by an Abstract State Machine.But an ASM is designed to be “abstract”,so is defined on top of an arbitrary structure that may contain non-effective functions.Hence,it may compute non-effective functions.We have adopted Gurevich’s postulates,but added an additional postulate(Axiom3.3) for effectivity:an algorithm’s initial state may contain onlyfinite data in addition to the domain representation.Different runs of the same procedure share the same initial data,except for the input;different procedures of the same model share a base structure.Here,we showed that Turing machines and counter machines are effective models.In[1],we prove theflip side,namely that Turing machines can sim-ulate all effective models.To cover hypercomputational models,one would need to relax the effectivity axiom or the bounded exploration requirement. References[1]Udi Boker and Nachum Dershowitz,A formalization of the Church-TuringThesis,in preparation.[2]N.G.Fruja and R.F.St¨a rk.The hidden computation steps of Turbo AbstractState Machines.In E.B¨o rger,A.Gargantini,and E.Riccobene,editors,Abstract State Machines—Advances in Theory and Applications,10th International Workshop,ASM2003,Taormina,Italy,pages244–262.Springer-Verlag,Lecture Notes in Computer Science2589,2003.[3]Yuri Gurevich.Sequential abstract state machines capture sequential algorithms.ACM Transactions on Computational Logic,1:77–111,2000.9。

Computer Methods in Applied Mechanics and Engineering

Computer Methods in Applied Mechanics and Engineering

Topological clustering for water distribution systems analysis Environmental Modelling & Software , In Press, Corrected Proof ,Available online 15 February 2011Lina Perelman, Avi OstfeldShow preview | Related articles | Related reference work articlesPurchase $ 19.95 716 HyphArea —Automated analysis of spatiotemporal fungal patterns Original Research ArticleJournal of Plant Physiology , Volume 168, Issue 1, 1January 2011, Pages 72-78Tobias Baum, Aura Navarro-Quezada, Wolfgang Knogge, Dimitar Douchkov, Patrick Schweizer, Udo Seiffert Close preview | Related articles | Related reference work articlesAbstract | Figures/Tables | ReferencesAbstractIn phytopathology quantitative measurements are rarely used to assess crop plant disease symptoms. Instead, a qualitative valuation by eye is often themethod of choice. In order to close the gap between subjective humaninspection and objective quantitative results, the development of an automated analysis system that is capable of recognizing and characterizing the growth patterns of fungal hyphae in micrograph images was developed. This system should enable the efficient screening of different host –pathogen combinations (e.g., barley —Blumeria graminis , barley —Rhynchosporium secalis ) usingdifferent microscopy technologies (e.g., bright field, fluorescence). An image segmentation algorithm was developed for gray-scale image data thatachieved good results with several microscope imaging protocols.Furthermore, adaptability towards different host –pathogen systems wasobtained by using a classification that is based on a genetic algorithm. Thedeveloped software system was named HyphArea , since the quantification of the area covered by a hyphal colony is the basic task and prerequisite for all further morphological and statistical analyses in this context. By means of atypical use case the utilization and basic properties of HyphArea could bePurchase $ 41.95demonstrated. It was possible to detect statistically significant differences between the growth of an R. secalis wild-type strain and a virulence mutant. Article Outline Introduction Material and methodsExperimental set-upThe host and pathogensFluorescence image acquisition protocolBright field image acquisition protocolSegmentation of hyphal coloniesAchieving adaptive behaviorImage corpusResultsR. secalis infected material and segmentationB. graminis infected material and segmentationDiscussionAcknowledgementsReferences717 Automated generation of contrapuntal musical compositions using probabilistic logic inDerive Original Research ArticleMathematics and Computers in Simulation , Volume 80, Issue 6, February 2010, Pages 1200-1211Gabriel Aguilera, José Luis Galán, Rafael Madrid, Antonio Manuel Martínez, Yolanda Padilla, Pedro Rodríguez Close preview | Related articles | Related reference work articles Abstract | Figures/Tables | ReferencesAbstractIn this work, we present a new application developed in Derive 6 to compose Purchase$ 31.50counterpoint for a given melody (―cantus firmus‖). The result isnon-deterministic, so different counterpoints can be generated for a fixed melody, all of them obeying classical rules of counterpoint. In the case where the counterpoint cannot be generated in a first step, backtracking techniques have been implemented in order to improve the likelihood of obtaining a result. The contrapuntal rules are specified in Derive using probabilistic rules of a probabilistic logic, and the result can be generated for both voices (above and below) of first species counterpoint.The main goal of this work is not to obtain a ―professional‖ counterpoint generator but to show an application of a probabilistic logic using a CAS tool. Thus, the algorithm developed does not take into account stylistic melodic characteristics of species counterpoint, but rather focuses on the harmonic aspect.The work developed can be summarized in the following steps:(1) Development of a probabilistic algorithm in order to obtain anon-deterministic counterpoint for a given melody.(2) Implementation of the algorithm in Derive 6 using probabilistic Logic.(3) Implementation in Java of a program to deal with the input (―cantus firmus‖) and with the output (counterpoint) through inter-communication with the module developed in D ERIVE. This program also allows users to listen to the result obtained.Article Outline1. Introduction1.1. Historical background1.2. ―Cantus Firmus‖ and counterpoint1.3. Work developed1.4. D ERIVE 6 and J AVA2. Description of the algorithm2.1. The process2.2. Rules2.3.Example2.4. Backtracking3. Description of the environment3.1. Menu bar3.2. Real time modifications3.3. Inter-communication with DERIVE3.4. Playing the composition4. Results4.1. Example 1: Above voice against ―Cantus firmus‖4.2. Example 2: Below voice against ―Cantus firmus‖4.3. Example 3: Above and below voices against ―Cantus firmus‖5. Conclusions and future workReferences718 Study of pharmaceutical samples by NIR chemical-image and multivariate analysis OriginalResearch ArticleTrAC Trends in Analytical Chemistry , Volume 27, Issue 8, September 2008, Pages 696-713José Manuel Amigo, Jordi Cruz, Manel Bautista, Santiago Maspoch, Jordi Coello, Marcelo BlancoClose preview | Related articles | Related reference work articles Abstract | Figures/Tables | ReferencesAbstractNear-infrared spectroscopy chemical imaging (NIR-CI) is a powerful tool for Purchase$ 31.50providing a great deal of information on pharmaceutical samples, since the NIR spectrum can be measured for each pixel of the image over a wide range of wavelengths.Joining NIR-CI with chemometric algorithms (e.g., Principal Component Analysis, PCA) and using correlation coefficients, cluster analysis, classical least-square regression (CLS) and multivariate curve resolution-alternating least squares (MCR-ALS) are of increasing interest, due to the great amount of information that can be extracted from one image. Despite this, investigation of their potential usefulness must be done to establish their benefits and potential limitations.We explored the possibilities of different algorithms in the global study (qualitative and quantitative information) of homogeneity in pharmaceutical samples that may confirm different stages in a blending process. For this purpose, we studied four examples, involving four binary mixtures in different concentrations.In this way, we studied the benefits and the drawbacks of PCA, cluster analysis (K-means and Fuzzy C-means clustering) and correlation coefficients for qualitative purposes and CLS and MCR-ALS for quantitative purposes.We present new possibilities in cluster analysis and MCR-ALS in image analysis, and we introduce and test new BACRA software for mapping correlation-coefficient surfaces.Article Outline1. Introduction2. Structure of hyperspectral data3. Preprocessing the hyperspectral image4. Techniques for exploratory analysis4.1. Principal Component Analysis (PCA)4.2. Cluster analysis4.2.1. K-means algorithm4.2.2. Fuzzy C-means algorithm4.2.3. Number of clusters4.2.3.1. Silhouette index4.2.3.2. Partition Entropy index4.3. Similarity using correlation coefficients5. Techniques for estimating analyte concentration in each pixel 5.1. Classical Least Squares5.2. Multivariate Curve Resolution-Alternating Least Squares5.3. Augmented MCR-ALS for homogeneous samples6. Experimental and data treatment6.1. Reagents and instruments6.2. Experimental6.3. Data treatment7. Results and discussion7.1. PCA analysis7.2. Cluster analysis7.2.1. K-means results for heterogeneous samples7.2.2. FCM results7.3. Correlation-coefficient maps–BACRA results7.4. CLS results7.5. MCR-ALS and augmented-MCR-ALS results8. Conclusions and perspectivesAcknowledgementsReferences719Validation and automatic test generation on UMLmodels: the AGATHA approach Original Research ArticleElectronic Notes in Theoretical Computer Science, Volume66, Issue 2, December 2002, Pages 33-49David Lugato, Céline Bigot, Yannick ValotClose preview | Related articles | Related reference work articlesAbstract | ReferencesAbstractThe related economic goals of test generation are quite important for softwareindustry. Manufacturers ever seeking to increase their productivity need toavoid malfunctions at the time of system specification: the later the defaultsare detected, the greater the cost is. Consequently, the development oftechniques and tools able to efficiently support engineers who are in charge of elaborating the specification constitutes a major challenge whose falloutconcerns not only sectors of critical applications but also all those where poor conception could be extremely harmful to the brand image of a product.This article describes the design and implementation of a set of tools allowingsoftware developers to validate UML (the Unified Modeling Language)specifications. This toolset belongs to the AGATHA environment, which is anautomated test generator, developed at CEA/LIST.The AGATHA toolset is designed to validate specifications of communicatingconcurrent units described using an EIOLTS formalism (Extended InputOutput Labeled Transition System). The goal of the work described in thispaper is to provide an interface between UML and an EIOLTS formalism givingthe possibility to use AGATHA on UML specifications.In this paper we describe first the translation of UML models into the EIOLTS formalism, and the translation of the results of the behavior analysis, providedby AGATHA, back into UML. Then we present the AGATHA toolset; wePurchase$ 35.95particularly focus on how AGATHA overcomes several problems of combinatorial explosion. We expose the concept of symbolic calculus and detection of redundant paths, which are the main principles of AGATHA's kernel. This kernel properly computes all the symbolic behaviors of a systemspecified in EIOLTS and automatically generates tests by way of constraintsolving. Eventually we apply our method to an example and explain thedifferent results that are computed.720Text mining techniques for patent analysis OriginalResearch ArticleInformation Processing & Management,Volume 43, Issue5, September 2007, Pages 1216-1247Yuen-Hsien Tseng, Chi-Jen Lin, Yu-I LinClose preview | Related articles | Related reference work articlesAbstract | Figures/Tables | ReferencesAbstractPatent documents contain important research results. However, they arelengthy and rich in technical terminology such that it takes a lot of humanefforts for analyses. Automatic tools for assisting patent engineers or decisionmakers in patent analysis are in great demand. This paper describes a seriesof text mining techniques that conforms to the analytical process used bypatent analysts. These techniques include text segmentation, summaryextraction, feature selection, term association, cluster generation, topicidentification, and information mapping. The issues of efficiency andeffectiveness are considered in the design of these techniques. Someimportant features of the proposed methodology include a rigorous approachto verify the usefulness of segment extracts as the document surrogates, acorpus- and dictionary-free algorithm for keyphrase extraction, an efficientco-word analysis method that can be applied to large volume of patents, andan automatic procedure to create generic cluster titles for ease of resultPurchase$ 31.50interpretation. Evaluation of these techniques was conducted. The results confirm that the machine-generated summaries do preserve more important content words than some other sections for classification. To demonstrate the feasibility, the proposed methodology was applied to a real-world patent set for domain analysis and mapping, which shows that our approach is more effective than existing classification systems. The attempt in this paper to automate the whole process not only helps create final patent maps for topic analyses, but also facilitates or improves other patent analysis tasks such as patent classification, organization, knowledge sharing, and prior art searches. Article Outline1. Introduction2. A general methodology3. Technique details3.1. Text segmentation3.2. Text summarization3.3. Stopwords and stemming3.4. Keyword and phrase extraction3.5. Term association3.6. Topic clustering3.6.1. Document clustering3.6.2. Term clustering followed by document categorization3.6.3. Multi-stage clustering3.6.4. Cluster title generation3.6.5. Mapping cluster titles to categories3.7. Topic mapping4. Technique evaluation4.1. Text segmentation4.2. Text summarization 4.2.1. Feature selection4.2.2. Experiment results4.2.3. Findings4.3. Key term extraction and association4.4. Cluster title generation4.5. Mapping cluster titles to categories5. Application example5.1. The NSC patent set5.2. Text mining processing5.3. Topic mapping5.4. Topic analysis comparison6. Discussions7. Conclusions and future workAcknowledgementsAppendix A. AppendixAppendix B. AppendixReferences721Incremental bipartite drawing problem OriginalResearch ArticleComputers & Operations Research, Volume 28, Issue 13,November 2001, Pages 1287-1298Rafael Martí, Vicente EstruchClose preview | Related articles | Related reference work articlesAbstract | Figures/Tables | ReferencesAbstractLayout strategies that strive to preserve perspective from earlier drawings arecalled incremental. In this paper we study the incremental arc crossingminimization problem for bipartite graphs. We develop a greedy randomizedPurchase$ 31.50adaptive search procedure (GRASP) for this problem. We have also developed a branch-and-bound algorithm in order to compute the relative gap to the optimal solution of the GRASP approach. Computational experiments are performed with 450 graph instances to first study the effect of changes in grasp search parameters and then to test the efficiency of the proposed procedure.Scope and purposeMany information systems require graphs to be drawn so that these systems are easy to interpret and understand. Graphs are commonly used as a basic modeling tool in areas such as project management, production scheduling, line balancing, business process reengineering, and software visualization. Graph drawing addresses the problem of constructing geometric representations of graphs. Although the perception of how good a graph is in conveying information is fairly subjective, the goal of limiting the number of arc crossings is a well-admitted criterion for a good drawing. Incremental graph drawing constructions are motivated by the need to support the interactive updates performed by the user. In this situation, it is helpful to preserve a―mental picture‖ of the layout of a graph over successive drawings. It would not be very intuitive or effective for a user to have a drawing tool in which after a slight modification of the current graph, the resulting drawing appears very different from the previous one. Therefore, generating incrementally stable layouts is important in a variety of settings. Since ―real-world‖ graphs tend to be large, an automated procedure to deal with the arc crossing minimization problem in the context of incremental strategies is desirable. In this article, we develop a procedure to minimize arc crossings that is fast and capable of dealing with large graphs, restricting our attention to bipartite graphs.Article Outline1. Introduction2. Branch-and-bound approach3. GRASP approach3.1. Construction phase3.2. Improvement phase4. Computational experiments5. ConclusionsReferencesVitae722 Applications of vibrational spectroscopy to the analysis of novel coatings Original Research ArticleProgress in Organic Coatings , Volume 41, Issue 4, May2001, Pages 254-260A. J. Vreugdenhil, M. S. Donley, N. T. Grebasch, R. J.Passinault Close preview | Related articles |Related reference work articles Abstract | Figures/Tables | ReferencesAbstractPrecise analysis is essential to the development of novel coating technologiesand to the systematic modification of metal surfaces. Ideally, these analysistechniques will provide molecular information and will be sensitive to changesin the chemical environment of interfacial species. Many of the techniquesavailable in the field of vibrational spectroscopy demonstrate some of thesecharacteristics. Examples of the ways in which FT-IR spectroscopy are appliedto the investigation of coatings at AFRL will be described.Attenuated total reflectance (ATR), specular reflectance and photoacousticspectroscopy (PAS) are important techniques for the analysis of surfaces.Purchase $ 31.50Both ATR and PAS can be used to provide depth-profiling information crucialfor the study of coatings. Recent developments in digital signal processing(DSP) and step scan interferometry which have dramatically improved the reliability and ease of use of PAS for depth analysis will be discussed. Article Outline 1. Introduction1.1. ATR theory1.2. PAS theory1.3. IR microscopy2. Experimental2.1. Instrumentation2.2. Samples3. Results and discussion3.1. IR microscopy3.2. Photoacoustic spectroscopy3.3. Variable angle ATR4. ConclusionsReferences 723 Affective disorders in children and adolescents: addressing unmet need in primary caresettings Review ArticleBiological Psychiatry , Volume 49, Issue 12, 15 June 2001, Pages 1111-1120Kenneth B. Wells, Sheryl H. Kataoka, Joan R. Asarnow Close preview | Related articles | Related reference work articlesAbstract | ReferencesAbstractAffective disorders are common among children and adolescents but mayPurchase$ 31.50often remain untreated. Primary care providers could help fill this gap because most children have primary care. Yet rates of detection and treatment for mental disorders generally are low in general health settings, owing to multiple child and family, clinician, practice, and healthcare system factors. Potential solutions may involve 1) more systematic implementation of programs that offer coverage for uninsured children; 2) tougher parity laws that offer equity in defined benefits and application of managed care strategies across physical and mental disorders; and 3) widespread implementation of quality improvement programs within primary care settings that enhancespecialty/primary care collaboration, support use of care managers to coordinate care, and provide clinician training in clinically and developmentally appropriate principles of care for affective disorders. Research is needed to support development of these solutions and evaluation of their impacts. Article Outline• Introduction• Impact and appropriate treatment of affective disorders in youths• Unmet need for child mental health care• Primary care and treatment of child mental disorders• Barriers to detection and appropriate treatment in primary care• Child characteristics• Parental characteristics• Clinician and clinician–patient relationship factors• Finding policy solutions to unmet need: coverage for the uninsured and par ity for the insured• Finding practice-based solutions: implementing quality improvement programs• Discussion• Acknowledgements • References724 Orthogonal drawings of graphs with vertex and edgelabels Original Research ArticleComputational Geometry, Volume 32, Issue 2, October2005, Pages 71-114Carla Binucci, Walter Didimo, Giuseppe Liotta, MaddalenaNonatoClose preview | Related articles | Related reference work articlesAbstract | ReferencesAbstractThis paper studies the problem of computing orthogonal drawings of graphswith labels on vertices and edges. Our research is mainly motivated bySoftware Engineering and Information Systems domains, where tools like UMLdiagrams and ER-diagrams are considered fundamental for the design ofsophisticated systems and/or complex data bases collecting enormousamount of information. A label is modeled as a rectangle of prescribed widthand height and it can be associated with either a vertex or an edge. Ourdrawing algorithms guarantee no overlaps between labels, vertices, and edgesand take advantage of the information about the set of labels to compute thegeometry of the drawing. Several additional optimization goals are taken intoaccount. Namely, the labeled drawing can be required to have either minimumtotal edge length, or minimum width, or minimum height, or minimum areaamong those preserving a given orthogonal representation. All these goalslead to NP-hard problems. We present MILP models to compute optimaldrawings with respect to the first three goals and an exact algorithm that isbased on these models to compute a labeled drawing of minimum area. Wealso present several heuristics for computing compact labeled orthogonaldrawings and experimentally validate their performances, comparing theirsolutions against the optimum.Purchase$ 31.50725 Finite element response sensitivity analysis of multi-yield-surface J 2 plasticitymodel by direct differentiation method Original Research Article Computer Methods in Applied Mechanics and Engineering ,Volume 198, Issues 30-32, 1 June2009, Pages 2272-2285Quan Gu, Joel P. Conte, AhmedElgamal, Zhaohui YangClose preview | Related articles | Relatedreference work articles Abstract | Figures/Tables | ReferencesAbstractFinite element (FE) response sensitivity analysis is an essential tool for gradient-basedoptimization methods used in varioussub-fields of civil engineering such asstructural optimization, reliability analysis,system identification, and finite element modelupdating. Furthermore, stand-alone sensitivityanalysis is invaluable for gaining insight intothe effects and relative importance of varioussystem and loading parameters on systemresponse. The direct differentiation method(DDM) is a general, accurate and efficientmethod to compute FE response sensitivitiesto FE model parameters. In this paper, theDDM-based response sensitivity analysismethodology is applied to a pressureindependent multi-yield-surface J 2 plasticityPurchase$ 35.95material model, which has been used extensively to simulate the nonlinear undrained shear behavior of cohesive soils subjected to static and dynamic loading conditions. The complete derivation of the DDM-based response sensitivity algorithm is presented. This algorithm is implemented in a general-purpose nonlinear finite element analysis program. The work presented in this paper extends significantly the framework of DDM-based response sensitivity analysis, since it enables numerous applications involving the use of the multi-yield-surface J2 plasticity material model. The new algorithm and its software implementation are validated through two application examples, in which DDM-based response sensitivities are compared with their counterparts obtained using forward finite difference (FFD) analysis. The normalized response sensitivity analysis results are then used to measure the relative importance of the soil constitutive parameters on the system response.Article Outline1. Introduction2. Constitutive formulation ofmulti-yield-surface J2 plasticity model andnumerical integration2.1. Multi-yield surfaces2.2. Flow rule2.3. Hardening law2.3.1. Active yield surface update2.3.2. Inner yield surface update3. Derivation of response sensitivity algorithm for multi-yield-surface J2 plasticity model3.1. Introduction3.2. Displacement-based FE response sensitivity analysis using DDM3.3. Stress sensitivity for multi-yield-surface J2 material plasticity model3.3.1. Parameter sensitivity of initial configuration of multi-yield-surface plasticity model3.3.2. Stress response sensitivity3.3.3. Sensitivity of hardening parameters of active and inner yield surfaces4. Application examples4.1. Three-dimensional block of clay subjected to quasi-static cyclic loading4.2. Multi-layered soil column subjected to earthquake base excitation5. ConclusionsAcknowledgementsReferencesresults 701 - 725991 articles found for: pub-date > 1999 and tak((Communications or R&D or Engineer or DSP or software or "value-added" or services) and (communication or carriers or (Communication modem) or digital or signal or processing or algorithms or product or development or TI or Freescale) and (multicore or DSP or Development or Tools or Balanced or signal or processing or modem or codec) and (DSP or algorithm or principle or WiMAX or LTE or Matlab or Simulink) and (C or C++ or DSP or FPGA or MAC or layer or protocol or design or simulation or implementation or experience or OPNET or MATLAB or modeling) and (research or field or deal or directly or engaged or Engineers or Graduates))Edit this search | Save this search | Save as search alert | RSS Feed。

计算机发展通史第一讲计算机编年简史

计算机发展通史第一讲计算机编年简史
本文由wonder791123贡献
ppt文档可能在WAP端浏览体验不佳。建议您优先选择T首三百八十年 第一讲:回首三百八十年——计算机编 计算机编 年简史-电子管时代 年简史 电子管时代 主讲人:王锋 江西中医学院
人物介绍
冯.诺依曼 图灵 ada
1
1946 年:2月14日,美国宾西法尼亚大学 摩尔学院教授莫契利(J. Mauchiy)和埃克 特(J.Eckert)共同研制成功了ENIAC (Electronic Numerical Integrator And Computer):计算机。这台计算机总共安装 了17468只电子管,7200个二极管,70000 多电阻器,10000多只电容器和6000只继 电器,电路的焊接点多达50万个,机器被安 装在一排2.75米高的金属柜里,占地面积为 170平方米左右,总重量达到30吨,其运算 速度达到每秒钟5000次加法,可以在 3/1000秒时间内做完两个10位数乘法。
1936 年:阿兰.图灵发表论文《论可计算数 及其在判定问题中的应用》,首次阐明了 现代电脑原理,从理论上证明了现代通用 计算机存在的可能性,图灵把人在计算时 所做的工作分解成简单的动作,与人的计 算类似,机器需要:(1)存储器,用于贮 存计算结果;(2)一种语言,表示运算和 数字;(3)扫描;(4)计算意向,即在计算 过程中下一步打算做什么;(5)执行下一 步计算。具体到一步计算,则分成:(1) 改变数字可符号;(2)扫描区改变,如往 左进位和往右添位等;(3)改变计算意向 等。整个计算过程采用了二进位制,这就 是后来人们所称的“图灵机”。
1941年:楚泽完成了Z3计算机的研制工作, 这是第一台可编程的电子计算机。可处理7 位指数、14位小数。使用了大量的真空管。 每秒种能作3到4次加法运算,一次乘法需 要3到5秒。 1942 年:时任美国依阿华州立大学数学物 理教授的阿塔纳索夫(John V. Atanasoff) 与研究生贝瑞(Clifford Berry)组装了著名 的ABC(Atanasoff-Berry Computer)计算 机,共使用了300多个电子管,这也是世界 上第一台具有现代计算机雏形的计算机。 但是由于美国政府正式参加第二次世界大 战,致使该计算机并没有真正投入运行。

智能计算1

智能计算1
32
图灵试验
上述两种对话的区别在于,第一种可明显地感到 回答者是从知识库里提取简单的答案,第二种则具有 分析综合的能力,回答者知道观察者在反复提出同样 的问题。“图灵试验”没有规定问题的范围和提问的 标准,如果想要制造出能通过试验的机器,以我们现 在的技术水平,必须在电脑中储存人类所有可以想到 的问题,储存对这些问题的所有合乎常理的回答,并 且还需要理智地作出选择。
3
计算与电子计算机
二、第一台电子计算机(ENIAC:Electronic Numerical Integrator and Computer)
① 1946年,在美国宾夕法尼亚大学莫尔学院产生; ② 重量30吨,占地170平方米,功率140千瓦; ③ 电子管18000多个,继电器1500多个; ④ 采用10进制,机器字长10位,运算最快速度5000次/秒; ⑤ 工作方式:通过插件式“外接”线路实现的,尚未采用“程序存储”
11
冯·诺依曼
1928年,美国数学泰斗、普林斯顿高级研究院 维伯伦教授(O.Veblen)广罗天下之英才,一封烫 金的大红聘书,寄给了柏林大学这位无薪讲师,请他 去美国讲授“量子力学理论课”。冯·诺依曼预料到 未来科学的发展中心即将西移,欣然同意赴美国任教。 1930年,27岁的冯·诺依曼被提升为教授;1933年, 他又与爱因斯坦一起,被聘为普林斯顿高等研究院第 一批终身教授,而且是6名大师中最年轻的一名。
20
Turing图灵
1937年,伦敦权威的数学杂志又收到图灵一篇论文 《论可计算数及其在判定问题中的应用》,作为阐明 现代计算机原理的开山之作,被永远载入了计算机的 发展史册。
这篇论文原本是为了解决一个基础性的数学问题:是 否只要给人以足够的时间演算,数学函数都能够通过 有限次运算求得解答?传统数学家当然只会想到用公 式推导证明它是否成立,可是图灵独辟蹊径地想出了 一台冥冥之中的机器。

电子信息工程论文(英文)

电子信息工程论文(英文)

Electronic and information engineering is the application of the computer and modem technology for electronic information control and information processing the discipline, the main research information acquisition and processing, electronic equipment and information system design, development, application and integration. Now, electronic and information engineering has covered many aspects of the society, like telephone exchange station how to deal with various phone signal, a mobile phone is how to transfer our voice even image, the network around us how to transfer data, and even of the army of the information age how to confidential information transmission, are involved in electronic and information engineering application technology. We can through some basic knowledge learning know these things, and able to apply more advanced technology in new product research and electronic and information engineering is professional This program is to cultivate master the modern electronic technology theory, familiar with electronic system design principle and design method, have stronger computer, foreign language and corresponding engineering technology application ability, facing the electronic technology, automatic control and intelligentcontrol, computer and network technology, electronic, information, communication field of broad caliber, the high quality, comprehensive development of integrated with innovation ability engineering technology talent development.Electronic information engineering major is learning the basic circuit of knowledge, and master the computer processing with the method of information. The first to have solid mathematical knowledge, for physics requirement is high, and mainly electrical; To learn many circuit knowledge, electronic technology, signal and system, computer control principle, communication principle, basic courses. Learning in electronic and information engineering design, to themselves have to connect with computer some circuit experiment, to operate and use tools requirements is also higher. Such as their connection sensor circuit, with computer set small communications system, will also visit some big company of electronic and information processing equipment, understanding mobile phone signal, cable TV is how to transmission, etc, and can organic ?Course classification:1. The mathematicsThe higher mathematics-(the department of mathematics mathematical analysis + space analytic geometry + ordinary differential equation) speak mainly is calculus, to learn thecircuit of the people, the calculus (a yuan, multiple), curve surface integral, series, ordinary differential equation, Fourier transform, the other the Laplace transformation in the subsequent frequently encountered in theory.Probability and statistics-all communication, signal processing with relevant course with probability theory.Mathematical physical methods-some school graduate student intellect, some schools into complex variable functions (+ integral transform) and mathematical physics equation (is partial differential equations). Study the mathematical basis of electromagnetic field, microwave.May also be introduced stochastic process (need to probability basis) and functional analysis.2. TheoryThe circuit principle-basic of the program.Signal and system, continuous and discrete signal time domain, frequency domain analysis, is very important but also is difficultDigital signal processing-discrete signal and system analysis, signal digital transformation, digital filters, and so on.The application of information theory, information theoryrange is very wide, but electronic engineering often put this course speak into coding theory.Electromagnetic field and wave-the day the course, basically is the counterpart of the dynamics in the physics department of the electricity, using mathematical to study the magnetic field (constant electromagnetic field, time-dependent electromagnetic fields).3. CircuitAnalog circuit-the transistor, the op-amp, power supply, A/D and D/A.Digital circuit--a gate, trigger and combination circuit, timing circuit, programmable devices, digital electronic system4. ComputerMicrocomputer principle-80 x86 hardware work principle.Assembly language, direct correspondence of the CPU commands programming language.Single chip microcomputer CPU and control circuit, made a piece of integrated circuit, all sorts of electric equipment of all necessary, normal explanation 51 series.Cc++ language-(now speak only c language schools may not much) writing system programming language, and the development of hardware related often are used.Software foundation-(computer specialized data structure + + + algorithm operating system database principles + compilation approach + software engineering) can also be a few course, speaks the principle of software and how to write software.Professional training requirements:This major is an electronic and information engineering major. Students of this specialty mainly studies the signal acquisition and processing, the power plant equipment information system of professional knowledge, by electronic and information engineering practice of basic training, with design, development, application and integrated electronic equipment and the ability of the information system.Professional training requirements:This major is an electronic and information engineering major. Students of this specialty mainly studies the signal acquisition and processing, the power plant equipment information system of professional knowledge, by electronic and information engineering practice of basic training, with design, development, application and integrated electronic equipment and the ability of the information system.The graduates should have the following several aspects of knowledge and ability:1. Can a system to manage the field wide technology basic theoretical knowledge, to adapt to the electronic and information engineering extensive work range2. Grasp the electronic circuit of the basic theory and experiment technology, analysis and design of electronic equipment basic ability3. To grasp the information acquisition, processing the basic theory and application of the general method, has the design, integration, application and computer simulation of information system of the basic skills.4. Understand the basic principles of information industry, policies and regulations, understand the basic knowledge of the enterprise management5. Understand electronic equipment and information system of theoretical frontiers, with research, development of new system, the new technology preliminary ability6. Master of literature retrieval, material inquires basic ?The future:Electronic information engineering major is learning the basic circuit of knowledge, and master the computer processing with the method of information. The first to have solid mathematical knowledge, for physics requirement is high, andmainly electrical; To learn many circuit knowledge, electronic technology, signal and system, computer control principle, communication principle, basic courses. Learning in electronic and information engineering design, to themselves have to connect with computer some circuit experiment, to operate and use tools requirements is also higher. Such as their connection sensor circuit, with computer set small communications system, will also visit some big company of electronic and information processing equipment, understanding mobile phone signal, cable TV is the ? how to transferAlong with the social informatization of thorough, the most industries need electronic and information engineering professionals, and a high salary. Students can be engaged in electronic equipment and information system design, application development and technical management, etc. For example, make electronic engineers, design develop some electronics, communication device; Do software engineer, hardware design, development and all kinds of relevant software; Do project executive, planning some big system, the experience, knowledge requires high; Still can continue to study to become a teacher, engaged in scientific research work, etc.China IT industry started so far have ten years, very young.Fresh things, chaoyang industry is always much attention. It is for this reason, the computer professional quickly become the university of popular major, many schoolmates sharpening again sharpened head to the ivory tower of ivory top drill, or for interest, or to make a living master a foreign skills, or for future better and faster development.The first few years of the computer professional than hot, in recent years professional to this choice in the gradually rational and objective. Students and parents consider is more of a more advantageous to the personal self based on long-term development of the starting point.In this industry, seems to have the potential law: a short career. So the body not old heart first, thought the "hope the way how to turn what should IT management, sales, or under IT the bodies from beginning to the past business, or simply turned... ., exactly what to do, still wandering in the, in the confusion, the code of a few years ago life seems to be erased it shall not plan, leaving only the deserted what some memories.Too much about the industry's bad, many, many elder's kind advice, in computer professional students in the heart of the buried the uneasy seeds, whether should continue to choose the bank, or career path should be explicit turn? Choose this line,is likely to mean that the choice of physical and mental suffering course, accept the industry of experience.Exit? Is the heart has unwilling, think about for several years hard work, they write in pencil full program writing paper, the class was, when working with the, less romantic hold lots of time, for the future is more a self-confidence to submitting a professional, the profound professional resume. Who would like to be the last into the heart to the east of the water flow.Any one industry all have their own bright and gloomy, just people don't understand. For just the us towards campus, has entered the society for seniors learn elder sister, for different positions of each elder, life is always difficult, brilliant casting is progressive, we can not only see industry bright beautiful beautiful appearance, and neglect of its growth lift behind the difficult, the gap between the two extremes of course huge, from such a perspective, apparently went against the objective. And for his future career build is the same, it's early form, its make, its cast, it's affluent, and it's thick, is a brick step by step a tired build by laying bricks or stones.Exactly do a "starter, don't want to entry-level, want to introduction and no entry-level" IT people, the answer at ease in each one.Can say electronic and information engineering is a promising discipline, is not optional despise any a subject. To do a line, loves a line, since choosing it, will it never do things by halves.on Electronic and information engineering is the application of the computer and modem technology for electronic information control and information processing the discipline, the main research information acquisition and processing, electronic equipment and information system design, development, application and integration. Now, electronic and information engineering has covered many aspects of the society, like telephone exchange station how to deal with various phone signal, a mobile phone is how to transfer our voice even image, the network around us how to transfer data, and even of the army of the informatiage how to confidential information transmission, are involved in electronic and information engineering application technology. We can through some basic knowledge learning know these things, and able to apply more advanced technology to research and development of new products.Electronic information engineering major is learning the basic circuit of knowledge, and master the computer processing with the method of information. The first to have solidmathematical knowledge, for physics requirement is high, and mainly electrical; To learn many circuit knowledge, electronic technology, signal and system, computer control principle, communication principle, basic courses. Learning in electronic and information engineering design, to themselves have to connect with computer some circuit experiment, to operate and use tools requirements is also higher. Such as their connection sensor circuit, with computer set small communications system, will also visit some big company of electronic and information processing equipment, understanding mobile phone signal, cable TV is how to transmission, etc, and can organic ?。

从头算法

从头算法
Spectroscopy, from NMR to X-ray. Reaction mechanisms in chemistry and biochemistry.
Intermolecular interactions giving potentials which may be used to study macromolecules, solvent effects, crystal packing, etc.
Thermochemistry, kinetics, transport, materials properties, VLE, solutions
一、MD法原理
• 将微观粒子视为经典粒子,服从
定律 或
Newton 第二
Fi=-▽iU
• 若各粒子的瞬时受力已知,可用数值积分求出
运动的经典轨迹
二、MD法基本假定
用MD中的“模板强制法” ( Template-forcing ) 确定一对柔性分 子相应功能团可能的空间取向
模 板 加模板
起始取向为线型的分子逐步转化为能量较低的环型构象
例3
MD预测的顺磁性和反磁性冰晶体结构
O.A Karim & A.D.J. Haymet, J.Chem. Phys., 89, 6889(1988))
• 经典模型的局限 — 未涉及化学行为的物理本质
化合物的性质 电子结构
化学反应 核与电子运动状态的变化
• 伴随有电子跃迁、转移、变价的过程,经典的
分子模拟不能处理
一、量子力学第一原理 — 多体Shrö dinger方程
物理模型:
1
C
i 2
rij
• 分子中电子和原子
核均在运动中

计算机科学说明文英语作文

计算机科学说明文英语作文

计算机科学说明文英语作文英文回答:Computer science is the study of computation and information. It encompasses a wide range of topics, from theoretical foundations to practical applications. Computer scientists develop the algorithms, software, and hardware that power our modern world.There are two main branches of computer science: theoretical computer science and practical computer science. Theoretical computer science explores the fundamental principles of computation, such as the limits ofcomputation and the complexity of algorithms. Practical computer science focuses on the design and implementationof computer systems, including software engineering, operating systems, and computer architecture.Computer science is a rapidly evolving field, with new technologies and applications emerging all the time. Someof the most exciting recent advances in computer science include the development of artificial intelligence, machine learning, and quantum computing. These technologies have the potential to revolutionize many aspects of our lives, from the way we work and communicate to the way we learn and understand the world around us.Computer science is a challenging but rewarding field. It requires a strong foundation in mathematics and logical reasoning, as well as a passion for technology and problem-solving. Computer scientists are in high demand in today's job market, and they can work in a variety of industries, including technology, healthcare, finance, and government.If you are interested in a career in computer science, there are many resources available to help you get started. You can take classes at your local school or university, or you can find online courses and tutorials. There are also many clubs and organizations that support students and professionals in computer science.中文回答:计算机科学是计算和信息的研究。

WSN中SPIN路由协议的改进

WSN中SPIN路由协议的改进

2012 年 3 月 5 日
3 SPIN 协议的改进
在报警监控、医疗[8]等应用中,数据必须第一时间到达 Sink 节点以便做实时分析。离 Sink 节点越远,跳数相对就越 大,在 SPIN-MM 协议中增加了最小跳数发现阶段,用于计 算每个节点到达 Sink 节点的最小跳数,在协商阶段,通过节 点之间最小跳数大小的比较决定是否要发送 DATA,接收数 据节点选择阶段用于选择下一跳节点,备份恢复阶段用于处 理出现数据不可达的情况。
间,此参数用来测试改进的路由选择协议是否有效,是实时 应用中路由协议的一个重要指标。
(3)平均消耗能量,是指不同数目的传感器节点在工作中 所消耗的平均能量,此参数是衡量路由协议高效的一个重要 参数。
88
计算机工程
4.3 结果分析 图 3 给出了 3 种协议的 ADV 包总数对比。实验中,有
30 个传感器节点分布在传感区域内,当有一个事件发生时, 由于限制了数据的转发方向,源数据都向 Sink 节点方向传 送,因此在 SPIN-MM 协议中不同跳段的 ADV 包都有不同程 度的减少,与 SPIN 协议、SPIN-M 协议相比,SPIN-MM 协 议只需更少的 ADV 包就可以将 DATA 转发到 Sink 节点。
针对 SPIN-M 存在的问题,本文提出一种改进的协议 SPIN-MM。在该协议中,只选择一条到达 Sink 节点的路由, 其余符合条件的节点仍可以保存下来充当数据不可达时的备 份节点。
2 SPIN 协议
SPIN 协议[7]使用节点间的协商机制和资源自适应机制, 使节点只广播其他节点所没有的数据,从而减少冗余数据, 降低能量消耗,并解决了传统协议存在的“内爆”、“重叠” 以及盲目使用资源等问题。SPIN 协议采用三次握手来实现数 据的交互,如图 1 所示,其中,灰色的节点代表当前状态有 数据交互。协议运行过程中使用了 3 种报文数据:ADV,REQ 和 DATA。ADV 用于数据的广播,当某一个节点有数据可以 共享时,可以用 ADV 数据包通知邻居节点,ADV 消息比实 际的数据长度短,如果相邻节点中没有对该信息感兴趣的节

供应链研究参考文献集锦(按首字母排序过的哦)

供应链研究参考文献集锦(按首字母排序过的哦)

[1] A.Amid,S.H.Ghodsypour, C.O.Brien. A weighted max-min model for fuzzymulti-objective supplier selection in a supply chain. International Journal of Production Economics(2010)[2] Ayman Tajeddine, Ayman Kayssi, Ali Chehab, Hassan Artail.Fuzzy reputation-basedtrust model.Applied Soft Computing 11 (2011) 345–355[3] Babak Khosravifar, Jamal Bentahar, Maziar Gomrokchi, and Rafiul Alam. CRM: AnEfficient Trust and Reputation Model for Agent Computing.Knowledge-Based Systems,2011(Accepted Manuscript).[4] Blaze M, Feigenbaum J ,Ioannidis J. The Role of Trust Management in DistributedSystems Security[C].Secure Internet Programming, Issues for Mobile and Distributed Objects, Berlin: Springer-Verlag,1999,185-210[5] ChengChang Lin,ChaoChen Hsieh. A Cooperative Coalitional Game in DuopolisticSupply-Chain Competition. Springer Science: Business Media,2011[6] Dao-Quan LI Dao-Quan LI Dao-Quan LI Trust Model based on Bayesian Networkin Ecommerce 2010 International Conference on E-Business and E-Government [7] Dan J. Kim, Donald L. Ferrin, H. Raghav Rao A trust-based consumerdecision-making model in electronic commerce: The role of trust, perceived risk, and their antecedents.Decision Support Systems 44 (2008) 544–564[8] Das,Kanchan. A quality integrated strategic level global supply chain model.International Journal of Production Research,2011,49(1):5-31[9] Felix T.S.Chan, T. Zhang. The impact of Collaborative Transportation Managementon supply chain performance: A simulation approach. Expert Systems with Applications 38(2011)2319-2329[10] Gabriela Corsanoa, Jorge M.Montagna. Mathematical modeling for simultaneousdesign of plants and supply chain in the batch process industry. Computers and Chemical Engineering 35(2011)149-164[11] Harald D.Stein,Romualdas Ginevičius. New Co-Opetition Approach For SupplyChain Applications And The Implementation A New Allocation Rule. 6th International Scientific Conference,2010,1092-1099[12] Huynh T D, Jennings N R. An Integrated Trust and Reputation Model for OpenMulti-Agent Systems[J].Autonomous Agents and Multi-Agent Systems,2006,13(2):119-154[13] Jeff Hoi Yan Yeung, Willen Selen, Min Zhang. The effects of trust and coercivepower on supplier integration. International Journal of Production Economics, 120(2009)66-79.[14] Jordi Sabater-Mir, Mario Paolucci. On representation and aggregation of socialevaluations in computational trust and reputation models. International Journal of Approximate Reasoning,46(2007)458-483[15] Jin Wang, Huai-Jiang Sun. A new evidential trust model for open communities.Computer Standards & Interfaces 31 (2009) 994–1001[16] Jijiao Jiang Jingwen Zhang. Organizational Trust and Supply Chain Performancein B2B E-Commerce: Evidence from an Emerging Logistics Market. Proceedings ofthe IEEE International Conference on Automation and Logistics Qingdao, China September 2008[17] Jengchung V. Chen, David C. Yen T.M. Rajkumar, Nathan A. Tomochko.Theantecedent factors on trust and commitment in supply chain relationships.Computer Standards & Interfaces 33 (2011) 262–270[18] Kamal K. Bharadwaj, Mohammad Yahya H. Al-Shamri.Fuzzy computational modelsfor trust and reputation systems.Electronic Commerce Research and Applications 8 (2009) 37–47[19] Lovekesh Vig,Julie A.Adams. Multi-Robot Coalition Formation. IEEE TRANSACTIONSON ROBOTICS,2006,22(4):637-649[20] LI Wen, Ping Lingdi, Wu Chunming Jiang Ming. Distributed Bayesian NetworkTrust Model in Virtual Network.2010 Second International Conference on Networks Security, Wireless Communications and Trusted Computing[21] Melaye D, Demazeau Y. Bayesian dynamic trust model. LNCS 3690. Berlin:Springer-Verlag, 2005. 480−489[22] Mogens Nielsen, Karl Krukow.A Bayesian Model for Event-based Trust.ElectronicNotes in Theoretical Computer Science 172 (2007) 499–521[23] Marco Carbone1 , Mogens Nielsen1 , Vladimiro Sassone. A Formal Model for Trustin Dynamic Networks. Proceedings of the First International Conference on Software Engineering and Formal Methods(SEFM'03) ,2003[24] PeiHua Fu,YanChu Liu. The Security Analysis of Supply Chain Based on ComplexNetwork. Advanced Materials Research,2011,1218-1222[25] R. Duane Ireland , Justin W. Webb. A multi-theoretic perspective on trust andpower in strategic supply chains.Journal of Operations Management 25 (2007) 482–497[26] Shaohan Cai, Minjoon Jun, Zhilin Yang.Implementing supply chain institutionalforces and trust.Journal of Operations Management 28 (2010) 257–268 [27] S.M.J. Mirzapour Al-e-hashem, H. Malekly and M.B. Aryanezhad. A multi-objectiverobust optimization model for multi-product multi-site aggregate production planning in A supply chain under uncertainty, International Journal of Production Economics(2011)[28] Stefan Spitz and York T¨uchelmann A Trust Model Considering the Aspects ofTime 2009 Second International Conference on Computer and Electrical Engineering[29] Stefan Spitz and York T¨uchelmann .A Trust Model Considering the Aspects ofTime 2009 Second International Conference on Computer and Electrical Engineering[30] Taewon Suh, Mark B. Houston Distinguishing supplier reputation from trust inbuyer–supplier relationships.Industrial Marketing Management 39 (2010) 744–751[31] Toshiya Kaihar Multi-agent based supply chain modelling with dynamicenvironment.International Journal of Production Economics 85 (2003) 263–269 [32] Tsung-Yi Chen, Yuh-Min Chen Advanced multi-phase trust evaluation model forcollaboration between coworkers in dynamic virtual project teams.Expert Systems with Applications 36 (2009) 11172–11185[33] V. B. KRENG AND CHIA-HUA CHANG Bayesian Network Based Multi agent System-Application in E-Marketplace[J] Computers and Mathematics with Applications 46 (2003) 429-444[34] Wang Yao,Vassileva Julita. Bayesian Network-Based Trust Model[C].WebIntelligence, 2003. WI 2003. Proceedings. IEEE/WIC International Conference on.pp.2~7.[35] Wang W, Zeng G. A reputation multi-agent system in semantic Web[C].Proceedingsof the Ninth(9th)Pacific Rim International Workshop on Multi-Agent,2006:211-219[36] Wang Lan, Zhang zheng ya. Study on the Trust Degree of Supply Chain.2008 IEEE[37] Xia Wang,Zhenxiang Zeng,Shilei Sun. Research on Decision Model of SoftwareOutsourcing Alliance Based on Game Theory.IEEE,2010,661-663[38] Xiaolong Xue, Xiaodong Li, Qiping Shen, Yaowu Wang.An agent-based frameworkfor supply chain coordination in construction.Automation in Construction 14 (2005) 413– 430[39] Youcef Begriche Houda Labiod A Bayesian Statistical Model for a Multi-pathTrust-based Reactive Ad hoc Routing Protocol (2009 IEEE)[40] Yao-Hua Tan, Walter Thoen.Formal aspects of a generic model of trust forelectronic commerce.Decision Support Systems 33 (2002) 233–246[41] Zhong Qin.Models of trust-sharing in Chinese private enterprises.EconomicModelling,2011(Accepted Manuscript)[42] Zhang Xiaoqin, Lesser Victor. Wagner Tom Integrative Negotiation Among AgentsSituated in Organizations[J].IEEE Transactions on Systems, Man, and Cybernetics: Pant C: Applications and Reviews,2006,36(1):19-30[43] 崔政东,刘晋.基于广义随机 Petri 网的供应链建模与分析.系统工程理论与实践,2005,12:18-24[44] 程柏良,曾国荪,揭安全.基于自组织演化的多Agent可信联盟研究.计算机研究与发展,2010,47(8):1382-1391[45] 成坚,冯仁剑,许小丰,万江文.基于D—S证据理论的无线传感器网络信任评估模型[J].传感技术学报,2009,22(12).[46] 胡山立,李少芳,石纯一.最坏情况具有限界的联盟结构生成.计算机研究与发展,2009,,46(8):1357-1363[47] 蒋建国,尹翔,夏娜,苏兆品.基于历史行为的agent联盟策略.电子学报,2007,35(8):1485-1489[48] 李小勇, 桂小林.大规模分布式环境下动态信任模型研究[J].软件学报,2007,18(6):1510−1521.[49] 苏射雄,胡山立,林超峰,郑盛福.基于局部最优的联盟结构生成算法.计算机研究与发展,2007,44(2):277-281[50] 王春喜,查建中,李建勇,鄂明成.面向网络制造的知识供应链建模.计算机集成制造系统,2000,6(6):7-10[51] 王冬冬,达庆利.基于模糊Petri网的供应链诊断建模分析.东南大学学报(自然科学版),2006,36(4):662-666[52] 武志峰,黄厚宽,赵翔.二进制编码差异演化算法在Agent联盟形成中的应用.计算机研究与发展,2008,45(5):848-852[53] 王静.供应链企业间信任及其违约风险的研究[D].长安大学博士学位论文,2009.[54] 许淑君,马士华.我国供应链企业间的信任危机分析[J].计算机集成制造系统,2002,8(1).[55] 叶斌,马忠贵,曾广平,涂序彦.多agent系统的联盟框架及形成机制.北京理工大学学报,2005,25(10):864-867[56] 严建援,李凯,张路.一种基于语法的供应链流程定义元模型.管理科学学报,2010,13(6):33-63[57] 袁禄来,曾国荪,王伟.基于Dempster-Shafer证据理论的信任评估模型[J]武汉大学学报,2006,52(5):627-630[58] 殷茗.动态供应链协作信任关键影响因素研究[D].西北工业大学博士学位论文,2006.[59] 杨静.供应链内企业间信任的产生机制及其对合作的影响——基于制造业企业的研究[D].浙江大学博士学位论文,2006.[60] 张新良,石纯一.多Agent联盟结构动态生成算法.软件学报,2007,18(3):574-581[61] 卓翔芝,王旭,代应.供应链联盟伙伴企业间的信任评估模型[J].计算机集成制造系统,2009,15(10).[62] 詹涛,周兴社,杨刚. 基于相似度的分布式信任模型[J].西北工业大学学报,2010(2).[15] 张晴,刘志学.基于多agent的供应链信息协调建模与仿真.计算机应用研究,2009,26(10):3709-3711[16] 隆清琦,林杰.基于多智能代理的供应链分布式仿真平台体系结构.计算机集成制造系统,2010,16(4):818-827[32] 李信鹏,赵文,刘殿兴,袁崇义,张世琨,王立福.基于RFID发现服务的一种供应链建模技术.电子学报,2010,38(2A):107-116[7] 崔政东,刘晋.基于广义随机 Petri 网的供应链建模与分析.系统工程理论与实践,2005,12:18-24[8] 王冬冬,达庆利.基于模糊Petri网的供应链诊断建模分析.东南大学学报(自然科学版),2006,36(4):662-666[9] 樊宏,刘晋.基于Meta图模型的供应链速度问题研究.工业技术经济,2006,25(2):76-79[10] 秦凡,严建援.基于过程优化的供应链过程语义建模方法.计算机工程与应用,2007,43(23):7194-197[11] 杨鹏,赵辉,呼生刚.基于强化学习和半马氏过程的供应链优化.计算机工程与应用,2007,43(4):240-248[12] 严建援,李凯,张路.一种基于语法的供应链流程定义元模型.管理科学学报,2010,13(6):33-63[13] 叶斌,马忠贵,曾广平,涂序彦.多agent系统的联盟框架及形成机制.北京理工大学学报,2005,25(10):864-867[5] 王春喜,查建中,李建勇,鄂明成.面向网络制造的知识供应链建模.计算机集成制造系统,2000,6(6):7-10。

纸质笔记和电子笔记英语作文

纸质笔记和电子笔记英语作文

纸质笔记和电子笔记英语作文英文回答:Paper notes and electronic notes both have their own advantages and disadvantages. Personally, I prefer using paper notes because I find them more convenient and reliable. When I write something down on paper, it feels more tangible and I can easily flip through my notes without any technical issues.For example, when I was studying for my exams last semester, I found that writing my notes by hand helped me remember the information better. The act of physically writing things down helped me process the information more effectively compared to typing on a computer. Additionally, I could easily draw diagrams and charts on paper, which was essential for certain subjects like biology and chemistry.On the other hand, electronic notes have their own benefits as well. They are more environmentally friendly asthey reduce the need for paper, and they can be easily organized and searched through using keywords. I do use electronic notes for work meetings and conferences, as it allows me to quickly search for specific information without flipping through pages.However, I find that electronic notes can be distracting at times. With notifications popping up on my screen and the temptation to check social media, I often find myself losing focus. Paper notes, on the other hand, keep me more focused as there are no distractions.中文回答:纸质笔记和电子笔记各有优缺点。

Theoretical and Computational Chemistry Studies

Theoretical and Computational Chemistry Studies

Theoretical and ComputationalChemistry StudiesChemistry is the branch of science that deals with the properties and behavior of matter, particularly at the atomic and molecular level. This field has expanded in recent years to include theoretical and computational studies, which use mathematical and computational models to understand the behavior of chemical systems. These studies are extremely important for advancing our knowledge of chemistry and have applications in many areas, including drug design, materials science, and environmental research.One important area of theoretical and computational chemistry involves the study of chemical reactions. Chemists use theoretical models to predict the behavior of molecules in reactions, and computational tools to simulate the reaction mechanisms and evaluate the energy and kinetics involved. These studies help chemists to design better catalysts, understand the mechanism of enzyme-catalyzed reactions, and develop new drugs.Another important area of theoretical and computational chemistry is the study of molecular structure and properties. Chemists use quantum mechanical calculations to predict the electronic structure and spectroscopic properties of molecules. These studies help to explain the bonding and reactivity of molecules and have applications in many areas, including materials science and drug design.In addition to these areas, there are many other important topics in theoretical and computational chemistry. For example, chemists use statistical mechanics to study the thermodynamics of chemical systems, and molecular dynamics simulations to study the behavior of molecules in solution. Other studies focus on the development of new computational tools and methods for analyzing complex chemical systems.Overall, theoretical and computational chemistry studies are essential for advancing our understanding of chemical systems and developing new materials and drugs. These studies require a strong background in both chemistry and mathematics, and are often carried out in collaboration with experimental chemists. As computational powercontinues to increase, these studies are likely to become even more important in the future.。

电脑普及时代书写依然重要英语作文

电脑普及时代书写依然重要英语作文

电脑普及时代书写依然重要英语作文In the age of computer literacy and digital technology, writing by hand may seem like a dying art. With the prevalence of computers, smartphones, and tablets, many people rely on keyboards and touchscreens to communicate. However, despite the convenience of electronic devices, the act of handwriting remains an important skill that should not be overlooked.Writing by hand has unique cognitive benefits that typing does not offer. Studies have shown that handwritten notes lead to better retention of information compared to typed notes. When we write by hand, we are forced to slow down and think more critically about what we are putting on paper. This deeper level of engagement with the material can enhance our understanding and memory of the content.Furthermore, handwriting is a more personalized form of communication. Each individual has a unique handwriting style that reflects their personality and creativity. Handwritten letters, notes, and cards convey a sense of intimacy and thoughtfulness that digital messages cannot replicate. In a world where instant communication is the norm, taking the time to write a letter by hand can make a lasting impression on the recipient.In addition, there are practical reasons why writing by hand is still important in the digital age. Not everyone has access to electronic devices or the internet, and in some situations, handwriting may be the only option for communication. Additionally, handwritten signatures are still required for legal documents, contracts, and official paperwork. Developing good penmanship and handwriting skills is essential for success in these situations.While digital technology has revolutionized the way we communicate and interact with the world, it is important to remember the value of handwriting. Writing by hand offers cognitive benefits, personalizes communication, and remains a necessary skill in certain situations. In a society that is increasingly reliant on technology, let us not forget the simple pleasure and importance of putting pen to paper.。

议论电子笔记好还是纸质好英语作文

议论电子笔记好还是纸质好英语作文

议论电子笔记好还是纸质好英语作文英文回答:In the contemporary digital landscape, the debate over the relative merits of electronic notes and paper notes continues to ignite discussions in educational and professional settings. Each medium offers unique advantages and drawbacks, and the optimal choice depends on individual preferences, usage patterns, and specific tasks.Electronic Notes.Convenience and Accessibility: Digital note-taking allows instant access to notes from any device with an internet connection. Notes can be easily shared, synchronized across multiple platforms, and backed up in the cloud, providing peace of mind in case of device loss or damage.Flexibility and Customization: Electronic notes offera wide range of customization options, including text formatting, image insertion, hyperlink insertion, andaudio/video embedding. Users can also organize notes using tags, folders, and hierarchical structures, making it easier to find and navigate information.Collaboration and Real-time Editing: Digital platforms facilitate real-time collaboration among multiple users, allowing teams to share, edit, and comment on notes simultaneously. This fosters better communication, knowledge sharing, and collective brainstorming.Paper Notes.Tactile and Multisensory Experience: The tactile nature of paper provides a more immersive and personalnote-taking experience. Writing by hand engages multiple senses, enhancing memory retention and fostering a deeper connection with the material.Physical Organization and Tangibility: Paper notes offer a physical organization system that is tangible andeasy to manipulate. Notes can be easily sorted, stacked,and filed, providing a visual representation of the information hierarchy.Focus and Reduced Distractions: The absence of digital distractions and notifications on paper notes promotes focused attention and allows for sustained writing sessions without interruptions.Ultimately, the choice between electronic and paper notes is a personal preference. Those who value convenience, accessibility, and collaboration may find electronic notes more suitable. Individuals who prefer a tactile experience, physical organization, and distraction-free writing environment may opt for paper notes.中文回答:电子笔记和纸质笔记各有优缺点,选择哪种取决于个人偏好、使用模式和具体任务。

LTL性质的可监视性

LTL性质的可监视性

" " 反 证 法 。 假 设 L 不 可 监 视 , 则
y , p y , 对 任 意 非 空 开 集 V p , 都 有 V L 并 且 V L c 。 对 任 意 A , 令 x A.y ,则 A.p A.y,A.V A.p ,有 A.V .L 并 且 A.V .L c , 即 A.V L 并 且 A.V L c ,从而, L 不可监视。矛盾。
V A1 Ai 1 i k q , (则 A0.V p )使得 V L 或 V L c 。因此, 非 空 开 集 A0.V .L 或 A0.V .L c 即 A0.V L 或 A0.V L c 。
对于长度为 1 的前缀,即 p 1, p A0 ,存在非 空开集 A0.V A 0 Ai A0 p ,因此,L 可 监视。
LTL 的语义和可监视的定义出发,证明 LTL 性质的 义 无 限 单 词 的 前 缀 和 后 缀 。 对 无 限 单 词
可监视性。ห้องสมุดไป่ตู้
1 符号说明
本文中, 表示非空有限字母表,并且假定 2 , 的元素是抽象符号,称为字母,用大写 字母 A, B,C, 表示。 上的单词是 中字母的有限 或无限序列,即单词具有形式 p A0 A1 An , (n N ) 或
but it’s not closed under (until). Further, by strengthening the conditions, we get several sufficient conditions to ensure the
monitorability of 1 2 . Key Words: monitorable; LTL formula; model checking; topology
  1. 1、下载文档前请自行甄别文档内容的完整性,平台不提供额外的编辑、内容补充、找答案等附加服务。
  2. 2、"仅部分预览"的文档,不可在线预览部分如存在完整性等问题,可反馈申请退款(可完整预览的文档不适用该条件!)。
  3. 3、如文档侵犯您的权益,请联系客服反馈,我们会尽快为您处理(人工客服工作时间:9:00-18:30)。

FACS2006
From Theory to Practice in Distributed
Component Systems
Denis Caromel1
Departement d’Informatique
University of Nice-Sophia Antipolis
I3S-CNRS,INRIA,IUF,France
Abstract
This talk will start by presenting theoretical results on determinism for asynchronous distributed compo-nents.It will then shows how to apply those results in a practical implementation,available as Open Source within the ObjectWeb Open Source community.Further,current work aiming at defining a joint European component model for Grid computing(GCM)will be summarized.Finally,it will conclude with challenges at hand with component systems,especially work related to capturing behavioral properties.Current work aiming at defining behavioural models and techniques for hierarchical components will be introduced.
References
[1]Caromel D.,Henrio L.,Serpette B.,“Asynchronous and Deterministic Objects”,pp.123–134in
Proceedings of the POPL’04,31st ACM Symposium on Principles of Programming Languages,ACM Press,POPL,2004.
[2]Caromel D.,Henrio L.,“A Theory of Distributed Objects”,Springer-Verlag,2005,378pages,hardcover,
ISBN3-540-20866-6.
[3]GCM:Grid Component Model NoE CoreGrid deliverable,2006Towards a European Standard
Component Model for Grid Computing
[4]Object ProActive,/
[5]Behavioural Models for Hierarchical Components,T.Barros and L.Henrio and E.Madelaine,Model
Checking Software,12th International SPIN Workshop,Springer Verlag,2005,Aug.,San Francisco,pp.
154–168,LNCS.
1Email:Denis.Caromel@sophia.inria.fr
This paper is electronically published in
Electronic Notes in Theoretical Computer Science
URL:www.elsevier.nl/locate/entcs
Caromel
230。

相关文档
最新文档