Using Agent-Based Modeling in the Exploration of Self-Organizing Neural Networks


Model-Based Inversion of Dynamic Range Compression

Model-Based Inversion of Dynamic Range Compression

Stanislaw Gorlow, Student Member, IEEE, and Joshua D. Reiss, Member, IEEE

Abstract—In this work it is shown how a dynamic nonlinear time-variant operator, such as a dynamic range compressor, can be inverted using an explicit signal model. By knowing the model parameters that were used for compression, one is able to recover the original uncompressed signal from a "broadcast" signal with high numerical accuracy and very low computational complexity. A compressor-decompressor scheme is worked out and described in detail. The approach is evaluated on real-world audio material with great success.

Index Terms—Dynamic range compression, inversion, model-based, reverse audio engineering.

I. INTRODUCTION

Sound or audio engineering is an established discipline employed in many areas that are part of our everyday life without us taking notice of it. But not many know how the audio was produced. If we take sound recording and reproduction or broadcasting as an example, we may imagine that a prerecorded signal from an acoustic source is altered by an audio engineer in such a way that it corresponds to certain criteria when played back. The number of these criteria may be large and usually depends on the context. In general, the said alteration of the input signal is a sequence of numerous forward transformations, the reversibility of which is of little or no interest. But what if one wished to do exactly this, that is, to reverse the transformation chain, and what is more, in a systematic and repeatable manner? The research objective of reverse audio engineering is twofold: to identify the transformation parameters given the input and the output signals, as in [1], and to regain the input signal that goes with the output signal given the transformation parameters. In both cases, an explicit signal model is mandatory. The latter case might seem trivial, but only if the applied transformation is linear and orthogonal and as such perfectly invertible. Yet the forward transform is often neither linear nor invertible. This is the case for dynamic range compression

Manuscript received December 05, 2012; revised February 28, 2013; accepted February 28, 2013. Date of publication March 15, 2013; date of current version March 29, 2013. This work was supported in part by the "Agence Nationale de la Recherche" within the scope of the DReaM project (ANR-09-CORD-006) as well as the laboratory with which the first author is affiliated as part of the "mobilité juniors" program. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Woon-Seng Gan. S. Gorlow is with the Computer Science Research Laboratory of Bordeaux (LaBRI), CNRS, Bordeaux 1 University, 33405 Talence Cedex, France (e-mail: stanislaw.gorlow@labri.fr). J. D. Reiss is with the Centre for Digital Music (C4DM), Queen Mary, University of London, London E1 4NS, U.K. (e-mail: josh.reiss@).
Digital Object Identifier 10.1109/TASL.2013.2253099

(DRC), which is commonly described by a dynamic nonlinear time-variant system. The classical linear time-invariant (LTI) system theory does not apply here, so a tailored solution to the problem at hand must be found instead. At this point, we would also like to highlight the fact that neither Volterra nor Wiener model approaches [2]–[4] offer a solution, and neither do describing functions [5], [6]. These are useful tools when identifying a time-invariant or a slowly varying nonlinear system or analyzing the limit cycle behavior of a feedback system with a static nonlinearity.

A method to invert dynamics compression is described in [7], but it requires an instantaneous gain value to be transmitted for each sample of the compressed signal. To provide a means to control the data rate, the gain signal is subsampled and also entropy coded. This approach is highly inefficient, as it does not rely on a gain model and is extremely generic. On the other hand, transmitting the uncompressed signal in conjunction with a few typical compression parameters like threshold, ratio, attack, and release would require a much smaller capacity and yield the best possible signal quality with regard to any thinkable measure.

A more realistic scenario is when the uncompressed signal is not available on the consumer side. This is usually the case for studio music recordings and broadcast material, where the listener is offered a signal that is meant to sound "good" to everyone. However, the loudness war [8] has resulted in over-compressed audio material. Over-compression makes a song lose its artistic features like excitingness or liveliness and desensitizes the ear due to a louder volume. There is a need to restore the original signal's dynamic range and to experience audio free of compression.

In addition to the normalization of the program's loudness level, the Dolby solution [9], [10] also includes dynamic range expansion. The expansion parameters that help reproduce the original program's dynamic range are tuned on the broadcaster side and transmitted as metadata together with the broadcast signal. This is a very convenient solution for broadcasters, not least because the metadata is quite compact. Dynamic range expansion is yet another forward transformation rather than a true inversion.

Evidently, none of the previous approaches satisfy the reverse engineering objective of this work. The goal of the present work, hence, is to invert dynamic range compression, which is a vital element not only in broadcasting but also in mastering. The paper is organized as follows. Section II provides a brief introduction to dynamic range compression and presents the compressor model upon which our considerations are based.
The data model, the formulation of the problem, and the pursued approach are described next in Section III. The inversion is discussed in detail in Section IV. Section V illustrates how an integral step of the inversion procedure, namely the search for the zero-crossing of a non-linear function, can be solved in an iterative manner by means of linearization. Some other compressor features are discussed in Section VI. The complete algorithm is given in the form of pseudocode in Section VII, and its performance is evaluated for different compressor settings in Section VIII. Conclusions are drawn in Section IX, where some directions for future work are mentioned.

II. DYNAMIC RANGE COMPRESSION

Dynamic range compression or simply "compression" is a sound processing technique that attenuates loud sounds and/or amplifies quiet sounds, which in consequence leads to a reduction of an audio signal's dynamic range. The latter is defined as the difference between the loudest and quietest sound, measured in decibels. In the following, we will use the word "compression" having "downward" compression in mind, though the discussed approach is likewise applicable to "upward" compression. Downward compression means attenuating sounds above a certain threshold while leaving sounds below the threshold unchanged. A sound engineer might use a compressor to reduce the dynamic range of source material for purposes of aesthetics, intelligibility, recording, or broadcast limitations.

Fig. 1. Basic broadband compressor model (feed forward).

Fig. 1 illustrates the basic compressor model from ([11], ch. 2), amended by a switchable RMS/peak detector in the side chain, making it compatible with the compressor/limiter model from ([12], p. 106). We will hereafter restrict our considerations to this basic model, as the purpose of the present work is to demonstrate a general approach rather than a solution to a specific problem. First, the input signal is split and a copy is sent to the side chain. The detector then calculates the magnitude or level of the sidechain signal using the root mean square (RMS) or peak as a measure for how loud a sound is ([12], p. 107).
The detector's temporal behavior is controlled by the attack and release parameters. The sound level is compared with the threshold level and, in case it exceeds the threshold, a scale factor is calculated which corresponds to the ratio of input level to output level. The knee parameter determines how quickly the compression ratio is reached. At the end of the side chain, the scale factor is fed to a smoothing filter that yields the gain. The response of the filter is controlled by another set of attack and release parameters. Finally, the gain control applies the smoothed gain to the input signal and adds a fixed amount of makeup gain to bring the output signal to a desired level. Such a broadband compressor operates on the input signal's full bandwidth, treating all frequencies from zero through the highest frequency equally. A detailed overview of the sidechain controls of a basic gain computer is given in ([11], ch. 3).

III. DATA MODEL, PROBLEM FORMULATION, AND PROPOSED SOLUTION

A. Data Model and Problem Formulation

The employed data model is based on the compressor from Fig. 1. The following simplifications are additionally made: the knee parameter ("hard" knee) and the makeup gain (fixed at 0 dB) are ignored. The compressor is defined as a single-input single-output (SISO) system, that is, both the input and the output are single-channel signals. What follows is a description of each block by means of a dedicated function.

The RMS/peak detector as well as the gain computer build upon a first-order (one-pole) lowpass filter. The sound level or envelope v[n] of the input signal x[n] is obtained by (1), where p = 2 yields an RMS detector and p = 1 a peak detector. The non-zero smoothing factor α may take on different values, α_A or α_R, depending on whether the detector is in the attack or release phase. The condition for the level detector to enter the attack phase and to choose α_A over α_R is (2). A formula that converts a time constant t into a smoothing factor is given in ([12], p. 109), where f_s is the sampling frequency. The static nonlinearity in the gain computer is usually modeled in the logarithmic domain as a continuous piecewise linear function (3), where S is the slope and T is the threshold in decibels. The slope is further derived from the desired compression ratio R according to (4). Equation (3) is equivalently expressed in the linear domain as (5), where f[n] is the linear scale factor before filtering. The smoothed gain g[n] is then calculated as the exponentially-weighted moving average (6), where the decision for the gain computer to choose the attack smoothing factor β_A instead of β_R is subject to (7). The output signal is finally obtained by multiplying the above gain with the input signal (8). Due to the fact that the gain is strictly positive, g[n] > 0, it follows that the signs of input and output agree (9), where sgn is the signum or sign function. In consequence, it is convenient to factorize the input signal as a product of the sign and the modulus according to (10).

The problem at hand is formulated in the following manner: given the compressed signal y[n] and the model parameters, recover the modulus |x[n]| of the original signal. For a more intuitive use, the smoothing factors may be replaced by the corresponding time constants. The meaning of each parameter is listed below.

T: the threshold in dB
R: the compression ratio (in dB:dB)
Detector type: RMS or peak
t_A(v): the attack time of the envelope filter in ms
t_R(v): the release time of the envelope filter in ms
t_A(g): the attack time of the gain filter in ms
t_R(g): the release time of the gain filter in ms
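The bodies of (1)–(8) did not survive extraction. As a hedged reconstruction, using editorial symbols defined above (a standard feed-forward side chain in the style of [11], [12], not necessarily the authors' exact notation):

```latex
% Editorial reconstruction of (1)-(8); symbols are assumed, not the published originals.
\begin{align}
v^p[n] &= \alpha\,\lvert x[n]\rvert^p + (1-\alpha)\,v^p[n-1],
  && p = 2~\text{(RMS)},\ p = 1~\text{(peak)} \tag{1}\\
\alpha &= \begin{cases}\alpha_{\mathrm{A}} & \text{if } \lvert x[n]\rvert^p > v^p[n-1]\\
  \alpha_{\mathrm{R}} & \text{otherwise}\end{cases} \tag{2}\\
G[n] &= \min\bigl\{0,\; S\,(V[n]-T)\bigr\},
  && V[n] = 20\log_{10} v[n] \tag{3}\\
S &= \frac{1}{R} - 1 \;\le\; 0 \tag{4}\\
f[n] &= \min\bigl\{1,\;\bigl(v[n]\,10^{-T/20}\bigr)^{S}\bigr\} \tag{5}\\
g[n] &= \beta\,f[n] + (1-\beta)\,g[n-1] \tag{6}\\
\beta &= \begin{cases}\beta_{\mathrm{A}} & \text{if } f[n] < g[n-1]\\
  \beta_{\mathrm{R}} & \text{otherwise}\end{cases} \tag{7}\\
y[n] &= g[n]\,x[n] \tag{8}
\end{align}
```

One common time-constant conversion consistent with ([12], p. 109) is α = 1 − e^(−2.2/(f_s t)) for a time constant t in seconds; the same form is assumed here for β.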
B. Proposed Solution

The output of the side chain, that is, the gain g[n] given |x[n]|, g[n−1], and v[n−1], may be written as (11). In (11), a nonlinear dynamic operator maps the modulus of the input signal onto a sequence of instantaneous gain values according to the compressor model represented by Fig. 1. Using (11), (8) can be solved for the input, subject to the invertibility of this operator. In order to solve the above equation, one requires knowledge of |x[n]|, which is unavailable. However, since the gain is a function of |x[n]|, we can express the output as a function of one independent variable, and in that manner we obtain an equation with a single unknown (12), in which the composite operator represents the entire compressor. If this operator is invertible, i.e., bijective, |x[n]| can be obtained from |y[n]| by (13). And yet, since the phase of the filters is unknown, the condition for applying decompression must be predicted from |y[n]|, g[n−1], and v[n−1]; one therefore needs a condition for toggling between the attack and release phases. Depending on the quality of the prediction, the recovered modulus may differ somewhat at transition points from the original modulus, so that in the end (14) holds only approximately. In the next section it is shown how such an inverse compressor, or decompressor, is derived.

IV. INVERSION OF DYNAMIC RANGE COMPRESSION

A. Characteristic Function

For simplicity, we choose the instantaneous envelope value v[n] instead of |x[n]| as the independent variable in (12). The relation between the two is given by (1). From (6) and (8), relations (15) and (16) follow for the case when compression is active. From (1) one obtains (17), or equivalently (18) (note that the gain is strictly positive by definition). Moreover, (18) has a unique solution if the static curve and the smoothing are invertible. Moving the expression on the left-hand side over to the right-hand side, we may define the characteristic function (19). The root or zero-crossing of (19) hence represents the sought-after envelope value. Once the root is found (see Section V), the current values of the scale factor, the gain, and the envelope are updated as per (20), and the decompressed sample is then calculated as (21).

B. Attack-Release Phase Toggle

1) Envelope Smoothing: In case a peak detector is in use, the smoothing factor α takes on two different values. The condition for the attack phase is then given by (2) and is equivalent to (22). Assuming that the past value of the envelope is known at time n, what needs to be done is to express the unknown input modulus in terms of known quantities such that the equation still holds true. If α_R is rather small, or equivalently if the release time is sufficiently large at 44.1-kHz sampling, the corresponding term in (15) is negligible, which approximates (15) as (23). Solving (23) for the unknown and plugging the result into (22), we obtain (24). If (24) holds true, the detector is assumed to be in the attack phase.

2) Gain Smoothing: Just like the peak detector, the gain smoothing filter may be in either the attack or release phase.
The necessary condition for the attack phase in (7) may also be formulated as (25). But since the current envelope value is unknown, we need to substitute the unknown in the above inequality by something that is known. With this in mind, (15) is rewritten as (26). Provided that the smoothing factor lies between zero and one, and due to the fact that the scale factor does not exceed one, the expression in square brackets in (26) is smaller than one, and thus during attack (27) holds. Substituting by means of (20) and solving (27) results in (28). If the unknown in (25) is substituted by the expression on the right-hand side of (28), (25) still holds true, so the following sufficient condition is used to predict the attack phase of the gain filter: (29). Note that the values of all variables are known whenever (29) is evaluated.

C. Envelope Predictor

An instantaneous estimate of the envelope value is required not only to predict when compression is active, formally according to (5), but also to initialize the iterative search algorithm in Section V. Resorting once more to (15), it can be noted that in the opposite case the corresponding term vanishes, and so (30) holds. The sound level of the input signal at time n is therefore (31), which must be greater than the threshold for compression to set in, whereas the smoothing factors are selected based on (24) and (29), respectively.

D. Error Analysis

Consider the gain being estimated from its past value according to (32). The normalized error is then given by (33)–(34). The gain decreases during attack and increases during release, respectively. The instantaneous gain can also be expressed as (35), where the exponent reflects the runtime in samples. Using (35) in (34), the magnitude of the error is given by (36)–(37). For short runtimes, (36) becomes (38), whereas for long runtimes, (37) converges to infinity (39). So, the error is smaller for a large smoothing factor or a short runtime. The smallest possible error again depends on the current and the previous value of the gain. The error accumulates if consecutive gain values differ. The difference between consecutive gain values is signal dependent. The signal envelope fluctuates less, and is thus smoother, for smaller smoothing factors or longer time constants. The gain is also more stable when the compression ratio is low; for a ratio of 1:1 it is perfectly constant. The threshold has a negative impact on error propagation: the lower the threshold, the more the error depends on the gain trajectory, since more samples are compressed with different gain values.
The RMS detector stabilizes the envelope more than the peak detector, which also reduces the error. Furthermore, since the attack time is usually shorter than the release time, the error due to the envelope filter is smaller during release, whereas the error due to the gain filter is smaller during attack. Finally, the error is expected to be larger at transition points between quiet and loud signal passages.

The above error may cause a decision in favor of a wrong smoothing factor in (24), e.g., the attack value instead of the release value. The decision error from (24) then propagates to (29). The error due to (32) is accentuated by (24), with the consequence that (29) is less reliable than (24). In regard to (31), the reliability of the envelope's estimate is subject to the validity of (24) and (29). A better estimate is obtained when the sound level detector and the gain filter are both in either the attack or the release phase. Here too, the estimation error increases with the compression ratio and with a lower threshold.

V. NUMERICAL SOLUTION OF THE CHARACTERISTIC FUNCTION

An approximate solution to the characteristic function can be found, e.g., by means of linearization. The estimate from (31) may moreover serve as a starting point for an iterative search for an optimum. The criterion for optimality is chosen as the deviation of the characteristic function from zero, initialized to (40). Thereupon, (19) may be approximated at a given point using the equation of a straight line, y = ax + b, where a is the slope and b is the y-intercept. The zero-crossing is characterized by the equation (41), as shown in Fig. 2. The new estimate of the optimal envelope value is found as (42). If the new estimate is less optimal than the previous one, the iteration is stopped and the previous estimate is the final estimate. The iteration is also stopped if the deviation is smaller than some ε; in that case, the estimate is optimal with respect to the chosen criterion. Otherwise, the estimates are updated after every step and the procedure is repeated until the search has converged to a more optimal value. The proposed method is a special form of the secant method with a single initial value.

Fig. 2. Graphical illustration of the iterative search for the zero-crossing.
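As a concrete companion to Section V, here is a minimal Python sketch of the iterative zero-crossing search. All names are editorial; the envelope and gain recursions follow the hedged reconstruction given in Section III, and a finite-difference linearization stands in for the analytic slope:

```python
def characteristic(v, y_abs, v_prev, g_prev, alpha, beta, t_lin, s, p=2):
    """phi(v) in the spirit of (19): envelope recursion minus candidate envelope.

    v      : candidate envelope value v[n]
    y_abs  : |y[n]|, modulus of the compressed sample
    v_prev : previous envelope v[n-1]; g_prev : previous smoothed gain g[n-1]
    t_lin  : linear threshold 10**(T/20); s : slope 1/R - 1 <= 0
    """
    f = min(1.0, (v / t_lin) ** s)          # static curve, cf. (5)
    g = beta * f + (1.0 - beta) * g_prev    # gain smoothing, cf. (6)
    x_abs = y_abs / g                       # candidate input modulus, cf. (8)
    return alpha * x_abs ** p + (1.0 - alpha) * v_prev ** p - v ** p  # cf. (1)


def solve_envelope(v0, args, eps=1e-12, max_iter=50):
    """Secant-style search with a single starting value v0 (Section V)."""
    v, phi = v0, characteristic(v0, *args)
    for _ in range(max_iter):
        if abs(phi) < eps:                  # |phi| below tolerance: done
            break
        h = abs(v) * 1e-6 + 1e-12           # finite-difference step
        dphi = (characteristic(v + h, *args) - phi) / h
        if dphi == 0.0:
            break
        v_new = max(v - phi / dphi, 1e-12)  # zero of local linearization, cf. (42)
        phi_new = characteristic(v_new, *args)
        if abs(phi_new) >= abs(phi):        # new point less optimal: stop
            break
        v, phi = v_new, phi_new
    return v
```

Per-sample decompression then recomputes f[n] and g[n] from the root and sets the recovered modulus to |y[n]|/g[n], as in (20)–(21).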
VI. GENERAL REMARKS

A. Stereo Linking

When dealing with stereo signals, one might want to apply the same amount of gain reduction to both channels to prevent image shifting. This is achieved through stereo linking. One way is to calculate the required amount of gain reduction for each channel independently and then apply the larger amount to both channels. The question which arises in this context is which of the two channels the gain was derived from. To resolve this ambiguity, one solution would be to signal which of the channels carries the applied gain. One could then decompress the marked sample and use its gain for the other channel. Although very simple to implement, this approach provokes an additional data rate of 44.1 kbps at 44.1-kHz sampling. A rate-efficient alternative that comes with a higher computational cost is realized in the following way. First, one decompresses both the left and the right channel independently, and in so doing one obtains two estimates, where one subscript shall denote the left channel and the other the right channel, respectively. In a second step, one calculates the compressed values of the two estimates and selects the channel for which the recompressed value matches the transmitted one. In a final step, one updates the remaining variables using the gain of the selected channel.

B. Lookahead

A compressor with a look-ahead function, i.e., with a delay in the main signal path as in ([12], p. 106), uses past input samples as weighted output samples. Now that some future input samples are required to invert the process—which are unavailable—the inversion is rendered impossible. The main and side-chain paths must thus be in sync for the approach to be applied.

C. Clipping and Limiting

Another point worth mentioning is that "hard" clipping and "brick-wall" limiting are special cases of compression with the attack time set to zero and the compression ratio set to infinity. The static nonlinearity in that particular case is a one-to-many mapping, which by definition is noninvertible.

VII. THE ALGORITHM

The complete algorithm is divided into three parts, each of them given as pseudocode below. Algorithm 1 outlines the compressor that corresponds to the model from Sections II–III. Algorithm 2 illustrates the decompressor described in Section IV, and the iterative search from Section V is finally summarized in Algorithm 3. One parameter represents the sampling frequency in kHz.

Algorithm 1: The compressor (function COMP).

Algorithm 2: The decompressor (function DECOMP).

Algorithm 3: The iterative search for the zero-crossing (function CHARFZERO).
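The statement bodies of Algorithms 1–3 are not reproduced here. As a hedged stand-in for Algorithm 2, the following Python loop ties the pieces together, reusing characteristic and solve_envelope from the sketch above; the attack/release predictions (24) and (29) are approximated by their natural analogs, so this is an editorial reconstruction, not the authors' exact algorithm:

```python
import numpy as np

def static_curve(v, t_lin, s):
    return min(1.0, (v / t_lin) ** s) if v > 0.0 else 1.0

def decompress(y, t_lin, s, alpha_a, alpha_r, beta_a, beta_r, p=2, eps=1e-12):
    """Recover x_hat from the compressed signal y, given the model parameters."""
    x_hat = np.empty(len(y))
    v_prev, g_prev = eps, 1.0                       # envelope and gain filter states
    for n, yn in enumerate(np.asarray(y, dtype=float)):
        y_abs = abs(yn)
        x_est = y_abs / g_prev                      # crude modulus estimate, cf. (30)
        alpha = alpha_a if x_est > v_prev else alpha_r            # analog of (24)
        v_est = (alpha * x_est**p + (1 - alpha) * v_prev**p) ** (1 / p)  # cf. (31)
        beta = beta_a if static_curve(v_est, t_lin, s) < g_prev else beta_r  # analog of (29)
        if v_est > t_lin:                           # compression predicted active
            args = (y_abs, v_prev, g_prev, alpha, beta, t_lin, s, p)
            v = solve_envelope(v_est, args)         # Section V root search
        else:
            v = v_est
        g = beta * static_curve(v, t_lin, s) + (1 - beta) * g_prev   # cf. (6), (20)
        x_hat[n] = yn / g                           # cf. (21); sign preserved via (9)
        v_prev, g_prev = v, g
    return x_hat
```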
VIII. PERFORMANCE EVALUATION

A. Performance Metrics

To evaluate the inverse approach, the following quantities are measured: the root-mean-square error (RMSE) (43), given in decibels relative to full scale (dBFS); the perceptual similarity between the original and the decompressed signal; and the execution time of the decompressor relative to real time (RT). Furthermore, we present the percentage of compressed samples, the mean number of iterations until convergence per compressed sample, the error rate of the attack-release toggle for the gain smoothing filter, and finally the error rate of the envelope predictor. The perceptual similarity is assessed by PEMO-Q [13], [14] with PSM as metric. The simulations are run in MATLAB on an Intel Core i5-520M CPU.

B. Computational Results

Fig. 3 shows the inverse output signal for a synthetic input signal using an RMS detector. The inverse signal is obtained from the compressed signal with a very small error in dBFS terms. It is visually indistinguishable from the original signal. Due to the fact that the signal envelope is constant most of the time, the error is noticeable only around transition points—which are few. The decompressor's performance is further evaluated for some commercial compressor presets. The used audio material consists of 12 items covering speech, sung voice, music, and jingles. All items are normalized to a common loudness level in LKFS [15]. The ε-value in the break condition of Algorithm 3 is set to a small constant. A detailed overview of compressor settings and performance figures is given in Tables I–II. The presented results suggest that the decompressed signal is perceptually indistinguishable from the original—the PSM value is flawless. This was also confirmed by the authors through informal listening tests. As can be seen from Table II, the largest inversion error is associated with setting E and the smallest with setting B. For all five settings, the error is larger when an RMS detector is in use. This is partly due to the fact that the characteristic function has a stronger curvature in the RMS case. By defining the distance in (40) differently, it is possible to attain a smaller error for an RMS detector at the cost of a slightly longer runtime. In most cases, the envelope predictor works more reliably as compared to the toggle switch between attack and release. It can also be observed that the choice of time constants seems to have little impact on the decompressor's accuracy. The major parameters that affect the decompressor's performance are the detector type and the ratio, while the threshold is evidently the predominant one: the RMSE strongly correlates with the threshold level.

Figs. 4–5 show the inversion error as a function of various time constants. These are in the range of typical attack and release times for a limiter (peak) or compressor (RMS) ([12], pp. 109–110). It can be observed that the inversion accuracy depends on the release time of the peak detector and not so much on its attack time, for both the envelope and the gain filter; see Figs. 4 and 5(b). For the envelope filter, all error curves exhibit a local dip around a release time of 0.5 s. The error increases steeply below that bound but moderately with larger values. In the proximity of 5 s, the error converges to a constant level. With regard to the gain filter, the error behaves in a reverse manner: the curves in Fig. 5(b) exhibit a local peak around 0.5 s. It can further be observed in Fig. 4(a) that the curve for the shortest release time has a dip where the attack time is close to 1 ms, i.e., where the difference between the two is minimal. This is also true for Fig. 4(c) and (d): the lowest error is where the attack and release times are identical. As a general rule, the error that is due to the attack-release switch is smaller for the gain filter in Fig. 5.
Looking at Fig. 6, one can see that the error decreases with the threshold and increases with the compression ratio. At a ratio of 10:1 and beyond, the RMSE scales almost exclusively with the threshold. The lower the threshold, the stronger the error propagates between decompressed samples, which leads to a larger RMSE value. The RMS detector further augments the error because it stabilizes the envelope more than the peak detector. Clearly, the threshold level has the highest impact on the decompressor's accuracy.

Fig. 3. An illustrative example using an RMS amplitude detector with the envelope time constant set to 5 ms, a threshold in dBFS (dashed line in the upper right corner), a compression ratio of 4:1, and gain-filter time constants of 1.6 ms for attack and 17 ms for release, respectively. The RMSE is given in dBFS.

TABLE I. Selected compressor settings.

TABLE II. Performance figures obtained for various audio material (12 items).

Fig. 4. RMSE as a function of typical attack and release times using a peak (upper row) or an RMS amplitude detector (lower row). In the left column, the attack time of the envelope filter is varied while the release time is held constant. The right column shows the reverse case. The time constants of the gain filter are fixed at zero. In all four cases, threshold and ratio are fixed at −32 dBFS and 4:1, respectively.

Fig. 5. RMSE as a function of typical attack and release times using a peak (upper row) or an RMS amplitude detector (lower row). In the left column, the attack time of the gain filter is varied while the release time is held constant. The right column shows the reverse case. The time constants of the envelope filter are fixed at zero. In all four cases, threshold and ratio are fixed at −32 dBFS and 4:1, respectively.

Fig. 6. RMSE as a function of threshold relative to the signal's average loudness level (left column) and compression ratio (right column) using a peak (upper row) or an RMS amplitude detector (lower row). The time constants are held fixed.

IX. CONCLUSION AND OUTLOOK

This work examines the problem of finding an inverse to a nonlinear dynamic operator such as a digital compressor. The proposed approach is characterized by the fact that it uses an explicit signal model to solve the problem. To find the "dry" or uncompressed signal with high accuracy, it is sufficient to know the model parameters. The parameters can, e.g., be sent together with the "wet" or compressed signal in the form of metadata, as is the case with Dolby Volume and ReplayGain [16]. A new bitstream format is not mandatory, since many digital audio standards, like WAV or MP3, provide means to tag the audio content with "ancillary" data. With the help of the metadata, one can then reverse the compression applied after mixing or before broadcast. This allows the end user to have control over the amount of compression, which may be preferred because the sound engineer has no control over the playback environment or the listener's individual taste.

When the compressor parameters are unavailable, they can possibly be estimated from the compressed signal. This may thus be a direction for future work. Another direction would be to apply the approach to more sophisticated models that include a "soft" knee, parallel and multiband compression, or perform gain smoothing in the logarithmic domain; see [11], [12], [17], [18] and references therein. In conclusion, we want to draw the reader's attention to the fact that the presented figures suggest that the decompressor is realtime capable, which can pave the way for exciting new applications. One such application could be the restoration of dynamics in over-compressed audio or
else the accentuation of transient components, see [19]–[21], by an adaptively tuned decompressor that has no prior knowledge of the compressor parameters.

ACKNOWLEDGMENT

This work was carried out in part at the Centre for Digital Music (C4DM), Queen Mary, University of London.

REFERENCES

[1] D. Barchiesi and J. Reiss, "Reverse engineering of a mix," J. Audio Eng. Soc., vol. 58, pp. 563–576, 2010.
[2] T. Ogunfunmi, Adaptive Nonlinear System Identification: The Volterra and Wiener Model Approaches. New York, NY, USA: Springer Science+Business Media, 2007, ch. 3.
[3] Y. Avargel and I. Cohen, "Adaptive nonlinear system identification in the short-time Fourier transform domain," IEEE Trans. Signal Process., vol. 57, no. 10, pp. 3891–3904, Oct. 2009.
[4] Y. Avargel and I. Cohen, "Modeling and identification of nonlinear systems in the short-time Fourier transform domain," IEEE Trans. Signal Process., vol. 58, no. 1, pp. 291–304, Jan. 2010.
[5] A. Gelb and W. E. Vander Velde, Multiple-Input Describing Functions and Nonlinear System Design. New York, NY, USA: McGraw-Hill, 1968, ch. 1.
[6] P. W. J. M. Nuij, O. H. Bosgra, and M. Steinbuch, "Higher-order sinusoidal input describing functions for the analysis of non-linear systems with harmonic responses," Mech. Syst. Signal Process., vol. 20, pp. 1883–1904, 2006.
[7] B. Lachaise and L. Daudet, "Inverting dynamics compression with minimal side information," in Proc. DAFx, 2008, pp. 1–6.
[8] E. Vickers, "The loudness war: Background, speculation and recommendations," in Proc. AES Conv. 129, Nov. 2010.
[9] Dolby Digital and Dolby Volume Provide a Comprehensive Loudness Solution, Dolby Laboratories, 2007.
[10] Broadcast Loudness Issues: The Comprehensive Dolby Approach, Dolby Laboratories, 2011.
[11] R. Jeffs, S. Holden, and D. Bohn, Dynamics Processors—Technology & Application Tips, Rane Corporation, 2005.
[12] U. Zölzer, DAFX: Digital Audio Effects, 2nd ed. Chichester, West Sussex, U.K.: Wiley, 2011, ch. 4.
[13] R. Huber and B. Kollmeier, "PEMO-Q—A new method for objective audio quality assessment using a model of auditory perception," IEEE Trans. Audio Speech Lang. Process., vol. 14, no. 6, pp. 1902–1911, Nov. 2006.
[14] HörTech gGmbH, PEMO-Q [Online]. Available: http://www.hoertech.de/web_en/produkte/pemo-q.shtml, version 1.3.
[15] ITU-R, Algorithms to Measure Audio Programme Loudness and True-Peak Audio Level, Mar. 2011, Rec. ITU-R BS.1770-2.
[16] Hydrogenaudio, ReplayGain [Online]. Available: http://wiki.hydrogenaudio.org/index.php?title=ReplayGain, Feb. 2013.
[17] J. C. Schmidt and J. C. Rutledge, "Multichannel dynamic range compression for music signals," in Proc. IEEE ICASSP, 1996, vol. 2, pp. 1013–1016.
[18] D. Giannoulis, M. Massberg, and J. D. Reiss, "Digital dynamic range compressor design—A tutorial and analysis," J. Audio Eng. Soc., vol. 60, pp. 399–408, 2012.
[19] M. M. Goodwin and C. Avendano, "Frequency-domain algorithms for audio signal enhancement based on transient modification," J. Audio Eng. Soc., vol. 54, pp. 827–840, 2006.
[20] M. Walsh, E. Stein, and J.-M. Jot, "Adaptive dynamics enhancement," in Proc. AES Conv. 130, May 2011.
[21] M. Zaunschirm, J. D. Reiss, and A. Klapuri, "A sub-band approach to modification of musical transients," Comput. Music J., vol. 36, pp. 23–36, 2012.

Agent paper: a Multi-Agent-System dynamic integration framework model with a script-interpretation strategy

[Chinese Abstract] With the development of computer software and hardware technology, software systems keep growing in scale and complexity. How to effectively reuse existing software units is the focus of much current research.

The traditional component-based approach to system integration lacks flexible multi-role, multi-user, multi-level interaction, so the integrated system lacks the necessary flexibility and adaptability.

Agent technology from the field of artificial intelligence, by contrast, is proactive, autonomous, social, and intelligent; applying it to the system integration process makes it possible to solve the problem of flexible, dynamic integration across multiple domains and heterogeneous systems.

This paper applies Agent technology to the field of software system integration. Based on an analysis of domain characteristics and the rules for partitioning integration units, it proposes an Agent model that wraps the original integration units, and designs and implements a multi-Agent-based dynamic integration framework model.

The script-interpretation control strategy of scripting languages is applied to the integration process: integration rules are defined in scripts, and script-interpretation control is used to realize a flexible, dynamic integration control strategy between integration units.

The integration framework defines three kinds of management Agents, namely an Agent capability registry, an Agent management service, and a public message blackboard, as well as a control Agent that coordinates the other Agents and dispatches tasks.

On the basis of individual Agents solving problems independently, a centrally controlled strategy for the interaction among multiple Agents is used to bind the integrated system units together flexibly and dynamically.

Finally, the framework model is applied to a simulation system in a particular domain; within the scope of this paper ... [English Abstract] With the development of computer software and hardware, software is becoming ever larger and more functionally complex, and how to reuse existing software units effectively is a key issue. The traditional component-based method has the disadvantages of low flexibility and being non-dynamic in the process of software system integration. With the properties of bounded autonomy, rationality, sociability, reactivity, cooperation, and responsibility, the Agent is quite suitable for software integration. This paper introduces agent t... [Keywords] Agent, Multi-Agent System, dynamic integration framework model, script-interpretation strategy

Application of Agent-based behavior modeling in virtual environments

Abstract: The main Agent-based behavior modeling techniques applied in virtual environments are reactive-Agent modeling, hybrid-Agent modeling, and multi-Agent modeling; these techniques are analyzed and compared. On this basis, the Agent-based behavior modeling scheme used in Nanjing University's virtual soccer match system, which combines the various modeling techniques, is introduced. Finally, future directions of behavior modeling technology are discussed.
Introduction
Behavior modeling seeks a model that approximates the behavior of real entities as closely as possible, so that whoever constructs an entity can conveniently build a behaviorally realistic virtual entity according to this model [1]. It originates from knowledge-based systems in artificial intelligence (abstract forms of external intelligence, such as problem solving and game playing), artificial life (adaptive behavior in ethology, i.e., adapting to changes in the surrounding environment), and behavior-based systems (BBS); combining the autonomous-Agent behavior representation of behavior-based systems with computer animation has also produced behavioral animation [2]. The main goal of Agent-based behavior modeling is to model the behavior of real objects (both reactive and intelligent behavior) accurately, so that it can be simulated on a computer.
... react, e.g., by defending, tackling, passing, or shooting. The three spectator states switch as follows: when the home team attacks, the spectators cheer; when the home team ...
Figure 3. Agent-based framework (goal, parameter state, task library; mid- and long-term planning unit; situation analysis unit; task scheduling unit; perception unit; execution unit; environment).
1.3 Multi-Agent behavior modeling
The reactive-Agent and hybrid-Agent behavior modeling discussed above both focus on modeling a single object; multi-Agent behavior modeling focuses on properly modeling the interaction among objects. Multi-Agent behavior modeling divides object behavior into autonomous behavior and external interoperation behavior [11]. Autonomous behavior refers to the activities peculiar to the Agent itself; external interoperation behavior refers to the Agent's interactions with the surrounding environment and with other Agents. As shown in Fig. 4, the behavior model in a multi-Agent virtual environment comprises several parts: attributes, an autonomous behavior model, an interoperation behavior model, and so on. The autonomous behavior model consists of messages and rules, and it relates to the Agent's local behavior; a small sketch of this structure follows.
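To make the division concrete, here is a minimal Python sketch of an agent split into attributes, an autonomous behavior model (messages and rules), and an interoperation behavior model; all class and method names are illustrative, not from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    attributes: dict = field(default_factory=dict)   # e.g. position, stamina, state
    inbox: list = field(default_factory=list)        # messages from other agents
    rules: list = field(default_factory=list)        # (condition, action) pairs

    def autonomous_step(self):
        """Autonomous behavior: fire the first rule whose condition holds."""
        for condition, action in self.rules:
            if condition(self):
                action(self)
                break

    def interoperate(self, others, environment):
        """Interoperation behavior: exchange messages with nearby agents."""
        for other in others:
            if other is not self and environment.near(self, other):
                other.inbox.append((self.name, self.attributes.get("state")))

class Environment:
    def near(self, a, b):
        return True            # placeholder neighborhood test

# one simulation tick: autonomous decisions first, then interactions
env, agents = Environment(), [Agent("striker"), Agent("keeper")]
for ag in agents:
    ag.autonomous_step()
for ag in agents:
    ag.interoperate(agents, env)
```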

The development of agent-based modeling

Agent-Based Modeling (ABM) is a method for simulating and studying complex systems in which each individual is treated as an "agent"; these agents interact and act according to a set of rules, and the behavior of the overall system emerges from them. Below is a brief overview of the development of Agent-Based Modeling:
1. Origins: The concept of Agent-Based Modeling can be traced back to the social sciences of the 1950s and 1960s, but real development began in the 1970s and 1980s.
2. Early exploration: In the 1980s, researchers began to use Agent-Based Modeling to simulate complex systems such as urban traffic, ecosystems, and economic markets. These models were usually based on simple rules and simulated interactions among individuals to exhibit the behavior of the system as a whole.
3. Development and application: With advances in computer technology, the development of Agent-Based Modeling accelerated rapidly. In the 1990s the method was applied widely in fields such as social science, ecology, economics, and epidemiology. Researchers began to use more complex models and agent behavior rules to simulate more realistic phenomena.
4. Interdisciplinary crossover: The development of Agent-Based Modeling has prompted researchers from different fields to collaborate on interdisciplinary research. The method allows a problem to be studied from multiple angles, leading to a deeper understanding of the behavior of complex systems.
5. Application of advanced techniques: As computer performance improved, researchers began to use more complex ... as the field continues to develop, one can expect more complex and more realistic Agent-Based Modeling in the future, to better understand and explain the behavior of complex systems.
In short, the development of Agent-Based Modeling has been a gradual process from early exploration to wide application and interdisciplinary crossover; by simulating the interactions among individual agents, it helps people better understand and study complex systems.

A feature selection method combining a Filter with an improved adaptive GA

Feature selection means selecting a representative feature subset from the original feature set in order to reduce data dimensionality while preserving the classification performance of the feature set.

Feature selection methods can be divided into three classes according to how the classifier participates in the selection process: filter, wrapper, and embedded methods.

Filter methods first filter the initial features and then train the model with the filtered features, so they are computationally cheap and easy to implement, but their classification accuracy is low. Wrapper methods need to train the classifier many times during feature selection, so their computational cost is usually much higher than that of filter methods, but their classification performance is better. Embedded methods perform feature selection automatically during classifier training; their classification performance is good, but parameter setting is complex and the time complexity is high.

The genetic algorithm (GA) is a population-based, iterative, meta-heuristic optimization algorithm: initialized individuals are processed through the algorithm's encoding technique and the basic genetic operators (selection, crossover, mutation, etc.), selection for reproduction is driven by individual fitness values, and after iteration the individual with the highest fitness is obtained [1].

Modeled on biological evolution, the genetic algorithm has good global search capability.

A feature selection method combining a Filter with an improved adaptive GA
Qiu Yunfei, Gao Huacong (School of Software, Liaoning Technical University, Huludao, Liaoning 125100, China)

Abstract: To address the curse of dimensionality and overfitting that arise in feature selection on high-dimensional, small-sample data, a feature selection method combining the Filter mode with the Wrapper mode (ReFS-AGA) is proposed.

The method combines the ReliefF algorithm with normalized mutual information to evaluate feature relevance and quickly screen important features. An improved adaptive genetic algorithm is then employed: an elitist strategy is introduced to balance feature diversity, and, with the joint objectives of minimizing the number of features and maximizing classification accuracy, a new evaluation function is designed with the number of selected features as a regulating term, so that the optimal feature subset is obtained efficiently during the iterative evolution. A sketch of such an evaluation function follows.
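As an illustration of this kind of evaluation function (the exact form and weights used by the authors are not reproduced here, so the formula below is an assumption in the spirit of the description: accuracy rewarded, feature count penalized):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def fitness(mask, X, y, w_acc=0.9):
    """Score a binary feature mask: high accuracy, few features.

    mask  : 0/1 vector over the n_total candidate features
    w_acc : weight of accuracy versus the feature-count penalty (assumed value)
    """
    idx = np.flatnonzero(mask)
    if idx.size == 0:
        return 0.0                               # empty subsets are worthless
    acc = cross_val_score(KNeighborsClassifier(), X[:, idx], y, cv=5).mean()
    penalty = idx.size / mask.size               # fraction of features kept
    return w_acc * acc + (1.0 - w_acc) * (1.0 - penalty)
```

A GA individual is then simply such a mask, and selection, crossover, and mutation proceed on the masks as usual.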

The simplified feature subsets are classified with different classification algorithms on gene expression data. The experimental results show that the method effectively eliminates irrelevant features and improves the efficiency of feature selection; compared with the ReliefF algorithm and the two-stage feature selection algorithm mRMR-GA, it attains the smallest feature-subset dimensionality while raising the average classification accuracy by 11.18 and 4.04 percentage points, respectively.

Multi-agent modeling and simulation (Zhang Fa)

Solving a system
Use the real system
Use a model of the system
Physical models
Mathematical models; simulation models
Why do we need simulation?
If the model is simple enough, it can be solved directly; if it is quite complex, simulation can be used to solve it.
Some hold that:
there are three basic approaches to scientific research:
experiment/empirical study, deduction (mathematics), simulation? Robert Axelrod,
"Advancing the Art of Simulation in the Social Sciences", 2005 update
Research approach
Reveal the micro-level mechanisms behind macro-level phenomena through simulation experiments.
Characteristics of multi-agent models
Agents have a degree of autonomy; agents are often heterogeneous; interactions among agents are flexible and varied; agent behavior is concurrent and asynchronous; there are no restrictions on the spatial topology.
Essential feature
The essential feature is building a conceptual model of the real system from a multi-agent perspective. Building the conceptual model:
Identify the micro-level individuals that make up the real system, then abstract them as autonomous agents, which form a multi-agent system through their interactions (a minimal sketch follows).
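A minimal Python sketch of this recipe, with illustrative names (not from the slides): individuals are identified, abstracted as autonomous agents with a local rule, and activated asynchronously so that a macro-level pattern can emerge from their interactions.

```python
import random

class Person:
    """A micro-level individual abstracted as an autonomous agent."""
    def __init__(self, opinion):
        self.opinion = opinion

    def step(self, neighbors):
        # simple local rule: adopt the shared opinion of two random neighbors
        sample = random.sample(neighbors, k=min(2, len(neighbors)))
        if sample and all(p.opinion == sample[0].opinion for p in sample):
            self.opinion = sample[0].opinion

agents = [Person(random.choice(["A", "B"])) for _ in range(100)]
for tick in range(50):                          # asynchronous, randomized activation
    for agent in random.sample(agents, len(agents)):
        agent.step([a for a in agents if a is not agent])

print(sum(a.opinion == "A" for a in agents), "agents hold opinion A")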
2009 National Graduate Summer School (Public Administration and Complexity Science)
Multi-Agent Modeling and Simulation
Zhang Fa
Richter2000@
Outline
What simulation is; simulation in the social sciences; foundations of multi-agent simulation; characteristics of multi-agent simulation; multi-agent simulation frameworks; applications of multi-agent simulation
How do people choose their seats?
Is there a macro-level pattern?
What causes it?
How can it be explored?
Simulation!
1. What is simulation?
Simulation is a method of solving systems.
Understanding the Agent
Agent (主体, 智能体, 代理): the term comes from distributed artificial intelligence and generally describes a self-contained computational entity that can perceive its environment and, to some degree, control its own behavior.
The weak notion of agency
Autonomy:
An agent has some degree of control over its own actions and internal state.
Social ability:

Agent-based modeling methods (lecture slides)
▪ 2) MAS can also play an important role in building and analyzing models and theories of interaction in human society.
▪ Since the idea of Agent-based modeling stems from the two basic driving forces above, the role of the Agent in modeling should be stressed again:
▪ 1) An Agent is an autonomous computational entity.
▪ 2) Intelligence means that the Agent operates flexibly and rationally in a changing environment, with the abilities of perception and effect.
▪ 5) Reactivity. An Agent can perceive its environment, which may be the physical world, a user operating a human-machine interface, or other Agents with which it interacts and communicates, and can react promptly to adapt to environmental changes.
▪ In some specific research fields, especially artificial intelligence, Agents are endowed with more advanced properties that better match the objects under study:
▪ 1) Rationality. An Agent has no mutually conflicting goals.
▪ Agent communication languages (ACL: Agent Communication Language), and so on.
▪ Because the Agent offers great research advantages and application prospects, since the 1990s it has become a frontier focus of computer science and artificial intelligence research; at the same time, many
fields have borrowed or adopted the concept for their own research. This chapter mainly introduces Agent-based modeling methods, the Swarm platform for Agent modeling and simulation, and application examples.
▪ 2) Honesty. An Agent does not deliberately spread false information.
▪ 3) Friendliness. An Agent always tries its best to satisfy the requests of other Agents.
2. Summary of properties:
▪ It can be seen that the properties of Agents often differ with the application field, which leads to different understandings or definitions of the Agent. However, autonomy is the core of the Agent concept. In practical applications, Agents are often divided into three types:
▪ Type Agent: describes a specific entity or a class of entities.
▪ Centralized service Agent (multi-Agent): provides a particular service or set of services to multiple Agents.

Mellanox Ethernet network device user manual

SOLUTION BRIEF

Delivering In-Memory Computing Using Mellanox Ethernet Infrastructure and MinIO's Object Storage Solution

EXECUTIVE SUMMARY

Analytic tools such as Spark, Presto and Hive are transforming how enterprises interact with and derive value from their data. Designed to be in memory, these computing and analytical frameworks process volumes of data 100x faster than Hadoop Map/Reduce and HDFS - transforming batch processing tasks into real-time analysis. These advancements have created new business models while accelerating the process of digital transformation for existing enterprises.

A critical component in this revolution is the performance of the networking and storage infrastructure that is deployed in support of these modern computing applications. Considering the volumes of data that must be ingested, stored, and analyzed, it quickly becomes evident that the storage architecture must be both highly performant and massively scalable.

This solution brief outlines how the promise of in-memory computing can be delivered using high-speed Mellanox Ethernet infrastructure and MinIO's ultra-high performance object storage solution.

MinIO and Mellanox: Better Together

High performance object storage requires the right server and networking components. With industry-leading performance combined with the best innovation to accelerate data infrastructure, Mellanox provides the networking foundation needed to connect in-memory computing applications with MinIO high performance object storage. Together, they allow in-memory compute applications to access and process large amounts of data to provide high speed business insights.

Simple to Deploy, Simpler to Manage

MinIO can be installed and configured within minutes simply by downloading a single binary and executing it. The number of configuration options and variations has been kept to a minimum, resulting in near-zero system administration tasks and few paths to failure. Upgrading MinIO is done with a single command which is non-disruptive and incurs zero downtime.

MinIO is distributed under the terms of the Apache* License Version 2.0 and is actively developed on GitHub. MinIO's development community starts with the MinIO engineering team and includes all of the 4,500 members of MinIO's Slack workspace. Since 2015 MinIO has gathered over 16K stars on GitHub, making it one of the top 25 Golang* projects based on number of stars.

IN-MEMORY COMPUTING

With data constantly flowing from multiple sources - log files, time series data, vehicles, sensors, and instruments - the compute infrastructure must constantly improve to analyze data in real time. In-memory computing applications, which load data into the memory of a cluster of servers thereby enabling parallel processing, are achieving speeds up to 100x faster than traditional Hadoop clusters that use MapReduce to analyze and HDFS to store data.

Although Hadoop was critical to helping enterprises understand the art of the possible in big data analytics, other applications such as Spark, Presto, Hive, H2O.ai, and Kafka have proven to be more effective and efficient tools for analyzing data. The reality of running large Hadoop clusters is one of immense complexity, requiring expensive administrators and a highly inefficient aggregation of compute and storage. This has driven the adoption of tools like Spark, which are simpler to use and take advantage of the massive benefits afforded by disaggregating storage and compute.
These solutions, based on low-cost, memory-dense compute nodes, allow developers to move analytic workloads into memory, where they execute faster, thereby enabling a new class of real-time analytical use cases.

These modern applications are built using cloud-native technologies and, in turn, use cloud-native storage. The emerging standard for both the public and private cloud, object storage is prized for its near-infinite scalability and simplicity, storing data in its native format while offering many of the same features as block or file. By pairing object storage with high-speed, high-bandwidth networking and robust compute, enterprises can achieve remarkable price/performance results.

DISAGGREGATE COMPUTE AND STORAGE

Designed in an era of slow 1GbE networks, Hadoop (MapReduce and HDFS) achieved its performance by moving compute tasks closer to the data. A Hadoop cluster often consists of many hundreds or thousands of server nodes that combine both compute and storage. The YARN scheduler first identifies where the data resides, then distributes the jobs to the specific HDFS nodes. This architecture can deliver performance, but at a high price, measured in low compute utilization, costs to manage, and costs associated with its complexity at scale. Also, in practice, enterprises don't experience high levels of data locality, with the result being suboptimal performance.

Due to improvements in storage and interconnect speeds, it has become possible to send and receive data remotely at high speeds with little (less than 1 microsecond) to no latency difference compared to local storage. As a result, it is now possible to separate storage from compute with no performance penalty. Data analysis is still possible in near real time because the interconnect between the storage and the compute is fast enough to support such demands.

By combining dense compute nodes, large amounts of RAM, ultra-high-speed networks and fast object storage, enterprises are able to disaggregate storage from compute, creating the flexibility to upgrade, replace, or add individual resources independently. This also allows for better planning for future growth, as compute and storage can be added independently and when necessary, improving utilization and budget control.

Multiple processing clusters can now share high performance object storage so that different types of processing, such as advanced queries, AI model training, and streaming data analysis, can run on their own independent clusters while sharing the same data stored on the object storage. The result is superior performance and vastly improved economics.

HIGH PERFORMANCE OBJECT STORAGE

With in-memory computing, it is now possible to process volumes of data much faster than with Hadoop Map/Reduce and HDFS. Supporting these applications requires a modern data infrastructure with a storage foundation that is able to provide both the performance required by these applications and the scalability to handle the immense volume of data created by the modern enterprise.

Building large clusters of storage is best done by combining simple building blocks together, an approach proven out by the hyper-scalers. By joining one cluster with many other clusters, MinIO can grow to provide a single, planet-wide global namespace.
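Applications typically reach MinIO through its Amazon S3-compatible API. A minimal sketch using the official `minio` Python client, where the endpoint, credentials, bucket, and object names are placeholders:

```python
from minio import Minio

# placeholder endpoint and credentials
client = Minio("minio.example.com:9000",
               access_key="ACCESS_KEY",
               secret_key="SECRET_KEY",
               secure=True)

# stream an object holding, e.g., a day of sensor logs
response = client.get_object("telemetry", "2019/03/sensors.parquet")
try:
    data = response.read()      # bytes; hand off to Spark, Presto, etc.
finally:
    response.close()
    response.release_conn()
```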
MinIO's object storage server has a wide range of optimized, enterprise-grade features, including erasure code and bitrot protection for data integrity, identity management, access management, WORM and encryption for data security, and continuous replication and lambda compute for dynamic, distributed data.

MinIO object storage is the only solution that provides throughput rates over 100 GB/sec and scales easily to store thousands of petabytes of data under a single namespace. MinIO runs Spark queries faster, captures streaming data more effectively, and shortens the time needed to test, train and deploy AI algorithms.

LATENCY AND THROUGHPUT

Industry-leading performance and IT efficiency combined with the best of open innovation assist in accelerating big data analytics workloads which require intensive processing. The Mellanox ConnectX® adapters reduce CPU overhead through advanced hardware-based stateless offloads and flow steering engines. This allows big data applications utilizing TCP or UDP over IP transport to achieve the highest throughput, allowing completion of heavier analytic workloads in less time for big data clusters, so organizations can unlock and efficiently scale data-driven insights while increasing application densities for their business.

Mellanox Spectrum® Open Ethernet switches feature consistently low latency and can support a variety of non-blocking, lossless fabric designs while delivering data at line-rate speeds. Spectrum switches can be deployed in a modern spine-leaf topology to efficiently and easily scale for future needs. Spectrum also delivers packet processing without buffer fairness concerns. The single shared buffer in Mellanox switches eliminates the need to manage port mapping and greatly simplifies deployment. In an object storage environment, fluid resource pools will greatly benefit from fair load balancing. As a result, Mellanox switches are able to deliver optimal and predictable network performance for data analytics workloads.

The Mellanox 25, 50 or 100G Ethernet adapters along with Spectrum switches result in an industry-leading end-to-end, high-bandwidth, low-latency Ethernet fabric. The combination of in-memory processing for applications and high-performance object storage from MinIO, along with the reduced latency and throughput improvements made possible by Mellanox interconnects, creates a modern data center infrastructure that provides a simple yet highly performant and scalable foundation for AI, ML, and Big Data workloads.

CONCLUSION

Advanced applications that use in-memory computing, such as Spark, Presto and Hive, are revealing business opportunities to act in real time on information pulled from large volumes of data. These applications are cloud native, which means they are designed to run on the computing resources in the cloud, a place where Hadoop HDFS is being replaced in favor of data infrastructures that disaggregate storage from compute. These applications now use object storage as the primary storage vehicle, whether running in the cloud or on-premises.

Employing Mellanox networking and MinIO object storage allows enterprises to disaggregate compute from storage, achieving both performance and scalability.
By connecting dense processing nodes to MinIO object storage nodes with high performance Mellanox networking, enterprises can deploy object storage solutions that provide throughput rates over 100 GB/sec and scale easily to store thousands of petabytes of data under a single namespace. The joint solution allows queries to run faster, captures streaming data more effectively, and shortens the time needed to test, train and deploy AI algorithms, effectively replacing existing Hadoop clusters with a data infrastructure solution, based on in-memory computing, that consumes a smaller data center footprint yet provides significantly more performance.

WANT TO LEARN MORE?

Click the link below to learn more about object storage from MinIO: https://min.io/

Follow the link below to learn more about the Mellanox end-to-end Ethernet storage fabric: /ethernet-storage-fabric/

MASON: a Java multi-agent simulation library

MASON: A JAVA MULTI-AGENT SIMULATION LIBRARY

G. C. BALAN, Department of Computer Science
C. CIOFFI-REVILLA, Center for Social Complexity*
S. LUKE, Department of Computer Science*
L. PANAIT, Department of Computer Science
S. PAUS, Department of Computer Science
George Mason University, Fairfax, VA

*Corresponding authors' address: Sean Luke, Department of Computer Science, George Mason University, 4400 University Drive MSN 4A5, Fairfax, VA 22030; e-mail: sean@. Claudio Cioffi-Revilla, Center for Social Complexity, George Mason University, 4400 University Drive MSN 3F4, Fairfax, VA 22030; e-mail: ccioffi@. All authors are listed alphabetically.

ABSTRACT

Agent-based modeling (ABM) has transformed social science research by allowing researchers to replicate or generate the emergence of empirically complex social phenomena from a set of relatively simple agent-based rules at the micro-level. Swarm, RePast, Ascape, and others currently provide simulation environments for ABM social science research. After Swarm — arguably the first widely used ABM simulator employed in the social sciences — subsequent simulators have sought to enhance available simulation tools and computational capabilities by providing additional functionalities and formal modeling facilities. Here we present MASON (Multi-Agent Simulator Of Neighborhoods), following in a similar tradition that seeks to enhance the power and diversity of the available scientific toolkit in computational social science. MASON is intended to provide a core of facilities useful not only to social science but to other agent-based modeling fields such as artificial intelligence and robotics. We believe this can foster useful "cross-pollination" between such diverse disciplines, and further that MASON's additional facilities will become increasingly important as social complexity simulation matures and grows into new approaches. We illustrate the new MASON simulation library with a replication of HeatBugs and a demonstration of MASON applied to two challenging case studies: ant-like foragers and micro-aerial agents. Other applications are also being developed. The HeatBugs replication and the two new applications provide an idea of MASON's potential for computational social science and artificial societies.

Keywords: MASON, agent-based modeling, multi-agent social simulation, ant foraging, aerial-vehicle flight

INTRODUCTION

Agent-based modeling (ABM) in the social sciences is a productive and innovative frontier for understanding complex social systems (Berry, Kiel, and Elliott 2002), because object-oriented programming from computer science allows social scientists to model social phenomena directly in terms of social entities and their interactions, in ways that are inaccessible either through statistical or mathematical modeling in closed form (Axelrod 1997; Axtell and Epstein 1996; Gilbert and Troitzsch 1999). The multi-agent simulation environments that have been developed in recent years are designed to meet the needs of a particular discipline; for example, simulators such as TeamBots (Balch 1998) and Player/Stage (Gerkey, Vaughan, and Howard 2003) emphasize robotics, StarLogo is geared towards education, breve (Klein 2002) aims at physics and a-life, and RePast, Ascape, and Swarm have traditionally emphasized social complexity scenarios with discrete or network-based environments.
Social science ABM applications based on environments in this final category are well-documented in earlier Proceedings of this conference (Macal and Sallach 2000; Sallach and Wolsko 2001) and have contributed substantial new knowledge in numerous domains of the social sciences, including anthropology (hunter-gatherer societies and prehistory), economics (finance), sociology (organizations and collective behavior), political science (government and conflict), and linguistics (emergence of language), to name a few examples.

In this paper we present MASON, a new "Multi-Agent Simulator Of Neighborhoods" developed at George Mason University as a joint collaborative project between the Department of Computer Science's Evolutionary Computation Laboratory (Luke) and the Center for Social Complexity (Cioffi-Revilla). MASON seeks to continue the tradition of improvements and innovations initiated by Swarm, but as a more general system it can also support core simulation computations outside the human and social domain in a strict sense. More specifically, MASON is a general-purpose, single-process, discrete-event simulation library intended to support diverse multiagent models across the social and other sciences, artificial intelligence, and robotics, ranging from 3D continuous models, to social complexity networks, to discretized foraging algorithms based on evolutionary computation (EC). MASON is of special interest to the social sciences and the social insect algorithm community because one of its primary design goals is to support very large numbers of agents efficiently. As such, MASON is faster than scripted systems such as StarLogo or breve, while still remaining portable and producing guaranteed replicable results. Another MASON design goal is to make it easy to build a wide variety of multi-agent simulation environments (for example, to test machine learning and artificial intelligence algorithms, or to cross-implement for validation purposes), rather than provide a domain-specific framework.

This paper contains three general sections. The first section describes the new MASON environment in greater detail, including its motivation, main features, and modules. The second section argues for MASON's applicability to social complexity simulation, including a comparison with RePast and a simple case-study replication of HeatBugs (a common Swarm-inspired ABM widely familiar to computational social scientists). The third section presents two further case studies of MASON applied to areas somewhat outside of the computational social science realm, but which point, we think, in directions which will be of interest to the field in the future. We conclude with a brief summary.

MASON

Motivation: Why MASON? History and Justification

MASON originated as a small library for a very wide range of multi-agent simulation needs ranging from robotics to game agents to social and physical models. The impetus for this stemmed from the needs of the original architects of the system (S. Luke, G. C. Balan, and L. Panait) being computer scientists specializing in artificial intelligence, machine learning, and multi-agent behaviors. We needed a system in which to apply these methods to a wide variety of multi-agent problems.

Figures 1 and 2: MASON layers and checkpointing architecture (layer stack: MASON model library and utilities; optional MASON GUI tools; domain-specific simulation library and tools; applications).
Previously, various robotics and social agent simulators were used for this purpose (notably TeamBots); but domain-specific simulators tend to be complex, and can lead to unexpected bugs if modified for use in domains for which they are not designed. Our approach in MASON is to provide the intersection of features needed for most multiagent problem domains, rather than the union of them, and to make it as easy as possible for the designer to add additional domain functionality. We think this "additive" approach to simulation development is less prone to problems than the "subtractive" method of modifying an existing domain-specific simulation environment. As such, MASON is intentionally simple but highly flexible.

Machine learning methods, optimization, and other techniques are also expensive, requiring a large number of simulation runs to achieve good results. Thus we needed a system that ran efficiently on back-end machines (such as Beowulf clusters), while the results were visualized, often in the middle of a run, on a front-end workstation. As simulations might take a long time, we further needed built-in checkpointing to disk so we could stop a simulation at any point and restart it later.

Last, our needs tended towards parallelism in the form of many simultaneous simulation runs, rather than one large simulation spread across multiple machines. Thus MASON is a single-process library intended to run on one machine at a time.

While MASON was not conceived originally for the social agents community, we believe it will prove a useful tool for social agent simulation designers, especially as computational social science matures and grows into new approaches that require functionalities such as those implemented by the MASON environment. MASON's basic functionality has considerable overlap with Ascape and RePast, partially to facilitate new applications as well as replications of earlier models in Swarm, RePast, or Ascape; indeed we think that developers used to these simulators will find MASON's architecture strikingly familiar. Finally, MASON is motivated by simulation-results replicability as an essential tool in advancing computationally-based claims (Cioffi-Revilla 2002), similar to the role of replication in empirical studies (Altman et al. 2001).

Figure 2 (detail): models may be checkpointed to disk and recovered, running either on a back-end platform or under visualization on the user's platform.

Features

MASON was conceived as a core library around which one might build a domain-specific custom simulation library, rather than as a full-fledged simulation environment. Such custom simulation library "flavors" might include robotics simulation library tools, graphics and physical modeling tools, or interactive simulator environments. However, MASON provides enough simulation tools that it is quite usable as a basic "vanilla" flavor library in and of itself; indeed, the applications described later in this paper use plain MASON rather than any particular simulator flavor wrapped around it.

In order to achieve the flavors concept, MASON is highly modular, with an explicit layered architecture: inner layers have no ties to outer layers whatsoever, and outer layers may be completely removed. In some cases, outer layers can be removed or added to the simulation dynamically during a simulation run.
Despite its Java roots, MASON is also intended to be fast, particularly when running without visualization. The core model library encourages direct manipulation of model data, is designed to avoid thread synchronization wherever possible, has carefully tuned visualization facilities, and is built on top of a set of utility classes optimized for modern Java virtual machines.[1] Additionally, while MASON is a single-process, discrete-event library, it still permits multithreaded execution in certain circumstances, primarily to parallelize expensive operations in a given simulation.

[1] One efficiency optimization not settled yet is whether to use Java-standard multidimensional arrays or to use so-called “linearized” array classes (such as used in RePast). MASON has been implemented with both of them for testing purposes. In tight-loop microbenchmarks, linearized arrays are somewhat faster; but in full MASON simulation applications, Java arrays appear to be significantly faster. This is likely due to a loss in cache and basic-block optimization in real applications as opposed to simple microbenchmarks. We are still investigating this issue.

FIGURE 3 MASON Utilities, Model, and Visualization Layers

The Model and Utilities Layers

MASON's model layer, shown in Figure 3, consists of two parts: fields and a discrete-event schedule. Fields store arbitrary objects and relate them to locations in some spatial neighborhood. Objects are free to belong to multiple fields or, in some cases, to the same field multiple times. The schedule represents time, and permits agents to perform actions in the future. A basic simulation model typically consists of one or more fields, a schedule, and user-defined auxiliary objects.

There is some discrepancy in the use of the term agents between the social sciences and computer science fields. When we speak of agents, we refer to entities that can manipulate the world in some way: they are brains rather than bodies. Agents are very often embodied — physically located in fields along with other objects — but are not required to be so.
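As a concrete illustration of the model layer, the sketch below shows the approximate shape of a minimal MASON model: a SimState subclass holding a field and a schedule, and a Steppable agent that the schedule steps repeatedly. The package and class names (sim.engine.SimState, sim.engine.Steppable, sim.field.grid.SparseGrid2D) follow MASON's published layout, but constructor signatures have varied across versions, and the Walkers example itself is ours, not from the paper; read it as an indicative sketch rather than canonical usage.

    import sim.engine.SimState;
    import sim.engine.Steppable;
    import sim.field.grid.SparseGrid2D;
    import sim.util.Int2D;

    // An agent (a "brain") that moves its body randomly on a grid when stepped.
    class Walker implements Steppable {
        public void step(SimState state) {
            Walkers model = (Walkers) state;
            Int2D here = model.grid.getObjectLocation(this);
            int x = model.grid.stx(here.x + state.random.nextInt(3) - 1); // toroidal wrap
            int y = model.grid.sty(here.y + state.random.nextInt(3) - 1);
            model.grid.setObjectLocation(this, x, y); // fields map objects to locations
        }
    }

    // The model proper: one field plus the schedule, and no GUI code at all.
    public class Walkers extends SimState {
        public SparseGrid2D grid = new SparseGrid2D(100, 100);

        public Walkers(long seed) { super(seed); }

        public void start() {
            super.start();
            for (int i = 0; i < 100; i++) {
                Walker w = new Walker();
                grid.setObjectLocation(w, random.nextInt(100), random.nextInt(100));
                schedule.scheduleRepeating(w); // the schedule represents time
            }
        }
    }

Because the model references no GUI classes whatsoever, a class like this can run headless, be checkpointed, or be handed to a visualization front-end unchanged.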
The model layer comes with fields providing the following spatial relationships, but others can be created easily:

• Bounded and toroidal discrete grids in 2D and in 3D for integers, doubles, and arbitrary Objects (one integer/double/Object per grid location)
• Bounded and toroidal hexagonal grids in 2D for integers, doubles, and arbitrary Objects (one integer/double/Object per grid location)
• Efficient sparse bounded, unbounded, and toroidal discrete grids in 2D and 3D (mapping zero or more Objects to a given grid location)
• Efficient sparse bounded, unbounded, and toroidal continuous space in 2D and 3D (mapping zero or more Objects to a real-valued location in space)
• Binary directed graphs or networks (a set of Objects plus an arbitrary binary relation)

(Figure 3 diagram: the simulation model pairs a discrete-event schedule, which represents time and holds agents, with fields, which represent space and hold arbitrary objects; both build on the utilities. The visualization and GUI tools comprise controllers that manipulate the schedule, 2D and 3D displays, and 2D and 3D portrayals that draw fields and the objects they hold, with checkpoints passing to and from disk.)

The model layer does not contain any visualization or GUI code at all, and it can be run by itself along with certain classes in the utilities layer. The utilities layer consists of Java classes free of simulation-specific function. Such classes include bags (highly optimized Java Collection subclasses designed to permit direct access to int, double, and Object array data), immutable 2D and 3D vectors, and a highly efficient implementation of the Mersenne Twister random number generator.

The Visualization Layer

As noted earlier, MASON simulations may operate with or without a GUI, and can switch between the two modes in the middle of a simulation run. To achieve this, the model layer is kept completely separate from the visualization layer. When operated without a GUI, the model layer runs in the main Java thread as an ordinary Java application. When run with a GUI, the model layer is kept essentially in its own “sandbox”: it runs in its own thread, with no relationship to the GUI, and can be swapped in and out at any time. Besides the checkpointing advantages described earlier, another important and desirable benefit of MASON's separation of model from visualization is that the same model objects may be visualized in radically different ways at the same time (in both 2D and 3D, for example). The visualization layer, and its relationship to the model layer, is shown in Figure 3.

To perform this feat of separation, the GUI manages its own separate auxiliary schedule, tied to the underlying schedule, in which it queues visualization agents that update the GUI displays. The schedule and auxiliary schedule are stepped through a controller in charge of running the simulation. The GUI also displays and manipulates the model not directly but through portrayals, which act as proxies for the objects and fields in the model layer. Objects in the model proper may act as their own portrayals but do not have to.

The portrayal architecture is divided into field portrayals, which portray fields in the model, and simple portrayals, which are stored in a field portrayal and used to portray the individual objects in that field portrayal's underlying field. Field portrayals are, in turn, attached to a display, which provides a GUI environment for them to draw and manipulate their fields and field objects.
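The sketch below suggests how this separation might look in code: a GUIState-style wrapper attaches a display and a field portrayal to the untouched Walkers model from the earlier sketch. The class names (sim.display.GUIState, sim.display.Display2D, sim.display.Console, sim.portrayal.grid.SparseGridPortrayal2D) follow MASON's package layout, but constructor arguments and setup details have changed between versions, so this is a hedged outline rather than exact library usage.

    import sim.display.Console;
    import sim.display.Controller;
    import sim.display.Display2D;
    import sim.display.GUIState;
    import sim.portrayal.grid.SparseGridPortrayal2D;

    // A visualization wrapper; the model underneath stays unchanged.
    public class WalkersWithUI extends GUIState {
        Display2D display;
        SparseGridPortrayal2D gridPortrayal = new SparseGridPortrayal2D();

        public WalkersWithUI() { super(new Walkers(System.currentTimeMillis())); }

        public void init(Controller c) {
            super.init(c);
            display = new Display2D(400, 400, this); // arguments vary by version
            display.attach(gridPortrayal, "Walkers"); // field portrayal on a display
            c.registerFrame(display.createFrame());
        }

        public void start() {
            super.start();
            // The portrayal acts as a proxy for the model's field; the model
            // itself knows nothing about being visualized.
            gridPortrayal.setField(((Walkers) state).grid);
            display.reset();
            display.repaint();
        }

        public static void main(String[] args) {
            new Console(new WalkersWithUI()).setVisible(true); // controller runs the schedules
        }
    }

Nothing in the Walkers model changed to support this; a 3D display, or a second differently-styled 2D display, could be attached to the same running model in the same way.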
Portrayals can also provide auxiliary objects known as inspectors (approximately equivalent to “probes” in RePast and Swarm) that permit the examination and manipulation of basic model data. MASON provides displays and portrayals for both 2D and 3D space, and can display all of its provided fields in 2D and 3D, including displaying certain 2D fields in 3D. 2D portrayals are displayed using AWT and Java2D graphics primitives. 3D portrayals are displayed using the Java3D scene graph library. Examples of these portrayals are shown in Figure 4.

FIGURE 4 Sample Field Portrayals (Applications in Parentheses). a. Discrete 2D Grids (Ant Foraging). b. Hexagonal 2D Grids (Hexagonal HeatBugs). c. Continuous 2D Space (“Woims”: Flocking Worms). d. Networks in 2D (A Network Test). e. Continuous 3D Space (3D “Woims”). f. Discrete 2D Grids in 3D Space (HeatBugs)

APPLICABILITY TO SOCIAL COMPLEXITY ENVIRONMENTS

MASON was designed with an eye towards social agent models, and we think that social science experimenters will find it valuable. MASON shares many core features with social agent simulators such as Swarm, Ascape, and RePast. In this section we specify the primary differences between MASON and RePast, followed by a simple example of MASON used to simulate the well-known HeatBugs model.

Comparison with RePast

Here we provide a brief enumeration of most of the differences between the facilities provided by MASON and those of RePast, the latter of which evolved from Swarm to model situated social agents.

Differences

• MASON provides a full division between model and visualization. One consequence of this key difference is that MASON can separate or join the two at any time and provide cross-platform checkpointing in an easy fashion. Another consequence is that MASON objects and fields can be portrayed in very different ways at the same time, and visualization methods may change even during an expensive simulation run.
• MASON has facilities for 3D models and other visualization capabilities that remain largely unexplored in the social science realm, but which are potentially insightful for social science ABM simulations.
• In our experience, MASON has generally faster models and visualization than RePast, especially on Mac OS X, and has more memory-efficient sparse and continuous fields. MASON's model data structures also have computational complexity advantages.
• MASON has a clean, unified way of handling network and continuous field visualization.
• RePast provides many facilities, notably: GIS, Excel import/export, charts and graphs, and SimBuilder and related tools. Due to its design philosophy, MASON does not include these facilities. We believe they are better provided as separate packages rather than bundled. Further, we believe that many of these tools can be easily ported to MASON.

Differences in Flux: MASON Will Likely Change These Features in its Final Version

• RePast uses linearized array classes for multidimensional arrays. MASON presently has facilities for both linearized arrays and true Java arrays, but may reduce to using one or the other.
• RePast's schedule uses doubles, while MASON's schedule presently uses longs.
• RePast allows objects to be selected and moved by the mouse.
• RePast allows deep-inspection of objects; MASON's inspection is presently shallow.

Replicating HeatBugs

HeatBugs is arguably the best-known ABM simulation introduced by Swarm, and is a standard demonstration application in RePast as well.
It contains basic features common to a great many social agent simulations: for example, a discrete environment defining neighborhood relationships among agents, residual effects (heat) of agents, and interactions among them. We feel that the ability to replicate models like HeatBugs, Sugarscape, Conway's Game of Life (or other cellular automata), and Schelling's segregation model in a new computational ABM environment should be as essential as the ability to implement regression, factor analysis, ANOVA, and similar basic data facilities in a statistical analysis environment.

Indeed, a 100x100 toroidal world, 100-agent HeatBugs model was MASON's very first application. In addition to this classical HeatBugs model, we have implemented several other HeatBugs examples. Figure 4 includes partial screenshots of two of them: Figure 4b shows HeatBugs on a hexagonal grid (fittingly called “HexaBugs” in RePast), and Figure 4f shows 2D HeatBugs visualized in 3D space, where vertical scale indicates temperature, and HeatBugs on the same square are shown stacked vertically as well. Whereas the original HeatBugs is based on a 2D grid of interacting square cells (connected by Moore or von Neumann neighborhoods), HexaBugs is more relevant in some areas of computational social science where hexagonal cells are more natural (e.g., computational political science, especially international relations) and four-corner situations are rare or nonexistent (Cioffi-Revilla and Gotts 2003).

CASE STUDIES

Although MASON has existed for only six months, we have already used it in a variety of research and educational contexts. Additionally, we are conducting tests in which RePast, Swarm, and Ascape models are ported to MASON by modelers not immediately familiar with MASON. These ports include a model of warfare among countries, a model of land use in a geographic region, and a model of the spread of anthrax in the human body.

In this section we describe the implementation and results of two research projects that used the MASON simulation library. The first case study used MASON to discover new ant-colony foraging and optimization algorithms. The second case study applied MASON to the development of evolved micro-aerial vehicle flight behaviors. These are not computational social science models per se, but they are relevant enough to prove illuminating. The first case study uses a model that is similar to the discrete ABM models presently used, but it is applied to an automated learning method, demonstrating the automated application of large numbers of simulations in parallel. The second case study uses a continuous 2D environment for its domain and interactions, which we think points to one future area of ABM research. Neither of these more advanced applications is implemented in Swarm, RePast, or Ascape at present, and both take advantage of features special to MASON. In both cases experiments were conducted by running MASON on the command line on several back-end machines, and progress was analyzed by attaching the simulators to visualization tools on a front-end workstation. Additionally, the second case involves a continuous field that is scalable and both memory- and time-efficient (both O(#agents), rather than O(spatial area)).
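The back-end usage just described, running headless from the command line and attaching visualization later, might look like the following, using the doLoop-style batch entry point found in MASON distributions. The exact entry point in any given version is an assumption on our part, and Walkers is the hypothetical model sketched earlier.

    import sim.engine.SimState;

    // Headless batch entry point: runs the model from the command line with no
    // GUI classes loaded, suitable for Beowulf-style back-end machines.
    public class WalkersBatch {
        public static void main(String[] args) {
            SimState.doLoop(Walkers.class, args); // seed, run length, etc. via args
            System.exit(0);
        }
    }

A run checkpointed on a cluster node this way can later be opened under the visualization wrapper on a workstation, as Figure 2 illustrates.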
Both of the projects described below have an evolutionary computation (EC) component; to save repetition, we give a quick explanation of evolutionary computation here. EC is a family of stochastic search and optimization techniques for “hard” problems for which there is no known procedural optimization or solution-discovery method. EC is of special interest to certain multiagent fields because it is agent-oriented: it operates not by modifying a single candidate solution, but by testing a “population” of such solutions all at one time. Such candidate solutions are known as “individuals”, and each individual's assessed quality is known as its “fitness”. The general EC algorithm is as follows. First, an initial population of randomly-generated individuals is created and each individual's fitness is assessed. Then a new population of individuals (the next generation) is assembled through an iterative process of stochastically selecting individuals (tending to select the fitter ones), copying them, then breeding the copies (mixing and matching individuals' components and mutating them), and placing the results into the next generation. The new generation replaces the old generation; its individuals' fitnesses are in turn assessed, and the cycle continues. EC ends when a sufficiently fit individual is discovered, or when resources (notably time) expire. The most famous example of EC is the genetic algorithm (GA) (Holland 1975), but other versions exist as well; we will discuss genetic programming (GP) (Koza 1992) as one alternative EC method below.

Ant Foraging

Ant foraging models attempt to explain how ant colonies discover food sources, then communicate those discoveries to other ants through the use of pheromone trails — leaving proverbial “bread crumbs” to mark the way. This area has become popular not just in biology but, curiously, in artificial intelligence and machine learning as well, because pheromone-based communication has proven an effective abstract notion for new optimization algorithms (known collectively as ant colony optimization) and for cooperative robotics.

Previous ant foraging models have to date relied to some degree on a priori knowledge of the environment, in the form of explicit gradients generated by the nest, by hard-coding the nest location in an easily-discoverable place, or by imbuing the ants with knowledge of the nest direction. In contrast, the case study presented here solves ant foraging problems using two pheromones, one applied when leaving the nest and one applied when returning to the nest. The resulting algorithm is orthogonal, simple, and biologically plausible, yet ants are able to establish increasingly efficient trails from the nest to the food even in the presence of obstacles.

Ants are sensitive to one of the two pheromones at any given time; the sensitivity depends on whether they are foraging or carrying food. While foraging, an ant will stochastically move in the direction of increasing food pheromone concentration, and will deposit some amount of nest pheromone. If there is already more nest pheromone than the desired level, the ant deposits nothing. Otherwise, the ant “tops off” the pheromone value in the area to the desired level. As the ant wanders from the nest, its desired level of nest pheromone drops. This decrease in deposited pheromone establishes an effective gradient. When the ant is instead carrying food, the movement and pheromone-laying procedures use the opposite pheromones from those used during foraging.
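The “top-off” deposition rule lends itself to a very small piece of code. The sketch below expresses it over a MASON-style grid of doubles; the field class (sim.field.grid.DoubleGrid2D, which exposes its array for the direct data manipulation MASON encourages) is MASON's, while the ant's state and the decay constant are hypothetical stand-ins for the actual case-study implementation.

    import sim.field.grid.DoubleGrid2D;

    // Sketch of the nest-pheromone "top-off" rule described above.
    class ForagingAnt {
        double desiredNestPheromone = 1.0; // drops as the ant wanders from the nest
        static final double DECAY = 0.99;  // hypothetical per-step decay factor

        void depositNestPheromone(DoubleGrid2D nestPheromone, int x, int y) {
            if (nestPheromone.field[x][y] < desiredNestPheromone)
                nestPheromone.field[x][y] = desiredNestPheromone; // top off, never add
            // otherwise there is already enough pheromone: deposit nothing
            desiredNestPheromone *= DECAY; // falling target establishes the gradient
        }
    }

Topping off to a falling desired level, rather than adding fixed increments, is what makes the deposited surface decrease with distance from the nest and thus act as a gradient.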
The model assumes a maximum number of ants per location in space. At each time step, an ant will move to its best choice among non-full, non-obstacle locations; the decision is made stochastically, with probabilities correlated to the amounts of pheromones in the nearby locations. Ants move in random order. Ants live for 500 time steps; a new ant is born at the nest each time step unless the total number of ants is at its limit. Pheromones both evaporate and diffuse in the environment.

Figure 4a shows a partial screenshot of a small portion of the ant colony foraging environment. The ants have laid down a path from the nest to the food and back again. Part of the ground is colored with pheromones. The large oval regions are obstacles. The MASON implementation was done with two discrete grids of doubles (the two pheromone values), discrete grids of obstacles, food sources, and ant nests, and a sparse discrete grid holding the ants proper. Each ant is also an agent (and so is scheduled at each time step to move itself). Additional agents are responsible for the evaporation and diffusion of pheromones in the environment, and also for creating new ants when necessary.

In addition to the successful design of hard-coded ant foraging behaviors, we also experimented with letting the computer search for and optimize those behaviors on its own. For this purpose, we connected MASON to the ECJ evolutionary computation system (Luke 2002) (/~eclab). ECJ handled the main evolutionary loop: an individual took the form of a set of ant behaviors that was applied to each ant in the colony. To evaluate an individual, ECJ spawned a MASON simulation with the specified ant behaviors. The simulation was run.
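The coupling just described, with ECJ driving the evolutionary loop and MASON scoring each candidate, can be pictured with the following hedged sketch. The schedule-stepping calls are MASON-style; the AntForaging model, AntBehavior individual, and foodCollected measure are hypothetical names standing in for the actual case-study code, and ECJ's real evaluation interface is more involved than a single static method.

    // Hypothetical fitness function bridging ECJ and MASON: each candidate
    // behavior set is dropped into a fresh simulation, the simulation is
    // stepped for a fixed budget, and the food retrieved becomes the fitness.
    public class AntEvaluator {
        static final int MAX_STEPS = 5000; // hypothetical evaluation budget

        public static double evaluate(AntBehavior candidate, long seed) {
            AntForaging sim = new AntForaging(seed, candidate); // hypothetical model
            sim.start();                            // builds fields, schedules the ants
            for (int step = 0; step < MAX_STEPS; step++)
                if (!sim.schedule.step(sim)) break; // advance the discrete-event schedule
            double fitness = sim.foodCollected();   // hypothetical fitness measure
            sim.finish();
            return fitness;
        }
    }

Because MASON models are plain single-process Java objects, many such evaluations can run at once across back-end machines, which is exactly the many-simultaneous-runs style of parallelism described earlier.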

高二英语科技词汇单选题70题(答案解析)

高二英语科技词汇单选题70题(答案解析)

Using Agent-Based Modeling in the Exploration of Self-Organizing Neural Networks

Timothy Schoenharl, Greg Madey
Department of Computer Science and Engineering
University of Notre Dame
Notre Dame, IN 46556 USA

Abstract

In this paper we leverage the power of agent-based modeling to explore a novel self-organizing neural network topology. We have drawn inspiration from recent research into complex networks and advances in neurobiology, and applied it towards the construction of a neural network. Techniques from agent-based modeling are used to simplify the construction process and provide flexibility in modifying the simulation. The experiment also implements ideas from swarm programming, using local information to develop global structure. We demonstrate our simulation, modeled using the RePast framework.

1 Introduction

1.1 Complex Networks

In recent years there have been significant developments in the analysis of complex networks. The results of Watts and Strogatz [1] and Barabási et al. [2][3] are well known and often cited. Researchers have discovered that many complex networks, such as the World Wide Web [3], the neural network of the nematode C. elegans [1] and the actor collaboration network [4][5], share important characteristics. These networks do not have a regular structure; however, they are not completely random either [1].

Watts and Strogatz, in [1], analyze the actor collaboration network, the neural network of C. elegans and the power grid of the western United States. They show that these diverse networks display similar characteristics, namely short characteristic path length and high average clustering coefficients. Characteristic path length can be described simply as "the typical distance between every vertex and every other vertex" [4]. The average clustering coefficient refers to the average level of interconnectedness of the neighborhood around each node [4]. For a more formal definition of these terms, readers are directed to [4].

Graphs displaying both high clustering coefficients and short characteristic path length have been referred to as "small-world" graphs. Some of the benefits of small-world graphs include "enhanced signal-propagation speed [and] computational power" [1]. Additionally, a high clustering coefficient in a neural network can be interpreted as a large number of lateral connections. According to [6], lateral connections in a neural network can aid in pattern enhancement.

Barabási et al. have analyzed numerous natural and artificial networks, including cellular metabolic networks [7], the actor collaboration network [5] and the power grid of the western United States [5]. Barabási et al. highlight a characteristic that differentiates these graphs from random graphs: a "scale-free" link distribution [2]. Networks with a "scale-free" link distribution have a small diameter and a high tolerance for random node failure [8]. Diameter is defined as "the average length of the shortest path between any two nodes in the network" [8]. A tolerance for random failures is important in natural neural networks, since neurons do not regenerate like other cells [9].

It is important to note that Barabási et al. describe an algorithm for constructing a "scale-free" network. This algorithm relies on growing node by node with preferential attachment [5]. We take Barabási's algorithm as inspiration for our network organization method, but we modify it so that only local information is used.
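Both metrics discussed above can be computed directly from an adjacency structure. The following Java sketch is ours, not code from the paper, and the class and method names are illustrative; it computes the average clustering coefficient of an undirected graph following the definition from [4] cited above, assuming the adjacency map is well formed (every neighbor also appears as a key).

```java
import java.util.*;

// Illustrative sketch (not from the paper): average clustering coefficient
// of an undirected graph stored as adjacency sets.
public class GraphMetrics {

    // graph.get(v) holds the set of neighbors of vertex v.
    public static double averageClustering(Map<Integer, Set<Integer>> graph) {
        double sum = 0.0;
        for (Map.Entry<Integer, Set<Integer>> entry : graph.entrySet()) {
            List<Integer> nbrs = new ArrayList<>(entry.getValue());
            int k = nbrs.size();
            if (k < 2) continue;   // a vertex with fewer than 2 neighbors contributes 0
            int links = 0;         // count edges among the neighbors of v
            for (int i = 0; i < k; i++)
                for (int j = i + 1; j < k; j++)
                    if (graph.get(nbrs.get(i)).contains(nbrs.get(j)))
                        links++;
            // local clustering: realized neighbor links over the k(k-1)/2 possible
            sum += (2.0 * links) / (k * (k - 1));
        }
        return sum / graph.size(); // average over all vertices
    }
}
```

Characteristic path length can be obtained in the same setting by averaging shortest-path lengths from a breadth-first search rooted at every vertex.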
1.2 Artificial Neural Networks

Artificial neural network research began in 1943 with the work of McCulloch and Pitts [10] (as cited in [6]). In subsequent years, a major focus of artificial neural network (ANN) research has been on the development of successively more accurate models of biological neurons [11]. One of the most basic designs is the perceptron; a more advanced design is the "spiking" neuron [11]. The spiking design more accurately captures the behavior of biological neurons, which can lead to more powerful artificial neural networks [12].

There has also been significant work done on the topology of artificial neural networks. Current designs range from feed-forward networks to more complex topologies such as Adaptive Resonance Theory (ART) [9]. Most designs of neural network require that the network topology be defined offline, before training begins. It is known that the topology affects the computational ability of the ANN [9]; thus it is vital that the ANN have a suitable topology. There is a class of ANN that develops its own topology: the self-organizing map (SOM) [9]. These maps do self-organize, but the mechanism involved in their organization is not considered to be biologically plausible [9], suggesting that there is room for alternate designs.
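Since the model in Section 2 is built from simple perceptrons, it is worth fixing what such a unit computes: a weighted sum of its inputs compared against a threshold. A minimal illustrative sketch follows (ours, not the paper's code; the names and the binary output convention are assumptions).

```java
// Illustrative perceptron-style unit: fires when the weighted sum of its
// inputs reaches a fixed threshold.
public class Perceptron {
    private final double[] weights;
    private final double threshold;

    public Perceptron(double[] weights, double threshold) {
        this.weights = weights.clone();
        this.threshold = threshold;
    }

    // Returns 1 (fires) when the summed weighted input reaches the threshold.
    public int respond(double[] inputs) {
        double sum = 0.0;
        for (int k = 0; k < weights.length; k++)
            sum += weights[k] * inputs[k];
        return sum >= threshold ? 1 : 0;
    }
}
```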
1.3 Biological Neural Networks

According to [13], human DNA lacks the storage capability to contain an exact plan of the wiring of the human brain. Thus, on some scale, the brain must self-organize. Although the large-scale (above 1 cm) structure of the brain is largely deterministic, the small-scale (below 1 mm) structure of the brain appears to be random [13]. However, given the human capacity for knowledge and memory, it is obvious that the human neural network must possess some form of organization.

In the early stages of development, from before birth until puberty, the mammalian brain is subjected to a massive loss of neurons, axons and synapses [14]. Approximately half of the neurons and about one third of the synapses will die off before adolescence. The neurons, axons and synapses do not die off randomly; instead, it appears that the selection process is related to Hebbian learning [14]. Chechik and Meilijson [15] suggest that neuronal regulation, a biological mechanism that maintains post-synaptic membrane potential, may play a part in the synaptic pruning process.

1.4 Agent Based Modeling, Swarm Simulation

This simulation differs from the computational neural networks described above, as well as from the simulations used by neurobiologists to study neurobiological mechanisms. By modeling neurons as agents, we can add more complex behaviors in addition to response functions. The neurons in our simulation conduct evaluation of their links and control the pruning process based on local criteria. The interaction of the neurons through signaling enables evaluation of fitness, allowing neurons to make decisions affecting global structure. The neural network topology thus emerges as the result of these local interactions. Central to this emergence is the utilization of feedback.

An agent-based approach offers gains in terms of modeling neuron complexity over other approaches, such as stochastic simulations. Our simulation utilizes the RePast framework, which has a sophisticated discrete-time event simulator [19] and several network analysis tools. It would be difficult to get the same functionality from a neural net simulation written purely in Java; this would require the replication of a scheduling mechanism, if not an entire library of support code. Utilizing an ABM framework allows us to concentrate on details that are important to our investigation. RePast is written entirely in Java, allowing us to develop and deploy the simulation on heterogeneous hardware (PowerPC, SPARC, Intel) and operating systems (MacOS X, Solaris, Linux).

2 Simulation

2.1 Hebbian Learning and Neuronal Regulation

Hebb's postulate [16], as cited in [17], states:

    When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased.

Hebbian learning can be seen as a major mechanism in the selection of neurons and synapses during the early phase of development in biological neural networks [14]. See Figure 1 for a simple description of Hebbian learning. The Hebbian reinforcement of the connections between neurons may be mitigated by neuronal regulation [15][18]. Neuronal regulation, according to [15], is a biological process that acts to "maintain the homeostasis of the neuron's membrane potential." It is a method of maintaining the input level into a neuron in the event of a change in synaptic strength. Thus neuronal regulation can help to bound the effects of Hebbian learning.

Figure 1: A simplified diagram of the Hebbian learning process.

In our simulation, we implement a model of Hebbian learning as described in [17], as well as the model of neuronal regulation described in [15]. In [17], the authors describe a model of Hebbian learning for use in rate-based ANNs. We have interpreted their important equations in terms of our model, which uses simple perceptrons. The equation we use for the adjustment of link strength is as follows:

    Δ(w_ij) = −γ w_ij (1 − w_ij)(w_θ − w_ij)    (1)

with γ > 0 and 0 < w_θ < 1. The w_ij term is the link weight of the synapse from neuron i to neuron j.
The γ term is a scaling factor that varies the intensity of the adjustment. For values of w_ij < w_θ the overall adjustment is negative, and for values of w_ij > w_θ the overall adjustment is positive. This equation leads to long-term stability in learning, preventing learned patterns from being overwritten [17].

Our model of neuronal regulation comes from [15]. We utilize Chechik's method of first degrading synaptic weights and then multiplicatively strengthening them. We start with a post-synaptic neuron j, degrading all of its incoming synaptic weights. The degraded weights are determined by the following function:

    w_ij = w_ij − (w_ij)^α · η    (2)

where w is the synaptic strength, η is a Gaussian-distributed noise term with positive mean, and α is the degradation dimension, 0 ≤ α ≤ 1. After applying the degradation function we apply the neuronal regulation function:

    w_ij = w_ij · f_j^0 / f_j(t)    (3)

where f_j^0 denotes the baseline input activity of neuron j and f_j(t) its currently measured input activity.

2.2 The Neuron Model

A neuron that has fired becomes refractory and can fire again only after a certain amount of time. The length of a neuron's refractory period is set at the beginning of the simulation and is maintained throughout the simulation.

2.3 Initial Setup

We start by creating 100 neurons and randomly distributing them in a 500 by 500 grid. (Side note: we currently do not use location information, as neurons only signal via their synapses. It is known that neurons also signal through the release of chemical signals into the medium, thus in future iterations of the simulation we may want to enable location-based signaling.) The neurons are all initialized with the same threshold value, though this could easily be modified to be a random value. We iterate through the list of neurons; at each neuron we randomly attach it to a number of other neurons. The number of neurons to attach to is randomly chosen from a Gaussian distribution. We add a synapse to the target neuron, ensuring that there are no loops or multiple edges. We prohibit loops and multiple edges for simplicity and not because of any biological considerations. The synaptic strength is currently set to the same value for all axons; it would not be difficult to modify this so that the link strength is randomly chosen.
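As an illustration of this setup phase, the following sketch shows one way the wiring step could be coded. It is our reconstruction, not the authors' RePast code: the container types and the uniform initial weight are assumptions, while the population size comes from this section and the Gaussian out-degree parameters (µ = 15, σ = 1.5) from Section 3.1.

```java
import java.util.*;

// Illustrative reconstruction of the initial wiring step of Section 2.3.
public class NetworkBuilder {
    static final int N_NEURONS = 100;                    // Section 2.3
    static final double MEAN_DEG = 15.0, STD_DEG = 1.5;  // Section 3.1
    static final double INIT_WEIGHT = 0.5;               // assumed uniform strength

    // synapses.get(i) maps target neuron j to the link weight w_ij.
    public static List<Map<Integer, Double>> build(Random rng) {
        List<Map<Integer, Double>> synapses = new ArrayList<>();
        for (int i = 0; i < N_NEURONS; i++)
            synapses.add(new HashMap<>());
        for (int i = 0; i < N_NEURONS; i++) {
            // Gaussian-distributed number of outgoing synapses for neuron i
            int targets = (int) Math.round(MEAN_DEG + STD_DEG * rng.nextGaussian());
            while (synapses.get(i).size() < targets) {
                int j = rng.nextInt(N_NEURONS);
                if (j == i) continue;                        // no self-loops
                synapses.get(i).putIfAbsent(j, INIT_WEIGHT); // no multiple edges
            }
        }
        return synapses;
    }
}
```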
2.4 Running the Simulation

We start by selecting a fixed number of neurons at random to be the inputs into the network. Every time step we initiate a signal into a randomly selected subset of these neurons. The pulse propagates throughout the network, occasionally causing neurons to fire. Neuronal regulation occurs at regular intervals, whereas Hebbian learning occurs in conjunction with the firing events. Periodically, we iterate through the list of neurons, initiating a prune action. The prune action eliminates all axons that have a link strength below a certain value. These actions work together to create a self-organizing structure out of the initial random graph.

The competition among neurons and synapses can be seen as a measure of fitness. Neurons and synapses that are unfit will have low link strength (in the case of synapses) or few connections to the network (in the case of neurons). These unfit nodes are "detached" from the network. The detachment criterion is a simple threshold in terms of connection strength or number of links. This criterion is determined at the beginning of the simulation, and each neuron evaluates its attached synapses and itself. This local criterion could easily be varied to mimic a heterogeneous network.

Figure 2: A screen shot of the simulation.
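To make the interplay of stimulation, Hebbian learning, neuronal regulation and pruning concrete, here is a condensed sketch of one possible realization of this schedule. It is again our reconstruction rather than the authors' RePast code: every interval, threshold and noise constant is an assumption, the regulation step follows Eqs. (2)-(3) as reconstructed above with an assumed baseline input of 1.0, and the NetworkBuilder sketch from Section 2.3 is reused.

```java
import java.util.*;

// Illustrative reconstruction of the schedule in Section 2.4. All constants
// are assumptions; the update rules implement Eqs. (1)-(3).
public class SimulationLoop {
    static final int N = 100, STEPS = 250, N_INPUTS = 10, REFRACTORY = 3;
    static final double THRESHOLD = 1.0;                  // firing threshold
    static final double GAMMA = 0.1, W_THETA = 0.5;       // Eq. (1)
    static final double ALPHA = 0.8;                      // Eq. (2)
    static final double PRUNE_CUTOFF = 0.05;              // pruning threshold
    static final int NR_INTERVAL = 10, PRUNE_INTERVAL = 25;

    final Random rng = new Random(42);
    final List<Map<Integer, Double>> out = NetworkBuilder.build(rng); // N == N_NEURONS
    final double[] potential = new double[N];             // summed input per neuron
    final int[] refractoryUntil = new int[N];

    void run() {
        // Fixed set of input neurons, chosen once at random (Section 2.4).
        int[] inputs = rng.ints(0, N).distinct().limit(N_INPUTS).toArray();
        for (int t = 0; t < STEPS; t++) {
            for (int i : inputs)            // stimulate a random subset of the inputs
                if (rng.nextBoolean()) potential[i] += THRESHOLD;
            for (int i = 0; i < N; i++)
                if (potential[i] >= THRESHOLD && t >= refractoryUntil[i]) fire(i, t);
            if (t % NR_INTERVAL == 0) regulateAll();       // neuronal regulation
            if (t % PRUNE_INTERVAL == 0) pruneAll();       // synaptic pruning
        }
    }

    void fire(int i, int t) {
        potential[i] = 0.0;
        refractoryUntil[i] = t + REFRACTORY;               // refractory period, Sect. 2.2
        for (Map.Entry<Integer, Double> s : out.get(i).entrySet()) {
            potential[s.getKey()] += s.getValue();         // propagate the pulse
            double w = s.getValue();                       // Hebbian update, Eq. (1)
            s.setValue(w - GAMMA * w * (1.0 - w) * (W_THETA - w));
        }
    }

    void regulateAll() {
        // Degrade every weight (Eq. 2), then rescale each post-synaptic
        // neuron's incoming field towards an assumed baseline of 1.0 (Eq. 3).
        double[] field = new double[N];
        for (Map<Integer, Double> syns : out)
            for (Map.Entry<Integer, Double> s : syns.entrySet()) {
                double eta = 0.05 + 0.01 * rng.nextGaussian(); // positive-mean noise
                double w = s.getValue() - Math.pow(s.getValue(), ALPHA) * eta;
                s.setValue(Math.max(w, 0.0));
                field[s.getKey()] += s.getValue();         // total input into target j
            }
        for (Map<Integer, Double> syns : out)
            for (Map.Entry<Integer, Double> s : syns.entrySet())
                if (field[s.getKey()] > 0)
                    s.setValue(s.getValue() / field[s.getKey()]); // baseline f0 = 1.0
    }

    void pruneAll() {
        for (Map<Integer, Double> syns : out)
            syns.values().removeIf(w -> w < PRUNE_CUTOFF); // drop weak synapses
    }

    public static void main(String[] args) { new SimulationLoop().run(); }
}
```

Running this for 250 steps mirrors the setup reported in Section 3.1, though the emergent statistics will of course depend on the assumed constants.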
3 Results

Our simulation runs in either a graphical mode or a GUI-free batch mode. When in graphical mode, the simulation displays the real-time status of the network topology. A side window displays a dynamic histogram of the connectivity of the graph. For a screen shot of the running program, see Figure 2. The batch mode of the simulation runs without the GUI front end. Simulation data, in the form of adjacency matrices and computed information, such as degree distribution, are output as ASCII files.

3.1 Network Structure

We have analyzed the resulting output from our simulation. Using only Hebbian learning, we demonstrate a noticeable change in the network topology. The simulation was run with links initially distributed via a Gaussian distribution with µ = 15, σ = 1.5. The simulation was run for 250 time steps, which is more than sufficient for the topology to stabilize. Consider Figure 3, which shows the average clustering coefficient of 5 different runs of the simulation at the same parameter levels. By time step 50, the average clustering coefficient has stabilized, and it remains at the same level throughout the simulation.

There are interesting results in terms of link structure as well. Comparing the histograms of link distribution in Figures 4 and 5, we can see a marked change in the link distribution as a result of running the simulation. In Figure 5, it is interesting to note the number of neurons with 0 links, suggesting that a result of the link pruning is a reduction in the number of effective neurons in the system.

Figure 3: A graph of the average clustering coefficient of 5 runs of the simulation.

Figure 4: The histogram showing the link distribution at time step 2.

Figure 5: The histogram showing the link distribution at time step 249.

4 Conclusions

Our simulation is an example of the power and flexibility of an agent-based approach to modeling. The interaction of a network of neurons, based on local rules and local information, leads to a noticeable global change in the structure of the network. Moreover, once the structure has settled down, it remains constant throughout the remainder of the simulation. This is validation that the swarm programming paradigm is a suitable approach to the modeling of self-organizing neural networks.

A close scrutiny of the link distribution histogram shows that the distribution does not approach the exponential or scale-free distribution that is seen in many complex networks. This histogram describes only one run of the program, yet it is representative of the distributions that were seen for a sweep of the parameter space. It is our belief that pruning in conjunction with Hebbian learning is insufficient to create the complex structure that is seen in biological neural networks.

It is important to note that by the simulation's end, in this example, almost 30 neurons have no links. This behavior is very significant in a qualitative way, considering the aforementioned discovery of a significant die-off of neurons by adolescence [14]. The trend of the data in this regard is encouraging.

5 Future Work

There is still much progress to be made in biologically inspired self-organizing neural networks. Hebbian learning is clearly an important mechanism, but there are certainly other mechanisms that contribute to the self-organization of biological neural networks. In the future, we intend to train the network on meaningful patterns in the hopes of benchmarking the network against a more traditional feed-forward network. We also hope to implement the spiking neuron model, which would bring us one step closer to modeling a biological neural network.

References

[1] Watts, D. and S. Strogatz. "Collective Dynamics of 'Small-World' Networks." Nature 393, 440-442 (1998).
[2] Albert, R. and A.-L. Barabási. "Topology of Evolving Networks: Local Events and Universality." Physical Review Letters, Vol. 85, No. 24 (2000).
[3] Albert, R., H. Jeong, and A.-L. Barabási. "Diameter of the World Wide Web." Nature 401, 130-131 (1999).
[4] Watts, D. Small Worlds: The Dynamics of Networks Between Order and Randomness. Princeton, NJ: Princeton University Press (1999).
[5] Barabási, A.-L. Linked: The New Science of Networks. Cambridge, MA: Perseus Publishing (2002).
[6] Harvey, R. L. Neural Network Principles. Englewood Cliffs, NJ: Prentice Hall (1994).
[7] Jeong, H., B. Tombor, R. Albert, Z. Oltvai and A.-L. Barabási. "The Large-Scale Organization of Metabolic Networks." Nature 407, 651 (2000).
[8] Albert, R., H. Jeong and A.-L. Barabási. "Error and Attack Tolerance in Complex Networks." Nature 406, 378 (2000).
[9] Anderson, J. An Introduction to Neural Networks. Cambridge, MA: MIT Press (1996).
[10] McCulloch, W. and Pitts, W. "A Logical Calculus of the Ideas Immanent in Nervous Activity." Bulletin of Mathematical Biophysics 5, 115-133 (1943).
[11] Maass, W. "Networks of Spiking Neurons: The Third Generation of Neural Networks." Australian Conference on Neural Networks (1996).
[12] Hopfield, J. J. and A. V. M. Herz. "Rapid Local Synchronization of Action Potentials: Toward Computation with Coupled Integrate-and-Fire Neurons." Proceedings of the National Academy of Sciences, USA, Vol. 92, pp. 6655-6662 (1995).
[13] Segev, R. and E. Ben-Jacob. "From Neurons to Brain: Adaptive Self-Wiring of Neurons." Advances in Complex Systems, Vol. 1, pp. 67-78 (1998).
[14] Chalup, S. K. "Issues of Neurodevelopment in Biological and Artificial Neural Networks." Proceedings of the Fifth Biannual Conference on Artificial Neural Networks and Expert Systems (ANNES 2001), pp. 40-45 (2001).
[15] Chechik, G., I. Meilijson and E. Ruppin. "Neuronal Regulation: A Mechanism for Synaptic Pruning During Brain Maturation." Neural Computation, Vol. 11, No. 8 (1999).
[16] Hebb, D. O. The Organization of Behavior. Wiley, New York (1949).
[17] Gerstner, W. and W. M. Kistler. "Mathematical Formulations of Hebbian Learning." Biological Cybernetics, Vol. 87, pp. 404-415 (2002).
[18] Chechik, G., D. Horn and E. Ruppin. "Neuronal Regulation and Hebbian Learning." In M. Arbib, editor, The Handbook of Brain Theory and Neural Networks, 2nd Edition, MIT Press (2000).
[19] Collier, N. "RePast: An Extensible Framework for Agent Simulation." (2003).
