

Computer vision (Wikipedia)


A third field which plays an important role is neurobiology, specifically the study of the biological vision system. Over the last century, there has been an extensive study of eyes, neurons, and the brain structures devoted to the processing of visual stimuli in both humans and various animals. This has led to a coarse, yet complicated, description of how "real" vision systems operate in order to solve certain vision-related tasks. These results have led to a subfield within computer vision where artificial systems are designed to mimic the processing and behaviour of biological systems at different levels of complexity. Also, some of the learning-based methods developed within computer vision have their background in biology.

Yet another field related to computer vision is signal processing. Many methods for processing one-variable signals, typically temporal signals, can be extended in a natural way to the processing of two-variable or multi-variable signals in computer vision. However, because of the specific nature of images, there are many methods developed within computer vision which have no counterpart in the processing of one-variable signals. A distinct character of these methods is the fact that they are non-linear which, together with the multi-dimensionality of the signal, defines a subfield of signal processing as a part of computer vision.

Besides the above-mentioned views on computer vision, many of the related research topics can also be studied from a purely mathematical point of view. For example, many methods in computer vision are based on statistics, optimization or geometry.
Finally, a significant part of the field is devoted to the implementation aspect of computer vision: how existing methods can be realized in various combinations of software and hardware, or how these methods can be modified in order to gain processing speed without losing too much performance.

The fields most closely related to computer vision are image processing, image analysis, robot vision and machine vision. There is a significant overlap in the range of techniques and applications that these cover. This implies that the basic techniques that are used and developed in these fields are more or less identical, something which can be interpreted as meaning there is only one field with different names. On the other hand, it appears to be necessary for research groups, scientific journals, conferences and companies to present or market themselves as belonging specifically to one of these fields and, hence, various characterizations which distinguish each of the fields from the others have been presented. The following characterizations appear relevant but should not be taken as universally accepted:

Image processing and image analysis tend to focus on 2D images: how to transform one image to another, e.g., by pixel-wise operations such as contrast enhancement, local operations such as edge extraction or noise removal, or geometrical transformations such as rotating the image. This characterization implies that image processing/analysis neither requires assumptions nor produces interpretations about the image content.

Computer vision tends to focus on the 3D scene projected onto one or several images, e.g., how to reconstruct structure or other information about the 3D scene from one or several images. Computer vision often relies on more or less complex assumptions about the scene depicted in an image.

Machine vision tends to focus on applications, mainly in industry, e.g., vision-based autonomous robots and systems for vision-based inspection or measurement.
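The pixel-wise and local operations mentioned above (contrast enhancement, edge extraction) can be illustrated with a short sketch. This is a minimal plain-NumPy example on a synthetic image, not any standard library implementation; all function names are illustrative.

```python
import numpy as np

def contrast_stretch(img):
    """Pixel-wise operation: linearly rescale intensities to [0, 1]."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

def sobel_edges(img):
    """Local operation: gradient-magnitude edge map via 3x3 Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)

# Synthetic test image: dark left half, bright right half.
img = np.zeros((16, 16))
img[:, 8:] = 10.0
edges = sobel_edges(contrast_stretch(img))
# The vertical boundary produces strong responses near column 8.
print(edges.max() > edges.mean())  # prints True
```

Note that no assumption about image content is needed here: the filters operate uniformly on pixel neighbourhoods, which is exactly the characterization of image processing given above.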
This implies that image sensor technologies and control theory often are integrated with the processing of image data to control a robot, and that real-time processing is emphasized by means of efficient implementations in hardware and software. It also implies that external conditions such as lighting can be, and often are, more controlled in machine vision than in general computer vision, which can enable the use of different algorithms.

There is also a field called imaging which primarily focuses on the process of producing images.

Autonomous vehicles using computer vision are already being used in space exploration, e.g., NASA's Mars Exploration Rover. Other application areas include support of visual effects creation for cinema and broadcast, e.g., camera tracking (matchmoving), and surveillance.

Typical tasks of computer vision

Each of the application areas described above employs a range of computer vision tasks: more or less well-defined measurement or processing problems which can be solved using a variety of methods. Some examples of typical computer vision tasks are presented below.

Recognition

The classical problem in computer vision, image processing and machine vision is that of determining whether or not the image data contains some specific object, feature, or activity. This task can normally be solved robustly and without effort by a human, but is still not satisfactorily solved in computer vision for the general case: arbitrary objects in arbitrary situations.
The existing methods for dealing with this problem can at best solve it only for specific objects, such as simple geometric objects (e.g., polyhedra), human faces, printed or hand-written characters, or vehicles, and in specific situations, typically described in terms of well-defined illumination, background, and pose of the object relative to the camera. Different varieties of the recognition problem are described in the literature:

Recognition: one or several pre-specified or learned objects or object classes can be recognized, usually together with their 2D positions in the image or 3D poses in the scene.

Identification: an individual instance of an object is recognized. Examples: identification of a specific person's face or fingerprint, or identification of a specific vehicle.

Detection: the image data is scanned for a specific condition. Examples: detection of possible abnormal cells or tissues in medical images, or detection of a vehicle in an automatic road toll system. Detection based on relatively simple and fast computations is sometimes used to find smaller regions of interesting image data which can be further analyzed by more computationally demanding techniques to produce a correct interpretation.

Several specialized tasks based on recognition exist, such as:

Content-based image retrieval: finding all images in a larger set of images which have a specific content. The content can be specified in different ways, for example in terms of similarity relative to a target image (give me all images similar to image X), or in terms of high-level search criteria given as text input (give me all images which contain many houses, are taken during winter, and have no cars in them).

Pose estimation: estimating the position or orientation of a specific object relative to the camera.
An example application of this technique would be assisting a robot arm in retrieving objects from a conveyor belt in an assembly-line situation.

Optical character recognition (OCR): identifying characters in images of printed or handwritten text, usually with a view to encoding the text in a format more amenable to editing or indexing (e.g., ASCII).

Motion

Several tasks relate to motion estimation, in which an image sequence is processed to produce an estimate of the velocity either at each point in the image or in the 3D scene. Examples of such tasks are:

Egomotion: determining the 3D rigid motion of the camera.

Tracking: following the movements of objects (e.g., vehicles or humans).

Scene reconstruction

Given one or (typically) more images of a scene, or a video, scene reconstruction aims at computing a 3D model of the scene. In the simplest case the model can be a set of 3D points. More sophisticated methods produce a complete 3D surface model.

Image restoration

The aim of image restoration is the removal of noise (sensor noise, motion blur, etc.) from images. The simplest possible approaches for noise removal are various types of filters, such as low-pass filters or median filters. More sophisticated methods assume a model of what the local image structures look like, a model which distinguishes them from the noise. By first analysing the image data in terms of local image structures, such as lines or edges, and then controlling the filtering based on local information from the analysis step, a better level of noise removal is usually obtained compared to the simpler approaches.

Computer vision systems

The organization of a computer vision system is highly application dependent. Some systems are stand-alone applications which solve a specific measurement or detection problem, while others constitute a sub-system of a larger design which, for example, also contains sub-systems for control of mechanical actuators, planning, information databases, man-machine interfaces, etc.
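The median filtering mentioned under image restoration above can be sketched as a toy example. This is an illustrative NumPy sketch for impulse ("salt-and-pepper") noise, not a production denoiser; the filter size and image values are arbitrary assumptions.

```python
import numpy as np

def median_filter(img, k=3):
    """Replace each interior pixel by the median of its k x k neighbourhood.
    Borders are left untouched for simplicity."""
    out = img.copy()
    r = k // 2
    h, w = img.shape
    for i in range(r, h - r):
        for j in range(r, w - r):
            out[i, j] = np.median(img[i - r:i + r + 1, j - r:j + r + 1])
    return out

# Flat grey image corrupted by a single salt-noise pixel.
img = np.full((9, 9), 0.5)
img[4, 4] = 1.0  # impulse ("salt") noise
restored = median_filter(img)
print(restored[4, 4])  # the outlier is replaced by the neighbourhood median: 0.5
```

A median filter removes isolated outliers while preserving step edges, which is why it is a common first choice against impulse noise, whereas a low-pass filter would blur the outlier into its neighbours.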
The specific implementation of a computer vision system also depends on whether its functionality is pre-specified or whether some part of it can be learned or modified during operation. There are, however, typical functions which are found in many computer vision systems.

Image acquisition: a digital image is produced by one or several image sensors which, besides various types of light-sensitive cameras, include range sensors, tomography devices, radar, ultra-sonic cameras, etc. Depending on the type of sensor, the resulting image data is an ordinary 2D image, a 3D volume, or an image sequence. The pixel values typically correspond to light intensity in one or several spectral bands (gray images or colour images), but can also be related to various physical measures, such as depth, absorption or reflectance of sonic or electromagnetic waves, or nuclear magnetic resonance.

Pre-processing: before a computer vision method can be applied to image data in order to extract some specific piece of information, it is usually necessary to process the data in order to assure that it satisfies certain assumptions implied by the method. Examples are: re-sampling in order to assure that the image coordinate system is correct; noise reduction in order to assure that sensor noise does not introduce false information; contrast enhancement to assure that relevant information can be detected; and scale-space representation to enhance image structures at locally appropriate scales.

Feature extraction: image features at various levels of complexity are extracted from the image data. Typical examples of such features are lines, edges and ridges, and localized interest points such as corners, blobs or points. More complex features may be related to texture, shape or motion.

Detection/segmentation: at some point in the processing a decision is made about which image points or regions of the image are relevant for further processing.
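The detection/segmentation step just described, followed by a high-level processing step, can be sketched minimally: threshold an image to segment a bright object, then estimate application-specific parameters (here, its size and centroid). The function names and the threshold value are illustrative assumptions, not a standard API.

```python
import numpy as np

def segment(img, threshold=0.5):
    """Detection/segmentation step: binary mask of pixels above threshold."""
    return img > threshold

def object_parameters(mask):
    """High-level processing step: estimate object size (pixel count)
    and centroid from a binary mask; None if nothing was detected."""
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None
    return {"size": len(ys), "centroid": (ys.mean(), xs.mean())}

# Synthetic image with one bright 3x3 "object" at rows/cols 5..7.
img = np.zeros((12, 12))
img[5:8, 5:8] = 1.0
params = object_parameters(segment(img))
print(params)
```

In a real system the thresholding stage would typically be replaced by a learned or model-based detector, but the pipeline shape (segment, then estimate parameters, then classify) is the same.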
Examples are: selection of a specific set of interest points, or segmentation of one or multiple image regions which contain a specific object of interest.

High-level processing: at this step the input is typically a small set of data, for example a set of points or an image region which is assumed to contain a specific object. The remaining processing deals with, for example: verification that the data satisfy model-based and application-specific assumptions; estimation of application-specific parameters, such as object pose or object size; and classification of a detected object into different categories.



The Neutral Grounding Resistor Sizing Using an Analytical Method Based on Nonlinear Transformer Model for Inrush Current Mitigation

Gholamabas M. H. Hajivar, Shahid Chamran University, Ahvaz, Iran (hajivar@)
S. S. Mortazavi, Shahid Chamran University, Ahvaz, Iran (Mortazavi_s@scu.ac.ir)
Mohsen Saniei, Shahid Chamran University, Ahvaz, Iran (Mohsen.saniei@)

Abstract—It was found that a neutral resistor together with 'simultaneous' switching didn't have any effect on either the magnitudes or the time constant of inrush currents. Pre-insertion resistors were recommended as the most effective means of controlling inrush currents. Through simulations, it was found that the neutral resistor had little effect on reducing the inrush current peak or even the rate of decay as compared to the cases without a neutral resistor. The use of neutral impedances was concluded to be ineffective compared to the use of pre-insertion resistors. This finding was explained by the low neutral current value as compared to the high phase currents during inrush. The inrush currents can, however, be mitigated by using a neutral resistor when sequential switching is implemented. In the sequential energizing scheme, the neutral resistor size plays the significant role in the scheme's effectiveness. Through simulation, it was found that a neutral grounding resistor of a few ohms can effectively achieve inrush current reduction. If the neutral resistor is directly selected to minimize the peak of the actual inrush current, a much lower resistor value can be found. This paper presents an analytical method to select the optimal neutral grounding resistor for mitigation of inrush current. In this method the nonlinearity and core loss of the transformer have been modeled and analytical equations derived.

Index Terms—Inrush current, neutral grounding resistor, transformer

I. INTRODUCTION

The energizing of transformers produces high inrush currents.
Inrush currents are rich in harmonics and have a relatively long duration, which leads to adverse effects on the residual life of the transformer, malfunction of the protection system [1], and power quality [2]. In the power-system industry, two different strategies have been implemented to tackle the problem of transformer inrush currents. The first strategy focuses on adapting to the effects of inrush currents by desensitizing the protection elements. Other approaches go further by 'over-sizing' the magnetic core to achieve higher saturation flux levels. These partial countermeasures impose downgrades on the system's operational reliability, considerably increase unit cost, impose high mechanical stresses on the transformer, and lead to lower power quality. The second strategy focuses on reducing the inrush current magnitude itself during the energizing process. Minimizing the inrush current will extend the transformer's lifetime, increase the reliability of operation, and lower maintenance and down-time costs. Meanwhile, the problem of protection-system malfunction during transformer energizing is eliminated. The available inrush-current mitigation techniques include the closing resistor [3], controlled closing of the circuit breaker [4],[5], reduction of residual flux [6], and the neutral resistor with sequential switching [7],[8],[9].

The sequential energizing technique presents an inrush-reduction scheme for transformer energizing. This scheme involves the sequential energizing of the three phases of the transformer together with the insertion of a properly sized resistor at the neutral point of the transformer energizing side [7],[8],[9] (Fig. 1). The neutral-resistor-based scheme acts to minimize the induced voltage across the energized windings during sequential switching of each phase and, hence, minimizes the integral of the applied voltage across the windings. The scheme has the main advantage of being simpler, more reliable and more cost-effective than the synchronous switching and pre-insertion resistor schemes. The scheme has no requirements on the speed of the circuit breaker or the determination of the residual flux. Sequential switching of the three phases can be implemented either by introducing a mechanical delay between each pole in the case of three-phase breakers, or simply by adjusting the breaker trip-coil time delay for single-pole breakers.

A further study of the scheme revealed that a much lower resistor size is equally effective. The steady-state theory developed for neutral resistor sizing [8] is unable to explain this phenomenon; it must be understood using transient analysis.

Fig. 1. The sequential phase energizing scheme

UPEC 2010, 31st Aug - 3rd Sept 2010

The rise of the neutral voltage is the main limitation of the scheme. Two methods exist to control the neutral voltage rise: the use of surge arresters, and of saturated reactors, connected to the neutral point. The use of surge arresters was found to be more effective in overcoming the neutral voltage rise limitation [9].

The main objective of this paper is to derive an analytical relationship between the peak of the inrush current and the size of the resistor. This paper presents a robust analytical study of the transformer energizing phenomenon. The results reveal a good deal of information on inrush currents and the characteristics of the sequential energizing scheme.

II. SCHEME PERFORMANCE

Since the scheme adopts sequential switching, each switching stage can be investigated separately. For first-phase switching, the scheme's performance is straightforward.
The neutral resistor is in series with the energized phase, and its effect is similar to that of a pre-insertion resistor. The second-phase energizing is one of the most difficult to analyze. Fortunately, from simulation studies, it was found that the inrush current due to second-phase energizing is lower than that due to first-phase energizing for the same value of R_n [9]. This result holds in the region where the inrush current of the first phase is decreasing rapidly as R_n increases. As a result, when developing a neutral-resistor-sizing criterion, the focus should be directed towards the analysis of first-phase energizing.

III. ANALYSIS OF FIRST-PHASE ENERGIZING

The following analysis focuses on deriving an inrush-current waveform expression covering both the unsaturated and saturated modes of operation. The presented analysis is based on a single saturated core element, but is suitable for analytical modelling of single-phase transformers and for the single-phase switching of three-phase transformers. As shown in Fig. 2, the transformer's energized phase is modeled as a two-segment saturated magnetizing inductance in series with the transformer's winding resistance, leakage inductance and neutral resistance. The iron-core nonlinear inductance, as a function of the operating flux linkages, is represented as a linear inductor in the unsaturated ('l_m') and saturated ('l_s') modes of operation, respectively.

Fig. 2. (a) Transformer electrical equivalent circuit (per phase) referred to the primary side. (b) Simplified, two-slope saturation curve.

For the first-phase switching stage, the equivalent circuit in Fig. 2(a) can accurately represent the behaviour of the transformer for any connection or core type by using only the positive-sequence flux-current characteristics.
Based on the transformer connection and core structure type, the phases are coupled either through the electrical circuit (3 single-phase units in Yg-D connection), through the magnetic circuit (core-type transformers with Yg-Y connection), or through both (Yg-D connection in an E-core or multi-limb transformer). The coupling introduced between the windings will result in flux flowing through the limbs or magnetic circuits of un-energized phases. For the sequential switching application, the magnetic coupling will result in an increased reluctance (decreased reactance) for the zero-sequence flux path, if present. The approach presented here is based on deriving an analytical expression relating the amount of inrush current reduction directly to the neutral resistor size. Previous investigations in this field have produced formulas to predict the general wave shape or the maximum peak current.

A. Expression for the magnitude of inrush current

In Fig. 2(a), r_p and l_p represent the total primary-side resistance and leakage reactance. R_c represents the total transformer core loss. The secondary-side resistance r_sp and leakage reactance l_sp, referred to the primary side, are also shown.
V_P and V_s represent the primary and secondary phase-to-ground terminal voltages, respectively.

During first-phase energizing, the differential equations describing the behaviour of the transformer with saturated iron core can be written as follows:

  V_P(t) = (r_p + R_n)·i_p(t) + l_p·di_p/dt + dλ/dt    (1)
  V_P(t) = V_m·sin(ωt + φ)    (2)

As the rate of change of the flux linkages with magnetizing current, dλ/di, can be represented as an inductance equal to the slope of the λ-i curve, (1) can be re-written as:

  V_P(t) = (r_p + R_n)·i_p(t) + l_p·di_p/dt + L_core(λ)·di_l/dt    (3)
  R_c·(i_p(t) − i_l(t)) = L_core(λ)·di_l/dt    (4)

where L_core(λ) = dλ/di_l equals L_m for λ ≤ λ_s and L_s for λ > λ_s.

The general solution of the differential equations (3)-(4) has the following form:

  i_p(t) = A_11·e^(−t/τ_11) + A_12·e^(−t/τ_12) + B_1·sin(ωt − ψ_1),   λ ≤ λ_s
  i_p(t) = i_s + A_21·e^(−(t−t_s)/τ_21) + A_22·e^(−(t−t_s)/τ_22) + B_2·sin(ωt − ψ_2),   λ > λ_s    (5)

Subscripts 11, 12 and 21, 22 denote unsaturated and saturated operation, respectively. The amplitudes B_1, B_2, the phase angles ψ_1, ψ_2, the time constants τ_11, τ_12, τ_21, τ_22 and the constants A_11, A_12, A_21, A_22 are functions of the circuit parameters x_m, x_s, x_p, R_c, r_p and R_n, with σ = 1 + (r_p + R_n)/R_c; their full expressions are given in [9]. In particular, A_11 + A_12 = B_1·sin(ψ_1) and A_21 + A_22 = B_2·sin(ψ_2 − ω·t_s), and since R_c is large, τ_11 >> τ_12 and τ_21 >> τ_22, so that A_12 ≈ 0 and A_22 ≈ 0.

According to equation (5), the required inrush waveform, assuming a two-slope segmented λ-i curve, can be calculated separately for the unsaturated and saturated regions.
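A minimal numerical sketch of the energizing model above: forward-Euler integration of a single-phase circuit with a two-slope λ-i curve, with the core-loss branch omitted (i.e. R_c → ∞) and the leakage inductance neglected. All parameter values below are illustrative per-unit choices, not taken from the paper; the sketch only demonstrates that a larger neutral resistance R_n damps the inrush peak.

```python
import math

def inrush_peak(R_n, cycles=2, dt=1e-5):
    """Euler-integrate dλ/dt = V_m·sin(ωt) − (r_p + R_n)·i(λ) with a
    two-slope λ-i curve; return the peak phase current.
    Parameter values are illustrative, not from the paper."""
    V_m, omega, r_p = 1.0, 2 * math.pi * 50, 0.02
    lam_nom = V_m / omega          # nominal peak flux linkage
    lam_s = 1.2 * lam_nom          # saturation knee (typical 1.2-1.35 pu)
    lam = 0.8 * lam_nom            # residual flux (typical 0.7-0.9 pu)
    L_m, L_s = 10.0, 0.01          # unsaturated / saturated inductances

    def current(l):
        # Two-slope saturation curve: slope L_m below the knee, L_s above it.
        a = abs(l)
        if a <= lam_s:
            return l / L_m
        return math.copysign(lam_s / L_m + (a - lam_s) / L_s, l)

    peak, t = 0.0, 0.0
    steps = int(cycles / 50 / dt)  # 'cycles' periods of the 50 Hz supply
    for _ in range(steps):
        i = current(lam)
        peak = max(peak, abs(i))
        lam += dt * (V_m * math.sin(omega * t) - (r_p + R_n) * i)
        t += dt
    return peak

print(inrush_peak(0.0), inrush_peak(5.0))  # larger R_n gives a smaller peak
```

Energizing at a voltage zero-crossing with high residual flux is close to the worst case, which is why the R_n = 0 run saturates deeply; the resistive drop (r_p + R_n)·i in the flux equation is what limits the flux build-up when R_n is increased.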
For the first, unsaturated mode, the current can be directly calculated from the first expression for all flux-linkage values below the saturation level. After saturation is reached, the current waveform follows the second expression, for flux-linkage values above the saturation level. The saturation time t_s can be found as the time at which the current reaches the saturation current level i_s. Here λ_m, λ_r, V_m and ω are the nominal peak flux linkage, residual flux linkage, peak supply voltage and angular frequency, respectively.

The inrush current waveform peak essentially occurs during the saturated mode of operation, so the focus should be on the second expression of equation (5), describing the saturated mode. The inrush current peak can be evaluated directly when both the saturation time t_s and the peak time t = t_peak of the waveform are known [9]. Taking the sinusoidal term alone gives the estimate

  t_peak ≈ (π/2 + ψ_2)/ω    (9)

and the peak value

  I_peak(R_n) = i_s + A_21·e^(−(t_peak−t_s)/τ_21) + A_22·e^(−(t_peak−t_s)/τ_22) + B_2·sin(ω·t_peak − ψ_2)    (10)

The peak time t_peak can be found numerically by setting the derivative of (10) with respect to time equal to zero at t = t_peak:

  0 = −(A_21/τ_21)·e^(−(t_peak−t_s)/τ_21) − (A_22/τ_22)·e^(−(t_peak−t_s)/τ_22) + ω·B_2·cos(ω·t_peak − ψ_2)    (11)

The inrush waveform consists of an exponentially decaying 'DC' term and a sinusoidal 'AC' term. Both the DC and AC amplitudes are significantly reduced with an increase of the available series impedance.
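Equation (11) locates the peak by zeroing the derivative of (10); numerically, the same peak can be found by scanning the saturated-mode waveform on a fine grid. The sketch below uses arbitrary illustrative values for A_21, τ_21, B_2, ψ_2 and t_s (not taken from the paper), with A_22 ≈ 0 neglected.

```python
import math

def saturated_waveform(t, A21=0.8, tau21=0.05, B2=1.0, psi2=0.4,
                       t_s=0.003, omega=2 * math.pi * 50, i_s=0.05):
    """Saturated-mode inrush waveform in the form of equation (10),
    with the fast A22 term neglected; all parameter values are illustrative."""
    return i_s + A21 * math.exp(-(t - t_s) / tau21) + B2 * math.sin(omega * t - psi2)

def find_peak(f, t0, t1, n=20000):
    """Scan [t0, t1] on a uniform grid and return (t_peak, f(t_peak))."""
    best_t, best_v = t0, f(t0)
    for k in range(1, n + 1):
        t = t0 + (t1 - t0) * k / n
        v = f(t)
        if v > best_v:
            best_t, best_v = t, v
    return best_t, best_v

t_peak, i_peak = find_peak(saturated_waveform, 0.003, 0.023)
# The decaying DC term pulls the peak slightly earlier than the pure-AC
# estimate t ≈ (π/2 + ψ2)/ω from (9).
t_ac = (math.pi / 2 + 0.4) / (2 * math.pi * 50)
print(t_peak, t_ac, i_peak)
```

This mirrors the structure of (11): at the true peak the derivative of the sinusoidal term must exactly cancel the (negative) derivative of the decaying exponential, which forces cos(ω·t_peak − ψ_2) > 0 and hence t_peak < (π/2 + ψ_2)/ω.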
Neglecting the relatively small saturation current i_s, and the terms A_12 and A_22 (whose time constants are extremely short), the inrush waveform can be normalized with respect to the amplitude of its sinusoidal term as follows:

  i_p(t)/B_2 = (A_21/B_2)·e^(−(t−t_s)/τ_21) + sin(ωt − ψ_2)    (12)
  i_p(t)/B_2 = sin(ψ_2 − ω·t_s)·e^(−(t−t_s)/τ_21) + sin(ωt − ψ_2)    (13)
  K(R_n) = sin(ψ_2 − ω·t_s)    (14)

The saturation time t_s follows from the flux-linkage build-up:

  dλ/dt = V_m·sin(ωt + φ)    (6)
  λ(t) = λ_r + ∫₀ᵗ V_m·sin(ωt + φ)·dt    (7)
  t_s = (1/ω)·[cos⁻¹(cos φ − (λ_s − λ_r)/λ_m) − φ],   λ_m = V_m/ω    (8)

The factor K(R_n) depends on the transformer saturation characteristics (λ_s and λ_r) and on the other parameters during saturation. Typical saturation and residual flux magnitudes for power transformers are in the range [9]

  1.2 p.u. < λ_s < 1.35 p.u.,   0.7 p.u. < λ_r < 0.9 p.u.

It can easily be shown that with increased damping resistance in the circuit, where the circuit phase angle ψ_2 has lower values than the saturation angle ω·t_s, the exponential term is negative, resulting in an inrush magnitude lower than the amplitude of the sinusoidal term.

B. Neutral Grounding Resistor Sizing

Based on (10), the expression for the inrush current peak, it is now possible to select a neutral resistor size that achieves a specific inrush current reduction ratio α(R_n), given by:

  α(R_n) = I_peak(R_n) / I_peak(R_n = 0)    (15)

For the maximum inrush current condition (R_n = 0), the X/R ratio of the total energized-phase system impedance is high and, accordingly, the damping of the exponential term in (10) during the first cycle can be neglected:

  I_peak(R_n = 0) = [1 + K(0)]·V_m / √{ [r_p − x_p·x_s/R_c]² + [(x_p + x_s)·σ₀]² },   σ₀ = 1 + r_p/R_c    (16)

High R_n values, leading to considerable inrush current reduction, result in low X/R ratios. It is clear from (14) that X/R ratios equal to or less than 1 ensure a negative DC component factor K(R_n), and hence the exponential term in (10) can be conservatively neglected.
Accordingly, (10) can be re-written as follows:

  I_peak(R_n) ≈ B_2(R_n) = V_m / √{ [r_p + R_n − x_p·x_s/R_c]² + [(x_p + x_s)·σ]² }    (17)

Using (16) and (17) to evaluate (15), the neutral resistor size which corresponds to a specific reduction ratio is given by:

  α(R_n) = √{ [r_p − x_p·x_s/R_c]² + [(x_p + x_s)·σ₀]² } / ( [1 + K(0)] · √{ [r_p + R_n − x_p·x_s/R_c]² + [(x_p + x_s)·σ]² } )    (18)

For very high R_c values, i.e. low transformer core loss, (18) can be re-written as follows [9]:

  α(R_n) = √{ r_p² + (x_p + x_s)² } / ( [1 + K(0)] · √{ (r_p + R_n)² + (x_p + x_s)² } )    (19)

Equations (18) and (19) directly relate the neutral resistor value to the desired inrush-current reduction ratio: a lower target ratio α requires a higher neutral resistor value.

IV. ANALYSIS OF SECOND-PHASE ENERGIZING

The analysis of the electric and magnetic circuit behaviour during second-phase switching is considerably more complex than that for first-phase switching. Transformer behaviour during second-phase switching was observed to vary with connection and core structure type. However, a general behaviour trend exists within the low neutral-resistor-value range where the scheme can effectively limit the inrush current magnitude. For cases with a delta winding or a multi-limb core structure, the second-phase inrush current is lower than that during first-phase switching. Single-phase units connected in star/star perform differently, as the first- and second-stage inrush currents have almost the same magnitude until a maximum reduction rate of about 80% is achieved.

V. NEUTRAL VOLTAGE RISE

The peak neutral voltage will reach values up to the peak phase voltage as the neutral resistor value is increased. Typical neutral voltage peak profiles against neutral resistor size are shown in Fig. 6-Fig. 8 for the 225 kVA transformer during 1st and 2nd phase switching. A delay of 40 ms between the switching stages has been considered.

VI. SIMULATION

A 225 kVA, 2400 V/600 V, 50 Hz three-phase transformer connected in star-star is used for the simulation study. The number of turns per phase of the primary (2400 V) winding is N_P = 128, and R_P = R_s = 0.01 pu, X_P = X_s = 0.05 pu; the active power losses in the iron core are 4.5 kW; the average length and cross-section of the core limbs are L1 = 1.3462 m, A1 = 0.01155192 m²; of the yokes, L2 = 0.5334 m, A2 = 0.01155192 m²; and of the air path for zero-sequence flux return, L0 = 0.0127 m, A0 = 0.01155192 m². The three-phase voltage for flux initialization is 1 pu, and the B-H characteristic of the iron core is in accordance with Fig. 3. A MATLAB program was prepared for the simulation study. Simulation results are shown in Fig. 4-Fig. 8.

Fig. 3. B-H characteristic of the iron core
Fig. 4. Inrush current (R_n = 0 Ω)
Fig. 5. Inrush current (R_n = 5 Ω)
Fig. 6. Inrush current (R_n = 50 Ω)
Fig. 7. Maximum neutral voltage (R_n = 50 Ω)
Fig. 8. Maximum neutral voltage (R_n = 5 Ω)
Fig. 9. Maximum inrush current (pu), maximum neutral voltage (pu), and duration of the inrush current (s)

VII. CONCLUSIONS

Based on sequential switching, this paper presented an analytical method to select the optimal neutral grounding resistor for transformer inrush current mitigation. In this method a complete transformer model, including core loss and the nonlinear core characteristic, has been used. It was shown that a high reduction in inrush currents among the three phases can be achieved by using a neutral resistor. The paper also addressed the scheme's main practical limitation: the permissible rise of the neutral voltage.

VIII. REFERENCES

[1] Hanli Weng, Xiangning Lin, "Studies on the Unusual Maloperation of Transformer Differential Protection During the Nonlinear Load Switch-In", IEEE Transactions on Power Delivery, vol. 24, no. 4, October 2009.
[2] Westinghouse Electric Corporation, Electric Transmission and Distribution Reference Book, 4th ed.
East Pittsburgh, PA, 1964.
[3] K. P. Basu and S. Morris, "Reduction of magnetizing inrush current in traction transformer," DRPT 2008, Nanjing, China, April 6-9, 2008.
[4] J. H. Brunke and K. J. Fröhlich, "Elimination of transformer inrush currents by controlled switching - Part I: Theoretical considerations," IEEE Transactions on Power Delivery, vol. 16, no. 2, 2001.
[5] R. Apolonio, J. C. de Oliveira, H. S. Bronzeado, and A. B. de Vasconcellos, "Transformer controlled switching: a strategy proposal and laboratory validation," 11th International Conference on Harmonics and Quality of Power, IEEE, 2004.
[6] E. Andersen, S. Bereneryd, and S. Lindahl, "Synchronous energizing of shunt reactors and shunt capacitors," CIGRE paper 13-12, pp. 1-6, September 1988.
[7] Y. Cui, S. G. Abdulsalam, S. Chen, and W. Xu, "A sequential phase energizing method for transformer inrush current reduction - Part I: Simulation and experimental results," IEEE Transactions on Power Delivery, vol. 20, no. 2, pt. 1, pp. 943-949, April 2005.
[8] W. Xu, S. G. Abdulsalam, Y. Cui, S. Liu, and X. Liu, "A sequential phase energizing method for transformer inrush current reduction - Part II: Theoretical analysis and design guide," IEEE Transactions on Power Delivery, vol. 20, no. 2, pt. 1, pp. 950-957, April 2005.
[9] S. G. Abdulsalam and W. Xu, "A sequential phase energization method for transformer inrush current reduction - transient performance and practical considerations," IEEE Transactions on Power Delivery, vol. 22, no. 1, pp. 208-216, January 2007.

Vol. 61, No. 2
DUKE MATHEMATICAL JOURNAL (C)
October 1990
A GEOMETRIC SETTING FOR THE QUANTUM DEFORMATION OF GL_n
A. A. BEILINSON, G. LUSZTIG*, AND R. MACPHERSON*
1. Flags and the algebra K.

1.1. We fix an integer n ≥ 1. Let Θ̃ be the set of all n × n matrices with integer entries such that the entries off the diagonal are ≥ 0. Let Θ be the set of all n × n matrices with integer entries ≥ 0. Thus, Θ ⊂ Θ̃. Let r: Θ̃ → Z be the map defined by taking the sum of all entries of a matrix. Let Θ_d = r⁻¹(d) ∩ Θ; we have Θ = ∐_{d ≥ 0} Θ_d, and each Θ_d is a finite set. Let V be a vector space of finite dimension d over a field F. Let ℱ be the set of all n-step filtrations V_1 ⊆ V_2 ⊆ ⋯ ⊆ V_n = V. The group GL(V) acts naturally on ℱ; its orbits are the fibres of the map ℱ → N^n given by


International Journal of Computer Vision 61(3), 211-231, 2005. © 2005 Springer Science+Business Media, Inc. Manufactured in The Netherlands.

Lucas/Kanade Meets Horn/Schunck: Combining Local and Global Optic Flow Methods

ANDRÉS BRUHN AND JOACHIM WEICKERT
Mathematical Image Analysis Group, Faculty of Mathematics and Computer Science, Saarland University, Building 27, 66041 Saarbrücken, Germany
bruhn@mia.uni-saarland.de
weickert@mia.uni-saarland.de

CHRISTOPH SCHNÖRR
Computer Vision, Graphics and Pattern Recognition Group, Faculty of Mathematics and Computer Science, University of Mannheim, 68131 Mannheim, Germany
schnoerr@uni-mannheim.de

Received August 5, 2003; Revised April 22, 2004; Accepted April 22, 2004. First online version published in October, 2004.

Abstract. Differential methods belong to the most widely used techniques for optic flow computation in image sequences. They can be classified into local methods such as the Lucas-Kanade technique or Bigün's structure tensor method, and into global methods such as the Horn/Schunck approach and its extensions. Often local methods are more robust under noise, while global techniques yield dense flow fields. The goal of this paper is to contribute to a better understanding and the design of novel differential methods in four ways: (i) We juxtapose the role of smoothing/regularisation processes that are required in local and global differential methods for optic flow computation. (ii) This discussion motivates us to describe and evaluate a novel method that combines important advantages of local and global approaches: It yields dense flow fields that are robust against noise. (iii) Spatiotemporal and nonlinear extensions as well as multiresolution frameworks are presented for this hybrid method. (iv) We propose a simple confidence measure for optic flow methods that minimise energy functionals. It allows to sparsify a dense flow field gradually, depending on the reliability required for the resulting flow. Comparisons with experiments from the literature demonstrate the favourable
performance of the proposed methods and the confidence measure.

Keywords: optic flow, differential techniques, variational methods, structure tensor, partial differential equations, confidence measures, performance evaluation

1. Introduction

Ill-posedness is a problem that is present in many image processing and computer vision techniques: Edge detection, for example, requires the computation of image derivatives. This problem is ill-posed in the sense of Hadamard,¹ as small perturbations in the signal may create large fluctuations in its derivatives (Yuille and Poggio, 1986). Another example consists of optic flow computation, where the ill-posedness manifests itself in the nonuniqueness due to the aperture problem (Bertero et al., 1988): The data allow to compute only the optic flow component normal to image edges. Both types of ill-posedness problems appear jointly in so-called differential methods for optic flow recovery, where optic flow estimation is based on computing spatial and temporal image derivatives. These techniques can be classified into local methods that may optimise some local energy-like expression, and global strategies which attempt to minimise a global energy functional. Examples of the first category include the Lucas-Kanade method (Lucas and Kanade, 1981; Lucas, 1984) and the structure tensor approach of Bigün and Granlund (1988) and Bigün et al. (1991), while the second category is represented by the classic method of Horn and Schunck (Horn and Schunck, 1981) and its numerous discontinuity-preserving variants (Alvarez et al., 1999; Aubert et al., 1999; Black and Anandan, 1991; Cohen, 1993; Heitz and Bouthemy, 1993; Kumar et al., 1996; Nagel, 1983; Nesi, 1993; Proesmans et al., 1994; Schnörr, 1994; Shulman and Hervé, 1989; Weickert and Schnörr, 2001). Differential methods are rather popular: Together with phase-based methods such as (Fleet and Jepson, 1990) they belong to the techniques with the best performance (Barron et al., 1994; Galvin et al., 1998). Local methods may
offer relatively high robustness under noise, but do not give dense flow fields. Global methods, on the other hand, yield flow fields with 100% density, but are experimentally known to be more sensitive to noise (Barron et al., 1994; Galvin et al., 1998). A typical way to overcome the ill-posedness problems of differential optic flow methods consists of the use of smoothing techniques and smoothness assumptions: It is common to smooth the image sequence prior to differentiation in order to remove noise and to stabilise the differentiation process. Local techniques use spatial constancy assumptions on the optic flow field in the case of the Lucas-Kanade method, and spatiotemporal constancy for the Bigün method. Global approaches, on the other hand, supplement the optic flow constraint with a regularising smoothness term. Surprisingly, the actual role of and the difference between these smoothing strategies has hardly been addressed in the literature so far. In a first step of this paper we juxtapose the role of the different smoothing steps of these methods. We shall see that each smoothing process offers certain advantages that cannot be found in the other cases. Consequently, it would be desirable to combine the different smoothing effects of local and global methods in order to design novel approaches that combine the high robustness of local methods with the full density of global techniques. One of the goals of the present paper is to propose and analyse such an embedding of local methods into global approaches. This results in a technique that is robust under noise and gives flow fields with 100% density. Hence, there is no need for a postprocessing step where sparse data have to be interpolated. On the other hand, it has sometimes been criticised that there is no reliable confidence measure that allows to sparsify the result of a dense flow field such that the remaining flow is more reliable (Barron et al., 1994). In this way it would be possible to compare the real quality of dense
methods with the characteristics of local, nondense approaches. In our paper we shall present such a measure. It is simple and applicable to the entire class of energy minimising global optic flow techniques. Our experimental evaluation will show that this confidence measure can give excellent results.

Our paper is organised as follows. In Section 2 we discuss the role of the different smoothing processes that are involved in local and global optic flow approaches. Based on these results we propose two combined local-global (CLG) methods in Section 3, one with spatial, the other one with spatiotemporal smoothing. In Section 4 nonlinear variants of the CLG method are presented, while a suitable multiresolution framework is discussed in Section 5. Our numerical algorithm is described in Section 6. In Section 7, we introduce a novel confidence measure for all global optic flow methods that use energy functionals. Section 8 is devoted to performance evaluations of the CLG methods and the confidence measure. A summary and an outlook to future work is given in Section 9. In the Appendix, we show how the CLG principle has to be modified if one wants to replace the Lucas-Kanade method by the structure tensor method of Bigün and Granlund (1988) and Bigün et al. (1991).

1.1. Related Work

In spite of the fact that there exists a very large number of publications on motion analysis (see e.g. (Mitiche and Bouthemy, 1996; Stiller and Konrad, 1999) for reviews), there has been remarkably little work devoted to the integration of local and global optic flow methods. Schnörr (1993) sketched a framework for supplementing global energy functionals with multiple equations that provide local data constraints. He suggested to use the output of Gaussian filters shifted in frequency space (Fleet and Jepson, 1990) or local methods incorporating second-order derivatives (Tretiak and Pastor, 1984; Uras et al., 1988), but did not consider methods of Lucas-Kanade or Bigün type.
Our proposed technique differs from the majority of global regularisation methods by the fact that we also use spatiotemporal regularisers instead of spatial ones. Other work with spatiotemporal regularisers includes publications by Murray and Buxton (1987), Nagel (1990), Black and Anandan (1991), Elad and Feuer (1998), and Weickert and Schnörr (2001). While the noise sensitivity of local differential methods has been studied intensively in recent years (Bainbridge-Smith and Lane, 1997; Fermüller et al., 2001; Jähne, 2001; Kearney et al., 1987; Ohta, 1996; Simoncelli et al., 1991), the noise sensitivity of global differential methods has been analysed to a significantly smaller extent. In this context, Galvin et al. (1998) have compared a number of classical methods where small amounts of Gaussian noise had been added. Their conclusion was similar to the findings of Barron et al. (1994): the global approach of Horn and Schunck is more sensitive to noise than the local Lucas-Kanade method. A preliminary shorter version of the present paper has been presented at a conference (Bruhn et al., 2002).
Additional work in the current paper includes (i) the use of nonquadratic penalising functions, (ii) the application of a suitable multiresolution strategy, (iii) the proposal of a confidence measure for the entire class of global variational methods, (iv) the integration of the structure tensor approach of Bigün and Granlund (1988) and Bigün et al. (1991), and (v) a more extensive experimental evaluation.

2. Role of the Smoothing Processes

In this section we discuss the role of smoothing techniques in differential optic flow methods. For simplicity we focus on spatial smoothing. All spatial smoothing strategies can easily be extended into the temporal domain. This will usually lead to improved results (Weickert and Schnörr, 2001). Let us consider some image sequence g(x, y, t), where (x, y) denotes the location within a rectangular image domain Ω, and t ∈ [0, T] denotes time. It is common to smooth the image sequence prior to differentiation (Barron et al., 1994; Kearney et al., 1987), e.g. by convolving each frame with some Gaussian K_σ(x, y) of standard deviation σ:

f(x, y, t) := (K_σ * g)(x, y, t).   (1)

The low-pass effect of Gaussian convolution removes noise and other destabilising high frequencies. In a subsequent optic flow method, we may thus call σ the noise scale. Many differential methods for optic flow are based on the assumption that the grey values of image objects in subsequent frames do not change over time:

f(x + u, y + v, t + 1) = f(x, y, t),   (2)

where the displacement field (u, v)^T(x, y, t) is called optic flow. For small displacements, we may perform a first order Taylor expansion yielding the optic flow constraint

f_x u + f_y v + f_t = 0,   (3)

where subscripts denote partial derivatives. Evidently, this single equation is not sufficient to uniquely compute the two unknowns u and v (aperture problem): For nonvanishing image gradients, it is only possible to determine the flow component parallel to ∇f := (f_x, f_y)^T, i.e. normal to image edges. This so-called normal flow is given by

w_n = -f_t ∇f / |∇f|².   (4)

Figure 1(a) depicts one frame from the famous
Hamburg taxi sequence.² We have added Gaussian noise, and in Fig. 1(b)-(d) we illustrate the impact of presmoothing the image data on the normal flow. While some moderate presmoothing improves the results, great care should be taken not to apply too much presmoothing, since this would severely destroy important image structure. In order to cope with the aperture problem, Lucas and Kanade (1981) and Lucas (1984) proposed to assume that the unknown optic flow vector is constant within some neighbourhood of size ρ. In this case it is possible to determine the two constants u and v at some location (x, y, t) from a weighted least square fit by minimising the function

E_LK(u, v) := K_ρ * (f_x u + f_y v + f_t)².   (5)

Here the standard deviation ρ of the Gaussian serves as an integration scale over which the main contribution of the least square fit is computed. A minimum (u, v) of E_LK satisfies ∂_u E_LK = 0 and ∂_v E_LK = 0. This gives the linear system

[ K_ρ * f_x²       K_ρ * (f_x f_y) ] [u]   [ -K_ρ * (f_x f_t) ]
[ K_ρ * (f_x f_y)  K_ρ * f_y²      ] [v] = [ -K_ρ * (f_y f_t) ]   (6)

Figure 1. From left to right, and from top to bottom: (a) Frame 10 of the Hamburg taxi sequence, where Gaussian noise with standard deviation σ_n = 10 has been added. The white taxi turns around the corner, the left car drives to the right, and the right van moves to the left. (b) Normal flow magnitude without presmoothing. (c) Normal flow magnitude, presmoothing with σ = 1. (d) Ditto, presmoothing with σ = 5. (e) Lucas-Kanade method with σ = 0, ρ = 7.5. (f) Ditto, σ = 0, ρ = 15. (g) Optic flow magnitude with the Horn-Schunck approach, σ = 0, α = 10⁵. (h) Ditto, σ = 0, α = 10⁶.

which can be solved provided that its system matrix is invertible. This is not the case in flat regions where the image gradient vanishes. In some other regions, the smaller eigenvalue of the system matrix may be close to 0, such that the aperture problem remains present and the data do not allow a reliable determination of the full optic flow. All this results in nondense flow fields. They constitute the most severe drawback of local gradient
methods: Since many computer vision applications require dense flow estimates, subsequent interpolation steps are needed. On the other hand, one may use the smaller eigenvalue of the system matrix as a confidence measure that characterises the reliability of the estimate. Experiments by Barron et al. (1994) indicated that this performs better than the trace-based confidence measure in Simoncelli et al. (1991). Figure 1(e) and (f) show the influence of the integration scale ρ on the final result. In these images we have displayed the entire flow field regardless of its local reliability. We can see that in each case, the flow field has typical structures of order ρ. In particular, a sufficiently large value for ρ is very successful in rendering the Lucas-Kanade method robust under noise. In order to end up with dense flow estimates one may embed the optic flow constraint into a regularisation framework. Horn and Schunck (1981) have pioneered this class of global differential methods. They determine the unknown functions u(x, y, t) and v(x, y, t) as the minimisers of the global energy functional

E_HS(u, v) = ∫_Ω ((f_x u + f_y v + f_t)² + α(|∇u|² + |∇v|²)) dx dy   (7)

where the smoothness weight α > 0 serves as regularisation parameter: Larger values for α result in a stronger penalisation of large flow gradients and lead to smoother flow fields. Minimising this convex functional comes down to solving its corresponding Euler-Lagrange equations (Courant and Hilbert, 1953; Elsgolc, 1961). They are given by

0 = Δu - (1/α)(f_x² u + f_x f_y v + f_x f_t),   (8)
0 = Δv - (1/α)(f_x f_y u + f_y² v + f_y f_t),   (9)

with reflecting boundary conditions.
Δ denotes the spatial Laplace operator:

Δ := ∂_xx + ∂_yy.   (10)

The solution of these diffusion-reaction equations is not only unique (Schnörr, 1991), it also benefits from the filling-in effect: At locations with |∇f| ≈ 0, no reliable local flow estimate is possible, but the regulariser |∇u|² + |∇v|² fills in information from the neighbourhood. This results in dense flow fields and makes subsequent interpolation steps obsolete. This is a clear advantage over local methods. It has, however, been criticised that for such global differential methods, no good confidence measures are available that would help to determine locations where the computations are more reliable than elsewhere (Barron et al., 1994). It has also been observed that they may be more sensitive to noise than local differential methods (Barron et al., 1994; Galvin et al., 1998). An explanation for this behaviour can be given as follows. Noise results in high image gradients. They serve as weights in the data term of the regularisation functional (7). Since the smoothness term has a constant weight α, smoothness is relatively less important at locations with high image gradients than elsewhere.
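The Euler-Lagrange equations (8)-(9) suggest a simple pointwise iteration once the Laplacian is discretised as local average minus centre value. The sketch below is not the authors' implementation: it uses a Jacobi-style update with periodic boundary handling via np.roll, np.gradient derivatives, and illustrative parameter values.

```python
import numpy as np

def horn_schunck(f1, f2, alpha=1.0, n_iter=500):
    """Jacobi-style iteration for the Horn-Schunck equations (8)-(9).
    With Delta_u ~ ubar - u (local average minus centre), solving the coupled
    2x2 system pointwise gives the classic update
    u = ubar - fx*(fx*ubar + fy*vbar + ft)/(alpha + fx^2 + fy^2)."""
    fx = np.gradient(f1, axis=1)   # spatial derivatives of the first frame
    fy = np.gradient(f1, axis=0)
    ft = f2 - f1                   # two-point temporal derivative
    u = np.zeros_like(f1)
    v = np.zeros_like(f1)

    def local_avg(a):
        # 4-neighbour average; np.roll implies periodic boundaries (a simplification).
        return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                np.roll(a, 1, 1) + np.roll(a, -1, 1)) / 4.0

    denom = alpha + fx**2 + fy**2
    for _ in range(n_iter):
        ubar, vbar = local_avg(u), local_avg(v)
        common = (fx * ubar + fy * vbar + ft) / denom
        u = ubar - fx * common
        v = vbar - fy * common
    return u, v

# Synthetic check: shift a smooth pattern one pixel to the right (true flow u = +1).
X, Y = np.meshgrid(np.arange(64.0), np.arange(64.0))
f1 = np.sin(0.3 * X) + np.cos(0.2 * Y)
f2 = np.roll(f1, 1, axis=1)
u, v = horn_schunck(f1, f2)
```

The recovered field is dense everywhere, including regions where the local gradient nearly vanishes, which is precisely the filling-in effect discussed above.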
As a consequence, flow fields are less regularised at noisy image structures. This sensitivity under noise is therefore nothing else but a side-effect of the desired filling-in effect. Figure 1(g) and (h) illustrate this behaviour. Figure 1(g) shows that the flow field does not reveal a uniform scale: It lives on a fine scale at high gradient image structures, and the scale may become very large when the image gradient tends to zero. Increasing the regularisation parameter α will finally also smooth the flow field at noisy structures, but at this stage, it might already be too blurred in flatter image regions (Fig. 1(h)).

3. A Combined Local-Global Method

We have seen that both local and global differential methods have complementary advantages and shortcomings. Hence it would be interesting to construct a hybrid technique that constitutes the best of two worlds: It should combine the robustness of local methods with the density of global approaches. This shall be done next. We start with spatial formulations before we extend the approach to the spatiotemporal domain.

3.1. Spatial Approach

In order to design a combined local-global (CLG) method, let us first reformulate the previous approaches. Using the notations

w := (u, v, 1)^T,   (11)
|∇w|² := |∇u|² + |∇v|²,   (12)
∇₃f := (f_x, f_y, f_t)^T,   (13)
J_ρ(∇₃f) := K_ρ * (∇₃f ∇₃f^T)   (14)

it becomes evident that the Lucas-Kanade method minimises the quadratic form

E_LK(w) = w^T J_ρ(∇₃f) w,   (15)

while the Horn-Schunck technique minimises the functional

E_HS(w) = ∫_Ω (w^T J₀(∇₃f) w + α|∇w|²) dx dy.   (16)

This terminology suggests a natural way to extend the Horn-Schunck functional to the desired CLG functional. We simply replace the matrix J₀(∇₃f) by the structure tensor J_ρ(∇₃f) with some integration scale ρ > 0. Thus, we propose to minimise the functional

E_CLG(w) = ∫_Ω (w^T J_ρ(∇₃f) w + α|∇w|²) dx dy.   (17)

Its minimising flow field (u, v) satisfies the Euler-Lagrange equations

0 = Δu - (1/α)(K_ρ * f_x² u + K_ρ * (f_x f_y) v + K_ρ * (f_x f_t)),   (18)
0 = Δv - (1/α)(K_ρ * (f_x f_y) u + K_ρ * f_y² v + K_ρ * (f_y f_t)).   (19)

It should be noted that these equations are hardly more
complicated than the original Horn-Schunck Eqs. (8) and (9). All one has to do is to evaluate the terms containing image data at a nonvanishing integration scale. The basic structure with respect to the unknown functions u(x, y, t) and v(x, y, t) is identical. It is therefore not surprising that the well-posedness proof for the Horn-Schunck method that was presented in (Schnörr, 1991) can also be extended to this case.

3.2. Spatiotemporal Approach

The previous approaches used only spatial smoothness operators. Rapid advances in computer technology, however, make it now possible to consider also spatiotemporal smoothness operators. Formal extensions in this direction are straightforward. In general, one may expect that spatiotemporal formulations give better results than spatial ones because of the additional denoising properties along the temporal direction. In the presence of temporal flow discontinuities, smoothing along the time axis should only be used moderately. However, even in this case one can observe the beneficial effect of temporal information. A spatiotemporal variant of the Lucas-Kanade approach simply replaces convolution with 2-D Gaussians by spatiotemporal convolution with 3-D Gaussians. This still leads to a 2×2 linear system of equations for the two unknowns u and v.
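For comparison with the global approach, the pixelwise Lucas-Kanade solve of system (6) can be sketched as follows. This is our own minimal illustration, not the authors' code: Gaussian convolution K_ρ is approximated by repeated binomial smoothing, and the singularity test on the determinant is a simplification of the eigenvalue-based confidence check discussed above.

```python
import numpy as np

def smooth(a, passes=4):
    """Crude stand-in for Gaussian convolution K_rho: repeated 3-tap binomial
    smoothing along both axes (more passes ~ larger integration scale rho)."""
    k = np.array([0.25, 0.5, 0.25])
    for _ in range(passes):
        a = np.apply_along_axis(np.convolve, 0, a, k, 'same')
        a = np.apply_along_axis(np.convolve, 1, a, k, 'same')
    return a

def lucas_kanade(f1, f2, passes=4, eps=1e-9):
    """Solve the 2x2 system (6) pixelwise: J (u, v)^T = -(J13, J23)^T,
    with J the smoothed outer products of the derivatives (fx, fy, ft)."""
    fx = np.gradient(f1, axis=1)
    fy = np.gradient(f1, axis=0)
    ft = f2 - f1
    J11, J22 = smooth(fx * fx, passes), smooth(fy * fy, passes)
    J12 = smooth(fx * fy, passes)
    J13, J23 = smooth(fx * ft, passes), smooth(fy * ft, passes)
    det = J11 * J22 - J12**2
    ok = det > eps                       # reject (near-)singular system matrices
    safe = np.where(ok, det, 1.0)
    u = np.where(ok, (J12 * J23 - J22 * J13) / safe, 0.0)
    v = np.where(ok, (J12 * J13 - J11 * J23) / safe, 0.0)
    return u, v, ok

# Synthetic check: a smooth blob shifted one pixel to the right (true flow u = +1).
X, Y = np.meshgrid(np.arange(64.0), np.arange(64.0))
f1 = np.exp(-((X - 32)**2 + (Y - 32)**2) / 50.0)
f2 = np.roll(f1, 1, axis=1)
u, v, ok = lucas_kanade(f1, f2)
```

In contrast to the global method, the estimate is nondense: far from the blob the system matrix is singular and the mask `ok` rejects those pixels, which is exactly the behaviour criticised in the text.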
Spatiotemporal versions of the Horn-Schunck method have been considered by Elad and Feuer (1998), while discontinuity-preserving global methods with spatiotemporal regularisers have been proposed in different formulations in Black and Anandan (1991), Murray and Buxton (1987), Nagel (1990), and Weickert and Schnörr (2001). Combining the temporally extended variants of both the Lucas-Kanade and the Horn-Schunck method, we obtain a spatiotemporal version of our CLG functional given by

E_CLG3(w) = ∫_{Ω×[0,T]} (w^T J_ρ(∇₃f) w + α|∇₃w|²) dx dy dt   (20)

where convolutions with Gaussians are now to be understood in a spatiotemporal way and

|∇₃w|² := |∇₃u|² + |∇₃v|².   (21)

Due to the different role of space and time, the spatiotemporal Gaussians may have different standard deviations in both directions. Let us denote by J_nm the component (n, m) of the structure tensor J_ρ(∇₃f). Then the Euler-Lagrange equations for (20) are given by

Δ₃u - (1/α)(J₁₁ u + J₁₂ v + J₁₃) = 0,   (22)
Δ₃v - (1/α)(J₁₂ u + J₂₂ v + J₂₃) = 0.   (23)

One should note that they have the same structure as (18)-(19), apart from the fact that spatiotemporal Gaussian convolution is used, and that the spatial Laplacean is replaced by the spatiotemporal Laplacean

Δ₃ := ∂_xx + ∂_yy + ∂_tt.   (24)

The spatiotemporal Lucas-Kanade method is similar to the approach of Bigün and Granlund (1988) and Bigün et al. (1991). In the Appendix we show how the latter method can be embedded in a global energy functional.

4. Nonquadratic Approach

So far the underlying Lucas-Kanade and Horn-Schunck approaches are linear methods that are based on quadratic optimisation. It is possible to replace them by nonquadratic optimisation problems that lead to nonlinear methods. From a statistical viewpoint this can be regarded as applying methods from robust statistics where outliers are penalised less severely than in quadratic approaches (Hampel et al., 1986; Huber, 1981). In general, nonlinear methods give better results at locations with flow discontinuities. Robust variants of the Lucas-Kanade method have
been investigated by Black and Anandan (1996) and by Yacoob and Davis (1999), respectively, while a survey of the numerous convex discontinuity-preserving regularisers for global optic flow methods is presented in Weickert and Schnörr (2001). In order to render our approach more robust against outliers in both the data and the smoothness term, we propose the minimisation of the following functional:

E_CLG3-N(w) = ∫_{Ω×[0,T]} (ψ₁(w^T J_ρ(∇₃f) w) + α ψ₂(|∇₃w|²)) dx dy dt   (25)

where ψ₁(s²) and ψ₂(s²) are nonquadratic penalisers. Encouraging experiments with related continuous energy functionals have been performed by Hinterberger et al. (2002). Suitable nonquadratic penalisers can be derived from nonlinear diffusion filter design, where preservation or enhancement of discontinuities is also desired (Weickert, 1998). In order to guarantee well-posedness for the remaining problem, we focus only on penalisers that are convex in s. In particular, we use a function that has been proposed by Charbonnier et al. (1994):

ψ_i(s²) = 2β_i² sqrt(1 + s²/β_i²),   i ∈ {1, 2}   (26)

where β₁ and β₂ are scaling parameters. Under some technical requirements, the choice of convex penalisers ensures a unique solution of the minimisation problem and allows to construct simple globally convergent algorithms. The Euler-Lagrange equations of the energy functional (25) are given by

0 = div(ψ₂'(|∇₃w|²) ∇₃u) - (1/α) ψ₁'(w^T J_ρ(∇₃f) w)(J₁₁ u + J₁₂ v + J₁₃),   (27)
0 = div(ψ₂'(|∇₃w|²) ∇₃v) - (1/α) ψ₁'(w^T J_ρ(∇₃f) w)(J₁₂ u + J₂₂ v + J₂₃),   (28)

with

ψ_i'(s²) = 1 / sqrt(1 + s²/β_i²),   i ∈ {1, 2}.   (29)

One should note that for large values of β_i the nonlinear case comes down to the linear one, since ψ_i'(s²) ≈ 1.

5. Multiresolution Approach

All variants of the CLG method considered so far are based on a linearisation of the grey value constancy assumption. As a consequence, u and v are required to be relatively small so that the linearisation holds. Obviously, this cannot be guaranteed for arbitrary sequences. However, there are strategies that allow to overcome this limitation. These so-called multiscale focusing or multiresolution techniques (Anandan, 1989; Black and
Anandan, 1996; Mémin and Pérez, 1998; Mémin and Pérez, 2002) incrementally compute the optic flow field based on a sophisticated coarse-to-fine strategy: Starting from a coarse scale, the resolution is refined step by step. However, the estimated flow field at a coarser level is not used as initialisation at the next finer scale. In particular for energy functionals with a global minimum, such a proceeding would only lead to an acceleration of the convergence, since the result would not change. Instead, the coarse scale motion is used to warp (correct) the original sequence before going to the next finer level. This compensation for the already computed motion results in a hierarchy of modified problems that only require to compute small displacement fields, the so-called motion increments. Thus it is not surprising that the final displacement field obtained by a summation of all motion increments is much more accurate regarding the linearisation of the grey value constancy assumption.

Let δw^m denote the motion increment at resolution level m, where m = 0 is the coarsest level with initialisation w⁰ = (0, 0, 0)^T. Then δw^m is obtained by optimisation of the following spatiotemporal energy functional:

E^m_CLG3-N(δw^m) = ∫_{Ω×[0,T]} (ψ₁((δw^m)^T J_ρ(∇₃f(x + w^m)) δw^m) + α ψ₂(|∇₃(w^m + δw^m)|²)) dx dy dt

where w^{m+1} = w^m + δw^m and x = (x, y, t)^T. One should note that warping the original sequence does only affect the data term. Since the smoothness assumption applies to the complete flow field, w^m + δw^m is used as argument of the penaliser. If we denote the structure tensor of the corrected sequence by J^m_ρ = J_ρ(∇₃f(x + w^m)), the corresponding Euler-Lagrange equations are given by

0 = div(ψ₂'(|∇₃(w^m + δw^m)|²) ∇₃δu^m) - (1/α) ψ₁'((δw^m)^T J^m_ρ δw^m)(J^m₁₁ δu^m + J^m₁₂ δv^m + J^m₁₃),   (30)
0 = div(ψ₂'(|∇₃(w^m + δw^m)|²) ∇₃δv^m) - (1/α) ψ₁'((δw^m)^T J^m_ρ δw^m)(J^m₁₂ δu^m + J^m₂₂ δv^m + J^m₂₃).   (31)

6. Algorithmic Realisation

6.1. Spatial and Spatiotemporal Approach

Let us now discuss a suitable algorithm for the CLG method (18) and
(19) and its spatiotemporal variant. To this end we consider the unknown functions u(x, y, t) and v(x, y, t) on a rectangular pixel grid of size h, and we denote by u_i the approximation to u at some pixel i with i = 1, ..., N. Gaussian convolution is realised in the spatial/spatiotemporal domain by discrete convolution with a truncated and renormalised Gaussian, where the truncation took place at 3 times the standard deviation. Symmetry and separability have been exploited in order to speed up these discrete convolutions. Spatial derivatives of the image data have been approximated using a sixth-order approximation with the stencil (-1, 9, -45, 0, 45, -9, 1)/(60h). Temporal derivatives are either approximated with a simple two-point stencil or the fifth-order approximation (-9, 125, -2250, 2250, -125, 9)/(1920h). Let us denote by J_nmi the component (n, m) of the structure tensor J_ρ(∇₃f) in some pixel i. Furthermore, let N(i) denote the set of (4 in 2-D, 6 in 3-D) neighbours of pixel i. Then a finite difference approximation to the Euler-Lagrange equations (18)-(19) is given by

0 = Σ_{j∈N(i)} (u_j - u_i)/h² - (1/α)(J₁₁ᵢ u_i + J₁₂ᵢ v_i + J₁₃ᵢ),   (32)
0 = Σ_{j∈N(i)} (v_j - v_i)/h² - (1/α)(J₂₁ᵢ u_i + J₂₂ᵢ v_i + J₂₃ᵢ)   (33)

for i = 1, ..., N. This sparse linear system of equations may be solved iteratively. The successive overrelaxation (SOR) method (Young, 1971) is a good compromise between simplicity and efficiency. If the upper index denotes the iteration step, the SOR method can be written as

u_i^{k+1} = (1 - ω) u_i^k + ω [ Σ_{j∈N⁻(i)} u_j^{k+1} + Σ_{j∈N⁺(i)} u_j^k - (h²/α)(J₁₂ᵢ v_i^k + J₁₃ᵢ) ] / [ |N(i)| + (h²/α) J₁₁ᵢ ],   (34)
v_i^{k+1} = (1 - ω) v_i^k + ω [ Σ_{j∈N⁻(i)} v_j^{k+1} + Σ_{j∈N⁺(i)} v_j^k - (h²/α)(J₂₁ᵢ u_i^{k+1} + J₂₃ᵢ) ] / [ |N(i)| + (h²/α) J₂₂ᵢ ],   (35)

where

N⁻(i) := {j ∈ N(i) | j < i},   (36)
N⁺(i) := {j ∈ N(i) | j > i}   (37)

and |N(i)| denotes the number of neighbours of pixel i that belong to the image domain. The relaxation parameter ω ∈ (0, 2) has a strong influence on the convergence speed. For ω = 1 one obtains the well-known Gauß-Seidel method. We usually use values for ω between
1.9 and 1.99. This numerically inexpensive overrelaxation step results in a speed-up by one order of magnitude compared with the Gauß-Seidel approach. We initialised the flow components for the first iteration by 0. The specific choice


DISTRIBUTED INTERACTIVE SIMULATION FOR GROUP-DISTANCE EXERCISES ON THE WEB

Erik Berglund and Henrik Eriksson
Department of Computer and Information Science
Linköping University
S-581 83 Linköping, Sweden
E-mail: {eribe, her}@ida.liu.se

KEYWORDS: Distributed Interactive Simulation, Distance Education, Network, Internet, Personal Computer

ABSTRACT

In distributed-interactive simulation (DIS), simulators act as elements of a bigger distributed simulation. A group-distance exercise (GDE) based on the DIS approach can therefore enable group training for group members participating from different locations. Our GDE approach, unlike full-scale DIS systems, uses affordable simulators designed for standard hardware available in homes and offices. ERCIS (group distance exERCISe) is a prototype GDE system that we have implemented. It takes advantage of the Internet and Java to provide distributed simulation at a fraction of the cost of full-scale DIS systems. ERCIS illustrates that distributed simulation can bring advanced training to office and home computers in the form of GDE systems. The focus of this paper is to discuss the possibilities and the problems of GDE and of web-based distributed simulation as a means to provide GDE.

INTRODUCTION

Simulators can be valuable tools in education. They can reduce the cost of training and can allow training in hazardous situations (Berkum & Jong 1991). Distributed-interactive simulation (DIS) originated in military applications, where simulators from different types of forces were connected to form full battle situations. In DIS, individual simulators act as elements of a bigger distributed simulation (Loper & Seidensticker 1993). Thus, DIS could be used to create a group-distance exercise (GDE), where the participants perform a group exercise from different locations.
Even though DIS systems based on complex special-hardware simulators provide impressive training tools, the cost and immobility of these systems prohibit mass training. ERCIS (group distance exERCISe) is a prototype GDE system that uses Internet technologies to provide affordable DIS support. Internet (or intranet) technologies form a solid platform for GDE systems because they are readily available, and because they provide a high level of support for network communication and for graphical simulation. ERCIS therefore takes advantage of the programming language Java to combine group training, distance education and real-time interaction at a fraction of the cost of full-scale DIS systems. In this paper we discuss the possibilities and the problems of GDE and of web-based distributed simulation as a means to provide GDE. We do this by discussing and drawing conclusions from the ERCIS project.

BACKGROUND

Let us first provide some background on GDE, DIS, distributed objects, ERCIS's military application, and related work.

Group-Distance Exercise (GDE)

The purpose of GDE is to enable group training in distance education through the use of DIS. Unlike full-scale DIS systems, our GDE approach assumes simulators designed for standard hardware available in homes and offices. This approach calls for software-based simulators, which are less expensive to use, can be multiplied virtually limitlessly, and can enable training with expensive, dangerous and/or non-existing equipment. A thorough background on the GDE concept can be found in Computer-Based Group-Distance Exercise (Berglund 1997).

Distributed Interactive Simulation (DIS)

DIS originated as a means to utilize military simulators in full battle situations by connecting them (Loper & Seidensticker 1993).
As a result, it becomes possible to combine the use of advanced simulators and group training. The different parts of DIS systems communicate according to predefined data packets (IEEE 1995) that describe all necessary data at the bit level. The implementation of the communication is, therefore, built into DIS systems.

Distributed Objects

Distributed objects (Orfali et al. 1996) can be characterized as network-transient objects: objects that can bridge networks. Two issues must be addressed when dealing with distributed objects: locating them over the network, and transforming them from abstract data to a transportation format and vice versa. The common object request broker architecture (CORBA) is a standard protocol for distributed objects, developed by the Object Management Group (OMG). CORBA is used to cross both networks and programming languages. In CORBA, all objects are distributed via the object request broker (ORB). Objects requesting the services of a CORBA object have no knowledge about the location or the implementation of that CORBA object (Vinoski 1997). Remote method invocation (RMI) is Java's support for distributed objects among Java programs. Unlike CORBA, RMI only provides the protocol to locate and distribute abstract data. In the ERCIS project we chose RMI because ERCIS is an all-Java application. It also provided us with an opportunity to assess Java's support for distributed objects.

RBS-70 Missile Unit

ERCIS supports training of the RBS-70 missile unit of the Swedish anti-aircraft defense. The RBS-70 missile unit's main purpose is to defend objects, for instance bridges, against enemy aircraft attacks (see Figure 1). The RBS-70 missile unit is composed of sub-units: two intelligence units and nine combat units. The intelligence units use radar to discover hostile aircraft and calculate their flight data. Guided by the intelligence unit, the combat units engage the aircraft with RBS-70 missiles.
(Personal Communication).

Figure 1. The RBS-70 missile unit. Its main purpose is to defend ground targets. (Figure labels: intelligence unit, combat unit, data transfer.)

During training, the RBS-70 unit uses simulators, for instance, to simulate radar images. All or part of the RBS-70 unit's actual equipment is still used (Personal Communication).

Related Work

In military applications, there are several examples of group training conducted using distributed simulation, for instance MIND (Jenvald 1996), Janus, Eagle, the Brigade/Battalion Simulation (Loper & Seidensticker 1993), and ABS 2000. C3Fire (Granlund 1997) is an example of a tool for group training in the area of emergency management that uses distributed simulation. A common high-level architecture for modeling and simulation, which will address a broader range of simulators than DIS, is being developed (Duncan 1996). There are several educational applets on the web that use simulation; see, for instance, the Gamelan applet repository. These applets are, however, generally small, and there are few, if any, distributed simulations.

ERCIS

ERCIS is a prototype GDE system implemented in Java for the Internet (or intranets). It supports education of the RBS-70 missile unit by creating a DIS of that group's environment. We have used RMI to implement the distribution of the system. ERCIS has two principal components: the equipment simulators and the simulator server (see Figure 2). A maximum of 11 group members can participate in a single ERCIS session, representing the 11 sub-unit leaders (see Figure 3). The group members join by loading an HTML document with an embedded equipment-simulator applet.

Figure 2. The principal parts of ERCIS. The equipment simulators are connected to one another through the simulator server. (Figure labels: simulator server, intelligence unit equipment simulator, combat unit equipment simulator.)

Figure 3. A full ERCIS session. 11 equipment simulators can be active in one ERCIS session at a time.
The equipment simulators communicate via the simulator server.

Simulator Server

The simulator server controls a microworld of the group's environment, including simulated aircraft, the exercise scenario, and geographical information. The simulator server also distributes network communication among the equipment simulators. The reason for this client-server type of communication is that Java applets are generally only allowed to connect to the computer they were loaded from (Flanagan 1997). Finally, the simulator server functions as a point of reference by which the distributed parts locate one another. The simulator-server computer is therefore the only computer specified prior to an ERCIS session.

Equipment Simulators

The equipment simulators simulate equipment used by the RBS-70 sub-units and also function as user interfaces for the group members. There are two types of equipment simulators: the intelligence-unit equipment simulator and the combat-unit equipment simulator.

Intelligence Unit Equipment Simulator

The intelligence-unit equipment simulator (see Figure 4) contains a radar simulator and a target-tracking simulator. The radar simulator monitors the air space. The target-tracking simulator performs part of the work of three intelligence-unit personnel to approximate the position, speed, and course of three aircraft simultaneously. The intelligence-unit leader's task is to distribute the hostile aircraft among the combat units and to send the approximated information to them.

Figure 4. The user interface of the intelligence unit equipment simulator, running on a Sun Solaris Applet Viewer. (Figure labels: panel used to send information to the combat unit equipment simulator; target-tracking symbol used to initiate target tracking; simulated radar.)

Combat Unit Equipment Simulator

The combat unit equipment simulator (see Figure 5) contains simulators of the target-data receiver and of the RBS-70 missile launcher and operator. The target-data receiver presents information sent from the intelligence unit.
The information is recalculated relative to the combat unit's position and shows, for example, the distance and direction to the target. The RBS-70 missile launcher and operator simulator represents the missile launcher and its crew. Based on the intelligence unit's information, it can locate, track, and fire upon the target. The combat-unit leader's task is to assess the situation and to grant permission to fire if all criteria are met.

Figure 5. The user interface of the combat unit equipment simulator, running on a Windows 95 Applet Viewer. (Figure label: switch used to grant fire permission.)

DISCUSSION

Let us now, with experience from the ERCIS project, discuss problems and possibilities of GDE through DIS, and the support that Internet technologies and Java provide for GDE.

Pedagogical value

Educators can use GDE to introduce group training at an early stage by, for instance, simplifying equipment and thereby abstracting from technical details. Thus, GDE can focus the training on group performance. GDE could also be used to support knowledge recapitulation for expert practitioners. Reliving the group's tasks and environment can provide a more vivid experience than notes and books. GDE systems can automate the collection and evaluation of performance statistics. It is possible to log sessions and replay them to visualize comments on performance in after-action reviews (Jenvald 1996). To fully assess the value of GDE, however, real-world testing is required.

Security

Access control, software authentication, and communication encryption are examples of security issues that concern GDE and distributed simulators. Java 1.1 provides basic security, which includes fine-grained access control and signed applets. The use of dedicated GDE intranets would increase security, especially access control. It would, however, reduce the participants' freedom to choose the location from which to participate. Security restrictions, motivated or not, limit applications.
ERCIS was designed with a client-server type of communication because web browsers enforce security restrictions on applets (Flanagan 1997). Peer-to-peer communication would have been more suitable, from the perspective of both implementation and scalability. We are not saying that web browsers should give applets total freedom; in ERCIS, it would have sufficed if applets were allowed to make remote calls to RMI objects regardless of their location.

Performance characteristics

Our initial concerns about the speed of RMI calls and of data transfer over the Internet proved to be unfounded. The speed of communication is not a limiting factor for ERCIS; for instance, a modem link (28.8 kilobits per second) is sufficient to participate in exercises. Instead, the speed of animation limits ERCIS. To provide smooth animation, ERCIS requires more than the standard hardware of today, for instance a Pentium Pro machine or better.

Scalability

ERCIS scales relatively well in response to increased network load, because the volume of data transmitted among the distributed parts is very small, on the order of 1 Kbyte. Incorporating new and better simulators in ERCIS requires considerable programming effort. In a full-scale GDE system it could be beneficial to modularize the simulators in a plug-and-play fashion, to allow variable simulator complexity.

Download time

The download time for applets the size of ERCIS's equipment simulator can be very long. One way to overcome this problem is to create Java archive (JAR) files. JAR files aggregate many files into one and also compress them, which decreases the download time considerably. Push technology such as Marimba's Castanet could also be used to provide automatic distribution of the equipment-simulator software.

Distributed objects

Distributed objects, such as RMI, provide a high level of abstraction in network communication compared to the DIS protocol.
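The two points above can be made concrete with a small sketch: a distributed-object layer derives the transport format from the object definition automatically (instead of a hand-specified bit-level packet as in the DIS protocol), and the paper's own figures (roughly 1-Kbyte updates over a 28.8 kbit/s modem) imply per-update transfer times well under a second. The class, field names, and Python serialization below are illustrative stand-ins, not the ERCIS or DIS formats.

```python
import pickle
from dataclasses import dataclass

@dataclass
class AircraftState:
    """Toy state-update object; the fields are invented for illustration."""
    aircraft_id: int
    x: float
    y: float
    heading: float
    speed: float

def transmission_time(payload_bytes: int, link_bits_per_second: int) -> float:
    """Seconds needed to push the payload through the link, ignoring latency."""
    return payload_bytes * 8 / link_bits_per_second

# With a distributed-object layer, the transport format is derived automatically
# from the object definition; no bit-level packet layout needs to be hand-coded.
update = AircraftState(aircraft_id=7, x=1250.0, y=980.0, heading=90.0, speed=220.0)
payload = pickle.dumps(update)

# Even a generously padded 1-Kbyte update fits a 28.8 kbit/s modem link easily:
t = transmission_time(1024, 28_800)
print(f"serialized size: {len(payload)} bytes; 1 KB takes {t:.2f} s at 28.8 kbit/s")
```

The arithmetic matches the paper's observation that bandwidth is not the bottleneck: a 1-Kbyte update needs only about 0.28 seconds on a 28.8 kbit/s link.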
There are several examples of typical distributed applications that do not utilize distributed objects but would benefit greatly from this approach. Two examples are the Nuclear Power Plant applet (Eriksson 1997) and NASA's distributed control of the Sojourner rover.

CONCLUSION

ERCIS is a GDE prototype that can be used in training under teacher supervision or as part of a web site where web pages provide additional information. The system illustrates that the GDE approach can provide equipment-free mass training, which is beneficial especially in military applications, where training can be extremely expensive. Java proved to be a valuable tool for the implementation of ERCIS. Java's level of abstraction is high in the two areas that concern ERCIS: animation and distributed objects. Java's speed of animation is, however, too slow to enable acceptable performance for highly graphics-oriented simulators. Apart from this, Java has supplied the support that can be expected from a programming language such as C++. Using RMI to implement distribution was straightforward. Compared to the DIS protocol, RMI provides a flexible and dynamic communication protocol. In conclusion, ERCIS illustrates that it is possible to use Internet technologies to develop affordable DIS systems. It also shows that distributed simulations can bring advanced training to office and home computers in the form of GDE systems.

Acknowledgments

We would like to thank Major Per Bergström at the Center for Anti-Aircraft Defense in Norrtälje, Sweden, for supplying domain knowledge of the RBS-70 missile unit. This work has been supported in part by the Swedish National Board for Industrial and Technical Development (Nutek) grant no. 93-3233, and by the Swedish Research Council for Engineering Science (TFR) grant no. 95-186.

REFERENCES

Berglund E. (1997) Computer-Based Group Distance Exercise, M.Sc. thesis no.
97/36, Department of Computer and Information Science, Linköping University (http://www.ida.liu.se/~eribe/publication/GDE.zip: compressed PostScript file).
van Berkum J., de Jong T. (1991) Instructional environments for simulations, Education & Computing, vol. 6: 305-358.
Duncan C. (1996) The DoD High Level Architecture and the Next Generation of DIS, Proceedings of the Fourteenth Workshop on Interoperability of Distributed Simulation, Orlando, Florida.
Eriksson H. (1996) Expert Systems as Knowledge Servers, IEEE Expert, vol. 11, no. 3: 14-19.
Flanagan D. (1997) Java in a Nutshell, 2nd Edition, O'Reilly, Sebastopol, CA.
Granlund R. (1997) C3Fire: A Microworld Supporting Emergency Management Training, licentiate thesis no. 598, Department of Computer and Information Science, Linköping University.
IEEE (1995) IEEE Standard for Distributed Interactive Simulation--Application Protocols, IEEE 1278.1-1995 (Standard): IEEE.
Jenvald J. (1996) Simulation and Data Collection in Battle Training, licentiate thesis no. 567, Department of Computer and Information Science, Linköping University.
Loper M., Seidensticker S. (1994) The DIS Vision: A Map to the Future of Distributed Simulation, Orlando, Florida: Institute for Simulation & Training (/SISO/dis/library/vision.doc).
Orfali R., Harkey D., Edwards J. (1996) The Essential Distributed Objects Survival Guide, John Wiley, New York.
Vinoski S. (1997) CORBA: Integrating Diverse Applications Within Distributed Heterogeneous Environments, IEEE Communications, vol. 14, no.
2.

RESOURCES ON THE WEB

The OMG home page: /CORBA
JavaSoft's Java 1.1 documentation: /products/jdk/1.1/docs/index.html
The Gamelan applet repository: /
Marimba's Castanet home page: http://www.marimba.com/products/castanet.html
The Nuclear Power Plant applet (Eriksson 1995): http://www.ida.liu.se/~her/npp/demo.html
NASA's Sojourner, technical details on the control distribution: /features/1997/july/juicy.wits.details.html

Authors

Erik Berglund is a doctoral student of computer science at Linköping University. His research interests include knowledge acquisition, program understanding, software engineering, and computer-supported education. He received his M.Sc. at Linköping University in 1997.

Henrik Eriksson is an assistant professor of computer science at Linköping University. His research interests include expert systems, knowledge acquisition, reusable problem-solving methods, and medical informatics. He received his M.Sc. and Ph.D. at Linköping University in 1987 and 1991. He was a postdoctoral fellow and research scientist at Stanford University between 1991 and 1994. Since 1996, he has been a guest researcher at the Swedish Institute of Computer Science (SICS).

Seminar Announcement

Speaker: Rui Tan (Nanyang Technological University, Singapore)
Title: Cyber-Physical Approaches to Sustainable Power Grids
Time: 9:00, Monday, June 6, 2016
Venue: Room 2-406, Electrical Engineering Building complex
Host: Prof. Tao Fang

Abstract: While power grids are evolving in the direction of cyber-physical systems, new computing methodologies must be developed to enhance grid sustainability by improving the efficiency of energy consumption, reliability against natural faults, and cybersecurity against malicious attacks. This talk will present our recent research results on power grid efficiency, reliability, and cybersecurity. In particular, it will focus on cybersecurity and countermeasures in automatic generation control (AGC), a fundamental closed-loop control system in all power grids that regulates grid frequency at a nominal value. The inputs to AGC, i.e., various measurements collected from geographically distributed sensors over computer networks, are susceptible to attacks. This work shows that, starting from little prior information and based on passively eavesdropped sensor measurements, an attacker can establish an accurate dynamic model of how tampering with these sensor measurements affects the grid frequency. Based on the model, the attacker can compute stealthy attack vectors that bypass various sensor data quality checkers and minimize the remaining time before the grid must apply remedial actions such as disconnecting customers. This work also develops algorithms to detect and profile such attacks. In addition, this talk will briefly present other projects on residential power disaggregation using a wireless sensor system, grid reliability enhancement by demand response, security of electricity real-time pricing, and security of traction power systems.

Biography: Rui Tan is an Assistant Professor at the School of Computer Science and Engineering, Nanyang Technological University, Singapore.
Previously, he was a Research Scientist (2012-2015) and a Senior Research Scientist (2015) at the Advanced Digital Sciences Center, a Singapore-based research center of the University of Illinois at Urbana-Champaign (UIUC); a Principal Research Affiliate (2012-2015) at the Coordinated Science Lab of UIUC; and a postdoctoral Research Associate (2010-2012) at Michigan State University. He received his Ph.D. (2010) in computer science from City University of Hong Kong, and his B.S. (2004) and M.S. (2007) degrees from Shanghai Jiao Tong University. His research interests include cyber-physical systems, sensor networks, and pervasive computing systems. His papers at the 2013 IEEE Intl. Conf. on Pervasive Computing & Communications (PerCom) and the 2014 ACM/IEEE Intl. Conf. on Information Processing in Sensor Networks (IPSN) were Best Paper finalists. He has published 40+ research papers at prestigious conferences and in IEEE/ACM transactions. He has also served on the TPC of the IEEE Real-Time Systems Symposium (RTSS), INFOCOM, ICPADS, EWSN, and others.
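The attack mechanism sketched in the abstract (biasing the sensor inputs of a frequency-regulation loop so that the controller steers the true frequency away from nominal while every individual reading still looks plausible) can be illustrated with a deliberately toy simulation. The dynamics, gains, and the 0.5 Hz threshold below are all invented for illustration and are not taken from the talk or from any real AGC implementation.

```python
def simulate_frequency(steps, bias, inertia=0.9, gain=0.1, nominal=50.0):
    """Toy frequency-regulation loop: the controller corrects the *measured*
    deviation, so a constant sensor bias drags the true frequency off nominal."""
    freq = nominal
    trajectory = []
    for _ in range(steps):
        measured = freq + bias                      # tampered sensor reading
        correction = -gain * (measured - nominal)   # controller acts on the lie
        freq = nominal + inertia * (freq - nominal) + correction
        trajectory.append(freq)
    return trajectory

# A bias small enough to pass a naive +/-0.5 Hz range check still shifts
# the steady-state true frequency: here 50.0 Hz vs roughly 49.8 Hz.
clean = simulate_frequency(200, bias=0.0)
attacked = simulate_frequency(200, bias=0.4)
print(f"without attack: {clean[-1]:.3f} Hz; with attack: {attacked[-1]:.3f} Hz")
```

The point of the toy is the stealth property: the *measured* value (true frequency plus bias) stays near nominal and inside the range check, while the *true* frequency settles at an offset, which is the qualitative effect the talk describes.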

Alan_Turing


At 39 years old, he put forward his theory of nonlinear biological growth
Colossus
Enigma
Turing Machine
Turing Award
Government apology and pardon
In August 2009, John Graham-Cumming started a petition urging the British Government to apologize for Turing's prosecution as a homosexual. The petition received more than 30,000 signatures. Prime Minister Gordon Brown acknowledged the petition, releasing a statement on 10 September 2009 apologizing and describing the treatment of Turing as "appalling".
During the Second World War, Turing worked for the Government Code and Cypher School (GC&CS) at Bletchley Park, Britain's codebreaking centre. For a time he led Hut 8, the section responsible for German naval cryptanalysis. He devised a number of techniques for breaking German ciphers, including improvements to the pre-war Polish bomba method, an electromechanical machine that could find settings for the Enigma machine. Turing's pivotal role in cracking intercepted coded messages enabled the Allies to defeat the Nazis in many crucial engagements, including the Battle of the Atlantic; it has been estimated that the work at Bletchley Park shortened the war in Europe by as many as two to four years.

To transfer or not to transfer

To Transfer or Not To Transfer

Michael T. Rosenstein, Zvika Marx, Leslie Pack Kaelbling
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139
{mtr, zvim, lpk}@

Thomas G. Dietterich
School of Electrical Engineering and Computer Science
Oregon State University
Corvallis, OR 97331
tgd@

Abstract

With transfer learning, one set of tasks is used to bias learning and improve performance on another task. However, transfer learning may actually hinder performance if the tasks are too dissimilar. As described in this paper, one challenge for transfer learning research is to develop approaches that detect and avoid negative transfer using very little data from the target task.

1 Introduction

Transfer learning involves two interrelated learning problems with the goal of using knowledge about one set of tasks to improve performance on a related task. In particular, learning for some target task, the task on which performance is ultimately measured, is influenced by inductive bias learned from one or more auxiliary tasks, e.g., [1, 2, 8, 9]. For example, athletes make use of transfer learning when they practice fundamental skills to improve training in a more competitive setting. Even for the restricted class of problems addressed by supervised learning, transfer can be realized in many different ways. For instance, Caruana [2] trained a neural network on several tasks simultaneously as a way to induce efficient internal representations for the target task. Wu and Dietterich [9] showed improved image classification by SVMs when trained on a large set of related images but relatively few target images. Sutton and McCallum [7] demonstrated effective transfer by "cascading" a class of graphical models, with the prediction from one classifier serving as a feature for the next one in the cascade.
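The cascading idea, one classifier's prediction becoming an input feature for the next, can be sketched with two toy scorers. The models, features, and weights below are invented for illustration; Sutton and McCallum's actual work cascades conditional random fields, not threshold rules.

```python
def aux_score(features):
    """Toy auxiliary classifier: a fixed threshold rule standing in for a
    model trained on the auxiliary task."""
    return 1.0 if sum(features) > 1.0 else 0.0

def target_classify(features, weights, bias):
    """Target classifier that consumes the auxiliary prediction as an extra
    feature: this appending step is the 'cascade'."""
    augmented = features + [aux_score(features)]
    activation = sum(w * f for w, f in zip(weights, augmented)) + bias
    return 1 if activation > 0 else 0

# The last weight attaches to the auxiliary classifier's output, so the
# target model can learn how much to trust the upstream prediction.
weights = [0.2, 0.3, 1.5]
print(target_classify([0.9, 0.8], weights, bias=-1.0))  # aux fires  -> 1
print(target_classify([0.1, 0.2], weights, bias=-1.0))  # aux silent -> 0
```

A useful property of this design is that the weight on the cascaded feature is learned like any other, so a target model trained on a dissimilar auxiliary task can, in principle, drive that weight toward zero.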
In this paper we focus on transfer using hierarchical Bayesian methods, and elsewhere we report on transfer using learned prior distributions over classifier parameters [5]. In broad terms, the challenge for a transfer learning system is to learn what knowledge should be transferred and how. The emphasis of this paper is the more specific problem of deciding when transfer should be attempted for a particular class of learning algorithms. With no prior guarantee that the auxiliary and target tasks are sufficiently similar, an algorithm must use the available data to guide transfer learning. We are particularly interested in the situation where an algorithm must detect, perhaps implicitly, that the inductive bias learned from the auxiliary tasks will actually hurt performance on the target task. In the next section, we describe a "transfer-aware" version of the naive Bayes classification algorithm. We then illustrate that the benefits of transfer learning depend, not surprisingly, on the similarity of the auxiliary and target tasks. The key challenge is to identify harmful transfer with very few training examples from the target task. With larger amounts of "target" data, the need for auxiliary training becomes diminished and transfer learning becomes unnecessary.

2 Hierarchical Naive Bayes

The standard naive Bayes algorithm, which we call flat naive Bayes in this paper, has proven to be effective for learning classifiers in non-transfer settings [3]. The flat naive Bayes algorithm constructs a separate probabilistic model for each output class, under the "naive" assumption that each feature has an independent impact on the probability of the class. We chose naive Bayes not only for its effectiveness but also for its relative simplicity, which facilitates analysis of our hierarchical version of the algorithm. Hierarchical Bayesian models, in turn, are well suited for transfer learning because they effectively combine data from multiple sources, e.g., [4]. To simplify our presentation we assume that
just two tasks, A and B, provide sources of data, although the methods extend easily to multiple A data sources. The flat version of naive Bayes merges all the data without distinction, whereas the hierarchical version constructs two ordinary naive Bayes models that are coupled together. Let θA_i and θB_i denote the i-th parameter in the two models. Transfer is achieved by encouraging θA_i and θB_i to have similar values during learning. This is implemented by assuming that θA_i and θB_i are both drawn from a common hyperprior distribution, P_i, that is designed to have unknown mean but small variance. Consequently, at the start of learning, the values of θA_i and θB_i are unknown, but they are constrained to be similar. As with any Bayesian learning method, learning consists of computing posterior distributions for all of the parameters in the two models, including the hyperprior parameters. The overall model can "decide" that two parameters are very similar (by decreasing the variance of the hyperprior) or that two other parameters are very different (by increasing the variance of the hyperprior). To compute the posterior distributions, we developed an extension of the "slice sampling" method introduced by Neal [6].

3 Experiments

We tested the hierarchical naive Bayes algorithm on data from a meeting acceptance task.
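The effect of the shared hyperprior can be approximated with a simple shrinkage computation: the two task-specific estimates are pulled toward their common mean, with the pull controlled by the hyperprior variance. This is a crude stand-in for the paper's slice-sampling posterior inference, and all numbers below are invented for illustration.

```python
def shrink_toward_common_mean(theta_a, theta_b, hyper_variance, within_variance=1.0):
    """Pull two task-specific estimates toward their shared mean.
    A small hyper_variance encodes a strong belief that the tasks are similar;
    weight -> 0 forces the estimates together, weight -> 1 leaves them apart."""
    common = (theta_a + theta_b) / 2
    weight = hyper_variance / (hyper_variance + within_variance)
    return (common + weight * (theta_a - common),
            common + weight * (theta_b - common))

# Strong coupling (small hyperprior variance): estimates nearly merge.
print(shrink_toward_common_mean(0.9, 0.1, hyper_variance=0.05))
# Weak coupling (large hyperprior variance): estimates keep per-task values.
print(shrink_toward_common_mean(0.9, 0.1, hyper_variance=20.0))
```

The full model goes further than this sketch: it infers the hyperprior variance itself from the data, which is how it can "decide" per parameter whether the two tasks agree.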
For this task, the goal is to learn to predict whether a person will accept an invitation to a meeting given information about (a) the current state of the person's calendar, (b) the person's roles and relationships to other people and projects in his or her world, and (c) a description of the meeting request including time, place, topic, importance, and expected duration. Twenty-one individuals participated in the experiment: eight from a military exercise and 13 from an academic setting. Each individual supplied between 99 and 400 labeled examples (3966 total examples). Each example was represented as a 15-dimensional feature vector that captured relational information about the inviter, the proposed meeting, and any conflicting meetings. The features were designed with the meeting acceptance task in mind but were not tailored to the algorithms studied.

Figure 1: Effects of B training set size on performance of the hierarchical naive Bayes algorithm for three cases: no transfer ("B-only") and transfer between similar and dissimilar individuals. In each case, the same person served as the B data source. Filled circles denote statistically significant differences (p < 0.05) between the corresponding transfer and B-only conditions. (Axes: amount of Task B training (# instances), 0-32; Task B performance (% correct).)

For each experiment, a single person was chosen as the target (B) data source; 100 of his or her examples were set aside as a holdout test set, and from the remaining examples either 2, 4, 8, 16, or 32 were used for training. These training and test sets were disjoint and stratified by class. All of the examples from one or more other individuals served as the auxiliary (A) data source. Figure 1 illustrates the performance of the hierarchical naive Bayes algorithm for a single B data source and two representative A data sources. Also shown is the performance for the standard algorithm that ignores the auxiliary data (denoted "B-only" in the figure). Transfer learning has a clear advantage over
the B-only approach when the A and B data sources are similar, but the effect is reversed when A and B are too dissimilar. Figure 2a demonstrates that the hierarchical naive Bayes algorithm almost always performs at least as well as flat naive Bayes, which simply merges all the available data. Figure 2b shows the more interesting comparison between the hierarchical and B-only algorithms. The hierarchical algorithm performs well, although the large gray regions depict the many pairs of dissimilar individuals that lead to negative transfer. This effect diminishes, along with the positive transfer effect, as the amount of B training data increases. We also observed qualitatively similar results using a transfer-aware version of the logistic regression classification algorithm [5].

4 Conclusions

Our experiments with the meeting acceptance task demonstrate that transfer learning often helps, but can also hurt performance if the sources of data are too dissimilar. The hierarchical naive Bayes algorithm was designed to avoid negative transfer, and indeed it does so quite well compared to the flat algorithm. Compared to the standard B-only approach, however, there is still room for improvement. As part of ongoing work we are exploring the use of clustering techniques, e.g., [8], to represent more explicitly that some sources of data may be better candidates for transfer than others.

Figure 2: Effects of B training set size on performance of the hierarchical naive Bayes algorithm versus (a) flat naive Bayes and (b) training with no auxiliary data. Shown are the fraction of tested A-B pairs with a statistically significant transfer effect (p < 0.05). Black and gray respectively denote positive and negative transfer, and white indicates no statistically significant difference. (Axes: amount of Task B training (# instances); fraction of person pairs.) Performance scores were quantified using the log odds of making the correct
prediction.

Acknowledgments

This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA), through the Department of the Interior, NBC, Acquisition Services Division, under Contract No. NBCHD030010. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA.

References

[1] J. Baxter. A model of inductive bias learning. Journal of Artificial Intelligence Research, 12:149-198, 2000.
[2] R. Caruana. Multitask learning. Machine Learning, 28(1):41-70, 1997.
[3] P. Domingos and M. Pazzani. On the optimality of the simple Bayesian classifier under zero-one loss. Machine Learning, 29(2-3):103-130, 1997.
[4] A. Gelman, J. B. Carlin, H. S. Stern, and D. B. Rubin. Bayesian Data Analysis, Second Edition. Chapman and Hall/CRC, Boca Raton, FL, 2004.
[5] Z. Marx, M. T. Rosenstein, L. P. Kaelbling, and T. G. Dietterich. Transfer learning with an ensemble of background tasks. Submitted to this workshop.
[6] R. Neal. Slice sampling. Annals of Statistics, 31(3):705-767, 2003.
[7] C. Sutton and A. McCallum. Composition of conditional random fields for transfer learning. In Proceedings of the Human Language Technologies/Empirical Methods in Natural Language Processing Conference (HLT/EMNLP), 2005.
[8] S. Thrun and J. O'Sullivan. Discovering structure in multiple learning tasks: the TC algorithm. In L. Saitta, editor, Proceedings of the Thirteenth International Conference on Machine Learning, pages 489-497. Morgan Kaufmann, 1996.
[9] P. Wu and T. G. Dietterich. Improving SVM accuracy by training on auxiliary data sources. In Proceedings of the Twenty-First International Conference on Machine Learning, pages 871-878. Morgan Kaufmann, 2004.

Senior High English Academic Quality Assessment 4, Unit 4: Adversity and Courage (New PEP Edition, Selective Compulsory)

UNIT 4 Academic Quality Assessment

Multiple-Choice Section

Part I: Listening Comprehension (two sections, 30 points)

Section 1 (5 questions; ... points each, ... points total)

Listen to the following 5 conversations.

After each conversation there is one question. Choose the best answer from the three options A, B, and C.

After each conversation, you will have 10 seconds to answer the question and to read the next question.

Each conversation is played only once.

1. How much should the woman pay? __B__
   A. £2.85.  B. £.  C. £.
2. What is the probable relationship between the speakers? __A__
   A. Teacher and student.  B. Doctor and patient.  C. Passenger and conductor.
3. What is the weather like now? __A__
   A. It's raining.  B. It's clear.  C. It's windy.
4. Why did the man go back to the office? __A__
   A. He wanted to get the important things.  B. He wanted to find the lost key.  C. He went back to lock the office door.
5. Where does the conversation probably take place? __A__
   A. At an office.  B. In a library.  C. In a park.

Section 2 (15 questions; ... points each, ... points total)

Listen to the following 5 conversations or monologues.

After each conversation or monologue there are several questions. Choose the best answer from the three options A, B, and C.

Before each conversation or monologue, you will have time to read the questions, 5 seconds per question; after listening, you will have 5 seconds to answer each question.

Each conversation or monologue is played twice.

Listen to Material 6 and answer Questions 6 and 7.

6. What did the man use to take pictures? __B__
   A. A helicopter.  B. A drone.  C. A smartphone.
7. What takes a lot of practice for the man? __C__
   A. Driving a helicopter.  B. Taking pictures.  C. Controlling a drone.

Listen to Material 7 and answer Questions 8 and 9.

Knowledge Engineering: Principles and Methods

Knowledge Engineering: Principles and Methods

Rudi Studer¹, V. Richard Benjamins², and Dieter Fensel¹

¹Institute AIFB, University of Karlsruhe, 76128 Karlsruhe, Germany
{studer, fensel}@aifb.uni-karlsruhe.de
http://www.aifb.uni-karlsruhe.de

²Artificial Intelligence Research Institute (IIIA), Spanish Council for Scientific Research (CSIC), Campus UAB, 08193 Bellaterra, Barcelona, Spain
richard@iiia.csic.es, http://www.iiia.csic.es/~richard

²Dept. of Social Science Informatics (SWI)
richard@swi.psy.uva.nl, http://www.swi.psy.uva.nl/usr/richard/home.html

Abstract

This paper gives an overview of the development of the field of Knowledge Engineering over the last 15 years. We discuss the paradigm shift from a transfer view to a modeling view and describe two approaches which considerably shaped research in Knowledge Engineering: Role-limiting Methods and Generic Tasks. To illustrate various concepts and methods which evolved in recent years, we describe three modeling frameworks: CommonKADS, MIKE, and PROTÉGÉ-II. This description is supplemented by a more detailed discussion of some important methodological developments: specification languages for knowledge-based systems, problem-solving methods, and ontologies. We conclude by outlining the relationship of Knowledge Engineering to Software Engineering, Information Integration, and Knowledge Management.

Key Words

Knowledge Engineering, Knowledge Acquisition, Problem-Solving Method, Ontology, Information Integration

1 Introduction

In earlier days, research in Artificial Intelligence (AI) was focused on the development of formalisms, inference mechanisms, and tools to operationalize knowledge-based systems (KBSs). Typically, the development efforts were restricted to the realization of small KBSs in order to study the feasibility of the different approaches. Though these studies offered rather promising results, the transfer of this technology into commercial use in order to build large KBSs failed in many cases.
The situation was directly comparable to the situation in the construction of traditional software systems in the late sixties, called the "software crisis": the means to develop small academic prototypes did not scale up to the design and maintenance of large, long-lived commercial systems. In the same way as the software crisis resulted in the establishment of the discipline of Software Engineering, the unsatisfactory situation in constructing KBSs made clear the need for more methodological approaches. The goal of the new discipline Knowledge Engineering (KE) is thus similar to that of Software Engineering: turning the process of constructing KBSs from an art into an engineering discipline. This requires the analysis of the building and maintenance process itself and the development of appropriate methods, languages, and tools specialized for developing KBSs.

Subsequently, we will first give an overview of some important historical developments in KE; special emphasis will be put on the paradigm shift from the so-called transfer approach to the so-called modeling approach. This paradigm shift is sometimes also considered as the transition from first-generation to second-generation expert systems [43]. Based on this discussion, Section 2 concludes by describing two prominent developments of the late eighties: Role-limiting Methods [99] and Generic Tasks [36]. In Section 3 we present some modeling frameworks which have been developed in recent years: CommonKADS [129], MIKE [6], and PROTÉGÉ-II [123]. Section 4 gives a short overview of specification languages for KBSs. Problem-solving methods have been a major research topic in KE for the last decade; basic characteristics of (libraries of) problem-solving methods are described in Section 5. Ontologies, which have gained much importance during the last years, are discussed in Section 6.
The paper concludes with a discussion of current developments in KE and their relationships to other disciplines. In KE, much effort has also been put into developing methods and supporting tools for knowledge elicitation (compare [48]). For example, the VITAL approach [130] offers a collection of elicitation tools, such as repertory grids (see [65], [83]), to support the elicitation of domain knowledge (compare also [49]). However, a discussion of the various elicitation methods is beyond the scope of this paper.

2 Historical Roots

2.1 Basic Notions

In this section we will first discuss some main principles which have characterized the development of KE from the very beginning.

Knowledge Engineering as a Transfer Process

"This transfer and transformation of problem-solving expertise from a knowledge source to a program is the heart of the expert-system development process." [81]

In the early eighties the development of a KBS was seen as a transfer process of human knowledge into an implemented knowledge base. This transfer was based on the assumption that the knowledge required by the KBS already exists and merely has to be collected and implemented. Most often, the required knowledge was obtained by interviewing experts on how they solve specific tasks [108]. Typically, this knowledge was implemented in some kind of production rules which were executed by an associated rule interpreter. However, a careful analysis of the various rule knowledge bases showed that the rather simple representation formalism of production rules did not support an adequate representation of different types of knowledge [38]: in the MYCIN knowledge base [44], for example, strategic knowledge about the order in which goals should be achieved (e.g. "consider common causes of a disease first") is mixed up with domain-specific knowledge about, for example, the causes of a specific disease.
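The entanglement of strategic and domain knowledge just described can be made concrete with a tiny, hypothetical sketch; the rule contents below are invented for illustration and are not taken from the actual MYCIN knowledge base:

```python
# Hypothetical illustration of the maintenance problem: in a flat rule base,
# strategic knowledge (what to consider first) and domain knowledge (causes
# of a symptom) end up fused in the same rules.
mixed_rules = [
    {"if": ["goal(find_cause)"], "then": "consider(common_causes_first)"},   # pure strategy
    {"if": ["symptom(fever)", "considering(common_causes)"],
     "then": "hypothesize(infection)"},                                      # domain + strategy mixed
]

# The modeling view instead keeps the two knowledge types separate:
strategy = ["consider common causes of a disease first"]   # control knowledge
domain = {"fever": ["infection"]}                          # purely domain-specific associations

def hypotheses(symptom):
    """Derive hypotheses from domain knowledge alone; the strategy is
    applied separately to order goals, not baked into each rule."""
    return domain.get(symptom, [])

print(hypotheses("fever"))  # ['infection']
```

With the separated representation, changing the control strategy no longer requires touching every domain rule, which is exactly the maintenance issue the mixed rule base suffers from.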
This mixture of knowledge types, together with the lack of adequate justifications for the different rules, makes the maintenance of such knowledge bases very difficult and time-consuming. Therefore, the transfer approach was only feasible for the development of small prototypical systems; it failed to produce large, reliable and maintainable knowledge bases. Furthermore, it was recognized that the assumption of the transfer approach, namely that knowledge acquisition is the collection of already existing knowledge elements, was wrong, due to the important role of tacit knowledge in an expert's problem-solving capabilities. These deficiencies resulted in a paradigm shift from the transfer approach to the modeling approach.

Knowledge Engineering as a Modeling Process

Nowadays there exists an overall consensus that the process of building a KBS may be seen as a modeling activity. Building a KBS means building a computer model with the aim of realizing problem-solving capabilities comparable to those of a domain expert. The intention is not to create a cognitively adequate model, i.e. to simulate the cognitive processes of an expert in general, but to create a model which offers similar results in problem-solving for problems in the area of concern. While the expert may consciously articulate some parts of his or her knowledge, he or she will not be aware of a significant part of this knowledge, since it is hidden in his or her skills. This knowledge is not directly accessible, but has to be built up and structured during the knowledge acquisition phase. Therefore, this knowledge acquisition process is no longer seen as a transfer of knowledge into an appropriate computer representation, but as a model construction process ([41], [106]).

This modeling view of the building process of a KBS has the following consequences:

• Like every model, such a model is only an approximation of reality.
In principle, the modeling process is infinite, because it is an incessant activity with the aim of approximating the intended behaviour.

• The modeling process is cyclic. New observations may lead to a refinement, modification, or completion of the already built-up model. On the other hand, the model may guide the further acquisition of knowledge.

• The modeling process depends on the subjective interpretations of the knowledge engineer. Therefore this process is typically faulty, and an evaluation of the model with respect to reality is indispensable for the creation of an adequate model. Given this feedback loop, the model must be revisable at every stage of the modeling process.

Problem-Solving Methods

In [39] Clancey reported on the analysis of a set of first-generation expert systems developed to solve different tasks. Though they were realized using different representation formalisms (e.g. production rules, frames, LISP), he discovered a common problem-solving behaviour. Clancey was able to abstract this common behaviour into a generic inference pattern called Heuristic Classification, which describes the problem-solving behaviour of these systems on an abstract level, the so-called Knowledge Level [113]. A knowledge-level description characterizes reasoning in terms of the goals to be achieved, the actions necessary to achieve these goals, and the knowledge needed to perform these actions. Such a description of a problem-solving process abstracts from details of the implementation of the reasoning process and results in the notion of a Problem-Solving Method (PSM). A PSM may be characterized as follows (compare [20]):

• A PSM specifies which inference actions have to be carried out for solving a given task.
• A PSM determines the sequence in which these actions have to be activated.
• In addition, so-called knowledge roles determine which role the domain knowledge plays in each inference action.
These knowledge roles define a domain-independent generic terminology. When considering the PSM Heuristic Classification in some more detail (Figure 1), we can identify the three basic inference actions abstract, heuristic match, and refine. Furthermore, four knowledge roles are defined: observables, abstract observables, solution abstractions, and solutions. It is important to see that such a description of a PSM is given in a generic way; the reuse of the PSM in different domains is thus made possible. In a medical domain, an observable like "41° C" may be abstracted to "high temperature" by the inference action abstract. This abstracted observable may be matched to a solution abstraction, e.g. "infection", and finally the solution abstraction may be hierarchically refined to a solution, e.g. the disease "influenza". In the meantime various PSMs have been identified, such as Cover-and-Differentiate for solving diagnostic tasks [99] or Propose-and-Revise [100] for parametric design tasks.

Fig. 1 The Problem-Solving Method Heuristic Classification

PSMs may be exploited in the knowledge engineering process in different ways:

• PSMs contain inference actions which need specific knowledge in order to perform their task. For instance, Heuristic Classification needs a hierarchically structured model of observables and solutions for the inference actions abstract and refine, respectively. So a PSM may be used as a guideline to acquire static domain knowledge.
• A PSM allows one to describe the main rationale of the reasoning process of a KBS, which supports the validation of the KBS, because the expert is able to understand the problem-solving process.
In addition, this abstract description may be used during the problem-solving process itself for explanation facilities.
• Since PSMs may be reused for developing different KBSs, a library of PSMs can be exploited for constructing KBSs from reusable components.

The concept of PSMs has strongly stimulated research in KE and thus has influenced many approaches in this area. A more detailed discussion of PSMs is given in Section 5.

2.2 Specific Approaches

During the eighties two main approaches evolved which had significant influence on the development of modeling approaches in KE: Role-Limiting Methods and Generic Tasks.

Role-Limiting Methods

Role-Limiting Methods (RLM) ([99], [102]) were one of the first attempts to support the development of KBSs by exploiting the notion of a reusable problem-solving method. The RLM approach may be characterized as a shell approach. Such a shell comes with an implementation of a specific PSM and thus can only be used to solve the type of tasks for which that PSM is appropriate. The given PSM also defines the generic roles that knowledge can play during the problem-solving process, and it completely fixes the knowledge representation for the roles, such that the expert only has to instantiate the generic concepts and relationships defined by these roles.

Consider as an example the PSM Heuristic Classification (see Figure 1). A RLM based on Heuristic Classification offers a role observables to the expert. Using that role, the expert (i) has to specify which domain-specific concept corresponds to that role, e.g. "patient data" (see Figure 4), and (ii) has to provide domain instances for that concept, e.g. concrete facts about patients. It is important to see that the kind of knowledge used by the RLM is predefined.
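As an illustration, the three inference actions and four knowledge roles of Heuristic Classification, together with the medical example above, might be sketched as follows; the knowledge tables are invented stand-ins for what a domain expert would supply through such a shell:

```python
# A minimal sketch of the PSM Heuristic Classification:
# abstract -> heuristic match -> refine, with the knowledge roles
# observables, abstract observables, solution abstractions, solutions.

abstraction_rules = {            # observables -> abstract observables
    lambda t: t >= 38.5: "high temperature",
}
match_rules = {                  # abstract observables -> solution abstractions
    "high temperature": "infection",
}
refinement_hierarchy = {         # solution abstractions -> solutions
    "infection": ["influenza", "pneumonia"],
}

def abstract(observable):
    """Inference action 'abstract': qualitative abstraction of an observable."""
    for condition, abstracted in abstraction_rules.items():
        if condition(observable):
            return abstracted
    return None

def heuristic_match(abstract_observable):
    """Inference action 'heuristic match': map abstraction to a solution class."""
    return match_rules.get(abstract_observable)

def refine(solution_abstraction):
    """Inference action 'refine': hierarchical refinement to concrete solutions."""
    return refinement_hierarchy.get(solution_abstraction, [])

temperature = 41.0                        # observable, e.g. 41 °C
abstracted = abstract(temperature)        # 'high temperature'
candidate = heuristic_match(abstracted)   # 'infection'
solutions = refine(candidate)             # ['influenza', 'pneumonia']
print(abstracted, candidate, solutions)
```

Note that only the three tables are domain-specific; the three inference actions are formulated purely in terms of the generic roles, which is what a role-limiting shell fixes in advance and the expert merely instantiates.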
Therefore, the acquisition of the required domain-specific instances may be supported by (graphical) interfaces which are custom-tailored to the given PSM. In the following we will discuss one RLM in some more detail: SALT ([100], [102]), which is used for solving constructive tasks. Then we will outline a generalization of RLMs to so-called Configurable RLMs.

SALT is a RLM for building KBSs which use the PSM Propose-and-Revise. Thus KBSs may be constructed for solving specific types of design tasks, e.g. parametric design tasks. The basic inference actions that Propose-and-Revise is composed of may be characterized as follows:

• extend a partial design by proposing a value for a design parameter not yet computed,
• determine whether all computed parameters fulfil the relevant constraints, and
• apply fixes to remove constraint violations.

In essence, three generic roles may be identified for Propose-and-Revise ([100]):

• "design extensions" refer to knowledge for proposing a new value for a design parameter,
• "constraints" provide knowledge restricting the admissible values for parameters, and
• "fixes" make potential remedies available for specific constraint violations.

From this characterization of the PSM Propose-and-Revise, one can easily see that the PSM is described in generic, domain-independent terms. Thus the PSM may be used for solving design tasks in different domains by specifying the required domain knowledge for the different predefined generic knowledge roles. For example, when SALT was used for building the VT-system [101], a KBS for configuring elevators, the domain expert used the form-oriented user interface of SALT to enter domain-specific design extensions (see Figure 2).
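A minimal sketch of the propose/verify/revise cycle and its three knowledge roles might look as follows; the parameter names and numbers are invented for illustration and are not taken from the VT system:

```python
# A toy sketch of the PSM Propose-and-Revise with its three generic
# knowledge roles: design extensions, constraints, and fixes.

design = {"platform_width": 100}            # partial design

design_extensions = [                       # propose values for missing parameters
    ("opening_width", lambda d: d["platform_width"] * 0.9),
]
constraints = [                             # admissible values for parameters
    ("opening_width", lambda d: d["opening_width"] <= 80),
]
fixes = {                                   # remedies for specific violations
    "opening_width": lambda d: d.__setitem__("opening_width", 80),
}

# Step 1: extend the partial design by proposing not-yet-computed parameters.
for param, propose in design_extensions:
    if param not in design:
        design[param] = propose(design)
# Step 2: check whether all computed parameters fulfil the constraints.
for param, satisfied in constraints:
    if not satisfied(design):
        # Step 3: apply the fix associated with the violated constraint.
        fixes[param](design)

print(design)  # {'platform_width': 100, 'opening_width': 80}
```

As in SALT, the loop itself is generic; only the contents of the three role tables are domain-specific and would be entered by the expert through the shell's forms.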
That is, the generic terminology of the knowledge roles, which is defined by object and relation types, is instantiated with VT-specific instances.

1 Name: CAR-JAMB-RETURN
2 Precondition: DOOR-OPENING = CENTER
3 Procedure: CALCULATION
4 Formula: [PLATFORM-WIDTH - OPENING-WIDTH] / 2
5 Justification: CENTER-OPENING DOORS LOOK BEST WHEN CENTERED ON PLATFORM.

(The value of the design parameter CAR-JAMB-RETURN is calculated according to the formula in case the precondition is fulfilled; the justification describes why this parameter value is preferred over other values. Example taken from [100].)

Fig. 2 Design Extension Knowledge for VT

On the one hand, the predefined knowledge roles, and thus the predefined structure of the knowledge base, may be used as a guideline for the knowledge acquisition process: it is clearly specified what kind of knowledge has to be provided by the domain expert. On the other hand, in most real-life situations the problem arises of how to determine whether a specific task may be solved by a given RLM. Such task analysis is still a crucial problem, since up to now there does not exist a well-defined collection of features for characterizing a domain task in a way which would allow a straightforward mapping to appropriate RLMs. Moreover, RLMs have a fixed structure and do not provide a good basis when a particular task can only be solved by a combination of several PSMs. In order to overcome this inflexibility of RLMs, the concept of configurable RLMs has been proposed.

Configurable Role-Limiting Methods (CRLMs) as discussed in [121] exploit the idea that a complex PSM may be decomposed into several subtasks, where each of these subtasks may be solved by different methods (see Section 5). In [121], various PSMs for solving classification tasks, such as Heuristic Classification or Set-covering Classification, have been analysed with respect to common subtasks.
This analysis resulted in the identification of shared subtasks like "data abstraction" or "hypothesis generation and test". Within the CRLM framework a predefined set of different methods is offered for solving each of these subtasks. Thus a PSM may be configured by selecting a method for each of the identified subtasks. In that way the CRLM approach provides means for configuring the shell for different types of tasks. It should be noted that each method offered for solving a specific subtask has to meet the knowledge role specifications that are predetermined for the CRLM shell, i.e. the CRLM shell comes with a fixed scheme of knowledge types. As a consequence, the introduction of a new method into the shell typically involves the modification and/or extension of the current scheme of knowledge types [121]. Having a fixed scheme of knowledge types and predefined communication paths between the various components is an important restriction distinguishing the CRLM framework from more flexible configuration approaches such as CommonKADS (see Section 3). It should be clear that the introduction of such flexibility into the RLM approach removes one of its disadvantages while still exploiting the advantage of having a fixed scheme of knowledge types, which builds the basis for generating effective knowledge-acquisition tools. On the other hand, configuring a CRLM shell increases the burden on the system developer, since he has to have the knowledge and the ability to configure the system in the right way.

Generic Tasks and Task Structures

In the early eighties the analysis and construction of various KBSs for diagnostic and design tasks evolved gradually into the notion of a Generic Task (GT) [36].
GTs like Hierarchical Classification or State Abstraction are building blocks which can be reused for the construction of different KBSs. The basic idea of GTs may be characterized as follows (see [36]):

• A GT is associated with a generic description of its input and output.
• A GT comes with a fixed scheme of knowledge types specifying the structure of the domain knowledge needed to solve a task.
• A GT includes a fixed problem-solving strategy specifying the inference steps the strategy is composed of and the sequence in which these steps have to be carried out.

The GT approach is based on the strong interaction problem hypothesis, which states that the structure and representation of domain knowledge is completely determined by its use [33]. Therefore, a GT comes with both a fixed problem-solving strategy and a fixed collection of knowledge structures. Since a GT fixes the type of knowledge which is needed to solve the associated task, a GT provides a task-specific vocabulary which can be exploited to guide the knowledge acquisition process. Furthermore, by offering an executable shell for a GT, called a task-specific architecture, the implementation of a specific KBS could be considered as the instantiation of the predefined knowledge types by domain-specific terms (compare [34]). On a rather pragmatic basis several GTs have been identified, including Hierarchical Classification, Abductive Assembly and Hypothesis Matching. This initial collection of GTs was considered as a starting point for building up an extended collection covering a wide range of relevant tasks. However, when analyzed in more detail, two main disadvantages of the GT approach have been identified (see [37]):

• The notion of task is conflated with the notion of the PSM used to solve the task, since each GT included a predetermined problem-solving strategy.
• The complexity of the proposed GTs was very different, i.e.
it remained open what the appropriate level of granularity for the building blocks should be.

Based on this insight into the disadvantages of the notion of a GT, the so-called Task Structure approach was proposed [37]. The Task Structure approach makes a clear distinction between a task, which is used to refer to a type of problem, and a method, which is a way to accomplish a task. In that way a task structure may be defined as follows (see Figure 3): a task is associated with a set of alternative methods suitable for solving the task. Each method may be decomposed into several subtasks. The decomposition structure is refined to a level where elementary subtasks are introduced which can directly be solved by using available knowledge. As we will see in the following sections, the basic notions of task and (problem-solving) method, and their embedding into a task-method decomposition structure, are concepts which are nowadays shared among most of the knowledge engineering methodologies.

Fig. 3 Sample Task Structure for Diagnosis

3 Modeling Frameworks

In this section we will describe three modeling frameworks which address various aspects of model-based KE approaches: CommonKADS [129] is prominent for having defined the structure of the Expertise Model, MIKE [6] puts emphasis on a formal and executable specification of the Expertise Model as the result of the knowledge acquisition phase, and PROTÉGÉ-II [51] exploits the notion of ontologies. It should be clear that there exist further approaches which are well known in the KE community, e.g. VITAL [130], Commet [136], and EXPECT [72]. However, a discussion of all these approaches is beyond the scope of this paper.

3.1 The CommonKADS Approach

A prominent knowledge engineering approach is KADS [128] and its further development into CommonKADS [129].
A basic characteristic of KADS is the construction of a collection of models, where each model captures specific aspects of the KBS to be developed as well as of its environment. In CommonKADS the Organization Model, the Task Model, the Agent Model, the Communication Model, the Expertise Model and the Design Model are distinguished. Whereas the first four models aim at modeling the organizational environment in which the KBS will operate, as well as the tasks that are performed in the organization, the Expertise Model and Design Model describe (non-)functional aspects of the KBS under development. Subsequently, we will briefly discuss each of these models and then provide a detailed description of the Expertise Model:

• Within the Organization Model the organizational structure is described, together with a specification of the functions performed by each organizational unit. Furthermore, the deficiencies of the current business processes, as well as opportunities to improve these processes by introducing KBSs, are identified.
• The Task Model provides a hierarchical description of the tasks which are performed in the organizational unit in which the KBS will be installed. This includes a specification of which agents are assigned to the different tasks.
• The Agent Model specifies the capabilities of each agent involved in the execution of the tasks at hand. In general, an agent can be a human or some kind of software system, e.g. a KBS.
• Within the Communication Model the various interactions between the different agents are specified. Among other things, it specifies which type of information is exchanged between the agents and which agent initiates the interaction.

A major contribution of the KADS approach is its proposal for structuring the Expertise Model, which distinguishes three different types of knowledge required to solve a particular task.
Basically, the three different types correspond to a static view, a functional view and a dynamic view of the KBS to be built (see in Figure 4 the "domain layer", "inference layer" and "task layer", respectively):

• Domain layer: At the domain layer all the domain-specific knowledge needed to solve the task at hand is modeled. This includes a conceptualization of the domain in a domain ontology (see Section 6), and a declarative theory of the required domain knowledge. One objective in structuring the domain layer is to make it as reusable as possible for solving different tasks.
• Inference layer: At the inference layer the reasoning process of the KBS is specified by exploiting the notion of a PSM. The inference layer describes the inference actions of which the generic PSM is composed, as well as the roles played by the domain knowledge within the PSM. The dependencies between inference actions and roles are specified in what is called an inference structure. Furthermore, the notion of roles provides a domain-independent view of the domain knowledge. In Figure 4 (middle part) we see the inference structure for the PSM Heuristic Classification; among other things, we can see that "patient data" plays the role of "observables" within it.
• Task layer: The task layer provides a decomposition of tasks into subtasks and inference actions, including a goal specification for each task and a specification of how these goals are achieved.

Fig. 4 Expertise Model for medical diagnosis (simplified CML notation)
The task layer also provides means for specifying the control over the subtasks and inference actions defined at the inference layer.

Two types of languages are offered to describe an Expertise Model: CML (Conceptual Modeling Language) [127], a semi-formal language with a graphical notation, and (ML)² [79], a formal specification language based on first-order predicate logic, meta-logic and dynamic logic (see Section 4). Whereas CML is oriented towards providing a communication basis between the knowledge engineer and the domain expert, (ML)² is oriented towards formalizing the Expertise Model.

The clear separation of the domain-specific knowledge from the generic description of the PSM at the inference and task layers enables, in principle, two kinds of reuse: on the one hand, a domain layer description may be reused for solving different tasks with different PSMs; on the other hand, a given PSM may be reused in a different domain by defining a new view onto another domain layer. This reuse approach is a weakening of the strong interaction problem hypothesis [33] which was addressed in the GT approach (see Section 2). In [129] the notion of a relative interaction hypothesis is defined to indicate that some kind of dependency exists between the structure of the domain knowledge and the type of task which should be solved. To achieve a flexible adaptation of the domain layer to a new task environment, the notion of layered ontologies is proposed: task and PSM ontologies may be defined as viewpoints on an underlying domain ontology. Within CommonKADS a library of reusable and configurable components, which can be used to build up an Expertise Model, has been defined [29]. A more detailed discussion of PSM libraries is given in Section 5.

In essence, the Expertise Model and the Communication Model capture the functional requirements for the target system.
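The three-layer separation just described (domain knowledge seen only through generic roles, inference actions formulated over roles, and task-level control on top) might be sketched as follows; all names and the toy abstraction rule are illustrative assumptions, not part of CommonKADS itself:

```python
# A minimal sketch of the Expertise Model layering:
# domain layer / inference layer (via roles) / task layer.

# Domain layer: domain-specific concepts and facts.
domain_layer = {"patient data": {"temperature": 41.0}}

# Domain view: which domain concept plays which generic knowledge role.
role_mapping = {"observables": "patient data"}

# Inference layer: a generic inference action, formulated only over roles,
# never over domain-specific concept names.
def abstract(roles):
    observables = roles["observables"]
    return "high temperature" if observables["temperature"] >= 38.5 else "normal"

# Task layer: control over the inference actions.
def diagnose_task():
    # Build the role view onto the domain layer, then invoke the inference action.
    roles = {role: domain_layer[concept] for role, concept in role_mapping.items()}
    return abstract(roles)

print(diagnose_task())  # high temperature
```

Reusing the PSM in another domain would mean changing only `domain_layer` and `role_mapping`; reusing the domain layer for another task would mean supplying a different inference and task layer over the same data, which is exactly the two-way reuse discussed above.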
Based on these requirements the Design Model is developed, which specifies, among other things, the system architecture and the computational mechanisms for realizing the inference actions. KADS aims at achieving a structure-preserving design, i.e. the structure of the Design Model should reflect the structure of the Expertise Model as much as possible [129]. All the development activities, which result in a stepwise construction of the different models, are embedded in a cyclic and risk-driven life-cycle model similar to Boehm's spiral model [21].

The basic structure of the Expertise Model has some similarities with the data, functional, and control views of a system known from software engineering. However, a major difference may be seen between an inference layer and a typical data-flow diagram (compare [155]): whereas an inference layer is specified in generic terms and provides, via roles and domain views, a flexible connection to the data described at the domain layer, a data-flow diagram is completely specified in domain-specific terms. Moreover, the data dictionary does not correspond to the domain layer, since the domain layer may provide a complete model of the domain at hand which is only partially used by the inference layer, whereas the data dictionary describes exactly those data which are used to specify the data flow within the data-flow diagram (see also [54]).

3.2 The MIKE Approach

The MIKE approach (Model-based and Incremental Knowledge Engineering) (cf. [6], [7])

Pathology Laboratory Techniques: Exercises with Answers


I. Single-Choice Questions (100 questions, 1 point each, 100 points total)

1. Which of the following results of the osmic acid–naphthylamine (OTAN) staining method is incorrect?
A. Degenerated myelin stains black
B. Nissl bodies stain purple
C. Red blood cells stain light red
D. Connective tissue stains blue
E. Normal myelin stains red
Correct answer: B

2. Tissue fixed with which of the following simple fixatives should be stored away from light, to prevent dissolution of proteins?
A. Acetic acid
B. Chromic acid
C. Potassium dichromate
D. Mercuric chloride
E. Picric acid
Correct answer: B

3. Which statement about potassium dichromate fixative is correct?
A. It is non-toxic
B. Unacidified potassium dichromate precipitates proteins
C. It fixes the Golgi apparatus and mitochondria well
D. Tissue fixed in it takes up acid dyes poorly
E. It can be mixed with reducing agents such as ethanol
Correct answer: C

4. Used for preparing electron-microscopy specimens:
A. Celloidin sectioning
B. Paraffin sectioning
C. Vibratome sectioning
D. Ultrathin sectioning
E. Frozen sectioning
Correct answer: D

5. The functions of a computerized pathology archive management system should include:
A. Data entry
B. Classified statistics
C. On-demand retrieval
D. Data backup and output
E. All of the above
Correct answer: E

6. Which description of osmic acid fixative is incorrect?
A. Its penetrating power is extremely weak
B. It cannot be used to fix mitochondria and endoplasmic reticulum
C. Tissue blocks to be fixed should be small
D. It is mainly a lipid fixative
E. It is insoluble in organic solvents such as ethanol and benzene
Correct answer: B

7. A characteristic of tissue fixed with Bouin's fluid is:
A. The method is suitable for long-term storage of specimens
B. Both nucleus and cytoplasm stain distinctly
C. The nucleus stains distinctly but the cytoplasm stains poorly
D. The cytoplasm stains distinctly but the nucleus stains poorly
E. The tissue is stained black after fixation
Correct answer: C

8. The main causes of nonspecific staining in immunofluorescence histochemistry do NOT include:
A. Sections that are too thin
B. Serum proteins other than the antibody binding the fluorochrome
C. Impure fluorochrome
D. Fluorochrome not bound to protein and not removed by dialysis
E. Too many fluorochrome molecules labeling each antibody molecule
Correct answer: A

9. In Warthin–Starry staining for gastric Helicobacter pylori, the bacteria appear:
A. Yellow
B. Black
C. Green
D. Red
E. Blue

10. The basic principle of the SABC method is:
A. Streptavidin mixed with a certain concentration of biotinylated enzyme forms a streptavidin–biotin–enzyme complex, which can bind various biotin-labeled antibodies
B. Streptavidin mixed with a certain concentration of avidin-conjugated enzyme forms a streptavidin–avidin–enzyme complex, which can bind various avidin-labeled antibodies
C. Avidin mixed with a certain concentration of biotinylated enzyme forms an avidin–biotin–enzyme complex, which can bind various biotin-labeled antibodies
D. Avidin mixed with a certain concentration of avidin-conjugated enzyme forms an avidin–avidin–enzyme complex, which can bind various avidin-labeled antibodies
E. Streptavidin mixed with a certain concentration of horseradish peroxidase forms a streptavidin–enzyme complex, which can bind various biotin-labeled antibodies
Correct answer: A

11. With Mowry's Alcian blue–periodic acid–Schiff (AB-PAS) staining for mucopolysaccharides, mixtures of neutral and acidic substances appear:
A. Red
B. Blue
C. Purple-red
D. Yellow
E. Green
Correct answer: C

12. At autopsy, the correct way to measure the height of the two domes of the diaphragm is:
A. After the abdominal organs have been removed
B. After the thoracic organs have been removed
C. After both the abdominal and the thoracic organs have been removed
D. Before either the abdominal or the thoracic organs have been removed
E. After the pelvic organs have been removed
Correct answer: D

13. Which of the following compound fixatives does not contain potassium dichromate?
A. Zenker's fluid
B. Helly's fluid
C. Maximow's fluid
D. Bouin's fluid
E. Regaud's fluid
Explanation: Of the compound fixatives above, only Bouin's fluid contains no potassium dichromate; its formula is saturated aqueous picric acid : formaldehyde solution : glacial acetic acid = 15 : 5 : 1.

14. With Gram's methyl violet staining, fibrin appears:
A. Purple
B. Red
C. Blue-black
D. Orange-yellow
E. Green
Correct answer: C
Explanation: With Gram's methyl violet staining, fibrin appears blue-black and the background red.

Knowledge-Based Systems


Ultsch, A. & Korus, D., "Integration of Neural Networks with Knowledge-Based Systems", Proc. IEEE Int. Conf. Neural Networks, Perth, Australia, 1995.

Integration of Neural Networks with Knowledge-Based Systems

Alfred Ultsch, Dieter Korus
Department of Mathematics/Informatics, University of Marburg
Hans-Meerwein-Straße/Lahnberge, D-35032 Marburg, F. R. Germany
email: ultsch or korus@mathematik.uni-marburg.de
http://www.uni-marburg.de/~wina/

ABSTRACT
Existing prejudices of some Artificial Intelligence researchers against neural networks are hard to break. One of their most important arguments is that neural networks are not able to explain their decisions. Further, they claim that neural networks are not able to solve the variable-binding problem for unification. We show in this paper that neural networks and knowledge-based systems need not be competitive, but are capable of complementing each other. The disadvantages of the one paradigm are the advantages of the other and vice versa. We show several ways to integrate both paradigms in the areas of explorative data analysis, knowledge acquisition, introspection, and unification. Our approach to such hybrid systems has been proved in real-world applications.

1. Introduction
The successful application of knowledge-based systems in different areas such as diagnosis, construction and planning shows the usefulness of a symbolic knowledge representation. However, this representation implies problems in processing data from natural processes. Normally such data are results of measurements and therefore have no straightforward kind of symbolic representation [1]. Knowledge-based systems often fall short in handling inconsistent and noisy data. It is also difficult to formalize knowledge in domains where 'a priori' rules are unknown. Often the performance in 'learning from examples' and 'dealing with untypical situations' (graceful degradation) is insufficient.
The rules used by conventional expert systems are said to be able to represent complex concepts only approximately [4]. In such complex systems, inconsistent and context-dependent rules (cases) may result in unacceptable errors. In addition, it is almost impossible for experts to describe their knowledge, which they acquired from many examples by experience, entirely in symbolic form [6].

State-of-the-art knowledge-based system technology is based on symbolic processing. An acknowledged shortcoming of current computational techniques is their brittleness, often arising from the inability of first-order logic to capture adequately the dynamics of a changing and incompletely known environment. An important property of knowledge stored in symbolic form is that it can be interpreted and communicated to experts. The limits of such an approach, however, become quite evident when sensor data or measurement data, for example from physical processes, are handled. Inconsistent data frequently force symbolic systems into an undefined state. Another major problem in knowledge-based system design is the acquisition of knowledge. It is well known that it is almost impossible for an expert to describe his domain-specific knowledge entirely in the form of rules or other knowledge representation schemes. In addition, it is very difficult or even impossible to describe expertise acquired by experience.

Neural networks claim to avoid most of the disadvantages of knowledge-based systems described above. These systems, which rely on a distributed knowledge representation, are able to develop a concise representation of complex concepts. It is possible to learn knowledge directly from experience [4]. Characteristic attributes of connectionist systems are the ability to generalize and graceful degradation; e.g. they are able to process inconsistent and noisy data. In addition, neural networks compute the most plausible output for each input.
Neural networks, however, also have their disadvantages. It is difficult to provide an explanation of the behaviour of a neural network because of its distributed knowledge representation. Therefore expertise learned by neural networks is not available in a form that is intelligible to human beings or to knowledge-based systems. It seems to be difficult to describe or to interpret this kind of information. In knowledge-based systems, on the other hand, it is easy to describe and to verify the underlying concepts.

2. Integration of Neural Networks with Knowledge-Based Systems

Indications are that neural networks provide fault tolerance and noise resistance. They also adapt to unstable and largely unknown environments. Their weakness lies in a reliance on data-intensive training algorithms, with little opportunity to integrate available, discrete knowledge. At present, neural networks are relatively successful in applications dealing with subsymbolic raw data, in particular if the data is noisy or inconsistent. Such subsymbolic-level processing seems to be appropriate for dealing with perception tasks and perhaps even with tasks that call for combined perception and cognition. Neural networks are able to learn structures of an input set without using a priori information. Unfortunately, they cannot explain their behavior because a distributed representation of the knowledge is used. They can only tell about the knowledge by showing responses to a given input.

Both approaches to modelling brain-like information processing, knowledge-based systems and neural networks, are complementary in the sense that traditional knowledge-based systems are a top-down approach starting from high-level cognitive functions, whereas neural networks are a bottom-up approach on a biophysical basis of neurons and synapses.
It is a matter of fact that the symbolic as well as the subsymbolic aspects of information processing are essential to systems dealing with real-world tasks. Integrating neural networks and knowledge-based systems is certainly a challenging task [10]. Beside these general considerations, several specific tasks have to be solved. The most important are, without claiming completeness:

Structure Detection by Collective Behavior: In the real world, people continually deal with raw and subsymbolic data, which is characterized by the property that one single element does not have a meaning (interpretation) of itself alone. The question is how to transform the subsymbolic data into a symbolic form. Unsupervised learning neural networks can adapt to structures inherent in the data. They exhibit the property of producing their structure during learning by the integration (overlay) of many case data. But they have the disadvantage that they cannot be interpreted by looking at the activity or weights of single neurons. Because of this we need tools to detect the structure in large neural networks.

Integrated Knowledge Acquisition: Knowledge acquisition is one of the biggest problems in artificial intelligence. A knowledge-based system may therefore not be able to diagnose a case which an expert is able to. The question is how to extract experience from a set of examples for the use of knowledge-based systems. Under Integrated Knowledge Acquisition we understand subsymbolic approaches, i.e. the usage of neural networks, to gain symbolic knowledge. Neural networks can easily process subsymbolic raw data by handling noisy and inconsistent data. An intrinsic property of neural networks is, however, that no high-level knowledge can be identified in the trained neural network.
The central problem for Integrated Knowledge Acquisition is therefore how to transform whatever a neural network has learned into a symbolic form.

Introspection: Under introspection we understand methods and techniques whereby a knowledge-based system observes its own behaviour and improves its performance. This approach can be realized using neural networks that observe the sequence of steps an expert system takes in the derivation of a conclusion. This is often called control knowledge. When the observed behaviour of the expert system is appropriately encoded, a neural network can learn how to avoid misleading paths and how to arrive faster at its conclusions.

Unification: One type of integrated reasoning is the realization of an important part of the reasoning process, the unification, using neural networks. Unification plays a central role in logic programming (e.g. in the language Prolog) and is also a central feature for the implementation of many knowledge-based systems. The idea of this approach is to realize the matching and unification part of the reasoning process in a suitable neural network.

3. Structure Detection by Collective Behavior

One of the neural network types we use for representing subsymbolic raw data in large distributed neural networks is the Self-Organizing Feature Map (SOFM) by Kohonen [5]. It has the ability to map a high-dimensional feature space onto a usually two-dimensional grid of neurons. The important feature of this mapping is that adjacent points in the data space are mapped onto adjacent neurons in the grid while conserving the distribution of the input data. In normal applications we use 64 by 64, 128 by 128 or 256 by 256 neurons. Through the presentation of the input data in the learning phase, the SOFM adapts to the structure inherent in the data.
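To make the adaptation step concrete, here is a minimal, illustrative SOFM training loop. This is a sketch only: the grid size, learning rate and neighbourhood schedule are our assumptions for illustration, not the parameters used in the paper.

```python
import numpy as np

def train_sofm(data, rows=8, cols=8, epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal Kohonen SOFM: maps n-dimensional inputs onto a 2-D grid.

    Each grid neuron holds a reference vector; for every input, the
    best-matching neuron and its grid neighbours are pulled toward the
    input, so adjacent neurons come to represent similar inputs."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((rows, cols, dim))
    # grid coordinates of every neuron, used by the neighbourhood function
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)              # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 0.5  # shrinking neighbourhood
        for x in data:
            # best-matching unit: neuron whose reference vector is closest to x
            d = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighbourhood around the BMU on the grid
            g = np.exp(-np.sum((grid - np.array(bmu)) ** 2, axis=2)
                       / (2 * sigma ** 2))
            weights += lr * g[..., None] * (x - weights)
    return weights
```

After training, similar inputs activate neighbouring grid neurons, which is the property the U-matrix visualization below exploits.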
On the map, neighbouring neurons form regions which correspond to similar input vectors. These neighbourhoods form disjoint regions, thus classifying the input vectors.

But looking at the learned SOFM as it is, one is not able to see much structure in the neural network, especially when processing a large amount of data with high dimensionality. In addition, automatic detection of the classification is difficult because the SOFM converges to an equal distribution of the neurons on the map. So a special visualization tool, the so-called "unified distance matrix methods", short U-matrix methods, was developed [19] to graphically visualize the structure of the SOFM in a three-dimensional landscape (fig. 1). The simplest U-matrix method is to calculate for each neuron the mean of the distances to its (at most) 8 neighbours and add this value as the height of the neuron in a third dimension. Other methods, e.g., also consider the position of the reference vectors on the map. Using a U-matrix method we obtain, with the help of interpolation and other visualization techniques, a three-dimensional landscape with walls and valleys. Neurons which belong to the same valley are quite similar and may belong to the same class; walls separate different classes (fig. 1). Unlike in other classification algorithms, the number of expected classes need not be known a priori. Also, subclasses of larger classes can be detected. Single neurons in deep valleys indicate possible outliers.
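The simplest U-matrix method described above can be sketched directly; this is a minimal illustration in which `weights` is assumed to be the grid of SOFM reference vectors.

```python
import numpy as np

def u_matrix(weights):
    """Height of each SOFM neuron in the U-matrix landscape: the mean
    Euclidean distance between its reference vector and those of its
    (at most) 8 neighbours on the grid.  High walls separate classes;
    neurons inside the same valley carry similar reference vectors."""
    rows, cols, _ = weights.shape
    heights = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            dists = [np.linalg.norm(weights[i, j] - weights[i + di, j + dj])
                     for di in (-1, 0, 1) for dj in (-1, 0, 1)
                     if (di, dj) != (0, 0)
                     and 0 <= i + di < rows and 0 <= j + dj < cols]
            heights[i, j] = np.mean(dists)
    return heights
```

Interpolating `heights` and rendering it as a surface gives the wall-and-valley landscape of fig. 1.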
These visualizations are implemented together in a complete toolbox that shows the interpolated U-matrices in three dimensions, with different interpolation methods, different colour tables, and different perspectives; with clipping; tiled or single top view; the position of the reference vectors; the identification of single reference vectors to identify possible outliers or special classes; the drawing of class borders; labeling of clusters; and, in addition, single component maps, which show the distribution of a single feature on the SOFM. For example, using a data set containing blood analysis values from 20 patients (20 vectors with 11 real-valued components) selected from a set of 1500 patients [3], it turned out that the clustering corresponds nicely with the different patients' diagnoses.

4. Integrated Knowledge Acquisition

In the previous section we presented the combination of the Self-Organizing Feature Map (SOFM) by Kohonen [5] and the U-matrix methods [19] to detect structure in large neural networks whose collective behaviour represents the structure of the input data. As a result we are able to classify the input data. To acquire knowledge out of this neuronal classification, we developed an inductive machine learning algorithm called sig*.

Fig. 1. U-Matrix

Fuzzy logic, based on fuzzy set theory [20], opens the possibility to model and process vague knowledge in knowledge-based systems. This offers, for example, the chance to explain the decision-making process of human experts derived from vague or uncertain information. Furthermore, some problems of traditional knowledge-based systems, like dealing with exceptions in rule-based systems, can be solved by fuzzy logic.
Further, because of the generalization ability of neural networks, fuzzy theory is well suited to express the vague knowledge of learned neural networks. To take advantage of this, we expanded our system by extracting membership functions out of neural networks, which are used to transfer the knowledge into fuzzy rules.

5. Neural Unification

We have investigated an approach that is close to the problem representation. The main idea is to use Kohonen's Self-Organizing Feature Maps (SOFM) [5] for the representation of the atoms and functors in a term. SOFM have the property that similar input data (in this case atoms and functors) are represented in a close neighborhood in the feature map (relating to their semantical context). For each atom, resp. functor, in the logical statement, the input vector for the SOFM is generated as follows: each component of the feature vector represents the number of occurrences of the given atom/functor in a (sub-)term, whereby the number after the feature term refers to the arity of the term. The length of the vector is the number of possible (sub-)terms. The training of the SOFM with these input vectors results in a map, called the Input Feature Map (IFM).

A specially designed relaxation network, called Cube, performs the unification by determining the most common unifier. For each argument position of the unifying terms, a layer of neurons is constructed having the same topology as the IFM. For each occurrence of a variable in the given Prolog program, a vector of neurons, called a Variable Vector, is constructed. The encoding of the vectors is the same as for the input vector of the IFM. Cube and Variable Vectors are connected through neurons leading to and from a vector of neurons called Pairs. Each neuron in the Pairs vector encodes two argument positions that have to be unified. Lateral connections between the pairing neurons activate identical argument positions.
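One possible reading of this term encoding, sketched under our own assumptions (terms represented as nested tuples; the arity bookkeeping mentioned above is omitted), is:

```python
def subterms(term):
    """Enumerate a term and all of its subterms.
    A term is an atom (a string) or a tuple (functor, arg1, ..., argN)."""
    yield term
    if isinstance(term, tuple):
        for arg in term[1:]:
            yield from subterms(arg)

def occurrences(symbol, term):
    """Count occurrences of an atom/functor name inside a term."""
    if isinstance(term, tuple):
        return int(term[0] == symbol) + sum(occurrences(symbol, a)
                                            for a in term[1:])
    return int(term == symbol)

def feature_vector(symbol, all_subterms):
    """One component per possible (sub-)term, as in the IFM input encoding:
    each component counts how often `symbol` occurs in that (sub-)term."""
    return [occurrences(symbol, t) for t in all_subterms]
```

For the term f(a, g(a)), written `("f", "a", ("g", "a"))`, the atom `a` occurs twice in the whole term and once in each proper subterm, so its feature vector over all four subterms is `[2, 1, 1, 1]`.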
With a simple threshold neuron operating on the Pairs neurons, the occur check can be realised [15]. The activation functions of the different neurons are constructed such that once the network has reached a stable state (relaxation process), the unification process can be performed [15]. In order to actually calculate the most common unifier, a special SOFM, called the Output Feature Map (OFM), is constructed. The weights of this feature map are the activations of the Cube neurons. If the Variable Vector of a variable is used as input pattern to the OFM, the neuron representing an instance of that variable responds.

In our network, tests like the occur check and the clash test are implemented such that they can be calculated in parallel during the relaxation process. Unification is performed via a relaxation neural network. When this network has reached a stable state, the most common unifier can be read out using the OFM. It can be proven that our network performs the unification process precisely [15]. Real-world applications of logic programming, in particular in expert systems, require more than exact reasoning capabilities. In order to perform fuzzy unification, the AND resp. OR neurons of the relaxation networks have to be modified: instead of the AND resp. OR function in the neurons with connections from the Variable Vectors to the Pairs neurons, the activation function is changed to the minimum resp. maximum of the two input activations [15]. We have tested the system with different programs consisting of a small Prolog database, simplification of algebraic terms, symbolic differentiation, and the traveling salesman problem [15].

6. Introspection

Many symbolic knowledge processing systems rely on programs that are able to perform symbolic proofs. Interpreters for the programming language Prolog are examples of such programs. The usage of Prolog interpreters for symbolic proofs, however, implies a certain proof strategy.
In case of failure of a partial goal, the interpreter backtracks systematically to the last choice made, without analyzing the cause of failure. Even for simple programs, this implicit control strategy is not sufficient to obtain efficient computations. Neural networks can be used to automatically optimize symbolic proofs without the need for an explicit formulation of control knowledge [16]. We have realized an approach to learn and store control knowledge in a neural network. Input to the neural network is the Prolog clause to be proved. The output is an encoded structural description of the subgoal that is to be proved next. In order to make a comparison, we have realized three different neural networks for this problem [16]: ART1 extended to a supervised learning mode [2]; backpropagation [8]; and Kohonen's Self-Organizing Feature Maps (SOFM) [5].

A meta-interpreter generates training patterns for the neural network. It encodes successful Prolog proofs. Trained with these examples of proofs, the neural network generalizes a control strategy to select clauses. Another meta-interpreter, called the generating meta-interpreter (GMI), is asked to prove a goal. The GMI constructs the optimal proof for the given goal, i.e. the proof with the minimal number of resolutions. The optimal proof is found by generating all possible proofs and comparing them with reference to the number of resolutions. For an optimal proof, each clause-selection situation is recorded. A clause-selection situation is described by the features of the partial goal to be proved and the clause which is selected to solve that particular goal. The clause is described by a unique identification, and two different sorts of information concerning the structure of arguments are used: the types of arguments and their possible identity.
For the types of arguments, a hierarchical ordering of the possible argument types is used. The encoder takes the clause-selection situation and produces a training pattern for the neural network. The encoding preserves similarities among the types. The neural network is trained with the encoded training patterns until it is able to reproduce the choice of a clause for a partial goal. A query is passed to an optimizing meta-interpreter (OMI). For each partial goal, the OMI presents the description of the partial goal as input to the neural network and obtains a candidate clause for resolution. With this candidate the resolution is attempted. If resolution fails, the OMI uses the Prolog search strategy as default.

Our system allows the learned control knowledge to be generalized to new programs. In order to do this, structural similarities between the new program and the learned one are used to generate a mapping of the corresponding selection situations of different programs.

We have tested our approach using several different Prolog programs, for example programs for map coloring, the travelling salesman problem, symbolic differentiation and a small expert system [16]. It turned out that almost all neural networks were in principle able to learn a proof strategy. The best results in reproducing learned strategies were obtained with the modified ART1 network, which reproduced the optimal number of resolutions for a known proof. For queries of the same type (same program) that were not used as training data, however, the SOFM turned out to be the best. A proof strategy using this neural network averaged slightly over the optimal number of resolutions even for completely new programs, but well below the number of resolutions a Prolog interpreter needs. The backpropagation network we used was the worst in both cases [16].

7. Summary

We showed several meaningful ways to integrate neural networks with knowledge-based systems.
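As a compact illustration of the OMI's clause-selection step described above, consider the following sketch; the helper names and the stand-in predictor are invented for illustration and are not the authors' implementation.

```python
def omi_select(goal_features, clause_order, predict, try_resolve):
    """One OMI step: ask the trained network for a candidate clause,
    attempt resolution with it, and fall back to the default Prolog
    clause order if the candidate fails.

    predict     -- stand-in for the neural network: features -> clause id
    try_resolve -- returns True if resolution with that clause succeeds
    """
    candidate = predict(goal_features)
    for clause in [candidate] + [c for c in clause_order if c != candidate]:
        if try_resolve(clause):
            return clause
    return None  # no clause resolves: the proof of this partial goal fails
```

The point of the design is that a wrong network prediction costs only one wasted resolution attempt, because the standard search order remains as a safety net.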
Concerning Neural Unification, we have studied a neural unification algorithm using Self-Organizing Feature Maps by Kohonen. This neural unification algorithm is capable of performing ordinary unification with neural networks, whereby important problems like the occur check and the calculation of a most common unifier can be done in parallel. For Introspection, we have tested several different neural networks for their ability to detect and learn proof strategies. A modified Self-Organizing Feature Map has been identified to yield the best results concerning the reproduction of proofs made before and the generalization to completely new programs. In the field of Structure Detection, we have developed a combined toolbox to detect structures that a Self-Organizing Feature Map has learned from subsymbolic raw data through the collective behaviour of assemblies of neurons. Data stemming from measurements, typically of high dimensionality, can be analyzed by using an apt visualization of a Self-Organizing Feature Map (U-matrix methods). The detected structures can be reformulated in the form of symbolic rules for Integrated Knowledge Acquisition by a sophisticated machine learning algorithm, sig*.

The usage of neural networks for integrated subsymbolic and symbolic knowledge acquisition realizes a new type of learning from examples. Unsupervised learning neural networks are capable of extracting regularities from data with the help of apt visualization techniques. Due to the distributed subsymbolic representation, neural networks are, however, not able to explain their inferences. Our system avoids this disadvantage by extracting symbolic rules out of the neural network. It is possible to give an explanation of the inferences made by the neural networks. By exploiting the properties of the neural networks, the system is also able to effectively handle noisy and incomplete data.
Algorithms for neural unification allow an efficient realization of the central part of a symbolic knowledge processing system and may also be used for neural approximative reasoning. Introspection with neural networks frees the user and programmer of knowledge processing systems from formulating control knowledge explicitly.

8. Acknowledgements

We thank Ms. G. Guimaraes, Mr. H. Li, and Mr. V. Weber for the helpful discussions. We thank all the students of the University of Dortmund, Germany who worked on preliminary versions of our systems. This research has been supported in part by the German Ministry of Research and Technology, project WiNA (contract No. 413-5839-01 IN 103 C/3) and by the Bennigsen-Foerde prize of NRW.

9. References

[1] K.H. Becks, W. Burchard, A.B. Cremers, A. Heuker, A. Ultsch "Using Activation Networks for Analogical Ordering of Consideration: Due Method for Integrating Connectionist and Symbolic Processing" in Eckmiller/Hartmann/Hauske (Eds.), pp. 465-469, 1990.
[2] G.A. Carpenter, S. Grossberg "Self-Organization of Stable Category Recognition Codes for Analog Input Patterns" Applied Optics Vol. 26, pp. 4919-4930, 1987.
[3] G. Deichsel, H.J. Trampisch "Clusteranalyse und Diskriminanzanalyse" Gustav Fischer Verlag, Stuttgart, 1985.
[4] W.R. Hutchison, K.R. Stephens "Integration of Distributed and Symbolic Knowledge Representations" Proc. IEEE Intern. Conf. on Neural Networks, San Diego, CA, p. 395, 1987.
[5] T. Kohonen "Self-Organization and Associative Memory" Springer, Berlin, 1989 (3rd ed.).
[6] M.C. Mozer "RAMBOT: A Connectionist Expert System that Learns by Example" Proc. IEEE Intern. Conf. on Neural Networks, San Diego, CA, Vol. 2, p. 693, 1987.
[7] G. Palm, A. Ultsch, K. Goser, U. Rückert "Knowledge Processing in Neural Architecture" in Delgado-Frias/Moore (Eds.) VLSI for Neural Networks and Artificial Intelligence, Plenum Publ., New York, 1993.
[8] D.E. Rumelhart, J.L. McClelland (Eds.) "Parallel Distributed Processing" MIT Press, 1986.
[9] M. Schweizer, P.M.B. Foehn, J. Schweizer, A. Ultsch "A Hybrid Expert System for Avalanche Forecasting" Proc. Intl. Conf. ENTER 94 - Inform. Communic. Technol. in Tourism, Innsbruck, pp. 148-153, 1994.
[10] A. Ultsch "Connectionist Models and their Integration with Knowledge-Based Systems" Technical Report No. 396, Univ. of Dortmund, Germany, 1991 (in German).
[11] A. Ultsch "Self-organizing Neural Networks for Knowledge Acquisition" Proc. European Conf. on AI (ECAI), Wien, Austria, pp. 208-210, 1992.
[12] A. Ultsch "Knowledge Acquisition with Self-Organizing Neural Networks" Proc. Intl. Conf. on Artificial Neural Networks (ICANN), Brighton, UK, pp. 735-740, 1992.
[13] A. Ultsch "Self-Organized Feature Maps for Monitoring and Knowledge Acquisition of a Chemical Process" Proc. Intl. Conf. on Artificial Neural Networks (ICANN), Amsterdam, Netherlands, pp. 864-867, 1993.
[14] A. Ultsch, G. Guimaraes, D. Korus, H. Li "Knowledge Extraction from Artificial Neural Networks and Applications" TAT & World Transputer Congress 93, Aachen, Germany, Springer, pp. 194-203, 1993.
[15] A. Ultsch, G. Guimaraes, V. Weber "Self Organizing Feature Maps for Logical Unification" Proc. Intl. Joint Conf. AI, Portugal, 1994.
[16] A. Ultsch, R. Hannuschka, U. Hartmann, M. Mandischer, V. Weber "Optimizing logical proofs with connectionist networks" in Kohonen et al. (Eds.) Proc. ICANN, Helsinki, Elsevier, pp. 585-590, 1991.
[17] A. Ultsch, D. Korus, T.O. Kleine "Neural Networks in Biochemical Analysis" Abstract Intl. Conf. Biochemical Analysis 95, publ. in: Europ. Journal of Clinical Chemistry and Clinical Biochemistry, Vol. 33, No. 4, Berlin, pp. A144-A145, April 1995.
[18] A. Ultsch, H. Li "Automatic Acquisition of Symbolic Knowledge from Subsymbolic Neural Networks" Proc. Intl. Conf. on Signal Processing, Peking, China, pp. 1201-1204, 1993.
[19] A. Ultsch, H.P. Siemon "Self-Organizing Neural Networks for Exploratory Data Analysis" Proc. Conf. Soc. for Information and Classification, Dortmund, Germany, 1992.
[20] L.A. Zadeh "Fuzzy Sets" Information and Control 8, pp. 338-353, 1965.

Alan Turing


Alan Turing (1912-1954) was a pioneering British computer scientist, mathematician, logician, cryptanalyst and theoretical biologist. He was highly influential in the development of computer science, providing a formalization of the concepts of algorithm and computation with the Turing machine, which can be considered a model of a general-purpose computer.

From 1931 to 1934, Turing studied at King's College, Cambridge, where he gained first-class honours in mathematics. In June 1938, he obtained his PhD from Princeton University. During the Second World War, Turing was a leading participant in the breaking of German ciphers at Bletchley Park, although because of his eccentric personality many people put obstacles in his way, and some even suspected him of being a foreign spy. He devised a number of techniques for breaking German ciphers, including an electromechanical machine that could find settings for the Enigma machine, whose ciphers were widely considered unbreakable at the time. Turing played an important role in cracking the messages that enabled the Allies to defeat the Nazis in many crucial engagements, including the Battle of the Atlantic. It has been estimated that this work shortened the war in Europe by as many as two to four years.

In 1945, Turing was awarded the OBE by King George VI for his wartime services, but his work remained secret for many years. In 1952, Turing was prosecuted for homosexual acts, when such behaviour was still a criminal offence in the UK. He accepted treatment with estrogen injections. Two years later, he was found dead in his bed, having committed suicide. In 2009, following an Internet campaign, British Prime Minister Gordon Brown made an official public apology for "the appalling way he was treated". Queen Elizabeth II granted him a posthumous pardon in 2013. A year later, the film The Imitation Game, about his life, was released in the UK on 14 November and in the US on 28 November.

Commemorating the Centenary of Turing's Birth (Liu Ruiting)


Innovative Thinking, Outstanding Contributions, a Maverick, a Legendary Life: Commemorating the Centenary of Alan Turing's Birth. By Liu Ruiting. Alan Mathison Turing was born on 23 June 1912 in Maida Vale, London, England; this year marks exactly the 100th anniversary of his birth.

A Fellow of the Royal Society, mathematician, and logician, he is internationally recognized as the father of computer science and artificial intelligence.

At the height of his ceaselessly flowing ideas and his great enthusiasm for putting them into practice, Turing died unexpectedly at his home in Wilmslow, Cheshire, England on 7 June 1954, half a month short of his 42nd birthday; a great star of science had fallen.

For a long time, people have regarded Turing as mysterious, eccentric, and remote: awe-inspiring yet hard to understand.

His admirers considered him a prodigy; his detractors, a freak.

In fact, he was indeed a scientific genius of wide-ranging interests, with deep curiosity and innovations not only in mathematics and computing science, but also in physics (quantum mechanics and relativity), chemistry (which fascinated him almost like an alchemist), and biology (morphogenesis and mathematical biology).

He could seem absent-minded toward people, yet he was extremely friendly and sincere.

He was a world-class marathon runner, yet he suffered punishment and torment for his homosexuality.

Truly, no one is perfect: we know that Leonardo da Vinci was also homosexual, and that Isaac Newton was a secret and devoted alchemist.

Great scientists are not gods; we should restore Turing to the real human being he was.

Family origins (1316-). Turing's paternal family came from Normandy, France; the family tree can be traced back to 1316.

In the early 14th century the family came to Aberdeenshire, Scotland.

The family motto was "Fortune favors the bold" (Latin: Fortuna audentes juvat).

The surname has several spellings: Turyn, Turine, Turin, and Turing, of which Turin is the French form.

In the early 17th century, after Sir William Turyn received a knighthood from King James I (1566-1625; King of Scotland from 1567 and of England from 1603), a "g" was added to the end of the name, producing the later English surname Turing.

Adaptive Optics for Vision Science Handbook


Adaptive Optics for Vision Science: Principles, Practices, Designand ApplicationsJason Porter, Abdul Awwal, Julianna LinHope Queener, Karen Thorn(Editorial Committee)Updated on June 30, 2003−Introduction1.Introduction (David Williams)University of Rochester1.1 Goals of the AO Manual (This could also be a separate preface written by the editors)* practical guide for investigators who wish to build an AO system* summary of vision science results obtained to date with AO1.2 Brief History of Imaging1.2.1 The evolution of astronomical AOThe first microscopes and telescopes, Horace Babcock , military applications during StarWars, ending with examples of the best AO images obtained to date. Requirements forastronomical AO1.2.2 The evolution of vision science AOVision correction before adaptive optics:first spectacles, first correction of astigmatism, first contact lenses, Scheiner and thefirst wavefront sensor.Retinal imaging before adaptive optics:the invention of the ophthalmoscope, SLO, OCTFirst AO systems: Dreher et al.; Liang, Williams, and Miller.Comparison of Vision AO and Astronomical AO: light budget, temporal resolutionVision correction with AO:customized contact lenses, IOLs, and refractive surgery, LLNL AO Phoropter Retinal Imaging with Adaptive OpticsHighlighted results from Rochester, Houston, Indiana, UCD etc.1.3 Future Potential of AO in Vision Science1.3.1 Post-processing and AO1.3.2 AO and other imaging technologies (e.g. OCT)1.3.3 Vision Correction1.3.4 Retinal Imaging1.3.5 Retinal SurgeryII. Wavefront Sensing2. Aberration Structure of the Human Eye (Pablo Artal)(Murcia Optics Lab; LOUM)2.1 Aberration structure of the human eye2.1.1 Monochromatic aberrations in normal eyes2.1.2 Chromatic aberrations2.1.3 Location of aberrations2.1.4 Dynamics (temporal properties) of aberrations2.1.5 Statistics of aberrations in normal populations (A Fried parameter?)2.1.6 Off-axis aberrations2.1.7 Effects of polarization and scattering3. 
Wavefront Sensing and Diagnostic Uses (Geunyoung Yoon) University of Rochester3.1 Introduction3.1.1 Why is wavefront sensing technique important for vision science?3.1.2 Importance of measuring higher order aberrations of the eyeCharacterization of optical quality of the eyePrediction of retinal image quality formed by the eye’s opticsBrief summary of potential applications of wavefront sensing technique3.1.3 Chapter overview3.2 Wavefront sensors for the eye3.2.1 History of ophthalmic wavefront sensing techniques3.2.2 Different types of wavefront sensors and principle of each wavefrontsensorSubjective vs objective method (SRR vs S-H, LRT and Tcherning)Measuring light going into vs coming out of the eye (SRR, LRT and Tcherning vs S-H) 3.3 Optimizing Shack-Hartmann wavefront sensor3.3.1 Design parametersWavelength, light source, laser beacon generation, pupil camera, laser safety…3.3.2 OSA standard (coordinates system, sign convention, order of Zernikepolynomials)3.3.3 Number of sampling points (lenslets) vs wavefront reconstructionperformance3.3.4 Tradeoff between dynamic range and measurement sensitivityFocal length of a lenslet array and lenslet spacing3.3.5 PrecompensationTrial lenses, trombone system, bite bar (Badal optometer)3.3.6 Increasing dynamic range without losing measurement sensitivityTranslational plate with subaperturesComputer algorithms (variable centroiding box position)3.3.7 Requirement of dynamic range of S-H wavefront sensor based on a largepopulation of the eye’s aberrations3.4 Calibration of the wavefront sensor3.4.1 reconstruction algorithm - use of simulated spot array pattern3.4.2 measurement performance - use of phase plate or deformable mirror 3.5 Applications of wavefront sensing technique to vision science3.5.1 Laser refractive surgery (conventional and customized ablation)3.5.2 Vision correction using customized optics (contact lenses andintraocular lenses)3.5.3 Autorefraction (image metric to predict subjective vision 
perception)3.5.4 Objective vision monitoring3.5.5 Adaptive optics (vision testing, high resolution retinal imaging)3.6 SummaryIII. Wavefront Correction with Adaptive Optics 4. Mirror Selection (Nathan Doble and Don Miller)University of Rochester / Indiana University4.1 Introduction4.1.1 Describe the DMs used in current systems.4.1.1.2 Xinetics type – Williams, Miller, Roorda – (PZT and PMN)4.1.1.3 Membrane – Artal, Zhu(Bartsch)4.1.1.4 MEMS – LLNL Phoropter, Doble4.1.1.5 LC-SLM – Davis System.4.2 Statistics of the two populations4.2.1 State of refraction:4.2.1.1 All aberrations present4.2.1.2 Zeroed Defocus4.2.1.3 Same as for 4.2.1.2 but with astigmatism zeroed in addition4.2.2 For various pupil sizes (7.5 - 2 mm) calculate:4.2.2.1 PV Error4.2.2.2 MTF4.2.2.3 Power Spectra4.2.3 Required DM stroke given by 95% of the PV error for the variousrefraction cases and pupil sizes.4.2.4 Plot of the variance with mode order and / or Zernike mode.4.3 Simulation of various Mirror TypesDetermine parameters for all mirrors to achieve 80% Strehl.4.3.1 Continuous Faceplate DMs4.3.1.1 Describe mode of operation.4.3.1.2 Modeled as a simple Gaussian4.3.1.3 Simulations for 7.5mm pupil4.3.1.4 Parameters to vary:Number of actuators.Coupling coefficient.Wavelength.4.3.1.5 All the above with unlimited stroke.4.3.2 Piston Only DMs4.3.2.1 Describe mode of operation.4.3.2.2 Simulations for 7.5mm pupil with either cases4.3.2.3 No phase wrapping i.e. 
unlimited stroke.Number of actuators.Packing geometryWavelength.Need to repeat the above but with gaps.4.3.2.4 Effect of phase wrappingTwo cases:Phase wrapping occurs at the segment locations.Arbitrary phase wrap.4.3.3 Segmented Piston / tip / tilt DMs4.3.3.1 Describe mode of operation.4.3.3.2 Three influence functions per segment, do the SVD fit on a segment by segmentbasis.4.3.3.3 Simulations for 7.5mm pupil.4.3.3.4 No phase wrapping unlimited stroke and tip/tilt.Number of actuators - squareSame as above except with hexagonal packing.Wavelength.Gaps for both square and hexagonal packing.4.3.3.5 Effect of phase wrappingPhase wrapping occurs at the segment locations.Arbitary phase wrap. Wrap the wavefront and then determine the required number ofsegments. Everything else as listed in part 1).4.3.4 Membrane DMs4.3.4.1 Describe mode of operation. Bimorphs as well.4.3.4.2 Simulations for 7.5mm pupil with either cases.4.3.4.3 Parameters to vary:Number of actuators.Actuator size.Membrane stressWavelength.5. Control Algorithms (Li Chen)University of Rochester5.1 Configuration of lenslets and actuators5.2 Influence function measurement5.3 Control command of wavefront corrector5.3.1 Wavefront control5.3.2 Direct slope control5.3.3 Special control for different wavefront correctors5.4 Transfer function modelization of adaptive optics system5.4.1 Transfer function of adaptive optics components5.4.2 Overall system transfer function5.4.3 Adaptive optics system bandwidth analysis5.5 Temporal modelization with Transfer function5.5.1 Feedback control5.5.2 Proportional integral control5.5.3 Smith compensate control5.6 Temporal controller optimization5.6.1 Open-loop control5.6.2 Closed-loop control5.6.2 Time delay effect on the adaptive optics system5.6.3 Real time considerations5.7 Summary6. 
Software/User Interface/Operational Requirements (Ben Singer)
University of Rochester
6.1 Introduction
6.2 Hardware setup
6.2.1 Imaging
6.2.1.1 Hartmann-Shack spots
6.2.1.2 Pupil monitoring
6.2.1.3 Retinal imaging
6.2.2 Triggered devices: shutters, lasers, LEDs
6.2.3 Serial devices: defocusing slide, custom devices
6.2.4 AO mirror control
6.3 Image processing setup
6.3.1 Setting regions of interest: search boxes
6.3.2 Preparing the image
6.3.2.1 Thresholding
6.3.2.2 Averaging
6.3.2.3 Background subtraction
6.3.2.4 Flat-fielding
6.3.3 Centroiding
6.3.4 Bad data
6.4 Wavefront reconstruction and visualization
6.4.1 Zernike mode recovery and RMS
6.4.1.1 Display of modes and RMS: traces, histograms
6.4.1.2 Setting modes of interest
6.4.2 Wavefront visualization
6.4.2.1 Continuous grayscale image
6.4.2.2 Wrapped grayscale image
6.4.2.3 Three-D plots
6.5 Adaptive optics
6.5.1 Visualizing and protecting write-only mirrors
6.5.2 Testing, diagnosing, calibrating
6.5.3 Individual actuator control
6.5.4 Update timing
6.5.5 Bad actuators
6.6 Lessons learned, future goals
6.6.1 Case studies from existing systems at CVS and B&L
6.6.1.1 One-shot wavefront sensing vs. realtime AO
6.6.1.2 Using AO systems in experiments: step defocus
6.6.2 Engineering trade-offs
6.6.2.1 Transparency vs. simplicity
6.6.2.2 Extensibility vs. stability
6.6.3 How to please everyone
6.6.3.1 Subject
6.6.3.2 Operator
6.6.3.3 Experimenter
6.6.3.4 Programmer
6.6.4 Software tools
6.7 Summary

7. AO Assembly, Integration and Troubleshooting (Brian Bauman)
Lawrence Livermore
7.1 Introduction and philosophy
7.2 Optical alignment
7.2.1 General remarks
7.2.2 Understanding the penalties for misalignments
7.2.3 Having the right knobs: optomechanics
7.2.4 Common alignment practices
7.2.4.1 Tools
7.2.4.2 Off-line alignment of sub-systems
7.2.4.3 Aligning optical components
7.2.4.4 Sample procedures (taken from the AO phoropter project)
7.3 Wavefront sensor checkout
7.3.1 Wavefront sensor camera checkout
7.3.2 Wavefront sensor checkout
7.3.2.1 Proving that centroid measurements are repeatable
7.3.2.2 Proving that the centroid measurements do not depend on where centroids are with respect to pixels
7.3.2.3 Measuring plate scale
7.3.2.4 Proving that a known change in the wavefront produces the correct change in centroids
7.4 Wavefront reconstruction
7.4.1 Testing the reconstruction code: prove that a known change in the wavefront produces the correct change in the reconstructed wavefront
7.5 Aligning the "probe" beam into the eye
7.6 Visual stimulus alignment
7.7 Flood-illumination alignment
7.8 DM-to-WFS registration
7.8.1 Tolerances & penalties for misregistration
7.8.2 Proving that the wavefront sensor-to-SLM registration is acceptable
7.9 Generating control matrices
7.9.1 System ("push") matrix
7.9.2 Obtaining the control matrix
7.9.3 Checking the control matrix
7.9.4 Null spaces
7.10 Closing the loop
7.10.1 Checking the gain parameter
7.10.2 Checking the integration parameter
7.11 Calibration
7.11.1 Obtaining calibrated reference centroids
7.11.2 Proving that reference centroids are good
7.11.3 Image sharpening to improve Strehl performance
7.12 Science procedures
7.13 Trouble-shooting algorithms

8.
System Performance: Testing, Procedures, Calibration and Diagnostics (Bruce Macintosh, Marcos Van Dam)
Lawrence Livermore / Keck Telescope
8.1 Spatial and temporal characteristics of correction
8.2 Power spectra calculations
8.3 Disturbance rejection curves
8.4 Strehl ratio / PSF measurements / calculations
8.5 Performance vs. different parameters (beacon brightness, field angle, ...)
8.6 Summary table and figures of above criteria
8.6.1 Results from Xinetics, BMC, IrisAO

IV. Retinal Imaging Applications

9. Fundamental Properties of the Retina (Ann Elsner)
Schepens Eye Research Institute
9.1 Shape of the retina, geometric optics
9.1.1 Normal fovea, young vs. old
9.1.1.1 Foveal pit
9.1.1.2 Foveal crest
9.1.2 Normal optic nerve head
9.1.3 Periphery and ora serrata
9.2 Two blood supplies, young vs. old
9.2.1 Retinal vessels and arcades
9.2.2 0–4 layers of retinal capillaries, foveal avascular zone
9.2.3 Choriocapillaris, choroidal vessels, watershed zone
9.3 Layers vs. features, young vs. old, ethnic differences
9.3.1 Sclera
9.3.2 Choroidal vessels, choroidal melanin
9.3.3 Bruch's membrane
9.3.4 RPE, tight junctions, RPE melanin
9.3.5 Photoreceptors, outer limiting membrane
9.3.5.1 Outer segment
9.3.5.2 Inner segment
9.3.5.3 Stiles-Crawford effect
9.3.5.4 Macular pigment
9.3.6 Neural retina
9.3.7 Glia, inner limiting membrane, matrix
9.3.8 Inner limiting membrane
9.3.9 Vitreo-retinal interface, vitreous floaters
9.4 Spectra, layers and features
9.4.1 Main absorbers in the retina
9.4.2 Absorbers vs. layers
9.4.3 Features in different wavelengths
9.4.4 Changes with aging
9.5 Light scattering, layers and features
9.5.1 Directly backscattered light
9.5.2 Multiply scattered light
9.5.3 Geometric changes in specular light return
9.5.4 Layers for specular and multiply scattered light
9.5.5 Imaging techniques to benefit from light scattering properties
9.6 Polarization
9.6.1 Polarization properties of the photoreceptors
9.6.2 Polarization properties of the nerve fiber bundles, microtubules
9.6.3 Anterior segment and other polarization artifacts
9.6.4 Techniques to measure polarization properties
9.7 Imaging techniques to produce contrast from specular or multiply scattered light
9.7.1 Confocal imaging
9.7.2 Polarization to narrow the point spread function
9.7.3 Polarization as a means to separate directly backscattered light from multiply scattered light, demonstration using the scattered light
9.7.4 Coherence techniques as a means to separate directly backscattered light from multiply scattered light, with a goal of using the scattered light

10. Strategies for High Resolution Retinal Imaging (Austin Roorda, Remy Tumbar, Julian Christou)
University of Houston / University of Rochester / University of California, Santa Cruz
10.1 Conventional imaging (Roorda)
10.1.1 Basic principles
This will be a simple optical imaging system.
10.1.2 Basic system design
Show a typical AO flood-illuminated imaging system for the eye.
10.1.3 Choice of optical components
Discuss the type of optics you would use (e.g., off-axis parabolas).
10.1.4 Choice of light source
How much energy, what bandwidth, flash duration; show typical examples.
10.1.5 Controlling the field size
Where to place a field stop and why.
10.1.6 Choice of camera
What grade of camera is required? Show properties of typical cameras that are currently used.
10.1.7 Implementation of wavefront sensing
Where do you place the wavefront sensor? Using different wavelengths for wavefront sensing.
10.2 Scanning laser imaging (Roorda)
10.2.1 Basic principles
This will show how a simple scanning imaging system operates.
10.2.2 Basic system design
This shows the layout of a simple AOSLO.
10.2.3 Choice of optical components
What type of optical components should you use and why (e.g., mirrors vs. lenses)? Where do you want to place the components (e.g., raster scanning, DM, etc.) and why?
10.2.4 Choice of light source
How to implement different wavelengths. How to control retinal light exposure.
10.2.5 Controlling the field size
Optical methods to increase field size. Mechanical (scanning mirror) methods to increase field size.
10.2.6 Controlling light delivery
Acousto-optic control of the light source for various applications.
10.2.7 Choice of detector
PMT vs. APD: what are the design considerations?
10.2.8 Choice of frame grabbing and image acquisition hardware
What are the requirements for a frame grabber? What problems can you expect?
10.2.9 Implementation of wavefront sensing
Strategies for wavefront sensing in an AOSLO.
10.2.10 Other: pupil tracking, retinal tracking, image warping
10.3 OCT systems (Tumbar)
10.3.1 Flood-illuminated vs. scanning
10.4 Future ideas (Tumbar)
10.4.1 DIC (differential interference contrast)
10.4.2 Phase contrast
10.4.3 Polarization techniques
10.4.4 Two-photon
10.4.5 Fluorescence/auto-fluorescence
10.5 Survey of post-processing/image enhancement strategies (Christou)

11.
Design Examples

11.1 Design of the Houston Adaptive Optics Scanning Laser Ophthalmoscope (AOSLO) (Krishna Venkateswaran)
11.1.1 Basic optical design
Effect of a double-pass system on the PSF, imaging in conjugate planes
11.1.2 Light delivery optics
Fiber optic source and other optics
11.1.3 Raster scanning
Scanning speeds, etc.
11.1.4 Physics of confocal imaging
11.1.5 Adaptive optics in the SLO
Wavefront sensing, Zernike polynomials, deformable mirror, correction time scales
11.1.6 Detailed optical layout of the AOSLO
Lenses, mirrors, beam splitters with specs
11.1.7 Image acquisition
Back-end electronics, frame grabber details
11.1.8 Software interface for the AOSLO
Wavefront sensing, image acquisition
11.1.9 Theoretical model of the AOSLO
Limits on axial and lateral resolution
11.1.10 Image registration
11.1.11 Results
11.1.12 Discussion on improving performance of the AOSLO
Light loss in optics, deformable mirror, wavefront sensing
11.1.13 Next generation AOSLO-type systems
11.2 Indiana University AO Coherence Gated System (Don Miller)
11.2.1 Resolution advantages of an AO-OCT retina camera
11.2.2 AO-OCT basic system design concepts
11.2.2.1 Application-specific constraints
- Sensitivity to weak tissue reflections
- Tolerance to eye motion artifacts
- Yoking the focal plane to the coherence gate
11.2.2.2 Integration of AO and OCT sub-systems
- Generic OCT system
- Specific OCT architectures
- Preferred AO-OCT embodiments
11.2.3 Description of the Indiana AO-OCT retina camera
Optical layout of the Indiana AO-OCT retina camera
11.2.3.1 Adaptive optics for correction of ocular aberrations
A. System description
B. Results
11.2.3.2 1D OCT axial scanning for retina tracking
A. System description
B. Results
11.2.3.3 High speed 2D incoherent flood illumination for focusing and aligning
A. System description
B. Results
11.2.3.4 CCD-based 2D OCT for en face optical sectioning of the retina
A. System description
B. Results
11.2.4 Future developments
11.2.4.1 Smart photodiode array
11.2.4.2 En face and tomographic scanning
11.2.4.3 Reduction of image speckle
11.2.4.4 Detector sensitivity
11.2.4.5 Faster image acquisition
11.3 Rochester Second Generation AO System (Heidi Hofer)

V. Vision Correction Applications

12. Customized Vision Correction Devices (Ian Cox)
Bausch & Lomb
12.1 Contact Lenses
12.1.1 Rigid or soft lenses?
12.1.2 Design considerations – more than just optics
12.1.3 Measurement – the eye, the lens or the system?
12.1.4 Manufacturing issues – can the correct surfaces be made?
12.1.5 Who will benefit?
12.1.6 Summary
12.2 Intraocular Lenses
12.2.1 Which aberrations – the cornea, the lens or the eye?
12.2.2 Surgical procedures – induced aberrations
12.2.3 Design & manufacturing considerations
12.2.4 Future developments & summary

13. Customized Refractive Surgery (Scott MacRae)
University of Rochester / StrongVision

14. Visual Psychophysics (UC Davis Team, headed by Jack Werner)
UC Davis
14.1 Characterizing visual performance
14.1.1 Acuity
14.1.2 Contrast sensitivity functions (CSFs)
14.1.3 Photopic/scotopic performance (include various ways to define luminance)
14.2 What is psychophysics?
14.2.1 Studying the limits of vision
14.2.2 Differences between detection, discrimination and identification
14.3 Psychophysical methods
14.3.1 Psychometric function
14.3.2 Signal detection theory
14.3.3 Measuring threshold
14.3.4 Criterion-free methods
14.3.5 Method of constant stimuli, method of adjustment, adaptive methods (e.g., QUEST)
14.4 The visual stimulus
14.4.1 Issues in selecting a display system
Temporal resolution; spatial resolution; intensity (maximum, bit depth); homogeneity; spectral characteristics
14.4.2 Hardware options
Custom optical systems (LEDs, Maxwellian view); displays (CRTs, DLPs, LCDs, plasma, projectors); display generation (custom cards, VSG, Bits++, 10-bit cards, Pelli attenuator, dithering/bit stealing)
14.4.3 Software
Off-the-shelf software is not usually flexible enough. We recommend doing it yourself. This can be done using entirely custom software (e.g., C++) or by using software libraries such as VSG (PC) or PsychToolbox (Mac/PC).
14.4.4 Calibration
Gamma correction; spatial homogeneity; temporal and spatial resolution
14.5 Summary

15. Wavefront to Phoropter Refraction (Larry Thibos)
Indiana University
15.1 Basic terminology
15.1.1 Refractive error
15.1.2 Refractive correction
15.1.3 Lens prescriptions
15.2 The goal of subjective refraction
15.2.1 Definition of far point
15.2.2 Elimination of astigmatism
15.2.3 Using depth-of-focus to expand the range of clear vision
15.2.4 Placement of far point at hyperfocal distance
15.3 Methods for estimating the monochromatic far point from an aberration map
15.3.1 Estimating center of curvature of an aberrated wavefront
15.3.1.1 Least-squares fitting
15.3.1.2 Paraxial curvature matching
15.3.2 Estimating object distance that optimizes focus
15.3.2.1 Metrics based on point objects
15.3.2.2 Metrics based on grating objects
15.4 Ocular chromatic aberration and the polychromatic far point
15.4.1 Polychromatic center-of-curvature metrics
15.4.2 Polychromatic point image metrics
15.4.3 Polychromatic grating image metrics
15.5 Experimental evaluation of proposed methods
15.5.1 Conditions for subjective refraction
15.5.2 Monochromatic predictions
15.5.3 Polychromatic predictions

16. Design Examples
Detailed layouts, numbers, noise analysis, limitations for visual psychophysics:
16.1 LLNL/UR/B&L AO Phoropter (Scot Olivier)
16.2 UC Davis AO Phoropter (Scot Olivier)
16.3 Rochester 2nd Generation AO System (Heidi Hofer)

VI. Appendix/Glossary of Terms (Hope Queener, Joseph Carroll)
• Laser safety calculations
• Other ideas?
• Glossary to define frequently used terms

Bidirectional Recurrent Neural Networks

Mike Schuster and Kuldip K. Paliwal, Member, IEEE

Abstract—In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. For this part, experiments on real data are reported.

Index Terms—Recurrent neural networks.

I. INTRODUCTION

A. General

MANY classification and regression problems of engineering interest are currently solved with statistical approaches using the principle of "learning from examples." For a certain model with a given structure inferred from the prior knowledge about the problem and characterized by a number of parameters, the aim is to estimate these parameters accurately and reliably using a finite amount of training data. In general, the parameters of the model are determined by a supervised training process, whereas the structure of the model is defined in advance. Choosing a proper structure for the model is often the only way for the designer of the system to put in prior knowledge about the solution of the problem.
Artificial neural networks (ANN's) (see [2] for an excellent introduction) are one group of models that take the principle "infer the knowledge from the data" to an extreme. In this paper, we are interested in studying ANN structures for one particular class of problems that are represented by temporal sequences of input–output data pairs. For these types of problems, which occur, for example, in speech recognition, time series prediction, dynamic control systems, etc., one of the challenges is to choose an appropriate network structure that, at least theoretically, is able to use all available input information to predict a point in the output space.

Manuscript received June 5, 1997. The associate editor coordinating the review of this paper and approving it for publication was Prof. Jenq-Neng Hwang. M. Schuster is with the ATR Interpreting Telecommunications Research Laboratory, Kyoto, Japan. K. K. Paliwal is with the ATR Interpreting Telecommunications Research Laboratory, Kyoto, Japan, on leave from the School of Microelectronic Engineering, Griffith University, Brisbane, Australia. Publisher Item Identifier S 1053-587X(97)08055-0.

Many ANN structures have been proposed in the literature to deal with time-varying patterns. Multilayer perceptrons (MLP's) have the limitation that they can only deal with static data patterns (i.e., input patterns of a predefined dimensionality), which requires definition of the size of the input window in advance. Waibel et al. [16] have pursued time-delay neural networks (TDNN's), which have proven to be a useful improvement over regular MLP's in many applications. The basic idea of a TDNN is to tie certain parameters in a regular MLP structure without restricting the learning capability of the ANN too much. Recurrent neural networks (RNN's) [5], [8], [12], [13], [15] provide another alternative for incorporating temporal dynamics and are discussed in more detail in a later section.

In this paper, we investigate different ANN structures for incorporating temporal dynamics. We conduct a
number of experiments using both artificial and real-world data. We show the superiority of RNN's over the other structures. We then point out some of the limitations of RNN's and propose a modified version of an RNN called a bidirectional recurrent neural network, which overcomes these limitations.

B. Technical

Consider a (time) sequence of input data vectors x_1, x_2, ..., x_T and a sequence of corresponding output data vectors y_1, y_2, ..., y_T, with neighboring data pairs (in time) being somehow statistically dependent. Given time sequences as training data, the aim is to learn the rules to predict the output data given the input data. Inputs and outputs can, in general, be continuous and/or categorical variables. When outputs are continuous, the problem is known as a regression problem, and when they are categorical (class labels), the problem is known as a classification problem. In this paper, the term prediction is used as a general term that includes regression and classification.

1) Unimodal Regression: For unimodal regression or function approximation, the components of the output vectors are continuous variables. The ANN parameters are estimated to maximize some predefined objective criterion (e.g., maximize the likelihood of the output data). When the distribution of the errors between the desired and the estimated output vectors is assumed to be Gaussian with zero mean and a fixed, global, data-dependent variance, the likelihood criterion reduces to the convenient Euclidean distance measure between the desired and the estimated output vectors, or the mean-squared-error criterion, which has to be minimized during training [2]. It has been shown by a number of researchers [2], [9] that neural networks can estimate the conditional average of the desired output (or target) vectors at their network outputs, i.e., E[y_t | x_t], where E[·] is an expectation operator.

Fig. 1. General structure of a regular unidirectional RNN shown (a) with a delay line and (b) unfolded in time for two time steps.

2) Classification: In the case of a classification problem, one seeks the most probable class out of a given pool of K classes. To make this kind of problem suitable to be solved by an ANN, the categorical variables are usually coded as vectors as follows. Consider that C_t is the class label at time t.¹ Then, construct an output vector whose kth component is one and whose other components are zero when the class label is c_k. The output vector sequence constructed in this manner, along with the input vector sequence, can be used to train the network, which then estimates the class posterior probability at the kth network output at each time point, with the quality of the estimate depending on the size of the training data and the complexity of the network.

¹Here, we want to make a distinction between C_t and c_t. C_t is a categorical random variable, and c_t is its value.

For some applications, it is not necessary to estimate the full conditional posterior probability for every class; it is enough to perform classification [i.e., compute the most probable class and decide using the maximum a posteriori decision rule]. In this case, the outputs are treated as statistically independent. Experiments for this part are conducted for artificial toy data as well as for real data.

• Estimation of the conditional probability of a complete sequence of classes of length T.

II. PREDICTION ASSUMING INDEPENDENT OUTPUTS

A. Recurrent Neural Networks

RNN's provide a way to use, at least theoretically, all available input information up to the current time frame t_c (i.e., x_1, ..., x_{t_c}) to predict c_{t_c}. How much of this information is captured by a particular RNN depends on its structure and the training algorithm. An illustration of the amount of input information used for prediction with different kinds of NN's is given in Fig. 2.

Future input information coming up later than t_c is usually also useful for prediction. With an RNN, this can be partially achieved by delaying the output by a certain number of frames N (Fig. 2). Theoretically, N could be made large, but in practice prediction results drop if N is too large. A possible explanation for this could be that with rising N, more of the network's modeling power is spent on "remembering" the input information for the prediction of c_{t_c}, leaving less modeling power for combining the prediction knowledge from different input vectors.

While delaying the output by some frames has been used successfully to improve results in a practical speech recognition system [12], which was also confirmed by the experiments conducted here, the optimal delay is task dependent and has to
Fig. 2. Visualization of the amount of input information used for prediction by different network structures.

Fig. 3. General structure of the bidirectional recurrent neural network (BRNN) shown unfolded in time for three time steps.

be found by "trial and error" on a validation test set. Certainly, a more elegant approach would be desirable.

To use all available input information, it is possible to use two separate networks (one for each time direction) and then somehow merge the results. Both networks can then be called experts for the specific problem on which the networks are trained. One way of merging the opinions of different experts is to assume the opinions to be independent, which leads to arithmetic averaging for regression and to geometric averaging (or, alternatively, to an arithmetic averaging in the log domain) for classification. These merging procedures are referred to as linear opinion pooling and logarithmic opinion pooling, respectively [1], [7]. Although simple merging of network outputs has been applied successfully in practice [14], it is generally not clear how to merge network outputs in an optimal way since different networks trained on the same data can no longer be regarded as independent.

B. Bidirectional Recurrent Neural Networks

To overcome the limitations of a regular RNN outlined in the previous section, we propose a bidirectional recurrent neural network (BRNN) that can be trained using all available input information in the past and future of a specific time frame.

1) Structure: The idea is to split the state neurons of a regular RNN in a part that is responsible for the positive time direction (forward states) and a part for the negative time direction (backward states). Outputs from forward states are not connected to inputs of backward states, and vice versa. This leads to the general structure that can be seen in Fig. 3, where it is unfolded over three time steps. It is not possible to display the BRNN structure in a figure similar to Fig. 1 with the delay line since the delay would have to be positive and negative in time. Note that without the backward states, this structure simplifies to a regular unidirectional forward RNN, as shown in Fig. 1. If the forward states are taken out, a regular RNN with a reversed time axis results. With both time directions taken care of in the same network, input information in the past and the future of the currently evaluated time frame can directly be used to minimize the objective function without the need for delays to include future information, as for the regular unidirectional RNN discussed above.

2) Training: The BRNN can principally be trained with the same algorithms as a regular unidirectional RNN because there are no interactions between the two types of state neurons and, therefore, the network can be unfolded into a general feedforward network. However, if, for example, any form of back-propagation through time (BPTT) is used, the forward and backward pass procedure is slightly more complicated because the update of state and output neurons can no longer be done one at a time. If BPTT is used, the forward and backward passes over the unfolded BRNN over time are done almost in the same way as for a regular MLP. Some special treatment is necessary only at the beginning and the end of the training data: the forward state inputs at t = 1 and the backward state inputs at t = T are not known and are here set arbitrarily to a fixed value (0.5).

IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 45, NO. 11, NOVEMBER 1997

C. Experiments and Results

1) Experiments with Artificial Data: The network weights are initialized with small random values drawn from the uniform distribution, except the output biases, which are set so that the corresponding output gives the prior average of the output data in case of zero input activation.

For the regression experiments, the networks use the tanh activation function and are trained to minimize the mean-squared-error objective function. For type "MERGE," the arithmetic mean of the network outputs of "RNN-FOR" and "RNN-BACK" is taken, which assumes them to be independent, as discussed above for the linear opinion pool.

For the classification experiments, the
output layer uses the "softmax" output function [4] so that outputs add up to one and can be interpreted as probabilities. As commonly used for ANN's to be trained as classifiers, the cross-entropy objective function is used as the optimization criterion. Because the outputs are probabilities assumed to be generated by independent events, for type "MERGE," the normalized geometric mean (logarithmic opinion pool) of the network outputs of "RNN-FOR" and "RNN-BACK" is taken.

c) Results: The results for the regression and the classification experiments averaged over 100 training/evaluation runs can be seen in Figs. 4 and 5, respectively. For the regression task, the mean squared error depending on the shift of the output data in positive time direction seen from the time axis of the network is shown. For the classification task, the recognition rate, instead of the mean value of the objective function (which would be the mean cross-entropy), is shown because it is a more familiar measure to characterize results of classification experiments.

Fig. 4. Averaged results (100 runs) for the regression experiment on artificial data over different shifts of the output data with respect to the input data in future direction (viewed from the time axis of the corresponding network) for several structures.

Several interesting properties of RNN's in general can be directly seen from these figures. The minimum (maximum) for the regression (classification) task should be at 20 frames delay for the forward RNN and at 10 frames delay for the backward RNN because at those points, all information for a perfect regression (classification) has been fed into the network. Neither is the case because the modeling power of the networks given by the structure and the number of free parameters is not sufficient for the optimal solution.
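The bidirectional structure described in this section (forward states running over positive time, backward states over negative time, outputs combining both) can be sketched in a few lines of NumPy. This is an illustrative toy forward pass, not the paper's implementation: the simple tanh state units, the particular weight matrices, and their names are assumptions made for the sketch.

```python
import numpy as np

def brnn_forward(x_seq, Wf, Vf, Wb, Vb, Uf, Ub):
    """Toy BRNN forward pass: forward states run t = 1..T, backward
    states run t = T..1, and each output combines both, so every
    output can depend on the whole input sequence."""
    T = len(x_seq)
    H = Wf.shape[0]
    hf = np.zeros((T, H))
    hb = np.zeros((T, H))
    state = np.full(H, 0.5)              # arbitrary fixed boundary state
    for t in range(T):                   # positive time direction
        state = np.tanh(Wf @ state + Vf @ x_seq[t])
        hf[t] = state
    state = np.full(H, 0.5)              # arbitrary fixed boundary state
    for t in reversed(range(T)):         # negative time direction
        state = np.tanh(Wb @ state + Vb @ x_seq[t])
        hb[t] = state
    # output at time t sees x_1..x_t (forward) and x_t..x_T (backward)
    return np.stack([Uf @ hf[t] + Ub @ hb[t] for t in range(T)])
```

Because the backward state at the first time step depends on every later input, even the first output reacts to a change at the end of the sequence, which is exactly the property a unidirectional RNN can only approximate with an output delay.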
Instead, the single time-direction networks try to make a tradeoff between "remembering" the past input information, which is useful for regression (classification), and "knowledge combining" of currently available input information. This results in an optimal delay of one (two) frame for the forward RNN and five (six) frames for the backward RNN. The optimum delay is larger for the backward RNN because the artificially created correlations in the training data are not symmetrical, with the important information for regression (classification) being twice as dense on the left side as on the right side of each frame. In the case of the backward RNN, the time series is evaluated from right to left, with the denser information coming up later. Because the denser information can be evaluated more easily (fewer parameters are necessary for a contribution to the objective function minimization), the optimal delay is larger for the backward RNN. If the delay is so large that almost no important information can be saved over time, the network converges to the best possible solution based only on prior information. This can be seen for the classification task with the backward RNN, which converges to 59% (prior of class 0) for more than 15 frames delay.
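The "MERGE" combination of the two single-direction networks discussed here follows the opinion-pooling rules introduced earlier: arithmetic averaging for regression (linear opinion pool) and the normalized geometric mean for classification (logarithmic opinion pool). A minimal sketch, with function names chosen for illustration:

```python
import numpy as np

def linear_pool(y_fwd, y_bwd):
    """Linear opinion pool: arithmetic average of two experts'
    outputs (the merging rule for the regression experiments)."""
    return 0.5 * (y_fwd + y_bwd)

def log_pool(p_fwd, p_bwd):
    """Logarithmic opinion pool: normalized geometric mean of two
    class-probability vectors, i.e. arithmetic averaging in the
    log domain (the merging rule for the classification experiments)."""
    g = np.sqrt(p_fwd * p_bwd)           # element-wise geometric mean
    return g / g.sum()                   # renormalize to a distribution
```

Both rules implicitly assume the two experts are independent; as noted above, two networks trained on the same data are not, which is one motivation for training a single bidirectional network instead.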
Another sign for the tradeoff between“remembering”and “knowledge combining”is the variation in the standard devia-tion of the results,which is only shown for the backward RNN in the classification task.In areas where both mechanisms could be useful(a3to17frame shift),different local minima of the objective function correspond to a certain amount to either one of these mechanisms,which results in larger fluctuations of the results than in areas where“remembering”is not very useful(2to10)are,in almost all cases,better than with only one network.This is no surprise because besides the use of more useful input information,the number of free parameters for the model doubled.For the BRNN,it does not make sense to delay the output data because the structure is already designed to cope with all available input information on both sides of the currently evaluated time point.Therefore,the experiments for the BRNN are only run forSHIFT.For the regression and classifica-tion tasks tested here,the BRNN clearly performs better than the network“MERGE”built out of the single time-direction networks“RNN-FOR”and“RNN-BACK,”with a comparable number of total free parameters.2)Experiments with Real Data:The goal of the experi-ments with real data is to compare different ANN structures2678IEEE TRANSACTIONS ON SIGNAL PROCESSING,VOL.45,NO.11,NOVEMBER1997Fig.5.Averaged results for the classification experiment on artificial data.for the classification of phonemes from the TIMIT speech database.Several regular MLP’s and recurrent neural network architectures,which make use of different amounts of acoustic context,are tested here.a)Description of Data:The TIMIT phoneme database is a well-established database consisting of6300sentences spoken by630speakers(ten sentences per speaker).Following official TIMIT recommendations,two of the sentences(which are the same for every speaker)are not included in our experiments,and the remaining data set is divided into two sets:1)the training data set 
consisting of3696sentences from462speakers and2)the test data set consisting of1344 sentences from168speakers.The TIMIT database provides hand segmentation of each sentence in terms of phonemes and a phonemic label for every segment out of a pool of61 phonemes.This gives142910phoneme segments for training and51681for testing.In our experiments,every sentence is transformed into a vector sequence using three levels of feature extraction. First,features are extracted every frame to represent the raw waveform in a compressed form.Then,with the knowledge of the boundary locations from the corresponding labelfiles, segment features are extracted to map the information from an arbitrary length segment to afixed-dimensional vector.A third transformation is applied to the segment feature vectors to make them suitable as inputs to a neural net.These three steps are briefly described below.1)Frame Feature Extraction:As frame features,12reg-ular MFCC’s(from24mel-space frequency bands)plus the log-energy are extracted every10ms with a25.6-msHamming window and a preemphasis of0.97.This is a commonly used feature extraction procedure for speech signals at the frame level[17].2)Segment Feature Extraction:From the frame fea-tures,the segment features are extracted by dividing the segment in time intofive equally spaced regions and computing the area under the curve in each region, with the function values between the data points linearly interpolated.This is done separately for each of the 13frame features.The duration of the segment is used as an additional segment feature.This results in a66-dimensional segment feature vector.3)Neural Network Preprocessing:Although ANN’s canprincipally handle any form of input distributions,we have found in our experiments that the best results are achieved with Gaussian input distributions,which matches the experiences from[12].To generate an “almost-Gaussian distribution,”the inputs arefirst nor-malized to zero mean and unit variance on a 
sentence basis, and then every feature of a given channel² is quantized using a scalar quantizer having 256 reconstruction levels (1 byte). The scalar quantizer is designed to maximize the entropy of the channel for the whole training data. The maximum-entropy scalar quantizer can easily be designed for each channel by arranging the channel points in ascending order according to their feature values and putting (almost) an equal number of channel points in each quantization cell. For presentation to the network, the byte-coded value is remapped to a real input value.

² Here, each vector has a dimensionality of 66. The temporal sequence of each component (or feature) of this vector defines one channel. Thus, we have here 66 channels.

TABLE II. TIMIT phoneme classification results for full training and test data sets with 13000 parameters.

For some applications, it is necessary to estimate the conditional posterior probability of a class at a certain time point. To estimate the posterior probability of a complete class sequence c_1^T given the input sequence x_1^T, we decompose the sequence posterior probability as

  P(c_1^T | x_1^T) = prod_{t=1..T} P(c_t | c_1^{t-1}, x_1^T)
                   = prod_{t=1..T} P(c_t | c_{t+1}^T, x_1^T)

with the backward posterior probability P(c_t | c_{t+1}^T, x_1^T) and the forward posterior probability P(c_t | c_1^{t-1}, x_1^T) (which are the probability terms in the products). The estimates for these probabilities can then be combined by using the formulas above to estimate the full conditional probability of the sequence. It should be noted

Fig. 6. Modified bidirectional recurrent neural network structure shown here with extensions for the forward posterior probability estimation.
that the forward and the backward posterior probabilities are exactly equal, provided the probability estimator is perfect. However, if neural networks are used as probability estimators, this will rarely be the case, because different architectures or different local minima of the objective function to be minimized correspond to estimators of different performance. It might therefore be useful to combine several estimators to get a better estimate of the quantity of interest using the methods of the previous section. Two candidates that could be merged here are the forward and the backward estimates of the sequence posterior probability.

B. Modified Bidirectional Recurrent Neural Networks

A slightly modified BRNN structure can efficiently be used to estimate conditional probabilities of the kind P(c_t | c_1^{t-1}, x_1^T), which are conditioned on the discrete class history and on continuous dimensions of the whole input vector sequence. To make the BRNN suitable for estimating such probabilities, two changes are necessary. First, instead of connecting the forward and backward states to the current output states, they are connected to the next and previous output states, respectively, and the inputs are directly connected to the outputs. Second, if in the resulting structure the first part of the class sequence is known, it can be used to make predictions. This is exactly what is required to estimate the forward posterior probability. Fig. 6 illustrates this change of the original BRNN architecture. Cutting the input connections to the forward states instead of the backward states gives the architecture for estimating the backward posterior probability. Theoretically, all discrete and continuous inputs that are necessary to estimate the probability are still accessible for a contribution to the prediction. During training, the bidirectional structure can adapt to the best possible use of the input information, as opposed to structures that do not provide part of the input information because of the limited size of the input windows (e.g., in the MLP and TDNN) or because of one-sided windows (unidirectional RNN).

TABLE III. Classification results for full TIMIT training and test data with 61 (39) symbols.

C. Experiments and
Results

1) Experiments: Experiments are performed using the full TIMIT data set. To include the output (target) class information, the original 66-dimensional feature vectors are extended to 72 dimensions. In the first six dimensions, the corresponding output class is coded in a binary format (matching the binary [0, 1] range of the activation function). The forward (backward) modified BRNN has 64 (32) forward and 32 (64) backward states. Additionally, 64 hidden neurons are implemented before the output layer. This results in a forward (backward) modified BRNN structure with 26333 weights. These two structures, as well as their combination, merged as a linear and a logarithmic opinion pool, are evaluated for phoneme classification on the test data.

2) Results: The results for the phoneme classification task are shown in Table III. It can be seen that the combination of the forward and backward modified BRNN structures results in much better performance than the individual structures. This shows that the two structures, even though they are trained on the same training data set to compute the same probability […] sequence, and that it does not provide a class sequence with the highest probability. For this, all possible class sequences have to be searched to get the most probable class sequence (which is a procedure that has to be followed if one is interested in a problem like continuous speech recognition).
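The linear and logarithmic opinion pools used above to merge the forward and backward estimates can be sketched as follows. This is a generic illustration with uniform weights and invented function names, not code from the paper:

```python
import numpy as np

def linear_pool(estimates, weights=None):
    """Linear opinion pool: weighted arithmetic mean of class posteriors."""
    P = np.asarray(estimates, dtype=float)
    w = np.full(len(P), 1.0 / len(P)) if weights is None else np.asarray(weights)
    return w @ P

def log_pool(estimates, weights=None):
    """Logarithmic opinion pool: weighted geometric mean, renormalized."""
    P = np.asarray(estimates, dtype=float)
    w = np.full(len(P), 1.0 / len(P)) if weights is None else np.asarray(weights)
    g = np.exp(w @ np.log(P + 1e-12))   # small epsilon guards log(0)
    return g / g.sum()

# two posterior estimates over three classes, e.g. from the forward and
# backward modified structures
p_fwd = np.array([0.7, 0.2, 0.1])
p_bwd = np.array([0.5, 0.4, 0.1])
print(linear_pool([p_fwd, p_bwd]))   # -> approximately [0.6 0.3 0.1]
```

The logarithmic pool rewards estimates that agree: a class to which any single estimator assigns near-zero probability is effectively vetoed, whereas the linear pool only averages it down.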
In the experiments reported in this section, we have used the class sequence provided by the TIMIT database. Therefore, the context on the (right or left) output side is known and is correct.

IV. DISCUSSION AND CONCLUSION

In the first part of this paper, a simple extension to a regular recurrent neural network structure has been presented, which makes it possible to train the network in both time directions simultaneously. Because the network concentrates on minimizing the objective function for both time directions simultaneously, there is no need to worry about how to merge outputs from two separate networks. There is also no need to search for an "optimal delay" to minimize the objective function in a given data/network structure combination, because all future and past information around the currently evaluated time point is theoretically available and does not depend on a predefined delay parameter. Through a series of extensive experiments, it has been shown that the BRNN structure leads to better results than the other ANN structures. In all these comparisons, the number of free parameters has been kept approximately the same. The training time for the BRNN is therefore about the same as for the other RNNs.
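The core idea summarized above, one state sequence running forward in time and one backward, with every output seeing the whole input, can be sketched as a bare-bones forward pass. This is an illustrative reconstruction with made-up shapes and parameter names, not the paper's exact parameterization:

```python
import numpy as np

def brnn_forward(X, Vf, Wf, Vb, Wb, Of, Ob, bo):
    """Minimal BRNN forward pass: forward states carry past context,
    backward states carry future context, and the per-frame output
    combines both, so every prediction sees the whole input sequence."""
    T = X.shape[0]
    H = Wf.shape[0]
    Hf = np.zeros((T, H))
    Hb = np.zeros((T, H))
    for t in range(T):                       # forward in time
        prev = Hf[t - 1] if t > 0 else np.zeros(H)
        Hf[t] = np.tanh(X[t] @ Vf.T + prev @ Wf.T)
    for t in reversed(range(T)):             # backward in time
        nxt = Hb[t + 1] if t < T - 1 else np.zeros(H)
        Hb[t] = np.tanh(X[t] @ Vb.T + nxt @ Wb.T)
    logits = Hf @ Of.T + Hb @ Ob.T + bo
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)  # per-frame class posteriors

rng = np.random.default_rng(0)
T, D, H, K = 7, 13, 16, 5                    # frames, features, states, classes
X = rng.standard_normal((T, D))
P = brnn_forward(X,
                 Vf=rng.standard_normal((H, D)), Wf=rng.standard_normal((H, H)),
                 Vb=rng.standard_normal((H, D)), Wb=rng.standard_normal((H, H)),
                 Of=rng.standard_normal((K, H)), Ob=rng.standard_normal((K, H)),
                 bo=np.zeros(K))
print(P.shape)                               # (7, 5)
```

Training (backpropagation through time over both state sequences) is omitted; the point is only that, unlike a unidirectional RNN with a delay, the output at each frame depends on the entire input sequence with no delay parameter to tune.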
Since the search for an optimal delay (an additional search parameter during development) is not necessary, BRNNs can provide, in comparison to the other RNNs investigated in this paper, faster development of real applications with better results.

In the second part of this paper, we have shown how to use slightly modified bidirectional recurrent neural nets for the estimation of the conditional probability of symbol sequences without making any explicit assumption about the shape of the output probability distribution. It should be noted that the modified BRNN structure is only a tool to estimate the conditional probability of a given class sequence; it does not provide the class sequence with the highest probability. For this, all possible class sequences have to be searched to get the most probable class sequence. We are currently working on designing an efficient search engine, which will use only ANNs to find the most probable class sequence.

REFERENCES
[1] J. O. Berger, Statistical Decision Theory and Bayesian Analysis. Berlin, Germany: Springer-Verlag, 1985.
[2] C. M. Bishop, Neural Networks for Pattern Recognition. Oxford, U.K.: Clarendon, 1995.
[3] H. Bourlard and C. Wellekens, "Links between Markov models and multilayer perceptrons," IEEE Trans. Pattern Anal. Machine Intell., vol. 12, pp. 1167–1178, Dec. 1990.
[4] J. S. Bridle, "Probabilistic interpretation of feed-forward classification network outputs, with relationships to statistical pattern recognition," in Neurocomputing: Algorithms, Architectures and Applications, F. Fougelman-Soulie and J. Herault, Eds. Berlin, Germany: Springer-Verlag, 1989, NATO ASI Series, vol. F68, pp. 227–236.
[5] C. L. Giles, G. M. Kuhn, and R. J. Williams, "Dynamic recurrent neural networks: Theory and applications," IEEE Trans. Neural Networks, vol. 5, pp. 153–156, Apr. 1994.
[6] H. Gish, "A probabilistic approach to the understanding and training of neural network classifiers," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., 1990, pp. 1361–1364.
[7] R. A. Jacobs, "Methods for combining experts' probability
assessments," Neural Comput., vol. 7, no. 5, pp. 867–888, 1995.
[8] B. A. Pearlmutter, "Learning state space trajectories in recurrent neural networks," Neural Comput., vol. 1, pp. 263–269, 1989.
[9] M. D. Richard and R. P. Lippman, "Neural network classifiers estimate Bayesian a posteriori probabilities," Neural Comput., vol. 3, no. 4, pp. 461–483, 1991.
[10] M. Riedmiller and H. Braun, "A direct adaptive method for faster backpropagation learning: The RPROP algorithm," in Proc. IEEE Int. Conf. Neural Networks, 1993, pp. 586–591.
[11] T. Robinson, "Several improvements to a recurrent error propagation network phone recognition system," Cambridge Univ. Eng. Dept. Tech. Rep. CUED/F-INFENG/TR82, Sept. 1991.
[12] A. J. Robinson, "An application of recurrent neural nets to phone probability estimation," IEEE Trans. Neural Networks, vol. 5, pp. 298–305, Apr. 1994.
[13] T. Robinson, M. Hochberg, and S. Renals, "The use of recurrent neural networks in continuous speech recognition," in Automatic Speech Recognition: Advanced Topics, C. H. Lee, F. K. Soong, and K. K. Paliwal, Eds. Boston, MA: Kluwer, 1996, pp. 233–258.
[14] ——, "Improved phone modeling with recurrent neural networks," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., vol. 1, 1994, pp. 37–40.
[15] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning internal representations by error backpropagation," in Parallel Distributed Processing, vol. 1, D. E. Rumelhart and J. L. McClelland, Eds. Cambridge, MA: MIT Press, 1986, pp. 318–362.
[16] A. Waibel, T. Hanazawa, G. Hinton, K. Shikano, and K. J. Lang, "Phoneme recognition using time-delay neural networks," IEEE Trans. Acoust., Speech, Signal Processing, vol. 37, pp. 328–339, Mar. 1989.
[17] S. Young, "A review of large vocabulary speech recognition," IEEE Signal Processing Mag., vol. 15, pp. 45–57, May 1996.

Mike Schuster received the M.Sc. degree in electronic engineering in 1993 from the Gerhard Mercator University, Duisburg, Germany. Currently, he is also working toward the Ph.D. degree at the Nara Institute of Technology, Nara, Japan. After doing some research in fiber optics at the University of Tokyo, Tokyo, Japan, and some research in
gesture recognition in Duisburg, he started at Advanced Telecommunication Research (ATR), Kyoto, Japan, to work on speech recognition. His research interests include neural networks and stochastic modeling in general, Bayesian approaches, information theory, and coding.

Kuldip K. Paliwal (M'89) is a Professor and Chair of Communication/Information Engineering at Griffith University, Brisbane, Australia. He has worked at a number of organizations, including the Tata Institute of Fundamental Research, Bombay, India, the Norwegian Institute of Technology, Trondheim, Norway, the University of Keele, U.K., AT&T Bell Laboratories, Murray Hill, NJ, and Advanced Telecommunication Research (ATR) Laboratories, Kyoto, Japan. He has co-edited two books: Speech Coding and Synthesis (New York: Elsevier, 1995) and Speech and Speaker Recognition: Advanced Topics (Boston, MA: Kluwer, 1996). His current research interests include speech processing, image coding, and neural networks. Dr. Paliwal received the 1995 IEEE Signal Processing Society Senior Award. He is an Associate Editor of the IEEE TRANSACTIONS ON SPEECH AND AUDIO PROCESSING.

New Century Integrated Course for Colleges and Universities, Book One, Teacher's Book Notes: Unit 12, Gender Bias in Language


Unit 12 Gender Bias in Language

Contents
Section One Pre-reading Activities
  I. Reading aloud
  II. Cultural information
  III. Audiovisual supplements
Section Two Global Reading
  I. Text analysis
  II. Structural analysis
Section Three Detailed Reading
  Text I
Section Four Consolidation Activities
  I. Vocabulary Analysis
  II. Grammar Exercises
  III. Translation exercises
  V. Oral activities
  VI. Writing Practice
  VII. Listening Exercises
Section Five Further Enhancement
  I. Text II
  II. Memorable Quotes

Section One Pre-reading Activities

I. Reading aloud
Read the following sentences aloud, paying special attention to incomplete plosives and liaison. A plosive which has no audible release is put in brackets.
1. Yet it is often misunderstood and misinterpreted, for language is a very complicated mechanism with a grea(t) deal of nuance.
2. This is an example of the gender bias that exists in the English language.
3. It is at this point that Nilsen argues tha(t) the gender bias comes into play.
4. It is up to us to decide wha(t) we will allow to be used and ma(d)e proper in the area of language.

II. Cultural information
1. Why We Need an Equal Rights Amendment
"Why We Need an ERA: The Gender Gap Runs Deep in American Law," Martha Burk and Eleanor Smeal
Why is the amendment needed? Twenty-three countries, including Sri Lanka and Moldova, have smaller gender gaps in education, politics and health than the United States, according to the World Economic Forum. We are 68th in the world in women's participation in national legislatures. On average, a woman working full time and year-round still makes only 77 cents to a man's dollar. Women hold 98 percent of the low-paying "women's" jobs and fewer than 15 percent of the board seats at major corporations.
Because their private pensions, if they have them at all, are lower, and because Social Security puts working women at a disadvantage and grants no credit for years spent at home caring for children or aging parents, three-quarters of the elderly in poverty are women. And in every state except Montana, women still pay higher rates than similarly situated men for almost all kinds of insurance. All that could change if we put equal rights for women in our Constitution.

2. Gender bias in education
Gender bias in education is an insidious problem that causes very few people to stand up and take notice. The victims of this bias have been trained through years of schooling to be silent and passive, and are therefore unwilling to stand up and make noise about the unfair treatment they are receiving. Girls and boys today are receiving separate and unequal educations due to the gender socialization that takes place in our schools and due to the sexist hidden curriculum students are faced with every day. Unless teachers are made aware of the gender-role socialization and the biased messages they are unintentionally imparting to students every day, and until teachers are provided with the methods and resources necessary to eliminate gender bias in their classrooms, girls will continue to receive an inequitable education.
Sadker, D., and Sadker, M. (1994). Failing at Fairness: How Our Schools Cheat Girls. Toronto, ON: Simon & Schuster Inc.

III. Audiovisual supplements
Watch a video clip and answer the following questions.
1. What happened to the woman?
2. What does the defense counsel mean in the last sentence?

Answers to the Questions:
1. She was hit by a male doctor when she was slowly pulling out and got severely injured in her neck. But she doesn't have insurance, so she's in debt now.
2.
He is trying to convince the jury that a male ER (emergency room) doctor could not possibly lose control of his car, whereas a woman facing as many problems in her life as Erin is quite dangerous when she is driving. The defense counsel's words clearly reveal his gender discrimination.

Video Script:
Erin: I was pulling out real slow and out of nowhere his Jaguar comes racing around the corner like a bat out of hell ... They took some bone from my hip and put it in my neck. I don't have insurance, so I'm about $17,000 in debt right now ... I couldn't take painkillers 'cause they made me too groggy to care for my kids ... Matthew's eight, and Katie's almost six ... and Beth's just nine months ... I just wanna be a good mom, a nice person, a decent citizen. I just wanna take good care of my kids, you know?
Ed (Prosecuting Counsel): Yeah. I know.
Defence Counsel: Seventeen thousand in debt? Is your ex-husband helping you?
Erin: Which one?
Defence Counsel: There's more than one?
Erin: Yeah. There're two. Why?
Defence Counsel: So, you must have been feeling pretty desperate that afternoon.
Erin: What's your point?
Defence Counsel: Broke, three kids, no job. A doctor in a Jaguar? Must be a pretty good meal ticket.
Ed: Objection!
Erin: What? He hit me!
Defence Counsel: So you say.
Erin: He came tearing around the corner out of control.
Defence Counsel: An ER doctor who spends his days saving lives was the one out of control?

Section Two Global Reading

I. Text analysis
1. Which two opinions are presented in the first paragraph?
There are those who believe that the language that we use every day is biased in and of itself. Then there are those who feel that language is a reflection of the prejudices that people have within themselves.
2. Which sentences in the conclusion show the writer's attitude?
In the last paragraph, we find these sentences: "It is necessary for people to make the proper adjustments internally to use appropriate language to effectively include both genders. We qualify language.
It is up to us to decide what we will allow to be used and made proper in the area of language." Evidently, they denote the writer's attitude toward what we should do about gender bias in language.

II. Structural analysis
1. What type of writing is the text?
This text is an expository essay on gender bias in language.
2. What's the main strategy used to develop this expository essay?
The text is mainly developed by means of exemplification. Examples are abundantly used in Paragraphs 2-6.

Section Three Detailed Reading

Text I: Gender Bias in Language

1. Language is a very powerful element. It is the most common method of communication. Yet it is often misunderstood and misinterpreted, for language is a very complicated mechanism with a great deal of nuance. There are times when, in conversation with another individual, we must take into account the person's linguistic genealogy. There are people who use language that would be considered prejudicial or biased in use. But the question that is raised is in regard to language usage: Is language the cause of the bias, or is it reflective of the preexisting bias that the user holds? There are those who believe that the language that we use in day-to-day conversation is biased in and of itself. They feel that the term "mailman," for example, is one that excludes women mail carriers. Then there are those who feel that language is a reflection of the prejudices that people have within themselves. That is to say, the words that people choose to use in conversation denote the bias that they harbor within their own existence.

2. There are words in the English language that exist or have existed (some of them have changed with the new wave of "political correctness" coming about) that have inherently been sexually biased against women.
For example, the person who investigates reported complaints (as from consumers or students), reports findings, and helps to achieve fair and impartial settlements is an "ombudsman" (Merriam-Webster Dictionary), but an "ombudsperson" here at Indiana State University. This is an example of the gender bias that exists in the English language. The language is arranged so that men are identified with exalted positions, and women are identified with more service-oriented positions in which they are being dominated and instructed by men. So the language used to convey this type of male supremacy generally reflects the honored position of the male and the subservience of the female. Even in relationships, the male in the home is often referred to as the "man of the house," even if it is a 4-year-old child. It is highly insulting to say that a 4-year-old male, based solely on his gender, is more qualified and capable of conducting the business and affairs of the home than his possibly well-educated, highly intellectual mother. There is a definite disparity in that situation.

3. In American culture, a woman is valued for the attractiveness of her body, while a man is valued for his physical strength and his achievements. Even in the example of word pairs the bias is evident. The masculine word is put before the feminine word, as in the examples of Mr. and Mrs., his and hers, boys and girls, men and women, kings and queens, brothers and sisters, guys and dolls, and host and hostess. This shows that the usage of many of the English words also contributes to the bias present in the English language.

4. Alleen Pace Nilsen notes that there are instances when women are seen as passive while men are active and bring things into being. She uses the example of the wedding ceremony.
In the beginning of the ceremony, the father is asked who gives the bride away, and he answers, "I do." It is at this point that Nilsen argues that the gender bias comes into play. The traditional concept of the bride as something to be handed from one man (the father) to another man (the husband-to-be) is perpetuated. Another example is in the instance of sexual relationships. Women become brides, while men wed women. The man takes away a woman's virginity, and a woman loses her virginity. This denotes her inability, apparently due to her gender, to hold on to something that is a part of her, thus enforcing the man's ability and right to claim something that is not his.

5. To be a man, according to some linguistic differences, would be considered an honor. To be endowed by genetics with the encoding of a male would be as having been shown grace, unmerited favor. There are far greater positive connotations connected with being a man than with being a woman. Nilsen yields the example of "shrew" and "shrewd." The word "shrew" is taken from the name of a small but especially vicious animal; however, in Nilsen's dictionary, a "shrew" was identified as an "ill-tempered, scolding woman." However, the word "shrewd," which comes from the same root, was defined as "marked by clever discerning awareness." It was noted in her dictionary as a shrewd businessman. It is also commonplace not to scold little girls for being "tomboys" but to scoff at little boys who play with dolls or ride girls' bicycles.

6. In the conversations that come up between friends, you sometimes hear the words "babe," "broad," and "chick." These are words that are used in reference to or directed toward women. It is certainly the person's right to use these words to refer to women, but why use them when there are so many more to choose from? Language is the most powerful tool of communication and the most effective tool of communication.
It is also the most effective weapon of destruction.

7. Although there are biases that exist in the English language, there has been considerable change toward recognizing these biases and making the necessary changes formally so that they will be implemented socially. It is necessary for people to make the proper adjustments internally to use appropriate language to effectively include both genders. We qualify language. It is up to us to decide what we will allow to be used and made proper in the area of language.

Paragraph 1

Questions:
1. What does the writer think of language?
The author thinks that language is very powerful and the most common method of communication, but is often misunderstood and misinterpreted, for it is a very complicated system of symbols with plenty of subtle differences.

Words and Expressions
1. bias:
1) n. an opinion or feeling that strongly favors one side in an argument or an item in a group or series; predisposition; prejudice
e.g. This university has a bias towards the sciences. / Students were evaluated without bias.
2) vt. to unfairly influence attitudes, choices, or decisions
e.g. Several factors could have biased the results of the study.
Collocation: bias against/towards/in favor of
e.g. It's clear that the company has a bias against women and minorities.
Phrase: gender bias: sex prejudice; having a bias towards the male and against the female
e.g. Gender bias is still quite common in work and payment.

2. nuance: n. a slight, delicate or subtle difference in color, appearance, meaning, feeling, etc.
e.g. Language teachers should be able to react to nuances of meaning of common words. / He was aware of every nuance in her voice.
Synonym: subtlety
Collocation: nuance of

3. prejudicial: adj. causing harm to sb's rights, interests, etc.; having a bad effect on sth.
e.g.
These developments are prejudicial to the company's future. / What she said and did was prejudicial to her own rights and interests.
Synonyms: damaging, detrimental, prejudicious
Derivation: prejudice: n.

4. in/with regard to: in connection with; concerning
e.g. I have nothing to say in regard to your complaints. / She is very sensitive in regard to her family background. / I refuted him in regard to his injustice.

5. reflective: adj. (of a person, mood, etc.) thoughtful; (of a surface) reflecting light
e.g. She is in a reflective mood. / These are reflective number plates.
Derivation: reflectiveness: n.

6. denote: vt. be the name, sign or symbol of; refer to; represent or be a sign of something
e.g. What does the word "curriculum" denote that "course" does not? / Crosses on the map denote villages.
Derivations: denotative: adj.; denotation: n.
Synonyms: connote, indicate

7. harbor: vt.
1) keep bad thoughts, fears, or hopes in your mind for a long time
e.g. She began to harbor doubts over the wisdom of their journey.
2) contain something, especially something hidden and dangerous
e.g. Sinks and draining boards can harbor germs.
3) protect and hide criminals that the police are searching for
e.g. You may be punished if you harbor an escaped criminal or a spy.
Derivation: harbor: n.

Sentences
1. "... language is a very complicated mechanism with a great deal of nuance." (Paragraph 1)
Explanation: language is a very complicated system of communication; even slight variations in the pitch, tone, and intensity of the voice and in the choice of words can express a great deal of subtle shades of meaning.
2. "... we must take into account the person's linguistic genealogy." (Paragraph 1)
Paraphrase: we must consider the person's long-standing conventions in language use.
Translation: 我们必须将这人的语言谱系学考虑在内。 (We must take the person's linguistic genealogy into account.)



Collaborative filtering

Collaborative filtering (协同过滤) is a novel technique.

It was proposed as early as 1989, but did not see industrial application until the 21st century.

Representative applications abroad include Last.fm, Digg, and others.

Recently, because of my graduation thesis, I started researching this topic. After more than a week of reading papers and related material, I decided to write up a summary of what this period of collecting material has produced.

Microsoft's 1998 paper on collaborative filtering [1] divided collaborative filtering into two schools: Memory-Based and Model-Based.

Memory-Based algorithms generate recommendations from the users' interaction records in the system. They come mainly in two variants. User-Based methods exploit the similarity between users to form a neighborhood of nearest neighbors; when a recommendation is needed, the items with the highest recommendation scores among those neighbors are recommended. Item-Based methods instead work from the relationships between items and make recommendations item by item; even this fairly basic method yields respectable results.

Experimental results also show that the Item-Based approach is more effective than the User-Based one [2].
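A minimal sketch of the Item-Based idea in the terms above: precompute item-to-item cosine similarities from the user-item rating matrix, then score a user's unrated items by similarity-weighted sums of that user's own ratings. The matrix and function names are illustrative, not taken from any of the cited papers:

```python
import numpy as np

def item_similarity(R):
    """Cosine similarity between the item columns of a user-item
    rating matrix R (users x items, 0 = unrated)."""
    norms = np.linalg.norm(R, axis=0)
    norms[norms == 0] = 1.0               # avoid division by zero
    return (R.T @ R) / np.outer(norms, norms)

def recommend(R, user, k=2):
    """Score unrated items for `user` by similarity-weighted sums of
    the user's existing ratings; return the top-k item indices."""
    S = item_similarity(R)
    scores = S @ R[user]
    scores[R[user] > 0] = -np.inf         # do not re-recommend rated items
    return np.argsort(scores)[::-1][:k]

# toy matrix: 4 users x 5 items
R = np.array([[5, 4, 0, 1, 0],
              [4, 5, 1, 0, 0],
              [0, 1, 5, 4, 5],
              [1, 0, 4, 5, 4]], dtype=float)
print(recommend(R, 0))                    # -> [2 4]
```

For sparse real-world matrices one would also adjust for per-user rating means and restrict similarities to co-rated entries; this sketch ignores those refinements.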

Model-Based algorithms, on the other hand, use modeling algorithms from machine learning: the model is precomputed offline, so that results can be produced quickly online.

The main algorithms used include Bayesian belief nets, clustering, and latent semantic models; in recent years, CF algorithms using SVMs and the like have also appeared.
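As an illustration of the Model-Based family (closest in spirit to the latent semantic methods just mentioned), a toy latent-factor model can be fitted offline by stochastic gradient descent, after which online scoring is just a dot product. Everything here, shapes, hyperparameters, and names, is an assumption for the sketch, not any specific published algorithm:

```python
import numpy as np

def factorize(R, k=2, steps=50_000, lr=0.01, reg=0.02, seed=0):
    """Offline model building: fit R ~ U @ V.T on the observed
    (nonzero) entries by SGD with L2 regularization."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = 0.1 * rng.standard_normal((n_users, k))
    V = 0.1 * rng.standard_normal((n_items, k))
    obs = np.argwhere(R > 0)              # zeros are treated as "unrated"
    for _ in range(steps):
        u, i = obs[rng.integers(len(obs))]
        err = R[u, i] - U[u] @ V[i]
        U[u] += lr * (err * V[i] - reg * U[u])
        V[i] += lr * (err * U[u] - reg * V[i])
    return U, V

R = np.array([[5, 4, 0, 1],
              [4, 5, 1, 0],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)
U, V = factorize(R)
pred = U @ V.T                            # online scoring: one dot product
```

The expensive fitting loop runs offline; at serving time a user's scores for all items are a single matrix-vector product, which is exactly the offline/online split described above.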

In recent years a further category has been proposed, content-based, which analyzes the content of the items themselves in order to make recommendations.

At present, some of the better application algorithms mix several of the above methods.

Google News [3], for example, uses an algorithm that mixes the Memory-Based and Model-Based approaches in its system.

The Google paper describes how to build a large-scale recommendation system, in which some of Google's efficient infrastructure, such as BigTable and MapReduce, is put to good use.

2023 Open Recruitment of Military Civilian Personnel, English Language and Literature: Compilation of High-Frequency Exam Questions (with Answers)


2023年度军队文职人员公开招录《英语语言文学》高频考题汇编(含答案) 学校:________ 班级:________ 姓名:________ 考号:________一、单选题(55题)1.The computer center,( )last year,is very popular among the students in this school.A.openB.openingC.having openedD.opened2.( ),he does get annoyed with her sometimes.A.Although much he likes herB.Much although he likes herC.As he likes her muchD.Much as he likes her3.I just wonder( )that makes him so excited.A.why it doesB.what he doesC.how it isD.what it is4.Australia has several different climatic regions,from warm to( )and tropical.A.temperateB.subtropicalC.humidD.continental5.The fifth-generation computers,with artificial intelligence,( )and perfected now.A.developedB.have developedC.are being developedD.will have been developed6.Which kind of animal is not the executive of Australia?( )A.EmuB.KiwiC.Duck-billed platypusD.Kangaroo7.The criterion used in IC analysis is ( )A.transformationB.conjoiningC.groupingD.substitutability8.John is reading an interesting book on evolution theory which was written by Charles Darwin,who was a British naturalist who developed a theory of evolution based on natural selection.What design feature of language is reflected in the example?( )A.CreativityB.ArbitrarinessC.DisplacementD.Duality9.Her mother is one of the representatives of( )feminism.A.vitalB.fundamentalC.radicalD.basic10.According to the maxim of ( ) suggested by Grice,one should speak truthfully.A.qualityB.mannerC.relationD.quantity11.Which of the following statements about American education is wrong?( )A.Elementary and secondary education is free and compulsoryB.More public collges,universities than private onesC.Private school fnancially supported by religious,nonreligious and private organizations,individualsD.Credits taken at community colleges are normally applicable to requirement for a four-year bachelor’s degree12.The Anglo-Saxons brought( )religion to Britain.A.ChristianB.DruidC.Roman CatholicD.Teutonic13.The Hundred Year’s War lasted from 1337 to 1453 between 
Britain and( )A.the USB.FranceC.CanadaD.Australia14.In the Canadian parliamentary system,( )holds the highest position.A.free morphemeB.The PresidentC.The Governor GeneralD.The Prime Minister15.-It was an interesting exhibition,wasn′t it?-No,it was very uninteresting.Which maxim of the Politeness Principle that above example violates?( )A.The tact maximB.The modesty maximC.The agreement maximD.The sympathy maxim16.In which day is Halloween celebrated?( )A.5 NovemberB.31 OctoberC.17 MarchD.25 December17.The Cooperative Principle is proposed by ( ).A.SaussureB.GriceC.ChomskyD.Leech18.The captain and his crews depended on the( )of navigation- the compass for orientation.A.instrumentB.deviceC.applianceD.equipment19.The heart is( )intelligent than the stomach,for they are both controlled by the brain.A.not soB.not muchC.much moreD.no more20.I don’t think it advisable that Tim( )to the job since he has no experience.A.is assignedB.will be assignedC.be assignedD.has been assigned21.Agressive courage and determination,and( )spirit is inevitable for rapid social development.A.innovativeB.freshC.novelD.original22.“X buys something from Y” and“Y sells something to X” are in a relation of ( )A.hyponymyB.gradable antonymyplementary antonymyD.converse antonymy23.Neither of the young men who had applied for a position in the university _____.A.has been acceptedB.have been acceptedC.was acceptedD.were accepted24.( ) is regarded as the“father of American literature”A.James Fenimore CooperB.Ralph Waldo EmersonC.Thomas JeffersonD.Washington Irving25.The National Day of Canada is( )A.July 1stB.June 1stC.October 1stD.July 3rd26.Although the teacher has explained to us,the meaning of the article is still( )to me.A.faintB.obscureC.ambiguousD.vague27.The passengers in missing airplane were( )dead after several months of search.A.rectifiedB.testifiedC.certifiedD.verified28.Jane Austen wrote all the following novels EXCEPT ( )A.Sense and SensibilityB.FrankensteinC.Pride and 

50th European Regional Science Association Congress, Jönköping, Sweden, 19th–23rd August, 2010

Towards an Integrated Approach for Place Brand Management

Erik Braun, PhD, senior researcher
Erasmus School of Economics, Erasmus University Rotterdam
Room H12-33, P.O. Box 1738, Rotterdam, The Netherlands
Email: braun@ese.eur.nl
Tel.: +31 10 4082740
Fax: +31 10 4089153

Sebastian Zenker, PhD, post-doctoral researcher
Institute of Marketing and Media, University of Hamburg
Welckerstrasse 8, D-20354 Hamburg, Germany
Email: zenker@placebrand.eu
Tel.: +49 40 428 38-74 99
Fax: +49 40 428 38-87 15

Abstract:
The number of cities claiming to make use of branding has been growing considerably in the last decade. Competition is one of the key drivers for cities to establish their place as a brand and promote that place to visitors, investors, companies and residents. Unfortunately, place marketers often believe that the place brand is a controllable and fully manageable communication tool. Yet a brand is by definition a network of associations in consumers' minds and is therefore based on the perceptions of the different target groups, making branding a multi-faceted subject. Furthermore, the perception of a place (brand) can differ significantly given the various target groups' diverse perspectives and interests. Hence, place branding theory as well as practice should focus more on the place brand perception of its different target audiences and develop strategies for how places can build an advantageous place-brand architecture.

Combining insights from a literature review of place-related academe and marketing academe, this paper outlines an integrated approach to place brand management called the Place Brand Centre. After reviewing the literature on place branding, brand architecture and customer-focused marketing, the paper contends that a target group-specific sub-branding strategy is central for effective place brand management of cities.
Gaps for future research and practical implications for place brand management are discussed.

Keywords: Place Branding, Place Brand Management, Place Marketing, Place Management, Urban Planning, Customer-orientated Marketing

Theme: Planning and place marketing – theoretical implications (special session)

1. Introduction

Competition among cities for tourists, investors, companies, new citizens, and most of all qualified workforce, has increased (Anholt, 2007; Hospers, 2003; Kavaratzis, 2005; Zenker, 2009). As a result, place marketers are keen on establishing the place as a brand (Braun, 2008) and promoting that place to its different target groups. Unfortunately, place marketers often believe that the place brand is a controllable and fully manageable communication tool. Yet a brand is by definition a network of associations in consumers' minds (Keller, 1993; Keller & Lehmann, 2006) and is therefore based on the perceptions of the different target groups, making branding a multi-faceted subject. Furthermore, the perception of a place (brand) can differ significantly given the various target groups' diverse perspectives and interests (Zenker et al., 2010). Hence, place branding should focus more on the place brand perception of its different target audiences and develop strategies for how places can build an advantageous place-brand architecture.

The current academic discussion shows considerable shortcomings in this respect (Grabow et al., 2006), since it mainly focuses on the explorative description of a particular city brand without distinguishing properly between target groups (e.g. De Carlo et al., 2009; Low Kim Cheng & Taylor, 2007) and lacks a convincing theoretical foundation.
Hence, the aim of this paper is to translate a conceptual framework from the brand architecture literature to the context of place brand management, taking into account the discrepancies between the place brand perceptions of different target groups.

2. Place Marketing and Branding – History and Status Quo

Initially, the broadening of the concept of marketing in the late 1960s and early 1970s under the influence of Kotler & Levy (1969) did not put place marketing on the agenda of marketing academe. In 1976, O'Leary and Iredale were the first to identify place marketing as a challenging field for the future, describing place marketing as activities "designed to create favourable dispositions and behaviour toward geographic locations" (p. 156). The first publications really dedicated to place marketing came from regional economists, geographers, and other social scientists (see for an overview: Braun, 2008), with an article by Burgess (1982) questioning the benefits of place advertisement as one of the first examples. Unfortunately, most of the publications throughout the 1980s and early 1990s were limited to the promotional aspects of places. In the early 1990s, the scope of the contributions widened and several attempts were made to develop a strategic planning framework for place marketing (e.g. Ashworth & Voogd, 1990).

It is important to note that from the early 1990s onwards, place marketing was discussed in the wider context of structural change in cities and regions (Van den Berg & Braun, 1999), arguing that marketing has become more important because of economic restructuring and city competition. Furthermore, the attempts to reimage cities have received considerable attention from place-related academe.
Paddison (1993) observed that places have adopted "targeted forms of marketing to bolster directly the process of image reconstruction" that are essentially different from the previous (planning) practice in cities.

Place marketing received another considerable push onto the agenda of marketing academe thanks to a series of books by Kotler et al. (1993; 1999; 2002) on Marketing Places. These books were important for the recognition of place marketing, but their impact should not be overstated. Even now, place marketing remains a subject on the periphery of marketing academe. A possible explanation for this only moderate attention from marketing scholars could be the nature of place marketing itself. After all, place marketing deals with numerous diverging target groups, complex and related products, as well as the different political settings in which marketing decisions are made (Van den Berg & Braun, 1999). For example, other 'family members' of the 'place marketing family', e.g. those with a single focus on tourism marketing, have received much more attention from marketing scholars.

At the start of the new millennium, the focus in the debate on place marketing shifted somewhat in the direction of another 'family member': place branding (e.g. Kavaratzis, 2008). As a matter of fact, the branding of places (and cities in particular) has gained popularity among city officials in recent years. This is illustrated by the development of city brand rankings such as the Anholt-GMI City Brands Index (Anholt, 2006) or the Saffron European City Brand Barometer (Hildreth, n.d.). Places are eager to garner positive associations in the place consumers' minds. In marketing academe, interest in the subject is waxing, albeit moderately, and it has not yet been addressed in one of the top-class marketing journals.
In comparison with destination branding, where Balakrishnan (2009) and Pike (2005) conclude that there is a paucity of academic research, the attention paid to place branding could be higher. Nevertheless, the number of interesting contributions is growing: Medway and Warnaby (2008) observe that places are being conceptualized as brands, referring to the work of Hankinson (2004) and Kavaratzis and Ashworth (2005) in particular. Recently, Iversen and Hem (2008) have discussed place umbrella brands for different geographical scales.

At this point, we argue that it is a great challenge for marketing researchers to 'translate' contemporary branding insights and methods to the context of places; and a good translation is not literal but in the spirit of the text. The first argument in support of this statement concerns the variety of place customers and their diverse needs and wants. From a theoretical point of view, the main and broadly defined target groups in place marketing and place branding are: (1) visitors; (2) residents and workers; and (3) business and industry (Kotler et al., 1993). However, the groups actually targeted in recent marketing practice are much more specific and complex. Tourists, for example, could be divided into business and leisure time visitors (Hankinson, 2005). Even more complex is the group of residents: a first distinction is between internal residents and external potential new residents. Within these groups, specific target audience segments can be identified, such as students, talents or the so-called creative class (Braun, 2008; Florida, 2004; Zenker, 2009).

As already mentioned, these target groups differ not only in their perceptions of a place but foremost in their place needs and demands. Leisure time tourists, for example, are searching for leisure time activities like shopping malls or cultural offerings; investors, however, are more interested in business topics.
Furthermore, the city's customers are usually not simply interested in a 'dot on the map'; they need a suitable environment for their purposes. So as residents search for an attractive living environment, and businesses look for a suitable business environment, the same reasoning applies to visitors as well. It is inevitable that there are potential conflicts and synergies between the needs and wants of different target groups. Therefore, brand communication for the city's target groups should be developed with these factors in mind.

A second, related argument states that places are complex products. One's location cannot be seen separately from other useful locations; hence the place offering is not a single location but a package of locations. Consequently, the product for tourists in London, for instance, overlaps to some extent with the product for the city's residents. Similar to a shopping mall, as an illustrating metaphor, a place offers a large assortment for everybody and each customer fills his or her shopping bag individually.

Third, we perceive places differently than we perceive the products of companies. Lynch (1960) already demonstrated that we receive various signals from places through buildings, public space, arts, street design, people, personal experiences or the experiences of peers. All these factors communicate something about the place and are potential key associations in the minds of the city's target audiences. This variety of intended and unintended communication of places (see also Kavaratzis, 2008) leads to a dissimilarity in the way we perceive places in comparison to commercial brands.

Finally, a translation of a marketing concept has to deal with the political and administrative environment in which these decisions are taken. Place branding is a subject of political decision-making and therefore has to do with municipal administrative organisation(s) and policy-making procedures (e.g. Braun, 2008).
This setting cannot be compared to regular business practice and thus sets the margins for place brand management. All these arguments indicate that some approaches to branding are more suitable than others.

3. Development of the Place Brand Centre

A corporate brand is the visual, verbal and behavioural expression of an organisation's unique business model, which takes place through the company's mission, core values, beliefs, communication, culture and overall design (Kavaratzis, 2009; Knox & Bickerton, 2003). Adapting this definition of a corporate brand to the context of place branding, and understanding the brand as a network of associations in consumers' minds (Keller, 1993; Keller & Lehmann, 2006), we define a Place Brand as: a network of associations in the consumers' mind based on the visual, verbal, and behavioural expression of a place, which is embodied through the aims, communication, values, and the general culture of the place's stakeholders and the overall place design. Essential to this definition is that a brand is not in reality the communicated expression or the 'place physics', but the perception of those expressions in the mind of the target group(s). These perceptions lead to brand effects such as identification (Anholt, 2007; Azevedo, 2009; Bhattacharya & Sen, 2003), satisfaction (Bruhn & Grund, 2000; Zenker, Petersen et al., 2009) or other effects like information-seeking bias, commitment and intention to stay, as shown in Figure 1.

Figure 1: The concept of place brand perception

As already mentioned, brand perception differs strongly between target groups, because of the different knowledge levels of the target audience (Brakus et al., 2009) and the different demands made of a place (e.g. Zenker, 2009).
In conformity with social identity theory, for example, the external target audience (out-group) shows a much more common and stereotypical association set with a place, while the internal target audience (in-group) has a more diverse and heterogeneous place brand perception (Tajfel & Turner, 1979; Zenker et al., 2010). An identical brand communication for both target groups would disregard the complexity of a place and probably fail. For an advanced customer-focused place brand management, a diverse brand architecture is needed to match a specific target audience with a specific place sub-brand. Unfortunately, this customer-focused view, an essential part of the general marketing discussion (Webster, 1992), is not yet common sense in the public sector (Buurma, 2001), nor in place marketing practice. However, place marketers could find strong parallels in the development of corporate marketing organization and learn how to deal with the complexity of multiple target groups.

The concept of brand architecture (Aaker, 2004; Aaker & Joachimstahler, 2000) describes hierarchical structures of brands (in the corporate context) with different strategies for multiple target groups. With the Branded House approach, a brand architecture is built with still-independent sub-brands that are (additionally) marked with the corporate umbrella brand (Petromilli et al., 2002). The aim is to build a strong overall umbrella brand with the help of the target group-specific product sub-brands. This approach is not limited to product and company brands; it could also be extended to product or company brands that include a place brand (Uggla, 2006), or fully to the place branding context (Dooley & Bowie, 2005; Iversen & Hem, 2008; Kotler & Gertner, 2002; Therkelsen & Halkier, 2008).
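As a purely illustrative sketch, the contrast between a common, stereotypical out-group association set and a heterogeneous in-group set can be made concrete by treating each respondent's brand perception as a set of associations and comparing within-group overlap. The respondents, associations and the choice of the Jaccard measure below are invented for illustration; they are not drawn from any study cited here:

```python
from itertools import combinations

def jaccard(a, b):
    """Overlap of two association sets (1.0 = identical, 0.0 = disjoint)."""
    return len(a & b) / len(a | b) if a | b else 1.0

def homogeneity(group):
    """Mean pairwise Jaccard similarity across a group's respondents."""
    pairs = list(combinations(group, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Invented association sets: the out-group shares a few stereotypes,
# while the in-group's perceptions diverge.
out_group = [
    {"wall", "techno", "history"},
    {"wall", "techno", "nightlife"},
    {"wall", "history", "nightlife"},
]
in_group = [
    {"kiez", "rent prices", "lakes"},
    {"start-ups", "museums", "traffic"},
    {"parks", "clubs", "bureaucracy"},
]

print(homogeneity(out_group), homogeneity(in_group))  # out-group scores higher
```

A higher within-group similarity for the external audience would correspond to the stereotypical out-group perception described above.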
In contrast to our colleagues, we do not use the brand architecture in the context of an umbrella (country) brand and regional or provincial (city) sub-brands. The idea is to develop a brand management structure with target group-specific sub-brands and a place (e.g. city) umbrella brand. Very much like the modern organizational structures of marketing departments in companies (Homburg et al., 2000; Workman et al., 1998), the marketing structure of places should be organized by their target groups (Braun, 2008), as shown in Figure 2. We call this conceptual model the Place Brand Centre: a branded house approach with target group-specific sub-brands for all the different groups chosen to be targeted, and a place umbrella brand that is represented by the overall place brand perception shared by the entire target audience.

In our concept, firstly, the perception of the target group-specific place sub-brand is influenced by the communicated place sub-brand and the specific offer of the place, what we call the place physics (black arrows). Secondly, the perception of the target group is also influenced by the communicated umbrella city brand and the overall place physics (gray arrows). Finally, the perception is additionally influenced, in line with our second argument, by the perception of the other place sub-brands (white arrows). The overall place umbrella brand perception, on the other hand, is built by the communicated place umbrella brand, by the place physics, and finally by the perception of the different sub-brands.

Figure 2: The conceptual framework of the Place Brand Centre

In 2008, for example, the city of Berlin started a successful internal branding campaign (be Berlin), aiming to strengthen the identity of Berlin residents (Kavaratzis and Kalandides, 2009).
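The influence structure of Figure 2 can likewise be expressed as a toy model. The target groups, association strengths and arrow weights below are invented for illustration (the framework itself does not quantify the arrows); the point is only that a group's sub-brand perception blends four sources: its own communicated sub-brand, the umbrella brand, the place physics, and the other sub-brands:

```python
def blend(weighted_sources):
    """Merge (weight, associations) pairs into one perception dict."""
    perception = {}
    for weight, associations in weighted_sources:
        for assoc, strength in associations.items():
            perception[assoc] = perception.get(assoc, 0.0) + weight * strength
    return perception

# Hypothetical association strengths (0..1); not taken from the paper.
place_physics = {"cultural offerings": 0.8, "business climate": 0.6}
umbrella_brand = {"open city": 0.7}
sub_brands = {
    "visitors":  {"museums": 0.9, "nightlife": 0.7},
    "investors": {"start-up scene": 0.8, "tax climate": 0.5},
}

def sub_brand_perception(group):
    """Own sub-brand and physics (black/gray arrows) plus cross-influence
    from the other sub-brands (white arrows), with invented weights."""
    others = [a for g, a in sub_brands.items() if g != group]
    sources = [(0.5, sub_brands[group]), (0.2, umbrella_brand), (0.2, place_physics)]
    sources += [(0.1 / len(others), a) for a in others]
    return blend(sources)

print(sorted(sub_brand_perception("visitors")))
```

An overall umbrella brand perception could be composed the same way, by blending the sub-brand perceptions back into a single association dict.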
As of 2009 the city has tried to use this brand to attract tourists and investors as well (Berlin Partner GmbH, 2009), but with less success, because the concept does not fit tourists and investors (i.e. how should I be Berlin if I am not living in Berlin?). With regard to the Place Brand Centre, we would recommend developing distinct sub-brands for tourists (like visit Berlin) and for investors (invest in Berlin), which would enable target group-specific brand communication. This will be helpful for building strong sub-brand perceptions within the target groups. However, it is important to highlight that these sub-brands are not independent. A communicated tourist brand (visit Berlin) with a focus on the cultural offerings of the city (museums, theatre, etc.) will also influence the sub-brand and overall brand perceptions of residents or companies, and will be influenced by them, too.

Drawing on our fourth argument, another advantage of creating target group-specific sub-brands is the organizational structure already established in city governments (e.g. the separation of the tourism office and business development). By employing this concept, policy-making procedures and place sub-brand management should become more efficient, which will lead to new tasks for the place brand management of the place umbrella brand. In our model, the co-ordination, monitoring and communication between the sub-brand units become key aspects.

Still, the branding process is not limited to communication. The most important point is the place physics, the real characteristics of a place, because they strongly influence the perception of the place brand.
In this regard, place brand management also means developing the place to fulfil the customers' demands, and, in a second step, communicating an honest picture of the place (Ashworth & Voogd, 1990; Morgan et al., 2002; Trueman et al., 2004).

4. Discussion and Implications for Place Brand Management

In our opinion, the Place Brand Centre approach fulfils all the criteria for a good translation of a marketing concept to the context of places. The model will be helpful for place brand managers dealing with a diverse target audience, and it is bound to improve target group-specific communication. Place sub-brand managers could concentrate more on the specific demands of their target audience and identify their place competitors more easily (Zenker, Eggers et al., 2009). In addition, a target group-specific sub-brand is likely to increase positive brand effects such as brand identification by the target audience, because customers identify more with a matching specific brand than with a general one-size-fits-all place brand. Furthermore, we believe that public protests about place brand management and an exclusive focus on a special target group (e.g. tourists or the creative class), an example being the current 'Not in our name' campaign in the city of Hamburg (Gaier, 2009; Hammelehle, 2009), could also be avoided with this strategy.

For academia, we recommend two main directions for further empirical research: First, it should be tested empirically whether a target group-specific place brand has a stronger impact on dependent outcomes (e.g. place satisfaction or place identity) than a simple one-for-all place brand.
Second, the current discussion in Hamburg, concerning the complexity of the phenomenon of diverse place brand stakeholders, warrants more research on the general question of place brand management in relation to place governance.

With our Place Brand Centre approach, we also hope to advance the current discussion of place brand management for cities, stimulate future empirical research, and encourage general interest in this still-young academic field of place marketing and branding.

References

Aaker, D. A. (2004). Brand Portfolio Management. New York: The Free Press.
Aaker, D. A., & Joachimstahler, E. (2000). Brand Leadership. New York: The Free Press.
Anholt, S. (2006). Anholt City Brand Index – "How the World Views Its Cities" (2nd ed.). Bellevue, WA: Global Market Insight.
Anholt, S. (2007). Competitive Identity: The New Brand Management for Nations, Cities and Regions. New York: Palgrave Macmillan.
Ashworth, G. J., & Voogd, H. (1990). Selling the City: Marketing Approaches in Public Sector Urban Planning. London: Belhaven.
Azevedo, A. (2009). Are You Proud To Live Here? A Residents Oriented Place Marketing Audit (Attachment, Self-Esteem and Identity). Paper presented at the 38th European Marketing Academy Conference, Nantes, France.
Balakrishnan, M. S. (2009). Strategic branding of destinations: a framework. European Journal of Marketing, 43, 5/6, 611-629.
Berg, L. van den, & Braun, E. (1999). Urban Competitiveness, Marketing and the Need for Organising Capacity. Urban Studies, 36, 987-999.
Berlin Partner GmbH. (2009). Be Berlin Kampagne. Berlin Partner GmbH. Retrieved from: http://www.sei-berlin.de.
Bhattacharya, C. B., & Sen, S. (2003). Consumer-Company Identification: A Framework for Understanding Consumers' Relationships with Companies. Journal of Marketing, 67, April, 76-88.
Brakus, J. J., Schmitt, B. H., & Zarantonello, L. (2009). Brand Experience: What Is It? How Is It Measured? Does It Affect Loyalty? Journal of Marketing, 73, May, 52-68.
Braun, E. (2008). City Marketing: Towards an Integrated Approach. ERIM PhD Series in Research and Management, 142, Erasmus Research Institute of Management (ERIM), Rotterdam, available at: /1765/13694.
Bruhn, M., & Grund, M. A. (2000). Theory, Development and Implementation of National Customer Satisfaction Indices: The Swiss Index of Customer Satisfaction (SWICS). Total Quality Management, 11, 7, 1017-1028.
Burgess, J. (1982). Selling Places: Environmental Images for the Executive. Regional Studies, 16, 11-17.
Buurma, H. (2001). Public policy marketing: marketing exchange in the public sector. European Journal of Marketing, 35, 11/12, 1287-1300.
De Carlo, M., Canali, S., Pritchard, A., & Morgan, N. (2009). Moving Milan towards Expo 2015: designing culture into a city brand. Journal of Place Management and Development, 2, 1, 8-22.
Dooley, G., & Bowie, D. (2005). Place brand architecture: Strategic management of the brand portfolio. Journal of Place Branding, 1, 4, 402-419.
Florida, R. (2004). The Rise of the Creative Class. New York: Basic Books.
Gaier, T. (2009). Not In Our Name, Marke Hamburg! Retrieved from: http://www.buback.de/nion/
Grabow, B., Hollbach-Grömig, B., & Birk, F. (2006). City Marketing – Current Developments: An Overview. In F. Birk, B. Grabow & B. Hollbach-Grömig (Eds.), Stadtmarketing – Status quo und Perspektiven (pp. 19-34). Berlin: Deutsches Institut für Urbanistik.
Hammelehle, S. (2009, 6th November). Gentrifizierung in Hamburg – Alster, Michel, Protest. SpiegelOnline, pp. 1-3.
Hankinson, G. (2004). Relational network brands: Towards a conceptual model of place brands. Journal of Vacation Marketing, 10, 2, 109-121.
Hankinson, G. (2005). Destination brand images: a business tourism perspective. Journal of Services Marketing, 19, 1, 24-32.
Hildreth, J. (n.d.). The Saffron European City Brand Barometer. Retrieved from Saffron Brand Consultants website: http://saffron-/news-views/publications/
Homburg, C., Workman, J. P., & Jensen, O. (2000). Fundamental Changes in Marketing Organization: The Movement Toward a Customer-Focused Organizational Structure. Journal of the Academy of Marketing Science, 28, 4, 459-478.
Hospers, G.-J. (2003). Creative Cities in Europe: Urban Competitiveness in the Knowledge Economy. Intereconomics, September/October, 260-269.
Iversen, N. M., & Hem, L. E. (2008). Provenance associations as core values of place umbrella brands. European Journal of Marketing, 42, 5/6, 603-626.
Kavaratzis, M. (2005). Place Branding: A Review of Trends and Conceptual Models. The Marketing Review, 5, 329-342.
Kavaratzis, M. (2008). From City Marketing to City Branding: An Interdisciplinary Analysis with Reference to Amsterdam, Budapest and Athens. PhD thesis, Groningen: Rijksuniversiteit Groningen.
Kavaratzis, M. (2009). Cities and their brands: Lessons from corporate branding. Place Branding and Public Diplomacy, 5, 1, 26-37.
Kavaratzis, M., & Ashworth, G. J. (2005). City branding: an effective assertion of identity or a transitory marketing trick? Tijdschrift voor Economische en Sociale Geografie, 96, 5, 506-514.
Kavaratzis, M., & Kalandides, A. (2009). Place Branding as a Strategic Instrument in Urban Development. Paper presented at the 5th International Colloquium – Academy of Marketing: Brand, Identity and Reputation SIG, Cambridge, UK.
Keller, K. L. (1993). Conceptualizing, Measuring, and Managing Customer-Based Brand Equity. Journal of Marketing, 57, 1, 1-22.
Keller, K. L., & Lehmann, D. R. (2006). Brands and Branding: Research Findings and Future Priorities. Marketing Science, 25, 6, 740-759.
Knox, S., & Bickerton, D. (2003). The six conventions of corporate branding. European Journal of Marketing, 37, 7/8, 998-1016.
Kotler, P., & Gertner, D. (2002). Country as Brand, Product, and Beyond: A Place Marketing and Brand Management Perspective. Journal of Brand Management, 9, 4-5, 249-261.
Kotler, P., Haider, D. H., & Rein, I. (1993). Marketing Places: Attracting Investment, Industry, and Tourism to Cities, States, and Nations. New York: The Free Press.
Kotler, P., Asplund, C., Rein, I., & Haider, D. H. (1999). Marketing Places Europe: How to Attract Investments, Industries, Residents and Visitors to European Cities, Communities, Regions and Nations. London: Pearson Education Ltd.
Kotler, P., Hamlin, M. A., Rein, I., & Haider, D. H. (2002). Marketing Asian Places: Attracting Investment, Industry, and Tourism to Cities, States, and Nations. Singapore: John Wiley & Sons (Asia).
Kotler, P., & Levy, S. J. (1969). Broadening the Concept of Marketing. Journal of Marketing, 33, January, 10-15.
Low Kim Cheng, P., & Taylor, J. L. (2007). Branding of Former Soviet Cities: The Case of Almaty. The ICFAI Journal of Brand Management, IV, 44, 7-13.
Lynch, K. (1960). The Image of the City. Cambridge: MIT Press.
Medway, D., & Warnaby, G. (2008). Alternative perspectives on marketing and the place brand. European Journal of Marketing, 42, 5/6, 641-653.
Morgan, N., Pritchard, A., & Piggott, R. (2002). New Zealand, 100% Pure. The Creation of a Powerful Niche Destination Brand. Journal of Brand Management, 9, 4-5, 335-354.
O'Leary, R., & Iredale, I. (1976). The marketing concept: quo vadis? European Journal of Marketing, 10, 3, 146-157.
