GATE: A Unicode-based infrastructure supporting multilingual information extraction


Computer Vision (English PPT)

Meanwhile, the MIT AI Laboratory attracted many famous scholars from around the world to participate in machine vision research, covering machine vision theory, algorithms, and system design.
Its mainstream research can be divided into three stages:
Stage 1: research on basic visual methods, taking the model world as the main object;
Stage 2: research on visual models, based on computational theory;
the other is to reconstruct the three-dimensional object from its two-dimensional projection images.
History of computer vision
1950s: in this period, statistical pattern recognition was the main approach applied in computer vision. It focused chiefly on the analysis and identification of two-dimensional images, such as optical character recognition, workpiece surface inspection, and the analysis and interpretation of aerial images.

Speech Quality Assessment


Speech quality assessment is the evaluation of speech quality by human or automated methods.

In practice, there are many subjective and objective methods for evaluating speech quality.

Subjective methods have human listeners score the speech, e.g., MOS, CMOS, and ABX tests.
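As a small illustration of the subjective side, a MOS experiment ultimately reduces to averaging listener ratings. A minimal sketch follows; reporting a 95% confidence interval via a normal approximation is a common convention, not something the standards mandate:

```python
import math

def mos(scores):
    """Mean Opinion Score with a normal-approximation 95% confidence
    interval; `scores` are listener ratings on the usual 1-5 scale."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance
    half = 1.96 * math.sqrt(var / n)                      # 95% half-width
    return mean, (mean - half, mean + half)
```

For example, `mos([4, 4, 4, 4])` returns a mean of 4.0 with a zero-width interval, while disagreeing listeners widen the interval accordingly.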

Objective methods evaluate speech quality algorithmically. This problem has been studied extensively in real-time voice communication, where full-reference and no-reference standards such as PESQ and P.563 have emerged.

In speech synthesis the problem has received less study; papers often present spectrogram details or compute MCD (mel cepstral distortion) as an objective evaluation.

Whether a method is full-reference or no-reference depends on whether it requires a reference signal.

A full-reference method needs, besides the signal under test, a high-quality, undistorted reference signal; a no-reference method does not, and gives a quality score directly from the signal under test.

In recent years, deep-network-based automatic speech quality assessment methods such as MOSNet have also appeared.

Speech quality assessment methods: the following briefly summarizes commonly used methods.

Subjective evaluation: MOS [1], CMOS, ABX Test.
Objective evaluation:
- Full-reference (intrusive) methods: ITU-T P.861 (MNB), ITU-T P.862 (PESQ) [2], ITU-T P.863 (POLQA) [3], STOI [4], BSSEval [5]
- No-reference (non-intrusive) methods:
  - Traditional, signal-based: ITU-T P.563 [6], ANIQUE+ [7]
  - Traditional, parameter-based: ITU-T G.107 (E-Model) [8]
  - Deep-learning-based: AutoMOS [9], QualityNet [10], NISQA [11], MOSNet [12]
In addition, open-source code exists for some of these methods: one repository collects open-source implementations of MOSNet, SRMR, BSSEval, PESQ, and STOI, together with the addresses of the corresponding source repositories.

The ITU has published its own implementation of P.563.

A slightly modified version on GitHub makes it possible to compile it on macOS.

There is also a tool for computing the MCD used in speech synthesis. In addition, a book is devoted specifically to evaluating speech quality: Quality of Synthetic Speech: Perceptual Dimensions, Influencing Factors, and Instrumental Assessment (T-Labs Series in Telecommunication Services) [13].
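For reference, the MCD mentioned above has a simple closed form. A minimal sketch, assuming the mel-cepstral frames have already been extracted and time-aligned (in practice DTW alignment is applied first) and that the energy coefficient c0 has been dropped by the caller:

```python
import math

def mcd_frame(c_ref, c_syn):
    """Mel cepstral distortion (dB) between two per-frame mel-cepstrum
    vectors, using the common (10/ln 10) * sqrt(2 * sum of squared
    coefficient differences) form."""
    sq = sum((a - b) ** 2 for a, b in zip(c_ref, c_syn))
    return (10.0 / math.log(10)) * math.sqrt(2.0 * sq)

def mcd(ref_frames, syn_frames):
    """Average frame-level MCD over a pair of aligned frame sequences."""
    vals = [mcd_frame(r, s) for r, s in zip(ref_frames, syn_frames)]
    return sum(vals) / len(vals)
```

Identical frames give 0 dB; a unit difference in a single coefficient gives (10/ln 10)·√2 ≈ 6.14 dB.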

Spatio-Temporal LSTM with Trust Gates for 3D Human Action Recognition


Spatio-Temporal LSTM with Trust Gates for 3D Human Action Recognition

Jun Liu¹, Amir Shahroudy¹, Dong Xu², and Gang Wang¹(B)
¹ School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, Singapore {jliu029,amir3,wanggang}@.sg
² School of Electrical and Information Engineering, University of Sydney, Sydney, Australia ******************.au

Abstract. 3D action recognition – analysis of human actions based on 3D skeleton data – has become popular recently due to its succinctness, robustness, and view-invariant representation. Recent attempts on this problem suggested developing RNN-based learning methods to model the contextual dependency in the temporal domain. In this paper, we extend this idea to spatio-temporal domains to analyze the hidden sources of action-related information within the input data over both domains concurrently. Inspired by the graphical structure of the human skeleton, we further propose a more powerful tree-structure based traversal method. To handle the noise and occlusion in 3D skeleton data, we introduce a new gating mechanism within LSTM to learn the reliability of the sequential input data and accordingly adjust its effect on updating the long-term context information stored in the memory cell. Our method achieves state-of-the-art performance on 4 challenging benchmark datasets for 3D human action analysis.

Keywords: 3D action recognition · Recurrent neural networks · Long short-term memory · Trust gate · Spatio-temporal analysis

1 Introduction

In recent years, action recognition based on the locations of major joints of the body in 3D space has attracted a lot of attention. Different feature extraction and classifier learning approaches have been studied for 3D action recognition [1-3]. For example, Yang and Tian [4] represented the static postures and the dynamics of the motion patterns via eigenjoints and utilized a Naïve-Bayes-Nearest-Neighbor classifier. An HMM was applied by [5] for modeling the temporal dynamics of the actions over a histogram-based representation of 3D joint locations. Evangelidis et al. [6] learned a GMM over the Fisher kernel representation of a succinct skeletal feature, called skeletal quads. Vemulapalli et al. [7] represented the skeleton configurations and actions as points and curves in a Lie group, respectively, and utilized an SVM classifier to classify the actions. A skeleton-based dictionary learning method utilizing group sparsity and a geometry constraint was also proposed by [8]. An angular skeletal representation over the tree-structured set of joints was introduced in [9], which calculated the similarity of these features over the temporal dimension to build the global representation of the action samples and fed them to an SVM for final classification.

Recurrent neural networks (RNNs), a variant of neural nets for handling sequential data with variable length, have been successfully applied to language modeling [10-12], image captioning [13,14], video analysis [15-24], human re-identification [25,26], and RGB-based action recognition [27-29]. They have also achieved promising performance in 3D action recognition [30-32]. Existing RNN-based 3D action recognition methods mainly model the long-term contextual information in the temporal domain to represent motion-based dynamics. However, there is also strong dependency between joints in the spatial domain, and the spatial configuration of joints in video frames can be highly discriminative for the 3D action recognition task.

In this paper, we propose a spatio-temporal long short-term memory (ST-LSTM) network which extends traditional LSTM-based learning to two concurrent domains (temporal and spatial). Each joint receives contextual information from neighboring joints and also from previous frames to encode the spatio-temporal context. Human body joints are not naturally arranged in a chain, therefore feeding a simple chain of joints to a sequence learner cannot perform well. Instead, a tree-like graph can better represent the adjacency properties between the joints in the skeletal data. Hence, we also propose a tree-structure based skeleton traversal method to explore the kinematic relationship between the joints for better spatial dependency modeling. In addition, since the acquisition of depth sensors is not always accurate, we further improve the design of the ST-LSTM by adding a new gating function, the so-called "trust gate", to analyze the reliability of the input data at each spatio-temporal step and give the network better insight about when to update, forget, or remember the contents of the internal memory cell as the representation of long-term context information.

The contributions of this paper are: (1) the spatio-temporal design of LSTM networks for 3D action recognition, (2) a skeleton-based tree traversal technique to feed the structure of the skeleton data into a sequential LSTM, (3) improving the design of the ST-LSTM by adding the trust gate, and (4) achieving state-of-the-art performance on all the evaluated datasets.

2 Related Work

Human action recognition using 3D skeleton information has been explored in different aspects during recent years [33-50]. In this section, we limit our review to the more recent RNN-based and LSTM-based approaches. HBRNN [30] applied bidirectional RNNs in a novel hierarchical fashion. They divided the entire skeleton into five major groups of joints, and each group was fed

© Springer International Publishing AG 2016. B. Leibe et al. (Eds.): ECCV 2016, Part III, LNCS 9907, pp. 816-833, 2016. DOI: 10.1007/978-3-319-46487-9_50
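The trust-gate idea described above can be caricatured in a single scalar LSTM step. This is an illustrative sketch under our own simplifications, not the paper's exact equations: the weight names (`wi`, `ui`, ...) are ours, and the trust value is derived from the mismatch between the input and a prediction made from the previous hidden state, down-weighting the input's contribution to the memory cell when the input looks unreliable:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def trust_gated_lstm_step(x, h_prev, c_prev, w, lam=1.0):
    """One scalar LSTM step extended with a multiplicative 'trust' value
    (illustrative only; NOT the paper's exact formulation)."""
    i = sigmoid(w['wi'] * x + w['ui'] * h_prev + w['bi'])    # input gate
    f = sigmoid(w['wf'] * x + w['uf'] * h_prev + w['bf'])    # forget gate
    o = sigmoid(w['wo'] * x + w['uo'] * h_prev + w['bo'])    # output gate
    u = math.tanh(w['wu'] * x + w['uu'] * h_prev + w['bu'])  # candidate
    p = math.tanh(w['wp'] * h_prev + w['bp'])                # predicted input
    tau = math.exp(-lam * (x - p) ** 2)                      # trust in (0, 1]
    c = tau * i * u + f * c_prev                             # gated cell update
    h = o * math.tanh(c)
    return h, c, tau
```

When the input matches the prediction, tau is 1 and the step reduces to an ordinary LSTM update; a badly mismatched (noisy or occluded) input drives tau toward 0, so the candidate barely enters the cell.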

Exploring a Data Integration Scheme for Underground Municipal Infrastructure Survey Pipeline Data


Intelligent City (智能城市), No. 10, 2023.
Exploring a data integration scheme for underground municipal infrastructure survey pipeline data
PAN Yu (Guangzhou Urban Planning & Design Survey Research Institute, Guangzhou 510060, Guangdong)
Abstract: Based on the requirements of Guangzhou's underground municipal infrastructure survey pipeline-data upgrade project, and building on existing surveying and mapping results and the various pipeline data investigated and collected, this paper integrates multi-source data with vector-data-fusion methods such as geometric feature association and attribute feature association. This largely avoids repeated measurement and reduces data-processing labor costs, yielding an accurate and comprehensive underground pipeline database that integrates management information, technical information, and geological-hazard information.

Key words: municipal infrastructure; underground pipelines; spatial data fusion; pipeline data processing
CLC number: TU990.3; Document code: A; Article ID: 2096-1936(2023)10-0049-03; DOI: 10.19301/ki.zncs.2023.10.015

Exploring a data integration scheme for underground municipal infrastructure survey pipeline data in urban areas
PAN Yu
Abstract: According to the requirements of the pipeline data upgrading project of the Guangzhou underground municipal infrastructure survey, and based on the existing surveying and mapping results and the various pipeline data collected and investigated, the paper adopts vector data fusion methods such as geometric feature association and attribute feature association to integrate multiple data sources. This avoids repeated measurements to a large extent, reduces the cost of data-processing personnel, and forms an accurate, comprehensive underground pipeline database combining management information, technical information, and geological hazard information.
Key words: municipal infrastructure; underground pipelines; spatial data fusion; pipeline data processing

Urban underground municipal infrastructure suffers from problems such as insufficient overall coordination and inadequate operation management [1].
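The geometric/attribute feature association idea can be sketched as a toy matching routine. The field names (`start`, `end`, `type`, `material`) and the distance tolerance are illustrative assumptions, not the paper's actual data schema:

```python
def match_pipelines(survey, collected, tol=0.5):
    """Toy sketch of geometric + attribute feature association: a segment
    from the newly surveyed dataset is matched to a legacy record when its
    endpoints agree within `tol` metres AND key attributes agree."""
    def close(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5 <= tol
    matches = []
    for s in survey:
        for c in collected:
            geom_ok = close(s['start'], c['start']) and close(s['end'], c['end'])
            attr_ok = s['type'] == c['type'] and s['material'] == c['material']
            if geom_ok and attr_ok:
                matches.append((s['id'], c['id']))
    return matches
```

A real integration pipeline would additionally handle reversed segment direction, many-to-one splits, and fuzzy attribute matching, but the core association test has this shape.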

Semantic Triple Extraction: Overview and Explanation


1. Introduction

1.1 Overview

Semantic triple extraction is a natural language processing technique that aims to automatically extract subject-predicate-object semantic information from text.

By extracting the entities in a sentence and the relations between them into (subject, predicate, object) triples, more structured and interpretable semantic information is obtained.

This technique has broad application prospects in information retrieval, knowledge graph construction, semantic analysis, and related fields.

The overview introduces the basic concepts and significance of semantic triple extraction and the key topics this article will discuss.

Through this introduction, readers can better understand the research significance and application scenarios of the content that follows.

1.2 Article Structure

This article is divided into three main parts: introduction, body, and conclusion.

The introduction covers the topic from three angles: overview, article structure, and purpose.

First, we briefly introduce the background and significance of semantic triple extraction, leading to the subject of this article.

Next, we describe the overall structure of the article, clarifying the arrangement and logical relationships of its parts.

Finally, we state the purpose of this article, identifying the problems it addresses and the value it brings.

The body consists of three subsections.

First, we introduce the concept of semantic triples, including their definition, characteristics, and components.

Next, we systematically survey semantic triple extraction methods, including rule-based, statistics-based, and deep-learning-based approaches.

Finally, we discuss practical application scenarios of semantic triples, including knowledge graph construction, search engine optimization, and natural language processing.

The conclusion summarizes the preceding content and looks ahead.

First, we summarize the results and highlights of this article, pointing out the importance and necessity of semantic triple extraction.

Next, we look ahead to future research directions and development trends, exploring the potential application value of semantic triples in intelligent technology.

Finally, we close with concise remarks emphasizing the significance of semantic triple extraction for advancing intelligent systems.

1.3 Purpose

The purpose of this article is to introduce semantic triple extraction and discuss its importance and application value in natural language processing, knowledge graph construction, semantic analysis, and related fields.

By discussing the concept of semantic triples and the methods for extracting them, we hope to help readers better understand and apply this technique and improve their ability to understand and exploit the semantic information in text.
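As a toy illustration of the rule-based approach mentioned in Sect. 1.2, assuming dependency roles (`nsubj`, `verb`, `dobj`) have already been assigned by a parser; real systems read these roles off a dependency parse rather than a pre-labeled list:

```python
def extract_triples(sentence):
    """Minimal rule-based sketch of subject-predicate-object extraction.
    `sentence` is a list of (token, role) pairs standing in for a parse;
    returns one (subject, predicate, object) triple, or None if any part
    is missing."""
    subj = pred = obj = None
    for token, role in sentence:
        if role == 'nsubj':
            subj = token
        elif role == 'verb':
            pred = token
        elif role == 'dobj':
            obj = token
    return (subj, pred, obj) if all((subj, pred, obj)) else None
```

For example, the labeled sentence "Marie discovered radium" yields the triple ("Marie", "discovered", "radium"), which can then be inserted into a knowledge graph as an edge.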

Visual Inspection Algorithms for Defects in Textured Objects (Graduate Thesis)


Abstract
In the fiercely competitive process of industrial automation, machine vision plays a pivotal role in product quality control, and its application to defect detection has become increasingly common. Compared with conventional inspection techniques, automated visual inspection systems are more economical, faster, more efficient, and safer. Textured objects are ubiquitous in industrial production: substrates for semiconductor assembly and packaging, light-emitting diodes, printed circuit boards in modern electronic systems, and cloth and fabrics in the textile industry can all be regarded as objects with textured features. This thesis focuses on defect detection techniques for textured objects, providing efficient and reliable detection algorithms for their automated inspection.
Texture is an important feature for describing image content, and texture analysis has been successfully applied to texture segmentation and classification. This work proposes a defect detection algorithm based on texture analysis and reference comparison. The algorithm tolerates image registration errors caused by object deformation and is robust to the influence of texture. It aims to provide rich and meaningful physical descriptions of detected defect regions, such as their size, shape, brightness contrast, and spatial distribution. When a reference image is available, the algorithm can inspect both homogeneous and non-homogeneous textured objects, and it also performs well on non-textured objects.
Throughout the detection process, we adopt steerable-pyramid texture analysis and reconstruction. Unlike traditional wavelet texture analysis, we add a tolerance control algorithm in the wavelet domain to tolerate object deformation and remain robust to texture effects, and the final steerable-pyramid reconstruction ensures that the physical meaning of defect regions is recovered accurately. In the experimental stage, we tested a series of images of practical application value; the results show that the proposed defect detection algorithm for textured objects is efficient and easy to implement.
Keywords: defect detection, texture, object distortion, steerable pyramid, reconstruction
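The reference-comparison principle can be caricatured in a few lines. The thesis' actual method operates in a steerable-pyramid domain with deformation tolerance; this sketch merely thresholds a pixelwise difference against a clean reference, which is the baseline the thesis improves upon:

```python
def detect_defects(test_img, ref_img, thresh=0.2):
    """Bare-bones reference-comparison defect detection: images are 2-D
    lists of floats in [0, 1]; returns a binary mask marking pixels whose
    absolute difference from the reference exceeds `thresh`."""
    return [[1 if abs(t - r) > thresh else 0
             for t, r in zip(trow, rrow)]
            for trow, rrow in zip(test_img, ref_img)]
```

This naive version fails exactly where the thesis' approach succeeds: any misregistration or texture variation between test and reference images shows up as false defects, which motivates working in a deformation-tolerant pyramid domain instead.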

Anhui "Jiangnan Ten Schools" 2024-2025 Academic Year, Senior Three, First Comprehensive Quality Test (October): English Paper


Part I. Listening Comprehension
1. What does Jerry do for a living now? A. He makes videos. B. He reports news. C. He writes storybooks.
2. Why did the man join the soccer club? A. To get credits. B. To make some friends. C. To satisfy his interest.
3. What does the man tell the woman to do? A. Complete the project. B. Take a break. C. Get him some coffee.
4. What did the woman probably do last night? A. She went to a pool. B. She finished a report. C. She planned a project.
5. What is the main topic of the conversation? A. Grocery shopping. B. Food preservation. C. Cooking techniques.

Listen to the following longer conversation and answer the questions below.
6. What did the boy spend an hour doing today? A. Concentrating on handling balls. B. Walking his dog. C. Practicing shooting.
7. How soon will the dinner be ready? A. In 20 minutes. B. In 30 minutes. C. In 40 minutes.

Listen to the following longer conversation and answer the questions below.
8. Why is Emily planting trees? A. She wants to celebrate Tree-Planting Day. B. She is participating in a school project. C. She hopes to have some fun.
9. What does Emily say about oak trees? A. They provide a habitat and a food source for wildlife. B. They are easier to take care of than other trees. C. They have hard wood and are long-lasting.
10. What will the speakers do next? A. Choose a tree. B. Visit a forest. C. Pick a spot.

Listen to the following longer conversation and answer the questions below.

Teledyne Test Tools T3AFG200 T3AFG350 T3AFG500


Debug with Confidence: 200 MHz - 500 MHz

The Teledyne Test Tools T3AFG200 / T3AFG350 / T3AFG500 range of function/arbitrary waveform generators is a series of dual-channel waveform generators with specifications of up to 500 MHz maximum bandwidth, 2.4 GSa/s maximum sampling rate, and 16-bit vertical resolution. The proprietary arbitrary and pulse techniques used in these models help overcome the weaknesses inherent in traditional DDS generators when generating arbitrary, square, and pulse waveforms. With these advantages, the T3AFG200 / T3AFG350 / T3AFG500 generators provide users with a variety of high-fidelity, low-jitter signals that meet the growing requirements of a wide range of complex applications.

Tools for Improved Debugging
• Deep memory: 20 Mpts/channel. Generate complex arbitrary waveforms.
• Wide range of modulation types: AM, DSB-AM, FM, PM, FSK, ASK, PWM, Sweep, Burst, and PSK. Quickly set up modulated waveforms.
• High resolution: 16 bits. Generate waveforms with low noise, low spurious signal content, and high dynamic range.
• Bandwidth models up to 500 MHz. Wide choice of bandwidths.
• Built-in arbitrary waveforms. Load and replay built-in arbitrary waveforms.
• PRBS, I/Q, and user-defined waveform capability. Support for complex applications.
• Single- and dual-channel models also available, starting from 5 MHz. Inquire about the T3AFG5, T3AFG10, T3AFG40, T3AFG80, and T3AFG120.

Key Specifications: T3AFG200, T3AFG350, T3AFG500

Excellent Performance
• Bandwidths from 200 MHz to 500 MHz
• All models have 2 channels
• 20 Mpts/channel memory

Great Connectivity
• USB host port for mass storage
• USB device port (USBTMC)
• LAN port on 2-channel models

The rise/fall times can be set independently to a minimum of 1 ns (2 ns on the T3AFG200) at any frequency, and to a maximum of 75 s. The T3AFG range of function/arbitrary waveform generators supports a wide range of modulation types, including AM, FM, PM, FSK, ASK, PSK, PWM, and DSB-AM. Burst mode supports 'N Cycle' and 'Gated' modes, with the burst source configured as 'Internal', 'External', or 'Manual'. Output amplitude into a high-impedance load can be as high as 20 Vpp, depending on frequency and waveform type.

Smart Capabilities
• Sweep output carrier can be Sine, Square, Ramp, or Arbitrary waveforms; linear or log sweep
• Burst output under internal or external signal control
• Waveform types include PRBS (PRBS3 - PRBS32)
• Frequency resolution of 1 µHz
• DSB-AM: double-sideband AM modulation function
• 10th-order harmonic function
• Optional IQ modulation (T3AFG-IQ)
• Multi-language user interface

The counter functionality, accessed via the rear-panel BNC, gives a DC- or AC-coupled counter capability from 100 mHz to 400 MHz. With their low-jitter design, the T3AFG200, T3AFG350, and T3AFG500 can generate waveforms with exceptional edge stability; better jitter performance means better edge stability and higher confidence in your circuit design. Sweep mode supports 'Linear' and 'Log' sweep with 'Up' and 'Down' directions, and the sweep source can be configured as 'Internal', 'External', or 'Manual'. High-fidelity output with 80 dB dynamic range: sine-wave non-harmonic spurious artifacts are -60 dBc at ≤ 350 MHz and -55 dBc above 350 MHz. Gaussian noise with adjustable bandwidth up to 500 MHz, depending on model.
Wide-bandwidth Gaussian noise can be added to other waveforms to simulate real-world scenarios in which the signal contains a large degree of noise.

T3AFG-IQ: Optional IQ Signal Generation
The T3AFG200, T3AFG350, and T3AFG500 optionally support IQ signal generation with symbol rates between 250 Symbols/s and 37.5 MSymbols/s, providing ASK, PSK, QAM, FSK, MSK, and multi-tone signals. The built-in quadrature modulator makes it possible to generate IQ signals from baseband up to a 500 MHz intermediate frequency (depending on the T3AFG model). The EasyIQ software is necessary to generate an IQ waveform when using the T3AFG-IQ option; it is a PC program used to download IQ baseband waveform data to the T3AFG200, T3AFG350, or T3AFG500 through a USB or LAN device interface.

Phase-Locked Operation Mode
The 'Phase-Locked' mode automatically aligns the phases of each output, while 'Independent' mode permits the two output channels to be used as two independent waveform generators.
Waveform Combining
The T3AFG200, T3AFG350, and T3AFG500 have waveform-combining capability whereby Channel 1 and Channel 2 can be combined to a user-selected output. The combined waveform can be output on both Ch 1 and Ch 2 simultaneously, or just on a single output (Ch 1 or Ch 2), whilst the other channel outputs the un-combined waveform for that channel. Easily combine basic waveforms (sine, square, ramp, pulse, etc.), random noise, modulation signals, burst signals, and arbitrary waveforms.

Harmonic Function
The harmonics function gives the user the ability to add higher-order elements to the signal being generated.

PRBS
The PRBS capability gives the flexibility to generate PRBS waveforms from PRBS3 to PRBS32 at up to 300 Mbps, with edge rates from 1 ns to 1 µs. An added differential mode provides an easy way to generate differential PRBS signals using both output channels. Easily set outputs to common logic levels such as TTL, ECL, LVCMOS, LVPECL, and LVDS using built-in presets.

16-Bit Resolution
• T3AFG200 / T3AFG350 / T3AFG500 all have 16-bit resolution
• 4x higher resolution than 14-bit systems
• Lower levels of harmonic distortion
• Lower levels of non-harmonic spurious signals
• Improved dynamic range
• Enhanced signal fidelity

I/O Connectivity
• LAN and USB connection
• 10 MHz reference input and output
• The Aux Input/Output BNC connector supports trigger input, trigger/sync output, external modulation input, external sweep/burst trigger input, and external gate input
• External counter input

© 2020 Teledyne Test Tools is a brand and trademark of Teledyne LeCroy Inc. All rights reserved. Specifications, prices, availability and delivery subject to change without notice. Product brand or brand names are trademarks or requested trademarks of their respective holders. T3 stands for Teledyne Test Tools.

Company Profile
Teledyne LeCroy is a leading provider of oscilloscopes, protocol analyzers, and related test and measurement solutions that enable companies across a wide range of industries to design and test electronic devices of all types. Since our founding in 1964, we have focused on creating products that improve productivity by helping engineers resolve design issues faster and more effectively. Oscilloscopes are tools used by designers and engineers to measure and analyze complex electronic signals in order to develop high-performance systems and to validate electronic designs in order to improve time to market. The Teledyne Test Tools brand extends the Teledyne LeCroy product portfolio with a comprehensive range of test equipment solutions. This range of products delivers a broad array of quality test solutions that enable engineers to rapidly validate products and designs and reduce time-to-market. Designers, engineers, and educators rely on Teledyne Test Tools solutions to meet their most challenging needs for testing, education, and electronics validation.

Location and Facilities
Headquartered in Chestnut Ridge, New York, Teledyne Test Tools and Teledyne LeCroy have sales, service, and development subsidiaries in the US and throughout Europe and Asia. Teledyne Test Tools and Teledyne LeCroy products are employed across a wide variety of industries, including semiconductor, computer, consumer electronics, education, military/aerospace, automotive/industrial, and telecommunications.

Teledyne LeCroy (US Headquarters)
700 Chestnut Ridge Road, Chestnut Ridge, NY, USA 10977-6499
Phone: 800-553-2769 or 845-425-2000
Fax Sales: 845-578-5985
Phone Support: 1-800-553-2769
Email Sales: *******************************
Email Support: **************************
Web Site: /

Teledyne LeCroy (European Headquarters)
Teledyne LeCroy GmbH, Im Breitspiel 11c, D-69126 Heidelberg, Germany
Phone: +49 6221 82700
Fax: +49 6221 834655
Phone Service: +49 6221 8270 85
Phone Support: +49 6221 8270 28
Email Sales: *******************************
Email Service: *******************************
Email Support: ********************************
Web Site: /germany

Worldwide support contacts can be found at: https:///support/contact/#

Distributed by:

Visual Saliency Detection Based on a Multi-level Global Information Propagation Model


Journal of Computer Applications (计算机应用), 2021, 41(1): 208-214. Published 2021-01-10. ISSN 1001-9081, CODEN JYIIDU.

Visual Saliency Detection Based on a Multi-level Global Information Propagation Model
WEN Jing*, SONG Jianwei (School of Computer and Information Technology, Shanxi University, Taiyuan 030006)
(*Corresponding author, e-mail: wjing@ )
Abstract: The idea of processing the convolutional features in a neural network hierarchically can significantly improve the performance of salient object detection.

However, when integrating hierarchical features, how to obtain rich global information, and how to effectively fuse the global information of the higher-level feature space with low-level detail information, remain unsolved problems.

To this end, a saliency detection algorithm based on a multi-level global information propagation model is proposed.

To extract rich multi-scale global information, a multi-scale global feature aggregation module (MGFAM) is introduced at the higher levels, and the global information extracted from multiple levels undergoes a feature fusion operation; in addition, to obtain both the global information of the high-level feature space and rich low-level detail information, the extracted discriminative high-level global semantic information is fused with lower-level features by feature propagation.

These operations extract high-level global semantic information to the greatest possible extent while avoiding the losses incurred as that information is propagated step by step to the lower levels.

Experiments on four datasets (ECSSD, PASCAL-S, SOD, and HKU-IS) show that, compared with the advanced NLDF model, the proposed algorithm improves the F-measure (F) by 0.028, 0.05, 0.035, and 0.013, and lowers the mean absolute error (MAE) by 0.023, 0.03, 0.023, and 0.007, respectively.

The proposed algorithm also outperforms several classical image saliency detection methods in precision, recall, F-measure, and MAE.

Keywords: saliency detection; global information; neural network; information propagation; multi-scale pooling. CLC number: TP391.413; Document code: A

Visual saliency detection based on multi-level global information propagation model
WEN Jing*, SONG Jianwei (School of Computer and Information Technology, Shanxi University, Taiyuan Shanxi 030600, China)
Abstract: The idea of hierarchical processing of convolutional features in neural networks has a significant effect on salient object detection. However, when integrating hierarchical features, it is still an open problem how to obtain rich global information, as well as how to effectively integrate the global information of the higher-level feature space with the low-level detail information. Therefore, a saliency detection algorithm based on a multi-level global information propagation model was proposed. In order to extract rich multi-scale global information, a Multi-scale Global Feature Aggregation Module (MGFAM) was introduced at the higher levels, and a feature fusion operation was performed on the global information extracted from multiple levels. In addition, in order to obtain the global information of the high-level feature space and the rich low-level detail information at the same time, the extracted discriminative high-level global semantic information was fused with the lower-level features by means of feature propagation. These operations were able to extract the high-level global semantic information to the greatest extent and avoid the loss of this information as it was gradually propagated to the lower levels. Experimental results on four datasets including ECSSD, PASCAL-S, SOD and HKU-IS show that, compared with the advanced NLDF (Non-Local Deep Features for salient object detection) model, the proposed algorithm increases the F-measure (F) by 0.028, 0.05, 0.035 and 0.013 respectively, decreases the Mean Absolute Error (MAE) by 0.023, 0.03, 0.023 and 0.007 respectively, and is superior to several classical image saliency detection methods in terms of precision, recall, F-measure and MAE.
Key words: saliency detection; global information; neural network; information propagation; multi-scale pooling

Introduction
Visual saliency originates from the visual attention model in cognitive science; it aims to simulate how the human visual system automatically detects the most distinctive and eye-catching target regions in an image.
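The multi-scale global pooling at the heart of a module like MGFAM can be sketched as follows. This is an illustrative reimplementation of the generic pyramid-pooling idea, not the paper's exact module; it assumes each scale divides the feature map evenly and works on a single-channel 2-D map:

```python
def multiscale_global_pool(feat, scales=(1, 2, 4)):
    """Average-pool a 2-D feature map (list of lists of floats) onto 1x1,
    2x2 and 4x4 grids; returns a dict mapping scale -> pooled grid. The
    scale-1 output is the global average; finer grids retain coarse
    spatial layout for later fusion with lower-level features."""
    h, w = len(feat), len(feat[0])
    out = {}
    for s in scales:
        pooled = []
        for bi in range(s):
            row = []
            for bj in range(s):
                ys = range(bi * h // s, (bi + 1) * h // s)
                xs = range(bj * w // s, (bj + 1) * w // s)
                vals = [feat[y][x] for y in ys for x in xs]
                row.append(sum(vals) / len(vals))
            pooled.append(row)
        out[s] = pooled
    return out
```

In a real network the pooled grids would be upsampled back to the input resolution and concatenated (or summed) with the original features before the fusion convolution.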

Finding community structure in networks using the eigenvectors of matrices

M. E. J. Newman
Department of Physics and Center for the Study of Complex Systems, University of Michigan, Ann Arbor, MI 48109–1040
We consider the problem of detecting communities or modules in networks, groups of vertices with a higher-than-average density of edges connecting them. Previous work indicates that a robust approach to this problem is the maximization of the benefit function known as "modularity" over possible divisions of a network. Here we show that this maximization process can be written in terms of the eigenspectrum of a matrix we call the modularity matrix, which plays a role in community detection similar to that played by the graph Laplacian in graph partitioning calculations. This result leads us to a number of possible algorithms for detecting community structure, as well as several other results, including a spectral measure of bipartite structure in networks and a centrality measure that identifies those vertices that occupy central positions within the communities to which they belong. The algorithms and measures proposed are illustrated with applications to a variety of real-world complex networks.
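The maximization described above can be illustrated concretely: build the modularity matrix B = A - kkᵀ/2m and split the graph by the sign of its leading eigenvector. This sketch uses plain power iteration with a spectral shift so the most positive eigenvalue dominates; a production implementation would use a proper eigensolver:

```python
def modularity_matrix(adj):
    """Modularity matrix B = A - k k^T / (2m) for an undirected graph
    given as a dense 0/1 adjacency list-of-lists."""
    n = len(adj)
    k = [sum(row) for row in adj]       # degrees
    two_m = sum(k)                      # 2m = total degree
    return [[adj[i][j] - k[i] * k[j] / two_m for j in range(n)]
            for i in range(n)]

def leading_split(adj, iters=500):
    """Two-way community division from the sign of the leading eigenvector
    of B. Power iteration is run on B + cI, where the Gershgorin-style
    shift c makes the most positive eigenvalue of B the dominant one."""
    B = modularity_matrix(adj)
    n = len(B)
    c = max(sum(abs(x) for x in row) for row in B)
    v = [1.0] * n
    v[0] = 2.0  # break symmetry so the iterate overlaps the leading vector
    for _ in range(iters):
        w = [sum(B[i][j] * v[j] for j in range(n)) + c * v[i]
             for i in range(n)]
        norm = max(abs(x) for x in w) or 1.0
        v = [x / norm for x in w]
    return [0 if x >= 0 else 1 for x in v]
```

On a toy graph of two triangles joined by a single edge, the sign pattern of the leading eigenvector recovers the two triangles as the two communities, which is the behavior the spectral method promises when the leading eigenvalue is positive.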

GigaVUE HC Series Product Data Sheet


GigaVUE HC Series: Scalable Traffic Intelligence for Small to Large Enterprises and Service Providers

The GigaVUE HC series consists of three models: GigaVUE-HC1, GigaVUE-HC2, and GigaVUE-HC3. As a key product family within the Gigamon Visibility and Analytics Fabric™, the GigaVUE HC series enables comprehensive traffic and security intelligence at scale. These next-generation network packet brokers are an ideal choice to enhance your security and monitoring solutions with inline and out-of-band security and monitoring tools. The GigaVUE HC Series is used to provide visibility for active and passive security as well as network and application monitoring.

Key Benefits

Management, Integration, and Installation
• Small footprint with low space, power, and cooling needs
• Modular for flexibility and scalability as needs change
• Rapid programmatic response to detectable events
• Advanced integration with tools, controllers, and other infrastructure systems

Traffic Forwarding for Network and Security Operations
Optimize the delivery of your network traffic to your monitoring and security tools, enabling:
• Eliminating contention for network data access
• Targeting specific flows to specific tools with network and application awareness
• Sharing traffic load across multiple tool instances, even for encapsulated traffic
• Selectively aggregating and replicating traffic at line rate

Optimize the content of the delivered traffic, enabling:
• Removing duplicate packets
• Feeding non-packet-based tools with flow and/or rich metadata
• Removing unwanted/undesirable protocol headers and/or payload content
• Obfuscating private or sensitive data

Reuse existing tools for current and new network links; scale network coverage and tool deployment.

The GigaVUE HC Series Models
... across medium-sized branch offices. GigaVUE-HC3: a 3RU form factor offers traffic intelligence at scale to meet the demands of large enterprises and service providers.

Key Features and Benefits

Network and Traffic Access
• Three modular chassis models with port speed and media options: 100Mb, 1000Mb, and 10Gb copper; 1Gb, 10Gb, 25Gb, 40Gb, and 100Gb multimode and single-mode fiber
• Compatible with SFP, SFP+, QSFP+, and QSFP28 MSA-compliant transceivers, as offered by Gigamon
• Scale from low- to high-density systems: cost-effective for only what is needed; increased flexibility
• Port configurability: full flexibility in selecting ports as ingress, intermediate, interconnect, or egress functions; unidirectional and bidirectional ports; tunneling (e.g. L2GRE, ERSPAN, TCP, VXLAN)
• Enable agile response to changes in monitoring infrastructure and monitoring needs
• Facilitate passive out-of-band and active inline monitoring via the same HC node
• Allow virtualized traffic to be accessed, or backhauled between locations, over an IP network, with reliable delivery (using TCP)

Core Intelligence
Flow Mapping®:
• Aggregation and replication: selective any-to-any port mapping
• Filtering: Layer 2 to 7 rules; ingress aggregate and egress
• Load balancing: Layer 2 to 4 hashing criteria; session stickiness
Benefits: access traffic from any link to any tool, even for different link rates; remove issues with asymmetric routing and link aggregation (LAG); optimize tools by forwarding only traffic of interest or dropping traffic not of interest; spread load across multiple tool instances of the same type.

Inline Bypass:
• Optional physical bypass for 100M/1G/10G/25G/40G/100G link rates and copper/fiber (multimode, single-mode) media types
• Aggregate multiple network segments
• Filter and load-balance towards inline applications/tools
• Easily configure simple and complex tool chains
• Customizable heartbeat packets for positive (through-path) and negative (block) tests
Benefits: remove multiple points of network failure; provide full visibility for each inline security tool type (e.g. IPS, WAF); easily deploy security-in-layers solutions, for both active and passive scenarios; seamlessly migrate tools from passive out-of-band to active inline mode; reduce the likelihood of network impact due to a malfunctioning active inline tool.

VLAN port tagging: pinpoint the source of traffic.

Clustering and Fabric Mapping:
• Enable resilient traffic forwarding
• Manage up to 32 nodes in a cluster as a single virtual node
• Enact end-to-end Flow Mapping across clusters, scaling to hundreds of nodes

Traffic Intelligence: adaptive packet filtering, advanced load balancing, deduplication, header stripping, masking, NetFlow generation, slicing, SSL/TLS decryption, advanced tunneling, advanced flow slicing. Refer to the GigaSMART® datasheet.
Application Intelligence: Application Filtering, Application Metadata, Video Data Records. Refer to the GigaSMART datasheet.
Subscriber Intelligence: flow sampling, GTP correlation, SIP/RTP correlation, 5G & CUPS correlation, subscriber-aware metadata*. Refer to the GigaSMART datasheet.
Network Detection: ThreatINSIGHT Sensor. Refer to the GigaSMART datasheet.

Management
Local and remote management using:
• Command line interface (CLI) (Telnet/SSH)
• Web GUI (HTTP/HTTPS)
• XML API (HTTP/HTTPS)
• Fabric Manager (HTTP/HTTPS)
• SNMP (v1, v2, v3)
• Syslog
Benefits: easy to manage via a web GUI, or via CLI for users already familiar with Cisco; easy integration with applications using the CLI or RESTful API; supports the SDN paradigm; manage and orchestrate from a single pane of glass; alerts can be received by any Syslog server or SNMP manager.

User access:
• Role-based Access Control (RBAC): multi-tenant user access; flexible user/role-defined privileges, screen views, and access
• AAA security with local and remote authentication (LDAP, RADIUS, TACACS+)
Benefits: adhere to corporate IT security policies; meet corporate IT authentication policy.

System
Field-replaceable hardware: port modules, AC and DC power supplies, fan trays, control card. Achieve five-nines uptime; without needing to replace or remove the chassis, you can scale as needs change and upgrade features and capabilities.
Metrics and statistics: management CPU resources, switching ASIC resources, port utilization, flow map throughput. Facilitate troubleshooting; guide capacity planning and traffic-forwarding rules.
* Available with the 5.11 release

Chassis Maximum Capabilities

ATTRIBUTE | GIGAVUE-HC1 | GIGAVUE-HC2 | GIGAVUE-HC3
Size | Small (1RU) | Medium (2RU) | Large (3RU)
Throughput | 604 Gbps | 960 Gbps | 6.4 Tbps
No. of port modules | 2 | 4 | 4
No. of GigaSMART modules | 3 (2 front, 1 built-in) | 5 (4 front, 1 rear) | 4 (front)
No. of GigaSMART engines | 3 | 5 | 8 (2 per module)
Ports, 10/100Mb | 20 (4 built-in UTP) | 72*** | -
Ports, 1Gb | 40 (12 built-in SFP+ and 4 built-in UTP) | 96 | -
Ports, 10Gb | 36 (12 built-in SFP+) | 96 | 128
Ports, 25Gb | - | - | 128
Ports, 40Gb | 8 | 24 | 64
Ports, 100Gb | - | 8‡ | 64
Physical bypass options | 10/100/1000Mb copper; 1/10Gb SX/SR fiber | 10/100/1000Mb copper; 1/10Gb SX/SR fiber; 1/10Gb LX/LR fiber; 40Gb SR4 fiber | 40/100Gb SR4 fiber; 10/25Gb SR fiber (using breakout); 40/100Gb LR4 fiber
*** Using module with SKU TAP-HCO-G100C0

Field-Swappable Port and GigaSMART Modules

GigaVUE-HC1 modules:
• PRT-HC1-X12: 12 x 1Gb/10Gb (SFP/SFP+) ports
• PRT-HC1-Q04X08*: 4 x 40Gb (QSFP+) and 8 x 1Gb/10Gb (SFP/SFP+) ports; QSFP+ port modes: 1 x 40Gb or 4 x 10Gb
• BPS-HC1-D25A24: 1Gb/10Gb bypass combo module; 2 pairs of SX/SR 50/125 μm bypass + 4 x 10Gb/1Gb (SFP+/SFP) ports; 100Mb/1000Mb embedded
• TAP-HC1-G10040: TAP and bypass module; 4 pairs of copper (RJ-45) TAP or bypass; each pair can be individually configured as TAP or bypass; 100Mb/1000Mb embedded
• SMT-HC1-S: third-generation GigaSMART front module with one GigaSMART engine and no front ports; refer to the GigaSMART datasheet for more information
* Available with the 5.11 release

GigaVUE-HC2 modules:
• PRT-HC0-X24: 24 x 10Gb/1Gb (SFP+/SFP) ports module
• PRT-HC0-Q06: 6 x 40Gb (QSFP+) ports module
• PRT-HC0-C02: 2 x 100Gb (QSFP28) ports module; supports 100GBASE-SR4; requires Control Card Version 2
• BPS-HC0-Q25A28: combo module; 2 pairs of 40Gb SR4 bypass + 8 x 10Gb/1Gb (SFP+/SFP) ports; 40Gb bypass and 1Gb/10Gb
• BPS-HC0-D25A4G: combo module; 4 pairs of SX/SR 50/125 μm bypass + 16 x 10Gb/1Gb (SFP+/SFP) ports; 1Gb/10Gb bypass
• BPS-HC0-D35C4G: combo module; 4 pairs of LX/LR single-mode bypass + 16 x 10Gb/1Gb (SFP+/SFP) ports; 1Gb/10Gb bypass
• TAP-HC0-D25AC0: TAP module; 12 x SX/SR 50/125 μm TAP pairs; 50/50 split ratio; 1Gb/10Gb embedded
• TAP-HC0-D25BC0: TAP module; 12 x SX/SR 62.5/125 μm TAP pairs; 50/50 split ratio; 1Gb/10Gb embedded
• TAP-HC0-D35CC0: TAP module; 12 x LX/LR TAP pairs; 50/50 split ratio; 1Gb/10Gb embedded
• TAP-HC0-G100C0: TAP and bypass module; 12 x copper (RJ-45) TAP or bypass pairs; each pair can be individually configured as TAP or bypass; 100Mb/1000Mb embedded
• SMT-HC0-Q02X08¹: second-generation GigaSMART front module with one GigaSMART engine; 2 x 40Gb (QSFP+) and 8 x 10Gb/1Gb (SFP+/SFP) ports
• SMT-HC0-X16: first-generation GigaSMART front module with one GigaSMART engine; 16 x 10Gb/1Gb (SFP+/SFP) ports
• SMT-HC0-R: first-generation GigaSMART rear module with one GigaSMART engine; no ports
¹ SMT-HC0-Q02X08 requires Control Card Version 2 (CTL-HC0-002)

GigaVUE-HC3 modules:
• PRT-HC3-C16²: 16 x 100Gb/40Gb (QSFP28/QSFP+) ports module; port modes: 1 x 100Gb/40Gb, 4 x 25Gb¹, or 4 x 10Gb¹
• PRT-HC3-C08Q08: 8 x 100Gb QSFP28 ports module; port modes: 1 x 100Gb, 2 x 40Gb, 4 x 25Gb¹,², or 4 x 10Gb¹
• PRT-HC3-X24: 24 x 25Gb²/10Gb (SFP28/SFP+) ports module; port modes: 1 x 25Gb/10Gb
• BPS-HC3-C25F2G: 100Gb/40Gb/25Gb/10Gb bypass combo module; 2 x 100Gb/40Gb SR4 bypass pairs; up to 8 x 10Gb SR bypass pairs; 16 x 25Gb²/10Gb (SFP28/SFP+) ports
• BPS-HC3-Q35C2G: 40Gb/25Gb/10Gb bypass combo module; 2 x 40Gb LR4 bypass pairs; 16 x 25Gb²/10Gb (SFP28/SFP+) ports
• BPS-HC3-C35C2G: 100Gb/40Gb/25Gb/10Gb bypass combo module; 2 x 100Gb LR4 bypass pairs; 16 x 25Gb²/10Gb (SFP28/SFP+) ports
• SMT-HC3-C05: GigaSMART front module with two GigaSMART engines; 5 x 100Gb QSFP ports; port modes: 1 x 100Gb, 1 x 40Gb, 4 x 25Gb¹,², or 4 x 10Gb¹; refer to the GigaSMART® datasheet for more information
¹ Requires an MPO-to-4xLC breakout cable or the PNL-M341 or PNL-M343 modules for the G-TAP M Series
² Requires Control Card Version 2 (CTL-HC3-002)

Physical Dimensions and Weights

• GigaVUE-HC1 base chassis (includes built-in second-generation GigaSMART engine): 1.75 in (4.5 cm) H; 17.26 in (43.85 cm) W without ears; 19.5 in (49.5 cm) D, 20.92 in (53.18 cm) with PSU handle and card ejector; 20.88 lbs (9.47 kg), 21.12 lbs (9.58 kg) with ears
• PRT-HC1-X12: 1.6 in (4.10 cm) x 4.65 in (11.8 cm) x 10.13 in (24.98 cm); 1.50 lbs (0.68 kg)
• PRT-HC1-Q04X08: 1.6 in (4.10 cm) x 4.65 in (11.8 cm) x 10.13 in (24.98 cm); 1.50 lbs (0.68 kg)
• BPS-HC1-D25A24 module: 1.6 in (4.10 cm) x 4.65 in (11.80 cm) x 10.13 in (24.98 cm); 2.2 lbs (0.99 kg)
• TAP-HC1-G10040 module: 1.6 in (4.10 cm) x 4.65 in (11.8 cm) x 10.13 in (24.98 cm); 1.50 lbs (0.68 kg)
• SMT-HC1-S: 1.6 in (4.10 cm) x 4.65 in (11.80 cm) x 10.13 in (24.98 cm); 2.54 lbs (1.15 kg)
• GigaVUE-HC2 base chassis (2RU): 3.5 in (8.9 cm) H; 17.26 in (43.85 cm) W without ears; 24.2 in (61.6 cm) D without cable management, 27.0 in (68.6 cm) with cable management; 36.8 lbs (16.7 kg)
• PRT-HC0-X24 module: 1.6 in (4.1 cm) x 8.0 in (20.3 cm) x 9.4 in (23.8 cm); 2.12 lbs (0.96 kg)
• PRT-HC0-Q06 module: 1.6 in (4.1 cm) x 8.0 in (20.3 cm) x 9.4 in (23.8 cm); 2.40 lbs (1.09 kg)
• PRT-HC0-C02 module: 1.6 in (4.1 cm) x 8.0 in (20.3 cm) x 9.4 in (23.8 cm); 2.30 lbs (1.09 kg)
• BPS-HC0-Q25A28 module: 1.6 in (4.1 cm) x 8.0 in (20.3 cm) x 10.5 in (26.7 cm); 3.14 lbs (1.42 kg)
• BPS-HC0-D25A4G module: 1.6 in (4.1 cm) x 8.0 in (20.3 cm) x 10.5 in (26.7 cm); 3.60 lbs (1.63 kg)
• BPS-HC0-D25B4G module: 1.6 in (4.1 cm) x 8.0 in (20.3 cm) x 10.5 in (26.7 cm); 3.60 lbs (1.63 kg)
• BPS-HC0-D35C4G module: 1.6 in (4.1 cm) x 8.0 in (20.3 cm) x 10.5 in (26.7 cm); 3.60 lbs (1.63 kg)
• TAP-HC0-D25AC0 module: 1.6 in (4.1 cm) x 8.0 in (20.3 cm) x 9.4 in (23.8 cm); 3.50 lbs (1.59 kg)
• TAP-HC0-D25BC0 module: 1.6 in (4.1 cm) x 8.0 in (20.3 cm) x 9.4 in (23.8 cm); 3.50 lbs (1.59 kg)
• TAP-HC0-D35CC0 module: 1.6 in (4.1 cm) x 8.0 in
(20.3cm) 9.4in (23.8cm) 3.50lbs (1.59kg)
TAP-HC0-G100C0 module: 1.6in (4.1cm) 8.0in (20.3cm) 9.4in (23.8cm) 3.20lbs (1.45kg)
* Available with 5.10 release

PRODUCT NAME HEIGHT WIDTH DEPTH WEIGHT
GigaVUE-HC2
SMT-HC0-Q02X08 module: 1.6in (4.1cm) 8.0in (20.3cm) 10.2in (26.0cm) 4.1lbs (1.86kg)
SMT-HC0-X16 module: 1.6in (4.1cm) 8.0in (20.3cm) 10.2in (26.0cm) 4.40lbs (2.00kg)
SMT-HC0-R module: 1.6in (4.1cm) 9.3in (23.5cm) 13.2in (33.6cm) 4.40lbs (2.00kg)
GigaVUE-HC3
GigaVUE-HC3 base chassis 3RU: 5.25in (13.34cm); 17.26in (43.85cm) without ears; 29.1in (74.0cm) without cable management, 33.5in (85.0cm) with cable management; 88.0lbs (40.00kg)
PRT-HC3-C16 module: 1.9in (4.7cm) 8.5in (21.7cm) 16.1in (41.0cm) 6.00lbs (2.72kg)
PRT-HC3-C08Q08 module: 1.9in (4.7cm) 8.5in (21.7cm) 16.1in (41.0cm) 2.40lbs (1.09kg)
PRT-HC3-X24 module: 1.9in (4.7cm) 8.5in (21.7cm) 16.1in (41.0cm) 2.12lbs (0.96kg)
BPS-HC3-C25F2G module: 1.9in (4.7cm) 8.5in (21.7cm) 16.1in (41.0cm) 6.40lbs (2.90kg)
BPS-HC3-Q35C2G module: 1.9in (4.7cm) 8.5in (21.7cm) 16.1in (41.0cm) 6.05lbs (2.74kg)
BPS-HC3-C35C2G module: 1.9in (4.7cm) 8.5in (21.7cm) 16.1in (41.0cm) 6.05lbs (2.74kg)
SMT-HC3-C05 module: 1.9in (4.7cm) 8.5in (21.7cm) 16.1in (41.0cm) 4.40lbs (2.00kg)

Power Specifications
PRODUCT LINE COMPONENT SPECIFICATIONS
GigaVUE-HC1
Power configurations: 1 + 1 power: 2 power supply modules
Max power consumption/heat output: 212 Watts; 722.9 BTU/hr (fully populated system with all ports at 100 percent traffic load)
AC power supply modules: Min/max voltage: 100V–127V AC, 200V–240V AC, 50/60Hz; Max PSM input current: 5.8A @ 100V, 2.9A @ 200V
DC power supply modules: Min/max voltage: -40.5V to -60V DC; Max PSM input current: **********
GigaVUE-HC2
Power configurations: 1 + 1 power: 2 power supply modules
Max power consumption/heat output: 960 Watts; 3276 BTU/hr (Control Card Versions 1 and 2); fully populated system with all ports at 100 percent traffic load
AC power supply modules: Min/max voltage: 100V–240V AC, 50/60Hz, 8.4A @ 200V; Max PSM input current: 14.0A @ 100V
DC power supply modules: Min/max voltage: -36V to -72V DC; Max PSM input current: 35A @ -36V
GigaVUE-HC3
Power configurations: 1 + 1 power: 2 power supply modules; 2 + 2 power: 4 power supply modules
Max power consumption/heat output: 1850 Watts; 6312.4 BTU/hr (Control Card Version 1); 2000 Watts; 6824.3 BTU/hr (Control Card Version 2); fully populated system with all ports at 100 percent traffic load
AC power supply modules: Min/max voltage: 100V–115V AC, 200V–240V AC, 50/60Hz; Max PSM input current: 14A @ 100V, 10A @ 200V
DC power supply modules: Min/max voltage: -40V to -72V DC; Max PSM input current: 48A @ -40V

Environmental Specifications
ASPECT SPECIFICATIONS
Operating temperature: 32°F to 104°F (0°C to 40°C)
Operating relative humidity: 20–80 percent, non-condensing
Recommended storage temperature: -4°F to 158°F (-20°C to 70°C)
Recommended storage relative humidity: 15–85 percent, non-condensing
Altitude: Systems: up to 13,000 ft (3.96km); Power Supply Modules: up to 10,000 ft (3.05km)

Standards and Protocols
TYPE STANDARDS
Protocols: IEEE 802.3-2012, IEEE 802.1Q VLAN, IEEE 802.3 10BASE-T, IEEE 802.3u 100BASE-TX, IEEE 802.3ab 1000BASE-T, IEEE 802.3z 1000BASE-X, IEEE 802.3ae 10000BASE-X, IEEE 802.3ba, RFC 783 TFTP, RFC 791 IP, RFC 793 TCP, RFC 826 ARP, RFC 854 Telnet, RFC 768 UDP, RFC 792 ICMP, SNMP v1/v2c & v3, RFC 2131 DHCP client, RFC 1492 TACACS+, and support for IPv4 and IPv6

Compliance
ASPECT GIGAVUE STANDARD
Safety HC1: UL 60950-1; CSA C22.2 EN 60950-1; IEC-60950-1:2005 (2nd Edition) + Am 1:2009 + Am 2:2013
Safety HC2: UL 60950-1; CSA C22.2 EN 60950-1; IEC-60950-1
Safety HC3: UL 60950-1, 2nd Edition; CAN/CSA C22.2 No. 
60950-1-07, 2nd Edition;EN 60950-1:2006/ A11:2009/ A1:2010/A12:2011/A2:2013; IEC 60950-1:2005(2nd Edition) + Am 1:2009 + Am 2:2013Emissions HC1FCC Part 15, Class A; VCCI Class A; EN55022/CISPR-22 Class A; Australia/New ZealandAS/NZS CISPR-22 Class A: RCM; EU: CE Mark EN 55022 Class A, China CCC, TaiwanBSMI, Korea KCC, Russia EACHC2FCC Part 15, Class A; VCCI Class A; EN55022/CISPR-22 Class A; Australia/New ZealandAS/NZS CISPR-22 Class A; CE Mark EN 55022 Class A, China CCC, Taiwan BSMI,Korea KCC, Russia EACHC3FCC Part 15, Class A; VCCI Class A; EN55022/CISPR-22 Class A; Australia/New ZealandAS/NZS CISPR-22 Class A; EU:CE Mark EN 55022 Class A; Taiwan BSMI, Korea KCC,Russia EACImmunity HC1ETSI EN300 386 V1.3.2, EN61000-4-2, EN 61000-4-3, 61000-4-4, EN 61000-4-5, EN61000-4-6, EN 61000-3-2HC2HC3ETSI EN300 386 V1.6.1:2012; EN61000-3-2; EN61000-3-3; EN61000-4-2; EN61000-4-3; EN61000-4-4; EN61000-4-5; EN61000-4-6; EN61000-4-8; EN61000-4-11 Environment HC1RoHS 6: EU directive 2002/95/ECHC2HC3EU RoHS 6, EU Directive 2011/65/EU; 2006/1907/EC (REACH); ISTA 2ANEBS HC1Level 3 (GVS-HC102/2)HC2Level 3 (GVS-HC2A1/2)HC3Level 3 (GVS-HC301/2)Security HC1FIPS 140-2HC2FIPS 140-2, UC APL, Common CriteriaHC3FIPS 140-2PRODUCT CATEGORY PART NUMBER DESCRIPTIONBase Hardware GVS-HC101GigaVUE-HC1 node, 12 1G/10G cages, 4 10/100/1000M copper, fan tray,2 power supplies, AC powerGVS-HC102 GigaVUE-HC1 node, 12 1G/10G cages, 4 10/100/1000M copper, fan tray,2 power supplies, DC powerBPS-HC1-D25A24Bypass Combo Module, GigaVUE-HC1, 2 SX/SR 50/125 BPS pairs,4 10G cagesTAP-HC1-G10040TAP and Bypass module, GigaVUE-HC1, 10/100/1000M copper,4 TAPs or BPC pairsPRT-HC1-X12Port Module, GigaVUE-HC1, 12x10G/1G SFP+/SFPPRT-HC1-Q04X08*Port Module, GigaVUE-HC1, 4x40G QSFP+ and 8x10G SFP+ cagesSMT-HC1-S GigaSMART, HC Series, Front Module w/o ports (includesSlicing, Masking, Source Port and GigaVUE Tunneling De-Encapsulation SW) Licenses-Refer to the GigaSMART® datasheet found here for more information.Fan and Power 
Supplies FAN-TAXQ0GigaVUE-TA10, TA40, HC1 fan assembly, each (2 required on TA10,3 on TA40 and HC1)PWR-TAXQ1Power Supply Module, GigaVUE-TA10, TA40, or HC1, AC, each PWR-TAXQ2Power Supply Module, GigaVUE-TA10, TA40, or HC1 DC, each* Available with 5.11 releasePRODUCT CATEGORY PART NUMBER DESCRIPTIONBase Hardware GVS-HC2A1GigaVUE-HC2 base unit with chassis, Control Card Version 2, 1 fan Tray, CLI,2 power supplies, AC powerGVS-HC2A2GigaVUE-HC2 base unit with chassis, Control Card Version 2, 1 fan tray, CLI,2 power supplies, DC powerCTL-HC0-002Control Card Version 2, GigaVUE-HC2PRT-HC0-X24Port Module, HC Series, 24x10GPRT-HC0-Q06Port Module, HC Series, 6x40GPRT-HC0-C02Port Module, HC Series, 2x100G QSFP28 cages. Requires Control CardVersion 2BPS-HC0-D25A4G Bypass Combo Module, HC Series, 4 SX/SR 50/125 BPS pairs, 16 10G cagesBPS-HC0-D25B4G Bypass Combo Module, HC Series, 4 SX/SR 62.5/125 BPS pairs,16 10G cagesBPS-HC0-D35C4G Bypass Combo Module, HC Series, 4 LX/LR BPS pairs, 16 10Gb cagesBPS-HC0-Q25A28Bypass Combo Module, GigaVUE-HC2, 2 40G SR4 BPS pairs, 8 10G cagesTAP-HC0-D25AC0TAP module, HC Series, SX/SR Internal TAP module 50/125, 12 TAPsTAP-HC0-D25BC0TAP module, HC Series, SX/SR Internal TAP module 62.5/125, 12 TAPsTAP-HC0-D35CC0TAP module, HC Series, LX/LR Internal TAP module, 12 TAPsTAP-HC0-G100C0TAP and Bypass Module, HC Series, copper, 12 TAP or BPS pairsSMT-HC0-Q02X08GigaSMART, HC Series, Front Module, 2 40Gb, 8 10Gb cages (includesSlicing, Masking, Source Port and GigaVUE Tunneling De-Encapsulation SW) SMT-HC0-R GigaSMART, HC Series, Rear Module (includes Slicing, Masking, Source Portand GigaVUE Tunneling De-Encapsulation SW)SMT-HC0-X16GigaSMART, HC Series, Front Module, 16 10Gb cages (includes Slicing,Masking, Source Port and GigaVUE Tunneling De-Encapsulation SW) Licenses-Refer to the GigaSMART® datasheet found here for more information.Fan and Power Supplies FAN-HC200GigaVUE-HC2 fan assembly, each (1 required) PWR-HC201Power supply module, GigaVUE-HC2, 
AC PWR-HC202Power supply module, GigaVUE-HC2, DC© 2019-2021 Gigamon. All rights reserved. Gigamon and the Gigamon logo are trademarks of Gigamon in the United States and/or other countries. Gigamon trademarks can be found at /legal-trademarks . All other trademarks are the trademarks of their respective owners. Gigamon reserves the right to change, modify, transfer, or otherwise revise this publication without notice.Worldwide Headquarters 3300 Olcott Street, Santa Clara, CA 95054 USA +1 (408) 831-4000 | PRODUCT CATEGORY PART NUMBERDESCRIPTION Base HardwareGVS-HC301GigaVUE-HC3 base unit with chassis, Control Card, 5 fan modules, CLI, 2 power supplies, AC power GVS-HC302GigaVUE-HC3 base unit with chassis, Control Card, 5 fan modules, CLI, 2 power supplies, DC power GVS-HC3A1GigaVUE-HC3 base unit with chassis, Control Card v2, 5 fan modules, CLI, 2 power supplies, AC power GVS-HC3A2GigaVUE-HC3 base unit with chassis, Control Card v2, 5 fan modules, CLI, 2 power supplies, DC power CTL-HC3-002Control Card Version 2, GigaVUE-HC3, each PRT-HC3-C16Port Module, GigaVUE-HC3, 16x100G QSFP28 cages PRT-HC3-C08Q08Port Module, GigaVUE-HC3, 8x100G QSFP28 cages and 8x40G QSFP+ cages PRT-HC3-X24Port Module, GigaVUE-HC3, 24x10G BPS-HC3-C25F2GBypass Combo Module, GigaVUE-HC3, 2 40/100Gb SR4 BPS pairs, 16 10G cages BPS-HC3-Q35C2GBypass Combo Module, GigaVUE-HC3, 2 40Gb LR BPS pairs, 16 10G cages BPS-HC3-C35C2GBypass Combo Module, GigaVUE-HC3, 2 100Gb LR BPS pairs, 16 10G cages SMT-HC3-C05GigaSMART, GigaVUE-HC3, 5x100G QSFP28 cages(includes Slicing, Masking, Source Port and GigaVUE Tunneling De-Encapsulation SW)Licenses -Refer to the GigaSMART® datasheet found here for more information.Fan and Power Supplies FAN-HC300GigaVUE-HC3 fan assembly, each (5 required)PWR-HC301Power supply module, GigaVUE-HC3, AC (each)PWR-HC302Power supply module, GigaVUE-HC3, DC (each)Support and ServicesGigamon offers a range of support and maintenance services. 
For details regarding the Gigamon Limited Warranty and our product support and software maintenance programs, visit /support-and-services/overview-and-benefits.

For More Information
For more information about the Gigamon Platform or to contact your local representative, please visit: .
Learn more or get a demo: /contact-us.

Sparse Representation Methods for Stacked Autoencoders (III)

An autoencoder is an unsupervised neural network model that extracts features by learning an internal representation of the data. A stacked autoencoder is a deep network built by stacking several autoencoders. In practice, because a stacked autoencoder learns increasingly abstract feature representations, it can be used for feature extraction, dimensionality reduction, data generation, and other tasks.

This article examines sparse representation methods for stacked autoencoders and why they matter in deep learning.

Sparse representation means that only a small number of units are activated during feature extraction. Introducing sparsity into a stacked autoencoder lets the network learn more robust and meaningful features. Sparsity reduces redundancy among features and improves the network's generalization, so it adapts better to unseen data. It also lowers the model's computational cost and speeds up training. For these reasons, sparse representation plays an important role in deep learning.

There are several ways to obtain a sparse representation in a stacked autoencoder; one of the most common is the sparse autoencoder. A sparse autoencoder learns sparse representations by imposing a sparsity constraint: during training, each hidden unit is pushed toward a low average activation, so that only a few hidden units fire for any given input. This improves the robustness and generalization of the learned features, reduces their redundancy, and strengthens their expressive power.
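As a concrete illustration of the hidden-unit sparsity constraint described above, here is a minimal NumPy sketch of the KL-divergence penalty commonly used in sparse autoencoders. The function name and the target activation `rho` are our own illustrative choices, not from the article:

```python
import numpy as np

def kl_sparsity_penalty(activations, rho=0.05):
    """KL-divergence sparsity penalty for a sparse autoencoder.

    `activations` is a (batch, hidden) array of sigmoid outputs in (0, 1);
    `rho` is the target average activation (the desired sparsity level).
    Returns a scalar that is ~0 when each hidden unit's mean activation
    equals `rho` and grows as units become more active than the target.
    """
    rho_hat = np.clip(activations.mean(axis=0), 1e-7, 1 - 1e-7)
    return float(np.sum(rho * np.log(rho / rho_hat)
                        + (1 - rho) * np.log((1 - rho) / (1 - rho_hat))))

# A batch whose mean activation already matches rho incurs ~zero penalty;
# a batch of highly active units is penalized heavily.
sparse_batch = np.full((10, 4), 0.05)
dense_batch = np.full((10, 4), 0.9)
assert kl_sparsity_penalty(sparse_batch) < 1e-6
assert kl_sparsity_penalty(dense_batch) > 1.0
```

During training this penalty is added (with a weight) to the reconstruction loss, which is what drives most hidden units toward inactivity.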

Besides sparse autoencoders, a stacked autoencoder can achieve sparsity through regularization. Regularization is a standard technique that controls model complexity by adding a penalty term to the loss. In a stacked autoencoder, an L1 penalty on the hidden activations pushes many of them to exactly zero, yielding a sparse representation. Stacked autoencoders regularized this way are robust, generalize well, and are cheaper to compute and faster to train.
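A minimal sketch of the L1 route to sparsity, assuming NumPy; `soft_threshold` is the proximal operator of the L1 term and shows why an L1 penalty drives many activations exactly to zero (both helper names are illustrative):

```python
import numpy as np

def l1_penalty(hidden, lam=1e-3):
    # L1 activity penalty: lam * sum(|h|), added to the reconstruction loss.
    return lam * np.abs(hidden).sum()

def soft_threshold(h, lam):
    # Proximal step for the L1 term: shrinks every activation toward 0
    # and zeroes out the small ones, producing exact sparsity.
    return np.sign(h) * np.maximum(np.abs(h) - lam, 0.0)

h = np.array([0.8, -0.02, 0.01, -0.5])
shrunk = soft_threshold(h, 0.05)
# The two small activations are driven exactly to zero.
assert np.count_nonzero(shrunk) == 2
```

With plain gradient descent the L1 term only pushes activations near zero; the soft-thresholding view makes the "exactly zero" behavior explicit.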

A stacked autoencoder can also achieve sparse representation by introducing denoising autoencoders. A denoising autoencoder is trained by corrupting the input with noise and asking the network to reconstruct the clean input. In practice, injecting random noise reduces the model's sensitivity to the exact input values and improves the robustness of the network.
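A hedged sketch of the input-corruption step that defines a denoising autoencoder, assuming NumPy; the noise level and masking probability are arbitrary illustrative choices:

```python
import numpy as np

def corrupt(x, noise_std=0.1, mask_prob=0.3, rng=None):
    """Two common corruption schemes for denoising autoencoders:
    additive Gaussian noise plus random masking (dropout-style).
    The network is then trained to reconstruct the clean `x`
    from the corrupted output of this function."""
    if rng is None:
        rng = np.random.default_rng(0)
    noisy = x + rng.normal(0.0, noise_std, size=x.shape)
    mask = rng.random(x.shape) >= mask_prob   # keep ~70% of the inputs
    return noisy * mask

x = np.ones((4, 8))
x_tilde = corrupt(x)
assert x_tilde.shape == x.shape
assert np.count_nonzero(x_tilde == 0) > 0   # some inputs were masked out
```

The reconstruction loss is computed against the uncorrupted `x`, which is what forces the learned features to capture structure rather than copy the input.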

MIAOW_Architecture_Whitepaper

MIAOW Whitepaper: Hardware Description and Four Research Case Studies

Abstract
GPU-based general purpose computing is developing as a viable alternative to CPU-based computing in many domains. Today's tools for GPU analysis include simulators like GPGPU-Sim, Multi2Sim and Barra. While useful for modeling first-order effects, these tools do not provide a detailed view of GPU microarchitecture and physical design. Further, as GPGPU research evolves, design ideas and modifications demand detailed estimates of impact on overall area and power. Fueled by this need, we introduce MIAOW, an open source RTL implementation of the AMD Southern Islands GPGPU ISA, capable of running unmodified OpenCL-based applications. We present our design, motivated by our goals to create a realistic, flexible, OpenCL-compatible GPGPU capable of emulating a full system. We first explore if MIAOW is realistic and then use four case studies to show that MIAOW enables the following: a physical design perspective on "traditional" microarchitecture, new types of research exploration, and validation/calibration of simulator-based characterization of hardware. The findings and ideas are contributions in their own right, in addition to MIAOW's utility as a tool for others' research.

1. Introduction
There is active and widespread ongoing research on GPU architecture and, more specifically, on GPGPU architecture. Tools are necessary for such explorations. First, we compare and contrast GPU tools with CPU tools.

On the CPU side, tools span performance simulators, emulators, compilers, profiling tools, modeling tools, and more recently a multitude of RTL-level implementations of microprocessors - these include OpenSPARC [39], OpenRISC [38], Illinois Verilog Model [56], LEON [18], and more recently FabScalar [11] and PERSim [7]. In other efforts, clean-slate CPU designs have been built to demonstrate research ideas.
These RTL-level implementations allow detailed microarchitecture exploration, understanding and quantifying the effects of area and power, technology-driven studies, prototype-building studies on CPUs, exploring power-efficient design ideas that span CAD and microarchitecture, understanding the effects of transient faults on hardware structures, analyzing di/dt noise, and hardware reliability analysis. Some specific example research ideas include the following: Argus [30] showed - with a prototype implementation on OpenRISC - how to build lightweight fault detectors; Blueshift [19] and power-balanced pipelines [46] consider the OpenRISC and OpenSPARC pipelines for novel CAD/microarchitecture work.

On the GPU side, there are a number of performance simulators [5, 2, 12, 28], emulators [53, 2], compilers [29, 13, 54], ... GPUs; ii) Flexible: it should be flexible to accommodate research studies of various types, the exploration of forward-looking ideas, and form an end-to-end open source tool; iii) Software-compatible: it should use standard and widely available software stacks like OpenCL or CUDA compilers to enable executing various applications and not be tied to in-house compiler technologies and languages.

... portion of the CU denotes the register file and SRAM storage, as indicated in Figure 1(b). First, observe that in all three designs, the register files need some special treatment besides writing Verilog RTL. A full ASIC design results in reduced flexibility, a long design cycle and high cost, and makes it a poor research platform, since memory controller IP and hard macros for SRAM and register files may not be redistributable. Synthesizing for FPGA sounds attractive, but there are several resource constraints that must be accommodated.

... investigate case studies along the three perspectives. Section 8 concludes.

¹ MIAOW was not designed to be a replica of existing commercial GPGPUs. Building a model that is an exact match of an industry implementation requires reverse engineering of low-level design choices and hence was not our goal. The aim when
comparing MIAOW to commercial designs was to show that our design is reasonable and that the quantitative results are in a similar range. We are not quantifying accuracy since we are defining a new microarchitecture and thus there is no reference to compare to. Instead we compare to a nearest neighbor to show trends are similar.

Table 2: Case studies summary
Direction: Traditional µarch | Research idea: Thread-block compaction | MIAOW-enabled findings: implemented TBC in RTL; significant design complexity; increase in critical path length
Direction: New directions | Research idea: Circuit-failure prediction (Aged-SDMR) | MIAOW-enabled findings: implemented entirely in µarch; idea works elegantly in GPUs; small area and power overheads
Direction: New directions | Research idea: Timing speculation (TS) | MIAOW-enabled findings: quantified TS error-rate on GPU; TS framework for future studies
Direction: Validation of simulator studies | Research idea: Transient fault injection | MIAOW-enabled findings: RTL-level fault injection; more gray area than CPUs (due to large RegFile); more silent structures

The authors have no affiliation with AMD or GPU manufacturers. All information about AMD products used and described is either publicly available (and cited) or reverse-engineered by the authors from public documents.

2. MIAOW Architecture
This section describes MIAOW's ISA, processor organization, microarchitecture of compute units and pipeline organization, and provides a discussion of design choices.

2.1. ISA
MIAOW implements a subset of the Southern Islands ISA, which we summarize below. The architecture state and registers defined by MIAOW's ISA include the program counter, execute mask, status registers, mode register, general purpose registers (scalar s0-s103 and vector v0-v255), LDS, 32-bit memory descriptor, scalar condition codes and vector condition codes. Program control is defined using predication and branch instructions. The instruction encoding is of variable length, having both 32-bit and 64-bit instructions. Scalar instructions (both 32-bit and 64-bit) are organized in 5 formats [SOPC, SOPK, SOP1, SOP2, SOPP]. Vector instructions come in 4 formats, of which three [VOP1, VOP2, VOPC] use 32-bit instructions and one [VOP3] uses 64-bit instructions to address 3 operands. Scalar memory reads (SMRD) are 32-bit instructions involved only in memory read operations and use 2 formats [LOAD, BUFFER_LOAD]. Vector memory instructions use 2 formats [MUBUF, MTBUF], both being 64 bits wide. Data share operations are involved in reading and writing to the local data share (LDS) and global data share (GDS). Four commonly used instruction encodings are shown in Table 4. Two memory addressing modes are supported - base+offset and base+register.

Of a total of over 400 instructions in SI, MIAOW's instruction set is a carefully chosen subset of 95 instructions, and the generic instruction set is summarized in Table 4. This subset was chosen based on benchmark profiling, the type of operations in the data path that could be practically implemented in RTL by a small design team, and the elimination of graphics-related instructions. In short, the ISA defines a processor which is a tightly integrated hybrid of an in-order core and a vector core, all fed by a single instruction supply and memory supply with massive multi-threading capability. The complete SI ISA judiciously merges decades of research and advancements within each of those designs. From a historical perspective, it combines the ideas of two classical machines: the Cray-1 vector machine [45] and the HEP multi-threaded processor [49]. The recent Maven [27] design is most closely related to MIAOW and is arguably more flexible and includes/explores a more diverse design space. From a practical standpoint of exploring GPU architecture, we feel it falls short on realism and software compatibility.

2.2. MIAOW Processor Design Overview
Figure 1 shows a high-level design of a canonical AMD Southern Islands compliant GPGPU. The system has a host CPU that assigns a kernel to the GPGPU, which is handled by the GPU's ultra-threaded dispatcher. It computes kernel assignments and schedules wavefronts to CUs, allocating wavefront slots, registers and LDS space. The CUs shown in Figure 1(b) execute
the kernels and are organized as scalar ALUs, vector ALUs, a load-store unit, and an internal scratch pad memory (LDS). The CUs have access to the device memory through the memory controller. There are L1 caches for both scalar data accesses and instructions, and a unified L2 cache. The MIAOW GPGPU adheres to this design and consists of a simple dispatcher, a configurable number of compute units, a memory controller, OCN, and a cached memory hierarchy². MIAOW allows scheduling up to 40 wavefronts on each CU.

2.3. MIAOW Compute Unit Microarchitecture
Figure 3 shows the high-level microarchitecture of MIAOW with details of the most complex modules, and Figure 4 shows the pipeline organization. Below is a brief description of the functionality of each microarchitectural component - further details are deferred to an accompanying technical report.

Fetch (Fig. 3b) Fetch is the interface unit between the Ultra-Threaded Dispatcher and the Compute Unit. When a wavefront is scheduled on a Compute Unit, the Fetch unit receives the initial PC value, the range of registers and local memory which it can use, and a unique identifier for that wavefront.
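The dispatcher's capacity bookkeeping (an SI wavefront is fixed at 64 work-items, and a MIAOW CU can host at most 40 resident wavefronts) reduces to simple arithmetic; a sketch with our own helper names:

```python
# Capacity math for mapping a workgroup onto a MIAOW compute unit:
# a wavefront groups 64 work-items, a workgroup maps to a single CU,
# and each CU can host at most 40 resident wavefronts.
WAVEFRONT_SIZE = 64
MAX_WAVEFRONTS_PER_CU = 40

def wavefronts_per_workgroup(work_items):
    # Ceiling division: a partially filled wavefront still occupies a slot.
    return -(-work_items // WAVEFRONT_SIZE)

def fits_on_one_cu(work_items):
    return wavefronts_per_workgroup(work_items) <= MAX_WAVEFRONTS_PER_CU

assert wavefronts_per_workgroup(64) == 1
assert wavefronts_per_workgroup(65) == 2
assert fits_on_one_cu(64 * 40) and not fits_on_one_cu(64 * 40 + 1)
```

This is only back-of-the-envelope occupancy math; the real dispatcher also accounts for register and LDS allocation when placing wavefronts.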
The same identifier is used to inform the Dispatcher when execution of the wavefront is completed. It also keeps track of the current PC for all executing wavefronts.

Wavepool (Fig. 3b) The wavepool unit serves as an instruction queue for all fetched instructions. Up to 40 wavefronts - supported by 40 independent queues - can be resident in the compute unit at any given time. The wavepool works closely with the fetch unit and the issue unit to keep instructions flowing through the compute unit.

Decode This unit handles instruction decoding. It also collates the two 32-bit halves of 64-bit instructions. The Decode Unit decides which unit will execute the instruction based on the instruction type, and also performs the translation of logical register addresses to physical addresses.

² The reference design includes a 64KB GDS, which we omitted in our design since it is rarely used in performance-targeted benchmarks.

Table 3: Definition of Southern Islands ISA terms and correspondence to NVIDIA/CUDA terminology
Compute Unit (CU) / SM: A compute unit is the basic unit of computation and contains computation resources, architectural storage resources (registers), and local memory.
Workitem / Thread: The basic unit of computation. It typically represents one input data point. Sometimes referred to as a 'thread' or a 'vector lane'.
Wavefront / Warp: A collection of 64 work-items grouped for efficient processing on the compute unit. Each wavefront shares a single program counter.
Workgroup / Thread-block: A collection of work-items working together, capable of sharing data and synchronizing with each other. Can comprise more than one wavefront, but is mapped to a single CU.
Local data store (LDS) / Shared memory: Memory space that enables low-latency communication between work-items within a workgroup, including between work-items in a wavefront. Size: 32KB limit per workgroup.
Global data share (GDS) / Global memory: Storage used for sharing data across multiple workgroups. Size: 64KB.
Device memory / Device memory: Off-chip memory provided by DRAM, possibly cached in other on-chip storage.

... which uses evaluation content in Section 4. In short, our design choices lead to a realistic and balanced design.

Fetch bandwidth (1) We optimized the design assuming instruction cache hits and single instruction fetch. In contrast, the GCN specification has a fetch bandwidth on the order of 16 or 32 instructions per fetch, presumably matching a cache line. It includes an additional buffer between fetch and wavepool to buffer the multiple fetched instructions for each wavefront. MIAOW's design can be changed easily by changing the interface between the Fetch module and Instruction memory.

Wavepool slots (6) Based on the back-of-the-envelope analysis of load balance, we decided on 6 wavepool slots. Our design evaluations show that all 6 slots of the wavepool are filled 50% of the time, suggesting that this is a reasonable and balanced estimate considering our fetch bandwidth. We expect the GCN design has many more slots to accommodate the wider fetch. The number of queue slots is parameterized and can be easily changed. Since this pipeline stage has smaller area, it has less impact on area and power.

Issue bandwidth (1) We designed this to match the fetch bandwidth and provide a balanced machine, as confirmed in our evaluations. Increasing the number of instructions issued per cycle would require changes to both the issue stage and the register read stage, increasing register read complexity. Compared to our single-issue width, GCN's documentation suggests an issue bandwidth of 5. For GCN this seems an unbalanced design because it implies issuing 4 vector and 1 scalar instruction every cycle, while each wavefront is generally composed of 64 threads and the vector ALU is 16 wide. We suspect the actual issue width for GCN is lower.

# of integer & floating point functional units (4, 4) We
incorporate four integer and four floating point vector functional units to match industrial designs like the GCN, and the high utilization by Rodinia benchmarks indicates the number is justified. These values are parameterizable in the top-level module, and they are major contributors to area and power.

# of register ports (1, 5) We use two register file designs. The first design is a single-ported SRAM-based register file generated using Synopsys Design Compiler, which is heavily banked to reduce contention. In simulations, we observed contention on less than 1% of the accesses, and hence we are using a behavioral module. This decision will result in a model with a small under-estimation of area and power and an over-estimation of performance. This design, however, is likely to be similar to GCN, and we report the power/area/performance results based on this register file. Since it includes proprietary information and the configuration cannot be distributed, we have a second version - a flip-flop based register file design which has five ports. While we have explored these two register file designs, many register compilers, hard macros, and modeling tools like CACTI are available, providing a spectrum of accuracy and fidelity for MIAOW's users. Researchers can easily study various configurations [4] by swapping out our module.

# of slots in Writeback Queue per functional unit (1) To simplify implementation we used one writeback queue slot, which proved to be sufficient in design evaluation. The GCN design indicates a queuing mechanism to arbitrate access to a banked register file. Our design choice here probably impacts realism significantly. The number of writeback queue slots is parameterized and thus provides flexibility. The area and power overhead of each slot is negligible.

Types of functional units GCN and other industry GPUs have more specialized FUs to support graphics computations.
This choice restricts MIAOW’s usefulness to model graph-ics workloads.It has some impact on realism andflexibility depending on the workloads studied.However this aspect is extendable by creating new datapath modules.3.ImplementationIn this section wefirst describe MIAOW’s hybrid implementa-tion strategy of using synthesizable RTL and behavioral mod-els and the tradeoffs introduced.We then briefly describe our verification strategy,physical characteristics of the MIAOW prototype,and a quantitative characterization of the prototype.3.1.Implementation summaryFigure2(c)shows our implementation denoting components implemented in synthesizable RTL vs.PLI or C/C++models. Compute Unit,Ultra-threaded dispatcher As described in AMD’s specification for SI implementations,“the heart of GCN is the new Compute Unit(CU)”and so we focus our attention to the CU which is implemented in synthesizable Verilog RTL.There are two versions of the ultra threaded dis-patcher,a synthesizable RTL module and a C/C++model.The C/C++model can be used in simulations where dispatcher area and power consumption are not relevant,saving simulation time and easing the development process.The RTL design can be used to evaluate complexity,area and power of different scheduling policies.OCN,L2-cache,Memory,Memory Controller Simpler PLI models are used for the implementation of OCN and mem-ory controller.The OCN is modeled as a cross-bar between CUs and memory controllers.To provideflexibility we stick to a behavioral memory system model,which includes device memory(fixed delay),instruction buffer and LDS.This mem-ory model handles coalescing by servicing diverging memory requests.We model a simple and configurable cache which is non-blocking(FIFO based simple MSHR design),set asso-ciative and write back with a LRU replacement policy.The size,associativity,block size,and hit and miss latencies are programmable.A user has the option to integrate more sophis-ticated memory sub-system 
techniques [48, 20]. 3.2. Verification and Physical Design: We followed a standard verification flow of unit tests and in-house developed random-program-generator based regression tests, with architectural trace comparison against an instruction emulator. Specifically, we used Multi2Sim as our reference instruction emulator and enhanced it in various ways with bug-fixes and to handle challenges in the multithreaded nature and out-of-order retirement of wavefronts. We used the AMD OpenCL compiler and device drivers to generate binaries. Physical design was relatively straightforward using Synopsys Design Compiler for synthesis and IC Compiler for place-and-route with the Synopsys 32nm library. Based on Design Compiler synthesis, our CU design's area is 15mm2 and it consumes on average 1.1W of power across all benchmarks. We are able to synthesize the design at an acceptable clock period range of 4.5ns to 8ns, and for our study we have chosen a period in this range. Physical layout introduces challenges because of the dominant usage of SRAMs and register files, and automatic flat layout without floorplanning fails. While blackboxing these produced a layout, detailed physical design is future work. 3.3. FPGA Implementation: In addition to software emulation, MIAOW was successfully synthesized on a state-of-art very large FPGA. This variant, dubbed Neko, underwent significant modifications in order to fit the FPGA technology process. We used a Xilinx Virtex-7 XC7VX485T, which has 303,600 LUTs and 1,030 block RAMs, mounted on a VC707 evaluation board. Design: Neko is composed of a MIAOW compute unit attached to an embedded MicroBlaze softcore processor via the AXI interconnect bus. The MicroBlaze implements the ultra-threaded dispatcher in software, handles pre-staging of data into the register files, and serves as an intermediary for accessing memory (Neko does not interface directly to a memory controller). Due to FPGA size limits, Neko's compute unit has a smaller number of ALUs (one SIMD and one SIMF) than a standard MIAOW compute unit, which has four SIMD and four
SIMF units for vector integer and floating point operations respectively. The consequence of this is that while Neko can perform any operation a full compute unit can, its throughput is lower due to the fewer computational resources. Mapping the ALUs to Xilinx provided IP cores (or DSP slices) may help in fitting more onto the FPGA, as the SIMD and especially SIMF units consume a large proportion of the LUTs. This however changes the latencies of these significantly (multiplication using DSP slices is a 6 stage pipeline, while using 10 DSPs can create a 1 stage pipeline), will end up requiring modifications to the rest of the pipeline, and takes away from ASIC realism. We defer this for future work. One other difference is Neko's register file architecture. Mapping MIAOW's register files naively to flip-flops causes excessive usage and routing difficulties, especially with the vector ALU register file. Using block RAMs is not straightforward either: they only support two ports each, fewer than what the register files need. This issue was ultimately resolved by banking and double-clocking the BRAMs to meet port and latency requirements. Resource Utilization and Use Case: Table 6 presents breakdowns of resource utilization by the various modules.

Table 6: Resource utilization
Module   LUT Count  #BRAMs  |  Module    LUT Count  #BRAMs
Decode   3474       -       |  SGPR      647        8
Exec     8689       -       |  SIMD      36890      -
Fetch    22290      1       |  SIMF      55918      -
Issue    36142      -       |  VGPR      2162       128
SALU     1240       -       |  Wavepool  27833      -
Total LUT count: 195285; total #BRAMs: 137

MIAOW does not aim to be an exact match of any industry implementation. To check if quantitative results of the aforementioned metrics follow trends similar to industry GPGPU designs, we compare MIAOW with the AMD Tahiti GPU, which is also an SI GPU. In cases where the relevant data is not available for Tahiti, we use model data, simulator data, or data from NVIDIA GPUs. Table 7 summarizes the methodology and key results and shows MIAOW is realistic. For performance studies we choose six OpenCL benchmarks that are part of the Multi2Sim environment, which
we list along with three characteristics (# work groups, # wavefronts per workgroup, and # compute-cycles per work group): BinarySearch (4, 1, 289), BitonicSort (1, 512, 97496), MatrixTranspose (4, 16, 4672), PrefixSum (1, 4, 3625), Reduction (4, 1, 2150), ScanLargeArrays (2, 1, 4). MIAOW can also run four Rodinia [9] benchmarks at this time: kmeans, nw, backprop and gaussian. We use these longer benchmarks for the case studies in Section 5 onward. 3.5. Physical Design Perspective. Description: Fung et al. proposed Thread Block Compaction (TBC) [16], which belongs in a large body of work

3 Others don't run because they use instructions outside MIAOW's subset.

Area analysis
Goal ◦ Is MIAOW's total area and breakdown across modules representative of industry designs?
Method ◦ Synthesized with Synopsys 1-ported register file. ◦ For release, 5-ported flip-flop based regfile. ◦ Compare to AMD Tahiti (SI GPU) implemented at 28nm; scaled to 32nm for absolute comparisons.
Key results ◦ Area breakdown matches intuition; 30% in functional units & 54% in register files. ◦ Total area using 1-port Synopsys RegFile is 9.31mm2 compared to 6.92mm2 for a Tahiti CU. ◦ Higher area is understandable: our design is not mature, designers are not as experienced, our functional units are quite inefficient (from ), and not optimized as industry functional units would be.
Power analysis
Goal ◦ Is MIAOW's total power and breakdown across modules representative of industry designs?
Method ◦ Synopsys Power Compiler runs with a SAIF activity file generated by running benchmarks through VCS. ◦ Compared to GPU power models of an NVIDIA GPU [22]; breakdown and total power for industry GPUs are not publicly available.
Key results ◦ MIAOW breakdown: FQDS: 13.1%, RF: 16.9%, FU: 69.9%. ◦ NVIDIA breakdown: FQDS: 36.7%, RF: 26.7%, FU: 36.7%. ◦ Compared to the model, more power goes to functional units (likely because of MIAOW's inefficient FUs); FQDS and RF make roughly similar contributions in MIAOW and the model. ◦ Total power is 1.1 Watts. No comparison reference is available, but we feel this is low, likely because the Synopsys 32nm technology library is targeted
to low power design (1.05V, 300MHz typical frequency).
Performance analysis
Goal ◦ Is MIAOW's performance realistic?
Method ◦ Failed in comparing to AMD Tahiti performance using AMD performance counters (bugs in vendor drivers). ◦ Compared to a similar style NVIDIA Fermi 1-SM GPU. ◦ Performance analysis done by obtaining CPI for each class of instructions across benchmarks. ◦ Performed analysis to evaluate balance and sizing.
Key results ◦ CPI breakdown across execution units is below.

CPI      DMin  DMax  BinS  BSort  MatT  PSum  Red  SLA
Scalar   1     3     3     3      3     3     3    3
Vector   1     6     5.4   2.1    3.1   5.5   5.4  5.5
Memory   1     100   14.1  3.8    4.6   6.0   6.8  5.5
Overall  1     100   5.1   1.2    1.7   3.6   4.4  3.0
NVidia   1     2     0.5   1.9    2.1   8     4.7  7.5

◦ MIAOW is close on 3 benchmarks. ◦ On another three, MIAOW's CPI is 2× lower, the reasons for which are many: i) the instructions on the NVIDIA GPU are PTX-level and not native assembly; ii) cycle measurement itself introduces noise; and iii) microarchitectures are different, so CPIs will be different. ◦ CPIs being in a similar range shows MIAOW's realism. ◦ The # of wavepool queue slots was rarely the bottleneck: in 50% of the cycles there was at least one free slot available (with 2 available in 20% of cycles). ◦ The integer vector ALUs were all relatively fully occupied across benchmarks, while utilization of the 3rd and 4th FP vector ALUs was less than 10%. ◦ MIAOW seems to be a balanced design.
Table 7: Summary of investigations of MIAOW's realism

on warp scheduling [31, 44, 16, 43, 35, 25, 24], any of which we could have picked as a case study. TBC, in particular, aims to increase functional unit utilization on kernels with irregular control flow. The fundamental idea of TBC is that, whenever a group of wavefronts faces a branch that forces its work-items to follow divergent program paths, the hardware should dynamically reorganize them into new re-formed wavefronts that contain only those work-items following the same path. Thus, we replace the idle work-items with active ones from other wavefronts, reducing the number of idle SIMD lanes. Groups of wavefronts that hit divergent
branches are also forced to run at similar paces, reducing work-item level divergence on such kernels even further. Re-formed wavefronts are formed observing the originating lane of all the work-items: if a work-item occupies lane 0 in wavefront A, it must reoccupy the same lane 0 in re-formed wavefront B. The wavefront forming mechanism is completely local to the CU, and it happens without intervention from the ultra-threaded dispatcher. In this study we investigate the level of complexity involved in the implementation of such microarchitecture innovations in RTL. Infrastructure and Methodology: We follow the implementation methodology described in [16]. In MIAOW, the modules that needed significant modifications were: fetch, wavepool, decode, SALU, issue and the vector register file. The fetch and wavepool modules had to be adapted to support the fetching and storage of instructions from the re-formed wavefronts. We added two instructions to the decode module: fork and join, which are used in SI to explicitly indicate divergent branches. We added the PC stack (for recovery after reconvergence) and modified the wavefront formation logic in the SALU module, as it was responsible for handling branches.
Although this modification is significant, it does not have a huge impact on complexity, as it does not interfere with any other logic in the SALU apart from the branch unit. The issue and VGPR modules suffered more drastic modifications, shown in figure 6. In SI, instructions provide register addresses as an offset, with the base address being zero. When a wavefront is being dispatched to the CU, the dispatcher allocates register file address space and calculates the base vector and scalar registers. Thus, wavefronts access different register spaces on the same register file. Normally, all work-items in the wavefront access the same register but different pages of the register file, as shown in the upper-left corner of figure 6, and the register's absolute address is calculated during decode. But with TBC, this assumption does not hold anymore. In a re-formed wavefront, all the work-items may access registers with the same offset but different base values (from different originating wavefronts). This leads to modifications in the issue stage, which now has to maintain information about register occupancy by offset for each re-formed wavefront, instead of absolute global registers. In the worst case scenario, issue has to keep track of 256 registers for each re-formed wavefront, in contrast to 1024 for the entire CU in the original implementation. In figure 6, the baseline issue stage is shown in the lower-left corner, and in the lower-right are the modifications for TBC, adding a level of dereference to the busy table search. In VGPR, we now must maintain a table with the base registers of each work-item within a re-formed wavefront, and the register address is calculated for each work-item at access time. Thus, there are two major sources of complexity overheads in VGPR: the calculation and the routing of different addresses to each register page, as shown in the upper-right corner of figure 6.
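The per-lane base-register lookup that TBC adds to the VGPR can be sketched as follows; the data layout and function names are our illustration of the idea, not MIAOW's RTL interface.

```python
def baseline_addrs(wavefront_base, offset, n_lanes):
    """Baseline: one base per wavefront, so the absolute register
    address is the same for every lane and is known at decode time."""
    return [wavefront_base + offset] * n_lanes

def tbc_addrs(lane_bases, offset):
    """TBC: each lane keeps the base register of its originating
    wavefront, so addresses must be computed per lane at access time."""
    return [base + offset for base in lane_bases]
```

The extra per-lane addition and the routing of potentially different addresses to each register page are exactly the VGPR overheads the text describes.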
We had to impose some restrictions on our design due to architectural limitations: first, we disallowed scalar register file and LDS accesses during divergence, and therefore wavefront-level synchronization had to happen at the GDS. We also were not able to generate code snippets that induced the SI compiler to use fork/join instructions; therefore we used hand-written assembly resembling the benchmarks in [16]. It featured a loop with a divergent region inside, padded with vector instructions. We controlled both the number of vector instructions in the divergent region and the level of divergence. Our baseline used the post-dominator stack-based reconvergence mechanism (PDOM) [33], without any kind of wavefront formation. We compiled our tests and ran them on two versions of MIAOW: one with PDOM and the other with TBC. Quantitative results: The performance results obtained matched the results from [16]: similar performance was observed when there was no divergence, and a performance increase was seen for divergent workloads. However, our most important results came from synthesis. We observed that the modifications made to implement TBC were mostly in regions on the critical paths of the design. The implementation of TBC caused an increase of 32% in our critical path delay, from 8.00ns to 10.59ns. We also observed that the issue stage area grew from 0.43mm2 to 1.03mm2. Analysis: Our performance results confirm the ones obtained by Fung et al.; however, the RTL model enabled us to implement TBC in further detail and determine that the critical path delay increases. In particular, we observed that TBC affects the issue stage significantly, where most of the CU control state is present, dealing with major microarchitectural events. TBC increases the pressure on the issue stage, making it harder to track such events. We believe that the added complexity suggests that such a microarchitectural innovation may need further design refinements and re-pipelining, not just implementation modifications. The
goal of this case study is not to criticize the TBC work or give a final word on its feasibility. Our goal here is to show that, by having a detailed RTL model of a GPGPU, one can better evaluate the complexity of any proposed novelties. 6. New types of research exploration. 6.1. Sampling DMR on GPUs. Description: Balasubramanian et al. proposed a novel technique unifying circuit failure prediction and detection in CPUs using Virtually Aged Sampling DMR [6] (Aged-SDMR). They show that Aged-SDMR provides low design complexity, low overheads, generality (supporting various types of wearout including soft and hard breakdown) and high accuracy. The key idea was to "virtually" age a processor by reducing its voltage. This effectively slows down the gates, mimicking the effect of wearout and exposing the fault, and Sampling-DMR is used to detect the exposed fault. They show that running in epochs and sampling and virtually aging 1% of the epochs provides an effective system. Their design (shown in Figure 7) is developed in the context of multi-core CPUs and requires the following: i) operating system involvement to schedule the sampled threads, ii) some kind of system-level checkpoints (like ReVive [41], ReViveI/O [34], SafetyNet [51]) at the end of every epoch, iii) some system and microarchitecture support for avoiding incoherence between the sampled threads [50], iv) some microarchitecture support to compare the results of the two cores, and v) a subtle but important piece, gate-level support to insert clock-phase shifting logic for fast paths. Because of these issues, Aged-SDMR's ideas cannot directly be implemented for GPUs to achieve circuit failure prediction. With reliability becoming important for GPUs [10], having this capability is desirable.
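The sampled-redundancy idea described above (re-run a small fraction of the work under stressed conditions and compare outcomes) can be sketched as below; the `run` callback and the CRC-based comparison are our own illustrative assumptions, not the paper's mechanism.

```python
import zlib

def sampled_dmr(units_of_work, run, sample_rate=100):
    """Re-execute every sample_rate-th unit of work in a 'virtually
    aged' (stressed) mode and flag mismatching results as exposed
    faults. run(work, stressed) is a hypothetical callback that
    returns the externally visible results of executing the work."""
    exposed = []
    for i, work in enumerate(units_of_work):
        result = run(work, stressed=False)       # normal execution
        if i % sample_rate == 0:                 # sampled unit of work
            check = run(work, stressed=True)     # stressed re-execution
            if zlib.crc32(repr(check).encode()) != zlib.crc32(repr(result).encode()):
                exposed.append(i)
    return exposed
```

With `sample_rate=100` this mirrors the 1% sampling rate quoted in the text: one unit in every hundred is checked redundantly.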
Our Design: GPUs present both an opportunity and a problem in adapting these ideas. They do not provide system-level checkpoints, nor do they lend themselves to the notion of epochs, making (i), (ii) and (iii) hard. However, the thread-blocks (or workgroups) of compute kernels are natural candidates for a piece of work that is implicitly checkpointed and whose granularity allows it to serve as a body of work that is sampled and run redundantly. Furthermore, the ultra-threaded dispatcher can implement all of this completely in the microarchitecture without any OS support. Incoherence between the threads can be avoided by simply disabling global writes from the sampled thread, since other writes are local to a workgroup/compute-unit anyway. This assumption will break and cause correctness issues when a single thread in a wavefront does read-modify-writes to a global address. We have never observed this in our workloads and believe programs rarely do this. Comparison of results can be accomplished by looking at the global stores instead of all retired instructions. Finally, we reuse the clock-phase shifting circuit design as it is. This overall design, GPU-Aged-SDMR, is a complete microarchitecture-only solution for GPU circuit failure prediction. Figure 7 shows the implementation mechanism of GPU-Aged-SDMR. Sampling is done at a workgroup granularity, with the ultra-threaded dispatcher issuing a redundant workgroup to two compute units (checker and checked compute units) at a specified sampling rate, i.e. for a sampling rate of 1%, 1 out of 100 work groups is dispatched to another compute unit called the checker. This is run under the stressed conditions, and we disable the global writes so that it does not affect the normal execution of the workgroups in the checked CU. We could use a reliability manager module that compares all retired instructions, or we can compute a checksum of the retiring stores written to global memory from the checker and

A Response-Grouping-Based Bias Control Method for Arbiter PUFs

Modern Electronics Technique, 1 May 2024, Vol. 47, No. 9

0 Introduction
FPGAs based on static random-access memory (SRAM) usually lack on-chip non-volatile memory for storing keys, which makes it difficult to guarantee application security.

Physical unclonable function (PUF) technology, which can extract a hardware fingerprint from the uncontrollable process variations of chip manufacturing, offers a lightweight security solution for FPGAs [1].

Typical PUFs include the memory-based SRAM PUF [2] and butterfly PUF [3], and the delay-based ring-oscillator PUF [4] and arbiter PUF [5].

Among them, the arbiter PUF can generate a large number of responses with little hardware overhead, making it one of the most promising lightweight PUFs.

An arbiter PUF generates one response bit from the delay difference between two configurable paths.

Its basic design principle is to lay out and route the two delay paths symmetrically so that they have the same nominal delay, making the response depend entirely on the random delay variations introduced by process variation.

A Response-Grouping-Based Bias Control Method for Arbiter PUFs
Liu Hailong, Yan Qinghu, He Jialuo (School of Artificial Intelligence, Hubei University, Wuhan 430062, Hubei, China)
Abstract: To address the poor uniqueness and stability of the responses of arbiter physical unclonable functions (PUFs) implemented on field-programmable gate array (FPGA) platforms, a response-grouping-based bias control method for arbiter PUFs is proposed.

Multiplexer (MUX) coarse-tuning switch cells and programmable delay line (PDL) fine-tuning switch cells are inserted into a PDL-based arbiter PUF circuit, so that the path delays can be controlled by tuning stimuli.

Changing the tuning stimuli in real time so that the Hamming weight of the valid responses in each response group reaches 50% improves response uniqueness; screening out the responses with larger delay differences through bias control improves response stability.
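The 50% Hamming-weight target described above can be expressed as a simple selection rule; the input format (a map from tuning stimulus to the group's valid response bits) is our assumption, not the paper's actual data structure.

```python
def hamming_weight_ratio(bits):
    """Fraction of 1s among the valid responses of one group."""
    return sum(bits) / len(bits)

def best_stimulus(groups):
    """Pick the tuning stimulus whose response group has a Hamming
    weight closest to the 50% target used to improve uniqueness."""
    return min(groups, key=lambda s: abs(hamming_weight_ratio(groups[s]) - 0.5))
```

Sweeping the tuning stimuli and keeping the one closest to a balanced 0/1 distribution is the grouping idea in miniature; the real circuit additionally screens out responses with small delay differences for stability.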

A 64-stage arbiter PUF circuit with bias control was implemented on a Xilinx XC7Z020 FPGA device, consuming only 143 lookup tables (LUTs) and 425 flip-flops (DFFs).

Over a temperature range of -20 °C to 80 °C and a supply voltage range of 0.9 V to 1.1 V, this arbiter PUF achieves a response uniqueness of 49.89%, and the stability of the valid responses reaches 100%.

The Meaning of the Attention Gate Module

An attention gate module is a module used in neural networks to implement an attention mechanism.

In deep learning, attention mechanisms have been widely applied to tasks such as natural language processing, computer vision and reinforcement learning.

Their role is to let a neural network process its input more selectively and flexibly, improving the model's performance and generalization ability.

An attention gate module contains three core parts: attention weight computation, weighted feature aggregation and a gating mechanism.

These three parts work together to attend to and integrate the input selectively, capturing the important features in the data and improving the model's representational power.

The attention weights are typically computed with one of several methods for estimating the importance of the different parts of the input.

This can be done through learned parameters, or through a series of transformations and computations applied to the input data.

The purpose of this step is to determine how important the different parts of the input are, so that the model can compute on the data selectively.

Weighted feature aggregation means summing the input features, weighted by the computed attention weights.

This lets the model focus on the important features and suppress interference from irrelevant ones, improving its robustness and generalization.

In this step the input features are usually combined linearly according to the computed weights, yielding a weighted feature representation.

The gating mechanism is introduced in the attention gate module to regulate how the attention weights are allocated and how the features are integrated.

It typically involves some parameters and activation functions, and can dynamically adjust the attention weights and feature weights according to the input data and the model's state, letting the model handle different inputs more flexibly.

Through this flexible regulation, the model can better adapt to different tasks and data distributions, improving its generalization ability and adaptability.
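The three parts described above (weight computation, weighted aggregation, gating) can be sketched with NumPy; the projection shapes and the softmax/sigmoid choices are illustrative assumptions, not a specific published design.

```python
import numpy as np

def attention_gate(x, w_att, w_gate):
    """x: (n, d) input features; w_att: (d, 1) scoring weights;
    w_gate: (d,) gating weights. Returns a gated, attention-pooled
    feature vector of shape (d,)."""
    scores = x @ w_att                         # importance scores, (n, 1)
    alpha = np.exp(scores - scores.max())
    alpha = alpha / alpha.sum()                # softmax attention weights
    context = (alpha * x).sum(axis=0)          # weighted feature aggregation
    gate = 1.0 / (1.0 + np.exp(-(context @ w_gate)))  # sigmoid gating mechanism
    return gate * context
```

With zero weights, the attention is uniform and the gate sits at 0.5, which makes the behavior easy to verify by hand.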

The attention gate module thus plays an important role in neural networks.

By implementing an attention mechanism, it lets the model process its input more flexibly, improving performance and generalization.

In practice, attention gate modules have been applied widely across many fields with notable success.

As deep learning and attention mechanisms continue to develop, attention gate modules are likely to play an even more important role in future research and applications.

Feature Extraction with Stacked Autoencoders (Part 8)

Feature extraction with stacked autoencoders: as an unsupervised neural network model, the autoencoder has a strong capability for feature extraction.

A stacked autoencoder builds on this with deeper levels of feature extraction and abstraction.

This article introduces the feature extraction method of stacked autoencoders and discusses its advantages and limitations in practical applications.

1. Basic principle
A stacked autoencoder is a deep neural network composed of multiple autoencoders.

During training, each autoencoder learns the features of the data layer by layer, extracts them, and passes them on as the input of the next autoencoder.

In this way, the deep network gradually learns high-level abstract features of the data, enabling more effective feature extraction and representation.

2. Feature extraction method
In a stacked autoencoder, each autoencoder can be viewed as a feature extractor.

By encoding and decoding the input data, an autoencoder learns the data's latent features and produces them as output.

In a stacked autoencoder, each layer's encoder performs one round of feature extraction on the original data, while the decoder reconstructs and restores the original data from the extracted features.

The feature extraction process has two stages.

First, the encoder maps the input data into a low-dimensional representation space, extracting the features.

Then, the decoder maps the features in this low-dimensional space back to the dimensionality of the original data, reconstructing it.

In this way, a stacked autoencoder can learn high-level abstract features of the data and use them for downstream tasks such as classification and clustering.
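The two-stage encode/decode process and the greedy stacking described above can be sketched as follows; the dimensions, the initialization, and the omission of the per-layer reconstruction training loop are our simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

class AELayer:
    """One autoencoder layer: the encoder maps inputs to a
    low-dimensional code, the decoder reconstructs the input."""
    def __init__(self, d_in, d_hidden):
        self.We = rng.normal(0.0, 0.1, (d_in, d_hidden))
        self.Wd = rng.normal(0.0, 0.1, (d_hidden, d_in))

    def encode(self, x):
        return np.tanh(x @ self.We)   # feature extraction stage

    def decode(self, h):
        return h @ self.Wd            # reconstruction stage

def stack_features(x, dims):
    """Greedy stacking: each layer's code becomes the next layer's
    input; the final code is the high-level feature representation."""
    h = x
    for d_in, d_hidden in zip(dims, dims[1:]):
        h = AELayer(d_in, d_hidden).encode(h)
    return h

features = stack_features(rng.normal(size=(8, 32)), [32, 16, 4])
```

The final 4-dimensional codes are what a downstream classifier or clustering step would consume.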

3. Advantages and limitations
The feature extraction method of stacked autoencoders has certain advantages and limitations.

First, because a stacked autoencoder can learn high-level abstract features, its feature extraction often outperforms traditional methods on complex datasets.

Second, through unsupervised learning a stacked autoencoder learns the data's features automatically, with no need to design feature extractors by hand, saving a great deal of labor and time.

However, the method also has limitations.

First, as a deep neural network, a stacked autoencoder needs a large dataset and a long training time, which increases computational and time costs.

Second, stacked autoencoders are prone to overfitting during training, and some regularization is needed to avoid it.

A Paper on GATE-Based Semantic Understanding

Abstract: Based on the GATE framework, this paper customizes a Chinese, domain-restricted information extraction system and uses it to process natural language in human-computer interaction, in order to handle business query requests expressed in natural language.

Keywords: semantic understanding, information extraction

1. Introduction
With the development of artificial intelligence and the deepening of human-computer interaction, people increasingly prefer to interact with intelligent systems in natural language rather than through the rigid keyword input of the past.

In fact, users are more accustomed to describing a problem in natural language than as a series of keywords, e.g. "I want to watch Andy Lau's movies" rather than "Andy Lau AND movies".

Research shows that describing an information need in natural language is much more accurate than using keywords, and also easier for users.

The emergence of this demand triggered rapid development in natural language processing and brought a series of changes to human-computer interaction, such as the appearance of intelligent search engines and the rapid spread and growth of Siri.

This paper is a first exploration of handling business queries in natural language, customizing a Chinese information extraction system to study semantic understanding.

2. Semantic understanding
At present, the strategy for natural language understanding targets a knowledge base for a particular domain: after special processing, the system can apply a suitable strategy to understand and analyze the user's question, produce relevant statistics as required, and give appropriate suggestions for specific situations.

A machine's analysis and understanding of language is a layered process, generally divided into four levels: phonetic analysis, syntactic analysis, semantic analysis and pragmatic analysis.

Of these four levels, semantic analysis is the most important part of human-computer interaction and the core of this paper.

2.1 Basic concepts of semantic analysis
Semantic analysis finds, through analysis, the word meanings, structural meanings and their combined meaning, so as to determine the true meaning or concept that the language expresses.

To understand language, three steps are needed: first, understand each word that appears; second, build the structure of the sentence's meaning from the word meanings; finally, represent the structure of the utterance from the sentences' semantic structures.

2.2 The main semantic analysis algorithm used in this paper
In this work, semantic analysis is mainly performed with a regular-grammar rule matching algorithm.

Regular grammars are a grammar formalism frequently used in natural language work; they correspond one-to-one with regular expressions and finite state machines, and are well suited to rule-based text matching and content understanding.

The rules are written in an annotation template engine format; the rule part is independent of the engine and is thus easier to maintain.
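A regular-grammar rule in the spirit described above can be written directly as a regular expression; the movie-query rule, the slot names and the intent label below are our own illustration, not the paper's actual rule set.

```python
import re

# One hypothetical template rule for a media-query domain:
# "我想看<person>的<genre>" ("I want to watch <person>'s <genre>").
RULES = [
    (re.compile(r"我想看(?P<person>.+?)的(?P<genre>电影|电视剧)"), "find_works"),
]

def parse(utterance):
    """Try each rule in order; return the matched intent and its slots."""
    for pattern, intent in RULES:
        m = pattern.search(utterance)
        if m:
            return intent, m.groupdict()
    return None, {}
```

Keeping the rules in a data table like `RULES`, separate from the matching engine, mirrors the maintainability argument made in the text.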


GATE: A Unicode-based Infrastructure Supporting Multilingual Information Extraction
Kalina Bontcheva, Diana Maynard, Valentin Tablan and Hamish Cunningham
Dept. of Computer Science, University of Sheffield, Regent Court, 211 Portobello St, Sheffield, S1 4DP, UK
[K.Bontcheva,D.Maynard,V.Tablan,H.Cunningham]@

Abstract
NLP infrastructures with comprehensive multilingual support can substantially decrease the overhead of developing Information Extraction (IE) systems in new languages by offering support for different character encodings, language-independent components, and clean separation between linguistic data and the algorithms that use it. This paper will present GATE, a Unicode-aware infrastructure that offers extensive support for multilingual Information Extraction with a special emphasis on low-overhead portability between languages. GATE has been used in many research and commercial projects at Sheffield and elsewhere, including Information Extraction in Bulgarian, Romanian, Russian, and many other languages.

1 Introduction
GATE (Cunningham 02), implemented in Java and freely available, is an architecture, development environment and framework for building systems that process human language. It has been in development at the University of Sheffield since 1995, and has been used for many R&D projects, including Information Extraction in multiple languages and for multiple tasks and clients. The GATE architecture defines almost everything in terms of components: reusable units of code that are specialised for a specific task. There are three main types of components:
• Language Resources (LRs) store some kind of linguistic data such as documents, corpora and ontologies, and provide services for accessing it. At the moment all the predefined LRs are text based, but the model doesn't constrict the data format, so the framework could be extended to handle multimedia documents as well.
• Processing Resources (PRs) are resources whose character is principally programmatic or algorithmic, such as a POS
tagger or a parser. In most cases PRs are used to process the data provided by one or more LRs, but that is not a requirement.
• Visual Resources (VRs) are graphical components that are displayed by the user interface and allow the visualisation and editing of other types of resources or the control of the execution flow.
The GATE framework defines some basic language resources such as documents and corpora, provides resource discovery and loading facilities, and supports various kinds of input/output operations such as format decoding and file or database persistence. GATE uses a single unified model of annotation, a modified form of the TIPSTER format (Grishman 97) which has been made largely compatible with the Atlas format (Bird & Liberman 99), and uses the now standard mechanism of 'stand-off markup' (Thompson & McKelvie 97). Annotations are characterised by a type and a set of features represented as attribute-value pairs. The annotations are stored in structures called annotation sets, which constitute independent layers of annotation over the text content. The advantage of converting all formatting information and corpus markup into a unified representation, i.e. the annotations, is that NLP applications do not need to be adapted for the different formats of each of the documents, which are catered for by the GATE format filters (e.g.
some corpora such as the BNC come as SGML/XML files, while others come as email folders, HTML pages, news wires, or Word documents). The work for the second version of GATE started in 1999 and led to a complete redesign of the system and a 100% Java implementation. One of the additions brought by version 2 is full support for Unicode data, allowing the users to open documents using encodings different from the default one for the underlying platform.

2 Information Extraction in GATE
Provided with GATE is a set of reusable processing resources for common NLP tasks. (None of them are definitive, and the user can replace and/or extend them as necessary.) These are packaged together to form ANNIE, A Nearly-New IE system, but can also be used individually or coupled together with new modules in order to create new applications. For example, many other NLP tasks might require a sentence splitter and POS tagger, but would not necessarily require resources more specific to IE tasks such as a named entity transducer. The system is in use for a variety of IE and other tasks, sometimes in combination with other sets of application-specific modules. ANNIE consists of the following main processing resources: tokeniser, sentence splitter, POS tagger, gazetteer, finite state transducer (based on GATE's built-in regular expressions over annotations language (Cunningham et al. 02b)), and orthomatcher. The resources communicate via GATE's annotation API, which is a directed graph of arcs bearing arbitrary feature/value data, and nodes rooting this data into document content (in this case text). The tokeniser splits text into simple tokens, such as numbers, punctuation, symbols, and words of different types (e.g. with an initial capital, all upper case, etc.). The aim is to limit the work of the tokeniser to maximise efficiency, and to enable greater flexibility by placing the burden of analysis on the grammars. This means that the tokeniser does not need to be modified for different applications or text types, and frequently does not need to be modified for
new languages, i.e., it tends to be fairly language-independent. The sentence splitter is a cascade of finite-state transducers which segments the text into sentences. This module is required for the tagger. Both the splitter and tagger are domain- and application-independent. The POS tagger is a modified version of the Brill tagger, which produces a part-of-speech (POS) tag as an annotation on each word or symbol. Neither the splitter nor the tagger is a mandatory part of the NE system, but the annotations they produce are used by the semantic tagger (described below) in order to increase its power and coverage. For languages where no POS tagger is available it can be left out, often without major implications on the system's performance on some IE tasks. Alternatively, the English POS tagger can easily be adapted to a new language using a bilingual lexicon (see Section 4.3). The gazetteer consists of lists such as cities, organisations, days of the week, etc. It not only consists of entities, but also of names of useful indicators, such as typical company designators (e.g. 'Ltd.'), titles, etc. The gazetteer lists are compiled into finite state machines, which can match text tokens. The semantic tagger consists of hand-crafted rules written in the JAPE (Java Annotations Pattern Engine) language (Cunningham et al. 02b), which describe patterns to match and annotations to be created as a result. JAPE is a version of CPSL (Common Pattern Specification Language) (Appelt 96), which provides finite state transduction over annotations based on regular expressions. A JAPE grammar consists of a set of phases, each of which consists of a set of pattern/action rules, and which run sequentially.
Patterns can be specified by describing a specific text string, or annotations previously created by modules such as the tokeniser, gazetteer, or document format analysis. Rule prioritisation (if activated) prevents multiple assignment of annotations to the same text string. The orthomatcher is another optional module for the IE system. Its primary objective is to perform co-reference, or entity tracking, by recognising relations between entities. It also has a secondary role in improving named entity recognition by assigning annotations to previously unclassified names, based on relations with existing entities.

3 Support for Multilingual Documents and Corpora in GATE
The most important types of Language Resources that are predefined in GATE are documents and corpora. A corpus is defined in GATE as a list of documents. Documents in GATE are typically created starting from an external resource such as a file situated either on a local disk or at an arbitrary location on the Internet.

Figure 1: The GUK Unicode editor using a Korean virtual keyboard.

Text needs to be converted to and from sequences of bytes using a text encoding (or charset), in order to be saved into or read from a file. There are many different encodings used worldwide, some of them designed for a particular language, others covering the entire range of characters defined by Unicode. GATE uses the facilities provided by Java and so it has access to over 100 different encodings, including the most popular local ones, such as ISO 8859-1 in Western countries or ISO-8859-9 in Eastern Europe, and some Unicode ones, e.g. UTF-8 or UTF-16. Once processed, the documents can also be saved back to their original format and encoding.
Apart from being able to read several character encodings, GATE supports a range of popular file formats such as HTML, XML, email, some types of SGML, and RTF. Another important aspect is displaying and editing multilingual documents. GATE largely uses the Java Unicode support for displaying multilingual documents. Editing multilingual documents (and also language-specific grammars, gazetteer lists, etc.) is provided by GUK, GATE's Unicode Kit. GUK provides input methods for a large number of languages (see Figure 1), allows the definition of new ones, and also provides a Unicode-based text editor. So far GATE has been used successfully to create corpora and process documents in a wide range of languages: Slavic (e.g., Bulgarian, Russian), Germanic (Maynard et al. 01; Gambäck & Olsson 00), Romance (e.g., Romanian) (Pastra et al. 02), Asian (e.g., Hindi, Bengali) (McEnery et al. 00), Chinese, Arabic, and Cebuano (Maynard

4 Adapting the GATE Components to Multiple Languages
The use of the Java platform implies that all processing resources that access textual data will internally use Unicode to represent data, which means that all PRs can virtually be used for text in any Unicode supported language. Most PRs, however, need some kind of linguistic data in order to perform their tasks (e.g. a parser will need a grammar), which in most cases is language specific. In order to make the algorithms provided with GATE (in the form of PRs) as language-independent as possible, and as a good design principle, there is always a clear distinction between the algorithms, presented in the form of machine executable code, and their linguistic resources, which are typically external files. All PRs use the textual data decoding mechanisms when loading the external resources, so these resources can be represented in any supported encoding, which allows for instance a gazetteer list to contain localised names. This design made it possible to port our information extraction system ANNIE from English to other languages by simply creating the
required linguistic resources.

4.1 The Unicode Tokeniser
One of the PRs provided with GATE is the tokeniser, which not only handles Unicode data but is actually built around the Unicode standard, hence its name of "GATE Unicode Tokeniser". Like many other GATE PRs, the tokeniser is based on a finite state machine (FSM), which is an efficient way of processing text. In order to provide a language independent solution, the tokeniser doesn't use the actual text characters as input symbols, but rather their categories as defined by the Unicode standard. As part of our work on multilingual IE, the rule-set of the tokeniser was improved, because originally it was intended for Indo-European languages and therefore only handled a restricted range of characters (essentially, the first 256 codes, which follow the arrangement of ISO-8859-1 (Latin 1)). We created a modified version of this which would deal with a wider range of Unicode characters, such as those used in Chinese, Arabic, Indic languages, etc. There is some overlap between the Unicode characters used for different languages. Codes which represent letters, punctuation and symbols used in multiple languages or scripts are grouped together in several locations, and characters with common characteristics are grouped together contiguously (for example, right-to-left scripts are grouped together). Character code allocation is therefore not correlated with language-dependent collation.
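The category-driven approach described above can be sketched with Python's unicodedata module, which exposes the same Unicode general categories; the token-type mapping below is a simplification of our own, not GATE's actual rule set.

```python
import unicodedata

def tokenise(text):
    """Group consecutive characters whose Unicode general category
    maps to the same (simplified) token type, with no per-language
    character lists."""
    tokens, current, kind = [], "", None
    for ch in text:
        cat = unicodedata.category(ch)   # e.g. 'Lu', 'Ll', 'Nd', 'Zs'
        k = ("word" if cat.startswith("L") or cat in ("Mc", "Mn")
             else "number" if cat.startswith("N")
             else "space" if cat.startswith("Z")
             else "punct")
        if k != kind and current:
            tokens.append((kind, current))
            current = ""
        current, kind = current + ch, k
    if current:
        tokens.append((kind, current))
    return tokens
```

Because the rules mention only categories, the same function tokenises Latin, Cyrillic or CJK text without modification, which is the language-independence argument made in the text.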
In order to enable the tokeniser to handle other Unicode characters, we had to find the relevant character types and their symbolic equivalents (e.g., type 5 has the symbolic equivalent "OTHER LETTER"; type 8 characters usually occur at the beginning or end of a word and have the symbolic equivalent "COMBINING SPACING MARK" or "NON-SPACING MARK"). Rules covering these types were added to the tokeniser in order to discover the tokens correctly in a variety of other languages. Even more importantly, having established the technique for extending the tokeniser in this way, it will be easy to add any further new types as necessary, depending on the language (since we have not covered all possibilities).

4.2 Localising the Gazetteer Lists

The GATE gazetteer processing resource enables gazetteer lists to be described in three ways: majorType, minorType, and language. The major and minor types enable entries to be classified according to two dimensions, or at two levels of granularity; for example, a list of cities might have a majorType "location" and minorType "city". The language classification enables the creation of parallel lists, one for each language.

For example, for our Cebuano IE experiment (see Section 5.4) we used the same structure for the Cebuano lists as for their English counterparts, and simply altered the language label to differentiate between the two. This is useful for languages where names of English entities can be found in texts in the other language (e.g., for Cebuano, "Cebu City Police Office"). To recognise these successfully we required both the English gazetteer (to recognise "Office") and the Cebuano gazetteer (to recognise "Cebu City", which is not in the English gazetteer). Using both gazetteers improved recall and did not appear to affect precision, since English entities did not seem to be ambiguous with Cebuano entities or proper nouns.
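The three-way classification can be sketched as a simple lookup structure (an illustrative Java fragment; the real GATE gazetteer is driven by list files, and the entries below are hypothetical):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class Gazetteer {
    // majorType/minorType give two levels of granularity; the language
    // label keeps parallel lists (e.g. English and Cebuano) apart while
    // allowing both to be consulted for the same text.
    public record Entry(String majorType, String minorType, String language) {}

    private final Map<String, List<Entry>> entries = new HashMap<>();

    public void add(String text, String major, String minor, String lang) {
        entries.computeIfAbsent(text, k -> new ArrayList<>())
               .add(new Entry(major, minor, lang));
    }

    public List<Entry> lookup(String text) {
        return entries.getOrDefault(text, List.of());
    }

    public static void main(String[] args) {
        Gazetteer g = new Gazetteer();
        // Both gazetteers loaded side by side (hypothetical classifications).
        g.add("Office", "organization", "key_word", "english");
        g.add("Cebu City", "location", "city", "cebuano");
        System.out.println(g.lookup("Cebu City"));
    }
}
```

Running both lists through one lookup is what makes a mixed name such as "Cebu City Police Office" recognisable: each part is matched by the gazetteer of the language it belongs to.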
However, this might not be the case for other, closer languages.

4.3 Multilingual Adaptation of the POS Tagger

The Hepple POS tagger, which is freely available in GATE as part of ANNIE, is similar to Brill's transformation-based tagger (Brill 92), but differs mainly in that it uses a decision-list variant of Brill's algorithm. This means that in classifying any instance, only one transformation can apply. It is also written in Java.

In order to adapt the POS tagger to a new language, one would typically need to re-train it on a large part-of-speech-annotated corpus. However, no such corpora exist for many languages, so we experimented with adapting the POS tagger to a new language without such training data, using only a bilingual lexicon.

As part of the ANNIE adaptation to Cebuano (see Section 5.4), we tested whether we could adapt the Hepple tagger to Cebuano using a bilingual Cebuano-English lexicon with POS information. On first appearance, Cebuano word order and morphology seemed similar to English, and Cebuano also has a similar orthography. The rules for English (derived from training on the Wall Street Journal) would clearly not be applicable for Cebuano, so we used an empty ruleset, but we decided that many of the default heuristics might still be appropriate. The heuristics are essentially as follows:

1. look up the word in the lexicon
2. if no lexicon entry is found:
   • if capitalised, return NNP
   • if the word contains "-", return JJ
   • if the word contains a digit, return CD
   • if the word ends in "ed", "us", "ic", "ble", "ive", "ish", "ary", "ful", "ical", or "less", return JJ
   • if the word ends in "s", return NNS
   • if the word ends in "ly", return RB
   • if the word ends in "ing", return VBG
   • if none of the above matched, return NN
3. apply the trained rules to make changes to the assigned categories based on the context

Some of these heuristics make little sense for Cebuano because it is unusual for Cebuano words to have endings such as "ic", "ly", "ing", etc. This is not a problem, however: for a word not in the lexicon, the tag returned will be NNP (proper noun) if capitalised, or NN (common
noun) if not, which is appropriate. However, these heuristics cannot be changed without modifying the code of the POS tagger itself, so we left them unchanged even though most of them did not apply.

Adapting the tagger did present a number of problems, mostly arising from the fact that while the English lexicon (used for the tagger) consists only of single-word entries, the Cebuano lexicon contained many multi-word entries (such as mahitungod sa honi (musical)). The tagger expects lexicon entries to have a single-word entry per line, followed by one or more POS tags, each separated by a single space. We therefore modified the lexicon so that the delimiter between the lexical entry and the POS tag(s) was a "#" rather than a space, and adapted the tagging mechanism to recognise this. As a result, the ANNIE POS tagger now has the option of processing multi-word entries, which are very important in a number of languages; for example, the ANNIE adaptation to Hindi also required this.

In order to evaluate the portability and usefulness of such low-overhead adaptation of the POS tagger, we repeated the same experiment for Hindi, using a relatively small English-Hindi bilingual lexicon. The results were 67% correctness, as evaluated by a native Hindi speaker. While such correctness may not be sufficient for deeper linguistic processing (e.g., parsing), it is sufficient for named entity recognition.

Next we discuss how these multilingual processing resources were used to perform information extraction in a variety of languages.

5 Information Extraction in Multiple Languages

Robust tools for multilingual information extraction are increasingly sought after now that we have capabilities for processing texts in different languages and scripts. While the ANNIE IE system in GATE is English-specific, some of its modules can be reused directly (e.g.
the Unicode-based tokeniser can handle Indo-European languages), and/or easily customised for new languages (Pastra et al. 02). So far, ANNIE has been adapted to do IE in Bulgarian, Romanian, Bengali, Greek, Spanish, Swedish, German, Italian, French, Hindi, Cebuano, and Russian, as part of the MUSE project.

5.1 NE in Slavonic languages

The Bulgarian NE recogniser (Paskaleva et al. 02) was built using three main processing resources: a tokeniser, a gazetteer, and a semantic grammar built using JAPE. There was no POS tagger available for Bulgarian, and consequently we had no need of a sentence splitter either. The main changes to the system were in the gazetteer lists (e.g., lists of first names, days of the week, locations, etc. were tailored for Bulgarian) and in some of the pattern-matching rules in the grammar. For example, Bulgarian makes far more use of morphology than English does: 91% of Bulgarian surnames could be directly recognised using morphological information alone. The lack of a POS tagger meant that many rules had to be specified in terms of orthographic features rather than parts of speech. An example Bulgarian text with highlighted named entities is shown in Figure 2.

5.2 NE in Romanian

The Romanian NE recogniser (Hamza et al. 03) was developed from ANNIE in a similar way to the Bulgarian one, using a tokeniser, a gazetteer, and a JAPE semantic grammar (see Figure 3). Romanian is a more flexible language than English in terms of word order; it is also agglutinative, e.g. definite articles attach to nouns, giving both common and proper nouns a definite and an indefinite form.

As with Bulgarian, the tokeniser did not need to be modified, while the gazetteer lists and grammar rules needed some changes, most of which were fairly minor. For both Bulgarian and Romanian, the necessary modifications were easily implemented by a native speaker who did not require any specialist skills beyond a basic grasp of the JAPE language and the GATE architecture.
No Java skills or other programming knowledge was necessary. The GATE Unicode Kit was invaluable in enabling the preservation of the diacritics in Romanian, by saving the files with UTF-8 encoding.

Figure 2: Bulgarian named entities in GATE

In order to evaluate the language-independence of ANNIE's named entity recognition rules, we ran an experiment comparing the performance of the English grammars combined with the Romanian gazetteer lists against the performance of the Romanian grammars, an extended set containing some rules specific to the language. The results are shown in Tables 1 and 2 respectively: without any adaptation of the grammars, only by collecting gazetteer lists for Romanian, ANNIE was able to achieve 82% precision and 67% recall. Once the system was customised to Romanian, the performance was in line with that of the original English system.

Entity Type    Precision  Recall
Address        0.81       0.81
Date           0.67       0.77
Location       0.88       0.96
Money          0.82       0.47
Organisation   0.75       0.39
Percent        1.00       0.82
Person         0.68       0.78
Overall        0.82       0.67

Table 1: Average P+R per entity type, obtained with the English NER grammar set

Entity Type    Precision  Recall
Address        0.96       0.93
Date           0.95       0.94
Location       0.92       0.97
Money          0.98       0.92
Organisation   0.95       0.89
Percent        1.00       0.99
Person         0.88       0.92
Overall        0.95       0.94

Table 2: Average P+R per entity type, obtained with the Romanian NER grammar set

5.3 NE in other languages

ANNIE has also been adapted to perform NE recognition on English, French and German dialogues in the AMITIES project, a screenshot of which is shown in Figure 4. Since French and German are in many ways more similar to English than, e.g., Slavonic languages are, it was very easy to adapt the gazetteers and grammars accordingly.

Figure 4: AMITIES multilingual dialogue

5.4 Surprise languages

We carried out further experiments as part of the TIDES-based "surprise language program", which requires various NLP tasks such as IE, IR, summarisation and MT to be carried out within a month on a surprise language whose identity is not known in advance. Here we will concentrate on the dry-run experiment, which ran for 10 days on the
Cebuano language, which is spoken by 24% of the population in the Philippines and is the lingua franca of the South Philippines. As part of that effort, we adapted ANNIE to the Cebuano language.

Figure 3: Romanian news text annotated in GATE

Cebuano system  P    R    F       Baseline system  P    R     F
Person          71   65   68      Person           36   36    36
Organization    75   71   73      Organization     31   47    38
Location        73   78   76      Location         65    7    12
Date            83  100   92      Date             42   58    49
Total           76   79   77.5    Total            45   41.7  43

Table 3: NE results on the news texts

We succeeded in our adaptation task, without the help of a native speaker, because most of the rules for NE recognition in English are based on POS tags and gazetteer lookup of candidate and context words (more detail is given in e.g. (Cunningham et al. 02b)). Assuming similar morphological structure and word order, the default grammars are therefore not highly language-specific (as discussed in Section 5.2). We did not have time to make a detailed linguistic study of Cebuano; however, it turned out that after we adapted the ANNIE part-of-speech (POS) tagger and gazetteer lists to Cebuano, the rules performed successfully without any adaptation.
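The lexicon-fallback heuristics of Section 4.3, which carried over to Cebuano unchanged, can be sketched as follows (an illustrative Java rendering, not the Hepple tagger's actual code; the lexicon is a hypothetical map, and the trained contextual rules of step 3 are omitted, as they were for Cebuano):

```java
import java.util.List;
import java.util.Map;

public class FallbackTagger {
    private static final List<String> JJ_SUFFIXES = List.of(
            "ed", "us", "ic", "ble", "ive", "ish", "ary", "ful", "ical", "less");

    // Step 1: lexicon lookup; step 2: orthographic fallbacks for
    // out-of-lexicon words, in the order given in Section 4.3.
    public static String tag(String word, Map<String, String> lexicon) {
        String fromLexicon = lexicon.get(word);
        if (fromLexicon != null) return fromLexicon;
        if (Character.isUpperCase(word.charAt(0))) return "NNP";
        if (word.contains("-")) return "JJ";
        if (word.chars().anyMatch(Character::isDigit)) return "CD";
        for (String suffix : JJ_SUFFIXES)
            if (word.endsWith(suffix)) return "JJ";
        if (word.endsWith("s")) return "NNS";
        if (word.endsWith("ly")) return "RB";
        if (word.endsWith("ing")) return "VBG";
        return "NN";
    }

    public static void main(String[] args) {
        // Capitalised out-of-lexicon words fall back to NNP (proper noun),
        // which is what makes the scheme usable for Cebuano names.
        System.out.println(tag("Pairat", Map.of()));  // prints NNP
    }
}
```

Since few Cebuano words carry the English suffixes, most out-of-lexicon words end up as NNP or NN, which, as noted above, is exactly the behaviour the named entity grammars rely on.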
The performance was boosted further by using ANNIE's orthographic coreference module (the orthomatcher) to improve recognition of unknown words. This works by matching entities tagged as Unknown (i.e., unclassified entities) with other types of entities (Person, Location, etc.) if they match according to the coreference rules. For example, "Smith" on its own might be tagged as Unknown, but if "John Smith" is tagged as a Person, the orthomatcher will coreference the two entities and retag "Smith" as a Person. Our experiment showed that these rules are not particularly language-specific: given a language with similar morphology and word order, the orthomatcher can be used directly, without modification (we had a similar experience with Bulgarian, Romanian and Chinese). Manual inspection of texts showed that the orthomatcher was helpful in improving recall. For example, it recognised "Pairat" as a Person due to coreference with "Leo Pairat", which was correctly recognised as a Person by the named entity grammars. Although we were not focusing on coreference per se, we noticed that many coreferences were correctly identified, which confirms that the rules used are not particularly language-specific.

We evaluated the performance of the adapted system on 21 news documents from two different Cebuano web sites. These texts were annotated by a local Cebuano speaker prior to our experiment, and the automated scoring tools in GATE (Cunningham et al. 02a) were used to evaluate the results of the system. The results are shown in Table 3, together with the results from our baseline system, the default ANNIE system for English, which we ran on the same test set. ANNIE typically scores in the 90th percentile for precision and recall on English news texts.

6 Conclusion

In this paper we presented GATE, a Unicode-based NLP infrastructure particularly suitable for the creation and multilingual adaptation of Information Extraction systems. The different pre-processing components, i.e., the tokeniser, gazetteer, and POS tagger, are designed to be easily adaptable to
new languages. As demonstrated by our experience of adapting ANNIE to a variety of languages, the named entity recognition and coreference algorithms are relatively language-independent and also easy to adapt or extend to new languages.

The advantage of our approach is that it requires little involvement of native speakers (mainly for evaluation purposes and possibly for gazetteer creation) and only a small amount of annotated data. Therefore, fast adaptations from one language to another are possible with relatively low overhead, unlike many machine-learning-based IE systems (e.g., (Bikel et al. 99)), which require large amounts of annotated data. However, for languages where such large amounts of annotated data do exist, we have now created an automatic gazetteer acquisition method that can be used to further reduce the overhead of porting ANNIE to new languages.

Future work will continue in the direction of improving multilingual support, among other things. The most important issues to be addressed are the integration of morphological tools for improved support for inflected languages (e.g., (Declerck & Crispi 03)), automatic language and encoding identification (e.g., (Ignat et al. 03)), and further work on the automatic acquisition of gazetteer lists from annotated corpora.
Acknowledgements

Work on GATE has been funded by EPSRC grants GR/K25267 (GATE), GR/M31699 (GATE2), GR/N15764/01 (IRC AKT), and GR/N19106 (EMILLE), and several smaller grants. We would like to thank Markus Kramer from the Max Planck Institute, Nijmegen, for providing us with the IPA and Chinese input methods. The work on adapting ANNIE to Bulgarian was carried out in collaboration with Elena Paskaleva, Milena Yankova, and Galia Angelova, as part of the EC-funded project BIS-21 "Bulgarian Information Society, Center of Excellence for Education, Science and Technology in the 21st Century".

References

(Appelt 96) D. E. Appelt. The Common Pattern Specification Language. Technical report, SRI International, Artificial Intelligence Center, 1996.

(Bikel et al. 99) D. Bikel, R. Schwartz, and R. M. Weischedel. An Algorithm that Learns What's in a Name. Machine Learning, Special Issue on Natural Language Learning, 34(1-3), Feb. 1999.

(Bird & Liberman 99) S. Bird and M. Liberman. A Formal Framework for Linguistic Annotation. Technical Report MS-CIS-99-01, Department of Computer and Information Science, University of Pennsylvania, 1999. cs.CL/9903003.

(Brill 92) E. Brill. A simple rule-based part-of-speech tagger. In Proceedings of the Third Conference on Applied Natural Language Processing, Trento, Italy, 1992.

(Cunningham 02) H. Cunningham. GATE, a General Architecture for Text Engineering. Computers and the Humanities, 36:223–254, 2002.

(Cunningham et al. 02a) H. Cunningham, D. Maynard, K. Bontcheva, and V. Tablan. GATE: A Framework and Graphical Development Environment for Robust NLP Tools and Applications. In Proceedings of the 40th Anniversary Meeting of the Association for Computational Linguistics, 2002.

(Cunningham et al. 02b) H. Cunningham, D. Maynard, K. Bontcheva, V. Tablan, and C. Ursu. The GATE User Guide, 2002.

(Declerck & Crispi 03) T. Declerck and C. Crispi. Multilingual Linguistic Modules for IE Systems. In Proceedings of the Workshop on Information Extraction for Slavonic and other Central and Eastern European Languages (IESL'03), Borovets, Bulgaria, 2003.

(Gambäck & Olsson 00) B. Gambäck
and F. Olsson. Experiences of Language Engineering Algorithm Reuse. In Second International Conference on Language Resources and Evaluation (LREC), pages 155–160, Athens, Greece, 2000.

(Grishman 97) R. Grishman. TIPSTER Architecture Design Document Version 2.3. Technical report, DARPA, 1997.

(Hamza et al. 03) O. Hamza, K. Bontcheva, D. Maynard, V. Tablan, et al. Named Entity Recognition in Romanian. In Proceedings of the Workshop on Information Extraction for Slavonic and other Central and Eastern European Languages (IESL'03), Borovets, Bulgaria, 2003.

(Ignat et al. 03) C. Ignat, B. Pouliquen, A. Ribeiro, and R. Steinberger. Extending an Information Extraction Tool Set to Eastern-European Languages. In Proceedings of the Workshop on Information Extraction for Slavonic and other Central and Eastern European Languages (IESL'03), Borovets, Bulgaria, 2003.

(Maynard et al. 01) D. Maynard, V. Tablan, C. Ursu, H. Cunningham, et al. Named Entity Recognition from Diverse Text Types. In Recent Advances in Natural Language Processing 2001 Conference, pages 257–274, Tzigov Chark, Bulgaria, 2001.

(Maynard et al. 03) D. Maynard, V. Tablan, and H. Cunningham. NE recognition without training data on a language you don't speak. In ACL Workshop on Multilingual and Mixed-language Named Entity Recognition: Combining Statistical and Symbolic Models, Sapporo, Japan, 2003.

(McEnery et al. 00) A. M. McEnery, P. Baker, R. Gaizauskas, and H. Cunningham. EMILLE: Building a Corpus of South Asian Languages. Vivek, A Quarterly in Artificial Intelligence, 13(3):23–32, 2000.

(Paskaleva et al. 02) E. Paskaleva, G. Angelova, M. Yankova, K. Bontcheva, H. Cunningham, and Y. Wilks. Slavonic named entities in GATE. Technical Report CS-02-01, University of Sheffield, 2002.

(Pastra et al. 02) K. Pastra, D. Maynard, H. Cunningham, O. Hamza, and Y. Wilks. How feasible is the reuse of grammars for Named Entity Recognition? In Proceedings of the 3rd Language Resources and Evaluation Conference, 2002.

(Thompson & McKelvie 97) H. Thompson and D. McKelvie. Hyperlink semantics for standoff markup of read-only
documents.。
