An Ant-based Clustering System for Knowledge Discovery in DNA Chip Analysis Data


Identification of Key Areas for Ecological Restoration in Plateau Cities Based on the Ecological Security Pattern: A Case Study of Kunming


Scientific and Technological Management of Land and Resources, Vol. 40, No. 3, June 15, 2023. doi:10.3969/j.issn.1009-4210.2023.03.002

Identification of Key Areas for Ecological Restoration in Plateau Cities Based on the Ecological Security Pattern: A Case Study of Kunming

LIU Feng-lian, LIU Yan (Institute of Land & Resources and Sustainable Development, Yunnan University of Finance and Economics, Kunming 650221, Yunnan, China)

Abstract: Constructing a regional ecological security pattern and identifying key areas for ecological restoration are important measures for implementing regional ecological restoration, maintaining regional ecological security, and improving regional ecological environment quality in the context of ecological civilization construction. Taking Kunming, a typical plateau city, as the study area, this paper identifies ecological source areas using morphological spatial pattern analysis and landscape connectivity, extracts ecological corridors with the cost-path tool, and applies circuit theory to locate key areas for ecological restoration such as ecological pinch points and ecological barrier points. The results show that: (1) the ecological source areas of Kunming total 6,721.78 km2, or 31.99% of the city's total area, with forest land and water bodies as the main land-use types; (2) 91 ecological corridors were extracted, 19 of them important, and the important corridors are concentrated in northern Kunming; (3) 75 ecological pinch points (66.06 km2), 4 ecological barrier points (49.24 km2), and 41 ecological break points were identified. The study provides a reference for identifying key areas of territorial ecological restoration in Kunming and for carrying out regional ecological protection and restoration.

Keywords: ecological security pattern; ecological restoration zoning; morphological spatial pattern analysis; circuit theory; Kunming

CLC number: X171.4  Document code: A  Article ID: 1009-4210-(2023)03-017-14

Received: 2023-03-16; revised: 2023-03-30. Funding: Scientific Research Fund of the Yunnan Provincial Department of Education (2021J0592); Talent Introduction Program of Yunnan University of Finance and Economics (2022D13). First author: LIU Feng-lian (1981-), female, Ph.D., master's supervisor; research interests: land use and regional sustainable development.
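The corridor-extraction step, which the abstract describes as applying a cost-path tool between ecological source areas, can be illustrated with a toy least-cost-path computation on a resistance grid. This is a sketch of the general technique, not the authors' GIS workflow; the grid values, start cell, and goal cell are invented for the example:

```python
import heapq

def least_cost_path(resistance, start, goal):
    """Dijkstra least-cost path on a 2-D resistance grid.

    Moving onto a cell costs that cell's resistance value;
    4-neighbour connectivity is used for simplicity.
    """
    rows, cols = len(resistance), len(resistance[0])
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + resistance[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    # backtrack the corridor from goal to start
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# Invented resistance surface: low values = easy movement (e.g. forest),
# high values = barriers (e.g. built-up land).
grid = [
    [1, 1, 9, 1],
    [9, 1, 9, 1],
    [9, 1, 1, 1],
    [9, 9, 9, 1],
]
path, cost = least_cost_path(grid, (0, 0), (3, 3))
```

The returned path threads through the low-resistance cells, which is the discrete analogue of an ecological corridor between two source patches.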

Intelligent Control Courseware: Ant Colony Optimization Algorithm


Experimental data (algorithm complexity), excerpted from *Ant Colony Optimization*.

4. Example: JSP (Job-shop Scheduling Problem)

Notation: M is the number of machines, J the number of jobs, o_jm an operation of job j on machine m, d_jm its processing time, and O the set of all operations.

JSP instance: Muth & Thompson 6x6. Table fragment (machine.time per job): Job1 3.1, Job2 2.8, Job3 3.5, Job4 2.5, Job5 3.9, Job6 2.3
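The JSP objective behind these slides, minimizing the makespan of a machine-feasible operation ordering, can be made concrete with a tiny schedule evaluator. The 2-job, 2-machine instance and the sequence encoding below are invented for illustration and are not the Muth & Thompson data:

```python
def jsp_makespan(jobs, sequence):
    """Evaluate the makespan of a job-shop schedule.

    jobs[j] is job j's list of (machine, duration) operations in
    technological order; `sequence` lists job indices, the i-th
    occurrence of job j meaning job j's i-th operation is placed next.
    """
    job_ready = [0.0] * len(jobs)   # when each job's next operation may start
    mach_ready = {}                 # when each machine becomes free
    next_op = [0] * len(jobs)
    for j in sequence:
        m, d = jobs[j][next_op[j]]
        start = max(job_ready[j], mach_ready.get(m, 0.0))
        job_ready[j] = mach_ready[m] = start + d
        next_op[j] += 1
    return max(job_ready)

# Invented toy instance: job 0 = machine 0 then 1; job 1 = machine 1 then 0.
jobs = [[(0, 3.0), (1, 2.0)], [(1, 2.0), (0, 4.0)]]
makespan = jsp_makespan(jobs, [0, 1, 0, 1])
```

An ACO solver for JSP would let ants build such sequences edge by edge and evaluate each with exactly this kind of routine.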
Update the shortest tour found
TSP ant colony algorithm (ant-cycle)

Step 4.2: For every edge (i, j) and for k := 1 to m, accumulate the pheromone deposit:

Δτ_ij = Σ_{k=1}^{m} Δτ_ij^k

Δτ_ij^k = Q / L_k, if edge (i, j) belongs to the tour described by tabu_k; 0 otherwise

where Q is a constant and L_k is the length of the tour built by the k-th ant.
TSP ant colony algorithm (ant-cycle)

Step 6: If NC < NC_MAX and no stagnation behavior is observed, then empty all tabu lists and go to Step 2; otherwise print the shortest tour and stop.
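The full ant-cycle loop sketched on these slides (probabilistic tour construction, the Step 4.2 pheromone update, and the cycle-count termination test) can be assembled into a short runnable sketch. The parameter values (alpha, beta, rho, Q) are illustrative defaults, and the stagnation test of Step 6 is omitted for brevity:

```python
import math
import random

def ant_cycle_tsp(dist, n_ants=10, n_cycles=50,
                  alpha=1.0, beta=2.0, rho=0.5, Q=100.0, seed=0):
    """Minimal ant-cycle TSP solver (stagnation test omitted)."""
    random.seed(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]        # pheromone on every edge
    best_tour, best_len = None, float("inf")
    for _ in range(n_cycles):                  # NC < NC_MAX loop
        tours = []
        for _ in range(n_ants):
            tabu = [random.randrange(n)]       # tabu list = visited cities
            while len(tabu) < n:
                i = tabu[-1]
                cand = [j for j in range(n) if j not in tabu]
                # transition weight ~ tau^alpha * (1/d)^beta
                w = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                     for j in cand]
                tabu.append(random.choices(cand, weights=w)[0])
            L = sum(dist[tabu[k]][tabu[(k + 1) % n]] for k in range(n))
            tours.append((tabu, L))
            if L < best_len:                   # update the shortest tour found
                best_tour, best_len = tabu, L
        # Step 4.2: evaporate, then deposit Q / L_k on each ant's tour
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for tabu, L in tours:
            for k in range(n):
                i, j = tabu[k], tabu[(k + 1) % n]
                tau[i][j] += Q / L
                tau[j][i] += Q / L             # symmetric TSP
    return best_tour, best_len
```

On a 4-city unit square the perimeter tour of length 4 is quickly recovered, since diagonal tours are strictly longer.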
3. Ant algorithm tuning and parameter settings

Ants obey the TSP rules; after completing a tour, each ant deposits pheromone on the edges it traversed; ants do not need to retrace their path.
Example: TSP (parameters and mechanisms)

Pheromone concentration on edge (i, j) at time t: τ_ij(t). Pheromone update:

τ_ij(t + n) = ρ · τ_ij(t) + Δτ_ij

Pheromone deposit (ant-cycle):

Δτ_ij = Σ_{k=1}^{m} Δτ_ij^k

Δτ_ij^k = Q / L_k, if the k-th ant uses edge (i, j) in its tour (between time t and t + n); 0 otherwise

where ρ is the pheromone persistence coefficient, Q is a constant, and L_k is the length of the k-th ant's tour.
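This update rule transcribes directly into code; a sketch, with ρ as the persistence coefficient and Q an arbitrary constant as above:

```python
def update_pheromone(tau, tours, lengths, rho=0.5, Q=100.0):
    """ant-cycle update: tau_ij(t+n) = rho * tau_ij(t) + sum_k dtau_ij^k,
    where dtau_ij^k = Q / L_k on every edge of ant k's tour, 0 elsewhere."""
    n = len(tau)
    # evaporation / persistence term
    new_tau = [[rho * tau[i][j] for j in range(n)] for i in range(n)]
    # deposit term, summed over the m ants
    for tour, L in zip(tours, lengths):
        for k in range(len(tour)):
            i, j = tour[k], tour[(k + 1) % len(tour)]
            new_tau[i][j] += Q / L
            new_tau[j][i] += Q / L   # symmetric TSP
    return new_tau
```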

50 Multiple-Choice Questions on Cutting-Edge Technology Vocabulary for High School English


1. In the field of artificial intelligence, the process of training a model to recognize patterns is called _____.
A. data mining  B. machine learning  C. deep learning  D. natural language processing
Answer: B.

This question tests related concepts in the field of artificial intelligence.

Option A, "data mining", refers to extracting valuable information from large amounts of data.

Option B, "machine learning", emphasizes letting a model learn and improve automatically from data, which matches the description of training a model to recognize patterns.

Option C, "deep learning", is a branch of machine learning that focuses on deep neural networks.

Option D, "natural language processing", mainly concerns understanding and processing human language.

2. When developing an AI system for image recognition, the most important factor is _____.
A. large datasets  B. advanced algorithms  C. powerful hardware  D. skilled developers
Answer: A.

When developing an AI system for image recognition, option A, "large datasets", is taken here as the most important factor, because abundant data lets the model learn more features and patterns.

Option B, "advanced algorithms", matters, but without enough data even good algorithms are of limited use.

Option C, "powerful hardware", helps with processing speed but is not the most critical factor.

Option D, "skilled developers", is necessary, but the quality and quantity of the data affect system performance more directly.

Ant Colony System: Literature Translation


Zhengzhou Institute of Aeronautical Industry Management, graduation thesis (design) English translation. Network Engineering, class of 2012, class 0810073. Translation topic: Ant Colony System. Name: Han Min. Student ID: 081007308. Supervisor: Liu Shuanghong, Lecturer. May 19, 2012.

Ant Colony System: A Cooperative Learning Approach to the Traveling Salesman Problem

Abstract: This paper introduces the ant colony system (ACS), a distributed algorithm applied to the traveling salesman problem (TSP).

In ACS, a set of cooperating agents called ants cooperate to find good solutions to TSP instances.

Ants cooperate indirectly, by depositing a pheromone on the edges of the TSP graph while building solutions.

We study ACS by running experiments to understand its operation.

The results show that ACS outperforms other nature-inspired algorithms such as simulated annealing and evolutionary computation.

We conclude by comparing ACS-3-opt, a version of ACS augmented with a local search procedure, with some of the best-performing algorithms for symmetric and asymmetric TSPs.

Index terms: adaptive behavior, ant colony, emergent behavior, traveling salesman problem.

I. Introduction

Ant algorithms are based on a metaphor of natural ant colonies. Real ants are able to find the shortest path from a food source to their nest [3], [22] without using visual cues [24], by exploiting pheromone information. While walking, ants deposit pheromone on the ground and tend to move toward pheromone previously deposited by other ants.

In Fig. 1 we show how ants exploit pheromone to find the shortest path between two points. Consider Fig. 1(a): ants arrive at a decision point where they must choose to turn left or right. Since they have no clue about which is the better choice, they choose randomly; on average, half the ants can be expected to turn left and half to turn right. Ants moving from left to right (names beginning with L) and from right to left (names beginning with R) are both shown.

Figs. 1(b) and (c) show what happens in the immediately following instants, assuming all ants move at roughly the same speed. The dashed lines represent the pheromone deposited on the ground by the ants. Since the lower path is shorter than the upper one, more ants will visit it on average, and pheromone therefore accumulates on it faster. After a short transitory period, the difference in the amount of pheromone on the two paths becomes large enough to influence the decisions of new ants entering the system. Fig. 1(d) shows that, from then on, new ants prefer in probability the shorter path, since at the decision point they perceive more pheromone on it.
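The positive-feedback mechanism described for Fig. 1 can be reproduced in a few lines of simulation. The branch lengths, the deposit rule, and the pheromone-proportional choice rule below are illustrative assumptions, not the paper's experimental setup:

```python
import random

def double_bridge(n_ants=2000, short=1.0, long=2.0, deposit=1.0, seed=1):
    """Ants choose between two branches with probability proportional to
    each branch's pheromone level; each ant then deposits pheromone
    inversely proportional to the length of the branch it took."""
    random.seed(seed)
    pher = {"short": 1.0, "long": 1.0}   # equal pheromone: initially random
    picks = {"short": 0, "long": 0}
    for _ in range(n_ants):
        total = pher["short"] + pher["long"]
        branch = "short" if random.random() < pher["short"] / total else "long"
        picks[branch] += 1
        pher[branch] += deposit / (short if branch == "short" else long)
    return picks

picks = double_bridge()
```

Because the shorter branch receives twice the deposit per crossing, its pheromone (and hence its selection probability) snowballs, mirroring the transition from Fig. 1(a) to Fig. 1(d).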

College Entrance Exam English Reading Comprehension (Popular Science and Environmental Protection): Common-Error Analysis with Explanations


I. Popular science and environmental protection reading comprehension. Read the passage and choose the best answer to each question from the four options (A, B, C, and D).

A shark moving around the coastline is normally a worrying sight, but this waterborne drone threatens floating rubbish instead of people.

Developed by Dutch company RanMarine, the WasteShark takes nature as its inspiration with its whale shark-like mouth. Responsible for collecting waste, the drone will begin operations in Dubai Marina in November after a year of trials with local partner Ecocoast.

According to RanMarine, the WasteShark is available in both autonomous and remote-controlled models. Measuring just over five feet by three-and-a-half feet (1.5 meters by 1.1 meters), it can carry up to 352 pounds of rubbish (159.6 kg) and has an operational battery life of 16 hours.

By 2016 there were approximately 150 million tons of plastic in the world's oceans. One paper from December 2014 estimated that over a quarter of a million tons of ocean plastic pollution was afloat.

"WasteShark also has the abilities to gather air and water quality data, remove chemicals out of the water such as oil and heavy metals, and scan the seabed to read its depth and outlines," said Oliver Cunningham, one of the co-founders of RanMarine. "Fitted with a collision-avoidance system, the drone uses laser imaging detection and ranging technology to detect an object in its path and stop or back up if the object approaches."

"Our drones are designed to move through a water system, whether it's around the perimeter or through the city itself. The drones are that last line of defense between the city and the open ocean," added Cunningham. "WasteSharks are operating in Dubai, South Africa and the Netherlands and cost $17,000 for the remote-controlled model and just under $23,000 for the autonomous model."

Dubai-based operator Ecocoast has two WasteShark drones. Co-founder Dana Liparts says they will clean waterfronts for clients including hotels and environmental authorities, and that Ecocoast's intention is to have the collected rubbish recycled or upcycled. However, Liparts argues that cleaning waterways doesn't have a one-size-fits-all solution and requires a combination of new technology, preventative measures and changing people's attitudes towards littering.

(1) What do we know about the WasteShark?
A. It can frighten sharks away.  B. It is an ocean explorer.  C. It is a rubbish collector.  D. It can catch fish instead of people.

(2) What does Paragraph 4 mainly tell us?
A. The causes of ocean pollution.  B. The dangers of using plastics.  C. The severity of ocean garbage pollution.  D. The importance of ocean protection.

(3) What will the WasteShark do with an approaching object?
A. Avoid crashing into it.  B. Break it into pieces.  C. Swallow it.  D. Fly over it.

(4) Which of the following ideas does Liparts agree with?
A. The WasteShark should be used more widely.  B. More measures should be taken to make water clean.  C. The production cost of WasteSharks should be reduced.  D. People should take a positive attitude to new technology.

Answers: (1) C  (2) C  (3) A  (4) B

Analysis: This is an expository passage introducing a water-surface drone used to clean up rubbish floating on the water.

Adaptive Dynamic Surface Control for Uncertain Nonlinear Systems With Interval Type-2 Fuzzy Neural Networks


Adaptive Dynamic Surface Control for Uncertain Nonlinear Systems With Interval Type-2FuzzyNeural NetworksYeong-Hwa Chang,Senior Member,IEEE,and Wei-Shou ChanAbstract—This paper presents a new robust adaptive control method for a class of nonlinear systems subject to uncertainties. The proposed approach is based on an adaptive dynamic sur-face control,where the system uncertainties are approximately modeled by interval type-2fuzzy neural networks.In this paper, the robust stability of the closed-loop system is guaranteed by the Lyapunov theorem,and all error signals are shown to be uniformly ultimately bounded.In addition to simulations,the proposed method is applied to a real ball-and-beam system for performance evaluations.To highlight the system robustness, different initial settings of ball-and-beam parameters are con-sidered.Simulation and experimental results indicate that the proposed control scheme has superior responses,compared to conventional dynamic surface control.Index Terms—Ball-and-beam system,dynamic surface control, interval type-2fuzzy neural network.I.IntroductionN ONLINEAR phenomena,popularly existing in physical systems,have attracted a lot of attention in both industry and academia.The design of stabilizing controller for a class of nonlinear systems is usually full of challenges,especially for the systems with uncertainties.To achieve desired control goals,several techniques have been developed for uncertain nonlinear systems.In[1],a fuzzy hyperbolic model,consisting of fuzzy,neural network,and linear models,was proposed for a class of complex systems.In[2],the guaranteed cost control problem was presented for uncertain stochastic T-S fuzzy systems with multiple time delays.Recently,type-2fuzzy logic has been developed to deal with uncertain information in intelligent control systems[3]–[5].The concept of type-2fuzzy sets(T2FSs)is an extension of type-1fuzzy sets (T1FSs).Type-2fuzzy techniques with uncertainties in the antecedent and/or 
consequent membership functions have at-tracted much interest.The membership functions of T2FSs are Manuscript received June23,2012;revised December5,2012;accepted March4,2013.Date of publication April19,2013;date of current version January13,2014.This work was supported in part by the National Science Council,Taiwan,under Grant NSC101-2221-E-182-041-MY2,and by the High Speed Intelligent Communication Research Center,Chang Gung Uni-versity,Taiwan.This paper was recommended by Associate Editor H.Zhang. The authors are with the Department of Electrical Engineering, Chang Gung University,Kwei-Shan,Tao-Yuan333,Taiwan(e-mail: yhchang@.tw;d9721002@.tw).Color versions of one or more of thefigures in this paper are available online at .Digital Object Identifier10.1109/TCYB.2013.22535483-D,where a footprint of uncertainty is particularly addressed. The primary membership function grade in type-2can be a subset in the universe[0,1].Moreover,corresponding to each primary membership,there is a secondary membership,which is also in[0,1].T2FSs provide additional degrees of freedom such that modelling and minimizing the effects of uncertainties become more feasible.However,the same works usually cannot be executed by type-1fuzzy logic systems(T1FLSs) [6],[7].Although the types of membership functions are different,the structures of type-1fuzzy logic control(T1FLC) and type-2fuzzy logic control(T2FLC)are similar,in which a fuzzifier,a rule base,a fuzzy inference engine,and an output processor are included.In particular,the output processor of a T2FLC has a type reducer.The outcome of a type reducer is a type-1fuzzy set,and then a crisp output can be obtained through a defuzzifier.However,heavy computational loads would be the main concern with regard to T2FLSs[8].To simplify the computation complexity,the grades of secondary membership functions can be simply set to1,which becomes the so-called interval T2FLSs(IT2FLCs).IT2FLCs have been successfully applied in many control 
applications due to the capability of modeling uncertainties.In[9],an ant optimized fuzzy controller is applied to a wheeled-robot for wall-following under a reinforcement learning,where the interval type-2fuzzy sets are adopted in the antecedent part of each fuzzy rule to improve robustness.Recently,interval type-2 fuzzy neural networks(IT2FNN),combining the capability of interval type-2fuzzy reasoning to handle uncertain information and the capability of artificial neural networks to learn from processes,have been popularly addressed.In[10],an IT2FNN control system was designed for the motion control of a linear ultrasonic motor based moving stage.The proposed controller combines the merits of FLS and neural networks,and the update laws of the network parameters are obtained based on the backpropagation algorithm.However,the stability of the IT2FNN control system cannot be guaranteed.As a breakthrough in the area of nonlinear control,a system-atic and recursive design procedure,named the backstepping control,has been provided to design a stabilizing controller for a class of strict-feedback nonlinear systems.The backstepping control is basically attributed to the stabilization consideration of subsystems using the Lyapunov stability theory.A number of applications have been addressed in the utilization of backstepping techniques.In[11],the problem of a one degree-2168-2267c 2013IEEEFig.1.Structure of IT2FNN.of-freedom master-slave teleoperated device was discussed by the adaptive backstepping haptic control approach.The adaptive backstepping control was also applied to a plane-type 3-DOF precision positioning table,in which the unmodeled nonlinear effects and parameter uncertainties were considered [12].In[13],a backstepping controller design was investigated for the maneuvering of helicopters to autonomously track predefined trajectories.In[14],a voltage-controlled active magnetic bearing system was addressed,where a robust con-troller was designed to overcome 
unmodeled dynamics and parametric uncertainties by backstepping techniques.In[15], a backstepping oriented nonlinear controller was proposed to improve the shift quality of vehicles with clutch-to-clutch gearshifts.In[16],theflocking of multiple nonholonomic wheeled robots was studied,where distributed controllers were proposed with the backstepping techniques,graph theory and singular perturbation theory.The position control of a highly-nonlinear electrohydraulic system was considered,where an indirect adaptive backstepping was utilized to deal with param-eter variations[17].In[18],a neural network and backstepping output feedback controller was proposed for leader-follower-based multirobot formation control.In[19],an adaptive back-stepping approach was presented for the trajectory tracking problem of a closed-chain robot,where radial basis function neural networks were used to compensate complicated non-linear terms.In[20],an adaptive neural network backstepping output-feedback control was presented for a class of uncertain stochastic nonlinear strict-feedback systems with time-vary delays.Although backstepping control can be applied to a wide range of systems,however,there is a substantial drawback of the explosion of terms with conventional backstepping techniques.The dynamic surface control(DSC)has been proposed to overcome the“explosion of terms"problem, caused by repeated differentiations.Similar to the backstep-ping approach,DSC is also an iteratively systematic design procedure.The complexity arisen from the explosion terms in conventional backstepping control methods can be avoided by introducingfirst-order low passfilters[21]–[23].In[24], an integrator-backstepping-based DSC method for a two-axis piezoelectric micropositioning stage was proposed,where the robustness of tracking performance can be improved.In[25],a feedback linearization strategy including the dynamic surface control was developed to improve paper handling performance subject to 
noises.In[26]and[27],an adaptive DSC was adopted for mobile robots,where the exact parameters of robot kinematics and dynamics including actuator dynamics were assumed to be unknown a priori.In[28],a T-S fuzzy model based adaptive DSC was proposed for the balance control of a ball-and-beam system.Based on the T-S fuzzy modelling,the dynamic model of the ball-and-beam system was formulated as a strict feedback form with modelling errors.An observer-based adaptive robust controller was developed via DSC tech-niques for the purpose of achieving high-performance for servo mechanisms with unmeasurable states[29].A robust adaptive DSC approach was presented for a class of strict-feedback single-input-single-output nonlinear systems,where radial-basis-function neural networks were used to approximate those unknown system functions[30],[31].In[32],a novel-function approximator was constructed by combining a fuzzy-logic system with a Fourier series expansion to approximate the un-known system function depending nonlinearly on the periodic disturbances.Based on this approximator,an adaptive DSC was proposed for strict-feedback and periodically time-varyingCHANG AND CHAN:ADAPTIVE DYNAMIC SURFACE CONTROL FOR UNCERTAIN NONLINEAR SYSTEMS295systems with unknown control-gain functions.In[33]and [34],an adaptive fuzzy backstepping dynamic surface control approach was developed for a class of multiple-input multiple-output nonlinear systems and uncertain stochastic nonlinear strict-feedback systems,respectively.In[35],an adaptive DSC was proposed for a class of stochastic nonlinear systems with the standard output-feedback form,where neural networks were used to approximate the unknown system functions. In[36],a neural-network-based decentralized adaptive DSC control design method was proposed for a class of large-scale stochastic nonlinear systems,where neural network method was used to approximate the unknown nonlinear functions. 
In this paper,a robust adaptive control method is presented for a class of uncertain nonlinear systems.The proposed con-trol scheme is based on adaptive DSC techniques combining with IT2FNN learning(IT2FNNADSC).To achieve a desired tracking goal,a stabilizing control action can be systematically derived.Also,the uncertainty modelling can be attained using IT2FNNs.The IT2FNN has the ability to accurately approxi-mate nonlinear systems or functions with uncertainties based on the merits of both the type-2fuzzy logic and the neural network.From the Lyapunov stability theorem,it is shown that the closed-loop system is stable,and all error signals are uniformly ultimately bounded.In addition,a ball-and-beam system is utilized for performance validations,including both simulation and experimental results.Moreover,the form of strict-feedback systems is a special case of addressed general nonlinear systems.Thus,the proposed IT2FNNADSC control method can be easily applied to many applications of interest. 
The organization of this paper is as follows.In Section II, the dynamic characteristics of uncertain nonlinear systems are discussed.The structure and design philosophy of IT2FNNs and the design procedures of the adaptive DSC were intro-duced in Section III.The stability analysis is investigated in Section IV.In Section V,a ball-and-beam system is applied for performance validations,where both simulation and ex-perimental results are included.The concluding remarks are given in Section VI.II.PreliminariesA.Problem FormulationConsider a class of nonlinear systems as follows.˙x1=f1(x)+g1(x1)x2˙x2=f2(x)+g2(x1,x2)x3...˙x i=f i(x)+g i(x1,x2,...,x i)x i+1...˙x n=f n(x)+g n(x)u(t)(1)where i=1,2,...,n−1,x=[x1x2···x n]T∈R n is the state vector,u(t)∈R is the control input.The f i,g i, i=1,2,...,n are smooth system and virtual control-gain functions,respectively.In this paper,the state vector x is assumed to be in a compact set in R n.Definition1:System(1)is uniformly ultimately bounded with ultimate bound b if there exist positive constants band Fig.2.Interval type-2membership function with an uncertain mean. c,independent of t0≥0,and for every a∈(0,c),there is T=T(a,b)≥0,independent of t0such that[37]x(t0) ≤a⇒ x(t) ≤b,∀t≥t0+T.(2) With the consideration of system uncertainties,the perturbed model of(1)can be represented in˙x1=f1(x)+g1(x1)x2+ 1(x,t)˙x2=f2(x)+g2(x1,x2)x3+ 2(x,t)...˙x i=f i(x)+g i(x1,x2,...,x i)x i+1+ i(x,t)...˙x n=f n(x)+g n(x)u(t)+ n(x,t)(3)where i=1,2,...,n−1.The i(x,t),i=1,2,...,n are bounded uncertainties,consisting of unknown modelling errors,parameter uncertainties,and/or external disturbances. Denote x1d(t)as a control goal of x1(t).The control objective is to develop an adaptive controller such that the corresponding closed-loop system of(3)is stable,all errors are uniformly ultimately bounded,and the magnitude of the tracking error x1(t)−x1d(t)can be arbitrary small as t→∞. 
Assumption1:There exist some constantsγi>0,ψi>0, i=1,2,...,n such thatψi≤ g i(x1,x2,...,x i) ≤γi.(4) Assumption2:The reference command x1d(t)is a continu-ous and bounded function of degree2such that x1d,˙x1d,and ¨x1d are well defined.Remark1:In particular casesf i(x)=f i(x1,x2,...,x i),i=1,2,...,n(5) the representation of(1)turns out to be a class of nonlinear systems with a general strict-feedback form.B.Interval Type-2Fuzzy Neural NetworksRecently,interval type-2fuzzy neural networks handling uncertainty problems have been popularly addressed.The design procedures of IT2FNNs will be introduced in this section.First,the IF-THEN rules for an IT2FNN can be expressed asR j:IF x(1)1is˜F1j AND···AND x(1)n is˜F njTHEN y(5)is[w(4)jL w(4)jR],j=1,2,...,m(6)296IEEE TRANSACTIONS ON CYBERNETICS,VOL.44,NO.2,FEBRUARY2014 where˜F ij is an interval type-2fuzzy set of the antecedentpart,[w(4)jL w(4)jR]is a weighting interval set of the consequentpart,and x(1)i is the input of IT2FNN,i=1,2,...,n.Thenetwork structure of IT2FNN is shown in Fig.1,in whichthe superscript is used to identify the layer number.Thefunctionality of each layer of the IT2FNN will be introducedin sequence.1)Input Layer:In this layer,x(1)1,...,x(1)n,denoted as thestate variables of(3),are the inputs of IT2FNN.2)Membership Layer:In this layer,each node performsits work as an interval type-2fuzzy membership function.Asshown in Fig.2,the associated Gaussian membership functionregarding the i th input and j th nodes is represented asμ(2)˜F ij (x(1)i)=exp⎛⎝−12x(1)i−m(2)ijσ(2)ij2⎞⎠≡N(m(2)ij,σ(2)ij,x(1)i)(7)where m(2)ij∈[m(2)ijL m(2)ijR]is an uncertain mean,andσ(2)ij is a variance,i=1,...,n,j=1,...,m.In Fig.2,the upper mem-bership function(UMF)and the lower membership function(LMF)are denoted as¯μ(2)˜F ij (x(1)i)andμ(2)˜F ij(x(1)i),respectively.The upper and lower bounds of each type-2Gaussian membership function can be respectively represented as¯μ(2)˜F ij 
(x(1)i)=⎧⎪⎨⎪⎩N(m(2)ijL,σ(2)ij,x(1)i),x(1)i<m(2)ijL1,m(2)ijL≤x(1)i≤m(2)ijRN(m(2)ijR,σ(2)ij,x(1)i),x(1)i>m(2)ijR,(8)μ(2)˜F ij (x(1)i)=⎧⎨⎩N(m(2)ijL,σ(2)ij,x(1)i),x(1)i>m(2)ijL+m(2)ijR2N(m(2)ijR,σ(2)ij,x(1)i),x(1)i≤m(2)ijL+m(2)ijR2.(9)Therefore,the outputs of membership layer can be describedas a collection of intervals[μ(2)˜F ij (x(1)i)¯μ(2)˜F ij(x(1)i)],i=1,...,n,j=1,...,m.3)Rule Layer:The operation of this layer is to multiply the input signals and output the result of product.For example, the output of the j th node in this layer is calculated as[f(3)j ¯f(3)j]=n i=1μ(2)˜F ij(x(1)i) n i=1¯μ(2)˜F ij(x(1)i).(10)4)Type-Reduction Layer:Type-reducer has been applied to reduce a type-2fuzzy set to a type-reduced set[6],[38]. This type-reduced set is then defuzzified to derive a crisp output.To simplify the computation complexity,singleton output fuzzy sets with the center-of-sets type reduction method are considered such that the output y(4)=[y(4)L y(4)R]is an interval type-1set.Then,the output of the type-reduction layercan be expressed asy(4)= mj=1f(3)j w(4)jmj=1f(3)j(11)where w(4)j=[w(4)jL w(4)jR]is consequent interval weighting andf(3)j=[f(3)j ¯f(3)j]is the interval degree offiring.Withoutloss of generality,w(4)jR and w(4)jL are assumed to be assigned in ascending order,i.e.,w(4)1L≤w(4)2L≤···≤w(4)mL and w(4)1R≤w(4)2R≤···≤w(4)mR.With reference to[39]and[40],the calculation of y(4)L and y(4)R can be derived asy(4)L=lj=1¯f(3)jw(4)jL+mj=l+1f(3)jw(4)jLj=1f j+j=l+1f(3)j,(12) y(4)R=rj=1f(3)jw(4)jR+mj=r+1¯f(3)jw(4)jRrj=1f(3)j+mj=r+1¯f(3)j.(13)Obviously,the numbers l and r in(12)and(13)are essential in the switching between the lowerfiring strength f(3)jand the upperfiring strength¯f(3)j.In general,the values of l and r can be obtained from a sorting process.In[9],a simplified type of reduction was proposed as follows:ˆy(4)L=mj=1f(3)jw(4)jLmj=1f(3)j≡W T L F(14)ˆy(4)R=mj=1¯f(3)jw(4)jRmj=1¯f(3)j≡W T R¯F(15)where W L and W R are the weighting vectors connected between the rule 
layer and type-reduction layer,W T L=w(4)1L w(4)2L···w(4)mL,W T R=w(4)1R w(4)2R···w(4)mR;F and¯F are the weightedfiring strength vectorF=f(3)1mj=1f(3)jf(3)2mj=1f(3)j···f(3)mmj=1f(3)jT,¯F=¯f(3)1mj=1¯f(3)j¯f(3)2mj=1¯f(3)j···¯f(3)mmj=1¯f(3)jT.It is noted that only the lower and upper extremefiring-strengths are used in(14)and(15),respectively,such that the computation burden can be reduced.5)Output Layer:The output y(5)is determined as the average ofˆy(4)L andˆy(4)Ry(5)=ˆy(4)L+ˆy(4)R2.(16) Remark2:The results obtained so far are mainly on the structure description of IT2FNNs.It is more interesting to investigate how to model the uncertain terms in(3)with IT2FNNs.Moreover,the closed-loop stability affected by the approximation errors will be addressed later on.III.Design of IT2FNNADSCIn this paper,the dynamic surface control method is utilized to design a stabilizing controller for a class of uncertain nonlinear systems of(3).With IT2FNNs,uncertainty terms of(3)can be approximately modeled asi(x,t)=12W T i F i+εi(17) where W T i=W T iL W T iRis an ideal constant weights vector, F i=F T i¯F T iTis thefiring strength vector,andεi is the approximation error,i=1,2,...,n.In(17),there exist some unknown bounded constants W M andεM such that W i ≤CHANG AND CHAN:ADAPTIVE DYNAMIC SURFACE CONTROL FOR UNCERTAIN NONLINEAR SYSTEMS297 W M and εi ≤εM,respectively.Since there is no a prioriknowledge to determine exact W i s,an adaptive mechanismis applied such that the estimatedˆW i can asymptoticallyconverge to W i.Let the estimation error be defined as follows:˜Wi(t)=ˆW i(t)−W i,i=1,2,...,n.(18)For the aforementioned uncertain nonlinear systems,an adap-tive dynamic surface control(ADSC)will be proposed topreserve the closed-loop stability.The design procedures re-garding the ADSC are discussed in the following.To stayconcise,the notations i,i=1,2,...,n,will be used torepresent the uncertainties of(3).Step3.1:Defines1=x1−x1d.(19)It is desired to track the reference command x1d with 
giveninitial conditions.From(3)and(17),the derivative of(19)canbe obtained as˙s1=f1+g1x2+12W T1F1+ε1−˙x1d.(20)Let¯x2be a stabilizing function to be determined for(3)¯x2=g−11(−f1−12ˆW T1F1−k1s1+˙x1d)(21)where k1is a positive constant.An adaptive law is designated as˙ˆW 1=121F1s1−η 1ˆW1(22)with a symmetric positive definite matrix 1andη>0.Let x2d be the regulated output of¯x2through a low-passfilter τ2˙x2d(t)+x2d(t)=¯x2,x2d(0)=¯x2(0).(23) With a properly chosenτ2,the smoothed x2d can be equiva-lently considered the required¯x2.Step3.2:Analogous to the discussion in Step3.1,an error variable is defined ass2=x2−x2d.(24) From(3)and(17),the derivative of(24)can be obtained as ˙s2=f2+g2x3+12W T2F2+ε2−˙x2d.(25) Let¯x3be a stabilizing function to be determined for(3)¯x3=g−12(−f2−12ˆW T2F2−k2s2+˙x2d)(26)where k2is a positive constant.An adaptation law is designed as˙ˆW 2=122F2s2−η 2ˆW2(27)with a symmetric positive definite matrix 2.Let x3d be the smoothed variable of¯x3through a low-passfilterτ3˙x3d(t)+x3d(t)=¯x3,x3d(0)=¯x3(0).(28) With a properly chosenτ3,the smoothed x3d can be equiva-lently considered the required¯x3.Step3.i:The state tracking error of x i is defined ass i=x i−x id.(29) From(3)and(17),the derivative of(29)can be obtained as˙s i=f i+g i x i+1+12W T i F i+εi−˙x id.(30) Let¯x i+1be a stabilizing function to be determined for(3)¯x i+1=g−1i(−f i−12ˆW T i F i−k i s i+˙x id)(31) where k i is a positive constant.Similarly,an adaptive law is given as˙ˆWi=12i F i s i−η iˆW i(32) with a positive definite matrix i= T i.Similarly,let x i+1,d be the smoothed counterpart of¯x i+1through a low-passfilter τi+1˙x i+1,d(t)+x i+1,d(t)=¯x i+1,x i+1,d(0)=¯x i+1(0).(33) With a properly chosenτi+1,d,the smoothed x i+1,d can be equivalently considered the required¯x i+1.Step3.n:The state tracking error of x n is defined ass n=x n−x nd.(34) From(3)and(17),the derivative of(34)can be obtained as˙s n=f n+g n u(t)+12W T n F n+εn−˙x nd.(35) Let u(t)be a stabilizing function to be 
determined for (3) as

u(t) = g_n^{-1}( -f_n - (1/2)\hat{W}_n^T F_n - k_n s_n + \dot{x}_{nd} )   (36)

where k_n is a positive constant. Analogous to the aforementioned discussion, an adaptive law is given as

\dot{\hat{W}}_n = (1/2)\Gamma_n F_n s_n - \eta \Gamma_n \hat{W}_n   (37)

where \Gamma_n is a symmetric matrix, \Gamma_n > 0.

Remark 3: So far, only the design procedures combining the DSC approach and the IT2FNN have been identified step by step. The derivation of the control actions and learning rules in each step will be investigated later, where the closed-loop stability will also be discussed in detail.

IV. Stability Analysis

[IEEE Transactions on Cybernetics, vol. 44, no. 2, February 2014 — Chang and Chan, Adaptive Dynamic Surface Control for Uncertain Nonlinear Systems]

In this section, the overall stability of a class of uncertain nonlinear systems will be investigated, where a DSC controller with IT2FNN adaptation is adopted. In this paper, let \Omega_0 and \Omega_n be feasible sets of the IT2FNNADSC such that

\Omega_0 = { (x_{1d}, \dot{x}_{1d}, \ddot{x}_{1d}) : x_{1d}^2 + \dot{x}_{1d}^2 + \ddot{x}_{1d}^2 \le p_0 }   (38)
\Omega_n = { (s_i, \tilde{W}_i, h_i) : (1/2)[ \sum_{i=1}^n ( s_i^2 + \tilde{W}_i^T \Gamma_i^{-1} \tilde{W}_i ) + \sum_{i=2}^n h_i^2 ] \le p }   (39)

where p_0 and p are positive constants. It is noticed that \Omega_0 and \Omega_n are compact sets in R^3 and R^{(2m+2)n-1}.

Theorem 1: Consider a class of uncertain nonlinear systems of (3). With the DSC controller and IT2FNN adaptive laws designed as (19), (21)-(24), (26)-(29), (31)-(34), (36)-(37), the corresponding s_i and \tilde{W}_i are uniformly ultimately bounded, i = 1, 2, ..., n, where (38) and (39) are satisfied.

Proof: First, the errors between the stabilizing variables and the smoothed variables are defined as

h_{i+1} = x_{i+1,d} - \bar{x}_{i+1} = x_{i+1,d} - g_i^{-1}( -f_i - (1/2)\hat{W}_i^T F_i - k_i s_i + \dot{x}_{id} )   (40)

where i = 1, 2, ..., n-1. From (30), (31), and (40), the derivative of s_i can be derived as

\dot{s}_i = g_i s_{i+1} + g_i h_{i+1} - (1/2)\tilde{W}_i^T F_i + \epsilon_i - k_i s_i,  i = 1, 2, ..., n-1.   (41)

From (35) and (36), the derivative of the tracking error s_n can be obtained as

\dot{s}_n = -k_n s_n - (1/2)\tilde{W}_n^T F_n + \epsilon_n.   (42)

From (33) and (40), the dynamics of the smoothed variable x_{id} can be formulated as

\dot{x}_{id}(t) = (\bar{x}_i - x_{id})/\tau_i = -h_i/\tau_i,  i = 2, 3, ..., n.   (43)

Therefore, from (40) and (43), the dynamics of h_{i+1} can be derived as

\dot{h}_{i+1} = \dot{x}_{i+1,d} - \dot{\bar{x}}_{i+1} = -h_{i+1}/\tau_{i+1} + B_{i+1},  i = 1, 2, ..., n-1   (44)

in which

B_{i+1}(s_1, ..., s_n, h_2, ..., h_n, \hat{W}_1^T, ..., \hat{W}_i^T, x_{1d}, \dot{x}_{1d}, \ddot{x}_{1d})
  = g_i^{-2}[ ( \dot{f}_i + (1/2)\dot{\hat{W}}_i^T F_i + (1/2)\hat{W}_i^T \dot{F}_i + k_i \dot{s}_i + \dot{h}_i/\tau_i ) g_i - \dot{g}_i ( f_i + (1/2)\hat{W}_i^T F_i - \dot{x}_{id} + k_i s_i ) ].

Choose a Lyapunov function candidate as

V = (1/2)[ \sum_{i=1}^n ( s_i^2 + \tilde{W}_i^T \Gamma_i^{-1} \tilde{W}_i ) + \sum_{i=2}^n h_i^2 ].   (45)

The derivative of (45) can be obtained as

\dot{V} = \sum_{i=1}^n ( s_i \dot{s}_i + \tilde{W}_i^T \Gamma_i^{-1} \dot{\hat{W}}_i ) + \sum_{i=2}^n h_i \dot{h}_i.   (46)

From (41), (42), (44), and (46), it leads to

\dot{V} \le s_1( g_1 s_2 + g_1 h_2 - (1/2)\tilde{W}_1^T F_1 + \epsilon_1 - k_1 s_1 ) + s_2( g_2 s_3 + g_2 h_3 - (1/2)\tilde{W}_2^T F_2 + \epsilon_2 - k_2 s_2 ) + ... + s_i( g_i s_{i+1} + g_i h_{i+1} - (1/2)\tilde{W}_i^T F_i + \epsilon_i - k_i s_i ) + ... + s_n( -k_n s_n - (1/2)\tilde{W}_n^T F_n + \epsilon_n ) + \sum_{i=1}^n \tilde{W}_i^T \Gamma_i^{-1} \dot{\hat{W}}_i + \sum_{i=2}^n ( -h_i^2/\tau_i + |h_i B_i| ).   (47)

It is true that

s_i s_{i+1} \le s_i^2 + s_{i+1}^2,  s_i h_{i+1} \le s_i^2 + h_{i+1}^2,  i = 1, 2, ..., n-1   (48)

and

s_i \epsilon_i \le s_i^2 + \epsilon_i^2,  i = 1, 2, ..., n.   (49)

Therefore, the following inequality can be obtained by substituting (4), (22), (27), (32), (37), and (48), (49) into (47):

\dot{V} \le \sum_{i=1}^{n-1} ( 2\gamma_i s_i^2 + \gamma_i s_{i+1}^2 + \gamma_i h_{i+1}^2 ) + \sum_{i=1}^n ( -k_i s_i^2 + \epsilon_i^2 + s_i^2 ) + \sum_{i=1}^n ( -\eta \tilde{W}_i^T \hat{W}_i ) + \sum_{i=2}^n ( -h_i^2/\tau_i + |h_i B_i| ).   (50)

Let \alpha_0 > 0 be a constant such that k_1 = 2\gamma_1 + 1 + \alpha_0, k_n = \gamma_{n-1} + 1 + \alpha_0, and k_i = 2\gamma_i + \gamma_{i-1} + 1 + \alpha_0, i = 2, 3, ..., n-1, are satisfied. With reference to [30], using ||\tilde{W}_i||^2 - ||W_i||^2 \le 2\tilde{W}_i^T \hat{W}_i, it gives

\dot{V} \le \sum_{i=1}^n ( -\alpha_0 s_i^2 + \epsilon_i^2 ) + \sum_{i=1}^{n-1} \gamma_i h_{i+1}^2 + \sum_{i=2}^n ( -h_i^2/\tau_i + |h_i B_i| ) - \sum_{i=1}^n (1/2)\eta ( ||\tilde{W}_i||^2 - ||W_i||^2 )
  \le \sum_{i=1}^n [ -\alpha_0 s_i^2 - (\eta / (2\lambda_{max}(\Gamma_i^{-1}))) \tilde{W}_i^T \Gamma_i^{-1} \tilde{W}_i ] + \sum_{i=1}^{n-1} \gamma_i h_{i+1}^2 + \sum_{i=2}^n ( -h_i^2/\tau_i + |h_i B_i| ) + \sum_{i=1}^n ( \epsilon_i^2 + (\eta/2)||W_i||^2 ).   (51)

Denote \epsilon_i^2 + (\eta/2)||W_i||^2 = c_i; it leads to c_i \le \epsilon_M^2 + (\eta/2)W_M^2 = c_M, i = 1, 2, ..., n. From the positive definiteness of \Gamma_i, a positive \eta can be determined such that \eta / (2\lambda_{max}(\Gamma_i^{-1})) \ge \alpha_0. From (51), it results in

\dot{V} \le \sum_{i=1}^n -\alpha_0 ( s_i^2 + \tilde{W}_i^T \Gamma_i^{-1} \tilde{W}_i ) + n c_M + \sum_{i=1}^{n-1} \gamma_i h_{i+1}^2 + \sum_{i=2}^n ( -h_i^2/\tau_i + |h_i B_i| ).   (52)

Therefore, from the assumptions of boundedness, smoothness, and compactness, B_i is bounded on \Omega_0 x \Omega_n, i.e., there exists an M_i > 0 such that |B_i| \le M_i. Let

1/\tau_i = \gamma_{i-1} + M_i^2/(2\beta) + \alpha_0,  i = 2, 3, ..., n   (53)

where \beta is a positive number. Substituting (53) into (52), the following inequality can be obtained:

\dot{V} \le \sum_{i=1}^n -\alpha_0 ( s_i^2 + \tilde{W}_i^T \Gamma_i^{-1} \tilde{W}_i ) + n c_M + \sum_{i=2}^n [ -( M_i^2/(2\beta) + \alpha_0 ) h_i^2 + |h_i B_i| ].   (54)

Notice that |h_i B_i| \le h_i^2 B_i^2/(2\beta) + \beta/2, i = 2, 3, ..., n. Then, one can have

\dot{V} \le \sum_{i=1}^n -\alpha_0 ( s_i^2 + \tilde{W}_i^T \Gamma_i^{-1} \tilde{W}_i ) + n c_M + (n-1)\beta/2 + \sum_{i=2}^n [ -\alpha_0 h_i^2 - ( M_i^2 - B_i^2 ) h_i^2/(2\beta) ]
  \le \sum_{i=1}^n -\alpha_0 ( s_i^2 + \tilde{W}_i^T \Gamma_i^{-1} \tilde{W}_i ) - \sum_{i=2}^n \alpha_0 h_i^2 + n c_M + (n-1)\beta/2
  = -2\alpha_0 V + n c_M + (n-1)\beta/2.   (55)

Solving (55) leads to

V(t) \le (1/(2\alpha_0))[ n c_M + (n-1)\beta/2 ] + { V(0) - (1/(2\alpha_0))[ n c_M + (n-1)\beta/2 ] } e^{-2\alpha_0 t},  for all t > 0.   (56)

From (56), it can be observed that V(t) is uniformly ultimately bounded, and the proof is completed. ∎

Remark 4: By increasing the value of \alpha_0, the quantity (1/(2\alpha_0))( n c_M + (n-1)\beta/2 ) can be made arbitrarily small. Also, the convergence of V(t) implies that s_i, \tilde{W}_i, and h_i are uniformly ultimately bounded. Thus a large enough \alpha_0 can make s_1 = x_1 - x_{1d} arbitrarily small. Equivalently, the stability of the proposed IT2FNNADSC control system can be guaranteed, and the purpose of tracking can be achieved with bounded steady-state errors.

V. Example: Ball-and-Beam System

To verify the feasibility of the proposed IT2FNNADSC, an end-point driven ball-and-beam system is considered. The experiment setup is shown in Fig. 3, and the associated dynamic equations are described as follows [41]:

\dot{x}_1 = x_2 + \Delta_1(x, t)
\dot{x}_2 = A x_1 x_4^2 - A g x_3 + \Delta_2(x, t)
\dot{x}_3 = x_4 + \Delta_3(x, t)
\dot{x}_4 = B cos x_3 [ C cos(l_b x_3 / d) u - D x_4 cos(l_b x_3 / d) - E - H x_1 ] - B G x_1 x_2 x_4 + \Delta_4(x, t)   (57)

where x(t) = [x_1(t) x_2(t) x_3(t) x_4(t)]^T is the state vector, x_1 is the ball position (m), x_2 is the ball velocity (m/s), x_3 is the beam angle (rad), x_4 is the angular velocity of the beam (rad/s), A = ( 1 + m_B^{-1} J_B R^{-2} )^{-1}, B = ( J_B + J_b + m_B x_1^2 )^{-1}, C = n_g K_b l_b (R_a d)^{-1}, D = ( n_g K_b l_b )^2 ( R_a d^2 )^{-1}, E = 0.5 l_b m_b g, H = m_B g, G = 2 m_B, and u(t) is the input voltage of a DC motor. The physical parameters of the ball-and-beam system are listed in Table I.

Remark 5: The representation of (57) can be fitted into the general form of (3). In consequence, the proposed IT2FNNADSC will be applied for the position control of a ball-and-beam system, where the ball is desired to be asymptotically balanced at a designated position.

Table I. Parameters of the ball-and-beam system:
  m_B (mass of the ball) = 0.0217 kg;  m_b (mass of the beam) = 0.334 kg;  R (radius of the ball) = 0.00873 m;  l_b (beam length) = 0.4 m;  d (radius of the large gear) = 0.04 m;  J_B (ball inertia) = 6.803e-7 kg m^2;  J_b (beam inertia) = 0.017813 kg m^2;  K_b (back-EMF constant) = 0.1491 Nm/A;  R_a (armature resistance) = 18.91 Ohm;  g (acceleration of gravity) = 9.8 m/s^2;  n_g (gear ratio) = 4.2.

Table II. Parameters of ADSC:
  k_1 = 28, k_2 = 40.887, k_3 = 434.9437, k_4 = 27;  \tau_1 = N/A, \tau_2 = \tau_3 = \tau_4 = 0.025;  \Gamma_i = 100 I_32;  \alpha_0 = 25;  \eta = 0.5.

A. Simulation Results

From (21), (26), (31), and (57), stabilizing functions can be obtained as follows:

\bar{x}_2 = -(1/2)\hat{W}_1^T F_1 - k_1 s_1 + \dot{x}_{1d}   (58)
\bar{x}_3 = -(1/(A g))( -(1/2)\hat{W}_2^T F_2 - k_2 s_2 - A x_1 x_4^2 + \dot{x}_{2d} )   (59)

and

\bar{x}_4 = -(1/2)\hat{W}_3^T F_3 - k_3 s_3 + \dot{x}_{3d}   (60)

where the employed neural networks are adaptively obtained from (32). From (36) and (57), the IT2FNNADSC-based control action can be derived as

u(t) = ( 1 / ( B C cos x_3 cos(l_b x_3 / d) ) ) [ B cos x_3 ( D x_4 cos(l_b x_3 / d) + E + H x_1 ) + B G x_1 x_2 x_4 + \dot{x}_{4d} - k_4 s_4 - (1/2)\hat{W}_4^T F_4 ].   (61)

The parameters for the conventional DSC and the IT2FNNADSC are initially given in Tables II and III, in which

F_1 = F_2 = F_3 = F_4 = [ f^{(3)}_1 / \sum_{j=1}^{16} f^{(3)}_j  ...  f^{(3)}_{16} / \sum_{j=1}^{16} f^{(3)}_j,  \bar{f}^{(3)}_1 / \sum_{j=1}^{16} \bar{f}^{(3)}_j  ...  \bar{f}^{(3)}_{16} / \sum_{j=1}^{16} \bar{f}^{(3)}_j ]^T,
[ f^{(3)}_j  \bar{f}^{(3)}_j ] = [ \prod_{i=1}^4 \mu^{(2)}_{\tilde{F}_{ij}}  \prod_{i=1}^4 \bar{\mu}^{(2)}_{\tilde{F}_{ij}} ],  j = 1, 2, ..., 16,

and \hat{W}_1(0) = \hat{W}_2(0) = \hat{W}_3(0) = \hat{W}_4(0) = [ \hat{w}^{(4)}_{1L}(0) ... \hat{w}^{(4)}_{16L}(0)  \hat{w}^{(4)}_{1R}(0) ... \hat{w}^{(4)}_{16R}(0) ]^T. The rule table of the IT2FNN is shown in Table IV, where PS (positive small), PB (positive big), Neg (negative), Pos (positive), and Zero are interval type-2 fuzzy sets. The input and output membership functions for the corresponding interval type-2 fuzzy sets are shown in Fig. 4.
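The model constants A through H in (57) depend only on the Table I values (B also depends on the ball position x_1). As a quick numerical sanity check, the short Python sketch below evaluates them. The symbol groupings (for example D = (n_g K_b l_b)^2 / (R_a d^2)) are reconstructed from the garbled extraction, so treat them as assumptions rather than the paper's authoritative definitions.

```python
# Sanity check of the ball-and-beam model constants in (57).
# Parameter values are taken from Table I; the formulas for A, C, D,
# E, H, G are reconstructed from the text and are assumptions.

m_B = 0.0217      # mass of the ball, kg
m_b = 0.334       # mass of the beam, kg
R   = 0.00873     # radius of the ball, m
l_b = 0.4         # beam length, m
d   = 0.04        # radius of the large gear, m
J_B = 6.803e-7    # ball inertia, kg*m^2
J_b = 0.017813    # beam inertia, kg*m^2
K_b = 0.1491      # back-EMF constant, N*m/A
R_a = 18.91       # armature resistance, ohm
g   = 9.8         # gravity, m/s^2
n_g = 4.2         # gear ratio

A = 1.0 / (1.0 + J_B / (m_B * R**2))      # rolling-ball translation factor
C = n_g * K_b * l_b / (R_a * d)           # input-voltage gain
D = (n_g * K_b * l_b)**2 / (R_a * d**2)   # back-EMF damping (assumed grouping)
E = 0.5 * l_b * m_b * g                   # beam-weight torque
H = m_B * g                               # ball-weight term
G = 2.0 * m_B

def B_of(x1):
    """State-dependent inertia factor B(x1) = 1/(J_B + J_b + m_B*x1**2)."""
    return 1.0 / (J_B + J_b + m_B * x1**2)

if __name__ == "__main__":
    for name, val in [("A", A), ("C", C), ("D", D), ("E", E),
                      ("H", H), ("G", G), ("B(0)", B_of(0.0))]:
        print(f"{name} = {val:.5g}")
```

The computed A of roughly 0.709 is consistent with the usual rolling-ball kinematics, where A = (1 + J_B/(m_B R^2))^{-1} discounts gravity by the ball's rotational inertia.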

Paper Titles from the Big-4 Security Conferences


2009 and 2010 Papers: Big-4 Security Conferences (pvo, October 13, 2010)

NDSS 2009
1. Document Structure Integrity: A Robust Basis for Cross-site Scripting Defense. Y. Nadji, P. Saxena, D. Song
2. An Efficient Black-box Technique for Defeating Web Application Attacks. R. Sekar
3. Noncespaces: Using Randomization to Enforce Information Flow Tracking and Thwart Cross-Site Scripting Attacks. M. Van Gundy, H. Chen
4. The Blind Stone Tablet: Outsourcing Durability to Untrusted Parties. P. Williams, R. Sion, D. Shasha
5. Two-Party Computation Model for Privacy-Preserving Queries over Distributed Databases. S. S. M. Chow, J.-H. Lee, L. Subramanian
6. SybilInfer: Detecting Sybil Nodes using Social Networks. G. Danezis, P. Mittal
7. Spectrogram: A Mixture-of-Markov-Chains Model for Anomaly Detection in Web Traffic. Yingbo Song, Angelos D. Keromytis, Salvatore J. Stolfo
8. Detecting Forged TCP Reset Packets. Nicholas Weaver, Robin Sommer, Vern Paxson
9. Coordinated Scan Detection. Carrie Gates
10. RB-Seeker: Auto-detection of Redirection Botnets. Xin Hu, Matthew Knysz, Kang G. Shin
11. Scalable, Behavior-Based Malware Clustering. Ulrich Bayer, Paolo Milani Comparetti, Clemens Hlauschek, Christopher Kruegel, Engin Kirda
12. K-Tracer: A System for Extracting Kernel Malware Behavior. Andrea Lanzi, Monirul I. Sharif, Wenke Lee
13. RAINBOW: A Robust And Invisible Non-Blind Watermark for Network Flows. Amir Houmansadr, Negar Kiyavash, Nikita Borisov
14. Traffic Morphing: An Efficient Defense Against Statistical Traffic Analysis. Charles V. Wright, Scott E. Coull, Fabian Monrose
15. Recursive DNS Architectures and Vulnerability Implications. David Dagon, Manos Antonakakis, Kevin Day, Xiapu Luo, Christopher P. Lee, Wenke Lee
16. Analyzing and Comparing the Protection Quality of Security Enhanced Operating Systems. Hong Chen, Ninghui Li, Ziqing Mao
17. IntScope: Automatically Detecting Integer Overflow Vulnerability in X86 Binary Using Symbolic Execution. Tielei Wang, Tao Wei, Zhiqiang Lin, Wei Zou
18. Safe Passage for Passwords and Other Sensitive Data. Jonathan M. McCune, Adrian Perrig, Michael K. Reiter
19. Conditioned-safe Ceremonies and a User Study of an Application to Web Authentication. Chris Karlof, J. Doug Tygar, David Wagner
20. CSAR: A Practical and Provable Technique to Make Randomized Systems Accountable. Michael Backes, Peter Druschel, Andreas Haeberlen, Dominique Unruh

Oakland 2009
1. Wirelessly Pickpocketing a Mifare Classic Card. (Best Practical Paper Award) Flavio D. Garcia, Peter van Rossum, Roel Verdult, Ronny Wichers Schreur
2. Plaintext Recovery Attacks Against SSH. Martin R. Albrecht, Kenneth G. Paterson, Gaven J. Watson
3. Exploiting Unix File-System Races via Algorithmic Complexity Attacks. Xiang Cai, Yuwei Gui, Rob Johnson
4. Practical Mitigations for Timing-Based Side-Channel Attacks on Modern x86 Processors. Bart Coppens, Ingrid Verbauwhede, Bjorn De Sutter, Koen De Bosschere
5. Non-Interference for a Practical DIFC-Based Operating System. Maxwell Krohn, Eran Tromer
6. Native Client: A Sandbox for Portable, Untrusted x86 Native Code. (Best Paper Award) B. Yee, D. Sehr, G. Dardyk, B. Chen, R. Muth, T. Ormandy, S. Okasaka, N. Narula, N. Fullagar
7. Automatic Reverse Engineering of Malware Emulators. (Best Student Paper Award) Monirul Sharif, Andrea Lanzi, Jonathon Giffin, Wenke Lee
8. Prospex: Protocol Specification Extraction. Paolo Milani Comparetti, Gilbert Wondracek, Christopher Kruegel, Engin Kirda
9. Quantifying Information Leaks in Outbound Web Traffic. Kevin Borders, Atul Prakash
10. Automatic Discovery and Quantification of Information Leaks. Michael Backes, Boris Kopf, Andrey Rybalchenko
11. CLAMP: Practical Prevention of Large-Scale Data Leaks. Bryan Parno, Jonathan M. McCune, Dan Wendlandt, David G. Andersen, Adrian Perrig
12. De-anonymizing Social Networks. Arvind Narayanan, Vitaly Shmatikov
13. Privacy Weaknesses in Biometric Sketches. Koen Simoens, Pim Tuyls, Bart Preneel
14. The Mastermind Attack on Genomic Data. Michael T. Goodrich
15. A Logic of Secure Systems and its Application to Trusted Computing. Anupam Datta, Jason Franklin, Deepak Garg, Dilsun Kaynar
16. Formally Certifying the Security of Digital Signature Schemes. Santiago Zanella-Beguelin, Gilles Barthe, Benjamin Gregoire, Federico Olmedo
17. An Epistemic Approach to Coercion-Resistance for Electronic Voting Protocols. Ralf Kuesters, Tomasz Truderung
18. Sphinx: A Compact and Provably Secure Mix Format. George Danezis, Ian Goldberg
19. DSybil: Optimal Sybil-Resistance for Recommendation Systems. Haifeng Yu, Chenwei Shi, Michael Kaminsky, Phillip B. Gibbons, Feng Xiao
20. Fingerprinting Blank Paper Using Commodity Scanners. William Clarkson, Tim Weyrich, Adam Finkelstein, Nadia Heninger, Alex Halderman, Ed Felten
21. Tempest in a Teapot: Compromising Reflections Revisited. Michael Backes, Tongbo Chen, Markus Duermuth, Hendrik P. A. Lensch, Martin Welk
22. Blueprint: Robust Prevention of Cross-site Scripting Attacks for Existing Browsers. Mike Ter Louw, V. N. Venkatakrishnan
23. Pretty-Bad-Proxy: An Overlooked Adversary in Browsers' HTTPS Deployments. Shuo Chen, Ziqing Mao, Yi-Min Wang, Ming Zhang
24. Secure Content Sniffing for Web Browsers, or How to Stop Papers from Reviewing Themselves. Adam Barth, Juan Caballero, Dawn Song
25. It's No Secret: Measuring the Security and Reliability of Authentication via 'Secret' Questions. Stuart Schechter, A. J. Bernheim Brush, Serge Egelman
26. Password Cracking Using Probabilistic Context-Free Grammars. Matt Weir, Sudhir Aggarwal, Bill Glodek, Breno de Medeiros

USENIX Security 2009
1. Compromising Electromagnetic Emanations of Wired and Wireless Keyboards. (Outstanding Student Paper) Martin Vuagnoux, Sylvain Pasini
2. Peeping Tom in the Neighborhood: Keystroke Eavesdropping on Multi-User Systems. Kehuan Zhang, XiaoFeng Wang
3. A Practical Congestion Attack on Tor Using Long Paths. Nathan S. Evans, Roger Dingledine, Christian Grothoff
4. Baggy Bounds Checking: An Efficient and Backwards-Compatible Defense against Out-of-Bounds Errors. Periklis Akritidis, Manuel Costa, Miguel Castro, Steven Hand
5. Dynamic Test Generation to Find Integer Bugs in x86 Binary Linux Programs. David Molnar, Xue Cong Li, David A. Wagner
6. NOZZLE: A Defense Against Heap-spraying Code Injection Attacks. Paruj Ratanaworabhan, Benjamin Livshits, Benjamin Zorn
7. Detecting Spammers with SNARE: Spatio-temporal Network-level Automatic Reputation Engine. Shuang Hao, Nadeem Ahmed Syed, Nick Feamster, Alexander G. Gray, Sven Krasser
8. Improving Tor using a TCP-over-DTLS Tunnel. Joel Reardon, Ian Goldberg
9. Locating Prefix Hijackers using LOCK. Tongqing Qiu, Lusheng Ji, Dan Pei, Jia Wang, Jun (Jim) Xu, Hitesh Ballani
10. GATEKEEPER: Mostly Static Enforcement of Security and Reliability Policies for JavaScript Code. Salvatore Guarnieri, Benjamin Livshits
11. Cross-Origin JavaScript Capability Leaks: Detection, Exploitation, and Defense. Adam Barth, Joel Weinberger, Dawn Song
12. Memory Safety for Low-Level Software/Hardware Interactions. John Criswell, Nicolas Geoffray, Vikram Adve
13. Physical-layer Identification of RFID Devices. Boris Danev, Thomas S. Heydt-Benjamin, Srdjan Capkun
14. CCCP: Secure Remote Storage for Computational RFIDs. Mastooreh Salajegheh, Shane Clark, Benjamin Ransford, Kevin Fu, Ari Juels
15. Jamming-resistant Broadcast Communication without Shared Keys. Christina Popper, Mario Strasser, Srdjan Capkun
16. xBook: Redesigning Privacy Control in Social Networking Platforms. Kapil Singh, Sumeer Bhola, Wenke Lee
17. Nemesis: Preventing Authentication and Access Control Vulnerabilities in Web Applications. Michael Dalton, Christos Kozyrakis, Nickolai Zeldovich
18. Static Enforcement of Web Application Integrity Through Strong Typing. William Robertson, Giovanni Vigna
19. Vanish: Increasing Data Privacy with Self-Destructing Data. (Outstanding Student Paper) Roxana Geambasu, Tadayoshi Kohno, Amit A. Levy, Henry M. Levy
20. Efficient Data Structures for Tamper-Evident Logging. Scott A. Crosby, Dan S. Wallach
21. VPriv: Protecting Privacy in Location-Based Vehicular Services. Raluca Ada Popa, Hari Balakrishnan, Andrew J. Blumberg
22. Effective and Efficient Malware Detection at the End Host. Clemens Kolbitsch, Paolo Milani Comparetti, Christopher Kruegel, Engin Kirda, Xiaoyong Zhou, XiaoFeng Wang
23. Protecting Confidential Data on Personal Computers with Storage Capsules. Kevin Borders, Eric Vander Weele, Billy Lau, Atul Prakash
24. Return-Oriented Rootkits: Bypassing Kernel Code Integrity Protection Mechanisms. Ralf Hund, Thorsten Holz, Felix C. Freiling
25. Crying Wolf: An Empirical Study of SSL Warning Effectiveness. Joshua Sunshine, Serge Egelman, Hazim Almuhimedi, Neha Atri, Lorrie Faith Cranor
26. The Multi-Principal OS Construction of the Gazelle Web Browser. Helen J. Wang, Chris Grier, Alex Moshchuk, Samuel T. King, Piali Choudhury, Herman Venter

ACM CCS 2009
1. Attacking cryptographic schemes based on "perturbation polynomials". Martin Albrecht, Craig Gentry, Shai Halevi, Jonathan Katz
2. Filter-resistant code injection on ARM. Yves Younan, Pieter Philippaerts, Frank Piessens, Wouter Joosen, Sven Lachmund, Thomas Walter
3. False data injection attacks against state estimation in electric power grids. Yao Liu, Michael K. Reiter, Peng Ning
4. EPC RFID tag security weaknesses and defenses: passport cards, enhanced drivers licenses, and beyond. Karl Koscher, Ari Juels, Vjekoslav Brajkovic, Tadayoshi Kohno
5. An efficient forward private RFID protocol. Come Berbain, Olivier Billet, Jonathan Etrog, Henri Gilbert
6. RFID privacy: relation between two notions, minimal condition, and efficient construction. Changshe Ma, Yingjiu Li, Robert H. Deng, Tieyan Li
7. CoSP: a general framework for computational soundness proofs. Michael Backes, Dennis Hofheinz, Dominique Unruh
8. Reactive noninterference. Aaron Bohannon, Benjamin C. Pierce, Vilhelm Sjoberg, Stephanie Weirich, Steve Zdancewic
9. Computational soundness for key exchange protocols with symmetric encryption. Ralf Kusters, Max Tuengerthal
10. A probabilistic approach to hybrid role mining. Mario Frank, Andreas P. Streich, David A. Basin, Joachim M. Buhmann
11. Efficient pseudorandom functions from the decisional linear assumption and weaker variants. Allison B. Lewko, Brent Waters
12. Improving privacy and security in multi-authority attribute-based encryption. Melissa Chase, Sherman S. M. Chow
13. Oblivious transfer with access control. Jan Camenisch, Maria Dubovitskaya, Gregory Neven
14. NISAN: network information service for anonymization networks. Andriy Panchenko, Stefan Richter, Arne Rache
15. Certificateless onion routing. Dario Catalano, Dario Fiore, Rosario Gennaro
16. ShadowWalker: peer-to-peer anonymous communication using redundant structured topologies. Prateek Mittal, Nikita Borisov
17. Ripley: automatically securing web 2.0 applications through replicated execution. K. Vikram, Abhishek Prateek, V. Benjamin Livshits
18. HAIL: a high-availability and integrity layer for cloud storage. Kevin D. Bowers, Ari Juels, Alina Oprea
19. Hey, you, get off of my cloud: exploring information leakage in third-party compute clouds. Thomas Ristenpart, Eran Tromer, Hovav Shacham, Stefan Savage
20. Dynamic provable data possession. C. Christopher Erway, Alptekin Kupcu, Charalampos Papamanthou, Roberto Tamassia
21. On cellular botnets: measuring the impact of malicious devices on a cellular network core. Patrick Traynor, Michael Lin, Machigar Ongtang, Vikhyath Rao, Trent Jaeger, Patrick Drew McDaniel, Thomas La Porta
22. On lightweight mobile phone application certification. William Enck, Machigar Ongtang, Patrick Drew McDaniel
23. SMILE: encounter-based trust for mobile social services. Justin Manweiler, Ryan Scudellari, Landon P. Cox
24. Battle of Botcraft: fighting bots in online games with human observational proofs. Steven Gianvecchio, Zhenyu Wu, Mengjun Xie, Haining Wang
25. Fides: remote anomaly-based cheat detection using client emulation. Edward C. Kaiser, Wu-chang Feng, Travis Schluessler
26. Behavior based software theft detection. Xinran Wang, Yoon-chan Jhi, Sencun Zhu, Peng Liu
27. The fable of the bees: incentivizing robust revocation decision making in ad hoc networks. Steffen Reidt, Mudhakar Srivatsa, Shane Balfe
28. Effective implementation of the cell broadband engine isolation loader. Masana Murase, Kanna Shimizu, Wilfred Plouffe, Masaharu Sakamoto
29. On achieving good operating points on an ROC plane using stochastic anomaly score prediction. Muhammad Qasim Ali, Hassan Khan, Ali Sajjad, Syed Ali Khayam
30. On non-cooperative location privacy: a game-theoretic analysis. Julien Freudiger, Mohammad Hossein Manshaei, Jean-Pierre Hubaux, David C. Parkes
31. Privacy-preserving genomic computation through program specialization. Rui Wang, XiaoFeng Wang, Zhou Li, Haixu Tang, Michael K. Reiter, Zheng Dong
32. Feeling-based location privacy protection for location-based services. Toby Xu, Ying Cai
33. Multi-party off-the-record messaging. Ian Goldberg, Berkant Ustaoglu, Matthew Van Gundy, Hao Chen
34. The bayesian traffic analysis of mix networks. Carmela Troncoso, George Danezis
35. AS-awareness in Tor path selection. Matthew Edman, Paul F. Syverson
36. Membership-concealing overlay networks. Eugene Y. Vasserman, Rob Jansen, James Tyra, Nicholas Hopper, Yongdae Kim
37. On the difficulty of software-based attestation of embedded devices. Claude Castelluccia, Aurelien Francillon, Daniele Perito, Claudio Soriente
38. Proximity-based access control for implantable medical devices. Kasper Bonne Rasmussen, Claude Castelluccia, Thomas S. Heydt-Benjamin, Srdjan Capkun
39. XCS: cross channel scripting and its impact on web applications. Hristo Bojinov, Elie Bursztein, Dan Boneh
40. A security-preserving compiler for distributed programs: from information-flow policies to cryptographic mechanisms. Cedric Fournet, Gurvan Le Guernic, Tamara Rezk
41. Finding bugs in exceptional situations of JNI programs. Siliang Li, Gang Tan
42. Secure open source collaboration: an empirical study of Linus' law. Andrew Meneely, Laurie A. Williams
43. On voting machine design for verification and testability. Cynthia Sturton, Susmit Jha, Sanjit A. Seshia, David Wagner
44. Secure in-VM monitoring using hardware virtualization. Monirul I. Sharif, Wenke Lee, Weidong Cui, Andrea Lanzi
45. A metadata calculus for secure information sharing. Mudhakar Srivatsa, Dakshi Agrawal, Steffen Reidt
46. Multiple password interference in text passwords and click-based graphical passwords. Sonia Chiasson, Alain Forget, Elizabeth Stobert, Paul C. van Oorschot, Robert Biddle
47. Can they hear me now?: a security analysis of law enforcement wiretaps. Micah Sherr, Gaurav Shah, Eric Cronin, Sandy Clark, Matt Blaze
48. English shellcode. Joshua Mason, Sam Small, Fabian Monrose, Greg MacManus
49. Learning your identity and disease from research papers: information leaks in genome wide association study. Rui Wang, Yong Fuga Li, XiaoFeng Wang, Haixu Tang, Xiao-yong Zhou
50. Countering kernel rootkits with lightweight hook protection. Zhi Wang, Xuxian Jiang, Weidong Cui, Peng Ning
51. Mapping kernel objects to enable systematic integrity checking. Martim Carbone, Weidong Cui, Long Lu, Wenke Lee, Marcus Peinado, Xuxian Jiang
52. Robust signatures for kernel data structures. Brendan Dolan-Gavitt, Abhinav Srivastava, Patrick Traynor, Jonathon T. Giffin
53. A new cell counter based attack against tor. Zhen Ling, Junzhou Luo, Wei Yu, Xinwen Fu, Dong Xuan, Weijia Jia
54. Scalable onion routing with torsk. Jon McLachlan, Andrew Tran, Nicholas Hopper, Yongdae Kim
55. Anonymous credentials on a standard java card. Patrik Bichsel, Jan Camenisch, Thomas Gros, Victor Shoup
56. Large-scale malware indexing using function-call graphs. Xin Hu, Tzi-cker Chiueh, Kang G. Shin
57. Dispatcher: enabling active botnet infiltration using automatic protocol reverse-engineering. Juan Caballero, Pongsin Poosankam, Christian Kreibich, Dawn Xiaodong Song
58. Your botnet is my botnet: analysis of a botnet takeover. Brett Stone-Gross, Marco Cova, Lorenzo Cavallaro, Bob Gilbert, Martin Szydlowski, Richard A. Kemmerer, Christopher Kruegel, Giovanni Vigna

NDSS 2010
1. Server-side Verification of Client Behavior in Online Games. Darrell Bethea, Robert Cochran and Michael Reiter
2. Defeating Vanish with Low-Cost Sybil Attacks Against Large DHTs. S. Wolchok, O. S. Hofmann, N. Heninger, E. W. Felten, J. A. Halderman, C. J. Rossbach, B. Waters, E. Witchel
3. Stealth DoS Attacks on Secure Channels. Amir Herzberg and Haya Shulman
4. Protecting Browsers from Extension Vulnerabilities. Adam Barth, Adrienne Porter Felt, Prateek Saxena, and Aaron Boodman
5. Adnostic: Privacy Preserving Targeted Advertising. Vincent Toubiana, Arvind Narayanan, Dan Boneh, Helen Nissenbaum and Solon Barocas
6. FLAX: Systematic Discovery of Client-side Validation Vulnerabilities in Rich Web Applications. Prateek Saxena, Steve Hanna, Pongsin Poosankam and Dawn Song
7. Effective Anomaly Detection with Scarce Training Data. William Robertson, Federico Maggi, Christopher Kruegel and Giovanni Vigna
8. Large-Scale Automatic Classification of Phishing Pages. Colin Whittaker, Brian Ryner and Marria Nazif
9. A Systematic Characterization of IM Threats using Honeypots. Iasonas Polakis, Thanasis Petsas, Evangelos P. Markatos and Spiros Antonatos
10. On Network-level Clusters for Spam Detection. Zhiyun Qian, Zhuoqing Mao, Yinglian Xie and Fang Yu
11. Improving Spam Blacklisting Through Dynamic Thresholding and Speculative Aggregation. Sushant Sinha, Michael Bailey and Farnam Jahanian
12. Botnet Judo: Fighting Spam with Itself. A. Pitsillidis, K. Levchenko, C. Kreibich, C. Kanich, G. M. Voelker, V. Paxson, N. Weaver, S. Savage
13. Contractual Anonymity. Edward J. Schwartz, David Brumley and Jonathan M. McCune
14. A3: An Extensible Platform for Application-Aware Anonymity. Micah Sherr, Andrew Mao, William R. Marczak, Wenchao Zhou, Boon Thau Loo, and Matt Blaze
15. When Good Randomness Goes Bad: Virtual Machine Reset Vulnerabilities and Hedging Deployed Cryptography. Thomas Ristenpart and Scott Yilek
16. InvisiType: Object-Oriented Security Policies. Jiwon Seo and Monica Lam
17. A Security Evaluation of DNSSEC with NSEC3. Jason Bau and John Mitchell
18. On the Safety of Enterprise Policy Deployment. Yudong Gao, Ni Pan, Xu Chen and Z. Morley Mao
19. Where Do You Want to Go Today? Escalating Privileges by Pathname Manipulation. Suresh Chari, Shai Halevi and Wietse Venema
20. Joe-E: A Security-Oriented Subset of Java. Adrian Mettler, David Wagner and Tyler Close
21. Preventing Capability Leaks in Secure JavaScript Subsets. Matthew Finifter, Joel Weinberger and Adam Barth
22. Binary Code Extraction and Interface Identification for Security Applications. Juan Caballero, Noah M. Johnson, Stephen McCamant, and Dawn Song
23. Automatic Reverse Engineering of Data Structures from Binary Execution. Zhiqiang Lin, Xiangyu Zhang and Dongyan Xu
24. Efficient Detection of Split Personalities in Malware. Davide Balzarotti, Marco Cova, Christoph Karlberger, Engin Kirda, Christopher Kruegel and Giovanni Vigna

Oakland 2010
1. Inspector Gadget: Automated Extraction of Proprietary Gadgets from Malware Binaries. Clemens Kolbitsch, Thorsten Holz, Christopher Kruegel, Engin Kirda
2. Synthesizing Near-Optimal Malware Specifications from Suspicious Behaviors. Matt Fredrikson, Mihai Christodorescu, Somesh Jha, Reiner Sailer, Xifeng Yan
3. Identifying Dormant Functionality in Malware Programs. Paolo Milani Comparetti, Guido Salvaneschi, Clemens Kolbitsch, Engin Kirda, Christopher Kruegel, Stefano Zanero
4. Reconciling Belief and Vulnerability in Information Flow. Sardaouna Hamadou, Vladimiro Sassone, Catuscia Palamidessi
5. Towards Static Flow-Based Declassification for Legacy and Untrusted Programs. Bruno P. S. Rocha, Sruthi Bandhakavi, Jerry I. den Hartog, William H. Winsborough, Sandro Etalle
6. Non-Interference Through Secure Multi-Execution. Dominique Devriese, Frank Piessens
7. Object Capabilities and Isolation of Untrusted Web Applications. Sergio Maffeis, John C. Mitchell, Ankur Taly
8. TrustVisor: Efficient TCB Reduction and Attestation. Jonathan McCune, Yanlin Li, Ning Qu, Zongwei Zhou, Anupam Datta, Virgil Gligor, Adrian Perrig
9. Overcoming an Untrusted Computing Base: Detecting and Removing Malicious Hardware Automatically. Matthew Hicks, Murph Finnicum, Samuel T. King, Milo M. K. Martin, Jonathan M. Smith
10. Tamper Evident Microprocessors. Adam Waksman, Simha Sethumadhavan
11. Side-Channel Leaks in Web Applications: a Reality Today, a Challenge Tomorrow. Shuo Chen, Rui Wang, XiaoFeng Wang, Kehuan Zhang
12. Investigation of Triangular Spamming: a Stealthy and Efficient Spamming Technique. Zhiyun Qian, Z. Morley Mao, Yinglian Xie, Fang Yu
13. A Practical Attack to De-Anonymize Social Network Users. Gilbert Wondracek, Thorsten Holz, Engin Kirda, Christopher Kruegel
14. SCiFI - A System for Secure Face Identification. (Best Paper) Margarita Osadchy, Benny Pinkas, Ayman Jarrous, Boaz Moskovich
15. Round-Efficient Broadcast Authentication Protocols for Fixed Topology Classes. Haowen Chan, Adrian Perrig
16. Revocation Systems with Very Small Private Keys. Allison Lewko, Amit Sahai, Brent Waters
17. Authenticating Primary Users' Signals in Cognitive Radio Networks via Integrated Cryptographic and Wireless Link Signatures. Yao Liu, Peng Ning, Huaiyu Dai
18. Outside the Closed World: On Using Machine Learning For Network Intrusion Detection. Robin Sommer, Vern Paxson
19. All You Ever Wanted to Know about Dynamic Taint Analysis and Forward Symbolic Execution (but might have been afraid to ask). Thanassis Avgerinos, Edward Schwartz, David Brumley
20. State of the Art: Automated Black-Box Web Application Vulnerability Testing. Jason Bau, Elie Bursztein, Divij Gupta, John Mitchell
21. A Proof-Carrying File System. Deepak Garg, Frank Pfenning
22. Scalable Parametric Verification of Secure Systems: How to Verify Reference Monitors without Worrying about Data Structure Size. Jason Franklin, Sagar Chaki, Anupam Datta, Arvind Seshadri
23. HyperSafe: A Lightweight Approach to Provide Lifetime Hypervisor Control-Flow Integrity. Zhi Wang, Xuxian Jiang
24. How Good are Humans at Solving CAPTCHAs? A Large Scale Evaluation. Elie Bursztein, Steven Bethard, John C. Mitchell, Dan Jurafsky, Celine Fabry
25. Bootstrapping Trust in Commodity Computers. Bryan Parno, Jonathan M. McCune, Adrian Perrig
26. Chip and PIN is Broken. (Best Practical Paper) Steven J. Murdoch, Saar Drimer, Ross Anderson, Mike Bond
27. Experimental Security Analysis of a Modern Automobile. K. Koscher, A. Czeskis, F. Roesner, S. Patel, T. Kohno, S. Checkoway, D. McCoy, B. Kantor, D. Anderson, H. Shacham, S. Savage
28. On the Incoherencies in Web Browser Access Control Policies. Kapil Singh, Alexander Moshchuk, Helen J. Wang, Wenke Lee
29. ConScript: Specifying and Enforcing Fine-Grained Security Policies for JavaScript in the Browser. Leo Meyerovich, Benjamin Livshits
30. TaintScope: A Checksum-Aware Directed Fuzzing Tool for Automatic Software Vulnerability Detection. (Best Student Paper) Tielei Wang, Tao Wei, Guofei Gu, Wei Zou
31. A Symbolic Execution Framework for JavaScript. Prateek Saxena, Devdatta Akhawe, Steve Hanna, Stephen McCamant, Dawn Song, Feng Mao

USENIX Security 2010
1. Adapting Software Fault Isolation to Contemporary CPU Architectures. David Sehr, Robert Muth, Cliff Biffle, Victor Khimenko, Egor Pasko, Karl Schimpf, Bennet Yee, Brad Chen
2. Making Linux Protection Mechanisms Egalitarian with UserFS. Taesoo Kim and Nickolai Zeldovich
3. Capsicum: Practical Capabilities for UNIX. (Best Student Paper) Robert N. M. Watson, Jonathan Anderson, Ben Laurie, Kris Kennaway
4. Structuring Protocol Implementations to Protect Sensitive Data. Petr Marchenko, Brad Karp
5. PrETP: Privacy-Preserving Electronic Toll Pricing. Josep Balasch, Alfredo Rial, Carmela Troncoso, Bart Preneel, Ingrid Verbauwhede, Christophe Geuens
6. An Analysis of Private Browsing Modes in Modern Browsers. Gaurav Aggarwal, Elie Bursztein, Collin Jackson, Dan Boneh
7. BotGrep: Finding P2P Bots with Structured Graph Analysis. Shishir Nagaraja, Prateek Mittal, Chi-Yao Hong, Matthew Caesar, Nikita Borisov
8. Fast Regular Expression Matching Using Small TCAMs for Network Intrusion Detection and Prevention Systems. Chad R. Meiners, Jignesh Patel, Eric Norige, Eric Torng, Alex X. Liu
9. Searching the Searchers with SearchAudit. John P. John, Fang Yu, Yinglian Xie, Martin Abadi, Arvind Krishnamurthy
10. Toward Automated Detection of Logic Vulnerabilities in Web Applications. Viktoria Felmetsger, Ludovico Cavedon, Christopher Kruegel, Giovanni Vigna
11. Baaz: A System for Detecting Access Control Misconfigurations. Tathagata Das, Ranjita Bhagwan, Prasad Naldurg
12. Cling: A Memory Allocator to Mitigate Dangling Pointers. Periklis Akritidis
13. ZKPDL: A Language-Based System for Efficient Zero-Knowledge Proofs and Electronic Cash. Sarah Meiklejohn, C. Chris Erway, Alptekin Kupcu, Theodora Hinkle, Anna Lysyanskaya
14. P4P: Practical Large-Scale Privacy-Preserving Distributed Computation Robust against Malicious Users. Yitao Duan, John Canny, Justin Zhan
15. SEPIA: Privacy-Preserving Aggregation of Multi-Domain Network Events and Statistics. Martin Burkhart, Mario Strasser, Dilip Many, Xenofontas Dimitropoulos
16. Dude, Where's That IP? Circumventing Measurement-based IP Geolocation. Phillipa Gill, Yashar Ganjali, Bernard Wong, David Lie
17. Idle Port Scanning and Non-interference Analysis of Network Protocol Stacks Using Model Checking. Roya Ensafi, Jong Chun Park, Deepak Kapur, Jedidiah R. Crandall
18. Building a Dynamic Reputation System for DNS. Manos Antonakakis, Roberto Perdisci, David Dagon, Wenke Lee, Nick Feamster
19. Scantegrity II Municipal Election at Takoma Park: The First E2E Binding Governmental Election with Ballot Privacy. R. Carback, D. Chaum, J. Clark, J. Conway, A. Essex, P. S. Herrnson, T. Mayberry, S. Popoveniuc, R. L. Rivest, E. Shen, A. T. Sherman, P. L. Vora
20. Acoustic Side-Channel Attacks on Printers. Michael Backes, Markus Durmuth, Sebastian Gerling, Manfred Pinkal, Caroline Sporleder
21. Security and Privacy Vulnerabilities of In-Car Wireless Networks: A Tire Pressure Monitoring System Case Study. Ishtiaq Rouf, Rob Miller, Hossen Mustafa, Travis Taylor, Sangho Oh, Wenyuan Xu, Marco Gruteser, Wade Trappe, Ivan Seskar
22. VEX: Vetting Browser Extensions for Security Vulnerabilities. (Best Paper) Sruthi Bandhakavi, Samuel T. King, P. Madhusudan, Marianne Winslett
23. Securing Script-Based Extensibility in Web Browsers. Vladan Djeric, Ashvin Goel
24. AdJail: Practical Enforcement of Confidentiality and Integrity Policies on Web Advertisements. Mike Ter Louw, Karthik Thotta Ganesh, V. N. Venkatakrishnan
25. Realization of RF Distance Bounding. Kasper Bonne Rasmussen, Srdjan Capkun
26. The Case for Ubiquitous Transport-Level Encryption. Andrea Bittau, Michael Hamburg, Mark Handley, David Mazieres, Dan Boneh
27. Automatic Generation of Remediation Procedures for Malware Infections. Roberto Paleari, Lorenzo Martignoni, Emanuele Passerini, Drew Davidson, Matt Fredrikson, Jon Giffin, Somesh Jha
28. Re: CAPTCHAs - Understanding CAPTCHA-Solving Services in an Economic Context. Marti Motoyama, Kirill Levchenko, Chris Kanich, Damon McCoy, Geoffrey M. Voelker, Stefan Savage
29. Chipping Away at Censorship Firewalls with User-Generated Content. Sam Burnett, Nick Feamster, Santosh Vempala
30. Fighting Coercion Attacks in Key Generation using Skin Conductance. Payas Gupta, Debin Gao

ACM CCS 2010
1. Security Analysis of India's Electronic Voting Machines. Scott Wolchok, Erik Wustrow, J. Alex Halderman, Hari Prasad, Rop Gonggrijp
2. Dissecting One Click Frauds. Nicolas Christin, Sally S. Yanagihara, Keisuke Kamataki
3. @spam: The Underground on 140 Characters or Less. Chris Grier, Kurt Thomas, Vern Paxson, Michael Zhang
4. HyperSentry: Enabling Stealthy In-context Measurement of Hypervisor Integrity. Ahmed M. Azab, Peng Ning, Zhi Wang, Xuxian Jiang, Xiaolan Zhang, Nathan C. Skalsky
5. Trail of Bytes: Efficient Support for Forensic Analysis. Srinivas Krishnan, Kevin Z. Snow, Fabian Monrose
6. Survivable Key Compromise in Software Update Systems. Justin Samuel, Nick Mathewson, Justin Cappos, Roger Dingledine
7. A Methodology for Empirical Analysis of the Permission-Based Security Models and its Application to Android. David Barrera, H. Gunes Kayacik, Paul C. van Oorschot, Anil Somayaji
8. Mobile Location Tracking in Metropolitan Areas: malnets and others. Nathanial Husted, Steve Myers
9. On Pairing Constrained Wireless Devices Based on Secrecy of Auxiliary Channels: The Case of Acoustic Eavesdropping. Tzipora Halevi, Nitesh Saxena
10. PinDr0p: Using Single-Ended Audio Features to Determine Call Provenance. Vijay A. Balasubramaniyan, Aamir Poonawalla, Mustaque Ahamad, Michael T. Hunter, Patrick Traynor
11. Building Efficient Fully Collusion-Resilient Traitor Tracing and Revocation Schemes. Sanjam Garg, Abishek Kumarasubramanian, Amit Sahai, Brent Waters
12. Algebraic Pseudorandom Functions with Improved Efficiency from the Augmented Cascade. Dan Boneh, Hart Montgomery, Ananth Raghunathan
13. Practical Leakage-Resilient Pseudorandom Generators. Yu Yu, Francois-Xavier Standaert, Olivier Pereira, Moti Yung
14. Practical Leakage-Resilient Identity-Based Encryption from Simple Assumptions. Sherman S. M. Chow, Yevgeniy Dodis, Yannis Rouselakis, Brent Waters
15. Testing Metrics for Password Creation Policies by Attacking Large Sets of Revealed Passwords. Matt Weir, Sudhir Aggarwal, Michael Collins, Henry Stern
16. The Security of Modern Password Expiration: An Algorithmic Framework and Empirical Analysis. Yinqian Zhang, Fabian Monrose, Michael K. Reiter
17. Attacks and Design of Image Recognition CAPTCHAs. Bin Zhu, Jeff Yan, Chao Yang, Qiujie Li, Jiu Liu, Ning Xu, Meng Yi
18. Robusta: Taming the Native Beast of the JVM. Joseph Siefers, Gang Tan, Greg Morrisett
19. Retaining Sandbox Containment Despite Bugs in Privileged Memory-Safe Code. Justin Cappos, Armon Dadgar, Jeff Rasley, Justin Samuel, Ivan Beschastnikh, Cosmin Barsan, Arvind Krishnamurthy, Thomas Anderson
20. A Control Point for Reducing Root Abuse of File-System Privileges. Glenn Wurster, Paul C. van Oorschot
21. Modeling Attacks on Physical Unclonable Functions. Ulrich Ruehrmair, Frank Sehnke, Jan Soelter, Gideon Dror, Srinivas Devadas, Juergen Schmidhuber
22. Dismantling SecureMemory, CryptoMemory and CryptoRF. Flavio D. Garcia, Peter van Rossum, Roel Verdult, Ronny Wichers Schreur
23. Attacking and Fixing PKCS#11 Security Tokens. Matteo Bortolozzo, Matteo Centenaro, Riccardo Focardi, Graham Steel
24. An Empirical Study of Privacy-Violating Information Flows in JavaScript Web Applications. Dongseok Jang, Ranjit Jhala, Sorin Lerner, Hovav Shacham
25. DIFC Programs by Automatic Instrumentation. William Harris, Somesh Jha, Thomas Reps
26. Predictive Black-box Mitigation of Timing Channels. Aslan Askarov, Danfeng Zhang, Andrew Myers
27. In Search of an Anonymous and Secure Lookup: Attacks on Structured Peer-to-peer Anonymous Communication Systems. Qiyan Wang, Prateek Mittal, Nikita Borisov
28. Recruiting New Tor Relays with BRAIDS. Rob Jansen, Nicholas Hopper, Yongdae Kim
29. An Improved Algorithm for Tor Circuit Scheduling. Can Tang, Ian Goldberg
30. Dissent: Accountable Anonymous Group Messaging. Henry Corrigan-Gibbs, Bryan Ford
31. Abstraction by Set-Membership: Verifying Security Protocols and Web Services with Databases. Sebastian Moedersheim

2023 Gaokao (college entrance exam) English current-affairs reading 13: Science and Technology (with answer key)

2023 Gaokao English current-affairs reading: Science and Technology

01 (Hebei model senior high schools, 2022-2023 Grade 12 September survey English exam)

Housing ranks high among the numerous challenges that still need to be overcome before humans can colonize (征服) Mars. The brave pioneers that make the six-month voyage to the Red Planet will need a place to live in as soon as they land. While the best solution would be to have the structures ready before they get there, it has so far been a challenge given that most construction robots have never made it out of the laboratory. Now, there may be a bit of hope thanks to Massachusetts Institute of Technology's newly revealed Digital Construction Platform (DCP).

The DCP comprises a double arm system that is fitted on a tracked vehicle. As the larger arm moves, the smaller, precision motor robotic arm builds the structure by shooting out the necessary construction material, ranging from insulation foam (绝缘泡沫) to concrete. The team of researchers led by Ph.D. Steven Keating say that unlike other 3-D printers that are limited to building objects that fit within their overall enclosure, DCP's free moving systems can be used to construct structures of any size.

The team recently demonstrated the DCP's building skills on an empty field in Mountain View, CA. The robot began by creating a mold with expanding foam that hardens when dry. It then constructed the building, layer by layer, using sensors to raise itself higher as it progressed. The final product was a sturdy "home" that had 50-foot diameter walls and a 12-foot high roof with room for essentials like electricity wires and water pipes to be inserted inside. Even more impressive? It took a mere 14 hours to "print"!

The researchers' next plan is to make the DCP smart enough to analyze the environment where the structure is going to be built and determine the material densities best suited for the area. However, that's not even the best part.
Future DCP models are going to be solar-powered, autonomous, and, most importantly, capable of sourcing construction components from its surroundings. This means the robot can be sent to remote, disaster-stricken areas, and perhaps even to Mars, to build shelters using whatever material is available.1.What do we learn from the first paragraph?A.Housing pioneers on Mars is a reality.B.Colonizing Mars is out of the question.C.Building structures on Mars is in the testing phases.D.Finding a liveable place on Mars is a top priority.2.How does the DCP differ from other 3-D printers?A.It consumes less time.B.It comes in more different sizes.C.It is more environmentally friendly.D.It can build more diverse structures.3.What is the third paragraph mainly about?A.The successful case of the DCP.B.The working principle of the DCP.C.The instructions of using the DCP.D.The limitation of the DCP’s function.4.What might be the biggest highlight of future DCP ?A.Being powered by solar.B.Building shelters anywhere.C.Collecting building materials on site.D.Analyzing building material densities.02(2022·河南·洛宁县第一高级中学高三开学考试)Climate science has been rapidly advancing in recent years, but the foundations were laid hundreds of years ago.In the 1820s, French scientist Joseph Fourier theorized that Earth must have some way of keeping heat and that the atmosphere may play some role. In 1850, American scientist Eunice Newton Foote put thermometers(温度计)in glass bottles and experimented with placing them in sunlight. Inside the bottles, Foote compared dry air, wet air, N2, O2 and CO, and found that the bottle containing humid air warmed upmore and stayed hotter longer than the bottle containing dry air,and that it was followed by the bottle containing CO2. In 1859, Irish scientist John Tyndall began measuring how much heat different gases in the atmosphere absorb. 
And in 1896, Swedish scientist Svante Arrhenius concluded that more CO2in the atmosphere would cause the planet to heat up: These findings planted some of the earliest seeds of climate science.The first critical breakthrough happened in 1967 when Syukuro Manabe and Richard Wetherald connected energy absorbed by the atmosphere to the air movement vertically over Earth.They built a model which first included all the main physical processes related to climate changes. The predictions and the explanations based on their model still hold true in the real world almost half a century later.The model was improved in the 1980s by Klaus Hasselmann who connected short-term weather patterns with long-term climate changes. Hasselmann found that even random weather data could yield insight into broader patterns.“ The greatest uncertainty in the model remains what human beings will do. Figuring it out is 1,000 times harder than understanding the physics behind climate changes,” Manabe said.“ There are many things we can do to prevent climate change. 
The whole question is whether people will realize that something which will happen in20 or 30 years is something you have to respond to now.”So, it’s up to us to solve the problem that these pioneers helped the world understand.5.What does the word “humid” underlined in paragraph 2 mean?A.Cool.B.Cold.C.Dry.D.Wet.6.What is Klaus Hasselmann’s contribution to climate science?A.He found that CO2 causes global warming.B.He invented a unique measuring instrument.C.He improved Manabe and Wetherald’s model.D.He built a reliable model on climate change.7.What is paragraph 5 mainly about?A.The biggest problem with the climate model.B.The necessity for human beings to take action now.C.The challenge of understanding climate change.D.Measures to be taken to prevent climate change.8.Which of the following can be the best title for the text?A.Negative Effects of the Global WarmingB.Historic Breakthroughs in Climate ScienceC.Main Causes Leading to Climate ChangeD.Difficulties of Preventing Climate Change03(2022·河北邯郸·高三开学考试)To effectively interact with humans in crowded social settings, such as malls, hospitals, and other public spaces, robots should be able to actively participate in both group and one-to-one interactions. Most existing robots, however, have been found to perform much better when communicating with individual users than with groups of conversing humans. Hooman Hedayati and Daniel Szafir, two researchers at University of North Carolina at Chapel Hill, have recently developed a new data-driven technique that could improve how robots communicate with groups of humans.One of the reasons why many robots occasionally misbehave while participating in a group conversation is that their actions heavily rely on data collected by their sensors. 
Sensors, however, are prone (易于遭受) to errors, and can sometimes be disturbed by sudden movements and obstacles in the robot’s surroundings.“If the robot’s camera is masked by an obstacle for a second, the robot might not see that person, and as a result, it ignores the user,” Hedayati explained. “Based on my experience, users find these misbehaviors disturbing. The key goal of our recent project was to help robots detect and predict the position of an undetected person within the conversational group.”The technique developed by Hedayati and Szafir was trained on a series of existing datasets. By analyzing the positions of other speakers in a group, it can accurately predict the position of an undetected user.In the future, the new approach could help to enhance the conversational abilities of both existing and newly developed robots. This might in turn make them easier to serve in large public spaces, including malls, hospitals, and other public places. “The next step for us will be to improve the gaze behavior of robots in a conversational group. People find robots with a better gaze behavior more intelligent. 
We want to improve the gaze behavior of robots and make the human-robot conversational group more enjoyable for humans.” Hedayati said.9.What is the technique developed by Hedayati and Szafir based on?A.Data.B.Cameras.C.Existing robots.D.Social settings.10.What is mainly talked about in Paragraph 2?A.The working procedure of robots.B.The ability of robots to communicate.C.The experience of the researchers.D.The shortcomings of existing robots.11.What will happen if a robot’s camera is blocked?A.It will stop working.B.It will break down.C.It will abuse its user.D.It will misbehave.12.What do we know about the new data-driven technique?A.It is considered a failure.B.It has been used in malls.C.It gets satisfactory result.D.It only works with new robots.04(2021·浙江湖州·高三阶段练习)Researchers say they have used brain waves of a paralyzed man who cannot speak to produce words from his thoughts onto a computer. A team led by Dr. Edward Chang at the University of California, San Francisco, carried out the experiment.“Most of us take for granted how easily we communicate through speech,” Chang told The Associated Press. “It’s exciting to think we’re at the very beginning of a new chapter, a new field to ease the difficulties of patients who lost that ability.” The researchers admit that such communication methods for paralysis victims will require years of additional research. But, they say the new study marks an important step forward.Today, paralysis victims who cannot speak or write have very limited ways of communicating. For example, a victim can use a pointer attached to a hat that lets him move his head to touch words or letters on a screen. Other devices can pick up a person’s eye movements. But such methods are slow and a very limited replacement for speech.Using brain signals to work around disabilities is currently a hot field of study. Chang’s team built their experiment on earlier work. The process uses brain waves that normally control the voice system. 
The researchers implanted electrodes on the surface of the man's brain, over the area that controls speech. A computer observed the patterns when he attempted to say common words such as "water" or "good." Over time, the computer became able to differentiate between 50 words that could form more than 1,000 sentences. Repeatedly given questions such as "How are you today?" or "Are you thirsty," the device enabled the man to answer "I am very good" or "No, I am not thirsty." The words were not voiced, but were turned into text on the computer.

In an opinion article published with the study, Harvard brain doctors Leigh Hochberg and Sydney Cash called the work a "pioneering study." The two doctors said the technology might one day help people with injuries, strokes or diseases like Lou Gehrig's. People with such diseases have brains that "prepare messages for delivery, but those messages are trapped," they wrote.

13. How is the new method different from the current ones?
A. It involves a patient's brain waves. B. It can pick up a patient's eye movements. C. It is a very limited replacement for speech. D. It can help a patient regain his speech ability.
14. What does the underlined word "differentiate" in paragraph 4 mean?
A. Organize. B. Learn. C. Distinguish. D. Speak.
15. What was Leigh Hochberg and Sydney Cash's attitude towards the study?
A. Positive. B. Negative. C. Doubtful. D. Critical.
16. Which of the following is the best title for the text?
A. Researchers Found Good Methods to Help Paralyzed Patients B. Device Uses Brain Waves of Paralyzed Man to Help Him Communicate C. Years of Additional Work Needed to Improve the Communication Methods D. Device Uses Brain Waves of Paralyzed Man to Cure His Speaking Disability

05 (2022, Anhui, Grade 12 term-opening exam)

When people think of farming today, they usually picture a tractor (拖拉机) rather than horses in the farmland. That's because tractors that relied on engines revolutionized farming in the late 1800s.
Now a new type of tractor can do the same in the 21st century.Agriculture has been changing dramatically in the last few decades. The push for innovation is fed by the need to produce larger amounts of food for a growing world population. Autonomous tractors may be the key to solving this challenge. They can be used to carry out labor-intensive farming while allowing farmersto do other work. A big plus is that it can increase crop output while reducing costs because the autonomous machines can work in all weather conditions without any rest.Part of push for automation is a shortage of farm workers due to people’s desire to have higher paying jobs with better work conditions. Farm owners are competing against companies like Amazon and restaurants that are raising wages to attract workers. “With labor shortages and the increase in the hourly wages that have to be paid in order to be competitive, all of a sudden automation seems like a more reasonable decision,” said David Swartz, a professor at Penn State University.Many believe the time is ripe for an autonomous revolution because robotics is already in use in agriculture. One company that is working to bring autonomous tractors into main stream farming is Blue and White Robotics, an Israeli agricultural technology company, whose mission is to make a fully autonomous farm. The company released an autonomous tractor kit in February 2021 that can be fixed on any existing tractor. The kit includes camera detection, speed controls, as well as an anti-crash system. Blue and White’s kit is being used by West Coast growers in the US. 
It may soon come to a farm near you.

17. What contributes to the agricultural revolution according to Paragraph 2?
A. The urge to feed more people. B. The extreme weather conditions. C. The need to reduce farming cost. D. The desire for automatic farming.
18. What is Swartz's attitude to automation?
A. Critical. B. Negative. C. Supportive. D. Indifferent.
19. What can be inferred about Blue and White's kit?
A. It has been widely used. B. It can be made in many firms. C. It can improve safety of tractors. D. It will detect the way of farming.
20. What may be a suitable title for the text?
A. Automation Is Transforming Agriculture B. Big Companies Are Making A Difference C. Driverless Tractors Are Worth Investing D. Traditional Farming Is Falling out of Date

Answers: 1. C  2. D  3. A  4. C
[Note] Passage 01 is an expository text.

Ant Colony Algorithm (Ant Algorithm): Foreign-Language Literature with Chinese-English Parallel Translation

Fundamentals of Artificial Intelligence

Artificial intelligence (AI) is a research field at the intersection of computer science, cognitive psychology, statistics, and other disciplines. Its goal is to give computers capabilities that resemble human intelligence: perceiving, learning, reasoning, understanding, and making decisions. Studying AI requires a grasp of some foundational concepts. This article introduces those fundamentals and surveys their applications in different fields.

1. Machine Learning

Machine learning is one of the most important branches of AI. It is a method of training and improving computer algorithms from data and experience. Machine learning algorithms can automatically discover and extract patterns and regularities in data and use them to make predictions and decisions. Machine learning falls into three types: supervised, unsupervised, and reinforcement learning. Supervised learning takes labeled data as input and learns the relationship between inputs and outputs in order to make predictions. Unsupervised learning uses unlabeled data, learning the data's distribution and patterns to perform clustering and classification. Reinforcement learning guides learning through rewards and penalties: the machine learns by trial and error in an environment and optimizes its own behavior.
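The three paradigms above can be made concrete with a minimal supervised-learning sketch: a 1-nearest-neighbor classifier that labels a new point with the label of its closest labeled training point. The data and labels here are invented purely for illustration.

```python
# Minimal supervised-learning sketch: 1-nearest-neighbor classification.

def euclidean_sq(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict_1nn(train_x, train_y, query):
    """Return the label of the training point closest to the query."""
    best = min(range(len(train_x)), key=lambda i: euclidean_sq(train_x[i], query))
    return train_y[best]

# Labeled training data: two groups around (0, 0) and (5, 5).
train_x = [(0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (4.9, 5.0)]
train_y = ["low", "low", "high", "high"]

print(predict_1nn(train_x, train_y, (0.3, 0.2)))   # query near the first group
print(predict_1nn(train_x, train_y, (4.5, 5.2)))   # query near the second group
```

Unsupervised learning would instead receive `train_x` without `train_y` and have to discover the two groups itself, which is exactly the clustering setting discussed throughout this collection.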

2. Deep Learning

Deep learning is an important branch of machine learning and the key technology behind AI's major breakthroughs in recent years. By simulating the structure and function of neural networks in the human brain, it performs more complex pattern recognition and decision-making. Deep learning computes with multi-layer neural network models and trains the networks' weights and parameters on large amounts of data. An important characteristic of deep learning is end-to-end learning: the whole mapping from input to output is learned automatically by the network, with no need for hand-designed features. This has led to major breakthroughs in image recognition, speech recognition, natural language processing, and related fields.
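As a rough illustration of the multi-layer computation described above, the sketch below runs a forward pass through a tiny two-layer network. The weights are hand-picked for readability; in real deep learning they are learned from data, and nothing here is a trained model.

```python
# Forward pass of a toy 2-input, 2-hidden-unit, 1-output network.
import math

def relu(x):
    return max(0.0, x)

def dense(inputs, weights, biases, activation):
    """One fully connected layer: activation(W . x + b), row by row."""
    return [activation(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# Hand-picked (not learned) parameters, for illustration only.
w1 = [[1.0, -1.0], [-1.0, 1.0]]   # hidden layer weights
b1 = [0.0, 0.0]
w2 = [[1.0, 1.0]]                 # output layer weights
b2 = [0.0]

def forward(x):
    hidden = dense(x, w1, b1, relu)
    # Sigmoid output squashes the result into (0, 1).
    return dense(hidden, w2, b2, lambda v: 1 / (1 + math.exp(-v)))[0]

print(forward([0.5, 0.5]))  # equal inputs cancel in this net, prints 0.5
print(forward([1.0, 0.0]))  # unequal inputs push the output above 0.5
```

Stacking many such layers, and learning `w1`, `b1`, ... by gradient descent instead of writing them by hand, is what the "deep" in deep learning refers to.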

3. Natural Language Processing

Natural language processing (NLP) is the area of AI concerned with language. It studies methods and techniques for enabling computers to understand, generate, and process natural language. NLP covers lexical analysis, syntactic analysis, semantic analysis, and language generation, and is applied in text classification, information retrieval, machine translation, intelligent dialogue systems, and similar tasks. In recent years, advances in deep learning have brought enormous progress in both understanding and generating natural language.
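A minimal illustration of turning text into something an algorithm can classify or retrieve is the bag-of-words representation sketched below. It is a deliberately simplified stand-in for the lexical-analysis step mentioned above: whitespace tokenization and raw counts, with no stemming or semantics.

```python
# Bag-of-words: map each document to a vector of word counts
# over a shared vocabulary.

def bag_of_words(docs):
    """Return (sorted vocabulary, one count-vector per document)."""
    vocab = sorted({w for d in docs for w in d.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    vectors = []
    for d in docs:
        v = [0] * len(vocab)
        for w in d.lower().split():
            v[index[w]] += 1
        vectors.append(v)
    return vocab, vectors

docs = ["ants cluster data",
        "data mining finds patterns",
        "ants follow pheromone trails"]
vocab, vectors = bag_of_words(docs)
print(vocab)
print(vectors)
```

Once documents are vectors, the clustering algorithms surveyed elsewhere in this collection apply to text directly.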

4. Computer Vision

Computer vision is the area of AI concerned with image and video processing. It studies how to enable computers to perceive, understand, and process image and video information.

Evaluation of Power Customer Value Based on a Genetically Improved Ant Colony Clustering Algorithm

1 Evaluation Index System for Power Customer Value

Customer lifetime value (CLV) covers all the revenue an enterprise can obtain from transactions with a customer over the whole customer life cycle. Existing CLV research takes three perspectives: the enterprise, the customer, and the enterprise-customer relationship; research from the enterprise perspective provides the main criterion by which enterprises segment their customers [13]. This paper therefore evaluates power customer value from the enterprise perspective. Seen this way, CLV has two components: current value, the value a customer will bring to the enterprise in the future if its present consumption pattern stays unchanged, and potential value, the additional value that can be gained by stimulating the customer's purchasing or by having the customer recommend products and services to others. Drawing on the existing literature [2-5,13] and the actual operating conditions of power customers, this paper constructs the evaluation index system for power customer value shown in Fig. 1.
ABSTRACT: Evaluating power customer value is an important step in the allocation of power supply enterprises' service resources. Building on an analysis of the ant colony clustering algorithm (ACCA), and to address the blind setup of its parameter combinations, its slow convergence, and its tendency to fall into local convergence, a new method for evaluating power customer value is proposed in which ACCA is improved by a genetic algorithm (GA). In the proposed method the parameters of ACCA are optimized by GA, and the clustering evaluation of power customer value is then performed. A case study shows that the clustering performance of the proposed method is clearly enhanced, convergence is faster, local convergence is avoided, and subjective factors in the evaluation are reduced. The method is applied to ten industrial customers of an urban power supply company, and the evaluation results show that it is accurate, efficient and practicable. The features of various types of power customers are summarized and suggestions on the optimal allocation of power supply enterprises' service resources are put forward.

KEY WORDS: power customer value; evaluation index system; ant colony clustering algorithm (ACCA); ACCA optimized by genetic algorithm; service resources optimization
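The abstract's central idea, using a GA to search for good ACCA parameters before running the clustering itself, can be sketched in miniature. The fitness function below is a hypothetical stand-in for the clustering-quality score of an ACCA run at a given similarity-threshold parameter (the paper's actual parameters and fitness measure are not specified here); the GA machinery of selection, crossover, and mutation is the real point.

```python
# Toy GA searching one clustering parameter; the fitness is a stand-in.
import random

random.seed(42)

def fitness(alpha):
    """Hypothetical clustering-quality score of an ACCA run with
    similarity threshold `alpha`; peaks at alpha = 0.6 in this toy."""
    return -(alpha - 0.6) ** 2

def genetic_search(pop_size=20, generations=40):
    pop = [random.uniform(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]              # selection (keeps the best)
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            child = (a + b) / 2                   # crossover: blend two parents
            child += random.gauss(0.0, 0.05)      # mutation: small perturbation
            children.append(min(1.0, max(0.0, child)))
        pop = elite + children
    return max(pop, key=fitness)

best = genetic_search()
print(round(best, 3))  # close to the toy optimum 0.6
```

In the paper's setting, evaluating `fitness` would mean actually running ACCA with the candidate parameter combination and scoring the resulting clusters, which is what makes the GA wrapper worthwhile.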

Air-Ground Tracking Control of an Unmanned Helicopter Based on Obstacle-Avoidance Path Planning

Journal of Liaoning Petrochemical University (辽宁石油化工大学学报), Vol. 44, No. 1, Feb. 2024
Citation: YANG Jingwen, LI Tao, YANG Xin, et al. Collaborative Air-Ground Tracking Control of Unmanned Helicopter Based on Obstacle Avoidance Path Planning [J]. Journal of Liaoning Petrochemical University, 2024, 44(1): 71-79.

Collaborative Air-Ground Tracking Control of an Unmanned Helicopter Based on Obstacle-Avoidance Path Planning
YANG Jingwen, LI Tao, YANG Xin, JI Mingfei (College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu 211106, China)

Abstract: The paper studies the obstacle-avoidance and control problems of an unmanned aerial helicopter (UAH) during cooperative air-ground tracking and proposes a new approach to obstacle-avoidance path planning and tracking-controller design. First, for an uncertain linear UAH model, two-dimensional environmental information within the UAH's warning range is processed and judged, an avoidance strategy is derived with the help of a wall-following algorithm, and the flight angle of the avoidance path and a tracking speed that compensates for the detour distance are calculated. Second, the avoidance method is extended to the three-dimensional case: the UAH's flight angle is determined from obstacle information in the horizontal and vertical directions, reducing as far as possible the detour distance introduced by the avoidance step. Third, on the basis of the two avoidance algorithms, an artificial neural network (ANN) is introduced to estimate the model uncertainty, and tracking-control design schemes are established by combining feedforward compensation and optimal control. Simulations demonstrate the effectiveness of the proposed obstacle-avoidance strategy and control algorithm.

Keywords: unmanned aerial helicopter; air-ground tracking; obstacle-avoidance path planning; artificial neural network

The unmanned aerial helicopter (UAH) is an aircraft that uses its onboard avionics to complete control tasks autonomously, either under long-range radio remote control or without human intervention.
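As a rough, two-dimensional illustration of the wall-following idea used in the avoidance strategy, the sketch below steers an agent across an occupancy grid with the right-hand rule: at each step it prefers to turn right, then go straight, then left, then reverse. The paper's continuous dynamics, warning range, and detour-compensating speed are all omitted; the grid and start/goal cells are invented for illustration.

```python
# Right-hand wall following on a 2-D occupancy grid (0 = free, 1 = wall).
FREE, WALL = 0, 1
GRID = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]  # N, E, S, W

def free(r, c):
    return 0 <= r < len(GRID) and 0 <= c < len(GRID[0]) and GRID[r][c] == FREE

def wall_follow(start, heading, goal, max_steps=100):
    """Walk from start toward goal, keeping walls on the right."""
    r, c = start
    path = [start]
    for _ in range(max_steps):
        if (r, c) == goal:
            return path
        for turn in (1, 0, -1, 2):        # right, straight, left, back
            d = (heading + turn) % 4
            dr, dc = DIRS[d]
            if free(r + dr, c + dc):
                heading = d
                r, c = r + dr, c + dc
                path.append((r, c))
                break
    return path

path = wall_follow(start=(0, 0), heading=1, goal=(4, 4))
print(path[-1])  # the goal cell, reached by hugging the walls
```

The paper's contribution layers continuous flight angles and speed compensation on top of this discrete intuition, and then extends it to three dimensions.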

Survey of clustering data mining techniques

A Survey of Clustering Data Mining Techniques

Pavel Berkhin
Yahoo!, Inc.
pberkhin@

Summary. Clustering is the division of data into groups of similar objects. It disregards some details in exchange for data simplification. Informally, clustering can be viewed as data modeling concisely summarizing the data, and, therefore, it relates to many disciplines from statistics to numerical analysis. Clustering plays an important role in a broad range of applications, from information retrieval to CRM. Such applications usually deal with large datasets and many attributes. Exploration of such data is a subject of data mining. This survey concentrates on clustering algorithms from a data mining perspective.

1 Introduction

The goal of this survey is to provide a comprehensive review of different clustering techniques in data mining. Clustering is a division of data into groups of similar objects. Each group, called a cluster, consists of objects that are similar to one another and dissimilar to objects of other groups. Representing data by fewer clusters necessarily loses certain fine details (akin to lossy data compression), but achieves simplification. It represents many data objects by few clusters, and hence, it models data by its clusters. Data modeling puts clustering in a historical perspective rooted in mathematics, statistics, and numerical analysis. From a machine learning perspective clusters correspond to hidden patterns, the search for clusters is unsupervised learning, and the resulting system represents a data concept. Therefore, clustering is unsupervised learning of a hidden data concept. Data mining applications add to a general picture three complications: (a) large databases, (b) many attributes, (c) attributes of different types. This imposes on a data analysis severe computational requirements. Data mining applications include scientific data exploration, information retrieval, text mining, spatial databases, Web analysis, CRM, marketing, medical diagnostics, computational biology, and many others. They
present real challenges to classic clustering algorithms. These challenges led to the emergence of powerful broadly applicable data mining clustering methods developed on the foundation of classic techniques. They are subject of this survey.

1.1 Notations

To fix the context and clarify terminology, consider a dataset X consisting of data points (i.e., objects, instances, cases, patterns, tuples, transactions) x_i = (x_i1, ..., x_id), i = 1:N, in attribute space A, where each component x_il ∈ A_l, l = 1:d, is a numerical or nominal categorical attribute (i.e., feature, variable, dimension, component, field). For a discussion of attribute data types see [106]. Such point-by-attribute data format conceptually corresponds to a N × d matrix and is used by a majority of algorithms reviewed below. However, data of other formats, such as variable length sequences and heterogeneous data, are not uncommon.

The simplest subset in an attribute space is a direct Cartesian product of sub-ranges C = ∏ C_l ⊂ A, C_l ⊂ A_l, called a segment (i.e., cube, cell, region). A unit is an elementary segment whose sub-ranges consist of a single category value, or of a small numerical bin. Describing the numbers of data points per every unit represents an extreme case of clustering, a histogram. This is a very expensive representation, and not a very revealing one. User driven segmentation is another commonly used practice in data exploration that utilizes expert knowledge regarding the importance of certain sub-domains. Unlike segmentation, clustering is assumed to be automatic, and so it is a machine learning technique.

The ultimate goal of clustering is to assign points to a finite system of k subsets (clusters). Usually (but not always) subsets do not intersect, and their union is equal to a full dataset with the possible exception of outliers:

X = C_1 ∪ ... ∪ C_k ∪ C_outliers,  C_i ∩ C_j = ∅, i ≠ j.

1.2 Clustering Bibliography at Glance

General references regarding clustering include [110], [205], [116], [131], [63], [72], [165], [119], [75], [141], [107], [91]. A very good introduction to contemporary data mining clustering techniques can be found in the textbook [106].

There is a close relationship between clustering and many other fields. Clustering has always been used in statistics [10] and science [158]. The classic introduction into pattern recognition framework is given in [64]. Typical applications include speech and character recognition. Machine learning clustering algorithms were applied to image segmentation and computer vision [117]. For statistical approaches to pattern recognition see [56] and [85]. Clustering can be viewed as a density estimation problem. This is the subject of traditional multivariate statistical estimation [197]. Clustering is also widely
Though we do not even try to review particular applications,many important ideas are related to the specificfields.Clustering in data mining was brought to life by intense developments in information retrieval and text mining[52], [206],[58],spatial database applications,for example,GIS or astronomical data,[223],[189],[68],sequence and heterogeneous data analysis[43],Web applications[48],[111],[81],DNA analysis in computational biology[23],and many others.They resulted in a large amount of application-specific devel-opments,but also in some general techniques.These techniques and classic clustering algorithms that relate to them are surveyed below.1.3Plan of Further PresentationClassification of clustering algorithms is neither straightforward,nor canoni-cal.In reality,different classes of algorithms overlap.Traditionally clustering techniques are broadly divided in hierarchical and partitioning.Hierarchical clustering is further subdivided into agglomerative and divisive.The basics of hierarchical clustering include Lance-Williams formula,idea of conceptual clustering,now classic algorithms SLINK,COBWEB,as well as newer algo-rithms CURE and CHAMELEON.We survey these algorithms in the section Hierarchical Clustering.While hierarchical algorithms gradually(dis)assemble points into clusters (as crystals grow),partitioning algorithms learn clusters directly.In doing so they try to discover clusters either by iteratively relocating points between subsets,or by identifying areas heavily populated with data.Algorithms of thefirst kind are called Partitioning Relocation Clustering. 
They are further classified into probabilistic clustering(EM framework,al-gorithms SNOB,AUTOCLASS,MCLUST),k-medoids methods(algorithms PAM,CLARA,CLARANS,and its extension),and k-means methods(differ-ent schemes,initialization,optimization,harmonic means,extensions).Such methods concentrate on how well pointsfit into their clusters and tend to build clusters of proper convex shapes.Partitioning algorithms of the second type are surveyed in the section Density-Based Partitioning.They attempt to discover dense connected com-ponents of data,which areflexible in terms of their shape.Density-based connectivity is used in the algorithms DBSCAN,OPTICS,DBCLASD,while the algorithm DENCLUE exploits space density functions.These algorithms are less sensitive to outliers and can discover clusters of irregular shape.They usually work with low-dimensional numerical data,known as spatial data. Spatial objects could include not only points,but also geometrically extended objects(algorithm GDBSCAN).4Pavel BerkhinSome algorithms work with data indirectly by constructing summaries of data over the attribute space subsets.They perform space segmentation and then aggregate appropriate segments.We discuss them in the section Grid-Based Methods.They frequently use hierarchical agglomeration as one phase of processing.Algorithms BANG,STING,WaveCluster,and FC are discussed in this section.Grid-based methods are fast and handle outliers well.Grid-based methodology is also used as an intermediate step in many other algorithms (for example,CLIQUE,MAFIA).Categorical data is intimately connected with transactional databases.The concept of a similarity alone is not sufficient for clustering such data.The idea of categorical data co-occurrence comes to the rescue.The algorithms ROCK,SNN,and CACTUS are surveyed in the section Co-Occurrence of Categorical Data.The situation gets even more aggravated with the growth of the number of items involved.To help with this problem the effort is shifted from 
data clustering to pre-clustering of items or categorical attribute values. Development based on hyper-graph partitioning and the algorithm STIRR exemplify this approach.

Many other clustering techniques are developed, primarily in machine learning, that either have theoretical significance, are used traditionally outside the data mining community, or do not fit in previously outlined categories. The boundary is blurred. In the section Other Developments we discuss the emerging direction of constraint-based clustering, the important research field of graph partitioning, and the relationship of clustering to supervised learning, gradient descent, artificial neural networks, and evolutionary methods.

Data Mining primarily works with large databases. Clustering large datasets presents scalability problems reviewed in the section Scalability and VLDB Extensions. Here we talk about algorithms like DIGNET, about BIRCH and other data squashing techniques, and about Hoeffding or Chernoff bounds.

Another trait of real-life data is high dimensionality. Corresponding developments are surveyed in the section Clustering High Dimensional Data. The trouble comes from a decrease in metric separation when the dimension grows. One approach to dimensionality reduction uses attributes transformations (DFT, PCA, wavelets). Another way to address the problem is through subspace clustering (algorithms CLIQUE, MAFIA, ENCLUS, OPTIGRID, PROCLUS, ORCLUS). Still another approach clusters attributes in groups and uses their derived proxies to cluster objects. This double clustering is known as co-clustering.

Issues common to different clustering methods are overviewed in the section General Algorithmic Issues. We talk about assessment of results, determination of appropriate number of clusters to build, data preprocessing, proximity measures, and handling of outliers.

For reader's convenience we provide a classification of clustering algorithms closely followed by this survey:

• Hierarchical Methods
  Agglomerative Algorithms
  Divisive Algorithms
• Partitioning Relocation Methods
  Probabilistic Clustering
  K-medoids Methods
  K-means Methods
• Density-Based Partitioning Methods
  Density-Based Connectivity Clustering
  Density Functions Clustering
• Grid-Based Methods
• Methods Based on Co-Occurrence of Categorical Data
• Other Clustering Techniques
  Constraint-Based Clustering
  Graph Partitioning
  Clustering Algorithms and Supervised Learning
  Clustering Algorithms in Machine Learning
• Scalable Clustering Algorithms
• Algorithms For High Dimensional Data
  Subspace Clustering
  Co-Clustering Techniques

1.4 Important Issues

The properties of clustering algorithms we are primarily concerned with in data mining include:

• Type of attributes algorithm can handle
• Scalability to large datasets
• Ability to work with high dimensional data
• Ability to find clusters of irregular shape
• Handling outliers
• Time complexity (we frequently simply use the term complexity)
• Data order dependency
• Labeling or assignment (hard or strict vs. soft or fuzzy)
• Reliance on a priori knowledge and user defined parameters
• Interpretability of results

Realistically, with every
algorithm we discuss only some of these properties. The list is in no way exhaustive.For example,as appropriate,we also discuss algorithms ability to work in pre-defined memory buffer,to restart,and to provide an intermediate solution.6Pavel Berkhin2Hierarchical ClusteringHierarchical clustering builds a cluster hierarchy or a tree of clusters,also known as a dendrogram.Every cluster node contains child clusters;sibling clusters partition the points covered by their common parent.Such an ap-proach allows exploring data on different levels of granularity.Hierarchical clustering methods are categorized into agglomerative(bottom-up)and divi-sive(top-down)[116],[131].An agglomerative clustering starts with one-point (singleton)clusters and recursively merges two or more of the most similar clusters.A divisive clustering starts with a single cluster containing all data points and recursively splits the most appropriate cluster.The process contin-ues until a stopping criterion(frequently,the requested number k of clusters) is achieved.Advantages of hierarchical clustering include:•Flexibility regarding the level of granularity•Ease of handling any form of similarity or distance•Applicability to any attribute typesDisadvantages of hierarchical clustering are related to:•Vagueness of termination criteria•Most hierarchical algorithms do not revisit(intermediate)clusters once constructed.The classic approaches to hierarchical clustering are presented in the sub-section Linkage Metrics.Hierarchical clustering based on linkage metrics re-sults in clusters of proper(convex)shapes.Active contemporary efforts to build cluster systems that incorporate our intuitive concept of clusters as con-nected components of arbitrary shape,including the algorithms CURE and CHAMELEON,are surveyed in the subsection Hierarchical Clusters of Arbi-trary Shapes.Divisive techniques based on binary taxonomies are presented in the subsection Binary Divisive Partitioning.The subsection Other 
Developments contains information related to incremental learning, model-based clustering, and cluster refinement.

In hierarchical clustering our regular point-by-attribute data representation frequently is of secondary importance. Instead, hierarchical clustering frequently deals with the N×N matrix of distances (dissimilarities) or similarities between training points, sometimes called a connectivity matrix. So-called linkage metrics are constructed from elements of this matrix. The requirement of keeping a connectivity matrix in memory is unrealistic. To relax this limitation, different techniques are used to sparsify (introduce zeros into) the connectivity matrix. This can be done by omitting entries smaller than a certain threshold, by using only a certain subset of data representatives, or by keeping with each point only a certain number of its nearest neighbors (for nearest neighbor chains see [177]). Notice that the way we process the original (dis)similarity matrix and construct a linkage metric reflects our a priori ideas about the data model.

With the (sparsified) connectivity matrix we can associate the weighted connectivity graph G(X,E) whose vertices X are data points, and whose edges E and their weights are defined by the connectivity matrix. This establishes a connection between hierarchical clustering and graph partitioning. One of the most striking developments in hierarchical clustering is the algorithm BIRCH. It is discussed in the section Scalable VLDB Extensions.

Hierarchical clustering initializes a cluster system as a set of singleton clusters (agglomerative case) or a single cluster of all points (divisive case) and proceeds iteratively merging or splitting the most appropriate cluster(s) until the stopping criterion is achieved. The appropriateness of a cluster(s) for merging or splitting depends on the (dis)similarity of cluster(s) elements.
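The nearest-neighbor sparsification mentioned above can be sketched in a few lines. This is an illustrative sketch only; the function name and the brute-force search are our own (real implementations would use a spatial index rather than sorting all pairwise distances):

```python
# Sparsify a connectivity structure by keeping, for each point, only the
# edges to its K nearest neighbors (brute force; illustrative only).
import math

def knn_sparsify(points, k):
    edges = set()
    for i, p in enumerate(points):
        # sort all other points by distance to p, keep the k closest
        nearest = sorted(
            (j for j in range(len(points)) if j != i),
            key=lambda j: math.dist(p, points[j]),
        )[:k]
        for j in nearest:
            edges.add((i, j))
    return edges

points = [(0.0, 0.0), (1.0, 0.0), (10.0, 0.0)]
edges = knn_sparsify(points, 1)
```

Note that the resulting edge set is directed and need not be symmetric: a point on the fringe keeps an edge toward the bulk of the data, while no point may keep an edge back to it.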
This reflects a general presumption that clusters consist of similar points. An important example of dissimilarity between two points is the distance between them.

To merge or split subsets of points rather than individual points, the distance between individual points has to be generalized to the distance between subsets. Such a derived proximity measure is called a linkage metric. The type of a linkage metric significantly affects hierarchical algorithms, because it reflects a particular concept of closeness and connectivity. Major inter-cluster linkage metrics [171], [177] include single link, average link, and complete link. The underlying dissimilarity measure (usually, distance) is computed for every pair of nodes with one node in the first set and another node in the second set. A specific operation such as minimum (single link), average (average link), or maximum (complete link) is applied to the pair-wise dissimilarity measures:

d(C1, C2) = Op{d(x, y) : x ∈ C1, y ∈ C2}

Early examples include the algorithm SLINK [199], which implements single link (Op = min), Voorhees' method [215], which implements average link (Op = Avr), and the algorithm CLINK [55], which implements complete link (Op = max). Single link is related to the problem of finding the Euclidean minimal spanning tree [224] and has O(N²) complexity. The methods using inter-cluster distances defined in terms of pairs of nodes (one in each respective cluster) are called graph methods. They do not use any cluster representation other than a set of points. This name naturally relates to the connectivity graph G(X,E) introduced above, because every data partition corresponds to a graph partition. Such methods can be augmented by so-called geometric methods, in which a cluster is represented by its central point. Under the assumption of numerical attributes, the center point is defined as a centroid or an average of two cluster centroids subject to agglomeration. This results in centroid, median, and minimum variance linkage metrics. All of the above linkage metrics can be derived from
the Lance-Williams updating formula [145]:

d(Ci ∪ Cj, Ck) = a(i)·d(Ci, Ck) + a(j)·d(Cj, Ck) + b·d(Ci, Cj) + c·|d(Ci, Ck) − d(Cj, Ck)|

Here a, b, c are coefficients corresponding to a particular linkage. This formula expresses a linkage metric between a union of the two clusters and a third cluster in terms of the underlying nodes. The Lance-Williams formula is crucial to making the (dis)similarity computations feasible. Surveys of linkage metrics can be found in [170], [54]. When distance is used as a base measure, linkage metrics capture inter-cluster proximity. However, a similarity-based view that results in intra-cluster connectivity considerations is also used, for example, in the original average link agglomeration (Group-Average Method) [116].

Under reasonable assumptions, such as the reducibility condition (graph methods satisfy this condition), linkage metric methods suffer from O(N²) time complexity [177]. Despite the unfavorable time complexity, these algorithms are widely used. As an example, the algorithm AGNES (AGglomerative NESting) [131] is used in S-Plus. When the connectivity N×N matrix is sparsified, graph methods directly dealing with the connectivity graph G can be used. In particular, the hierarchical divisive MST (Minimum Spanning Tree) algorithm is based on graph partitioning [116].

2.1 Hierarchical Clusters of Arbitrary Shapes

For spatial data, linkage metrics based on Euclidean distance naturally generate clusters of convex shapes. Meanwhile, visual inspection of spatial images frequently discovers clusters with curvy appearance.

Guha et al. [99] introduced the hierarchical agglomerative clustering algorithm CURE (Clustering Using REpresentatives). This algorithm has a number of novel features of general importance. It takes special steps to handle outliers and to provide labeling in the assignment stage. It also uses two techniques to achieve scalability: data sampling (section 8) and data partitioning. CURE creates p partitions, so that fine granularity clusters are constructed in
partitions first. A major feature of CURE is that it represents a cluster by a fixed number, c, of points scattered around it. The distance between two clusters used in the agglomerative process is the minimum of the distances between two scattered representatives. Therefore, CURE takes a middle approach between the graph (all-points) methods and the geometric (one centroid) methods. Single and average link closeness are replaced by the representatives' aggregate closeness. Selecting representatives scattered around a cluster makes it possible to cover non-spherical shapes. As before, agglomeration continues until the requested number k of clusters is achieved. CURE employs one additional trick: the originally selected scattered points are shrunk to the geometric centroid of the cluster by a user-specified factor α. Shrinkage suppresses the effect of outliers; outliers happen to be located further from the cluster centroid than the other scattered representatives. CURE is capable of finding clusters of different shapes and sizes, and it is insensitive to outliers. Because CURE uses sampling, estimation of its complexity is not straightforward. For low-dimensional data the authors provide a complexity estimate of O(N²_sample) defined in terms of the sample size. More exact bounds depend on input parameters: shrink factor α, number of representative points c, number of partitions p, and the sample size. Figure 1(a) illustrates agglomeration in CURE. Three clusters, each with three representatives, are shown before and after the merge and shrinkage. The two closest representatives are connected.

While the algorithm CURE works with numerical attributes (particularly low dimensional spatial data), the algorithm ROCK, developed by the same researchers [100], targets hierarchical agglomerative clustering for categorical attributes. It is reviewed in the section Co-Occurrence of Categorical Data.

The hierarchical agglomerative algorithm CHAMELEON [127] uses the connectivity graph G
corresponding to the K-nearest neighbor sparsification of the connectivity matrix: the edges to the K most similar points of any given point are preserved, the rest are pruned. CHAMELEON has two stages. In the first stage, small tight clusters are built to ignite the second stage. This involves graph partitioning [129]. In the second stage, an agglomerative process is performed. It utilizes measures of relative inter-connectivity RI(Ci, Cj) and relative closeness RC(Ci, Cj); both are locally normalized by the internal inter-connectivity and closeness of the clusters Ci and Cj. In this sense the modeling is dynamic: it depends on data locally. Normalization involves certain non-obvious graph operations [129]. CHAMELEON relies heavily on graph partitioning implemented in the library HMETIS (see section 6). The agglomerative process depends on user-provided thresholds. A decision to merge is made based on the combination

RI(Ci, Cj) · RC(Ci, Cj)^α

of local measures. The algorithm does not depend on assumptions about the data model. It has been proven to find clusters of different shapes, densities, and sizes in 2D (two-dimensional) space. It has a complexity of O(N·m + N·log(N) + m²·log(m)), where m is the number of sub-clusters built during the first initialization phase. Figure 1(b) (analogous to the one in [127]) clarifies the difference with CURE. It presents a choice of four clusters (a)-(d) for a merge. While CURE would merge clusters (a) and (b), CHAMELEON makes the intuitively better choice of merging (c) and (d).

2.2 Binary Divisive Partitioning

In linguistics, information retrieval, and document clustering applications, binary taxonomies are very useful. Linear algebra methods based on singular value decomposition (SVD) are used for this purpose in collaborative filtering and information retrieval [26]. Application of SVD to hierarchical divisive clustering of document collections resulted in the PDDP (Principal Direction Divisive Partitioning) algorithm [31]. In our notation, an object x is a document, the l-th attribute corresponds to a
word (index term), and a matrix X entry x_il is a measure (e.g., TF-IDF) of the frequency of term l in document x. PDDP constructs the SVD decomposition of the matrix

(X − e·x̄),  x̄ = (1/N) Σ_{i=1:N} x_i,  e = (1, ..., 1)^T.

[Fig. 1. Agglomeration in clusters of arbitrary shapes: (a) algorithm CURE, (b) algorithm CHAMELEON]

This algorithm bisects data in Euclidean space by a hyperplane that passes through the data centroid orthogonal to the eigenvector with the largest singular value. A k-way split is also possible if the k largest singular values are considered. Bisecting is a good way to categorize documents and it yields a binary tree. When k-means (2-means) is used for bisecting, the dividing hyperplane is orthogonal to the line connecting the two centroids. The comparative study of SVD vs. k-means approaches [191] can be used for further references. Hierarchical divisive bisecting k-means was proven [206] to be preferable to PDDP for document clustering. While PDDP or 2-means are concerned with how to split a cluster, the problem of which cluster to split is also important. Simple strategies are: (1) split each node at a given level, (2) split the cluster with the highest cardinality, and (3) split the cluster with the largest intra-cluster variance. All three strategies have problems. For a more detailed analysis of this subject and better strategies, see [192].

2.3 Other Developments

One of the early agglomerative clustering algorithms, Ward's method [222], is based not on a linkage metric but on the objective function used in k-means. The merger decision is viewed in terms of its effect on the objective function.

The popular hierarchical clustering algorithm for categorical data COBWEB [77] has two very important qualities. First, it utilizes incremental learning. Instead of following divisive or agglomerative approaches, it dynamically builds a dendrogram by processing one data point at a time. Second, COBWEB is an example of conceptual or model-based learning. This means that each cluster is considered as a model that can
be described intrinsically, rather than as a collection of points assigned to it. COBWEB's dendrogram is called a classification tree. Each tree node (cluster) C is associated with the conditional probabilities for categorical attribute-value pairs,

Pr(x_l = ν_lp | C),  l = 1:d,  p = 1:|A_l|.

This easily can be recognized as a C-specific Naïve Bayes classifier. During the classification tree construction, every new point is descended along the tree and the tree is potentially updated (by an insert/split/merge/create operation). Decisions are based on the category utility [49]:

CU{C1, ..., Ck} = (1/k) Σ_{j=1:k} CU(Cj)

CU(Cj) = Σ_{l,p} ( Pr(x_l = ν_lp | Cj)² − Pr(x_l = ν_lp)² )

Category utility is similar to the GINI index. It rewards clusters Cj for increases in predictability of the categorical attribute values ν_lp. Being incremental, COBWEB is fast with a complexity of O(tN), though it depends non-linearly on tree characteristics packed into a constant t. There is a similar incremental hierarchical algorithm for all numerical attributes called CLASSIT [88]. CLASSIT associates normal distributions with cluster nodes. Both algorithms can result in highly unbalanced trees.

Chiu et al. [47] proposed another conceptual or model-based approach to hierarchical clustering. This development contains several different useful features, such as the extension of scalability preprocessing to categorical attributes, outlier handling, and a two-step strategy for monitoring the number of clusters including BIC (defined below). A model associated with a cluster covers both numerical and categorical attributes and constitutes a blend of Gaussian and multinomial models. Denote the corresponding multivariate parameters by θ. With every cluster C we associate the logarithm of its (classification) likelihood

l_C = Σ_{x_i ∈ C} log(p(x_i | θ))

The algorithm uses maximum likelihood estimates for the parameter θ. The distance between two clusters is defined (instead of a linkage metric) as the decrease in log-likelihood

d(C1, C2) = l_C1 + l_C2 − l_{C1∪C2}

caused by merging of the two clusters
under consideration. The agglomerative process continues until the stopping criterion is satisfied. As such, determination of the best k is automatic. This algorithm has a commercial implementation (in SPSS Clementine). The complexity of the algorithm is linear in N for the summarization phase.

Traditional hierarchical clustering does not change point membership in once-assigned clusters due to its greedy approach: after a merge or a split is selected, it is not refined. Though COBWEB does reconsider its decisions, its
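The decrease-in-log-likelihood distance defined in the preceding paragraphs can be illustrated for a single numerical attribute under a Gaussian model. This is a minimal sketch with ML estimates; the function names and the toy data are our own, not from the original algorithm:

```python
# Minimal sketch (our own, 1-D Gaussian, ML estimates) of the
# decrease-in-log-likelihood merge distance d(C1,C2) = l_C1 + l_C2 - l_{C1 u C2}.
import math

def log_likelihood(xs):
    """Classification log-likelihood of a cluster under a 1-D Gaussian fit."""
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n or 1e-12  # ML variance, guarded against 0
    return sum(-0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)
               for x in xs)

def merge_distance(c1, c2):
    # merging similar clusters costs little likelihood; distant ones cost a lot
    return log_likelihood(c1) + log_likelihood(c2) - log_likelihood(c1 + c2)

near = merge_distance([1.0, 1.2, 0.9], [1.1, 1.3, 1.0])
far = merge_distance([1.0, 1.2, 0.9], [9.0, 9.2, 8.9])
```

Because each cluster is fitted with its own parameters, the summed separate likelihoods always dominate the pooled fit, so the distance is non-negative and smallest for the most similar pair, which is exactly what the agglomerative loop needs.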

H2O2-1

compared with the monometallic ones. Ag–M (M = Au, Pd and Pt) alloy nanoparticles (NPs) have been shown to exhibit peroxidase-like activity similar to that of horseradish peroxidase (HRP).21 The detection of hydrogen peroxide, like other sensing measurements such as the detection of glucose,22 hydrazine,23 and heavy metal ions,24 has been applied in a variety of areas, such as the foodstuff industry, environmental monitoring, and the pharmaceutical industry. Compared with the materials used in enzyme-free H2O2 sensors, a natural enzyme is generally high-cost, suffers from environmental factors, and fails to provide long-term stability. Thus, the development of enzyme-free sensors with peroxidase-like activity and enhanced performance should be carried forward. Researchers have therefore focused on electrochemical biosensors that combine simple preparation, high sensitivity, rapid response, a low limit of detection and high selectivity.25 Silver-decorated materials such as Ag/ZnO,26 Ag-MWCNT27 and Ag-DNA28 have shown clearly improved H2O2 detection performance, but the sensitivity of Ag/ZnO and Ag-MWCNT is insufficient compared with other materials, and the preparation of Ag-DNA is complicated. Therefore, it is desirable to further improve the sensing performance of Ag-based nanostructures. This inspires us to explore the

A Summary of Bayesian Network Structure Learning

Structure learning of Bayesian networks from complete data sets. Methods based on dependency analysis typically use statistical or information-theoretic measures to analyze the dependence relations between variables and thereby obtain the optimal network structure. Research on dependency-analysis methods can be divided into three groups:

Decomposition-based methods (exploiting the existence of v-structures):
Decomposition of search for v-structures in DAGs
Decomposition of structural learning about directed acyclic graphs
Structural learning of chain graphs via decomposition

Markov-blanket-based methods:
Using Markov blankets for causal structure learning
Learning Bayesian network structure using Markov blanket decomposition

Methods based on restricting the structure space:
Bayesian network learning algorithms using structural restrictions (combining these constraints with the PC algorithm yields an improved algorithm with higher structure-learning efficiency; the constraints, as given by Campos, are: (1) a given undirected or directed edge must exist; (2) a given undirected or directed edge must not exist; (3) a partial ordering over some of the nodes)

Commonly used algorithms:

SGS: determines the network structure using conditional independence tests between nodes.

PC: exploits the fact that nodes in a sparse network do not require high-order independence tests and applies a pruning strategy: starting from 0th-order independence tests and moving up to higher orders, it prunes the connections between nodes of the initial network. This strategy effectively builds a Bayesian network from a sparse model and avoids the exponential growth in complexity that the SGS algorithm suffers as the number of nodes grows.

TPDA: carries out structure learning in three phases: (a) drafting the network structure, using mutual information between nodes to obtain an initial structure; (b) thickening the structure, computing the conditional mutual information between node pairs not yet connected in the structure from step (a) and adding an edge between any pair that satisfies the condition;
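The PC-style pruning strategy described above can be sketched as follows. This is illustrative only; `ci_test` is a hypothetical callback standing in for a real conditional-independence test (e.g., partial correlation or a G² test on data), and the toy test below simply encodes a chain A - B - C:

```python
# Sketch of PC-style skeleton learning: start from a complete undirected
# graph, test conditional independence with conditioning sets of growing
# size, and delete each edge whose endpoints test independent.
from itertools import combinations

def pc_skeleton(nodes, ci_test, max_order=2):
    adj = {v: set(nodes) - {v} for v in nodes}
    for order in range(max_order + 1):          # 0th-order tests first, then higher
        for x in nodes:
            for y in list(adj[x]):
                others = adj[x] - {y}
                for s in combinations(sorted(others), order):
                    if ci_test(x, y, set(s)):   # X independent of Y given S -> drop edge
                        adj[x].discard(y)
                        adj[y].discard(x)
                        break
    return adj

# Toy oracle for the chain A - B - C: A and C are independent given B.
def ci_test(x, y, s):
    return {x, y} == {"A", "C"} and "B" in s

skeleton = pc_skeleton(["A", "B", "C"], ci_test)
```

On the toy chain, the A-C edge survives the 0th-order test but is removed by the 1st-order test conditioning on B, leaving exactly the edges A-B and B-C, which is the behavior the pruning schedule is designed to achieve on sparse models.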

Amphenol Antenna Design Guide

VSWR / Return Loss / Insertion Loss

VSWR   Return loss (dB)   Input power   Loss (dB)
2.0    -9.5               88.9%         -0.51
2.3    -8                 84.5%         -0.73
2.6    -7                 80%           -0.96
3      -6                 75%           -1.25
3.5    -5.1               69%           -1.6
4.5    -4                 64%           -1.94
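The columns of the table above follow directly from the reflection coefficient Γ = (VSWR − 1)/(VSWR + 1). The helper below is our own, not part of the guide; note the table quotes return loss with a negative sign, while the formula conventionally yields a positive dB value:

```python
# Our own helper: derive the table's columns from VSWR via the
# reflection coefficient Gamma = (VSWR - 1) / (VSWR + 1).
import math

def vswr_metrics(vswr):
    gamma = (vswr - 1) / (vswr + 1)                  # magnitude of reflection coefficient
    return_loss_db = -20 * math.log10(gamma)         # return loss (positive-dB convention)
    transmitted = 1 - gamma ** 2                     # fraction of input power delivered
    mismatch_loss_db = 10 * math.log10(transmitted)  # the table's "Loss (dB)" column
    return return_loss_db, transmitted, mismatch_loss_db

rl, power, loss = vswr_metrics(2.0)  # ~9.5 dB, ~88.9%, ~-0.51 dB
```

Running it for VSWR 2.0 reproduces the first table row, which is a quick way to sanity-check a datasheet.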
SAA Summary
• Advanced measurement equipment: anechoic chamber, SAR measurement, CMU200 for CDMA/GSM/DCS/PCS
• Fully equipped QC lab to provide on-site production monitoring
• Over 4,000 sqm of manufacturing floor
• International management and development team
• Wide product range including external and internal antennas, hinges and other custom molded parts
• Full China content alongside global delivery capabilities
• Competitive pricing
Antenna feed method
Coaxial feed
• Pros: good shielding, stable impedance
• Cons: high cost, difficult to orient

Strip-line or microstrip feed
• Pros: simple fabrication, easily applied to PCB designs
• Cons: RF radiation suppression issues, loss

IELTS exam recap, 25 June 2011 (including the original "yellow ant" passage)

2011-06-25 Academic/General Passage 1: locating and measuring with ocean sonar systems. Main question types: True/False/Not Given, multiple choice.

Summary: methods of studying the ocean, such as using sonar to measure the depth of the seabed, detect large marine animals, and observe changes in seawater temperature. This is an expository passage describing the definition and functions of sonar systems. Each paragraph has a clear main idea, so paragraph-tagging works well for macro-level location of the multiple-choice answers. Some of the True/False/Not Given items can be located through distinctive keywords. Overall, locating answers in this passage is not difficult.

Difficulty: low.

Passage 2: using ants and other insects to control crop pests. Main question types: True/False/Not Given, matching. The original text is taken from the April 2001 issue of New Scientist, showing once again that IELTS passages often come from popular science periodicals in Britain and the US. The passage is organized chronologically and contains many dates that candidates must match to events. It also contains some long, complex sentences that require identifying the sentence skeleton to extract the key information.
Difficulty: medium-high.

Passage 3: training schemes for language barriers in international organizations. Main question types: matching, True/False/Not Given, summary completion with options.

Original text: In 1476, the farmers of Berne in Switzerland decided, according to this story, there was only one way to rid their fields of the cutworms attacking their crops. They took the pests to court. The worms were tried, found guilty and excommunicated by the archbishop. In China, farmers had a more practical approach to pest control. Rather than rely on divine intervention, they put their faith in frogs, ducks and ants. Frogs and ducks were encouraged to snap up the pests in the paddies and the occasional plague of locusts. But the notion of biological control began with an ant. More specifically, the story says, it started with the predatory yellow citrus ant Oecophylla smaragdina, which has been polishing off pests in the orange groves of southern China for at least 1700 years. The yellow citrus ant is a type of weaver ant, which binds leaves and twigs with silk to form a neat, tent-like nest. In the beginning, farmers made do with the odd ants' nest here and there. But it wasn't long before growing demand led to the development of a thriving trade in nests and a new type of agriculture--ant farming.

The story explains that citrus fruits evolved in the Far East and the Chinese discovered the delights of their flesh early on. As the ancestral home of oranges, lemons and pomelos, China also has the greatest diversity of citrus pests. And the trees that produce the sweetest fruits, the mandarins--or kan--attract a host of plant-eating insects, from black ants and sap-sucking mealy bugs to leaf-devouring caterpillars. With so many enemies, fruit growers clearly had to have some way of protecting their orchards.

The West did not discover the Chinese orange growers' secret weapon until the early 20th century.
At the time, Florida was suffering an epidemic of citrus canker and in 1915 Walter Swingle, a plant physiologist working for the US Department of Agriculture, was, the story says, sent to China in search of varieties of orange that were resistant to the disease. Swingle spent some time studying the citrus orchards around Guangzhou, and there he came across the story of the cultivated ant. These ants, he was told, were "grown" by the people of a small village nearby who sold them to the orange growers by the nestful.

The earliest report of citrus ants at work among the orange trees appears in a book on tropical and subtropical botany written by His Han in AD 304. "The people of Chiao-Chih sell in their markets ants in bags of rush matting. The nests are like silk. The bags are all attached to twigs and leaves which, with the ants inside the nests, are for sale. The ants are reddish-yellow in colour, bigger than ordinary ants. In the south if the kan trees do not have this kind of ant, the fruits will all be damaged by many harmful insects, and not a single fruit will be perfect."

The story goes on to say that the long tradition of ants in the Chinese orchards only began to waver in the 1950s and 1960s with the introduction of powerful organic (I guess the author means chemical) insecticides. Although most fruit growers switched to chemicals, a few hung onto their ants. Those who abandoned ants in favour of chemicals quickly became disillusioned. As costs soared and pests began to develop resistance to the chemicals, growers began to revive the old ant patrols. They had good reason to have faith in their insect workforce. Research in the early 1960s showed that as long as there were enough ants in the trees, they did an excellent job of dispatching some pests--mainly the larger insects--and had modest success against others. Trees with yellow ants produced almost 20 per cent more healthy leaves than those without.
More recent trials have shown that these trees yield just as big a crop as those protected by expensive chemical sprays.

One apparent drawback of using ants--and one of the main reasons for the early scepticism by Western scientists--was that citrus ants do nothing to control mealy bugs, waxy-coated scale insects which can do considerable damage to fruit trees. In fact, the ants protect mealy bugs in exchange for the sweet honeydew they secrete. The orange growers always denied this was a problem but Western scientists thought they knew better. Research in the 1980s suggests that the growers were right all along. Where mealy bugs proliferate under the ants' protection they are usually heavily parasitised and this limits the harm they can do.

Orange growers who rely on carnivorous ants rather than poisonous chemicals maintain a better balance of species in their orchards. While the ants deal with the bigger insect pests, other predatory species keep down the numbers of smaller pests such as scale insects and aphids. In the long run, ants do a lot less damage than chemicals--and they're certainly more effective than excommunication.

Experience Summary

Amin Ahmad
amin.ahmad@

Certifications and Training
∙IBM Certified Solution Developer – XML and Related Technologies.
∙Sun Certified Web Component Developer for Java 2 Platform, Enterprise Edition (310-080).
∙Sun Certified Programmer for the Java 2 Platform (310-025).
∙IBM MQSeries training.

Education
Bowling Green State University
Bachelor of Science, majoring in Computer Science and in Pure Mathematics
∙Graduated summa cum laude (cumulative grade point average 3.95/4.00)

Experience Summary
∙Eight years of cumulative industry experience, including seven years as a Java EE developer and one year as an SAP ABAP technical developer. Extensive experience in the financial sector, with Fortune 500 firms, and with large state governments.
∙Served as software architect for large and medium-sized systems, as a technical, and as a framework/component designer.
∙Excellent experience with important programming methodologies including model driven architectures, object-oriented, component/container-oriented, and aspect-oriented design. Strong knowledge of object-oriented patterns, including most patterns referenced by the Gang of Four. Good knowledge of J2EE design patterns, including most design patterns covered in Sun's Java EE Design Patterns guide.
∙Extensive Java EE design and implementation experience using a full spectrum of technologies including Servlets, Java Server Pages, Enterprise Java Beans, and JMS. Experienced in designing for very large systems handling millions of transactions per day.
∙Extensive experience designing and implementing Eclipse RCP applications (utilizing OSGi), as well as writing plug-ins for the Eclipse platform.
∙Extensive experience with the Eclipse Modeling Framework (EMF) and the Graphical Editor Framework (GEF).
∙Strong background in rich client development using the Swing and SWT/JFace toolkits.
∙Excellent industry experience developing efficient, portable, web-based user interfaces using HTML, DHTML, JavaScript, CSS 2, and the Google Web Toolkit. Some SVG experience.
∙Strong experience utilizing various XML server and client-side technologies in J2EE solutions, including DTD, W3C Schema, DOM 2, DOM 3, SAX, XPath, XSL-FOP, XSL-T, JAXB, Castor XML, and, to a lesser extent, JiBX and JDom.
∙Extensive systems integration experience using IBM MQSeries, including both the IBM MQSeries API for Java and the JMS API. Familiarity with installing and administering a variety of JMS providers, including IBM MQSeries 5.20, OpenJMS 0.7.4, JORAM, SunOne Message Queue, and SwiftMQ. Strong JMS knowledge, including hands-on experience with implementation of server-session pooling.
∙Good experience with SAP R/3 3.1H in the Plant Management, Sales and Distribution, and Finance modules. Areas of experience include basic reporting, interactive reporting, BDC sessions, dialog programming, program optimization (especially OpenSQL query optimization), layout sets (debugging), OLE, and general debugging tasks.

Resume of Amin Ahmad 1 of 17

∙Good experience administering BEA WebLogic 8 and 9, including configuring clustering. Also experienced in deploying J2EE applications to IBM WebSphere (2.03, 3.5, 4.0, 5.0) and Jakarta Tomcat (3, 4, 5, and 6 series).
∙Experience with JDBC 2, SQL, and a number of RDBMS systems including DB2, UDB, Oracle, Firebird (Interbase branch), MySQL, and Access. Have defined schemas in the fifth normal form (5NF) through DDL following SQL 92 standards, and have implemented indices, synthetic keys, and de-normalization as appropriate to improve performance.
∙Experience with UML design, especially using sequence and class diagrams.
∙Experience with Rational Rose and Omondo UML.
∙Excellent experience with several industry-standard Java integrated development environments, including Visual Age 3.5, Eclipse 1.0, 2.1.x, 3.x, and, to a lesser extent, WebSphere Studio Application Developer 4.0 and 5.0, and Forte for Java, Community Edition.
∙Experienced in designing and implementing modular build systems with inbuilt quality control mechanisms using Ant 1.5 and Ant 1.6.
∙Good experience with many quality control and testing tools, including JUnit, JProbe, hprof, OptimizeIt, Pasta Package Analyzer, CodePro Code Audit, Clover Code Coverage, and Sun DocCheck Doclet.
∙Excellent experience with SVN and CVS version control systems, including Eclipse integration. Some experience with CMVC, including authoring a CMVC plug-in to provide Eclipse 1.0 and WSAD integration.
∙Good experience with issue management systems including Bugzilla and Mantis. Experience installing and administering Mantis.
∙Some Python 2.4.1 scripting experience.
∙Enjoy typesetting using the Miktex distribution of TeX and LaTeX. Experienced in typesetting Arabic-script languages using ArabTex.
∙A background in helping others improve their technical skills: mentored junior and mid-level programmers; published a variety of articles, including for IBM DeveloperWorks; served as a technical interviewer for American Management Systems and as an instructor on Java fundamentals for American Express; participated as a presenter in several J2EE Architecture forums at American Express.
∙Demonstrated desire to contribute to the open source software community. I maintain five GPL and LGPL software projects.
∙Excellent communication skills and a strong desire to develop high-quality software.

Employment History

CGI, Inc.  October 2006 – August 2007
Sr. Java EE consultant for Florida Safe Families Network (FSFN)—a large welfare automation project servicing approximately 70,000 transactions per hour.
Responsible for system integration, security integration, team mentoring, and infrastructure development.
∙Responsible for integrating FSFN authentication with IBM Tivoli Directory Server. Designed and implemented a JNDI-based solution using LDAP custom controls to parse extended server response codes. The solution also implemented a custom pooling solution that was stress tested to over 5,000 authentications per minute.
∙Designed and implemented a single sign-on mechanism for FSFN and the Business Objects-based reporting subsystem. Designed and supervised implementation of data synchronization logic between the two systems.
∙Involved in design and implementation of auditing infrastructure, whereby the details of any transaction in the system can be recalled for later analysis.
∙Extensive infrastructure development, including:
1. Configuration of WebLogic 9 server environments, including clustered environments. Extensive work with node manager and the WebLogic plugin for Apache. Implemented an automatic pool monitor/re-starter tool using JMX.
2. Developed and implemented improved standards for build versioning. Automated a variety of build, deployment, and code delivery tasks, including JSP compilation using JSPC for WebLogic. This involved extensive use of Ant 1.7 and bash scripting.
3. Extensive configuration of Apache, including configuring proxies, reverse proxies, installing PHP, and the Apache Tomcat Connector.
4. Created and administered a MoinMoin 1.5.7 wiki to track environment information and to serve as a general developer wiki.
5. Designed and implemented backup jobs and automated system checks using cron.
∙Collaborated on design and implementation of search modules, including person search.
1. Double metaphone (SOUNDEX), nickname, wildcard, and exact searches based on first, middle and last names.
Over twenty available search criteria including a variety of demographics such as age and address.
2. Round trip time of under 10 seconds for complex searches returning 100,000+ results, and near-instantaneous results for most common searches. Achieving this level of performance required extensive manipulation of clustering and non-clustering indexes on the DB2 8.1 mainframe system, as well as work with stored procedures, JDBC directional result sets, and absolute cursor positioning.
∙Designed and implemented various data migration tools in Java.
∙Tasked with ensuring that all developers were able to work productively on their topics. This mainly involved helping developers troubleshoot Struts, JSP, and JavaScript issues, and improving their knowledge of these languages and frameworks.

Freescale Semiconductor  August 2006 – September 2006
Collaborated on design and development of an Eclipse 3.2.x-based port of Metrowerks CodeWarrior's command window functionality.
∙Implemented a wide variety of shell features including tab-based auto-completion, auto-completion of file system paths, scrollable command history, user-configurable font face and color, and custom key bindings. This required, among other things:
1. Implementation of custom SWT layouts.
2. Extensive use of JFlex for lexical analysis of output data streams for auto-coloration.
3. Changes to the standard image conversion process to preserve alpha values when moving between Sun J2SE and SWT image formats.

Webalo  January 2006 – August 2006
Sr. Software Engineer
Responsible for research and development of Webalo's suite of Java-based, web-service based mobile middleware.
∙Implemented a Java MIDP 2.0, CLDC 1.1-based "Webalo User Agent". The Webalo User Agent resides on a user's mobile device and allows interaction with enterprise data exposed through web services.
1. Conducted research of the U.S.
mobile phone market to (a) determine compatibility ofthe MIDP Webalo User Agent with phones currently in use, and to (b) determine if sim-ple changes to the technology stack could improve its compatibility.2.Implemented a custom data-grid component, optimized for low-resolution devices, withadvanced features such as automatic layout, tooltip support, column selection, and drill down support. Conducted extensive testing against a Samsung A880 device.3.Implemented a high performance, memory efficient XML parser suitable for use in anembedded environment, using JFlex 1.4.1. The parser has a smaller codebase, smaller memory footprint, faster parsing times, and fewer bugs than the older, hand-written re-cursive-decent parser.4.Focused on optimizing distribution footprint of the Webalo User Agent without sacrific-ing compatibility. Ultimately reduced size from 1.2MB to 270KB using an extensive tool chain including Pngcrush, ImageMagick 6.2.9, ProGuard 3.0, and 7-zip.5.Extensive use of Sun’s Wireless Java Toolkit 2.3 and 2.5, EclipseME, and AntennaHouse J2ME extensions for Ant. To a lesser extent, performed testing of the Webalo User Agent using MIDP toolkits from Samsung, Motorola, Sprint, and Palm (Treo 650 and 700 series).∙Designed and implemented JMX-based system monitoring for the Webalo Mobile Dashboard.Also, implemented a web-based front-end for accessing system monitoring information.∙Worked on the design and implementation of a Flash-based, mobile mashup service designer.1.Extended Webalo’s existing, proprietary tool for converting Java code to ActionScript byadding support for converting inner classes, anonymous inner classes, overloaded me-thods, and shadowed fields. Use was made of JavaCC and The Java Language Specifica-tion, Second Edition.2.Designed and implemented a framework for remote service invocation from the Flashclient to the Java EE-based server. 
Significant features of the framework include an XML-based serialization format and a remote proxy generator written in Java.3.Wrote Flash-based UI Framework on which the various mashup-related wizards thatcomprise the product are implemented.4.Implemented a library of XML functions with an API that is compatible in both Java andFlash environments.eCredit August 2005 – August 2006 Responsible for the end-to-end design and implementation of a visual authoring environment for commercial credit and collections processes.Resume of Amin Ahmad 4 of 17∙The authoring environment is implemented atop the Eclipse Rich Client Platform (Eclipse RCP), version 3.1, running Java 5.0. Design and development are bridged through the use of a model driven architecture, using EMF, version 2.2. The visual editor component of the system is implemented using the Graphical Editor Framework (GEF).∙Business processes are serialized into XML format for easy interchange with other systems. ∙Implemented a Windows-based installer using the Nullsoft Scriptable Install System (NSIS). Ohio Department of Taxation June 2005 – August 2005 Independent ConsultantConsulted on the design and implementation of a J2EE-based taxation system for handling taxpayer registration, returns, and workflow requirements. The system has the following characte-ristics:∙DB2 v8.0 was used as a data store and triggers were utilized extensively to implement data auditing requirements.∙Business logic was implemented within WebSphere 5.0 using EJB session and entity beans. ∙ A standard Model 2 architecture was implemented utilizing Jakarta Struts and the Tiles Document Assembly Framework.Expertise was also provided in the following areas:∙Implementation of a data access service. Wherever possible, the layered service implements component design principles such as inversion-of-control. Key features include:1.Automatic data pagination, which is a key to maintaining service level requirements.2. 
A scheme to decouple the work of query authors from the user interface designers.3.All aspects of a data access instance (query) are located in a cohesive, class-based unit. ∙Reviewed logical schema model. Provided feedback for indices, normalization opportunities, as well as general enhancements in the light of business requirements.∙Design of database population and deletion scripts.∙Definition of a strategy for automated quality assurance using Load Runner.∙Extensions to Struts custom tags to improve productivity of the user interface team.∙Training and mentoring developers.NYSE, New York Stock Exchange February 2005 – May 2005 Independent ConsultantServed as a senior consultant for the design and implementation of a fraud tracking system. Business logic was implemented within WebSphere 5.0, while the front-end rich client was implemented atop the Eclipse platform. Expertise was provided in the following areas:∙Formulated a JUnit-based unit testing strategy for server side business components, and provided an initial proof-of-concept.∙Proposed procedures for basic incident tracking, as well as procedures for utilizing JUnit for regression testing of incidents. Provided a proof-of-concept system using Mantis 1.0.0 run-ning on Apache HTTP Server and MySQL.∙Audited logging procedures within server-side business logic and made several proposals, including the use of Log4J nested diagnostic contexts to improve the ability to correlate log-ging statements.∙Audited server-side security model and issued several recommendations.∙Implemented a rigorous type system to increase front-end developer productivity and provided a detailed, three-stage roadmap for its evolution.Resume of Amin Ahmad 5 of 17∙Performed a comprehensive audit of the rich-client tier of the application and issued a findings report. 
Also performed a small-scale feasibility analysis on the use of the Eclipse Modeling Framework to improve productivity.∙Designed and implemented a data grid component to standardize display and manipulation of tabular data within the system. Key features include:1. A user interface modeled after Microsoft Excel and Microsoft Access. Features of theuser interface include column reordering, column show/hide, record-set navigator, selec-tion indicators, and row numbering.2.Support for multi-column sorting, multi-column filter specification using Boolean sheetsand supporting a variety of match operators including regular expressions.3.Support for persistent sort and filter profiles, column ordering, and column widths usingthe java.util.prefs API.4.Microsoft Excel data export.5. A developer API that greatly simplifies loading data stored in Java Beans.6.Data formatting is tightly integrated with the application’s type system.eCredit September 2004 – January 2005 Served as a senior resource in the design and implementation of the Mobius Process Development Environment (PDE). The PDE, which serves as a business rules and process flow authoring sys-tem for the Mobius business process engine, is provided as a feature for Eclipse 3.1 and Java 5.0. ∙Designed and implemented a shared workspace using Jakarta Slide 2.1 WebDAV repository.The client view of the repository is represented using an EMF 2.1 native ECore model that is loaded and persisted during plug-in activation and deactivation, and provides support for workspace synchronization, asynchronous deep and shallow refresh, locking and unlocking of resources, check-in and check-out capabilities, and a drag and drop operation for resources. In addition, a WebDAV-compliant recycle bin was implemented. Etag-based content caching was implemented to improve workspace performance. Responsiveness was increased through heavy use is made of the Eclipse 3.0 Job API.∙Designed and implemented a secure licensing model for the PDE. 
Licenses, which contain roles and their expiration dates for a specific principal, are obtained from a licensing server after providing a valid license key. Licenses are signed by the licensing server to prevent modification and encrypted us ing the PDE client’s public key to prevent unauthorized access.Implementation entailed the use of strong (2048 bit) RSA-based X.509 certificates for license signing and symmetric key encryption, as well as the AES symmetric cipher (Rijndael va-riant) for bulk data encryption. Heavy use is made of the Sun JCE implementation and the Bouncy Castle 1.25 JCE implementation.∙Designed and implemented a runtime view of business process servers. Wizards were included for adding new servers, and, existing servers can be visually explored and operated on. For example, servers can be started and stopped, new business processes can be deployed on particular servers, and runtime logs for particular processes can be opened. The client view of the runtime environment is represented as an EMF 2.1 annotated Java ECore model. The view provides Job-based asynchronous refresh for enhanced responsiveness.∙Implemented web services-based integration with business process servers. This involved the use of SOAP, SAAJ, and JAX-RPC, including the use of wscompile for client-side stub gen-eration.Resume of Amin Ahmad 6 of 17∙Implemented editors and viewers for several types of XML documents in the Mobius system.Models, edit frameworks, and basic editors were automatically generated from W3C schemas using EMF 2.1. Editors were then heavily customized to includea.Master-detail block support.b.Writeable and read-only modes that are visually indicated through a customization tothe toolbar. 
Read-only mode is automatically enabled whenever a file is opened butnot locked by the current user,c.Automatic upload to WebDAV repository as part of the save sequence.d.Enforcement of semantic constraints using problem markers for edited files.∙Designed and implemented five wizards using the JFace Wizard framework.∙Provided Eclipse mentoring for eCredit employees assigned to the project.Islamic Society of Greater Columbus February 2004 – June 2004 Provided pro bono consulting expertise towards the development of a web-enabled membership and donation database. This system served to streamline the workflow of the organization and represented an upgrade from the old fat-client system written in Microsoft Access, which suffered for many problems including data synchronization and an inefficient user interface. In addition, a variety of reporting and mass mailing features were added.∙The data tier was implemented using the Firebird 1.5 RDBMS. The schema was normalized to the fifth normal form (5NF), and synthetic keys and indices were added to optimize per-formance. Name searching was optimized through indexed soundex columns in the individual entity information table.∙ A data converter and cleanser was implemented to move data from the old data tier (Microsoft Access) to the new one (Firebird).∙The application tier utilized the model-view-controller architecture pattern and was imple-mented using Java Servlets and Java Server Pages. Data access utilized the data access object pattern.∙The presentation tier made heavy use of CSS2 to minimize data transfer requirements while maximizing cross browser portability of the application.∙The application was deployed as a WAR file to Apache Tomcat 5.0, running on a Windows XP system and was tested using both Internet Explorer 6 and Mozilla Firebird 0.8 browsers. 
CGI-AMS August 2003 – June 2004 Independent ConsultantConsulted as an architect for the Office Field Audit Support Tool (OFAST) project, a large project for the Ohio Department Taxation employing approximately twenty-five full-time staff, involving the design, development, testing, and deployment of a Swing-based auditing tool used by tax auditors across the state. Tax rules were coded in a custom language optimized for rules processing.∙Responsible for designing and implementing the architecture of the OFAST user interface.The architecture has the following key components:1.Screen definitions are stored in XML files and are dynamically built at runtime fromthose definitions. A W3C XML Schema, veri fied using Sun’s MSV 1.5 and IBM Web-Sphere Studio 5.0 schema validation facilities, was authored for the screen definition lan-guage.Resume of Amin Ahmad 7 of 172. A data grid type whose values can be bound dynamically to an entity array. The data gridalso supports an Excel 2000 look and feel, dynamic column sorting with visual indicators for ascending, descending, and default sort orders, line numbers, multiple layers of trans-parency, cell editors and renderers that vary from row to row in the same column, and da-ta cells whose values can be dynamically bound to reference data groups stored in the database (and are hence selectable through a combo box).3.An extensible strong-typing system for managing user input. A type contains operationsfor data validation, data parsing, and data formatting, as well as operations for returning a default renderer and editor. A type’s operations are strictly defined mathematically ther e-by enhancing flexibility and maintainability.4. A form header component that allows forms to be associated with a suggestive image, atitle, and instructions. Instructions can be dynamically modified at runtime with warning and error messages.5. 
A wizard framework to allow complex, sequential operations that comprise many logicalpages to be developed rapidly. Built in type validations.∙Architected and implemented the user interface and business logic tiers for the Corporate Franchise Tax audit type. Additionally, participated in functional requirements analysis as well as functional design activities.Tax forms, which constitute the basis for an audit, vary from year to year and are dynamically generated from database metadata information. In addition, corporate tax forms are inter-linked, receiving values from one another during the course of computations. The user inter-face makes heavy use of Swing, especially tables, and provides a rich set of features to sup-plement standard data entry operations, including, but not limited to: visual indicators for overridden, sourced, and calculated fields; line and form-level notes, cell-specific tooltips to indicate sourcing relationships; dynamic, multi-layered line shading to indicate selected lines and lines with discrepancies; custom navigation features including navigate-to-source func-tionality.∙Mentored developers and introduced more rigorous quality control procedures into the application development lifecycle, including: implementation of an Ant 1.5 and 1.6-based modular build system, W3C Schemas for all XML document types, and a number of standa-lone quality-control programs written using Jakarta CLI.∙Designed a visual editor for creating and editing user interface definition files using the Eclipse Modeling Framework 2.0 and W3C XML Schema 1.1.∙Designed and implemented an integrated development environment for the custom rules proc-essing language in Java Swing. 
The development environment provides the following features to developers:1.File and file set loading for entity definition files, trace files, user interface definitionfiles, and decision table files as well as the ability to export and import file sets.2.File viewers for each of the file types, including syntax highlighting (using the JEdit Syn-tax Package) and tree views for XML files. The trace file view includes integrated step-into support to restore the state of the rules engine to a given point in the program execu-tion.3.An entity view to support viewing individual entities and their metadata.4. A three-page wizard to guide users through the process of creating new decision tables.This wizard includes sophisticated regular expression checks (using Jakarta ORO and Perl 5 regular expressions) on the new decision table, as well as a number of other integ-rity checks.Resume of Amin Ahmad 8 of 175. A table editor component consisting of functions for compiling, deleting, saving, andvalidating decision tables, as well as four sub-screens for providing access to table data.The component also automatically logs and tracks modification histories for every table, in addition to performing transparent validation of table structure. The first screen pro-vides access to table metadata as well as read-only access to the audit log for the table.The second screen supports table modifications and includes an editor that supports auto format plus diagnostic compile features. The third screen supports a call-to view, show-ing which tables are called from the current decision table. Finally, the fourth screen pro-vides read-only access to the raw XML source for the table. Manipulation of the decision table document was accomplished using DOM 3 features in Xerces.6. 
A feature to export documentation for all decision tables, including comments and formaldeclarations, into hyperlinked HTML files for easy viewing in a browser.7.An expression evaluator for dynamically executing actions and conditions within the cur-rent context and displaying the results.8.Stack explorers that allow the entities on the rules e ngine’s runtime stacks to be recu r-sively explored, starting from either the data or entity stacks. Every entity’s attributes, i n-cluding array, primitive, and entity-type attributes, can be inspected.9. A navigator view that allows convenient access to all resources loaded into the IDE. Thenavigator view dynamically updates as resources are added or removed, and provides support for decorators to further indicate the state of the loaded resources. For example, active resources use a bold font, unsaved resources include a floppy disk decorator and an asterisk after the name, and invalid decision tables have a yellow warning sign decorator.er experience optimized for JDK 1.4, but supports graceful degradation under JDK 1.3using reflection.11.Support for internationalization through the use of resource bundles.∙Designed and supervised implementation of POI-based Microsoft Excel integration to replace existing Jawin COM-based integration. Resulting implementation was an order of magnitude faster than COM-based implementation and used considerably less memory as well.∙Provided two four-hour training seminars as well as a number of shorter training lectures for state employees. The purpose of the lectures was to provide junior-level programmers with the necessary information to begin programming in the OFAST environment.American Express May 2002 – August 2003 Worked as a senior developer and junior architect within the Architecture Team of American Ex-press’s Interactive Technologies Division.Standards Governance∙Involved in authoring several internal, strategic, position papers regarding J2EE strategy within American Express. 
For example, I was involved in defining a strategy for enterprise services.∙Designed and taught key parts of an internal Java training class for American Express Employees. Developed a curriculum consisting of lectures, programming assignments, and group exams to prepare students for the Java Programmer Exam.∙Lectured on Java, XML manipulation technologies, and data binding at several Architecture Forums and Java Forums.ConsultingProvided consulting to application teams. Generally consulting requests fell into one of three cate-gories. Overall, dozens of consulting requests were addressed.Resume of Amin Ahmad 9 of 17。

Abstract—Biological data has several characteristics that strongly differentiate it from typical business data. It is much more complex, usually larger in size, and continuously changing. Until recently, business data was the main target for discovering trends, patterns, and future expectations. However, with the recent rise of biotechnology, the powerful technology that was used for analyzing business data is now being applied to biological data. With the advanced technology at hand, the main trend in biological research is rapidly changing from structural DNA analysis to understanding the cellular functions of DNA sequences. DNA chips are now being used to perform experiments, and DNA analysis processes are being used by researchers. Clustering is one of the important processes used for grouping together similar entities. There are many clustering algorithms, such as hierarchical clustering, self-organizing maps, K-means clustering, and so on. In this paper, we propose a clustering algorithm that imitates the ecosystem, taking into account the features of biological data. We implemented the system using an Ant-Colony clustering algorithm; the system decides the number of clusters automatically. The system processes the input biological data, runs the Ant-Colony algorithm, draws the Topic Map, assigns clusters to the genes, and displays the output. We tested the algorithm with test data of 100 to 1,000 genes and 24 samples, and the results are promising for applying this algorithm to clustering DNA chip data.

Keywords—Ant Colony System, Biological data, Clustering, DNA chip.

Manuscript received June 15, 2006. This work was supported in part by the Korean Ministry of Commerce, Industry and Energy, and also in part by the second stage of the BK21 program of the Ministry of Education and Human Resources Development.

Minsoo Lee is with the Dept. of Computer Science and Engineering, Ewha Womans University, 11-1 Daehyun-Dong, Seodaemoon-Ku, Seoul, Korea 120-750 (e-mail: mlee@ewha.ac.kr). Yun-mi Kim (e-mail: cherish11@), Yearn Jeong Kim (e-mail: inverno@), Yoon-kyung Lee (e-mail: polyandry@), and Hyejung Yoon (e-mail: auroree@) are with the same department.

I. INTRODUCTION

Biological science is being revolutionized by the availability of abundant information regarding complete genome sequences for many different organisms. Organisms are complex and genomes can be immense, and thus new and powerful technologies are being developed to analyze large numbers of genes and proteins as a complement to traditional methodologies that study a small number at a time. With the advanced technology at hand, the main trend in biological research is rapidly changing from structural DNA analysis to understanding the cellular function of DNA sequences. The recently developed DNA chips, in other words DNA microarrays, have emerged as a prime candidate for such high-performance analysis methods [1][2][3].

In order to analyze DNA chips, a DNA analysis process is carried out.
The steps that must be followed are: performing the experiments on DNA chips, scanning the results of the experiments, carrying out quality control and normalization in order to filter the data, performing feature selection that selects specific parts, doing clustering and classification, and storing the result in a bio data warehouse to provide integrated analysis results to users [4].

One such important analysis process is clustering, which is the process of grouping together similar entities. There are many clustering algorithms, such as hierarchical clustering, self-organizing maps, K-means clustering, and so on. In this paper, however, we propose a clustering algorithm that imitates the ecosystem, taking into account the features of biological data, which is complex, large in amount, and variable. Algorithms that imitate the ecosystem are generally used to solve very complex problems. The ecosystem is very complex, and many living things exist within it. In an ecosystem, each individual is assumed to show optimal behavior, and the algorithms are designed according to such individuals' movement. They reflect biodiversity and the complexity of living things.

Ecosystem-imitating algorithms have the following benefits. First, they provide a solution based on a solid statistical model. In other words, they do not rely on a single solution but have more flexibility, because they are based on a statistical method that considers several solutions at one time. This allows the algorithm to find good solutions that may be missed by other algorithms. Second, algorithms that imitate the ecosystem make use of the interaction among the possible solutions. For example, the genetic algorithm allows solutions to pair together and create new solutions. Third, these algorithms allow exceptions, so solutions that are not typical but are actually better can be found.
For these reasons, such algorithms are suitable for the complex problem of analyzing massive amounts of complex biological data.

In this paper, we implemented a DNA chip data clustering system using the Ant-Colony clustering algorithm. The Ant Colony Optimization (ACO) algorithm uses a probabilistic technique for solving computational problems that can be reduced to finding close-to-optimal paths through graphs. It is inspired by the behavior of ants finding paths from the colony to food [5].

The developed system works as follows. The first step loads the input data, and the second and third steps create the Topic Map while assigning clusters. In order to assign clusters, it is necessary to store each node's last position, compute the distances between nodes, assign clusters, and merge clusters. Our system can dynamically decide the number of clusters; deciding the number of clusters is an important issue in the field of data mining. The last step displays the results.

The organization of this paper is as follows. Section II surveys related work on clustering techniques and algorithms that imitate the ecosystem. Section III gives an overview of the DNA chip analysis process. Section IV explains the Ant-based Clustering system for DNA chip data analysis. Section V describes the implementation and experimental results of the Ant-based Clustering system. Section VI gives the conclusion and future work.

II. RELATED RESEARCH

The objective of our work is to implement a clustering system for analyzing DNA chip data. Clustering is the process of grouping together similar entities. Clustering is appropriate when there is no a priori knowledge about the data; in such circumstances, the only possible approach is to study the similarity between different samples or experiments.
There are many existing clustering algorithms: hierarchical clustering, self-organizing maps, K-means clustering, and many others. Hierarchical clustering approaches calculate the distance between individual data points and then group together those that are close. The distances between the groups themselves can also be computed and used to create groups of groups. These can iteratively be organized into a 'tree' in which the closest groups constitute nearby 'branches', far from groups that are less similar [6]. Hierarchical clustering strategies are easy to implement, but suffer because the decision about where to create branches, and in what order the branches should be presented, can be arbitrary.

K-means clustering requires a parameter, k, the number of expected clusters, and the initial cluster centers are selected randomly. In each iteration of the algorithm, all of the profiles are assigned to the cluster whose center they are nearest to (using the distance metric), and then each cluster center is recalculated based on the profiles within the cluster [7].

Instead of simply partitioning data into disjoint clusters, self-organizing maps organize the clusters into a 'map' where similar clusters are close to each other [8]. The number and topological configuration of the clusters are pre-specified. The computational details are similar to K-means clustering, except that cluster centers are recalculated at each iteration using both the profiles within the cluster and the profiles in adjacent clusters. Over many iterations the clusters conform to the pre-specified topology [9].

Because of the characteristics of DNA chip data, namely its high complexity, large volume, and variability, we propose a clustering algorithm that imitates the ecosystem. There are currently many algorithms that imitate the ecosystem.
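The K-means iteration described above (random initial centers, nearest-center assignment, center recalculation) can be sketched as follows. This is a minimal illustration, not the paper's implementation; the two-dimensional toy profiles stand in for real gene expression vectors.

```python
import random

def kmeans(profiles, k, iterations=100):
    """Minimal K-means: assign each profile to its nearest center,
    then recompute each center as the mean of its cluster."""
    centers = [list(c) for c in random.sample(profiles, k)]
    clusters = [[] for _ in range(k)]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in profiles:
            # nearest center by squared Euclidean distance
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centers[i])))
            clusters[nearest].append(p)
        for i, members in enumerate(clusters):
            if members:  # keep the old center if the cluster emptied out
                centers[i] = [sum(vals) / len(members)
                              for vals in zip(*members)]
    return centers, clusters

# toy expression profiles: two genes near the origin, two far away
data = [(0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9)]
centers, clusters = kmeans(data, 2)
```

With this toy data the two natural groups are recovered regardless of which initial centers the random sample picks.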
The Genetic algorithm, Neural Network algorithm, Particle Swarm algorithm, and Ant Colony algorithm are the most popular such algorithms. The genetic algorithm (GA) is a search technique used in computer science to find approximate solutions to optimization and search problems. Specifically, it falls into the category of local search techniques and is therefore generally an incomplete search. Genetic algorithms are a particular class of evolutionary algorithms that use techniques inspired by evolutionary biology such as inheritance, mutation, selection, and crossover (also called recombination) [10].

Neural networks form an information-processing paradigm inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system: a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. Neural networks, like people, learn by example. An artificial neural network (ANN) is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons [11].

Particle Swarm Optimization (PSO) is an algorithm proposed by James Kennedy and R. C. Eberhart in 1995, motivated by the social behavior of organisms such as bird flocking and fish schooling. As an optimization tool, PSO provides a population-based search procedure in which individuals called particles change their position (state) with time. In a PSO system, particles fly around in a multidimensional search space. During flight, each particle adjusts its position according to its own experience and the experience of neighboring particles, making use of the best position encountered by itself and its neighbors [12].
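The PSO position update just described can be sketched with the standard velocity rule, in which each particle is pulled toward its personal best and the swarm's global best. The inertia and attraction constants below are common textbook values, not parameters from the paper.

```python
import random

def pso(objective, dim=2, particles=10, iterations=200,
        w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO: velocity blends inertia, attraction to the particle's
    own best position (pbest), and attraction to the swarm best (gbest)."""
    xs = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
    vs = [[0.0] * dim for _ in range(particles)]
    pbest = [x[:] for x in xs]
    gbest = min(pbest, key=objective)[:]
    for _ in range(iterations):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            if objective(xs[i]) < objective(pbest[i]):
                pbest[i] = xs[i][:]
                if objective(pbest[i]) < objective(gbest):
                    gbest = pbest[i][:]
    return gbest

# minimize the sphere function; the optimum is at the origin
best = pso(lambda p: sum(v * v for v in p))
```

After a few hundred iterations the swarm best lands close to the origin.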
The Ant Colony Optimization (ACO) algorithm that our system uses is a probabilistic technique for solving computational problems that can be reduced to finding good paths through graphs [13].

There has been some work on clustering algorithms based on the Ant Colony algorithm. Yuqing et al. proposed a K-means clustering algorithm based on the Ant Colony [14], and Xiao et al. proposed gene clustering using self-organizing maps and particle swarm optimization [15]. Handl et al. proposed Ant-based clustering [16]. Our system extends this work to adapt ACO to DNA chip data analysis.

III. DNA CHIP ANALYSIS PROCESS

DNA chips, in other words microarrays, are a tool for gene expression analysis. A DNA chip consists of probes, which are single-stranded DNA printed on a solid substrate. The type or name of the chip depends on the method of chip fabrication. The idea behind the bio-chip is that DNA in a solution containing sequences complementary to the sequences deposited on the surface of the array will hybridize to those complementary sequences. Usually, this interrogation is done by washing the array with a solution containing single-stranded DNA (ssDNA), called a target.

To analyze the DNA chip, an overview of the steps that must be gone through is as follows. First, the researchers perform experiments using DNA chips. Afterwards, the DNA analysis process is carried out by first scanning the results of the experiments. Next, quality control and normalization processes are carried out to correct errors; during this step, much of the data is filtered out and the quantity of the test data decreases. Then feature selection, which selects specific parts, is performed. The next step is data mining work such as classification and clustering.
Finally, the results are stored in the biological data warehouse, and the warehouse system provides analysis results of integrated biological information to users.

The key to interpreting DNA chip data is the DNA material that is hybridized on the array. Since the target is labeled with a fluorescent dye or a radioactive element, the hybridization spot can be detected and quantified easily. After hybridization, the results of the experiment are scanned: the scanner scans the spots and converts the measured quantities to numeric values, namely expression values. This process is called image processing. The next processes, quality control and normalization, are performed to remove unnecessary data that could distort the overall expression values. In other words, through quality control and normalization the data are filtered so that only meaningful and significant data remain. Normalization also adjusts for any bias that arises from variations in the DNA chip technology rather than from biological differences between the RNA samples of the printed genes.

To discover meaningful information from the raw data obtained so far, data mining techniques are used. The best-known techniques are clustering and classification. First, clustering is performed. Clustering is similar to classification in that data is being grouped; however, unlike classification, the groups are not predefined. Instead, the grouping is accomplished by finding similarities between data according to characteristics found in the actual data. The groups are called clusters. Clustering is appropriate when there is no a priori knowledge about the data; in such circumstances, the only possible approach is to study the similarity between different samples or experiments. In fact, clustering is the process of grouping together similar entities. The algorithm treats each input as a set of n numbers, i.e., an n-dimensional vector.
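Since each gene is treated as an n-dimensional expression vector, one concrete similarity measure is the Euclidean distance between two such vectors. A minimal sketch follows; the expression values are invented for illustration, not taken from the paper's data:

```java
// Euclidean distance between two gene expression vectors of equal length.
// The sample vectors in main() are made-up values for illustration only.
public class GeneDistance {
    static double euclidean(double[] a, double[] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d; // accumulate squared differences per sample
        }
        return Math.sqrt(sum);
    }

    public static void main(String[] args) {
        // two hypothetical genes measured over four samples
        double[] gene1 = {5.1, -1.3, 1.8, 2.7};
        double[] gene2 = {4.9, -1.0, 2.0, 2.5};
        System.out.println(euclidean(gene1, gene2));
    }
}
```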
Similarity is measured in many different ways, and the final result of the clustering depends on the formula used. The next data mining task performed is classification. For classification, the resulting data from quality control and normalization is used as input to create the classification rules. The rules are applied to the set of test data, and a prediction is made of the class to which each data point should belong, taking into account the accuracy of the rule.

Finally, the results are stored in the biological data warehouse. A data warehouse is a subject-oriented, integrated, nonvolatile, time-variant collection of data that can support complex analysis and decision-making processes. The warehouse system provides analysis of integrated biological information to users [4]. The whole process is shown in Fig. 1.

Fig. 1 DNA chip analysis process

IV. ARCHITECTURE OF ANT-BASED CLUSTERING SYSTEM

The Ant-based Clustering System for DNA chip data analysis processes the data in four stages. The first stage loads the input data, and the second stage creates a Topic Map while executing the ant algorithm; the Topic Map is a map that displays the movement of the ants. The third stage assigns the clusters, and the last stage displays the results. In this section, we describe the stages of the Ant-based Clustering System in detail.

A. System Flow of Ant-based Clustering System

In our system, the DNA chip data is stored in the database after going through quality control and normalization. During the input stage, the system connects to the database and brings portions of the normalized biological data into system memory. The next task is to run the Ant Colony algorithm and draw the Topic Map. The Topic Map shows the movements of the ants that perform the clustering. While drawing the Topic Map, the system uses the Ant Colony algorithm underneath.
The ant colony optimization algorithm (ACO) is a probabilistic technique for solving computational problems that can be reduced to finding good paths through graphs; it is inspired by the behavior of ants finding paths from the colony to food [13]. Each ant moves a biological data point, and the ants' movements are displayed via the Topic Map. The next step is to assign a cluster number to each gene. For this work, the system uses a distance matrix that stores the distances among the genes. The system decides the cluster number for each gene using a threshold value chosen by the user; this threshold represents the proximity limit for assigning genes to the same cluster. The last step is displaying the results: the system displays information such as the number of genes, the number of samples, the iteration count, and the cluster number of each gene. The overall system flow is shown in Fig. 2.

Fig. 2 System flow of Ant-based Clustering System

B. Data Input Process

The DNA chip data stored in the database is composed of many tables, but our system uses only a table named TBDC_NORM_RESULT. The table consists of three attributes: gene data, sample data, and expression values. Fig. 3 shows the TBDC_NORM_RESULT table.

Gene Name   Sample Name   Expression Value
293842      N00287         5.11359232185826
293842      N00288        -1.3389279170472
…           …             …
293843      N00287         1.82213569753198
293843      N00288         2.70243282124646
…           …             …

Fig. 3 TBDC_NORM_RESULT table in DB

The system connects to the database and loads portions of the values into memory. It then divides the values into regular intervals and assigns different colors to the genes, to give the user a better view of the data in the Topic Map.

C. Ant Colony Clustering and Drawing the Topic Map

While drawing the Topic Map, the system runs the Ant Colony Clustering algorithm. Ants form some of the largest groups in nature, measured by the number of members.
Ants also have one of the longest histories of colony formation; their behavior has been optimized over a very long period of time, so imitating it is a good way to solve complex problems. The algorithm is also simple and can find near-optimal solutions by exploiting the characteristics of the ants. Ants deposit pheromone on the path they follow: the larger the amount of pheromone a path carries, the higher the probability that it is chosen. The amount of pheromone deposited on a path is incremented in proportion to the quality of the candidate solution, and each path followed by an ant is a candidate solution for the given problem.

Each point of the Topic Map represents an ant. The ants move randomly on the map. Gene data items scattered within this map can be picked up, transported, and dropped by the ants. The picking and dropping operations are biased by the similarity and density of gene data items within the ants' local neighborhood: ants are likely to pick up gene data items that are either isolated or surrounded by dissimilar ones, and they tend to drop them in the vicinity of similar ones. In this way, a clustering of the elements on the map is obtained. One of the algorithm's special features is that it generates a clustering of a given data set by embedding the high-dimensional data items on a two-dimensional grid; it has been said to perform both a vector quantization and a topographic mapping at the same time, much as self-organizing feature maps do (SOMs, [17]) [16]. The detailed algorithm is described in the next paragraph.

The Ant Colony Clustering algorithm starts with an initialization phase, in which (i) all gene data items are randomly scattered on the Topic Map; (ii) each ant randomly picks up one gene data item; and (iii) each ant is placed at a random position on the grid.
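The similarity/density bias described above is commonly realized with a Lumer–Faieta-style neighborhood function and threshold decision rules. The sketch below is our illustration of that general scheme; the scaling parameter alpha, the 3x3 neighborhood size, and the sample distances are assumed values, not the exact settings of [16]:

```java
import java.util.Random;

// Illustrative neighborhood density f(i) and probabilistic pick/drop rules,
// loosely in the style of Lumer-Faieta / ant-based clustering. ALPHA and the
// neighborhood size are assumed values for this sketch.
public class AntDecisions {
    static final double ALPHA = 0.5; // dissimilarity scaling (assumed)

    // f(i): average similarity of item i to the items j in its local
    // neighborhood; dissimilar neighbors (large d) push f toward zero.
    static double neighborhoodF(double[] distToNeighbors, int neighborhoodSize) {
        double sum = 0.0;
        for (double d : distToNeighbors) sum += 1.0 - d / ALPHA;
        return Math.max(0.0, sum / neighborhoodSize);
    }

    // Threshold-style rules: isolated/dissimilar items are picked up with high
    // probability; items surrounded by similar ones are dropped with high
    // probability.
    static double pickProbability(double f) { return f <= 1.0 ? 1.0 : 1.0 / (f * f); }
    static double dropProbability(double f) { return f >= 1.0 ? 1.0 : Math.pow(f, 4); }

    public static void main(String[] args) {
        Random rnd = new Random();
        // distances from item i to its three occupied neighbor cells,
        // evaluated over an assumed 3x3 (9-cell) neighborhood
        double f = neighborhoodF(new double[]{0.1, 0.2, 0.4}, 9);
        boolean pick = rnd.nextDouble() < pickProbability(f);
        System.out.println("f = " + f + ", pick decision: " + pick);
    }
}
```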
Subsequently, the sorting phase starts. This is a simple loop in which (i) one ant is randomly selected; (ii) the ant performs a step of a given step size (in a randomly determined direction) on the grid; and (iii) the ant probabilistically decides whether to drop its gene data item. In the case of a 'drop' decision, the ant drops the gene at its current grid position (if this grid cell is not occupied by another gene data item) or in the immediate neighborhood of it (it locates a nearby free grid cell by means of a random search). It then immediately searches for a new gene data item to pick up. This is done using an index that stores the positions of all 'free' data items on the grid: the ant randomly selects one data item out of the index, proceeds to its position on the grid, evaluates the neighborhood function f*(i), and probabilistically decides whether to pick up the gene data item. It continues this search until a successful picking operation occurs. Only then is the loop repeated with another ant. For the picking and dropping decisions, the following threshold formulae (in the form given in [16]) are used:

p*pick(i) = 1 if f*(i) <= 1, and 1/f*(i)^2 otherwise
p*drop(i) = 1 if f*(i) >= 1, and f*(i)^4 otherwise

The algorithm is shown in Fig. 4. The movements of the ants are shown by the colored moving points.

D. Assigning Clusters

After drawing the Topic Map, the final position of each node can be obtained. To assign a cluster number to each node, the system computes the distance between every pair of nodes from their final positions and stores this information in the distance matrix. The distance computation uses the Euclidean distance, i.e., the straight-line distance between two points. The system assigns cluster numbers based on this distance calculation. The distance matrix is shown in Fig. 5.

         Node 1      Node 2      …   Node n
Node 1   0           0.3425673   …   38.231435
Node 2   0.3425673   0           …   38.563426
…        …           …           …   …
Node n   38.231435   38.563426   …   0

Fig. 5 The distance matrix

The process of assigning cluster numbers is divided into two phases.
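These two phases can be sketched as follows. The node positions and the two thresholds below are invented for illustration, and the simple relabeling loop is our own simplification, not the paper's actual implementation:

```java
import java.util.*;

// Illustrative two-phase cluster assignment over final 2-D node positions:
// phase 1 groups nodes closer than assignThreshold; phase 2 merges clusters
// whose closest pair of nodes is nearer than mergeThreshold. Positions and
// thresholds in main() are made-up values.
public class ClusterAssign {
    static int[] assign(double[][] pos, double assignThreshold, double mergeThreshold) {
        int n = pos.length;
        int[] cluster = new int[n];
        for (int i = 0; i < n; i++) cluster[i] = i; // init: own index

        // Phase 1: give nodes within assignThreshold the same cluster number
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
                if (dist(pos[i], pos[j]) < assignThreshold)
                    relabel(cluster, cluster[j], cluster[i]);

        // Phase 2: merge clusters whose closest node pair is within mergeThreshold
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
                if (cluster[i] != cluster[j] && dist(pos[i], pos[j]) < mergeThreshold)
                    relabel(cluster, cluster[j], cluster[i]);
        return cluster;
    }

    static double dist(double[] a, double[] b) {
        return Math.hypot(a[0] - b[0], a[1] - b[1]); // Euclidean distance in 2-D
    }

    static void relabel(int[] cluster, int from, int to) {
        for (int k = 0; k < cluster.length; k++)
            if (cluster[k] == from) cluster[k] = to;
    }

    public static void main(String[] args) {
        double[][] pos = {{0, 0}, {0.2, 0.1}, {5, 5}, {5.1, 5.2}, {20, 20}};
        int[] c = assign(pos, 0.5, 2.0);
        System.out.println(Arrays.toString(c)); // [0, 0, 2, 2, 4] -> three clusters
    }
}
```

Note that the number of clusters falls out of the thresholds rather than being fixed in advance, which is the property the text highlights.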
First, there is an assigning phase, followed by a merging phase that merges nearby clusters. During the assigning phase, the system initializes each node's cluster number to the node's own index. It then iteratively chooses one node and compares its distances to the other nodes. If the distance between the selected node and another node is smaller than a threshold, the Ant-based Clustering System assigns the same cluster number to both nodes. The threshold is chosen by the user. This process is repeated until all nodes have been compared with their neighboring nodes and assigned a cluster number. In the merging phase, the system computes the distance between two clusters by choosing one node from each cluster such that the selected pair has the shortest distance between the two clusters. If the distance between the selected nodes is smaller than a threshold, the system merges the two clusters; the threshold used in this phase differs from the threshold of the assigning phase. This phase is repeated until all clusters have been evaluated. Through this process, the system decides the number of clusters automatically, thereby addressing the problem of choosing the number of clusters, an important issue in the data mining field.

E. Displaying Clustering Results

The final displayed results of the Ant-based Clustering System are the Topic Map and the cluster number of each node. The Topic Map is a user interface that shows the ants' movement. The node's cluster information includes nine kinds of detail: the node indices of the genes, the number of genes and the number of samples, the number of iterations, the coordinates of the final positions of the nodes, the nodes belonging to each cluster, the cluster number to which each node belongs, the number of clusters, the time taken, and the Pearson correlation. The Topic Map is shown in Fig. 6 and the results are shown in Fig. 7.

Fig. 6 Output: Topic Map

Fig.
7 Output: The node information

V. EXPERIMENTAL RESULTS

A. Setup

The Ant-based Clustering System has been implemented in Java. We used the Java 2 Standard Edition Development Kit version 1.5.0_07 as the development language and the Eclipse SDK 3.2 as the IDE. The database server for this system is Oracle 9i. To manage the database easily, we also used Orange 3.1.

B. Experimental Results

We first experimented with a test data set of 100 genes and 24 samples, and then with 1000 genes and 24 samples. We performed 100 iterations for the first data set and 1000 iterations for the second. The performance results are shown in Fig. 8 and the clustering results are shown in Fig. 9.

Genes   Samples   Iterations   Time
100     24        100          1656 ms
1000    24        1000         23656 ms

Fig. 8 Execution time for clustering on the test data

Fig. 9 Clustering result of the first test data set

VI. CONCLUSION

In this paper, we proposed a clustering method that makes use of an algorithm that imitates the ecosystem. We implemented the system using the Ant Colony clustering algorithm. The system works in several steps: loading the input data, running the Ant Colony algorithm while drawing the Topic Map, assigning clusters, and displaying the results. We have tested the algorithm with data sets consisting of 100 genes and 24 samples and then 1000 genes and 24 samples. The results of the clustering are very promising and can also be visually verified.
Future work includes performing more extensive experiments on different types of DNA chip data, as well as further optimization.

REFERENCES

[1] DJ Lockhart, HL Dong, MC Byrne, MT Follettie, MV Gallo, MS Chee, M Mittmann, CW Wang, M Kobayashi, H Horton, EL Brown, Expression monitoring by hybridization to high-density oligonucleotide arrays, Nature Biotechnology, 14(13):1675-1680, 1996.
[2] JL DeRisi, VR Iyer, PO Brown, Exploring the metabolic and genetic control of gene expression on a genomic scale, Science, 278(5338):680-686, 1997.
[3] C Debouck, PN Goodfellow, DNA microarrays in drug discovery and development, Nature Genetics, 21(1 suppl):48-50, 1999.
[4] David Bowtell, Joseph Sambrook, DNA Microarrays, CSHL Press, 2002.
[5] WIKIPEDIA, /wiki/Ant_colony_optimization.
[6] Michael B. Eisen, Paul T. Spellman, Patrick O. Brown, and David Botstein, Cluster analysis and display of genome-wide expression patterns, Proceedings of the National Academy of Sciences of the United States of America (PNAS), 95:25, 1998.
[7] G. Sherlock, Analysis of large-scale gene expression data, Brief Bioinform., vol. 2, pp. 350-362, 2001.
[8] P Toronen, M Kolehmainen, G Wong, E Castren, Analysis of gene expression data using self-organizing maps, FEBS Letters, 451(2):142-146, 1999.
[9] DNA chip, http://mbel.kaist.ac.kr/research/DNAchip_en.html.
[10] WIKIPEDIA, /wiki/Genetic_algorithm.
[11] Aleksander I. and Morton H., An Introduction to Neural Computing, 2nd edition.
[12] Particle Swarm Optimization Homepage, /~mohan/pso/.
[13] WIKIPEDIA, /wiki/Ant_colony_optimization.
[14] Peng Yuqing, Hou Xiangdan, Liu Shang, The K-means Clustering Algorithm Based on Density and Ant Colony, IEEE Int. Conf. Neural Networks & Signal Processing, Nanjing, China, December 14-17, 2003.
[15] Xiang Xiao, Ernst R. Dow, Russell Eberhart, Zina Ben Miled, Robert J.
Oppelt, Gene Clustering Using Self-Organizing Maps and Particle Swarm Optimization, IEEE International Workshop on High Performance Computational Biology, 2003.
[16] Julia Handl, Joshua Knowles, Marco Dorigo, Ant-Based Clustering: A Comparative Study of Its Relative Performance with Respect to K-means, Average Link and 1D-SOM, IRIDIA Technical Report Series, 2003.
[17] T. Kohonen, Self-Organizing Maps, Springer-Verlag, Berlin, Germany, 1995.
