

Free Form Deformation and Its Application (自由变形技术及其应用)

计算机研究与发展ISSN 100021239ΠCN 1121777ΠTPJournal of Computer Research and Development 47(2):3442352,2010 收稿日期:2008-04-19;修回日期:2009-06-16 基金项目:国家“九七三”重点基础研究发展计划基金项目(2004CB318000);国家自然科学基金项目(60773179,60803076,60904070);浙江省自然科学基金项目(Y1090718);浙江大学CAD &CG 国家重点实验室开放课题基金项目(A0804)自由变形技术及其应用徐 岗1 汪国昭2 陈小雕1,31(杭州电子科技大学计算机学院 杭州 310018)2(浙江大学数学系图形图像研究所 杭州 310027)3(浙江大学CAD &CG 国家重点实验室 杭州 310027)(xugangzju @ )Free Form Deformation and Its ApplicationXu Gang 1,Wang Guozhao 2,and Chen Xiaodiao 1,31(College of Com puter Science ,H angz hou Dianz i Universit y ,H angz hou 310018)2(I nstitute of Com puter Gra phics and I mage Processing ,De partment of M athematics ,Zhej iang Universit y ,H angz hou310027)3(S tate Key L aboratory of CA D &CG,Zhej iang Universit y ,H angz hou 310027)Abstract Research on object deformation is always t he hot point in t he area of comp uter grap hics and comp uter aided design.Free form deformation ,which is one of t he important technologies of object deformation ,has been int roduced into t he pop ular modeling software and animation software successf ully.In t his paper ,t he research on f ree form deformation in t he past twenty years has been conducted as a comprehensive survey.Firstly ,t he aut hors classify t he existing technologies into accurate free form deformation and non accurate f ree form deformatio n ,and t hen f rom t he deformation tools used in t he different met hods ,t he non accurate free form deformation is classified into four classes :volume based met hod ,surface based met hods ,curve based met hods and point based met hods.Furt hermore ,t hey also compare t he existing no n accurate met hods of f ree form deformation by listing t heir advantages and disadvantages f rom t he view of convenience of deformation tools creation ,efficiency of parameterization ,interactivity for users ,versatility and t heir application in pop ular modeling software.The int rinsic relations between t hem are also presented.Finally ,t heir applications in several p ractical fields and t heir research directions in t he f ut ure are introduced.This survey is not only valuable for t he scientific researchers in t his field but also valuable for t he p ractical users of 3D modeling software and 3D animation software.K ey w ords free form deformation ;geomet ric modeling ;animation technique ;object editing ;survey摘 要 物体变形一直是计算机图形学和辅助设计中的一个热点问题.自由变形方法作为物体变形的核心技术,已被成功集成到当今主流的造型软件及动画软件中.对20年来自由变形技术的发展作了详细的综述,对现有技术进行了系统的分类,即将其分为非精确自由变形和精确自由变形,并根据所使用的变形工具的不同,将非精确自由变形技术分为4类:基于体的变形、基于曲面的变形、基于曲线的变形、基于点的变形.进一步比较了各类技术在变形工具的创建、参数化效率、变形工具的修改、多功能性等方面的优缺点,并分析了它们之间的内在联系.最后对其应用及未来工作进行了简要的介绍.该综述不仅对于该领域的研究人员具有重要的参考价值,而且对于三维造型师及三维动画师也有一定的指导意义.关键词 自由变形;几何造型;动画技术;物体编辑;综述中图法分类号 TP391.41 随着计算机科学在硬件及软件方面的飞速发展,计算机图形学已经广泛应用到现代社会中的各个领域.计算机图形学算法大体上可以分为绘制算法、造型算法及动画设计算法.对现有的简单物体进行变形以得到复杂物体,以及通过变形技术来交互地实现动画效果更是最近图形学领域研究的热点[122].在计算机图形学领域,物体变形技术大致可以分为两类:基于物理的方法和基于几何的方法[3].基于物理的方法是将物体看成具有一定物理性质的形体而不是单纯的几何体[4],依据现实世界中的客观物理规律来对物体进行变形操作,从而能够生成逼真的变形效果,多用于游戏动画、虚拟现实等领域,其缺点是计算量和存储量大.基于几何的方法是将物体抽象为三维空间中的几何体,运用计算辅助几何设计和数字几何处理技术来实现对物体形状的编辑,该类方法简单易行,计算量小,但其逼真度不高,多用于计算机辅助设计和游戏动画领域.自由变形技术是几何变形方法的典型代表,它最早由Sederberg和Parry在1986年提出[5],并在最近20年得到了突飞猛进的发展.该方法受到人们广泛重视的原因在于:首先,在对物体进行整体和局部的形状修改时突破了传统方法的局限性;其次,其独立于物体的表示形式,非常容易集成于现有的软件造型系统.另外,该方法在产生物体的形状动画方面具有良好的交互性和可控性.自由变形技术现已广泛应用于几何造型、计算机动画、科学数据可视化等领域,并已经被成功地集成于当今流行的造型及动画软件中,如3D Max,Maya及Softimage等.本文对20多年来自由变形技术的发展进行了详细的综述,对其进行了系统分类,比较分析了各类方法的优缺点,并简要介绍了相关应用及将来的研究方向.其经过20多年的发展,已有相当数量的相关工作,但其思想精髓却是一致的,所不同的是选用的变形工具和构造的变形参数空间,这也是该综述视角选择的依据和特点所在.根据变形过程中是否需要对待变形物体进行采样,可将其分为非精确自由变形和精确自由变形两大类.1 
非精确自由变形在非精确自由变形过程中,首先需要对待变形物体进行采样,然后对采样点或控制顶点进行变形.因此,该方法与物体的拓扑和表示形式无关.根据所使用的变形工具的不同,可将非精确自由变形技术分为4类:基于体的变形、基于曲面的变形、基于曲线的变形、基于点的变形.1.1 基于体的变形Sederberg和Parry[5]在1986年提出了基于Bézier体的变形方法,并首次提出了自由变形的概念.该方法提供了一种空间变形方法框架:首先将待变形物体嵌入到三变量张量积Bézier参数体,当参数空间的形状改变时,将变形传递给待变形物体.其主要步骤如下:1)创建变形工具.定义一个三变量张量积Bézier参数体,包括参数空间和控制顶点网格.2)将物体映射到参数空间中.在待变形物体上取一些采样点,计算每个采样点在参数空间中相应的参数坐标.3)修改变形工具.用户修改张量积Bézier体的控制顶点,使张量积Bézier体的形状发生改变.4)物体发生变形.保持每个采样点在参数空间中相应的参数坐标不变,由于张量积Bézier体的形状发生改变,故每个采样点的空间位置也会相应地发生改变,从而实现了对物体的变形操作.在早期工作中,三变量张量积Bézier体的控制顶点网格为一平行六面体,故变形的参数空间也为一平行六面体.所以该算法第2步中参数化过程实际上是坐标系统之间的一个仿射变换,参数化比较容易,但变形效果不是特别理想.如果待变形物体完全嵌入Bézier体则该变形为整体变形,如果部分嵌入Bézier体则为局部变形,但此时就要对连续性进行考虑.Davis和Burton,Kalra等人[627]分别独立提出了基于有理张量积Bézier体的自由变形方法.该变形方法由于对Bézier体的每个控制顶点都引入了543徐 岗等:自由变形技术及其应用一个权因子,因而增加了变形的自由度.为了能在不考虑连续性的情况下进行局部变形,Greissmair, Purgathofer[8]及Comninos[9]分别提出了基于均匀B 样条体的自由变形技术.Lamousin和Waggenspack[10]提出了基于非均匀有理B样条体的自由变形方法,不仅能够进行局部变形,而且引入了节点和权因子作为自由度.上述基于B样条体的方法,由于基函数的次数与控制顶点的个数无关,因而可以在不增加计算复杂度的情况下增加控制顶点,从而达到更好的用户交互.在上述算法中,初始的变形工具均为一个平行六面体.Coquillart[11]在1990年提出了一种新的自由变形方法,其初始的变形工具是对平行六面体进行一些编辑操作后的形状,可以得到如圆柱形的初始Bézier体.该方法可以使初始变形工具的形状更接近待变形物体的形状,使得控制顶点的偏移能够和变形效果更好地结合起来.但若对物体进行局部变形,仍会存在连续性问题.为克服这一缺点, Bechmann等人[12]在1996年提出了利用分段Bézier 四面体的变形方法,能够保证C1连续.由于NU RBS体在拓扑结构上仍有很大的限制,故上述变形方法都还不是基于任意拓扑的变形方法.MacCracken及Joy[13]提出了一种基于细分原则的自由变形方法,通过调整初始体网格的形状来控制变形效果.其最大优点是变形工具为任意拓扑,但这同时也增加了计算的复杂度.Song和Yang[14]提出了一种基于加权T样条体的自由变形方法.由于T样条的控制网格允许出现T字形状,更加容易逼近可变形物体的形状.笔者利用有理DMS样条体作为变形工具,提出了一种新的自由变形技术[15],该技术将现今存在的自由变形技术的大多数优点统一到同一个技术框架之中,例如局部变形、任意拓扑的控制晶格、光滑变形、多分辨率变形及直接修改等.除了上述方法外,Hua和Qin[16]及Qing等人[17]分别独立提出了基于梯度场的自由变形方法,其基本思想与Sederberg的方法类似.基于体的自由变形方法的控制顶点太多,用户无法准确预测变形效果,对变动每一个控制顶点所影响的区域也很难控制,因此不利于用户交互.通常情况下,用户只是通过调整边界面上的控制顶点来调整变形,而内部的控制顶点基本不在调整范围之列.1.2 基于曲面的变形冯结青等人[18]提出了第1个基于参数曲面的自由变形方法.其主要步骤如下:1)创建变形工具.初始变形工具为XO Y平面上的矩形网格(为准均匀的B样条曲面,记为S(u, v)).2)将物体映射到参数空间中.在待变形物体上取采样点,然后对每个采样点赋予一个参数坐标.对于待变形物体的一个点X(x,y,z),计算它沿着法向在初始矩形平面上的投影X p,然后得到它的参数值(u,v).第3个参数坐标w为X(x,y,z)与点X p 之间的距离.这样便有X=S(u,v)+w×N(u,v),其中N(u,v)是曲面S(u,v)在点X p处的法向量.3)修改变形工具.用户通过调整控制顶点修改初始曲面S(u,v),得到新曲面S~(u,v).4)物体发生变形.保持每一个采样点的参数坐标不变,采样点X(x,y,z)在物理空间中变为X~=S~(u,v)+w×N~(u,v),其中N~(u,v)是曲面S~(u,v)在点(u,v)处的法向量.在该变形方法中,用户可以方便地通过修改S(u,v)来对物体进行弯曲和扭曲操作,修改w来对物体进行Taper操作.另外,文中定义了一个额外参数H z(u,v),其为高度曲面H(u,v)的z向量.此时:X~=S~(u,v)+H z(u,v)×w×N~(u,v). 初始高度曲面为平面z=1上的矩形平面网格,也为一张准均匀张量积B样条曲面.该方法可以实现局部和整体变形,交互性强,但因为该方法中的映射不是等距的,会导致物体各个部分产生不均匀的变形,从而产生扭曲现象.冯结青等人[19]在1998年提出了一种沿着参数曲面的均匀变形方法,其主要采用了等距映射和重新参数化的方法.Chen等人[20]提出了一种沿着参数曲面的无扭曲的自由变形方法,其主要思想是采用无扭曲纹理映射的方法,并把作为变形工具的参数曲面先进行近似展开为平面,然后把待变形物体上的点投影到展开后得到的平面上,并把平面上的点投影到原参数曲面上,最后沿着参数曲面在各点的法向偏移,并进行线性插值,就得到了变形后的物体.冯结青等人利用基于参数曲面的自由变形技术及关键帧插值技术实现了物体的动态变形[21].由于上述方法的变形工具通常为正规拓扑,而不是任意拓扑,因此不便于局部变形和任意变形. 
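The four steps of the Bézier free form deformation in Section 1.1 above (create a trivariate tensor-product Bézier volume, assign each sampled point its parameter coordinates, move control points, and re-evaluate the volume at the stored parameters) can be illustrated with a short sketch. This is only a minimal NumPy illustration, not the implementation of any paper cited here; the 3x3x3 lattice, the axis-aligned parameter box, and all variable names are assumptions of the example.

```python
import numpy as np
from math import comb

def bernstein(n, i, t):
    """Bernstein basis function B_{i,n}(t) evaluated on an array t."""
    return comb(n, i) * (t ** i) * ((1 - t) ** (n - i))

def ffd(points, lattice, box_min, box_max):
    """Deform sample points with a trivariate Bezier volume.

    points  : (N, 3) sample points of the object
    lattice : (l+1, m+1, n+1, 3) Bezier control points
    box_min, box_max : corners of the axis-aligned parameter box
    """
    l, m, n = (s - 1 for s in lattice.shape[:3])
    # Step 2: affine parameterization (s, t, u) in [0,1]^3
    stu = (points - box_min) / (box_max - box_min)
    out = np.zeros_like(points)
    # Step 4: re-evaluate the Bezier volume at the stored parameters
    for i in range(l + 1):
        Bi = bernstein(l, i, stu[:, 0])
        for j in range(m + 1):
            Bj = bernstein(m, j, stu[:, 1])
            for k in range(n + 1):
                Bk = bernstein(n, k, stu[:, 2])
                out += (Bi * Bj * Bk)[:, None] * lattice[i, j, k]
    return out

# Usage: a 3x3x3 lattice over the unit cube; moving one control point (step 3)
grid = np.stack(np.meshgrid(*([np.linspace(0, 1, 3)] * 3), indexing="ij"), axis=-1)
grid[1, 1, 2] += np.array([0.0, 0.0, 0.5])          # lift part of the top face
pts = np.random.rand(1000, 3)                        # sampled object (step 2 input)
deformed = ffd(pts, grid, np.zeros(3), np.ones(3))   # step 4
```

Because the uniformly spaced initial lattice reproduces the identity map (linear precision of the Bernstein basis), the sampled points only move where control points are displaced, which is the behaviour described above.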
K obayashi和Oot subo[22]在2003年提出了一种基于三角网格的自由变形方法,其主要思想是对三角网格中的每个三角形都建立一个局部坐标系,然后643计算机研究与发展 2010,47(2)对待变形物体上的每个采样点都在每个坐标系中赋予一个局部参数坐标,当三角网格发生变形时保持采样点的参数坐标不变,则采样点在每个坐标系中的空间位置会发生变化,得到一些新点,最终的变形结果为这些新点的线性组合.J u等人[23]在2005年的Siggrap h会议上讨论了如何计算任何封闭三角网格的平均值坐标,然后改变封闭三角网格的形状,而保持平均值坐标不变,从而实现物体的变形.Feng(冯结青)等人2006年提出了一种基于细分曲面的多分辨率自由变形方法[24].它可以看作基于参数曲面的自由变形的方法在任意拓扑下的扩展,也可以看作基于体细分的方法在曲面情形下的推广.由于其控制网格具有任意拓扑,因此该算法虽然属于变形工具为二维的一类,但可以达到三维的效果,计算和存储成本也相对较低,可实现多分辨率变形.Y oon和K im[25]在2006年的Eurograp hics会议上提出了一种基于扫掠曲面(sweep surface)的自由变形方法.其思想是先用若干个扫掠曲面逼近待变形物体,然后把待变形物体上的采样点绑定在扫掠曲面上,最后通过对扫掠曲面变形来实现物体的变形.该方法有两大优势:一是可以实现多层次的联动变形,二是物体的体积在变形过程中可保持不变.由于修改曲面的控制顶点要比修改参数体的控制顶点容易得多,因此基于曲面的自由变形方法在交互性方面有了很大提高.变形功能方面也有了很大改进,可以实现多分辨率和保体积的效果.在实际应用中常用来作整体变形.1.3 基于曲线的变形Chang和Rockwood[26]在1994年提出了一种基于Bézier曲线de Casteljau算法的自由变形方法.Bechmann及Elkouhen[27]详细给出了该算法的实现过程.该算法主要受Bézier曲线de Casteljau 算法的启发,即Bézier曲线的de Casteljau算法实际上是把线段[0,1]变形为一条Bézier曲线.在通常的变形算法中,变形工具通常有两个:初始工具及变形后的工具.但在该算法中,变形工具只有一条Bézier曲线.实际上,初始的变形工具为线段[0,1].由于三角域上的Bézier曲面及矩形域上的Bézier 曲面都有类似的de Casteljau算法,Mikita[28]及Bechmann[29]分别对此进行了推广,不同之处是在每个控制顶点处用户只需指定一个向量(手柄).实际上,如果将上述算法推广到三变量的情况,便可得到基于Bézier体的自由变形算法[5].这也体现了此3类方法在本质上的联系.Lazarus等人[30]在1994年提出了一种基于轴曲线的轴自由变形方法.其主要步骤如下:1)创建变形工具.用户给出初始曲线.2)将物体映射到参数空间中.运用细分方法将初始曲线离散成直线段的逼近,并利用旋转最小标架对每条直线段建立局部坐标系.对待变形物体上的每个采样点,采用最近点原则找到逼近曲线上的一个点,并计算该采样点在此点处的旋转最小标架的参数值.3)修改变形工具.利用调整控制顶点的方法对初始曲线进行变形.4)物体发生变形.对变形后的曲线重新细分逼近,并重新建立局部坐标系.保持采样点的原参数坐标不变,便会得到采样点新的空间位置.轴变形方法由于采用了基于最近点原则的参数化,因此计算量比较大,在连续性方面也有待提高. Singh等人[31]在1998年Siggrap h会议上提出了一种基于曲线网的自由变形方法,使得交互手段更加灵活.基于旋转最小标架的轴变形方法不能通过操纵轴曲线来实现物体的扭曲,为弥补这一缺点, Hui[32]利用曲线对来控制局部坐标标架的方向,从而可方便地实现物体的弯曲和扭曲.冯结青等人[33]提出了一种基于弧长参数化的轴变形方法,可以实现物体的等长度和等体积变形.计忠平等人[34]给出了一种无局部自交的轴变形方法.该方法通过调整轴曲线的控制顶点来调整轴曲线的曲率,从而避免物体发生局部自交.基于曲线的自由变形方法,在用户交互性方面比较好,但是变形功能比较少.在实际应用中常用来作整体变形.1.4 基于点的变形Borrel和Bechmann[35]在1991年提出了一种基于约束的空间变形方法.此方法中物体的变形以物体上点的偏移量为基础.Hsu等人[36]利用最小二乘法来解决自由变形后的物体通过空间指定一点的问题.Rup recht等人[37]在1995年提出了一种利用散乱点插值技术来实现自由变形的技术.Moccozet 和Magnenat[38]在1997年提出了一种具有代表性的基于点的自由变形方法,称为Dirichlet自由变形.该方法基于计算几何中凸包、Voronoi图、Delaunay三角化的概念以及传统自由变形的思想,可以实现自由变形后的物体通过空间指定一点的效果.其算法流程为:743徐 岗等:自由变形技术及其应用1)创建变形工具.用户输入控制点集合P,但不需在控制点集合上定义特殊的拓扑结构.物体需要变形的部分必须在控制点集合所形成的凸包内.2)将物体映射到参数空间中.对物体上的每个采样点p确定其Sibson邻域集合P n={p i|0≤i≤n},并计算其Sibson坐标u i.3)修改变形工具.即移动P中的一个或多个控制点.4)物体发生变形.保持每个采样点的Sibson 坐标u i不变,计算出物体上每个采样点的位移和新位置.Hu等人[39]利用约束优化的方法解决了基于B 样条体的自由变形中变形后的物体通过空间一点的问题,给出了控制顶点的显式调整公式,该方法更加直观有效.McDonnell和Qin基于径向基函数的思想,提出了一种新的点自由变形方法(PB FFD),该方法具有自动构造变形空间、可实现多分辨率变形、具有自动修改功能等特点[40].基于点的自由变形方法操作简单,变形效果更加直观,变形的自由度更多,交互性更强.其缺点是多功能性不高,应用方面有待加强.1.5 4类方法的分析与比较上述4类非精确自由变形方法,其思想基本一致,都是首先把待变形物体嵌入一个参数空间,然后对参数空间进行变形,从而实现物体的自由变形;其基本步骤都可概括为4步:创建变形工具;将物体映射到参数空间中;修改变形工具;保持物体的参数化不变,物体发生变形.这是此4类方法的共同点,也是自由变形方法的思想精髓.由于上述4类方法所分别采用的变形工具不同,因此在变形工具的创建、参数化方法、变形工具的修改等方面都有所差异.本文将从变形工具创建的难易程度,参数化效率,变形工具修改的交互性、多功能性,应用程度4个角度来对4种方法进行比较,如表1所示.在表1中,“★”越多表示此类方法的该项指标越高.T able1 Comparisons B etw een Four Kinds of N on2Accurate F ree Form Deform ation表1 四类非精确自由变形方法的比较Performance Volume Based FFD Surface Based FFD Curve Based FFD Point Based FFD Convenience of Deformation Tools Creation★★★★★★★★★★Efficiency of Parametrization★★★★★★★★★★Interactivity of Deformation Tools Modification★★★★★★★★★★Versatility★★★★★★★★★★Application in Popular Modeling Software★★★★★★★★★★2 精确自由变形从理论上讲,只有当变形作用在物体的每一点上时变形才是精确的.如果物体的采样点比较稀疏时,用以上方法就难以得到令人满意的变形结果.冯结青等人[41]在1998年首先提出了多边形物体的精确自由变形方法.他们借助于移位算子的概念,用函数复合的方法[42]解决了多边形物体的B样条自由变形的采样问题.其算法的主要步骤如下:1)首先将B样条参数体通过节点插入算法转化为分块连续的Bézier参数体;2)根据参数体的节点向量对多边形物体进行剖分和重新参数化,使剖分和重新三角化后物体的每一个三角片严格位于某个Bézier参数体之内;3)通过函数复合三角片与相应的Bézier参数体,得到三角片的精确变形结果,即一个次数为B 
样条各个方向的次数之和的三角Bézier曲面片.三角Bézier曲面片控制顶点的计算通过广义de Casteljau算法完成,该算法不仅数值稳定,而且无需用户交互,可由算法自动完成.但由于广义de Casteljau算法非常耗时,因此,上述变形算法很难达到实时的要求.Feng(冯结青)等人[43]基于Bernstein 多项式插值理论,提出了快速计算三角Bézier曲面片的控制顶点的方法,使得精确自由变形的计算量大大降低.由于目前流行的造型和绘制系统不直接支持三角Bézier曲面片,而支持张量积Bézier曲面片,并且许多图形硬件可以对张量积曲面片加速处理.因而需要将每个三角Bézier曲面片转化为3个非退化张量积Bézier曲面片或者Trimmed Bézier曲面片.其次,转化后的张量积曲面的次数一般较高,给计算、存储和交互带来很大开销.为此,Feng(冯结青)等人[44]提出了变形结果为Trimmed Bézier曲面片843计算机研究与发展 2010,47(2)的精确变形方法以解决上述问题.在该方法中,剖分后的物体是由多边形而不是大量三角片组成,共面的多边形被变形为Trimmed Bézier曲面,并且Trimmed Bézier曲面片的表示符合当前的工业标准STEP.但该方法在交互性和鲁棒性方面还存在一定问题.冯结青等人[45]在此方法基础上进行了如下改进:对空间多边形共面的判断方法进行了改进,以满足工程CAD中高数值精度的要求;提出了新的多边形物体剖分方法,使剖分算法的效率得到提高;利用翼边结构给出了一种新的空间多边形合并算法,减少了变形结果中Trimmed Bézier曲面片的数目.随后,冯结青等人[46]利用等距技术将B样条曲线或曲面所张成的变形空间近似表示为张量积B 样条参数体,结合多边形物体精确B样条自由变形方法,实现了参数曲线和曲面控制的多边形物体变形反走样.精确自由变形解决了自由变形的采样和精确性问题,具有比较广阔的应用前景,但目前还没有集成到现有造型系统.3 自由变形技术的应用几何造型和动画领域是自由变形技术的应用大户.目前,现今流行的大多数造型软件都集成了自由变形功能,比如3D Max,Maya及Softimage等.它们广泛应用于游戏角色的建模制作、CAD模型的修改与造型、虚拟现实场景的构建等领域.自由变形技术不仅广泛应用于几何造型和动画领域,而且在图像与视频处理、服装CAD、有限元分析、古人类学等领域都得到了一定程度的应用.在医学图像配准方面,Rueckert等人在1999年首先采用基于B样条体的自由变形技术来进行医学图像的配准,得到了令人满意的效果[47].随后Rohde等人[48]于2001年提出了基于自适应自由变形技术进行医学图像配准的方法.L u等人将自由变形技术与变分法相结合,提出了一种快速的图像配准方法,进一步提高了配准效率[49].为提高图像配准的灵活度和准确度,Wang和Jiang[50]提出了一种基于NU RBS的自由变形技术进行脑部MRI医学图像的非刚体配准方法.在图像和视频处理方面,Xie和Farin将自由变形的思想运用到图像变形,提高了图像变形的质量和用户交互性[51].Karantzalo s和Paragios提出了一种基于自由变形的光流场表示和计算技术[52].随后,他们又将自由变形技术与Level Set技术结合,用于视频的多帧分割及光流场的计算[53].Lee 等人利用自由变形和Snake模型提出了一种新的图像渐变方法,对已有方法进行了改进[54].在服装CAD方面,为适应服装CAD中对可展性的需求,Wang和Tang利用三角网格上高斯曲率的离散表示,提出了一种保持三角网格可展性的自由变形方法[55].Wang利用基于三角网格的自由变形方法和兼容网格技术进行服装及鞋帽CAD设计[56].4 结论及展望随着计算机硬件和软件技术的发展和社会各领域新的技术需求的涌现,计算机图形学领域呈现出新的发展方向,主要表现在动漫产业的迅猛发展为几何造型和动画技术提供了新的发展舞台,快速而高质量的几何建模研究成为热点;随着图形硬件的发展,实时绘制与建模任务相结合也成为重要的研究课题.因此,作为当今主流的造型软件及动画软件的重要模块,自由变形技术对图形学和数字娱乐产业的重要性不言而喻.各种自由变形方法的基本思想大致相同,都可以看成一个映射或函数的复合.但这些技术既有优点,又有缺点.目前还没有一种完美的自由变形方法可以把所有的优点和功能集中在一个统一的框架之内.虽然现在自由变形技术已比较成熟,但通过以上的介绍可以看出还有一系列的问题值得深入研究:1)选择更加合理有效的变形工具,使得该变形框架能有尽可能多的功能和优点;2)对于曲线Π曲面的自由变形方法,解决变形后物体过一指定约束点的问题,得到控制顶点的调整公式,这对于提高交互性具有重要意义;3)将各种自由变形方法统一到同一个应用框架中.把各种变形方法结合起来,使其能够兼顾各种方法的优点,使变形结果更加合理有效.参考文献[1]Barr A H.G lobal and local deformations of solid primitives[J].Computer Graphics,1984,18(3):21230[2]Li Lingfeng,Tan Jianrong,Chen Yuanpeng.A constraintcurve surface deformation model based on metaball[J].Journal of Computer Research and Development,2006,43(4):6882694(in Chinese)943徐 岗等:自由变形技术及其应用。
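The point-based deformations surveyed in Section 1.4 above interpolate user-specified displacements of a few control points over the whole space (for example, the scattered-data-interpolation and radial-basis-function formulations mentioned there). The sketch below is only a generic illustration of that idea; the Gaussian kernel, its width, the regularization term, and all names are assumptions of the example and do not reproduce any particular method cited above.

```python
import numpy as np

def rbf_deform(points, handles, targets, sigma=0.5):
    """Point-based space deformation: interpolate handle displacements with Gaussian RBFs.

    points  : (N, 3) object sample points to deform
    handles : (K, 3) user-chosen control points
    targets : (K, 3) positions the handles are dragged to
    """
    disp = targets - handles                               # prescribed displacements
    def phi(a, b):                                         # Gaussian kernel matrix
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    A = phi(handles, handles)                              # K x K interpolation system
    w = np.linalg.solve(A + 1e-9 * np.eye(len(handles)), disp)  # RBF weights per axis
    return points + phi(points, handles) @ w               # displaced sample points

# Usage: drag one of three handles upward and deform a point cloud
handles = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
targets = handles.copy(); targets[0, 2] += 0.4
cloud = np.random.rand(500, 3)
deformed = rbf_deform(cloud, handles, targets)
```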

Mean Shift: A Robust Approach Toward Feature Space Analysis

Mean Shift:A Robust ApproachToward Feature Space AnalysisDorin Comaniciu,Member,IEEE,and Peter Meer,Senior Member,IEEEAbstractÐA general nonparametric technique is proposed for the analysis of a complex multimodal feature space and to delineate arbitrarily shaped clusters in it.The basic computational module of the technique is an old pattern recognition procedure,the mean shift.We prove for discrete data the convergence of a recursive mean shift procedure to the nearest stationary point of the underlying density function and,thus,its utility in detectingthe modes of the density.The relation of the mean shift procedure to the Nadaraya-Watson estimator from kernel regression and the robust M-estimators of location is also established.Algorithms for two low-level vision tasks,discontinuity preserving smoothing and image segmentation,are described as applications.In these algorithms,the only user set parameter is the resolution of the analysis and either gray level or color images are accepted as input.Extensive experimental results illustrate their excellent performance.Index TermsÐMean shift,clustering,image segmentation,image smoothing,feature space,low-level vision.æ1I NTRODUCTIONL OW-LEVEL computer vision tasks are misleadingly diffi-cult.Incorrect results can be easily obtained since the employed techniques often rely upon the user correctly guessing the values for the tuning parameters.To improve performance,the execution of low-level tasks should be task driven,i.e.,supported by independent high-level informa-tion.This approach,however,requires that,first,the low-level stage provides a reliable enough representation of the input and that the feature extraction process be controlled only by very few tuning parameters corresponding to intuitive measures in the input domain.Feature space-based analysis of images is a paradigm which can achieve the above-stated goals.A feature space is a mapping of the input obtained through the processing of the data in small subsets at a time.For each subset,a parametric representation of the feature of interest is obtained and the result is mapped into a point in the multidimensional space of the parameter.After the entire input is processed,significant features correspond to denser regions in the feature space,i.e.,to clusters,and the goal of the analysis is the delineation of these clusters.The nature of the feature space is application dependent. The subsets employed in the mapping can range from individual pixels,as in the color space representation of an image,to a set of quasi-randomly chosen data points,as in the probabilistic Hough transform.Both the advantage and the disadvantage of the feature space paradigm arise from the global nature of the derived representation of the input. On one hand,all the evidence for the presence of a significant feature is pooled together,providing excellent tolerance to a noise level which may render local decisions unreliable.On the other hand,features with lesser support in the feature space may not be detected in spite of being salient for the task to be executed.This disadvantage, however,can be largely avoided by either augmenting the feature space with additional(spatial)parameters from the input domain or by robust postprocessing of the input domain guided by the results of the feature space analysis. Analysis of the feature space is application independent. 
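As a concrete illustration of the feature-space mapping described above, the snippet below turns each pixel of a color image into one feature point, optionally appending its spatial coordinates as suggested for augmenting the space with input-domain parameters. It is a sketch under assumed names; the paper itself works in the L*u*v* color space, while plain RGB values are used here to keep the example dependency-free.

```python
import numpy as np

def build_feature_space(image, include_spatial=False):
    """Map an (H, W, 3) color image into an (H*W, d) feature space.

    Each pixel becomes one feature point (its color vector); if include_spatial
    is True, the (row, col) coordinates are appended, giving a joint
    spatial-range representation.
    """
    h, w, _ = image.shape
    color = image.reshape(-1, 3).astype(float)             # range (color) part
    if not include_spatial:
        return color
    rows, cols = np.mgrid[0:h, 0:w]
    spatial = np.stack([rows.ravel(), cols.ravel()], axis=1).astype(float)
    return np.hstack([spatial, color])                     # (H*W, 5) joint domain

# Usage: a random stand-in with the same pixel count as the 400 x 276 image of Fig. 1a
img = np.random.randint(0, 256, size=(276, 400, 3), dtype=np.uint8)
features = build_feature_space(img, include_spatial=True)  # 110,400 feature points
```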
While there are a plethora of published clustering techni-ques,most of them are not adequate to analyze feature spaces derived from real data.Methods which rely upon a priori knowledge of the number of clusters present (including those which use optimization of a global criterion to find this number),as well as methods which implicitly assume the same shape(most often elliptical)for all the clusters in the space,are not able to handle the complexity of a real feature space.For a recent survey of such methods,see[29,Section8].In Fig.1,a typical example is shown.The color image in Fig.1a is mapped into the three-dimensional L*u*v*color space(to be discussed in Section4).There is a continuous transition between the clusters arising from the dominant colors and a decomposition of the space into elliptical tiles will introduce severe artifacts.Enforcing a Gaussian mixture model over such data is doomed to fail,e.g.,[49], and even the use of a robust approach with contaminated Gaussian densities[67]cannot be satisfactory for such complex cases.Note also that the mixture models require the number of clusters as a parameter,which raises its own challenges.For example,the method described in[45] proposes several different ways to determine this number. Arbitrarily structured feature spaces can be analyzed only by nonparametric methods since these methods do not have embedded assumptions.Numerous nonparametric clustering methods were described in the literature and they can be classified into two large classes:hierarchical clustering and density estimation.Hierarchical clustering techniques either aggregate or divide the data based on. aniciu is with the Imaging and Visualization Department,SiemensCorporate Research,755College Road East,P rinceton,NJ08540.E-mail:comanici@..P.Meer is with the Electrical and Computer Engineering Department,Rutgers University,94Brett Road,P iscataway,NJ08854-8058.E-mail:meer@.Manuscript received17Jan.2001;revised16July2001;accepted21Nov.2001.Recommended for acceptance by V.Solo.For information on obtaining reprints of this article,please send e-mail to:tpami@,and reference IEEECS Log Number113483.0162-8828/02/$10.00ß2002IEEEsome proximity measure.See[28,Section3.2]for a survey of hierarchical clustering methods.The hierarchical meth-ods tend to be computationally expensive and the definition of a meaningful stopping criterion for the fusion(or division)of the data is not straightforward.The rationale behind the density estimation-based non-parametric clustering approach is that the feature space can be regarded as the empirical probability density function (p.d.f.)of the represented parameter.Dense regions in the feature space thus correspond to local maxima of the p.d.f., that is,to the modes of the unknown density.Once the location of a mode is determined,the cluster associated with it is delineated based on the local structure of the feature space[25],[60],[63].Our approach to mode detection and clustering is based on the mean shift procedure,proposed in1975by Fukunaga and Hostetler[21]and largely forgotten until Cheng's paper[7] rekindled interest in it.In spite of its excellent qualities,the mean shift procedure does not seem to be known in statistical literature.While the book[54,Section6.2.2]discusses[21],the advantages of employing a mean shift type procedure in density estimation were only recently rediscovered[8].As will be proven in the sequel,a computational module based on the mean shift procedure is an extremely versatile tool for feature space analysis 
and can provide reliable solutions for many vision tasks.In Section2,the mean shift procedure is defined and its properties are analyzed.In Section3,the procedure is used as the computational module for robust feature space analysis and implementa-tional issues are discussed.In Section4,the feature space analysis technique is applied to two low-level vision tasks: discontinuity preserving filtering and image segmentation. Both algorithms can have as input either gray level or color images and the only parameter to be tuned by the user is the resolution of the analysis.The applicability of the mean shift procedure is not restricted to the presented examples. In Section5,other applications are mentioned and the procedure is put into a more general context.2T HE M EAN S HIFT P ROCEDUREKernel density estimation(known as the Parzen window technique in pattern recognition literature[17,Section4.3])is the most popular density estimation method.Given n data points x i,i 1;...;n in the d-dimensional space R d,the multivariate kernel density estimator with kernel K x and a symmetric positive definite dÂd bandwidth matrix H, computed in the point x is given by^f x 1nni 1K H xÀx i; 1 whereK H x j H jÀ1=2K HÀ1=2x : 2 The d-variate kernel K x is a bounded function with compact support satisfying[62,p.95]R dK x d x 1limk x k3Ik x k d K x 0R dx K x d x 0R dxx b K x d x c K I;3where c K is a constant.The multivariate kernel can be generated from a symmetric univariate kernel K1 x in two different waysK P xdi 1K1 x i K S x a k;d K1 k x k ; 4where K P x is obtained from the product of the univariate kernels and K S x from rotating K1 x in R d,i.e.,K S x is radially symmetric.The constant aÀ1k;dR dK1 k x k d x assures that K S x integrates to one,though this condition can be relaxed in our context.Either type of multivariate kernel obeys(3),but,for our purposes,the radially symmetric kernels are often more suitable.We are interested only in a special class of radially symmetric kernels satisfyingK x c k;d k k x k2 ; 5 in which case it suffices to define the function k x called the profile of the kernel,only for x!0.The normalization constant c k;d,which makes K x integrate to one,is assumed strictly positive.Using a fully parameterized H increases the complexity of the estimation[62,p.106]and,in practice,the bandwidth matrix H is chosen either as diagonal H diag h21;...;h2d ,Fig.1.Example of a feature space.(a)A400Â276color image.(b)Corresponding L*u*v*color space with110;400data points.or proportional to the identity matrix H h 2I .The clear advantage of the latter case is that only one bandwidth parameter h >0must be provided;however,as can be seen from (2),then the validity of an Euclidean metric for the feature space should be confirmed first.Employing only one bandwidth parameter,the kernel density estimator (1)becomes the well-known expression^f x 1nh d n i 1K x Àx i h: 6The quality of a kernel density estimator is measured bythe mean of the square error between the density and its estimate,integrated over the domain of definition.In practice,however,only an asymptotic approximation of this measure (denoted as AMISE)can be computed.Under the asympto-tics,the number of data points n 3I ,while the bandwidth h 30at a rate slower than n À1.For both types of multivariate kernels,the AMISE measure is minimized by the Epanechni-kov kernel [51,p.139],[62,p.104]having the profilek E x1Àx 0 x 10x >1;&7 which yields the radially symmetric kernelK E x 12c À1dd 2 1Àk x k 2 k x k 10otherwise ;&8where c d is the 
volume of the unit d -dimensional sphere.Note that the Epanechnikov profile is not differentiable at the boundary.The profilek N x exp À1xx !0 9yields the multivariate normal kernelK N x 2 Àd=2exp À12k x k 210for both types of composition (4).The normal kernel is oftensymmetrically truncated to have a kernel with finite support.While these two kernels will suffice for most applications we are interested in,all the results presented below are valid for arbitrary kernels within the conditions to be stated.Employing the profile notation,the density estimator (6)can be rewritten as^f h;K x c k;d nh d n i 1k x Àx i h 2 :11 The first step in the analysis of a feature space with theunderlying density f x is to find the modes of this density.The modes are located among the zeros of the gradient r f x 0and the mean shift procedure is an elegant way to locate these zeros without estimating the density.2.1Density Gradient EstimationThe density gradient estimator is obtained as the gradient of the density estimator by exploiting the linearity of (11)^r f h;K x r ^f h;K x 2c k;d nh d 2 n i 1x Àx i k H x Àx i h 2 : 12We define the functiong x Àk H x ;13assuming that the derivative of the kernel profile k exists forall x P 0;I ,except for a finite set of points.Now,using g x for profile,the kernel G x is defined asG x c g;d g k x k 2; 14 where c g;d is the corresponding normalization constant.Thekernel K x was called the shadow of G x in [7]in a slightly different context.Note that the Epanechnikov kernel is the shadow of the uniform kernel,i.e.,the d -dimensional unit sphere,while the normal kernel and its shadow have the same expression.Introducing g x into (12)yields,^rf h;K x 2c k;d nh d 2 n i 1x i Àx g x Àx i h 22c k;d nh d 2 ni 1g x Àx i h 2 45 n i 1x i g x Àx i h 2 n i 1g x Àx i h2 Àx P R Q S ; 15where ni 1g x Àx i h2is assumed to be a positive number.This condition is easy to satisfy for all the profiles met in practice.Both terms of the product in (15)have special significance.From (11),the first term is proportional to the density estimate at x computed with the kernel G^f h;G x c g;d nh d n i 1g x Àx i h2 : 16 The second term is the mean shift m h;G x n i 1x i g x Àx i h2 n i 1g x Àx i h2 Àx ; 17i.e.,the difference between the weighted mean,using the kernel G for weights,and x ,the center of the kernel (window).From (16)and (17),(15)becomes^rf h;K x ^f h;G x 2c k;d h 2c g;dm h;G x ; 18yieldingm h;G x 1h 2c ^rf h;K x ^fh;G x :19 The expression (19)shows that,at location x ,the mean shift vector computed with kernel G is proportional to the normalized density gradient estimate obtained with kernel K .The normalization is by the density estimate in x computed with the kernel G .The mean shift vector thus always points toward the direction of maximum increase in the density.COMANICIU AND MEER:MEAN SHIFT:A ROBUST APPROACH TOWARD FEATURE SPACE ANALYSIS 3This is a more general formulation of the property first remarked by Fukunaga and Hostetler [20,p.535],[21],and discussed in [7].The relation captured in (19)is intuitive,the local mean is shifted toward the region in which the majority of the points reside.Since the mean shift vector is aligned with the local gradient estimate,it can define a path leading to a stationary point of the estimated density.The modes of the density are such stationary points.The mean shift procedure ,obtained by successive.computation of the mean shift vector m h;G x ,.translation of the kernel (window)G x by m h;G x ,is guaranteed to converge at a nearby point where the 
estimate (11)has zero gradient,as will be shown in the next section.The presence of the normalization by the density estimate is a desirable feature.The regions of low-density values are of no interest for the feature space analysis and,in such regions,the mean shift steps are large.Similarly,near local maxima the steps are small and the analysis more refined.The mean shift procedure thus is an adaptive gradient ascent method.2.2Sufficient Condition for ConvergenceDenote by f y j g j 1;2...the sequence of successive locations of the kernel G ,where,from (17),y j 1 n i 1x i g x Àx i h2 n i 1g x Àx i h j 1;2;... 20 is the weighted mean at y j computed with kernel G and y 1is the center of the initial position of the kernel.The corresponding sequence of density estimates computedwith kernel K ,f ^f h;K j g j 1;2...,is given by^fh;K j ^f h;K y j j 1;2...:21As stated by the following theorem,a kernel K that obeys some mild conditions suffices for the convergence of thesequences f y j g j 1;2...and f ^f h;K j g j 1;2....Theorem 1.If the kernel K has a convex and monotonically decreasing profile,the sequences y j ÈÉj 1;2...andf ^f h;K jg j 1;2...converge and f ^f h;K j g j 1;2...is monotoni-cally increasing.The proof is given in the Appendix.The theoremgeneralizes the result derived differently in [13],where K was the Epanechnikov kernel and G the uniform kernel.The theorem remains valid when each data point x i is associated with a nonnegative weight w i .An example of nonconver-gence when the kernel K is not convex is shown in [10,p.16].The convergence property of the mean shift was also discussed in [7,Section iv].(Note,however,that almost all the discussion there is concerned with the ªblurringºprocess in which the input is recursively modified after each mean shift step.)The convergence of the procedure as defined in this paperwas attributed in [7]to thegradient ascentnature of (19).However,as shown in [4,Section 1.2],moving in the direction of the local gradient guarantees convergence only for infinitesimal steps.The step size of a gradient-based algo-rithm is crucial for the overall performance.If the step size is too large,thealgorithm will diverge,while ifthestep sizeis too small,the rate of convergence may be very slow.A number ofcostly procedures have been developed for step size selection [4,p.24].The guaranteed convergence (as shown by Theorem 1)is due to the adaptive magnitude of the mean shift vector,which also eliminates the need for additional procedures to chose the adequate step sizes.This is a major advantage over the traditional gradient-based methods.For discrete data,the number of steps to convergence depends on the employed kernel.When G is the uniform kernel,convergence is achieved in a finite number of steps since the number of locations generating distinct mean values is finite.However,when the kernel G imposes a weighting on the data points (according to the distance from its center),the mean shift procedure is infinitely convergent.The practical way to stop the iterations is to set a lower bound for the magnitude of the mean shift vector.2.3Mean Shift-Based Mode DetectionLet us denote by y c and ^f c h;K^f h;K y c the convergence points of the sequences f y j g j 1;2...and f ^f h;K j g j 1;2...,respectively.The implications of Theorem 1are the following.First,the magnitude of the mean shift vector converges to zero.Indeed,from (17)and (20)the j th mean shift vector ism h;G y j y j 1Ày j22and,at the limit,m h;G y c y c Ày c 0.In other words,the gradient of the density 
estimate (11)computed at y c is zeror ^fh;K y c 0; 23due to (19).Hence,y c is a stationary point of ^fh;K .Second,since f ^f h;K j g j 1;2...is monotonically increasing,the meanshift iterations satisfy the conditions required by the Capture Theorem [4,p.45],which states that the trajectories of such gradient methods are attracted by local maxima if they are unique (within a small neighborhood)stationary points.That is,once y j gets sufficiently close to a mode of ^fh;K ,it converges to it.The set of all locations that converge to the same mode defines the basin of attraction of that mode.The theoretical observations from above suggest a practical algorithm for mode detection:.Run the mean shift procedure to find the stationarypoints of ^fh;K ,.Prune these points by retaining only the localmaxima.The local maxima points are defined,according to the Capture Theorem,as unique stationary points within some small open sphere.This property can be tested by perturbing each stationary point by a random vector of small norm and letting the mean shift procedure converge again.Should the point of convergence be unchanged (up to a tolerance),the point is a local maximum.2.4Smooth Trajectory PropertyThe mean shift procedure employing a normal kernel has an interesting property.Its path toward the mode follows a smooth trajectory,the angle between two consecutive mean shift vectors being always less than 90degrees.Using the normal kernel (10),the j th mean shift vector is given by4IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE,VOL.24,NO.5,MAY 2002m h;N y j y j 1Ày j ni 1x i exp xÀx i h2ni 1exp xÀx i h2Ày j: 24The following theorem holds true for all j 1;2;..., according to the proof given in the Appendix.Theorem2.The cosine of the angle between two consecutive mean shift vectors is strictly positive when a normal kernel is employed,i.e.,m h;N y j b m h;N y j 1k m h;N y j kk m h;N y j 1 k>0: 25As a consequence of Theorem2,the normal kernel appears to be the optimal one for the mean shift procedure. The smooth trajectory of the mean shift procedure is in contrast with the standard steepest ascent method[4,p.21] (local gradient evaluation followed by line maximization) whose convergence rate on surfaces with deep narrow valleys is slow due to its zigzagging trajectory.In practice,the convergence of the mean shift procedure based on the normal kernel requires large number of steps, as was discussed at the end of Section2.2.Therefore,in most of our experiments,we have used the uniform kernel, for which the convergence is finite,and not the normal kernel.Note,however,that the quality of the results almost always improves when the normal kernel is employed. 2.5Relation to Kernel RegressionImportant insight can be gained when(19)is obtained approaching the problem differently.Considering the univariate case suffices for this purpose.Kernel regression is a nonparametric method to estimate complex trends from noisy data.See[62,chapter5]for an introduction to the topic,[24]for a more in-depth treatment. 
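A minimal sketch of the procedure just described: iterate the weighted mean of (20) until the mean shift vector is negligible, which converges to a nearby mode of the estimated density. The Gaussian profile, the tolerance, and the names below are choices of the example, not code from the paper, and the perturbation test used to prune stationary points to local maxima is omitted.

```python
import numpy as np

def mean_shift_vector(x, data, h):
    """Mean shift vector m_{h,G}(x) of (17) with a Gaussian profile."""
    d2 = ((data - x) ** 2).sum(axis=1) / h ** 2
    g = np.exp(-0.5 * d2)                          # g(||(x - x_i)/h||^2), normal profile
    weighted_mean = (g[:, None] * data).sum(axis=0) / g.sum()
    return weighted_mean - x                       # points toward increasing density

def seek_mode(x, data, h, tol=1e-4, max_iter=500):
    """Iterate y_{j+1} = y_j + m_{h,G}(y_j) until the shift is negligible."""
    y = np.asarray(x, dtype=float)
    for _ in range(max_iter):
        m = mean_shift_vector(y, data, h)
        y = y + m
        if np.linalg.norm(m) < tol:
            break
    return y                                       # approximate mode (stationary point)

# Usage: two Gaussian blobs; a starting point converges to the nearby mode
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 0.3, (200, 2)), rng.normal(3, 0.3, (200, 2))])
mode = seek_mode([0.5, 0.5], data, h=0.5)          # ends near (0, 0)
```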
Let n measured data points be X i;Z i and assume that the values X i are the outcomes of a random variable x with probability density function f x ,x i X i;i 1;...;n, while the relation between Z i and X i isZ i m X i i i 1;...;n; 26 where m x is called the regression function and i is an independently distributed,zero-mean error,E i 0.A natural way to estimate the regression function is by locally fitting a degree p polynomial to the data.For a window centered at x,the polynomial coefficients then can be obtained by weighted least squares,the weights being computed from a symmetric function g x .The size of the window is controlled by the parameter h,g h x hÀ1g x=h .The simplest case is that of fitting a constant to the data in the window,i.e.,p 0.It can be shown,[24,Section3.1],[62,Section5.2],that the estimated constant is the value of the Nadaraya-Watson estimator,^m x;h ni 1g h xÀX i Z ini 1g h xÀX i; 27introduced in the statistical literature35years ago.The asymptotic conditional bias of the estimator has the expression[24,p.109],[62,p.125],E ^m x;h Àm x j X1;...;X n%h2m HH x f x 2m H x f H x2f x2 g ;28 where 2 gu2g u du.Defining m x x reduces the Nadaraya-Watson estimator to(20)(in the univariate case), while(28)becomesE ^xÀx j X1;...;X n %h2f H xf x 2 g; 29which is similar to(19).The mean shift procedure thus exploits to its advantage the inherent bias of the zero-order kernel regression.The connection to the kernel regression literature opens many interesting issues,however,most of these are more ofa theoretical than practical importance.2.6Relation to Location M-EstimatorsThe M-estimators are a family of robust techniques which can handle data in the presence of severe contaminations,i.e., outliers.See[26],[32]for introductory surveys.In our context only,the problem of location estimation has to be considered. 
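As a point of comparison for Section 2.5, the following is a plain Nadaraya-Watson (local constant) kernel regression estimate with a Gaussian weight function. It is an illustrative sketch under assumed names and bandwidth, not code from the paper.

```python
import numpy as np

def nadaraya_watson(x, X, Z, h):
    """Local-constant kernel regression estimate of m(x) at the query points x.

    X, Z : observed predictors and responses; h : bandwidth of the weight function.
    """
    x = np.atleast_1d(x)
    w = np.exp(-0.5 * ((x[:, None] - X[None, :]) / h) ** 2)   # g_h(x - X_i)
    return (w * Z[None, :]).sum(axis=1) / w.sum(axis=1)       # weighted average of Z_i

# Usage: recover a smooth trend from noisy samples of sin(x)
rng = np.random.default_rng(1)
X = rng.uniform(0, 2 * np.pi, 300)
Z = np.sin(X) + rng.normal(0, 0.2, X.shape)
grid = np.linspace(0, 2 * np.pi, 50)
trend = nadaraya_watson(grid, X, Z, h=0.3)
```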
Given the data x i;i 1;...;n;and the scale h,will define^ ,the location estimator as^ argminJ argminni 1Àx ih223; 30where, u is a symmetric,nonnegative valued function, with a unique minimum at the origin and nondecreasing for u!0.The estimator is obtained from the normal equationsr J ^ 2hÀ2 ^ Àx i w^ Àxih2HdIe 0; 31 wherew ud udu:Therefore,the iterations to find the location M-estimate are based on^ni 1x i w^ Àx i h2ni 1w^ Àx i h; 32which is identical to(20)when w u g u .Taking into account(13),the minimization(30)becomes^ argmaxni 1kÀx i223; 33 which can also be interpreted as^ argmax^fh;K j x1;...;x n : 34 That is,the location estimator is the mode of the density estimated with the kernel K from the available data.Note that the convexity of the k x profile,the sufficient condition for the convergence of the mean shift procedure(Section2.2)is inCOMANICIU AND MEER:MEAN SHIFT:A ROBUST APPROACH TOWARD FEATURE SPACE ANALYSIS5accordance with the requirements to be satisfied by the objective function u .The relation between location M-estimators and kernel density estimation is not well-investigated in the statistical literature,only[9]discusses it in the context of an edge preserving smoothing technique.3R OBUST A NALYSIS OF F EATURE S PACES Multimodality and arbitrarily shaped clusters are the defin-ing properties of a real feature space.The quality of the mean shift procedure to move toward the mode(peak)of the hill on which it was initiated makes it the ideal computational module to analyze such spaces.To detect all the significant modes,the basic algorithm given in Section2.3should be run multiple times(evolving in principle in parallel)with initializations that cover the entire feature space.Before the analysis is performed,two important(and somewhat related)issues should be addressed:the metric of the feature space and the shape of the kernel.The mapping from the input domain into a feature space often associates a non-Euclidean metric to the space.The problem of color representation will be discussed in Section4,but the employed parameterization has to be carefully examined even in a simple case like the Hough space of lines,e.g., [48],[61].The presence of a Mahalanobis metric can be accommo-dated by an adequate choice of the bandwidth matrix(2).In practice,however,it is preferable to have assured that the metric of the feature space is Euclidean and,thus,the bandwidth matrix is controlled by a single parameter, H h2I.To be able to use the same kernel size for all the mean shift procedures in the feature space,the necessary condition is that local density variations near a significant mode are not as large as the entire support of a significant mode somewhere else.The starting points of the mean shift procedures should be chosen to have the entire feature space(except the very sparse regions)tessellated by the kernels(windows). 
Regular tessellations are not required.As the windows evolve toward the modes,almost all the data points are visited and,thus,all the information captured in the feature space is exploited.Note that the convergence to a given mode may yield slightly different locations due to the threshold that terminates the iterations.Similarly,on flat plateaus,the value of the gradient is close to zero and the mean shift procedure could stop.These artifacts are easy to eliminate through postproces-sing.Mode candidates at a distance less than the kernel bandwidth are fused,the one corresponding to the highest density being chosen.The global structure of the feature space can be confirmed by measuring the significance of the valleys defined along a cut through the density in the direction determined by two modes.The delineation of the clusters is a natural outcome of the mode seeking process.After convergence,the basin of attraction of a mode,i.e.,the data points visited by all the mean shift procedures converging to that mode,automati-cally delineates a cluster of arbitrary shape.Close to the boundaries,where a data point could have been visited by several diverging procedures,majority logic can be em-ployed.It is important to notice that,in computer vision, most often we are not dealing with an abstract clustering problem.The input domain almost always provides an independent test for the validity of local decisions in the feature space.That is,while it is less likely that one can recover from a severe clustering error,allocation of a few uncertain data points can be reliably supported by input domain information.The multimodal feature space analysis technique was discussed in detail in[12].It was shown experimentally, that for a synthetic,bimodal normal distribution,the technique achieves a classification error similar to the optimal Bayesian classifier.The behavior of this feature space analysis technique is illustrated in Fig.2.A two-dimensional data set of110;400points(Fig.2a)is decom-posed into seven clusters represented with different colors in Fig.2b.A number of159mean shift procedures with uniform kernel were employed.Their trajectories are shown in Fig.2c,overlapped over the density estimate computed with the Epanechnikov kernel.The pruning of the mode candidates produced seven peaks.Observe that some of the trajectories are prematurely stopped by local plateaus. 
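The complete feature-space analysis sketched above (run a mean shift procedure from many starting locations, fuse mode candidates closer than the kernel bandwidth, and label each point by the mode of its basin of attraction) can be put together as follows. This is a simplified illustration with a Gaussian kernel and the data points themselves as starting locations; the fusion threshold and names are assumptions of the example, and the plateau handling and majority logic mentioned above are omitted.

```python
import numpy as np

def mean_shift_cluster(data, h, tol=1e-3, max_iter=300):
    """Cluster feature points by mode seeking, then fuse modes closer than h."""
    def shift_to_mode(y):
        for _ in range(max_iter):
            g = np.exp(-0.5 * ((data - y) ** 2).sum(axis=1) / h ** 2)
            new_y = (g[:, None] * data).sum(axis=0) / g.sum()
            if np.linalg.norm(new_y - y) < tol:
                return new_y
            y = new_y
        return y

    converged = np.array([shift_to_mode(p.copy()) for p in data])
    modes, labels = [], np.empty(len(data), dtype=int)
    for i, c in enumerate(converged):
        for k, m in enumerate(modes):              # fuse candidates within one bandwidth
            if np.linalg.norm(c - m) < h:
                labels[i] = k
                break
        else:
            labels[i] = len(modes)
            modes.append(c)
    return np.array(modes), labels

# Usage: three blobs in a two-dimensional feature space
rng = np.random.default_rng(2)
data = np.vstack([rng.normal(c, 0.25, (150, 2)) for c in ((0, 0), (2, 0), (1, 2))])
modes, labels = mean_shift_cluster(data, h=0.6)    # expect three fused modes
```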
3.1Bandwidth SelectionThe influence of the bandwidth parameter h was assessed empirically in[12]through a simple image segmentation task.In a more rigorous approach,however,four different techniques for bandwidth selection can be considered..The first one has a statistical motivation.The optimal bandwidth associated with the kernel density esti-mator(6)is defined as the bandwidth that achieves thebest compromise between the bias and variance of theestimator,over all x P R d,i.e.,minimizes AMISE.Inthe multivariate case,the resulting bandwidth for-mula[54,p.85],[62,p.99]is of little practical use,sinceit depends on the Laplacian of the unknown densitybeing estimated,and its performance is not wellunderstood[62,p.108].For the univariate case,areliable method for bandwidth selection is the plug-inrule[53],which was proven to be superior to least-squares cross-validation and biased cross-validation[42],[55,p.46].Its only assumption is the smoothnessof the underlying density..The second bandwidth selection technique is related to the stability of the decomposition.The bandwidthis taken as the center of the largest operating rangeover which the same number of clusters are obtainedfor the given data[20,p.541]..For the third technique,the best bandwidth max-imizes an objective function that expresses the qualityof the decomposition(i.e.,the index of clustervalidity).The objective function typically comparesthe inter-versus intra-cluster variability[30],[28]orevaluates the isolation and connectivity of thedelineated clusters[43]..Finally,since in most of the cases the decomposition is task dependent,top-down information providedby the user or by an upper-level module can be usedto control the kernel bandwidth.We present in[15],a detailed analysis of the bandwidth selection problem.To solve the difficulties generated by the narrow peaks and the tails of the underlying density,two locally adaptive solutions are proposed.One is nonpara-metric,being based on a newly defined adaptive mean shift6IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE,VOL.24,NO.5,MAY2002。

02 - English Proficiency Self-Test for Professional Degree Graduate Students

UNIT 2  Model Test 2

Part I  Vocabulary

1. Although Asian countries are generally more cautious in social customs than Western countries, there have been several notable examples of women leaders in both China and India.
A) conservative   B) confidential   C) comprehensive   D) consistent
Key word: cautious (careful, prudent). Translation of the sentence: Although Asian countries are generally more conservative in social customs than Western countries, China and India have still had several notable women leaders.

2. The law prohibits occupancy by more than 250 persons.
A) abandons   B) defines   C) declares   D) forbids
Key word: prohibit (to forbid). Translation: The law forbids occupancy by more than 250 persons.

3. One condition of this job is that you must be present at work at weekends.
A) available to   B) capable of   C) acceptable to   D) accessible to
Key word: present at (to be on site, in attendance). Translation: One condition of this job is that you must be at work at weekends.

4. The availability of food and, probably to a lesser extent, the degree of danger from predators are two basic influences on the relations between individuals within a species.
A) distribution   B) diversity   C) imitation   D) utility
Translation: The availability of food and, probably to a lesser extent, the degree of danger from predators are two basic factors influencing relations between individuals within a species.

Graduate Academic English Translation 5

Definition
An academic essay, also called a research article, differs from an ordinary essay in that it consists of a review of previous studies on a particular topic together with one's own research. In most cases it is a documented report based on the writer's first-hand acquisition, synthesis and interpretation of information, data and findings. A typical academic essay includes five parts, namely abstract, introduction, body, conclusion and references.

Two types of research paper
There are two major kinds of academic paper: the primary research paper and the secondary research paper. A primary research paper is the study of a subject through first-hand investigation, presenting your own original ideas and information. In most cases you need to conduct a survey or an experiment to obtain new findings; hence it is sometimes called "survey- or experiment-based research". A secondary research paper, by contrast, involves gathering and analyzing the findings of other people's research. To illustrate your argument, you borrow and use evidence and findings available on the topic in the library or on the Internet; hence it is sometimes called "library- or Internet-based research".

Financial Flexibility and Corporate Cash Policy (July 2013)

Financial Flexibility and Corporate Cash PolicyTao Chen, Jarrad Harford and Chen Lin*July 2013Abstract:Using variations in local real estate prices as exogenous shocks to corporate financing capacity, we investigate the causal effects of financial flexibility on cash policies of US firms. Building on this natural experiment, we find strong evidence that increases in real estate values lead to smaller corporate cash reserves, declines in the marginal value of cash holdings, and lower cash flow sensitivities of cash. The representative US firm holds $0.037 less of cash for each $1 of collateral, quantifying the sensitivity of cash holdings to collateral value. We further find that the decrease in cash holdings is more pronounced in firms with greater investment opportunities, financial constraints, better corporate governance, and lower local real estate price volatility.JEL classification: G32; G31; G34; R30Keywords: Cash policy; Debt capacity; Collateral; Real estate value; Cash holding; Marginal value of cash; Cash flow sensitivity of cash* Chen is from The Chinese University of Hong Kong. Lin is from the University of Hong Kong. Harford is from the University of Washington. We thank Harald Hau, Gustavo Manso, and Micah Officer for helpful comments and discussion. Lin gratefully acknowledges the financial support from the Chinese University of Hong Kong and the Research Grants Council of Hong Kong (Project No. T31/717/12R).1.IntroductionFinancial flexibility refers to a firm’s ability to access financing at a low cost and respond to unexpected changes in the firm’s cash flows or investment opportunities in a timely manner (Denis, 2011). A survey of CFOs in Graham and Harvey (2001) suggests that financial flexibility is the most important determining factor of corporate capital structure decisions, but flexibility has not been studied as a first-order determinant of corporate financial policies until very recently.1 Consequently, as pointed out in Denis (2011), an interesting and unresolved research question remains: “To what extent are flexibility considerations first-order determinants of financial policies?” In this paper, we directly test the effects of financial flexibility on corporate cash holdings by exploiting exogenous shocks to firms’ financing capacity.As the amount of cash U.S. firms hold on their balance sheets has grown, so has interest in how they manage liquidity and access to capital. While the literature documents substantial support for the precautionary savings hypothesis put forth by Keynes (1936), we still know relatively little about how firms tradeoff debt capacity and cash reserves, and specifically the degree to which increases in the supply of credit substitute for internal slack. Answers to such questions are important not only for a better understanding of cash and liquidity policy in general, but also for assessing the impact of the credit channel on real activity.Reflected in cash holding theory, the concept of financial flexibility matters in the presence of financing frictions, under which firms have precautionary incentives to stockpile cash. Specifically, the precautionary savings hypothesis posits that firms hold cash as a buffer to shield from adverse cash flow shocks due to costly external financing. Opler, et al. (1999), Harford (1999), Bates, Kahle and Stulz (2009), and Duchin (2010), among others provide1 DeAngelo and DeAngelo (2007) discuss preservation of financial flexibility as an explanation for observed capital structure choices. 
Gamba and Triantis (2008) provide a theoretical analysis of the effect of financial flexibility on firm value. Denis and McKeon (2011) lend further support that in the form of unused debt capacity, financial flexibility plays an important role in capital structure.1evidence of precautionary savings’ role in cash policy. Cash studies typically control for leverage and sometimes cash substitutes such as net working capital. Almeida, et al. (2004) and Faulkender and Wang (2006) have shown that cash policy is more important when firms are financially constrained. Nevertheless, to our knowledge, none of the extant studies have directly tested the role of external financing capacity in shaping corporate cash policies.2 In this paper, we attempt to fill this void by providing a comprehensive understanding of the causal effects of financial flexibility on cash policies.The striking paucity of the research into the effect of debt capacity on cash policy is likely to be partially driven by a lack of readily available measures of financing capacity. Moreover, the fact that financing capacity is endogenous has also hindered such attempts. For instance, firms’ cash balance and liquidity policy might exert feedback effects on firms’ financing capacity. Unobservable firm heterogeneity correlated with both debt capacity and corporate liquidity policies could also bias the estimation results.In this paper, we make use of a novel experiment developed by Chaney,Sraer and Thesmar (2012). Specifically, we use changes in the value of a firm’s collateral value caused by variations in local real estate prices (at state level or Metropolitan Statistical Areas (MSA) level) as an exogenous change to the financing capacity of a firm, increasing its financial flexibility. Existing literature points out that pledging collateral such as real estate assets can alleviate agency costs caused by moral hazard and adverse selection, enhance firms’ financing capacity, and allow firms to borrow more in the presence of incomplete contracting (Barro, 1976; Stiglitz and Weiss, 1981; Hart and Moore, 1994; Jimenez et al., 2006). Firms with more tangible assets have higher recovery rate in financial distress, and banks are ex ante more likely to provide looser contract2 Most of the existent research in this area provides at most indirect evidences, by primarily focusing on the relationship between cash flow risk and cash holdings, and papers use industry cash flow volatility to proxy for cash flow risk (e.g., Opler et al., 1999; Bates et al., 2009), and find this measure is positively associated with cash holdings. Han and Qiu (2007) use firm-level measure of cash low volatility and find consistent results. More recently, Duchin (2010) finds that investment opportunity risk increases cash holdings.2terms to firms with more pledgeable assets. Tangible assets thus can alleviate banks’ concern of asset substitution and debt recovery risk, which increases firms’ financial flexibility. As a consequence, it reduces firms’ incentive to save cash. Consistent with theory, recent empirical studies show that firms with greater collateral value are able to raise external funding at lower costs (e.g. Berger et al., 2011; Lin et al., 2011) and to invest more (Chaney et al., 2012).3 If financial flexibility exerts first-order effects on a firm’s financial policy, we would expect that an exogenous shock increasing real estate values translates into a lower precautionary motive to stockpile cash. 
Likewise, following a large deterioration in collateral value, firms would confront more stringent external financing, and consequently hold more cash. A key advantage of our identifying strategy is that it not only provides variation in exogenous shocks to debt capacity, but also solves the omitted variables concerns by allowing multiple shocks to different firms at different times at different locations (states or MSAs).Primarily, we find that the representative US public firm holds $0.037 less of cash for each additional $1 of collateral over the 1993-2007 period. As Chaney et al. (2012) document that an average firm raises its investment by $0.06 and issues new debt of $0.03 for a $1 increase in collateral value, our results fit perfectly with their findings on the gap between the investment and new debt in the perspective that firms finance approximately half of their new investment using internal accumulated cash. In terms of economic magnitude, a one standard deviation increase in collateral value results in a decrease of about 8.1% of the mean value of cash ratio. To further refine our understanding of the effects of debt capacity on cash holding decisions, we look at heterogeneous firm characteristics that might shape the relationship between debt capacity and cash reserves. Precautionary motives predict that the effects would be more pronounced in firms with more investment opportunities and generally greater financial 3 Berger et al. (2011) use a rough measure indicating whether collateral was pledged at loan origination, and Lin et al. (2011) use tangibility to proxy for collateral value. One pertinent concern is that tangibility itself is a noisy measure of collateral value, while another concern is that collateral requirement and loan spread might be jointly determined by unobservable factors, which results in endogeneity problem.3constraint. Moreover, as agency theory argues that cash is the most vulnerable asset to agency conflicts (Berle and Means, 1933; Jensen and Meckling, 1976; Myers and Rajan, 1998) and Jensen (1986) argues that debt constrains managers, managers of poorly governed firms are unlikely to view debt capacity and cash as substitutes. Additionally, firms located in the areas with high historical real estate fluctuations might be subjective to more uncertainties in the future value of the real estate asset they hold, and thus might not be willing to reduce cash holdings as firms with low historical real estate volatilities. In further subsample tests, we indeed find that the decrease in cash holdings following increased collateral value is more pronounced in firms with greater investment opportunities, more financial constraint, better corporate governance, and lower historical local real estate volatility.Our findings of the strong impact of financing capacity on cash holdings largely rely on two underlying assumptions: 1) higher collateral value reduces the marginal benefit of holding cash, and 2) firms consequently save less cash out of cash flow and display lower cash flow sensitivity of cash. We can test these assumptions by directly test the prediction for the marginal value of cash holdings using the Faulkender and Wang (2006) approach, and the prediction for the cash flow sensitivity of cash using Almeida et al. (2004)’s specification. We find that following exogenous shocks to collateral value, the marginal value of cash decreases. 
Quantitatively, a shocked firm’s value of a marginal dollar of cash is approximately 25% lower than that of an otherwise similar firm. In further exploration, we find that for firms with prior financial constraint, shareholders value cash less after a positive exogenous shock to the value of the firm’s real estate. In such firms, increasing collateral value provides more benefits to the firms as managers can use collateral to easily access external financing.We next analyze how debt capacity affects the cash flow sensitivity of cash. We find that firms show reduced cash flow sensitivity of cash following an exogenous shock to their debt capacity. Compared to an unaffected firm, the median shocked firm has a 5% lower of cash flow4sensitivity of cash. We further find that the effect on cash flow sensitivity of cash is larger in firms with greater investment opportunities. In addition, all of our empirical results are robust to controlling for the potential sources of endogeneity, as in Chaney et al. (2012) as well.Our paper contributes to and is related to several strands of literature. Foremost, our paper contributes to the cash holding literature by showing how financing capacity causally affects cash holdings, the value of cash, and the cash flow sensitivity of cash. The evidence is consistent with the precautionary motive of cash holdings. In this regard, our paper also contributes to the broader literature of liquidity management (Campello et al., 2010, 2011) by documenting how firms manage liquid resources in response to financing capacity.Moreover, our results also highlight the importance of corporate governance in cash policies. We find that there is a non-trivial gap between the degrees of the decline in the marginal value of cash holdings, and that of the drop in the actual cash balance, following increased collateral value. Through our subsample analysis, we find that the decrease in cash holdings is more pronounced in firms with greater investment opportunities, prior financial constraint, and better corporate governance. This reveals that firms with entrenched managers are reluctant to substitute cash and debt capacity. Further, exogenous changes in credit provision have an immediate impact on firms with strong investment opportunities and firms with some financial constraint.The remainder of the paper proceeds as follows. Section 2 presents our construction of the sample and data. Sections 3 to 5 investigate the effects of collateral shocks on cash holdings, the marginal value of cash holdings, and the cash flow sensitivity of cash, respectively. In each section, we firstly introduce the estimation models and descriptive statistics, and then report our empirical findings. Section 6 concludes.52.Sample and DataThe sample construction and the empirical approach in the first part of the paper closely follow Chaney et al. (2012), who identify local variation in real estate prices as an exogenous and meaningful shock to firms’ debt capacity. Their study focuses exclusively on the credit channel’s effect on real investment. We start from the universal sample of Compustat firms that were active in 1993 with non-missing information of total assets. We require that the firm was active in 1993 as this was the last year when data on accumulated depreciation on buildings is still available in Compustat. We retain firms whose headquarters are in the US, and keep only firms that exist for at least three consecutive years in the sample. 
We further exclude firms operating in the finance, insurance, real estate, construction, and mining industries. We also restrict the sample to firms not involved in major acquisitions. We further require that firms have the information needed to calculate the market value of real estate assets, as well as non-missing information for the major variables in the cash equation. Eventually we obtain a final sample of 26,242 firm-year observations associated with 2,790 unique firms.

Our key variable of interest is the market value of real estate assets. First, we define real estate assets as the sum of three major categories of property, plant, and equipment (PPE): buildings, land and improvements, and construction in progress. These values are recorded at historical cost rather than marked to market, so we need to recover their market value. Next, we estimate the average age of those assets using the procedure from Chaney et al. (2012). Specifically, we calculate the ratio of the accumulated depreciation of buildings (dpacb in Compustat) to the historic cost of buildings (fatb in Compustat) and multiply it by the assumed mean depreciable life of 40 years (Nelson et al., 2000) to get the average age of the real estate assets. From this we obtain the year of purchase of the real estate assets. Finally, for each firm's real estate assets (fatp+fatb+fatc in Compustat), we use a real estate price index to estimate the market value of these assets in 1993 and then calculate the market value for each year in the sample period (1993 to 2007). We use both state-level and MSA-level real estate price indices, obtained from the Office of Federal Housing Enterprise Oversight (OFHEO). We match the state-level real estate price index with our accounting data using the state identifier from Compustat. For the MSA-level real estate price index, we utilize a mapping table between zip code and MSA code maintained by the US Department of Labor's Office of Workers' Compensation Programs (OWCP) to match with our accounting data by zip code from Compustat.

To be more specific, we obtain the real estate value in 1993 as the book value (fatp+fatb+fatc in Compustat) multiplied by the cumulative price increase from the acquisition year to 1993. For purposes of illustration, consider Johnson & Johnson, with accumulated depreciation of buildings of 808 million USD in 1993 and a historic cost of buildings of 2,389 million USD in 1993. This gives a ratio of 0.3382 (dpacb/fatb in Compustat), and multiplying 0.3382 by the assumed mean depreciable life of 40 years yields an average age of the real estate assets of 13 years. Consequently, we infer the year of purchase of the real estate assets to be 1980. We then take the cumulative price increase in the state and MSA real estate price indices from 1980 to 1993 and multiply it by the historical cost of real estate assets (fatp+fatb+fatc in Compustat, 3,329 million USD) to get the market value of real estate assets in 1993 for Johnson & Johnson. We further adjust for inflation, divide by total assets, and obtain our final measure, RE Value. Johnson & Johnson has an RE Value of 63% in 1993, using state-level real estate prices.
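To make the construction above concrete, the following short Python sketch reproduces the steps described for the Johnson & Johnson example. It is only an illustration: the function name, the price index levels, and the total assets figure are hypothetical placeholders, not values taken from the paper or from Compustat.

DEPRECIABLE_LIFE = 40  # assumed mean depreciable life in years (Nelson et al., 2000)

def re_value_1993(dpacb, fatb, re_book_value, price_index, total_assets):
    # Average age of real estate = (accumulated depreciation / historic cost of buildings) * 40.
    avg_age = int(dpacb / fatb * DEPRECIABLE_LIFE)
    purchase_year = 1993 - avg_age
    # Mark the historical cost up to 1993 prices using the cumulative index increase.
    cumulative_increase = price_index[1993] / price_index[purchase_year]
    market_value_1993 = re_book_value * cumulative_increase
    # The paper additionally adjusts for inflation before scaling; that step is omitted here.
    return market_value_1993 / total_assets

# Johnson & Johnson illustration; the index levels and total assets below are made-up placeholders.
state_index = {1980: 100.0, 1993: 150.0}
print(re_value_1993(dpacb=808, fatb=2389, re_book_value=3329,
                    price_index=state_index, total_assets=12000))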
For subsequent years, we estimate the real estate value as the book value in 1993 multiplied by the cumulative price increase from 1993 to that year.

One notable issue is that we do not consider the value of any new real estate purchases or sales subsequent to 1993. This practice has both advantages and drawbacks. The advantage is that it avoids any endogeneity between real estate purchases and investment opportunities, while the disadvantage is that it introduces noise into our measure. As illustrated in Chaney et al. (2012), firms are unlikely to sell real estate assets to realize capital gains when confronted with an increase in their real estate value, which alleviates some of the concerns stemming from measurement error. Finally, we standardize our measure of the market value of real estate assets by the firm's total assets. This standardization allows dollar-for-dollar economic interpretations of the effect of collateral value on cash policy. For a representative firm over 1993 to 2007, the market value of real estate represents 26% of the firm's total assets. [Footnote 4: Our measures differ in magnitude from those in Chaney et al. (2012) because we scale real estate value by total book assets for easier interpretation in the cash regressions, whereas Chaney et al. (2012) standardize their real estate value variables by PPE.] Real estate is therefore a sizable proportion of a firm's assets on the balance sheet. More summary statistics are discussed in Section 3.2.

3. Collateral Shocks and Cash Holdings

We begin our analysis by examining the effects of collateral shocks on cash holdings. In this section, we first describe our estimation strategy and summary statistics, and then report the empirical results. Further, we provide an instrumental variable analysis to cope with any lingering endogeneity concerns and present additional robustness tests. This initial part of our analysis generally follows Chaney et al.'s (2012) analysis of investment following collateral shocks. Finally, we conduct subsample analysis to examine the roles of investment opportunities, financial constraint, and corporate governance in shaping the relationship between debt capacity and cash holdings.

3.1. Estimation Model and Variables

In order to compute the sensitivity of cash reserves to collateral value, we augment the standard cash equation from the literature (e.g., Opler et al., 1999; Bates et al., 2009) with a variable capturing the value of real estate owned by the firm (RE value). Specifically, for firm i, with headquarters in location j (state or MSA), in fiscal year t, we estimate the following model:

Cash_{i,j,t} = α + β1 × RE value_{i,j,t} + β2 × RE price index_{j,t} + δ′X + ε_{i,j,t},   (1)

where the dependent variable Cash refers to the ratio of cash and short-term investments to total assets, or to net assets, following Opler et al. (1999) and Bates et al. (2009). We also test the robustness of the results using the log of cash to net assets as an alternative measure (Bates et al., 2009). RE value is the market value of real estate assets in fiscal year t scaled by total assets. For regressions using cash ratios scaled by net assets, RE value is scaled by net assets for ease of coefficient interpretation. RE price index controls for the state- or MSA-level real estate price in location j in fiscal year t. The vector X includes a set of firm-specific control variables following the cash literature.
These control variables are: 1) log firm size, measured as the log of inflation-adjusted book assets; 2) market-to-book ratio, the market value of assets over the book value of assets; 3) leverage, total debt scaled by total assets; 4) investment, capital expenditures divided by total assets; 5) a dividend-paying dummy, equal to one if the firm pays dividends and zero otherwise; 6) cash flow to total assets; 7) NWC, non-cash net working capital to total assets; 8) acquisition intensity, acquisitions divided by total assets; 9) R&D/sales; 10) industry cash flow risk, defined as the standard deviation of industry cash flow to total assets over the previous ten years; and 11) two-digit SIC industry and year fixed effects. Detailed definitions are provided in Appendix A.

We include NWC as an independent variable because net working capital can substitute for cash, and therefore we expect firms with higher net working capital to hold less cash. The market-to-book ratio and R&D/sales proxy for growth opportunities. For firms with larger growth opportunities, underinvestment is more costly, and these firms are expected to accumulate more cash. Firms with more capital expenditures are predicted to hoard less cash, so Capx/assets is predicted to be negatively correlated with the level of cash holdings. Similarly, acquisition intensity also proxies for the investment level of a firm and is expected to exert a negative effect on cash holdings (Bates et al., 2009). Additionally, acquisition intensity helps to control for the agency concern that managers of firms with excess cash holdings may conduct acquisitions for their private benefit (Jensen, 1986; Harford, 1999). Leverage is predicted to be negatively associated with cash holdings, as interest payments decrease the ability of firms to hoard cash. Including leverage in the model also helps to control for the firm's refinancing risk, as Harford et al. (2013) find that firms increase cash holdings to mitigate refinancing risk. Firms paying dividends are expected to have better access to debt financing and thus to hold less cash. Industry cash flow risk captures cash flow uncertainty, and one would predict firms with greater cash flow risk to hold more precautionary cash (Opler et al., 1999; Bates et al., 2009).

Our primary focus is the coefficient estimate on RE value, β1. A negative and statistically significant β1 in regression (1) would be evidence of a causal effect of financing capacity on cash holdings, as it would suggest that firms reduce cash balances after an exogenous appreciation of their real estate value. This would be consistent with the precautionary saving hypothesis, since an analogous impact is expected on the downside of the cycle when adverse shocks occur to the firm's real estate assets. Since RE value is measured at the firm level and both the cash ratio and RE value use the same divisor, a clear advantage of this specification is that β1 captures how a firm's cash holdings respond to a $1 increment in the value of real estate owned by the firm.

3.2. Baseline Regression Results

After requiring non-missing information on cash holdings and the major independent variables in equation (1), we obtain a final sample consisting of 26,242 firm-year observations associated with 2,790 unique firms from 1993 to 2007.
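As an illustration of how the baseline specification in equation (1) might be estimated, the sketch below uses Python with pandas and statsmodels, with two-digit SIC industry and year fixed effects and standard errors clustered at the state-year level. The column names, the input file, and the variable construction are hypothetical placeholders; this is not the authors' code.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-year panel assembled as described in the text.
panel = pd.read_csv("firm_year_panel.csv")
panel["state_year"] = panel["state"].astype(str) + "_" + panel["fyear"].astype(str)

formula = (
    "cash_to_assets ~ re_value + re_price_index + log_assets + mtb + leverage "
    "+ capx_to_assets + dividend_payer + cash_flow + nwc + acq_intensity "
    "+ rd_to_sales + cf_risk + C(sic2) + C(fyear)"
)

# Cluster standard errors at the state-year level, as in the baseline specification.
result = smf.ols(formula, data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["state_year"]}
)
print(result.params["re_value"])  # beta_1: sensitivity of cash holdings to collateral value

Under this reading, an estimate on re_value of roughly -0.037 would correspond to the headline result reported above.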
Panel A of Table 1 reports the corresponding summary statistics.

[Table 1 about here]

From Panel A of Table 1, we find that the ratio of cash to total assets has a mean of 0.18 and a standard deviation of 0.22, comparable to the literature (Opler et al., 1999; Bates et al., 2009). The ratio of cash to net assets is higher since cash and marketable assets have been subtracted from the denominator. Our major independent variable of interest, RE value, has two versions: one uses the state-level real estate price index and the other uses the MSA-level real estate price index to compute the market value of the firm's real estate assets. Both measures are scaled by total book assets. The two versions yield similar values: the former (using the state real estate price index) has a mean of 0.25 with a standard deviation of 0.40, while the latter has a mean of 0.24 and a standard deviation of 0.39.

Table 2 shows the regression results. The dependent variables are Cash/Assets in columns (1) to (3) and Cash/Net Assets in columns (4) to (6). For each dependent variable, we first report regressions of the cash ratio on a set of control variables and our major independent variable of interest, RE value, calculated using the state real estate price index, and then using the MSA real estate price index. All regressions control for year and two-digit SIC industry fixed effects, whose coefficient estimates are suppressed. Heteroskedasticity-consistent standard errors clustered at the state-year or MSA-year level are reported. [Footnote 5: We follow Chaney et al. (2012); this clustering structure is conservative given that the major explanatory variable of interest, RE value, is measured at the firm level (see Bertrand et al., 2004). We check the sensitivity by clustering at the firm level, and all the regressions reported in the paper are robust to this alternative clustering strategy.] Across the four models, we consistently find that RE value has a statistically significant and negative coefficient (β1) at the 1% level, which is consistent with managers trading off debt capacity and cash reserves in managing their access to capital. More importantly, we can characterize the degree of substitution. Specifically, based on the estimates in column (1), which uses the state real estate price index to compute RE value, the representative firm reduces cash reserves by $0.037 for each additional $1 of real estate actually owned by the firm, holding other factors constant. The effect is not only statistically significant but also economically large. A one standard deviation increase in collateral value results in a decrease of 0.015 (=0.037×0.396) in the ratio of cash to total assets, which is about 8.1% of the mean and 6.8% of one standard deviation of the cash ratio.

[Table 2 about here]

In column (2), we replicate the estimation performed in column (1) using the MSA real estate price index instead of the state index. As argued in Chaney et al. (2012), using MSA-level real estate prices has both advantages and caveats. The advantage is that it makes milder our identifying assumption that cash holdings are uncorrelated with local real estate prices, and it also offers a more accurate source of variation in real estate value (Chaney et al., 2012). The downside is that, since we now assume that all real estate assets owned by a firm are located in its headquarters city, the measure is potentially subject to more measurement error. As shown in column (2), the coefficient estimate β1 remains stable, at 0.038, and statistically significant at the 1% level.

In columns (4) and (5), we change the dependent variable to the ratio of cash and short-term investments to net assets. The coefficient estimates for RE value are negative and statistically significant at the 1% level, and the economic magnitudes are qualitatively similar to those in columns (1) and (2).

The control variables also generate interesting findings, consistent with prior results in the cash literature. Both the market-to-book ratio and R&D/sales have positive coefficients, significant at the 1% level across all the models, supporting the hypothesis that firms with larger growth opportunities are more inclined to accumulate a large cash balance to accommodate future investment. The coefficient estimates for Capx/assets and acquisition intensity are both negative and significant at the 1% level for all the model specifications, echoing the results in Bates et al. (2009) that firms with a higher level of investment are predicted to hoard less cash. Leverage has a negative and significant coefficient, in support of Harford et al. (2013), who find that firms with a higher level of refinancing risk are more likely to accumulate a large cash balance. Firms paying dividends and firms with a larger size are expected to have easier access to external financing, which is why we observe negative and significant coefficients on firm size and the dividend-paying dummy. We also find that NWC has a negative coefficient estimate, statistically significant at the 99% confidence level across all the models, consistent with the substituting role of net working capital for cash reserves. Finally, the high adjusted R-squared of 0.49 provides further support for the reliability of our results, as about half of the variation in the cash ratio is explained by our model.

3.3. Endogeneity and Instrumental Variable Estimation

How to Read a Paper


How to Read a Paper

S. Keshav
David R. Cheriton School of Computer Science, University of Waterloo
Waterloo, ON, Canada
keshav@uwaterloo.ca

ABSTRACT
Researchers spend a great deal of time reading research papers. However, this skill is rarely taught, leading to much wasted effort. This article outlines a practical and efficient three-pass method for reading research papers. I also describe how to use this method to do a literature survey.

Categories and Subject Descriptors: A.1 [Introductory and Survey]
General Terms: Documentation.
Keywords: Paper, Reading, Hints.

1. INTRODUCTION
Researchers must read papers for several reasons: to review them for a conference or a class, to keep current in their field, or for a literature survey of a new field. A typical researcher will likely spend hundreds of hours every year reading papers.
Learning to efficiently read a paper is a critical but rarely taught skill. Beginning graduate students, therefore, must learn on their own using trial and error. Students waste much effort in the process and are frequently driven to frustration.
For many years I have used a simple approach to efficiently read papers. This paper describes the 'three-pass' approach and its use in doing a literature survey.

2. THE THREE-PASS APPROACH
The key idea is that you should read the paper in up to three passes, instead of starting at the beginning and plowing your way to the end. Each pass accomplishes specific goals and builds upon the previous pass: The first pass gives you a general idea about the paper. The second pass lets you grasp the paper's content, but not its details. The third pass helps you understand the paper in depth.

2.1 The first pass
The first pass is a quick scan to get a bird's-eye view of the paper. You can also decide whether you need to do any more passes. This pass should take about five to ten minutes and consists of the following steps:
1. Carefully read the title, abstract, and introduction
2. Read the section and sub-section headings, but ignore everything else
3. Read the conclusions
4. Glance over the references, mentally ticking off the ones you've already read
At the end of the first pass, you should be able to answer the five Cs:
1. Category: What type of paper is this? A measurement paper? An analysis of an existing system? A description of a research prototype?
2. Context: Which other papers is it related to? Which theoretical bases were used to analyze the problem?
3. Correctness: Do the assumptions appear to be valid?
4. Contributions: What are the paper's main contributions?
5. Clarity: Is the paper well written?
Using this information, you may choose not to read further. This could be because the paper doesn't interest you, or you don't know enough about the area to understand the paper, or that the authors make invalid assumptions. The first pass is adequate for papers that aren't in your research area, but may someday prove relevant.
Incidentally, when you write a paper, you can expect most reviewers (and readers) to make only one pass over it. Take care to choose coherent section and sub-section titles and to write concise and comprehensive abstracts. If a reviewer cannot understand the gist after one pass, the paper will likely be rejected; if a reader cannot understand the highlights of the paper after five minutes, the paper will likely never be read.

2.2 The second pass
In the second pass, read the paper with greater care, but ignore details such as proofs. It helps to jot down the key points, or to make comments in the margins, as you read.
1. Look carefully at the figures, diagrams and other illustrations in the paper. Pay special attention
to graphs. Are the axes properly labeled? Are results shown with error bars, so that conclusions are statistically significant? Common mistakes like these will separate rushed, shoddy work from the truly excellent.
2. Remember to mark relevant unread references for further reading (this is a good way to learn more about the background of the paper).
The second pass should take up to an hour. After this pass, you should be able to grasp the content of the paper. You should be able to summarize the main thrust of the paper, with supporting evidence, to someone else. This level of detail is appropriate for a paper in which you are interested, but does not lie in your research speciality.
Sometimes you won't understand a paper even at the end of the second pass. This may be because the subject matter is new to you, with unfamiliar terminology and acronyms. Or the authors may use a proof or experimental technique that you don't understand, so that the bulk of the paper is incomprehensible. The paper may be poorly written with unsubstantiated assertions and numerous forward references. Or it could just be that it's late at night and you're tired. You can now choose to: (a) set the paper aside, hoping you don't need to understand the material to be successful in your career, (b) return to the paper later, perhaps after reading background material or (c) persevere and go on to the third pass.

2.3 The third pass
To fully understand a paper, particularly if you are a reviewer, requires a third pass. The key to the third pass is to attempt to virtually re-implement the paper: that is, making the same assumptions as the authors, re-create the work. By comparing this re-creation with the actual paper, you can easily identify not only a paper's innovations, but also its hidden failings and assumptions.
This pass requires great attention to detail. You should identify and challenge every assumption in every statement. Moreover, you should think about how you yourself would present a particular idea. This comparison of the actual with the virtual lends a sharp insight into the proof and presentation techniques in the paper and you can very likely add this to your repertoire of tools. During this pass, you should also jot down ideas for future work.
This pass can take about four or five hours for beginners, and about an hour for an experienced reader. At the end of this pass, you should be able to reconstruct the entire structure of the paper from memory, as well as be able to identify its strong and weak points. In particular, you should be able to pinpoint implicit assumptions, missing citations to relevant work, and potential issues with experimental or analytical techniques.

3. DOING A LITERATURE SURVEY
Paper reading skills are put to the test in doing a literature survey. This will require you to read tens of papers, perhaps in an unfamiliar field. What papers should you read? Here is how you can use the three-pass approach to help.
First, use an academic search engine such as Google Scholar or CiteSeer and some well-chosen keywords to find three to five recent papers in the area. Do one pass on each paper to get a sense of the work, then read their related work sections. You will find a thumbnail summary of the recent work, and perhaps, if you are lucky, a pointer to a recent survey paper. If you can find such a survey, you are done. Read the survey, congratulating yourself on your good luck.
Otherwise, in the second step, find shared citations and repeated author names in the bibliography. These are the key papers and researchers in that area. Download the key papers and set them aside. Then go to the websites of the key researchers and see where they've published recently. That will help you identify the top conferences in that field because the best researchers usually publish in the top conferences.
The third step is to go to the website for these top conferences and look through their recent proceedings. A quick scan will usually identify recent high-quality related work. These papers, along with the ones you set aside earlier, constitute the first version of your survey. Make two passes through these papers. If they all cite a key paper that you did not find earlier, obtain and read it, iterating as necessary.

4. EXPERIENCE
I've used this approach for the last 15 years to read conference proceedings, write reviews, do background research, and to quickly review papers before a discussion. This disciplined approach prevents me from drowning in the details before getting a bird's-eye-view. It allows me to estimate the amount of time required to review a set of papers. Moreover, I can adjust the depth of paper evaluation depending on my needs and how much time I have.

5. RELATED WORK
If you are reading a paper to do a review, you should also read Timothy Roscoe's paper on "Writing reviews for systems conferences" [1]. If you're planning to write a technical paper, you should refer both to Henning Schulzrinne's comprehensive web site [2] and George Whitesides's excellent overview of the process [3].

6. A REQUEST
I would like to make this a living document, updating it as I receive comments. Please take a moment to email me any comments or suggestions for improvement. You can also add comments at CCRo, the online edition of CCR [4].

7. ACKNOWLEDGMENTS
The first version of this document was drafted by my students: Hossein Falaki, Earl Oliver, and Sumair Ur Rahman. My thanks to them. I also benefited from Christophe Diot's perceptive comments and Nicole Keshav's eagle-eyed copy-editing.
This work was supported by grants from the National Science and Engineering Council of Canada, the Canada Research Chair Program, Nortel Networks, Microsoft, Intel Corporation, and Sprint Corporation.

8. REFERENCES
[1] T. Roscoe, "Writing Reviews for Systems Conferences," http://people.inf.ethz.ch/troscoe/pubs/review-writing.pdf.
[2] H. Schulzrinne, "Writing Technical Articles," /hgs/etc/writing-style.html.
[3] G. M. Whitesides, "Whitesides' Group: Writing a Paper," http://www.che.iitm.ac.in/misc/dd/writepaper.pdf.
[4] ACM SIGCOMM Computer Communication Review Online, /ccr/drupal/.

English Essay: New Research Methods


Title: Innovative Research Methods

In the realm of academic inquiry, the pursuit of novel research methods is essential for advancing knowledge and addressing complex questions. In this essay, we will explore several innovative research methodologies that have emerged in recent years, highlighting their significance and potential impact on various fields of study.1. Mixed-Methods Research: This approach involves the integration of qualitative and quantitative research methods within a single study. By combining the strengths of both approaches, researchers can gain a more comprehensive understanding of their research topic. For example, in social sciences, mixed-methods research allows for the exploration of both the depth of individual experiences through qualitative interviews and the breadth of trends through quantitative surveys or analysis.2. Big Data Analytics: With the exponential growth of digital data, big data analytics has become increasingly important across disciplines. This approach involves the analysis of large and complex datasets to uncover patterns, trends, and correlations that may not be apparent through traditional methods. In fields such as healthcare, finance, and marketing, big data analytics enables researchers to extract valuable insights for decision-making and prediction.3. Machine Learning and Artificial Intelligence: Machine learning and artificial intelligence (AI) techniques are revolutionizing research methodologies by enabling computers to learn from data and make predictions or decisions without explicit programming. In areas like natural language processing, image recognition, and predictive modeling, machine learning algorithms are being employed to automate tasks, identify patterns, and generate new hypotheses.4. Participatory Action Research: Participatory action research (PAR) is an approach that involves collaborationbetween researchers and community members to address issues of mutual concern. Through a cyclical process of planning, action, reflection, and evaluation, PAR aims to empower participants and effect positive social change. This methodology is particularly prevalent in fields such as education, community development, and environmental sustainability.5. Virtual Reality and Immersive Technologies: Virtual reality (VR) and immersive technologies offer new possibilities for conducting research in controlled yet realistic environments. Researchers can create virtual simulations to study human behavior, test hypotheses, and explore scenarios that may be difficult or unethical to replicate in the real world. In psychology, medicine, and engineering, VR technology is being utilized to investigate phenomena and develop innovative solutions.6. Longitudinal Studies: Longitudinal studies involve the collection of data from the same subjects over an extended period to observe changes or developments over time. By tracking individuals or cohorts over months, years,or even decades, researchers can gain insights into the effects of various factors on human behavior, health outcomes, and social dynamics. Longitudinal studies are invaluable for uncovering causal relationships and informing policy decisions.7. Crowdsourcing and Citizen Science: Crowdsourcing and citizen science initiatives engage the public in the research process, leveraging the collective intelligence and resources of a diverse group of participants. 
Whether through crowdsourced data collection, collaborative problem-solving, or distributed computing, these approaches enable researchers to tackle large-scale projects and harness the expertise of non-experts. From astronomy to ecology, crowdsourcing has democratized research and facilitated innovative discoveries.In conclusion, the adoption of innovative research methods is essential for advancing knowledge, solving complex problems, and addressing pressing societal challenges. By embracing approaches such as mixed-methods research, big data analytics, machine learning,participatory action research, virtual reality, longitudinal studies, crowdsourcing, and citizen science, researchers can expand the frontiers of their respective fields and make meaningful contributions to society. As technology continues to evolve and interdisciplinary collaborations flourish, the possibilities for innovation in research methodologies are limitless.。

Jiangsu New Textbook, Senior High English Book 1 Unit 4: Reading Text


Teen faints after skipping meals

STONECHESTER - A teenage girl fainted yesterday at Stonechester High School after skipping meals. Jennifer Jones, 15, told friends in her class that she was feeling unwell. She then passed out in her morning PE lesson and was rushed to hospital.
Jennifer was found to have dangerously low blood sugar levels and was treated immediately. Her worried parents told the doctor that their daughter missed breakfast that day and hardly touched her dinner the night before. Fortunately, she is now out of danger. Her doctor says that she will make a full recovery in a day or two.
Jennifer's classmates hope to see her back at school soon. They say that she has struggled with eating problems for a long time. "Jennifer thought that skipping meals would be a simple way to reach her target weight," her friend Laura Williams told our reporter. "She has not eaten breakfast for the last few months. She told me she had trouble concentrating in class. I warned her that skipping meals was unhealthy, but she wouldn't listen."
Jennifer's case is a reminder of the dangers of the unhealthy weight-loss habits that have become common among teenagers of both sexes. In a society where being thin is often seen as being beautiful, teenagers sometimes turn to extreme methods to slim down quickly. According to a recent survey of senior high school students' lifestyles, almost one fifth of teenagers regularly skip meals, one in ten over-exercise and four per cent even take weight-loss medicine. Health experts are concerned about these figures. They are increasing their efforts to educate teenagers about the side effects of losing weight too quickly. They have also warned them against using such extreme methods.
"These so-called 'quick-fix methods' prove to be harmful to teenagers. It is normal for teenagers to be slightly overweight and there is no reason why they should be worried. However, for those who are dangerously overweight, it is very important that they try to lose weight properly," said an expert.
She pointed out that it is important to have a healthy balanced diet since teenagers are still growing and their bodies need a lot of nutrition to function well. If they do not take in enough food, they may feel weak and get ill easily. She added, "What's more, they should keep regular hours and get plenty of exercise to stay energetic and fit. We strongly encourage all teenagers to follow these lifestyle tips, because living well is the safest and most effective way to get into shape."

IJCAI Review Comments Template


ijcai审稿意见模板Title: A Survey of Recent Advances in Reinforcement Learning.Abstract:This article provides a comprehensive overview of recent advances in reinforcement learning (RL) research. RL has gained significant attention in the field of artificial intelligence and machine learning due to its ability to learn optimal decision-making strategies in complex environments. The article covers a wide range of topics, including deep reinforcement learning, multi-agent reinforcement learning, transfer learning, and applications of RL in various domains. The survey also discusses the challenges and future directions of RL research, highlighting the potential impact of RL on real-world applications.Keywords: reinforcement learning, deep reinforcementlearning, multi-agent reinforcement learning, transfer learning, artificial intelligence.Introduction:Reinforcement learning (RL) has emerged as a powerful paradigm for training intelligent agents to make sequential decisions in uncertain and dynamic environments. Over the past decade, there has been a surge of interest in RL research, leading to significant advancements in algorithms, architectures, and applications. This survey aims toprovide a comprehensive overview of recent developments in RL, highlighting the key contributions and emerging trendsin the field.Deep Reinforcement Learning:One of the most significant advancements in RL is the integration of deep learning techniques, leading to the emergence of deep reinforcement learning (DRL). DRL has demonstrated remarkable success in solving complex tasks, such as playing video games, robotic control, andautonomous driving. This section discusses the keyprinciples of DRL, including deep Q-networks (DQN), policy gradient methods, and actor-critic architectures, and highlights the state-of-the-art algorithms and applications.Multi-Agent Reinforcement Learning:Another important area of research in RL is multi-agent reinforcement learning (MARL), which focuses on training multiple agents to collaborate or compete in a shared environment. MARL has applications in multi-agent systems, social dilemmas, and decentralized control. This section provides an overview of the challenges and opportunities in MARL, including coordination, communication, and learningin complex environments with multiple interacting agents.Transfer Learning in Reinforcement Learning:Transfer learning has gained attention in RL researchas a means to leverage knowledge from one task to improve learning in a related task. This section discusses the recent advancements in transfer learning for RL, includingdomain adaptation, model reuse, and meta-learning. The survey also highlights the potential benefits of transfer learning in accelerating learning and improving sample efficiency in RL.Applications of Reinforcement Learning:The survey also covers the diverse applications of RLin various domains, such as robotics, healthcare, finance, and recommendation systems. The article discusses the real-world challenges and opportunities in applying RL to solve complex problems, highlighting the potential impact of RLon addressing societal and industrial challenges.Challenges and Future Directions:Finally, the survey outlines the key challenges andopen research directions in RL, including sample efficiency, generalization, exploration-exploitation trade-offs, and ethical considerations. 
The article also discusses the potential future advancements in RL, such as incorporating human feedback, learning from demonstrations, andaddressing safety and robustness in RL systems.Conclusion:In conclusion, this survey provides a comprehensive overview of recent advances in reinforcement learning, covering key topics such as deep reinforcement learning, multi-agent reinforcement learning, transfer learning, and applications of RL in various domains. The article highlights the challenges and future directions of RL research, emphasizing the potential impact of RL on real-world applications and societal challenges.。

Common Sentence Patterns for Writing English Scientific Papers


Abstract:1、Introduction (Motivation/ purpose/goal/aim/objective/importance) (present tense)1)The aim of this paper is to assess the accuracy of steady-state scale-up for small-scale sedimentary structures.2)An important objective of these well-constrained experiments is to evaluate current theoretical models of PGE solubility.2、Scope/perspective (past/present tense)1)This study examined fecal coliform reduction in domestic wastewater after receiving treatment through a constructed open water wetland, a slow sand filter, and a portable ultraviolet disinfection unit.2)The scope of this study was to compare the behavior and character of dissolved organic carbon (DOC) during soil-aquifer treatment at different field sites in Arizona and California.3、method/procedure/approach (past tense)1)In this study, a lab-scale aerobic biofilter-biotrickling tower system packed with particulate plastic media (1.5 cm, hexagon in shape) was used to treat the salty artificial wastewater polluted by mineral oil (diesel fuel) mixed with chemical dispersant (CPC OD-10) and inorganic nutrients.2)It involved recycling a small portion of the primary sludge and then activating it to enhance its biosorptive and flocculent properties through 30-min aeration. The aerated sludge was then brought into contact with raw sewage to induce rapid biosorption of soluble COD and flocculation of colloidal organic before the mixture was settled ina regular primary clarifier.4、Results/findings/products (past tense)e.g. Three granitoid magmatic events were identified at 815 Ma, 700-730Ma, and 620-625Ma, which were marked by emplacement of the calc-alkaline Ujjukka granite and granodiorite, the anatectic Suqii-Wagga two-mica granite and the Guttin K-feldspar megacrystic granite, and the anorogenic Ganjii monzogranite, respectively.5、Discussion/Conclusions (present/past tense)e.g. The results showed that under traction free conditions subsidence was a three-dimensional problem andone-dimensional subsidence models tended to focus excessive amounts of vertical deformation near the pumped well. The magnitude of vertical deformation in one-dimensional subsidence models was exacerbated as the grid size became smaller in the vicinity of the pumping well. This was due to increased calculated drawdown in the vicinity of the well for more finely-spaced grids.6、Implications/applications (present tense)e.g. Our experimental results are directly applicable to porphyry copper systems because in such systemstemperature, PH, salinity and oxygen fugacity are similar to the experimental parameters. Application of our experimental results indicate that a typical porphyry systems can transport at least 40 tons of Pd if sources of platinum group elements (PGE) are available and the solubility-controlling phase is metallic Pd.7、limitations and suggestions for further studies (past/present tense for limitations, present/future tense for suggestions)e.g. 
1) This study suggests that in order to achieve the highest treatment level of secondary unchlorinatedwastewater, a combination of aquatic ponds and subsurface flow wetlands may be necessary.2) In general, source water quality, drinking water and wastewater treatment should be viewed as onesystem in indirect potable re-use projects.3) However, because constructed wetlands, besides the water quality improvement function, perform amultitude of other functions such as biodiversity, habitat, climatic, hydrological and public use functions, methodologies need to be developed to evaluate these functions and to weigh them in relation to the water quality issue.Structure of an Abstract:1、The first sentence in the Abstract:♦The purpose of this research is…♦The primary goal of this research is…♦The intention of this paper is to survey…♦The overall objective of this study is…♦The chief aim of the present work is to investigate…♦The purpos e of this paper is to explain why…♦The primary object of this fundamental research is to reveal…♦The main objective of our investigation has been to obtain…♦The experiment made here is aimed at …♦The emphasis of this study lies in…♦The problem we have outl ined here deals largely with the study of…♦This study examined…. (past tense)2、The next sentences:♦The method used in the study was…♦The technique applied is referred to as…♦The procedure was as follows…♦The approach adopted extensively is called…♦The experiment consisted of three steps:…♦Included in the experiment were…♦The test e quipment which was used consisted of…♦Several sets of experiments were carried out to test the validity of…♦This formula is verified by…3、The ending part of an abstract:♦The results of the experiment indicated that …♦The studies showed that…♦The studies w e carried out demonstrated that…♦The research suggested that…♦The investigation carried out by… revealed that…♦All the preliminary results threw light on …♦As a result of our experiments, we concluded that…♦From the experiments, we realized that…♦This frui tful work provided explanation to …♦The research work brought about a discovery of…♦These findings of the research led the author to the conclusion that…4、The last sentences:♦…should be further studied in relation to ...♦…should do…♦Further research into…is necessary.♦Further study should be made in relation to…♦Further investigation into…is needed.Introduction:Step 1: the nature and scope of the problem investigated. 
interest or importance, many other investigators active in the area.♦Recently, there has been a spate of interest in how to…♦In recent years, applied researchers have become increasingly interested in…♦Recently, there has been wide interest in…♦The well-known…phenomena…have been favorite topics for analysis both in…♦Knowledge of …has a great importance for…♦The study of …has become an important aspect of…♦The effect of …has been studied extensively in recent years.♦Many investigators have re cently turned to…♦The relationship between …has been studied by many authors.♦ A central issue in…is the validity of…Step 2: making a topic generalization.statements about knowledge or practice:♦The pathology of… is well known.♦There is now much evid ence to support the hypothesis that…♦The …properties of …are still not completely understood.♦ A standard procedure for assessing has been…♦Education core courses are often criticized for…phenomena:♦…is a common finding in patients with…♦An elaborate syste m of…is found in the …♦English is rich in related words exhibiting “stress shifts”.♦There are many situations where examination scripts are marked and then re-marked by another examiner.Step3:Discuss the relevant primary research literature (with citations) and summarizing our current understanding of the problem you are investigating.Examples of integral citation forms(直接引用方式):♦Halliday (2008) showed that the moon is …..♦Halliday’s theory (2008) claims that …...♦Halliday’s (2008) theory of lunar composition has general support.♦According to Halliday (2008), the moon is…...Examples of indirect citation forms(间接引用方式):♦Pr evious research has shown that the moon is …..(Brie, 1988).♦It has been shown that the moon is made of cheese (Brie, 1988).♦It has been established that the moon is made of cheese (Brie, 1988).♦The moon is probably made of cheese (Brie, 1988).♦The moon may be made of cheese (cf. Rock, 1989).Step4:Indicate a gap.♦No research has been done on …♦Little effort has been spent on the study of…♦(Very) few researchers have investigated…♦The nature of…is overlooked.♦Researchers have failed to notice that…♦The result is misleading/questionable/inconclusive/limited.♦The result of…has several limitations.♦The research can rarely cover…Step5: State the purpose♦The purpose of this study was to....♦We investigated three possible mechanisms to explain the ... (1) …….(2) …….♦This paper reports on the results obtained…♦The aim of the present paper is to give…♦The main purpose of the experiment reported here was to…♦This study was designed to evaluate…♦This paper aims to report the interaction of…Step6: rationale and approachFor example: State briefly how you approached the problem (e.g., you studied oxidative respiration pathways in isolated mitochondria of cauliflower). This will usually follow your statement of purpose in the last paragraph of the Introduction.Step7: the arrangement of the writing at the end of the Introduction.●This paper is divided into five major sections: Section one opens with…. Section two presents…Sectionthree deals with… Section Four reveals…Section five concludes the paper.●We have organ ized the rest of this paper in the following way…●This paper is structured as follows…●The remainder of this paper is divided into five sections. 
Section II describes…Materials and Methods1.… (method) was used to do….2.The study used/made use of/employe d the method that….3.描述方法过程的连接词:and/and then/then/next/the next step/before that/after that.Conclusion or Discussion:•Based on the above discussion, the following conclusion can be made that …•From .., it can be concluded that…•In conclusion, the result shows that…•These findings of the research led to the conclusion that..•To sum up, the research/this study reveals that…•In summary, the result of the experiment demonstrates that…•As a result of our experiments we concluded that…The results:•The data of the experiment are briefly summarized as follows.•Figure 4 shows the results of the study.•Table 6 presents the data obtained in the experiment.•This table summarizes the data collected in this study.•The findings of this study are summarized/listed in Table 7.•The outcome of the experiment is reported in the following table.ABSTRACTProcessing AIT logging information can get more accurate layer real resistivity and mud invasion depth and section. AIT results help to differentiate layer infiltration ability and fluid type, the calculation of mud invasion depth can instruct the level of oil reservoir to be polluted by well drilling directly and offer reference basis for the following work. The development of AIT offers new method to differentiate oil reservoir mud invade and thin layer and means and condition for establishing the explainable method of complex oil reservoir and raising the appraisement precision. This paper studied AIT principle and explainable method as well as contrast aspect with conventional induction resistivity. This paper presents some application examples of Huabei oilfield logging series optimization technology.Subject Terms:AIT logging contrast the method based of radial resistive rateInvasion thin layer apply1. 油集团从油田勘探开发的瓶颈问题入手,研究出一套较高勘探程度富油气凹陷的精细勘探方法,提出并实践了适合该区地质特点的勘探开发一体化地质研究与管理模式。

IEEE Transactions paper under review


ieee transaction under reviewTitle: An Effective Approach for Intrusion Detection in Wireless Sensor Networks: A Survey and ReviewAbstract:With the growing usage of Wireless Sensor Networks (WSNs) in various applications, security has become a paramount concern. Intrusion detection systems (IDSs) are widely employed to detect and mitigate security threats in WSNs. This paper presents a comprehensive survey of recent advancements in intrusion detection techniques for WSNs. We categorize the existing approaches based on detection models, clustering algorithms, and data aggregation techniques. Furthermore, we discuss the challenges and limitations associated with current solutions and propose potential directions for future research. The effectiveness of these approaches is assessed through extensive experiments using real-world datasets. Our findings provide valuable insights for researchers and practitioners who are interested in developing robust intrusion detection systems for WSNs.1. IntroductionWireless Sensor Networks (WSNs) consist of a large number of low-cost and small-sized sensor nodes that collaboratively monitor and collect data from a physical environment. Due to their widespread deployment in critical applications, such as environmental monitoring, healthcare, and military operations, WSNs are vulnerable to various security threats, including node compromise, message tampering, and denial of service (DoS) attacks. Intrusion detection systems (IDSs) play a crucial role in detecting and mitigating such threats. In this paper, we aim toprovide a comprehensive survey of intrusion detection techniques for WSNs, highlighting their strengths and limitations.2. Review MethodologyWe reviewed a wide range of research papers and publications from prestigious IEEE journals and conferences to identify relevant studies on WSN intrusion detection. The selected papers were analyzed and categorized based on the detection models, clustering algorithms, and data aggregation techniques adopted. 3. Detection ModelsDifferent models have been proposed for detecting intrusions in WSNs, including anomaly-based, signature-based, and hybrid models. Anomaly-based IDSs detect deviations from baseline behavior, while signature-based IDSs match observed patterns against known attack signatures. Hybrid IDSs combine the strengths of both approaches to achieve higher detection accuracy. This section provides a detailed analysis of these models, comparing their advantages and disadvantages.4. Clustering AlgorithmsClustering algorithms are widely used in IDSs to group sensor nodes based on their similarity to identify anomalies. We review several popular clustering algorithms, including k-means, hierarchical clustering, and fuzzy clustering, with a focus on their suitability for WSNs. The impact of different clustering parameters on detection accuracy and energy consumption is discussed. We also present a comparative analysis of these algorithms based on experimental results.5. Data Aggregation TechniquesData aggregation techniques play a crucial role in reducing transmission overhead and energy consumption in WSNs. However, they can also introduce vulnerabilities and enable attacks. This section reviews various data aggregation techniques, such as spatial and temporal aggregation, and discuss their implications for intrusion detection. We highlight the trade-off between energy efficiency and security, providing insights into the selection of appropriate data aggregation techniques for different scenarios.6. 
Challenges and Future DirectionsDespite the advancements in intrusion detection techniques for WSNs, several challenges remain, including resource constraints, scalability, and resilience to sophisticated attacks. This section discusses these challenges and proposes potential solutions and future research directions. We emphasize the need for the development of lightweight and distributed IDSs that can effectively detect and prevent attacks while meeting the resource constraints of WSNs.7. Experimental EvaluationTo evaluate the effectiveness of different intrusion detection approaches, we conducted extensive experiments using real-world datasets. We compare the detection accuracy, false alarm rate, and energy consumption of different models, clustering algorithms, and data aggregation techniques. The experimental results provide insights into the performance of these approaches and enable us to identify the most effective combinations for practical deployment.8. ConclusionThis paper provides a comprehensive survey and review of recent advancements in intrusion detection techniques for WSNs. We categorize and analyze the existing approaches based on detection models, clustering algorithms, and data aggregation techniques. The challenges and limitations associated with current solutions are discussed, and potential directions for future research are proposed. Our evaluation results using real-world datasets provide valuable insights for researchers and practitioners working on robust intrusion detection systems for WSNs.。
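The clustering-based detection idea surveyed above can be illustrated with a small example. The following Python sketch (assuming NumPy and scikit-learn are available; the feature choices, threshold, and data are hypothetical and not taken from any reviewed paper) flags sensor-node readings that lie far from their nearest k-means centroid as potential anomalies:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical per-node features, e.g. packet rate and energy drain, under normal behaviour.
normal = rng.normal(loc=[10.0, 1.0], scale=[1.0, 0.2], size=(200, 2))
# A few injected anomalous readings, e.g. a flooding node.
anomalous = rng.normal(loc=[30.0, 5.0], scale=[1.0, 0.2], size=(5, 2))
readings = np.vstack([normal, anomalous])

# Learn cluster centres from normal traffic only, then score all readings by
# their distance to the nearest centre; large distances suggest anomalies.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(normal)
distances = np.min(kmeans.transform(readings), axis=1)
threshold = np.percentile(np.min(kmeans.transform(normal), axis=1), 99)  # hypothetical cut-off
flags = distances > threshold
print(f"{flags.sum()} of {len(readings)} readings flagged as potential intrusions")

This is only a sketch of the general anomaly-detection pattern; a deployed WSN IDS would also have to respect the energy and communication constraints discussed in the survey.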

2024 Senior High English "Academic Frontiers" Multiple-Choice Questions (30 Items)


1. The research on artificial intelligence is in the forefront of academic research. The word "forefront" here means _____.
A. back  B. middle  C. front  D. side
Answer: C.

"Forefront" means "the very front; the leading edge". Option A "back" means the rear; Option B "middle" means the middle; Option C "front" means the front, which fits the sentence; Option D "side" means a side.

2. In academic research, innovation is highly valued. The word "innovation" means _____.
A. tradition  B. change  C. improvement  D. innovation itself
Answer: D.

"Innovation" simply means innovation. Option A "tradition" means tradition; Option B "change" means change; Option C "improvement" means improvement.

3. The latest scientific discoveries are at the cutting edge of academic research. The phrase "cutting edge" means _____.
A. old  B. new  C. middle  D. back
Answer: B.

"Cutting edge" means the frontier or most advanced point, i.e., something new. Option A "old" means old; Option B "new" means new, which fits the sentence; Option C "middle" means the middle; Option D "back" means the rear.

4. Academic research requires perseverance and dedication. The word "perseverance" means _____.
A. laziness  B. effort  C. persistence  D. giving up
Answer: C.

English Essay on Research Methods


关于研究方法的英文作文英文,As a student of research methods, I've delved deep into the intricacies of conducting studies and analyzing data. Research methods are crucial for any academic pursuit or scientific inquiry. They provide the framework through which we gather information, test hypotheses, and draw meaningful conclusions.One fundamental aspect of research methods is the choice of methodology. Whether it's qualitative or quantitative, each approach has its strengths and weaknesses. For instance, in my recent project on consumer behavior, I utilized a mixed-methods approach. This allowed me to gather both quantitative data through surveys and qualitative insights through interviews. By combining these methods, I gained a more comprehensive understanding of my research topic.Another important consideration is sampling. The way we select participants can significantly impact the validityand generalizability of our findings. In my study on student satisfaction, I employed stratified random sampling to ensure representation from different demographic groups. This method helped me minimize bias and obtain results that were reflective of the entire population.Data analysis is perhaps the most crucial stage of any research endeavor. Whether it involves statistical tests or thematic analysis, the way we interpret data shapes the conclusions we draw. In my analysis of environmental attitudes, I used regression analysis to identify significant predictors of pro-environmental behavior. This statistical technique enabled me to uncover meaningful relationships between variables and draw actionable insights.However, no research is without its limitations. Despite our best efforts, there are always factors that may confound our results or introduce bias. For example, in my study on workplace productivity, I encountered challenges related to self-reporting bias and social desirability. Acknowledging these limitations is essential formaintaining the integrity and credibility of our research.In conclusion, research methods serve as the cornerstone of academic inquiry, guiding us through the process of knowledge creation and discovery. By carefully selecting methodologies, sampling techniques, and data analysis approaches, we can generate robust findings that contribute to our understanding of the world.中文,作为一个研究方法的学生,我深入研究了如何进行研究和分析数据的复杂性。

based on a most recent survey


based on a most recent surveyBased on a most recent survey, a number of interesting insights have been revealed about consumer behavior and attitudes. This survey was conducted on a large sample size and covered a diverse group of people from different regions and backgrounds. Here are some key takeaways from the survey: Step 1: Social Media TrendsThe survey found that social media is playing an increasingly important role in shaping consumer behavior. Nearly 70% of respondents said they use social media to make purchase decisions, while over 50% of them follow their favourite brands on social media platforms. This highlights the importance of companies leveraging social media channels to engage with their target audience and build brand loyalty.Step 2: Online Shopping HabitsAnother major finding of the survey was related to online shopping. With more and more people relying on e-commerce platforms for their shopping needs, it was foundthat the most important factors influencing online purchasing decisions are product reviews and customer feedback. Over 80% of respondents said they read online reviews before making a purchase, while around 60% considered recommendations from family and friends.Step 3: Sustainability and EnvironmentalismThe survey also revealed that people are becoming increasingly mindful of their impact on the environment. Over 75% of respondents said they make an effort to choose environmentally friendly products, while nearly 60% considersustainability as an important factor when making purchase decisions. This trend presents an opportunity for businesses to differentiate themselves from the competition by adopting sustainable practices and marketing their eco-friendly products.Step 4: The Rise of PersonalizationFinally, the survey found that consumers are embracing the concept of personalization. Over 70% of respondents said they are more likely to make a purchase if a brand offers personalized products or services. This presents a challenge for businesses to be more innovative and customer-focused in their offerings, as well as to invest in technologies that help them analyze customer data and create personalized experiences.In conclusion, the survey provides valuable insightsinto consumer behavior and trends. Businesses can utilizethis information to tailor their marketing strategies and develop products and services that meet the changing needs of customers. By paying attention to social media, online shopping habits, sustainability, and personalization, companies can gain a competitive edge and increase their chances of success in a rapidly changing market.。

掌握研究方法英文作文

掌握研究方法英文作文

掌握研究方法英文作文英文:Research is a crucial aspect of any academic orscientific pursuit. It allows us to explore new ideas, test hypotheses, and expand our understanding of the worldaround us. However, conducting research is not a simpletask and requires a certain set of skills and methods to ensure that the results are accurate and reliable.One of the most important aspects of research is selecting the appropriate method. There are variousresearch methods available, including qualitative and quantitative methods. Qualitative methods involve exploring subjective experiences and opinions, while quantitative methods involve collecting numerical data and analyzing it statistically.Another important aspect of research is data collection. This involves selecting the appropriate sample size,designing surveys or questionnaires, and collecting data in a systematic and unbiased manner. It is also important to consider ethical considerations when collecting data, such as obtaining informed consent and protecting the privacy of participants.Data analysis is also a crucial component of research. This involves analyzing the data collected and drawing conclusions based on the results. There are various statistical methods available for data analysis, such as regression analysis and factor analysis.Finally, it is important to communicate the results of research effectively. This involves writing clear and concise reports, presenting findings in a clear and organized manner, and using appropriate visual aids to support the results.Overall, mastering research methods is essential for any academic or scientific pursuit. By selecting the appropriate method, collecting data systematically and ethically, analyzing data accurately, and communicatingresults effectively, researchers can ensure that their findings are reliable and contribute to the advancement of knowledge.中文:研究是任何学术或科学追求的重要方面。

介绍研究方法英语作文

介绍研究方法英语作文

介绍研究方法英语作文Title: Exploring Research Methods in English Composition As an English teacher specializing in English composition, understanding and employing various research methods is crucial for both instructional purposes and the development of one's own writing skills. In this essay, I will introduce several research methods that are commonly used in the field of English writing.The first method is literature review, which involves the examination of existing literature to identify patterns, theories, and gaps in knowledge. This method allows writers to contextualize their work within a broader academic discourse and to build a foundation for their own arguments. When conducting a literature review, it is essential to use credible sources, such as peer-reviewed journals, books, and reputable websites.Another important method is qualitative research, which focuses on the why and how of experiences, attitudes, and behaviors. This type of research often involves interviews, observations, and case studies to gather in-depth data.Qualitative research can provide rich insights into human behavior and is particularly useful when exploring the nuances of language use and cultural influences on writing.Quantitative research, on the other hand, deals with numerical data and statistical analysis. It aims to quantify findings and test hypotheses through methods such as surveys, experiments, and correlational studies. This approach can be beneficial in assessing the effectiveness of certain writing techniques or in analyzing the relationship between different variables that affect writing performance.Ethnography is a research method that involves the study of cultures and societies through detailed observations and engagement with the people being studied. This method can offer unique perspectives on how language is used within different cultural contexts and how it shapes communication styles in writing.Comparative analysis is a method that allows researchers to examine similarities and differences between two or more texts, authors, or cultures. This method can be used to explore how writing conventions vary across different genres, time periods, or geographical locations.Lastly, experimentation involves the manipulation of variables to test the effects of specific conditions on writing outcomes. This method requires careful control and random assignment of participants to ensure validity and reliability of the results.In conclusion, research methods in English composition are diverse and each serves a unique purpose in advancing our understanding of writing practices and processes. As an English teacher, incorporating these methods into my teaching and research not only enhances my pedagogical approach but also enriches my students' learning experiences by providing them with a comprehensive view of the complexities involved in English writing. By mastering these methods, both educators and students can improve their analytical skills and become more effective writers and communicators.。

Research Findings Methods

Research Findings Methods

Research Findings MethodsResearch findings methods are crucial in the field of scientific inquiry, as they provide the means to gather data, analyze information, and draw conclusions. There are various methods and techniques that researchers employ to conduct their investigations, and each approach has its own strengths and limitations. In this response, I will discuss the different research findings methods, their applications, and the ethical considerations associated with each approach.One of the most common research findings methods is the experimental method, which involves the manipulation of variables to observe the effects on the dependent variable. This method allows researchers to establish cause-and-effect relationships between variables, providing valuable insights into the underlying mechanisms of a phenomenon. However, experimental studies are often conducted in artificial settings, which may limit the generalizability of the findings to real-world situations. Additionally, ethical concerns arise when participants are exposed to potentially harmful conditions or are not fully informed about the nature of the study.Another widely used research findings method is the survey method, which involves the collection of data through questionnaires or interviews. Surveys are valuable for gathering large amounts of information from a diverse sample of participants, allowing researchers to explore a wide range of topics and perspectives. However, surveys rely on self-reported data, which may be subject to response biases and inaccuracies. Moreover, survey methods require careful consideration of ethical issues, such as ensuring the confidentiality and anonymity of participants and obtaining informed consent.Qualitative research methods, such as interviews and focus groups, are employed to gain in-depth insights into the experiences and perspectives of individuals. These methods are particularly valuable for exploring complex social phenomena and understanding the subjective meanings that people attribute to their experiences. However, qualitative research findings are often context-dependent and may not be easily generalizable to other settings. Additionally, ethical considerations in qualitative research pertain to issues of privacy, confidentiality, and the potential impact of the research on participants.In contrast to qualitative methods, quantitative research methods involve the collection and analysis of numerical data. These methods are used to test hypotheses, identify patterns, and quantify relationships between variables. Quantitative research findings are often considered more objective and reliable due to the use of standardized measures and statistical analyses. However, quantitative methods may overlook the nuances and complexities of human experiences, and ethical concerns arise when researchers fail to consider the potential harm or unintended consequences of their work.In recent years, mixed methods research has gained popularity as a comprehensive approach that integrates qualitative and quantitative methods. By combining the strengths of both approaches, mixed methods research allows for a more comprehensive understanding of a research topic, enhancing the validity and reliability of the findings. 
However, conducting mixed methods research requires expertise in both qualitative and quantitative methodologies, as well as careful consideration of ethical issues related to data integration and interpretation.In conclusion, research findings methods play a critical role in advancing knowledge and understanding in various fields of study. Each method has its own unique strengths and limitations, and researchers must carefully consider the appropriateness of different approaches based on the research questions, objectives, and ethical considerations. By employing rigorous and ethical research findings methods, researchers can contribute to the generation of reliable and meaningful findings that have the potential to inform policy, practice, and further research.。

英文报告摘要

英文报告摘要

英文报告摘要AbstractThis report provides a summary of the findings from a research project conducted on the use of social media in marketing. The study aimed to investigate the current trends in social media marketing, the benefits it offers, as well as the challenges and limitations faced by businesses. The report highlights some of the key findings from the research and offers recommendations for businesses looking to leverage social media to enhance their marketing efforts.IntroductionSocial media has become a popular buzzword in the world of marketing. In recent years, businesses have increasingly turned to social media platforms such as Facebook, Twitter, Instagram, and LinkedIn to promote their brands and engage with customers. As a result, social media has emerged as a powerful tool for marketers to reach and connect with their target audience.Research MethodologyTo investigate the use of social media in marketing, a comprehensive literature review was conducted to identify relevant studies and articles published in academic journals, industry publications, and online resources. In addition, an online survey was administered to a sample of businesses to gather data on their use of social media in marketing.Key FindingsThe research identified several key findings:1. Social media is an effective tool for businesses to reach and engage with their target audience.2. Social media platforms offer a range of benefits to businesses, such as increased brand awareness, customer engagement, and lead generation.3. However, businesses face several challenges and limitations when using social media for marketing, such as the need for consistent and engaging content, managing customer feedback and complaints, and measuring the effectiveness of their campaigns.4. To overcome these challenges, businesses should develop a comprehensive social media strategy that aligns with their overallmarketing goals, engages their target audience, and is regularly reviewed and adjusted based on data and feedback.ConclusionIn conclusion, social media has emerged as a powerful tool for businesses to enhance their marketing efforts. However, to fully leverage its potential, businesses must overcome the challenges and limitations and develop a comprehensive strategy that aligns with their overall marketing objectives. This report provides a summary of the key findings from the research and offers recommendations for businesses looking to incorporate social media into their marketing mix.。

Literature Survey Methods

Literature Survey Methods

Literature Survey MethodsConducting a literature survey is an essential component of any research project, as it provides a comprehensive understanding of the existing knowledge and gaps in the field. There are various methods to conduct a literature survey, each with its own advantages and limitations. In this response, I will discuss the different literature survey methods, their applications, and the best practices for conducting a thorough and effective literature survey.One of the most common methods for conducting a literature survey is through the use of online databases and academic journals. This method involves searching for relevant literature using keywords and filters to narrow down the results. Online databases such as PubMed, Google Scholar, and Scopus are popular choices for researchers due to their extensive coverage of academic literature. Additionally, academic journals provide access to peer-reviewed articles, which are crucial for ensuring the reliability and credibility of the literature being reviewed.Another method for conducting a literature survey is through the use of bibliographic software such as EndNote, Mendeley, or Zotero. These tools allow researchers to organize and manage their references, as well as generate citations and bibliographies. By using bibliographic software, researchers can streamline the process of collecting and organizing relevant literature, ultimately saving time and effort.In addition to online databases and bibliographic software, researchers can also conduct a literature survey by attending conferences and seminars in their field of study. These events provide opportunities to learn about the latest research findings and developments, as well as to network with other researchers and experts in the field. By actively participating in academic conferences, researchers can gain valuable insights and access to unpublished or ongoing research that may not be available through traditional literature sources.Furthermore, researchers can conduct a literature survey by consulting with subject matter experts and scholars in the field. This method involves reaching out to individualswho have in-depth knowledge and experience in the specific area of research. By engaging in discussions and seeking guidance from experts, researchers can gain a deeper understanding of the current state of knowledge and identify potential gaps or areas for further investigation.It is important to note that the effectiveness of a literature survey depends on the thoroughness and rigor of the methods used. Therefore, researchers should employ a combination of the aforementioned methods to ensure a comprehensive and well-rounded literature survey. By utilizing online databases, bibliographic software, attending conferences, and consulting with experts, researchers can gather a diverse range of literature sources and perspectives, ultimately enhancing the quality and credibility of their research.In conclusion, conducting a literature survey is a critical step in the research process, and there are various methods available to researchers. By utilizing online databases, bibliographic software, attending conferences, and consulting with experts, researchers can conduct a thorough and effective literature survey. It is essential for researchers to approach the literature survey with diligence and attention to detail, as it forms the foundation for the development of new knowledge and the advancement of the field.。

A Survey of Recent Methods for Efficient Retrieval of Similar Time Sequences

Magnus Lie Hetland∗
Norwegian University of Science and Technology
∗ mlh@idi.ntnu.no

Abstract

Time sequences occur in many applications, ranging from science and technology to business and entertainment. In many of these applications, an analysis of time series data, and searching through large, unstructured databases based on sample sequences, is often desirable. Such similarity-based retrieval has attracted a lot of attention in recent years. Although several different approaches have appeared, most are based on the common premise of dimensionality reduction and spatial access methods. This paper gives an overview of recent research and shows how the methods fit into a general context of signature extraction.

Keywords: Information retrieval, sequence databases, similarity search, spatial indexing, time sequences.
1 Introduction

Time sequences arise in many applications — any application which involves storing sensor inputs, or sampling a value which changes over time. A problem which has received increasing attention lately is the problem of similarity retrieval in databases of time sequences, so-called “query by example.” Some uses of this are [1]:

• to identify companies with similar patterns of growth;
• determine products with similar selling patterns;
• discover stocks with similar movement in stock prices;
• find if a musical score is similar to one of a set of copyrighted scores;
• find portions of seismic waves that are not similar, to spot geological irregularities.

Applications range from medicine, through economy, to scientific disciplines such as meteorology and astrophysics [8, 31]. Simple algorithms for comparing time sequences generally take polynomial time in the length of both sequences, typically linear or quadratic time. To find the correct offset of a query in a large database, a naïve sequential scan will require a number of such comparisons linear in the length of the database. Given a query of length m and a database of length n, this gives a time complexity of O(mn), or even O(mn²). For large databases this is clearly unacceptable. Many methods are known for performing this sort of query in the domain of strings over finite alphabets, but with time sequences there are a few extra obstacles:

• the range of values is not generally finite, or even discrete;
• the sampling rate may not be constant;
• the presence of noise in various forms makes it necessary to support very flexible similarity measures.

This paper describes some of the recent advances that have been made in this field: methods that allow for indexing of time sequences under flexible similarity measures that are invariant under a wide range of transformations and error sources. The paper is structured as follows: Section 2 gives a more formal presentation of the problem of similarity-based retrieval and the so-called “dimensionality curse.” Section 3 describes the general approach of signature-based retrieval, or “shrink and search.” Section 4 shows some other approaches, while Section 5 concludes the paper. Finally, Appendix A gives an overview of some basic distance measures.¹

¹ The term “distance” is used loosely in this paper. A distance measure is simply the inverse of a similarity measure and is not required to obey the metric axioms.
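To make the cost argument above concrete, the following is a minimal Python sketch of the naïve sequential scan just described; the function names, the toy data, and the use of Euclidean distance are illustrative assumptions, not taken from the survey. Every offset of the database is compared against the query with a linear-time distance computation, which is exactly what yields the O(mn) behaviour.

```python
import math

def euclidean_distance(a, b):
    """Linear-time comparison of two equal-length value sequences."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def naive_scan(database, query):
    """Compare the query against every offset of the database.

    With |query| = m and |database| = n this performs O(n) window
    comparisons, each costing O(m), for O(mn) total work -- the cost
    the introduction argues is unacceptable for large databases.
    """
    best_offset, best_dist = None, float("inf")
    for offset in range(len(database) - len(query) + 1):
        window = database[offset:offset + len(query)]
        dist = euclidean_distance(window, query)
        if dist < best_dist:
            best_offset, best_dist = offset, dist
    return best_offset, best_dist

# Example: locate the best-matching subsequence of a toy "database".
db = [0.0, 0.1, 0.5, 1.0, 0.9, 0.4, 0.1, 0.0]
q = [0.5, 1.0, 0.9]
print(naive_scan(db, q))   # -> (2, 0.0)
```

Euclidean distance is only one of many possible measures (see Appendix A); the O(mn) scan cost is independent of that choice, as long as a single comparison is linear in the query length.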
1.1 Terminology and Notation

A time sequence x = ⟨x_1 = (v_1, t_1), …, x_n = (v_n, t_n)⟩ is an ordered collection of elements x_i, each consisting of a value v_i and a timestamp t_i. Abusing language, the value of x_i may be referred to as x_i. The timestamp of x_i is referred to as t(x_i). The values in a time sequence are normally real numbers, but may in some cases be taken from a finite class of values [23], or have more than one dimension [22]. A requirement is that two values can be compared in constant time. The only requirement of the timestamps is that they be nondecreasing (or, in some applications, strictly increasing) with respect to the sequence indices:

    t(x_i) ≤ t(x_j) ⟺ i ≤ j    (1)

In some methods, an additional assumption is that the elements are equi-spaced: for every two consecutive elements x_i and x_{i+1} we have

    t(x_{i+1}) − t(x_i) = ∆,    (2)

where ∆ (the sampling rate of x) is a (positive) constant. If the actual sampling rate is not important, ∆ may be set to 1, and t(x_1) to 0. The length of a time sequence x is its cardinality, written as |x|. The contiguous subsequence of x containing elements x_i to x_j (inclusive) is written x_{i:j}. A signature of x (see Section 3) is written x̃. For a summary of the notation, see Table 1.

    x         A sequence
    x̃         A signature of x
    x_i       Element number i of x
    x_{i:j}   Elements i to j (inclusive) of x
    v(x)      Value of element x
    t(x)      Timestamp of element x
    |x|       The length of x

    Table 1: Notation
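As a companion to Table 1, here is a small, self-contained sketch of how the notation can be realized as a data structure; it is written in Python, and the class and method names are illustrative assumptions rather than part of the survey. It provides |x|, v(x_i), t(x_i), and x_{i:j} with 1-based indices to match the notation above, plus a check of the equi-spacing assumption of Equation (2).

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TimeSequence:
    """A time sequence x = <(v_1, t_1), ..., (v_n, t_n)> as in Section 1.1."""
    elements: List[Tuple[float, float]]  # (value, timestamp), timestamps nondecreasing

    def __len__(self) -> int:              # |x|
        return len(self.elements)

    def value(self, i: int) -> float:       # v(x_i), 1-based index
        return self.elements[i - 1][0]

    def timestamp(self, i: int) -> float:   # t(x_i), 1-based index
        return self.elements[i - 1][1]

    def subsequence(self, i: int, j: int) -> "TimeSequence":  # x_{i:j}, inclusive
        return TimeSequence(self.elements[i - 1:j])

    def is_equispaced(self, delta: float) -> bool:
        """Check the assumption t(x_{i+1}) - t(x_i) = delta of Equation (2)."""
        return all(
            abs((self.timestamp(i + 1) - self.timestamp(i)) - delta) < 1e-9
            for i in range(1, len(self))
        )

# Example usage: a sequence sampled at a constant rate delta = 1.
x = TimeSequence([(0.3, 0.0), (0.7, 1.0), (0.2, 2.0), (0.9, 3.0)])
print(len(x), x.value(2), x.timestamp(2))   # 4 0.7 1.0
print(len(x.subsequence(2, 4)))             # 3
print(x.is_equispaced(1.0))                 # True
```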