1-Point RANSAC for EKF-Based Structure from Motion


Cylinder Space Segmentation Method for Field Crop Population Using 3D Point Cloud

Transactions of the Chinese Society of Agricultural Engineering, Vol. 37, No. 7, April 2021, pp. 175-182

Lin Chengda, Han Jing, Xie Liangyi, Hu Fangzheng (College of Resources and Environment, Huazhong Agricultural University, Wuhan 430070, China)

Abstract: Phenotypic information of field crop populations is important for studying genetic changes within crops and for breeding superior varieties.

To achieve complete extraction and segmentation of individual plants from field crop point-cloud data, and thereby automate the measurement of individual phenotypic parameters more efficiently, this study proposes a cylinder-space clustering segmentation method for field crops.

Three-dimensional point clouds of field rapeseed, maize, and cotton were acquired with a terrestrial laser scanner. Crop population targets were extracted using the HSI (Hue-Saturation-Intensity) color model; stem point clouds were obtained with a pass-through filter; the cluster center of each plant was extracted from the stem points by Euclidean-distance cluster segmentation; and a cylinder space model built around each cluster center was used to segment the point cloud of each individual plant.

Experiments show that the segmentation accuracy of the proposed method is 90.12%, 96.63%, and 100% for rapeseed, maize, and cotton, respectively. Compared with Euclidean-distance cluster segmentation, accuracy improved by 36.42, 61.80, and 82.69 percentage points, and runtime was reduced to 9.98%, 16.40%, and 9.04% of the latter. Compared with region-growing segmentation, the proposed method applies to different crop types, is more broadly applicable, and can segment relatively dense plants in the field.

The method achieves complete extraction and segmentation of individual plants at field scale with high applicability, and can serve as a reference for accurately measuring individual crop phenotypes.

Keywords: crops; laser; 3D point cloud; cylinder space model; segmentation. doi: 10.11975/j.issn.1002-6819.2021.07.021; CLC number: TP391; Document code: A; Article ID: 1002-6819(2021)-07-0175-08

Citation: Lin Chengda, Han Jing, Xie Liangyi, et al. Cylinder space segmentation method for field crop population using 3D point cloud[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2021, 37(7): 175-182. (in Chinese with English abstract) doi: 10.11975/j.issn.1002-6819.2021.07.021

0 Introduction. With continuing population growth, demand for grain and oil crops has risen sharply, yet yields are hard to raise because of shrinking arable land, desertification, and natural disasters.
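As an illustration of the pipeline summarized in the abstract (pass-through filtering of a stem layer, Euclidean clustering of stems, cylinder-space assignment), here is a minimal Python sketch. It is illustrative only: the function names, the use of scipy, and all parameter values (stem height band, cluster tolerance, cylinder radius) are assumptions, not taken from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def passthrough(points, zmin, zmax):
    """Keep points whose height lies in [zmin, zmax] (the stem layer)."""
    z = points[:, 2]
    return points[(z >= zmin) & (z <= zmax)]

def euclidean_clusters(points, tol=0.05, min_size=30):
    """Greedy BFS clustering: points closer than `tol` share a cluster."""
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        frontier, members = [seed], [seed]
        while frontier:
            idx = frontier.pop()
            for nb in tree.query_ball_point(points[idx], tol):
                if nb in unvisited:
                    unvisited.remove(nb)
                    frontier.append(nb)
                    members.append(nb)
        if len(members) >= min_size:
            clusters.append(points[members])
    return clusters

def cylinder_segment(points, centers, radius=0.15):
    """Assign each point to the nearest stem center whose vertical
    cylinder (XY distance only) contains it."""
    d = np.linalg.norm(points[:, None, :2] - centers[None, :, :2], axis=2)
    nearest = d.argmin(axis=1)
    inside = d[np.arange(len(points)), nearest] <= radius
    return [points[inside & (nearest == k)] for k in range(len(centers))]

# Hypothetical usage on a cloud of shape (N, 3):
# stems = passthrough(cloud, 0.05, 0.25)
# centers = np.array([c.mean(axis=0) for c in euclidean_clusters(stems)])
# plants = cylinder_segment(cloud, centers)
```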

The Classic RANSAC Paper

PROBLEM: Given the set of seven (x,y) pairs shown in the plot, find a best fit line, assuming that no valid datum deviates from this line by more than 0.8 units.
Communications of the ACM June 1981 Volume 24 Number 6
Fig. 1. Failure of the "Throwing Out the Worst Residual" Heuristic to Deal with an Erroneous Data Point.
Abstract: A new paradigm, Random Sample Consensus (RANSAC), for fitting a model to experimental data is introduced. RANSAC is capable of interpreting/smoothing data containing a significant percentage of gross errors, and is thus ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of this paper describes the application of RANSAC to the Location Determination Problem (LDP): Given an image depicting a set of landmarks with known locations, determine that point in space from which the image was obtained. In response to a RANSAC requirement, new results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form. These results provide the basis for an automatic system that can solve the LDP under difficult viewing and analysis conditions.
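The seven-point problem stated above is exactly the setting RANSAC was designed for. A minimal sketch in Python (the data and trial count are made up for illustration; this is not the paper's pseudocode):

```python
import numpy as np

rng = np.random.default_rng(0)

def ransac_line(points, thresh=0.8, trials=200):
    """Minimal RANSAC line fit: sample two points, keep the line with the
    largest consensus set, then least-squares refit on those inliers."""
    best_inliers = None
    for _ in range(trials):
        p, q = points[rng.choice(len(points), 2, replace=False)]
        d = q - p
        n = np.array([-d[1], d[0]])
        n = n / np.linalg.norm(n)          # unit normal of the candidate line
        dist = np.abs((points - p) @ n)    # point-to-line distances
        inliers = points[dist <= thresh]
        if best_inliers is None or len(inliers) > len(best_inliers):
            best_inliers = inliers
    # refit y = a*x + b on the consensus set
    a, b = np.polyfit(best_inliers[:, 0], best_inliers[:, 1], 1)
    return a, b

# Seven points, one gross outlier, in the spirit of Fig. 1:
pts = np.array([[0, 0], [1, 1], [2, 2], [3, 2.9], [4, 4.1], [5, 5], [3, 9.0]])
print(ransac_line(pts))   # slope ~1, intercept ~0; the outlier is rejected
```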

RANSAC and Circle Fitting

Course Lab Report
Fall Semester, 2017-2018 Academic Year
Course: Computer Vision and Applications
Experiment:
Class: Electronic Communication Class 1
Student name:    Student ID:

Experiment date: 2017.12.1    Location:
Instructor:
Grade:    Date graded:
Experiment Data Analysis and Processing
RANSAC fits for the example images:
Simulated in MATLAB, the RANSAC circle fits are shown below. Figure 1: RANSAC circle fit, first run
Figure 2: RANSAC circle fit, second run
Figure 3: RANSAC circle fit, third run
Analysis of Results
1. Figure 1
As shown in Figure 1, 300 blue points were generated at random, of which 21 are inliers (the yellow points) and 190 are outliers; a circle was fitted centered on the sampled points.

2. Figure 2
As shown in Figure 2, with 11 inliers and 200 outliers, a good fit was obtained.

3. Figure 3
As shown in Figure 3, with 14 inliers and 200 outliers, a good fit was obtained.

4. Comparison of Figures 1-3
In all three figures, the fit succeeded only after several attempts.

A likely reason is that the sample points are generated at random, so there is no guarantee that every draw of samples produces a successful fit.

5. RANSAC fitting principle and flowchart
The model is built from the defining equation dist(P,A) + dist(P,B) = DIST, where P is a point on the circle and A and B are the focal points; for a circle the two foci coincide at the center.

Three points are randomly selected to construct a circle model. For every data point, the difference between the sum of its distances to the two foci and DIST is computed; points whose difference falls below a threshold count as fitting the model, and the model supported by the most points is taken as the best circle. Finally, the coefficients of the general circle equation x^2 + y^2 + Dx + Ey + F = 0 are fitted from the supporting points, and the final circle is drawn from this equation.
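A compact Python version of this sample-score-refit loop, using the three-point construction via the general circle equation (the data, thresholds, and trial count are invented for illustration; the report itself used MATLAB):

```python
import numpy as np

rng = np.random.default_rng(1)

def circle_from_3(p1, p2, p3):
    """Solve x^2 + y^2 + D*x + E*y + F = 0 through three points;
    returns center (cx, cy) and radius r."""
    A = np.array([[p[0], p[1], 1.0] for p in (p1, p2, p3)])
    b = np.array([-(p[0]**2 + p[1]**2) for p in (p1, p2, p3)])
    D, E, F = np.linalg.solve(A, b)
    cx, cy = -D / 2, -E / 2
    return np.array([cx, cy]), np.sqrt(cx**2 + cy**2 - F)

def ransac_circle(pts, thresh=0.1, trials=500):
    """Sample three points per trial; score by how many points lie
    within `thresh` of the candidate circle."""
    best, best_count = None, 0
    for _ in range(trials):
        i, j, k = rng.choice(len(pts), 3, replace=False)
        try:
            c, r = circle_from_3(pts[i], pts[j], pts[k])
        except np.linalg.LinAlgError:    # collinear sample, skip
            continue
        resid = np.abs(np.linalg.norm(pts - c, axis=1) - r)
        count = int((resid <= thresh).sum())
        if count > best_count:
            best, best_count = (c, r), count
    return best, best_count

# Noisy circle plus uniform outliers, as in the report's experiment:
t = rng.uniform(0, 2 * np.pi, 30)
inl = np.c_[np.cos(t), np.sin(t)] + rng.normal(0, 0.02, (30, 2))
out = rng.uniform(-2, 2, (200, 2))
(c, r), n = ransac_circle(np.vstack([inl, out]))
print(c, r, n)    # center ~(0,0), radius ~1
```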

A Camera Self-Calibration Method in Machine Vision Based on the Skew-Symmetric Matrix and the RANSAC Algorithm

Wang Yun (School of Mechanical and Electrical Engineering, Xinxiang University, Xinxiang 453003, China). Modern Manufacturing Technology and Equipment, 2015(4): 92-94

Abstract: This paper describes a camera self-calibration method. After the fundamental matrix has been established from matched feature points, six constraint equations are derived from its expression using the properties of the skew-symmetric matrix, and the camera's intrinsic and extrinsic parameters are obtained from this set of constraints. The RANSAC algorithm is used to exclude singular feature points from the detected matches and filter the data set, improving the accuracy of feature matching and of the calibration. Experiments on real video show that the method effectively recovers the intrinsic and extrinsic parameters and can be applied in the machine vision field.

Keywords: camera self-calibration; fundamental matrix; skew-symmetric matrix
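The RANSAC outlier-rejection step of such a pipeline can be reproduced with standard tooling. A minimal sketch using OpenCV's RANSAC-based fundamental matrix estimator (the point arrays below are placeholders; the paper's six-constraint self-calibration step is specific to the paper and not shown):

```python
import cv2
import numpy as np

# Placeholder matched pixel coordinates; in practice these would come
# from a feature detector/matcher.
pts1 = (np.random.rand(50, 2) * 640).astype(np.float32)
pts2 = pts1 + np.random.randn(50, 2).astype(np.float32)

F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC,
                                 ransacReprojThreshold=1.0, confidence=0.99)
if F is not None:
    inliers1 = pts1[mask.ravel() == 1]   # singular (outlier) matches removed
    inliers2 = pts2[mask.ravel() == 1]
    print(F, len(inliers1))
```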

Analysis of the Role of the MEWS Score in Nursing Decision-Making for Emergency Observation Patients

I. The Concept of the MEWS Score
The Modified Early Warning Score (MEWS) is a scoring system that evaluates changes in a patient's condition by observing vital signs.

The MEWS comprises five indicators: respiratory rate, heart rate, systolic blood pressure, temperature, and level of consciousness. Each indicator is scored, and the scores are summed to gauge the degree of change in the patient's condition.
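As a concrete illustration, a minimal MEWS calculator might look as follows. The scoring bands are an assumption taken from one commonly published MEWS variant, not from this article; institutions differ in the exact cut-offs.

```python
def mews(resp_rate, heart_rate, sys_bp, temp_c, avpu):
    """Sum the five MEWS sub-scores. Bands below are one commonly
    published version (assumption). `avpu` is one of 'alert', 'voice',
    'pain', 'unresponsive'."""
    def band(value, bands):
        for lo, hi, score in bands:
            if lo <= value <= hi:
                return score
        return 3  # values beyond all listed bands score maximally
    score = 0
    score += band(resp_rate, [(9, 14, 0), (15, 20, 1), (21, 29, 2), (0, 8, 2)])
    score += band(heart_rate, [(51, 100, 0), (41, 50, 1), (101, 110, 1),
                               (111, 129, 2), (0, 40, 2)])
    score += band(sys_bp, [(101, 199, 0), (81, 100, 1), (71, 80, 2),
                           (200, 999, 2), (0, 70, 3)])
    score += band(temp_c, [(35.0, 38.4, 0), (0.0, 34.9, 2), (38.5, 99.0, 2)])
    score += {'alert': 0, 'voice': 1, 'pain': 2, 'unresponsive': 3}[avpu]
    return score

# A score above a locally agreed threshold triggers escalation:
print(mews(resp_rate=22, heart_rate=115, sys_bp=95, temp_c=38.7, avpu='voice'))
```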

When the total score exceeds a set threshold, nursing measures must be taken promptly to keep the patient's condition from deteriorating further.

Because the MEWS system is simple and convenient to administer, it has been widely adopted in clinical practice.

II. The Role of MEWS in Nursing Decision-Making for Emergency Observation Patients
1. Detecting changes in condition in time. During the care of emergency observation patients, changes in condition can occur at any time, and some may be so subtle that they are easily missed.

Scoring patients with MEWS at regular intervals allows their vital signs to be monitored promptly and the results to be recorded.

Once a patient's MEWS rises, nursing measures can be taken immediately to prevent further deterioration.

In nursing decision-making for emergency observation patients, the MEWS thus serves to detect changes in condition in time.

2. Improving the quality of care. MEWS helps medical staff detect changes in condition promptly, which supports higher-quality care.

Regular scoring enables timely detection of deterioration and timely intervention, helping to reduce medical incidents and improving both medical quality and nursing outcomes.

3. Promoting communication among medical staff. In nursing decision-making for emergency observation patients, communication and coordination among medical staff are essential.

Regular MEWS scoring gives staff a shared, current picture of the patient's condition and prompts timely exchange of information and joint care planning, improving communication, coordination, and team efficiency.

III. Limitations of MEWS in Nursing Decision-Making for Emergency Observation Patients
1. The scoring criteria are not fully objective. MEWS scores vital-sign indicators, and a degree of subjectivity is involved.

Different staff may judge the same vital signs differently, which can introduce error into the scores.

On-Orbit Calibration of Spaceborne Photon-Counting LiDAR Based on Natural Surfaces

Infrared and Laser Engineering, Vol. 49, No. 11, November 2020

Zhao Pufan, Ma Yue, Wu Yu, Yu Shizhe, Li Song (School of Electronic Information, Wuhan University, Wuhan 430072, China)

Abstract: On-orbit calibration is one of the core technologies determining the footprint geolocation accuracy of spaceborne LiDAR.

The current status of on-orbit calibration techniques for spaceborne LiDAR in China and abroad is reviewed, and the characteristics of each class of technique are analyzed.

Addressing the characteristics of the new photon-counting spaceborne LiDAR, a new on-orbit calibration method based on natural surfaces is proposed. Its correctness is verified with simulated point clouds, and cross-validation experiments are conducted using surface data from the Antarctic McMurdo Dry Valleys and the Lianyungang area of China together with data from the US ICESat-2 satellite. The results show that the point cloud calibrated by the algorithm deviates from the official NASA point cloud by about 3 m in the horizontal plane and at the centimeter level in elevation.

Feature points such as man-made structures on the ground are also used to compare the calibrated point cloud with the official one, and finally the accuracy of the natural-surface on-orbit calibration method and the influence of the calibration site's terrain are discussed.

Keywords: photon-counting LiDAR; natural surface; on-orbit calibration; satellite laser altimetry. CLC number: TN958.98; Document code: A; DOI: 10.3788/IRLA20200214. Supported by the National Natural Science Foundation of China (41801261), the National Major Science and Technology Project for High-Resolution Earth Observation (11-Y20A12-9001-17/18, 42-Y20A11-9001-17/18), and the China Postdoctoral Science Foundation (2016M600612, 20170034).

0 Introduction
Spaceborne LiDAR is an active laser measurement instrument. It obtains the precise range between the satellite and a surface target from the time of flight (ToF) of each laser pulse and, combined with the platform's precise attitude, position, and laser pointing information, yields precise three-dimensional coordinates of the target.
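To make the idea behind natural-surface calibration concrete, here is a toy sketch (emphatically not the paper's algorithm): over sloped terrain, a horizontal geolocation error shows up as an elevation residual against a reference surface, so the offset can be estimated by searching for the shift that minimizes that residual. The synthetic DEM, offsets, and noise levels are all invented.

```python
import numpy as np

def dem(x, y):
    """Hypothetical smooth reference surface standing in for a DEM."""
    return 5.0 * np.sin(x / 50.0) + 3.0 * np.cos(y / 40.0)

def calibrate(photons_xy, photons_z, search=np.arange(-10, 10.5, 0.5)):
    """Grid-search the (dx, dy) shift that best aligns photon elevations
    with the reference surface; returns the estimated offset and RMS."""
    best, best_rms = (0.0, 0.0), np.inf
    for dx in search:
        for dy in search:
            z_ref = dem(photons_xy[:, 0] + dx, photons_xy[:, 1] + dy)
            rms = np.sqrt(np.mean((photons_z - z_ref) ** 2))
            if rms < best_rms:
                best, best_rms = (dx, dy), rms
    return best, best_rms

# Simulate photons whose true footprints are shifted 3 m east / -2 m north
# relative to their reported coordinates:
xy = np.random.rand(2000, 2) * 500
z = dem(xy[:, 0] + 3.0, xy[:, 1] - 2.0) + np.random.normal(0, 0.05, 2000)
print(calibrate(xy, z))   # recovers roughly (3.0, -2.0)
```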

Progressive Simplicial Complexes

Jovan Popović (Carnegie Mellon University), Hugues Hoppe (Microsoft Research). (Work performed while at Microsoft Research.)

ABSTRACT
In this paper, we introduce the progressive simplicial complex (PSC) representation, a new format for storing and transmitting triangulated geometric models. Like the earlier progressive mesh (PM) representation, it captures a given model as a coarse base model together with a sequence of refinement transformations that progressively recover detail. The PSC representation makes use of a more general refinement transformation, allowing the given model to be an arbitrary triangulation (e.g. any dimension, non-orientable, non-manifold, non-regular), and the base model to always consist of a single vertex. Indeed, the sequence of refinement transformations encodes both the geometry and the topology of the model in a unified multiresolution framework. The PSC representation retains the advantages of PM's. It defines a continuous sequence of approximating models for runtime level-of-detail control, allows smooth transitions between any pair of models in the sequence, supports progressive transmission, and offers a space-efficient representation. Moreover, by allowing changes to topology, the PSC sequence of approximations achieves better fidelity than the corresponding PM sequence. We develop an optimization algorithm for constructing PSC representations for graphics surface models, and demonstrate the framework on models that are both geometrically and topologically complex.

CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling - surfaces and object representations.
Additional Keywords: model simplification, level-of-detail representations, multiresolution, progressive transmission, geometry compression.

1 INTRODUCTION
Modeling and 3D scanning systems commonly give rise to triangle meshes of high complexity. Such meshes are notoriously difficult to render, store, and transmit. One approach to speed up rendering is to replace a complex mesh by a set of level-of-detail (LOD) approximations; a detailed mesh is used when the object is close to the viewer, and coarser approximations are substituted as the object recedes [6,8]. These LOD approximations can be precomputed automatically using mesh simplification methods (e.g. [2,10,14,20,21,22,24,27]). For efficient storage and transmission, mesh compression schemes [7,26] have also been developed.

The recently introduced progressive mesh (PM) representation [13] provides a unified solution to these problems. In PM form, an arbitrary mesh M is stored as a coarse base mesh M^0 together with a sequence of n detail records that indicate how to incrementally refine M^0 into M^n = M (see Figure 7). Each detail record encodes the information associated with a vertex split, an elementary transformation that adds one vertex to the mesh. In addition to defining a continuous sequence of approximations M^0 ... M^n, the PM representation supports smooth visual transitions (geomorphs), allows progressive transmission, and makes an effective mesh compression scheme.

The PM representation has two restrictions, however. First, it can only represent meshes: triangulations that correspond to orientable 2-dimensional manifolds. Triangulated models that cannot be represented include 1-d manifolds (open and closed curves), higher dimensional polyhedra (e.g. triangulated volumes), non-orientable surfaces (e.g. Möbius strips), non-manifolds (e.g. two cubes joined along an edge), and non-regular models (i.e. models of mixed dimensionality). Second, the
expressiveness of the PM vertex split transformations constrains all meshes M^0 ... M^n to have the same topological type. Therefore, when M is topologically complex, the simplified base mesh M^0 may still have numerous triangles (Figure 7).

In contrast, a number of existing simplification methods allow topological changes as the model is simplified (Section 6). Our work is inspired by vertex unification schemes [21,22], which merge vertices of the model based on geometric proximity, thereby allowing genus modification and component merging.

In this paper, we introduce the progressive simplicial complex (PSC) representation, a generalization of the PM representation that permits topological changes. The key element of our approach is the introduction of a more general refinement transformation, the generalized vertex split, that encodes changes to both the geometry and topology of the model. The PSC representation expresses an arbitrary triangulated model M (e.g. any dimension, non-orientable, non-manifold, non-regular) as the result of successive refinements applied to a base model M^1 that always consists of a single vertex (Figure 8). Thus both geometric and topological complexity are recovered progressively. Moreover, the PSC representation retains the advantages of PM's, including continuous LOD, geomorphs, progressive transmission, and model compression. In addition, we develop an optimization algorithm for constructing a PSC representation from a given model, as described in Section 4.

Footnotes: (1) The particular parametrization of vertex splits in [13] assumes that mesh triangles are consistently oriented. (2) Throughout this paper, we use the words "triangulated" and "triangulation" in the general dimension-independent sense.

Figure 1: Illustration of a simplicial complex K and some of its subsets.

2 BACKGROUND
2.1 Concepts from algebraic topology
To precisely define both triangulated models and their PSC representations, we find it useful to introduce some elegant abstractions from algebraic topology (e.g. [15,25]).

The geometry of a triangulated model is denoted as a tuple (K, V) where the abstract simplicial complex K is a combinatorial structure specifying the adjacency of vertices, edges, triangles, etc., and V is a set of vertex positions specifying the shape of the model in R^3. More precisely, an abstract simplicial complex K consists of a set of vertices {v_1, ..., v_m} together with a set of non-empty subsets of the vertices, called the simplices of K, such that any set consisting of exactly one vertex is a simplex in K, and every non-empty subset of a simplex in K is also a simplex in K.

A simplex containing exactly d+1 vertices has dimension d and is called a d-simplex. As illustrated pictorially in Figure 1, the faces of a simplex s are the set of non-empty subsets of s. The star of s, denoted star(s), is the set of simplices of which s is a face. The children of a d-simplex s are the (d-1)-simplices of s, and its parents are the (d+1)-simplices of star(s). A simplex with exactly one parent is said to be a boundary simplex, and one with no parents a principal simplex. The dimension of K is the maximum dimension of its simplices; K is said to be regular if all its principal simplices have the same dimension.

To form a triangulation from K, identify its vertices {v_1, ..., v_m} with the standard basis vectors {e_1, ..., e_m} of R^m. For each simplex s, let the open simplex s* denote the interior of the convex hull of its vertices:

  s* = { b in R^m : sum_{j=1}^m b_j = 1, and b_j > 0 exactly when v_j in s }

The topological realization |K| is defined as the union of the open simplices s* over all s in K. The geometric realization of K is the image phi_V(|K|), where phi_V : R^m -> R^3 is the linear map that sends the j-th
standard basis vector e_j in R^m to v_j in R^3. Only a restricted set of vertex positions V = {v_1, ..., v_m} lead to an embedding of phi_V(|K|) in R^3, that is, prevent self-intersections. The geometric realization phi_V(|K|) is often called a simplicial complex or polyhedron; it is formed by an arbitrary union of points, segments, triangles, tetrahedra, etc. Note that there generally exist many triangulations (K, V) for a given polyhedron. (Some of the vertices V may lie in the polyhedron's interior.)

Two sets are said to be homeomorphic if there exists a continuous one-to-one mapping between them. Equivalently, they are said to have the same topological type. The topological realization |K| is a d-dimensional manifold without boundary if for each vertex v_j, |star(v_j)| is homeomorphic to R^d. It is a d-dimensional manifold if each |star(v_j)| is homeomorphic to either R^d or R^d_+, where R^d_+ = { x in R^d : x_1 >= 0 }.

Two simplices s_1 and s_2 are d-adjacent if they have a common d-dimensional face. Two d-adjacent (d+1)-simplices s_1 and s_2 are manifold-adjacent if |star(s_1 intersect s_2)| is homeomorphic to R^{d+1}.

Figure 2: Illustration of the edge collapse transformation and its inverse, the vertex split.

Transitive closure of 0-adjacency partitions K into connected components. Similarly, transitive closure of manifold-adjacency partitions K into manifold components.

2.2 Review of progressive meshes
In the PM representation [13], a mesh with appearance attributes is represented as a tuple M = (K, V, D, S), where the abstract simplicial complex K is restricted to define an orientable 2-dimensional manifold, the vertex positions V = {v_1, ..., v_m} determine its geometric realization phi_V(|K|) in R^3, D is the set of discrete material attributes d_f associated with 2-simplices f in K, and S is the set of scalar attributes s(v,f) (e.g. normals, texture coordinates) associated with corners (vertex-face tuples) of K.

An initial mesh M = M^n is simplified into a coarser base mesh M^0 by applying a sequence of n successive edge collapse transformations:

  (M = M^n) --ecol_{n-1}--> ... --ecol_1--> M^1 --ecol_0--> M^0

As shown in Figure 2, each ecol unifies the two vertices of an edge {a,b}, thereby removing one or two triangles. The position of the resulting unified vertex can be arbitrary. Because the edge collapse transformation has an inverse, called the vertex split transformation (Figure 2), the process can be reversed, so that an arbitrary mesh M may be represented as a simple mesh M^0 together with a sequence of n vsplit records:

  M^0 --vsplit_0--> M^1 --vsplit_1--> ... --vsplit_{n-1}--> (M^n = M)

The tuple (M^0, vsplit_0, ..., vsplit_{n-1}) forms a progressive mesh (PM) representation of M.

The PM representation thus captures a continuous sequence of approximations M^0 ... M^n that can be quickly traversed for interactive level-of-detail control. Moreover, there exists a correspondence between the vertices of any two meshes M^c and M^f (0 <= c <= f <= n) within this sequence, allowing for the construction of smooth visual transitions (geomorphs) between them. A sequence of such geomorphs can be precomputed for smooth runtime LOD. In addition, PM's support progressive transmission, since the base mesh M^0 can be quickly transmitted first, followed by the vsplit sequence. Finally, the vsplit records can be encoded concisely, making the PM representation an effective scheme for mesh compression.

Topological constraints. Because the definitions of ecol and vsplit are such that they preserve the topological type of the mesh (i.e. all |K^i| are homeomorphic), there is a constraint on the minimum complexity that K^0 may achieve. For instance, it is known that the minimal number of vertices for a closed genus-g mesh (orientable 2-manifold) is ceil((7 + sqrt(48g + 1))/2) if g != 2 (10 if g
= 2) [16]. Also, the presence of boundary components may further constrain the complexity of K^0. Most importantly, K may consist of a number of components, and each is required to appear in the base mesh. For example, the meshes in Figure 7 each have 117 components. As evident from the figure, the geometry of PM meshes may deteriorate severely as they approach this topological lower bound.

Figure 3: Example of a PSC representation, from M^1 up to M^n = M^34794 (with 0; 0; 68776 principal 0-, 1-, 2-simplices and 207 connected components). The image captions indicate the number of principal 0-, 1-, 2-simplices respectively and the number of connected components (in parentheses).

3 PSC REPRESENTATION
3.1 Triangulated models
The first step towards generalizing PM's is to let the PSC representation encode more general triangulated models, instead of just meshes. We denote a triangulated model as a tuple M = (K, V, D, A). The abstract simplicial complex K is not restricted to 2-manifolds, but may in fact be arbitrary. To represent K in memory, we encode the incidence graph of the simplices using the following linked structures (in C++ notation):

  struct Simplex {
    int dim;                      // 0=vertex, 1=edge, 2=triangle, ...
    int id;
    Simplex* children[MAXDIM+1];  // [0..dim]
    List<Simplex*> parents;
  };

To render the model, we draw only the principal simplices of K, denoted P(K) (i.e. vertices not adjacent to edges, edges not adjacent to triangles, etc.). The discrete attributes D associate a material identifier d_s with each simplex s in P(K). For the sake of simplicity, we avoid explicitly storing surface normals at "corners" (using a set S) as done in [13]. Instead we let the material identifier d_s contain a smoothing group field [28], and let a normal discontinuity (crease) form between any pair of adjacent triangles with different smoothing groups.

Previous vertex unification schemes [21,22] render principal simplices of dimension 0 and 1 (denoted P01(K)) as points and lines respectively with fixed, device-dependent screen widths. To better approximate the model, we instead define a set A that associates an area a_s with each simplex s in P01(K). We think of a 0-simplex s0 as approximating a sphere with area a_{s0}, and a 1-simplex s1 = {v_j, v_k} as approximating a cylinder (with axis (v_j, v_k)) of area a_{s1}. To render a simplex s in P01(K), we determine the radius r_model of the corresponding sphere or cylinder in modeling space, and project the length r_model to obtain the radius r_screen in screen pixels. Depending on r_screen, we render the simplex as a polygonal sphere or cylinder with radius r_model, a 2D point or line with thickness 2 r_screen, or do not render it at all. This choice based on r_screen can be adjusted to mitigate the overhead of introducing polygonal representations of spheres and cylinders. As an example, Figure 3 shows an initial model M of 68,776 triangles; one of its approximations, M^500, is a triangulated model whose principal 0-, 1-, and 2-simplex counts are labeled in Figure 3.

3.2 Level-of-detail sequence
As in progressive meshes, from a given triangulated model M = M^n, we define a sequence of approximations M^i:

  M^1 --op_1--> M^2 --op_2--> ... --op_{n-1}--> M^n

Here each model M^i has exactly i vertices. The simplification operator (vunify_i, mapping M^{i+1} to M^i) is the vertex unification transformation, which merges two vertices (Section 3.3), and its inverse (gvspl_i, mapping M^i to M^{i+1}) is the generalized vertex split transformation (Section 3.4). The tuple (M^1, gvspl_1, ..., gvspl_{n-1}) forms a progressive simplicial complex (PSC) representation of M.

To construct a PSC representation, we first determine a sequence of vunify
transformations simplifying M down to a single vertex, as described in Section 4. After reversing these transformations, we renumber the simplices in the order that they are created, so that each gvspl_i(a_i, ...) splits the vertex a_i in K^i into two vertices a_i and v_{i+1} in K^{i+1}. As vertices may have different positions in the different models, we denote the position of v_j in M^i as v^i_j.

To better approximate a surface model M at lower complexity levels, we initially associate with each (principal) 2-simplex s an area a_s equal to its triangle area in M. Then, as the model is simplified, we keep constant the sum of areas a_s associated with principal simplices within each manifold component. When 2-simplices are eventually reduced to principal 1-simplices and 0-simplices, their associated areas will provide good estimates of the original component areas.

3.3 Vertex unification transformation
The transformation vunify(a_i, b_i, midp_i) : M^{i+1} -> M^i takes an arbitrary pair of vertices a_i, b_i in K^{i+1} (the simplex {a_i, b_i} need not be present in K^{i+1}) and merges them into a single vertex a_i in K^i. Model M^i is created from M^{i+1} by updating each member of the tuple (K, V, D, A) as follows:

K: References to b_i in all simplices of K are replaced by references to a_i. More precisely, each simplex s in star(b_i) of K^{i+1} is replaced by the simplex (s \ {b_i}) union {a_i}, which we call the ancestor simplex of s. If this ancestor simplex already exists, s is deleted.

V: Vertex b is deleted. For simplicity, the position of the remaining (unified) vertex is set to either the midpoint or is left unchanged. That is, v^i_a = (v^{i+1}_a + v^{i+1}_b)/2 if the boolean parameter midp_i is true, or v^i_a = v^{i+1}_a otherwise.

D: Materials are carried through as expected. So, if after the vertex unification an ancestor simplex (s \ {b_i}) union {a_i} in K^i is a new principal simplex, it receives its material from s in K^{i+1} if s is a principal simplex, or else from the single parent s union {a_i} in K^{i+1} of s.

A: To maintain the initial areas of manifold components, the areas a_s of deleted principal simplices are redistributed to manifold-adjacent neighbors. More concretely, the area of each principal d-simplex s deleted during the K update is distributed to a manifold-adjacent d-simplex not in star(a_i, b_i). If no such neighbor exists and the ancestor of s is a principal simplex, the area a_s is distributed to that ancestor simplex. Otherwise, the manifold component of s is being squashed between two other manifold components, and a_s is discarded.

3.4 Generalized vertex split transformation
Constructing the PSC representation involves recording the information necessary to perform the inverse of each vunify_i. This inverse is the generalized vertex split gvspl_i, which splits a 0-simplex a_i to introduce an additional 0-simplex b_i. (As mentioned previously, renumbering of simplices implies b_i = v_{i+1}, so index b_i need not be stored explicitly.) Each gvspl_i record has the form

  gvspl_i(a_i, C^i_K, midp_i, (dv)^i, C^i_D, C^i_A)

and constructs model M^{i+1} from M^i by updating the tuple (K, V, D, A) as follows:

K: As illustrated in Figure 4, any simplex adjacent to a_i in K^i can be the vunify result of one of four configurations in K^{i+1}. To construct K^{i+1}, we therefore replace each ancestor simplex s in star(a_i) of K^i by either (1) s, (2) (s \ {a_i}) union {v_{i+1}}, (3) s and (s \ {a_i}) union {v_{i+1}}, or (4) s, (s \ {a_i}) union {v_{i+1}} and s union {v_{i+1}}. The choice is determined by a split code associated with s. These split codes are stored as a code string C^i_K, in which the simplices of star(a_i) are sorted first in order of increasing dimension, and then in order of increasing simplex id, as shown in Figure 5.
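The K-update of vunify (Section 3.3 above) is easy to sketch when K is stored as a set of frozensets of vertex ids. This is an illustration only, not the authors' implementation; geometry, materials, and areas from the full tuple (K, V, D, A) are omitted.

```python
def vunify(K, a, b):
    """Merge vertex b into vertex a: every simplex s containing b is
    replaced by its ancestor simplex (s - {b}) | {a}; duplicate ancestor
    simplices collapse automatically because K is a set."""
    K2 = set()
    for s in K:
        if b in s:
            s = (s - {b}) | {a}
        K2.add(frozenset(s))
    return K2

# Two triangles sharing edge {2,3}; unifying 4 into 1 collapses them:
K = {frozenset(x) for x in [(1,), (2,), (3,), (4,), (1, 2), (1, 3), (2, 3),
                            (2, 4), (3, 4), (1, 2, 3), (2, 3, 4)]}
K = vunify(K, 1, 4)
print(sorted(tuple(sorted(s)) for s in K))
# both triangles become {1,2,3}; edges {2,4},{3,4} become {1,2},{1,3}
```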
V: The new vertex is assigned position v^{i+1}_{i+1} = v^i_{a_i} + (dv)^i. The other vertex is given position v^{i+1}_{a_i} = v^i_{a_i} - (dv)^i if the boolean parameter midp_i is true; otherwise its position remains unchanged.

D: The string C^i_D is used to assign materials d_s for each new principal simplex. Simplices in C^i_D, as well as in C^i_A below, are sorted by simplex dimension and simplex id as in C^i_K.

A: During reconstruction, we are only interested in the areas a_s for s in P01(K). The string C^i_A tracks changes in these areas.

Figure 4: Effects of split codes on simplices of various dimensions.
Figure 5: Example of split code encoding.

3.5 Properties
Levels of detail. A graphics application can efficiently transition between models M^1 ... M^n at runtime by performing a sequence of vunify or gvspl transformations. Our current research prototype was not designed for efficiency; it attains simplification rates of about 6000 vunify/sec and refinement rates of about 5000 gvspl/sec. We expect that a careful redesign using more efficient data structures would significantly improve these rates.

Geomorphs. As in the PM representation, there exists a correspondence between the vertices of the models M^1 ... M^n. Given a coarser model M^c and a finer model M^f, 1 <= c <= f <= n, each vertex v_j in K^f corresponds to a unique ancestor vertex a(v_j) in K^c, found by recursively traversing the ancestor simplex relations:

  a(v_j) = v_j if j <= c, and a(v_j) = a(a_{j-1}) if j > c

This correspondence allows the creation of a smooth visual transition (geomorph) M^G(alpha) such that M^G(1) equals M^f and M^G(0) looks identical to M^c. The geomorph is defined as the model

  M^G(alpha) = (K^f, V^G(alpha), D^f, A^G(alpha))

in which each vertex position is interpolated between its original position in V^f and the position of its ancestor in V^c:

  v^G_j(alpha) = alpha * v^f_j + (1 - alpha) * v^c_{a(v_j)}

However, we must account for the special rendering of principal simplices of dimension 0 and 1 (Section 3.1). For each simplex s in P01(K^f), we interpolate its area using

  a^G_s(alpha) = alpha * a^f_s + (1 - alpha) * a^c_s

where a^c_s = 0 if s is not in P01(K^c). In addition, we render each simplex s in P01(K^c) but not in P01(K^f) using area a^G_s(alpha) = (1 - alpha) * a^c_s. The resulting geomorph is visually smooth even as principal simplices are introduced, removed, or change dimension. The accompanying video demonstrates a sequence of such geomorphs.
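The geomorph interpolation above amounts to a per-vertex linear blend. A small sketch (the `ancestor` map stands in for the recursively defined ancestor function; storing positions as numpy arrays is an assumption):

```python
import numpy as np

def geomorph(V_f, V_c, ancestor, alpha):
    """Return interpolated vertex positions
    v_j(alpha) = alpha * v^f_j + (1 - alpha) * v^c_{ancestor[j]}."""
    anc = np.array([ancestor[j] for j in range(len(V_f))])
    return alpha * V_f + (1.0 - alpha) * V_c[anc]

# alpha = 1 reproduces the fine model; alpha = 0 matches the coarse one.
```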
code 2for 2-simplex id 2,which in turn implies that 1-simplex id 4cannot have code 1.These constraints,illustrated in the “scoreboard”of Figure 6,can be summarized using the following two rules:(1)If a simplex has split code c12,all of its parents havesplit code c .(2)If a simplex has split code 3,none of its parents have splitcode 4.As we encode split codes in C K i left to right,we apply these two rules (and their contrapositives)transitively to constrain the possible outcomes for split codes yet to be ing arithmetic coding with uniform outcome probabilities,these constraints reduce the code string length in Figure 6from 15bits to 102bits.In our models,the constraints reduce the code string from 30bits to 14bits on average.The code string is further reduced using a non-uniform probability model.We create an array T [0dim ][015]of encoding tables,indexed by simplex dimension (0..dim)and by the set of possible (constrained)split codes (a 4-bit mask).For each simplex s ,we encode its split code c using the probability distribution found in T [s dim ][s codes mask ].For 2-dimensional models,only 10of the 48tables are non-trivial,and each table contains at most 4probabilities,so the total size of the probability model is small.These encoding tables reduce the code strings to approximately 8bits as shown in Table 1.By comparison,the PM representation requires approximately 5bits for the same information,but of course it disallows topological changes.To provide more intuition for the efficiency of the PSC repre-sentation,we note that capturing the connectivity of an average 2-manifold simplicial complex (n vertices,3n edges,and 2n trian-gles)requires ni =1(log 2i +8)n (log 2n +7)bits with PSC encoding,versus n (12log 2n +95)bits with a traditional one-way incidence graph representation.For improved compression,it would be best to use a hybrid PM +PSC representation,in which the more concise PM vertex split encoding is used when the local neighborhood is an orientableFigure 6:Constraints on the split codes for the simplices in the example of Figure 5.Table 1:Compression results and construction times.Object#verts Space required (bits/n )Trad.Con.n K V D Arepr.time a i C K i midp i (v )i C D i C Ai bits/n hrs.drumset 34,79412.28.20.928.1 4.10.453.9146.1 4.3destroyer 83,79913.38.30.723.1 2.10.347.8154.114.1chandelier 36,62712.47.60.828.6 3.40.853.6143.6 3.6schooner 119,73413.48.60.727.2 2.5 1.353.7148.722.2sandal 4,6289.28.00.733.4 1.50.052.8123.20.4castle 15,08211.0 1.20.630.70.0-43.5-0.5cessna 6,7959.67.60.632.2 2.50.152.6132.10.5harley 28,84711.97.90.930.5 1.40.453.0135.7 3.52-dimensional manifold (this occurs on average 93%of the time in our examples).To compress C D i ,we predict the material for each new principalsimplex sstar(a i )star(b i )K i +1by constructing an ordered set D s of materials found in star(a i )K i .To improve the coding model,the first materials in D s are those of principal simplices in star(s )K i where s is the ancestor of s ;the remainingmaterials in star(a i )K i are appended to D s .The entry in C D i associated with s is the index of its material in D s ,encoded arithmetically.If the material of s is not present in D s ,it is specified explicitly as a global index in D .We encode C A i by specifying the area a s for each new principalsimplex s 01(star(a i )star(b i ))K i +1.To account for this redistribution of area,we identify the principal simplex from which s receives its area by specifying its index in 01(star(a i ))K i .The column labeled in Table 1sums the 
bits of each field of the gvspl records. Multiplying by the number n of vertices in M gives the total number of bits for the PSC representation of the model (e.g. 500 KB for the destroyer). By way of comparison, the next column shows the number of bits per vertex required in a traditional "IndexedFaceSet" representation, with quantization of 16 bits per coordinate and arithmetic coding of face materials (3n*16 + 2n*3*log2(n) + materials).

4 PSC CONSTRUCTION
In this section, we describe a scheme for iteratively choosing pairs of vertices to unify, in order to construct a PSC representation. Our algorithm, a generalization of [13], is time-intensive, seeking high quality approximations. It should be emphasized that many quality metrics are possible. For instance, the quadric error metric recently introduced by Garland and Heckbert [9] provides a different trade-off of execution speed and visual quality. As in [13,20], we first compute a cost E for each candidate vunify transformation, and enter the candidates into a priority queue ordered by ascending cost. Then, in each iteration i = n-1, ..., 1, we perform the vunify at the front of the queue and update the costs of affected candidates.
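The queue-driven loop just described can be sketched as follows. The lazy-invalidation idiom (version counters that mark stale heap entries) is a common implementation choice assumed here, not prescribed by the paper; `cost`, `apply_vunify`, and `affected` are hypothetical callbacks.

```python
import heapq

def simplify(pairs, cost, apply_vunify, affected, n_steps):
    """Greedily perform n_steps vertex unifications in ascending cost order,
    re-costing neighbouring candidates after each step."""
    version = {}
    heap = []
    for p in pairs:
        version[p] = 0
        heapq.heappush(heap, (cost(p), 0, p))
    for _ in range(n_steps):
        while heap:
            c, ver, p = heapq.heappop(heap)
            if version.get(p, -1) == ver:   # skip stale entries
                break
        else:
            return                          # queue exhausted
        apply_vunify(p)                     # perform vunify, update model
        del version[p]
        for q in affected(p):               # re-cost affected candidates
            version[q] = version.get(q, -1) + 1
            heapq.heappush(heap, (cost(q), version[q], q))
```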
identifiers.The energy E disc is defined as the sum of squared distances from a set X disc of points sampled from sharp simplices to the discontinuity components from which they were sampled.Minimization of E disc therefore preserves the geom-etry of material boundaries,normal discontinuities(creases),and triangulation boundaries(including boundary curves of a surface and endpoints of a curve).We have found it useful to introduce a term E area that penalizes surface stretching(a more sophisticated version of the regularizing E spring term of[13]).Let A i+1N be the sum of triangle areas in the neighborhood star(a i)star(b i)K i+1,and A i N the sum of triangle areas in star(a i)K i.The mean squared displacement over the neighborhood N due to the change in area can be approx-imated as disp2=12(A i+1NA iN)2.We let E area=X N disp2,where X N is the number of points X projecting in the neighborhood. To prevent model self-intersections,the last term E fold penalizes surface folding.We compute the rotation of each oriented triangle in the neighborhood due to the vertex unification(as in[10,20]).If any rotation exceeds a threshold angle value,we set E fold to a large constant.Unlike[13],we do not optimize over the vertex position i a, but simply evaluate E for i a i+1a i+1b(i+1a+i+1b)2and choose the best one.This speeds up the optimization,improves model compression,and allows us to introduce non-quadratic energy terms like E area.5RESULTSTable1gives quantitative results for the examples in thefigures and in the video.Simplification times for our prototype are measured on an SGI Indigo2Extreme(150MHz R4400).Although these times may appear prohibitive,PSC construction is an off-line task that only needs to be performed once per model.Figure9highlights some of the benefits of the PSC representa-tion.The pearls in the chandelier model are initially disconnected tetrahedra;these tetrahedra merge and collapse into1-d curves in lower-complexity approximations.Similarly,the numerous polyg-onal ropes in the schooner model are simplified into curves which can be rendered as line segments.The straps of the sandal model initially have some thickness;the top and bottom sides of these straps merge in the simplification.Also note the disappearance of the holes on the sandal straps.The castle example demonstrates that the original model need not be a mesh;here M is a1-dimensional non-manifold obtained by extracting edges from an image.6RELATED WORKThere are numerous schemes for representing and simplifying tri-angulations in computer graphics.A common special case is that of subdivided2-manifolds(meshes).Garland and Heckbert[12] provide a recent survey of mesh simplification techniques.Several methods simplify a given model through a sequence of edge col-lapse transformations[10,13,14,20].With the exception of[20], these methods constrain edge collapses to preserve the topological type of the model(e.g.disallow the collapse of a tetrahedron into a triangle).Our work is closely related to several schemes that generalize the notion of edge collapse to that of vertex unification,whereby separate connected components of the model are allowed to merge and triangles may be collapsed into lower dimensional simplices. Rossignac and Borrel[21]overlay a uniform cubical lattice on the object,and merge together vertices that lie in the same cubes. 
Schaufler and Stürzlinger [22] develop a similar scheme in which vertices are merged using a hierarchical clustering algorithm. Luebke [18] introduces a scheme for locally adapting the complexity of a scene at runtime using a clustering octree. In these schemes, the approximating models correspond to simplicial complexes that would result from a set of vunify transformations (Section 3.3). Our approach differs in that we order the vunify in a carefully optimized sequence. More importantly, we define not only a simplification process, but also a new representation for the model using an encoding of gvspl = vunify^{-1} transformations.

Recent, independent work by Schmalstieg and Schaufler [23] develops a similar strategy of encoding a model using a sequence of vertex split transformations. Their scheme differs in that it tracks only triangles, and therefore requires regular, 2-dimensional triangulations. Hence, it does not allow lower-dimensional simplices in the model approximations, and does not generalize to higher dimensions.

Some simplification schemes make use of an intermediate volumetric representation to allow topological changes to the model. He et al. [11] convert a mesh into a binary inside/outside function discretized on a three-dimensional grid, low-pass filter this function,

Efficient RANSAC for Point-Cloud Shape Detection

Ruwen Schnabel, Roland Wahl, Reinhard Klein (Universität Bonn, Computer Graphics Group)

Abstract
In this work we present an automatic algorithm to detect basic shapes in unorganized point clouds. The algorithm decomposes the point cloud into a concise, hybrid structure of inherent shapes and a set of remaining points. Each detected shape serves as a proxy for a set of corresponding points. Our method is based on random sampling and detects planes, spheres, cylinders, cones and tori. For models with surfaces composed of these basic shapes only, e.g. CAD models, we automatically obtain a representation solely consisting of shape proxies. We demonstrate that the algorithm is robust even in the presence of many outliers and a high degree of noise. The proposed method scales well with respect to the size of the input point cloud and the number and size of the shapes within the data. Even point sets with several millions of samples are robustly decomposed within less than a minute. Moreover the algorithm is conceptually simple and easy to implement. Application areas include measurement of physical parameters, scan registration, surface compression, hybrid rendering, shape classification, meshing, simplification, approximation and reverse engineering.

Categories and Subject Descriptors (according to ACM CCS): I.4.8 [Image Processing and Computer Vision]: Scene Analysis - Shape; Surface Fitting; I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling - Curve, surface, solid, and object representations

1. Introduction
Due to the increasing size and complexity of geometric data sets there is an ever-growing demand for concise and meaningful abstractions of this data. Especially when dealing with digitized geometry, e.g. acquired with a laser scanner, no handles for modification of the data are available to the user other than the digitized points themselves. However, in order to be able to make use of the data effectively, the raw digitized data has to be enriched with abstractions and possibly semantic information, providing the user with higher-level interaction possibilities. Only such handles can provide the interaction required for involved editing processes, such as deleting, moving or resizing certain parts, and hence can make the data more readily usable for modeling purposes. Of course, traditional reverse engineering approaches can provide some of the abstractions that we seek, but usually reverse engineering focuses on finding a reconstruction of the underlying geometry and typically involves quite tedious user interaction. This is not justified in a setting where a complete and detailed reconstruction is not required at all, or shall take place only after some basic editing operations have been applied to the data. On the other hand, detecting instances of a set of primitive geometric shapes in the point sampled data is a means to quickly derive higher levels of abstraction. For example in Fig. 1 patches of primitive shapes provide a coarse approximation of the geometry that could be used to compress the point-cloud very effectively.

Figure 1: (a) Original, (b) Approximation. The 372 detected shapes in the choir screen define a coarse approximation of the surface.

Another problem arising when dealing with digitized geometry is the often huge size of the datasets. Therefore the efficiency of algorithms inferring abstractions of the data is of utmost importance, especially in interactive settings. Thus, in this paper we focus especially on finding an efficient algorithm for point-cloud shape detection, in order to be able to deal even with large point-clouds. Our work is a high performance RANSAC [FB81] algorithm that is capable to extract a variety of different types of primitive shapes, while retaining such favorable properties of the RANSAC paradigm as robustness, generality and simplicity. At the heart of our algorithm are a novel, hierarchically structured sampling strategy for candidate shape generation as well as a novel, lazy cost function evaluation scheme, which significantly reduces overall computational cost. Our method detects planes, spheres, cylinders, cones and tori, but additional primitives are possible. The goal of our algorithm is to reliably extract these shapes from the data, even under adverse conditions such as heavy noise.

As has been indicated above, our method is especially well suited in situations where geometric data is automatically acquired and users refrain from applying surface reconstruction methods, either due to the data's low quality or due to processing time constraints. Such constraints are typical for areas where high level model interaction is required, as is the case when measuring physical parameters or in interactive, semi-automatic segmentation and postprocessing. Further applications are, for instance, registering many scans of an object, where detecting corresponding primitive shapes in multiple scans can provide good initial matches. High compression rates for point clouds can be achieved if primitive shapes are used to represent a large number of points with a small set of parameters. Other areas that can benefit from primitive shape information include hybrid rendering and shape classification. Additionally, a fast shape extraction method as ours can serve as building block in applications such as meshing, simplification, approximation and reverse engineering, and bears the potential of significant speed up.

2. Previous work
The detection of primitive shapes is a common problem encountered in many areas of geometry related computer science. Over the years a vast number of methods have been proposed which cannot all be discussed here in depth. Instead, here we give a short overview of some of the most important algorithms developed in the different fields. We treat the previous work on RANSAC algorithms separately in section 2.1 as it is of special relevance to our work.
Thus,in this paper we focus especially onfinding an effi-cient algorithm for point-cloud shape detection,in order to be able to deal even with large point-clouds.Our work is a high performance RANSAC[FB81]algorithm that is capa-ble to extract a variety of different types of primitive shapes, while retaining such favorable properties of the RANSAC paradigm as robustness,generality and simplicity.At the heart of our algorithm are a novel,hierarchically structured sampling strategy for candidate shape generation as well as a novel,lazy cost function evaluation scheme,which signif-c The Eurographics Association and Blackwell Publishing2007.Published by Blackwell Publishing,9600Garsington Road,Oxford OX42DQ,UK and350Main Street,Malden, MA02148,USA.(a)Original(b)ApproximationFigure1:The372detected shapes in the choir screen define a coarse approximation of the surface.icantly reduces overall computational cost.Our method de-tects planes,spheres,cylinders,cones and tori,but additional primitives are possible.The goal of our algorithm is to reli-ably extract these shapes from the data,even under adverse conditions such as heavy noise.As has been indicated above,our method is especially well suited in situations where geometric data is automatically acquired and users refrain from applying surface reconstruc-tion methods,either due to the data’s low quality or due to processing time constraints.Such constraints are typical for areas where high level model interaction is required,as is the case when measuring physical parameters or in interactive, semi-automatic segmentation and postprocessing.Further applications are,for instance,registering many scans of an object,where detecting corresponding primitive shapes in multiple scans can provide good initial matches.High compression rates for point clouds can be achieved if prim-itive shapes are used to represent a large number of points with a small set of parameters.Other areas that can benefit from primitive shape information include hybrid rendering and shape classification.Additionally,a fast shape extraction method as ours can serve as building block in applications such as meshing,simplification,approximation and reverse engineering and bears the potential of significant speed up.2.Previous workThe detection of primitive shapes is a common problem en-countered in many areas of geometry related computer sci-ence.Over the years a vast number of methods have been proposed which cannot all be discussed here in depth.In-stead,here we give a short overview of some of the most important algorithms developed in the differentfields.We treat the previous work on RANSAC algorithms separately in section2.1as it is of special relevance to our work. 
Vision In computer vision,the two most widely known methodologies for shape extraction are the RANSAC paradigm[FB81]and the Hough transform[Hou62].Both have been proven to successfully detect shapes in2D as well as3D.RANSAC and the Hough transform are reliable even in the presence of a high proportion of outliers,but lack of efficiency or high memory consumption remains their ma-jor drawback[IK88].For both schemes,many acceleration techniques have been proposed,but no one on its own,or combinations thereof,have been shown to be able to provide an algorithm as efficient as ours for the3D primitive shape extraction problem.The Hough transform maps,for a given type of parameter-ized primitive,every point in the data to a manifold in the pa-rameter space.The manifold describes all possible variants of the primitive that contain the original point,i.e.in practice each point casts votes for many cells in a discretized param-eter space.Shapes are extracted by selecting those parame-ter vectors that have received a significant amount of votes. If the parameter space is discretized naively using a simple grid,the memory requirements quickly become prohibitive even for primitives with a moderate number of parameters, such as,for instance,cones.Although several methods have been suggested to alleviate this problem[IK87][XO93]its major application area remains the2D domain where the number of parameters typically is quite small.A notable ex-ception is[VGSR04]where the Hough transform is used to detect planes in3D datasets,as3D planes still have only a small number of parameters.They also propose a two-step procedure for the Hough based detection of cylinders that uses estimated normals in the data points.In the vision community many approaches have been pro-posed for segmentation of range images with primitive shapes.When working on range images these algorithms usually efficiently exploit the implicitly given connectiv-ity information of the image grid in some kind of region growing or region merging step[FEF97][GBS03].This is a fundamental difference to our case,where we are given only an unstructured cloud of points that lacks any explicit connectivity information.In[LGB95]and[LJS97]shapes are found by concurrently growing different seed primi-tives from which a suitable subset is selected according to an MDL criterion(coined the recover-and-select paradigm). [GBS03]detect shapes using a genetic algorithm to optimize a robust MSACfitness function(see also sec.2.1).[MLM01]c The Eurographics Association and Blackwell Publishing2007.introduce involved non-linearfitting functions for primitive shapes that are able to handle geometric degeneracy in the context of recover-and-select segmentation.Another robust method frequently employed in the vision community is the tensor voting framework[MLT00]which has been applied to successfully reconstruct surface geome-try from extremely cluttered scenes.While tensor voting can compete with RANSAC in terms of robustness,it is,how-ever,inherently model-free and therefore cannot be applied to the detection of predefined types of primitive shapes. 
Reverse engineering In reverse engineering,surface re-covery techniques are usually based on either a separate seg-mentation step or on a variety of region growing algorithms [VMC97][SB95][BGV∗02].Most methods call for some kind of connectivity information and are not well equipped to deal with a large amount of outliers[VMC97].Also these approaches try tofind a shape proxy for every part of the pro-cessed surface with the intent of loading the reconstructed geometry information into a CAD application.[BMV01]de-scribe a system which reconstructs a boundary representa-tion that can be imported into a CAD application from an unorganized point-cloud.However,their method is based on finding a triangulation for the point-set,whereas the method presented in this work is able to operate directly on the input points.This is advantageous as computing a suitable tessela-tion may be extremely costly and becomes very intricate or even ill-defined when there is heavy noise in the data.We do not,however,intend to present a method implementing all stages of a typical reverse engineering process.Graphics In computer graphics,[CSAD04]have recently proposed a general variational framework for approximation of surfaces by planes,which was extended to a set of more elaborate shape proxies by[WK05].Their aim is not only to extract certain shapes in the data,but tofind a globally optimal representation of the object by a given number of primitives.However,these methods require connectivity in-formation and are,due to their exclusive use of least squares fitting,susceptible to errors induced by outliers.Also,the optimization procedure is computationally expensive,which makes the method less suitable for large data sets.The out-put of our algorithm,however,could be used to initialize the set of shape proxies used by these methods,potentially accelerating the convergence of the optimization procedure. While the Hough transform and the RANSAC paradigm have been mainly used in computer vision some applica-tions have also been proposed in the computer graphics com-munity.[DDSD03]employ the Hough transform to identify planes for billboard clouds for triangle data.They propose an extension of the standard Hough transform to include a compactness criterion,but due to the high computational de-mand of the Hough transform,the method exhibits poor run-time performance on large or complex geometry.[WGK05] proposed a RANSAC-based plane detection method for hy-brid rendering of point clouds.To facilitate an efficient plane detection,planes are detected only in the cells of a hier-archical space decomposition and therefore what is essen-tially one plane on the surface is approximated by several planar patches.While this is acceptable for their hybrid ren-dering technique,our methodfinds maximal surface patches in order to yield a more concise representation of the ob-ject.Moreover,higher order primitives are not considered in their approach.[GG04]detect so-called slippable shapes which is a superset of the shapes recognized by our method. 
They use the eigenvalues of a symmetric matrix derived from the points and their normals to determine the slippability of a point-set. Their detection is a bottom-up approach that merges small initial slippable surfaces to obtain a global decomposition of the model. However, the computation of the eigenvalues is costly for large models, the method is sensitive to noise, and it is hard to determine the correct size of the initial surface patches. A related approach is taken by [HOP*05]. They also use the eigenvalues of a matrix derived from line element geometry to classify surfaces. A RANSAC based segmentation algorithm is employed to detect several shapes in a point-cloud. The method is aimed mainly at models containing small numbers of points and shapes, as no optimizations or extensions to the general RANSAC framework are adopted.

2.1. RANSAC
The RANSAC paradigm extracts shapes by randomly drawing minimal sets from the point data and constructing corresponding shape primitives. A minimal set is the smallest number of points required to uniquely define a given type of geometric primitive. The resulting candidate shapes are tested against all points in the data to determine how many of the points are well approximated by the primitive (called the score of the shape). After a given number of trials, the shape which approximates the most points is extracted and the algorithm continues on the remaining data. RANSAC exhibits the following, desirable properties:

- It is conceptually simple, which makes it easily extensible and straightforward to implement
- It is very general, allowing its application in a wide range of settings
- It can robustly deal with data containing more than 50% of outliers [RL93]

Its major deficiency is the considerable computational demand if no further optimizations are applied. [BF81] apply RANSAC to extract cylinders from range data; [CG01] use RANSAC and the gaussian image to find cylinders in 3D point clouds. Both methods, though, do not consider a larger number of different classes of shape primitives. [RL93] describe an algorithm that uses RANSAC to detect a set of different types of simple shapes. However, their method was adjusted to work in the image domain or
Nonetheless the integration of a MLESAC scoring function is among the directions of our future work.[Nis05]pro-poses an acceleration technique for the case that the num-ber of candidates isfixed in advance.As it is a fundamen-tal property of our setup that an unknown large number of possibly very small shapes has to be detected in huge point-clouds,the amount of necessary candidates cannot,however, be specified in advance.3.OverviewGiven a point-cloud P={p1,...,p N}with associated nor-mals{n1,...,n N}the output of our algorithm is a set of primitive shapesΨ={ψ1,...,ψn}with corresponding dis-joint sets of points Pψ1⊂P,...,Pψn⊂P and a set of re-maining points R=P\{Pψ1,...,Pψn}.Similar to[RL93]and[DDSD03],we frame the shape extraction problem as an optimization problem defined by a score function.The overall structure of our method is outlined in pseudo-code in algorithm1.In each iteration of the algorithm,the prim-itive with maximal score is searched using the RANSAC paradigm.New shape candidates are generated by randomly sampling minimal subsets of P using our novel sampling strategy(see sec.4.3).Candidates of all considered shape types are generated for every minimal set and all candidates are collected in the set C.Thus no special ordering has to be imposed on the detection of different types of shapes.After new candidates have been generated the one with the high-est score m is computed employing the efficient lazy score evaluation scheme presented in sec.4.5.The best candidate is only accepted if,given the size|m|(in number of points) of the candidate and the number of drawn candidates|C|, the probability P(|m|,|C|)that no better candidate was over-looked during sampling is high enough(see sec.4.2.1).We provide an analysis of our sampling strategy to derive a suit-able probability computation.If a candidate is accepted,the corresponding points P m are removed from P and the can-didates C m generated with points in P m are deleted from C. 
The algorithm terminates as soon as P(τ, |C|) for a user-defined minimal shape size τ is large enough. In our implementation we use a standard score function that counts the number of compatible points for a shape candidate [RL93] [GBS03]. The function has two free parameters: ε specifies the maximum distance of a compatible point, while α restricts the deviation of a point's normal from that of the shape. We also ensure that only points forming a connected component on the surface are considered (see Sec. 4.4).

Algorithm 1: Extract shapes in the point cloud P
    Ψ ← ∅   {extracted shapes}
    C ← ∅   {shape candidates}
    repeat
        C ← C ∪ newCandidates()   {see Sec. 4.1 and 4.3}
        m ← bestCandidate(C)      {see Sec. 4.4}
        if P(|m|, |C|) > pt then
            P ← P \ Pm            {remove points}
            Ψ ← Ψ ∪ m
            C ← C \ Cm            {remove invalid candidates}
        end if
    until P(τ, |C|) > pt
    return Ψ

4. Our method

4.1. Shape estimation

As mentioned above, the shapes we consider in this work are planes, spheres, cylinders, cones and tori, which have between three and seven parameters. Every 3D point sample pi fixes only one parameter of the shape. In order to reduce the number of required points we compute an approximate surface normal ni for each point [HDD∗92], so that the orientation gives us two more parameters per sample. That way it is possible to estimate each of the considered basic shapes from only one or two point samples. However, always using one additional sample is advantageous, because the surplus parameters can be used to immediately verify a candidate and thus eliminate the need of evaluating many relatively low-scored shapes [MC02].

Plane. For a plane, {p1, p2, p3} constitutes a minimal set when not taking into account the normals in the points. To confirm the plausibility of the generated plane, the deviation of the plane's normal from n1, n2, n3 is determined and the candidate plane is accepted only if all deviations are less than the predefined angle α.

Sphere. A sphere is fully defined by two points with corresponding normal vectors. We use the midpoint of the shortest line segment between the two lines given by the points p1 and p2 and their normals n1 and n2 to define the center of the sphere c. We take $r = \frac{\|p_1 - c\| + \|p_2 - c\|}{2}$ as the sphere radius. The sphere is accepted as a shape candidate only if all three points are within a distance of ε of the sphere and their normals do not deviate by more than α degrees.

Cylinder. To generate a cylinder from two points with normals we first establish the direction of the axis with a = n1 × n2. Then we project the two parametric lines p1 + tn1 and p2 + tn2 along the axis onto the a · x = 0 plane and take their intersection as the center c. We set the radius to the distance between c and p1 in that plane. Again the cylinder is verified by applying the thresholds ε and α to the distance and normal deviation of the samples.

Cone. Although the cone, too, is fully defined by two points with corresponding normals, for simplicity we use all three points and normals in its generation. To derive the position of the apex c, we intersect the three planes defined by the point and normal pairs. Then the normal of the plane defined by the three points $\{c + \frac{p_1 - c}{\|p_1 - c\|}, \dots, c + \frac{p_3 - c}{\|p_3 - c\|}\}$ gives the direction of the axis a. Now the opening angle ω is given as $\omega = \frac{\sum_i \arccos((p_i - c) \cdot a)}{3}$. Afterwards, similar to above, the cone is verified before becoming a candidate shape.
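Before turning to the torus, the sphere construction above is small enough to sketch directly. The following Python fragment is our illustration, not the paper's code: inputs are NumPy arrays, the function name is ours, and, unlike the paper, only the two defining samples are checked against the ε and α thresholds (the paper verifies with a third sample as well).

```python
import numpy as np

def sphere_from_two_oriented_points(p1, n1, p2, n2, eps=0.01, alpha_deg=10.0):
    """Sphere candidate from two points with normals: the center is the
    midpoint of the shortest segment between the lines p1+t*n1 and p2+s*n2."""
    w0 = p1 - p2
    a, b, c = n1 @ n1, n1 @ n2, n2 @ n2
    d, e = n1 @ w0, n2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-12:              # nearly parallel normals: degenerate
        return None
    t = (b * e - c * d) / denom         # parameter on line 1
    s = (a * e - b * d) / denom         # parameter on line 2
    center = 0.5 * ((p1 + t * n1) + (p2 + s * n2))
    radius = 0.5 * (np.linalg.norm(p1 - center) + np.linalg.norm(p2 - center))
    # Verify the candidate with the distance (eps) and normal (alpha) tests.
    for p, n in ((p1, n1), (p2, n2)):
        if abs(np.linalg.norm(p - center) - radius) > eps:
            return None
        radial = (p - center) / np.linalg.norm(p - center)
        if np.degrees(np.arccos(np.clip(abs(radial @ n), -1.0, 1.0))) > alpha_deg:
            return None
    return center, radius
```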
Torus. Just as in the case of the cone, we use one more point than theoretically necessary to ease the computations required for estimation, i.e. four point and normal pairs. The rotational axis of the torus is found as one of the up to two lines intersecting the four point-normal lines pi + λni [MLM01]. To choose between the two possible axes, a full torus is estimated for both choices and the one which causes the smaller error with respect to the four points is selected. To find the minor radius, the points are collected in a plane that is rotated around the axis. Then a circle is computed using three points in this plane. The major radius is given as the distance of the circle center to the axis.

4.2. Complexity

The complexity of RANSAC is dominated by two major factors: the number of minimal sets that are drawn and the cost of evaluating the score for every candidate shape. As we desire to extract the shape that achieves the highest possible score, the number of candidates that have to be considered is governed by the probability that the best possible shape is indeed detected, i.e. that a minimal set is drawn that defines this shape.

4.2.1. Probabilities

Consider a point cloud P of size N and a shape ψ therein consisting of n points. Let k denote the size of a minimal set required to define a shape candidate. If we assume that any k points of the shape will lead to an appropriate candidate shape, then the probability of detecting ψ in a single pass is:

$P(n) = \binom{n}{k} \Big/ \binom{N}{k} \approx \left(\frac{n}{N}\right)^k \quad (1)$

The probability of a successful detection P(n, s) after s candidates have been drawn equals the complementary of s consecutive failures:

$P(n, s) = 1 - (1 - P(n))^s \quad (2)$

Solving for s tells us the number of candidates T required to detect shapes of size n with a probability P(n, T) ≥ pt:

$T \geq \frac{\ln(1 - p_t)}{\ln(1 - P(n))} \quad (3)$

[Figure 2: A small cylinder that has been detected by our method. The shape consists of 1066 points and was detected among 341,587 points. That corresponds to a relative size of 1/3000.]

For small P(n) the logarithm in the denominator can be approximated by its Taylor series $\ln(1 - P(n)) = -P(n) + O(P(n)^2)$, so that:

$T \approx \frac{-\ln(1 - p_t)}{P(n)} \quad (4)$

Given the cost C of evaluating the cost function, the asymptotic complexity of the RANSAC approach is $O(TC) = O\!\left(\frac{1}{P(n)} C\right)$.

4.3. Sampling strategy

As can be seen from the last formula, the runtime complexity is directly linked to the success rate of finding good sample sets. Therefore we will now discuss in detail how sampling is performed.

4.3.1. Localized sampling

Since shapes are local phenomena, the a priori probability that two points belong to the same shape is higher the smaller the distance between the points. In our sampling strategy we want to exploit this fact to increase the probability of drawing minimal sets that belong to the same shape. [MTN∗02] have shown that non-uniform sampling based on locality leads to a significantly increased probability of selecting a set of inliers. From a ball of given radius around an initially unrestrainedly drawn sample, the remaining samples are picked to obtain a complete minimal set. This requires fixing a radius in advance, which they derive from a known (or assumed) outlier density and distribution. In our setup, however, outlier density and distribution vary strongly for different models and even within a single model, which renders a fixed radius inadequate. Also, in our case, using minimal sets with small diameter introduces unnecessary stability issues in the shape estimation procedure for shapes that could have been estimated from samples spread farther apart. Therefore we propose a novel sampling strategy that is able to adapt the diameter of the minimal sets to both outlier density and shape size.
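The counts given by Eqs. (1) and (3) are easy to reproduce. The helper below is ours, not the paper's; with n = 1066, N = 341547, k = 3 and p_t = 0.99 it returns about 1.5 × 10⁸ trials, in line with the uniform-sampling figure quoted in the example below.

```python
import math

def ransac_trials(n, N, k, p_t=0.99):
    """Number of candidates T needed to detect a shape of size n among N
    points with probability at least p_t, per Eqs. (1) and (3)."""
    P_n = (n / N) ** k                  # Eq. (1), binomial approximation
    if not 0.0 < P_n < 1.0:
        raise ValueError("n must satisfy 0 < n < N")
    return math.ceil(math.log(1.0 - p_t) / math.log(1.0 - P_n))   # Eq. (3)
```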
We use an octree to establish spatial proximity between samples very efficiently. When choosing points for a new candidate, we draw the first sample p1 without restrictions among all points. Then a cell C is randomly chosen from any level of the octree such that p1 is contained in C. The k − 1 other samples are then drawn only from within cell C. The effect of this sampling strategy can be expressed in a new probability Plocal(n) for finding a shape ψ of size n:

$P_{local}(n) = P(p_1 \in \psi)\, P(p_2 \dots p_k \in \psi \mid p_2 \dots p_k \in C) \quad (5)$

The first factor evaluates to n/N. The second factor obviously depends on the choice of C. C is well chosen if it contains mostly points belonging to ψ. The existence of such a cell is backed by the observation that for most points on a shape, except on edges and corners, there exists a neighborhood such that all of the points therein belong to that shape. Although in general it is not guaranteed that this neighborhood is captured in the cells of the octree, in the case of real-life data, shapes have to be sampled with an adequate density for reliable representation and, as a consequence, for all but very few points such a neighborhood will be at least as large as the smallest cells of the octree. For the sake of analysis, we assume that there exists a C for every pi ∈ ψ such that ψ will be supported by half of the points in C, which accounts for up to 50% local noise and outliers. We conservatively estimate the probability of finding a good C by 1/d, where d is the depth of the octree (in practice, a path of cells starting at the highest good cell down to a good leaf will be good as well). The conditional probability for p2, p3 ∈ ψ in the case of a good cell is then described by $\binom{|C|/2}{k-1} \big/ \binom{|C|}{k-1} \approx \left(\frac{1}{2}\right)^{k-1}$. Substituting yields:

$P_{local}(n) = \frac{n}{N} \cdot \frac{1}{d} \cdot \left(\frac{1}{2}\right)^{k-1} \quad (6)$

As large shapes can be estimated from large cells (and with high probability this will happen), the stability of the shape estimation is not affected by the sampling strategy.
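The essence of the scheme can be sketched with implicit grid cells standing in for the octree cells. This is our illustration only; a real implementation would reuse the octree structure rather than re-binning points per draw.

```python
import numpy as np

def localized_minimal_set(points, k, depth, rng=None):
    """Draw p1 anywhere, pick a random level, and draw the remaining k-1
    samples from the grid cell of that level containing p1."""
    rng = rng or np.random.default_rng()
    i1 = rng.integers(len(points))
    level = rng.integers(1, depth + 1)
    lo = points.min(axis=0)
    size = (points.max(axis=0) - lo).max() / (2 ** level)   # cell edge length
    cell = np.floor((points[i1] - lo) / size)
    in_cell = np.all(np.floor((points - lo) / size) == cell, axis=1)
    idx = np.flatnonzero(in_cell)
    if len(idx) < k:                    # cell too sparse for a minimal set
        return None
    rest = rng.choice(idx[idx != i1], size=k - 1, replace=False)
    return np.concatenate(([i1], rest))
```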
The impact of this sampling strategy is best illustrated with an example. The cylinder depicted in Figure 2 consists of 1066 points. At the time that it belongs to one of the largest shapes in the point-cloud, 341,547 points of the original 2 million still remain. Thus, it then comprises only three thousandths of the point-cloud. If an ordinary uniform sampling strategy were to be applied, 151,522,829 candidates would have to be drawn to achieve a detection probability of 99%. With our strategy, only 64,929 candidates have to be generated for the same probability. That is an improvement by three orders of magnitude, i.e. in this case the difference between hours and seconds.

4.3.1.1. Level weighting. Choosing C from a proper level is an important aspect of our sampling scheme. Therefore we can further improve the sampling efficiency by choosing C from a level according to a non-uniform distribution that reflects the likelihood of the respective level to contain a good cell. To this end, the probability Pl of choosing C from level l is first initialized with 1/d. Then, for every level l, we keep track of the sum σl of the scores achieved by the candidates generated from a cell on level l. After a given number of candidates has been tested, a new distribution for the levels is computed. The new probability P̂l of the level l is given as

$\hat{P}_l = x\,\frac{\sigma_l}{w\,P_l} + (1 - x)\,\frac{1}{d}, \qquad w = \sum_{i=1}^{d} \frac{\sigma_i}{P_i} \quad (7)$

We set x = 0.9 to ensure that at all times at least 10% of the samples are spread uniformly over the levels, to be able to detect when new levels start to become of greater importance as more and more points are removed from P.

4.3.2. Number of candidates

In Section 4.2 we gave a formula for the number of candidates necessary to detect a shape of size n with a given probability. However, in our case, the size n of the largest shape is not known in advance. Moreover, if the largest candidate has been generated early in the process, we should be able to detect this lucky case and extract the shape well before achieving a precomputed number of candidates, while on the other hand we should use additional candidates if it is still unsure that indeed the best candidate has been detected. Therefore, instead of fixing the number of candidates, we repeatedly analyze small numbers t of additional candidates and consider the best one ψm generated so far each time. As we want to achieve a low probability that a shape is extracted which is not the real maximum, we observe the probability P(|ψm|, s) with which we would have found another shape of the same size as ψm. Once this probability is higher than a threshold pt (we use 99%), we conclude that there is a low chance that we have overlooked a better candidate and extract ψm. The algorithm terminates as soon as P(τ, s) > pt.

4.4. Score

The score function σP is responsible for measuring the quality of a given shape candidate. We use the following aspects in our scoring function:

• To measure the support of a candidate, we use the number of points that fall within an ε-band around the shape.
• To ensure that the points inside the band roughly follow the curvature pattern of the given primitive, we only count those points inside the band whose normals do not deviate from the normal of the shape by more than a given angle α.
• Additionally, we incorporate a connectivity measure: among the points that fulfill the previous two conditions, only those are considered that constitute the largest connected component on the shape.
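The re-weighting of Eq. (7) amounts to a few array operations. The sketch below (ours, not the paper's code) makes the normalization explicit; since the weighted terms sum to x and the uniform terms to 1 − x, the updated values again form a probability distribution.

```python
import numpy as np

def update_level_probabilities(P, sigma, x=0.9):
    """Re-estimate the octree-level sampling distribution per Eq. (7).
    P[l] is the current probability of level l, sigma[l] the accumulated
    score of candidates generated from cells on level l."""
    d = len(P)
    w = np.sum(sigma / P)               # normalization: w = sum_i sigma_i / P_i
    return x * (sigma / P) / w + (1.0 - x) / d
```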

RANSAC Algorithm Formulas

RANSAC (Random Sample Consensus) is an iterative algorithm for fitting a model while rejecting outliers. It is suited to data sets that contain noise and anomalous values.

The basic steps of RANSAC are:

1. Randomly select a minimal sample (usually the smallest number of points that determines the model parameters) from the data set as inliers, and mark the remaining points as outliers.
2. Fit the model to the selected inliers.
3. Compute the distance from every data point to the model and mark the points whose distance is below a given threshold as tentative inliers.
4. If the number of tentative inliers exceeds a preset value, accept the current model and refit it using all of these tentative inliers.
5. Repeat the steps above for a fixed number of iterations and select the best-fitting model as the final model.

In pseudo-code:

Input:
- data set D = {x_1, x_2, ..., x_N}, where x_i is the i-th data point
- minimal sample size k (the number of model parameters)
- maximum number of iterations max_iterations
- inlier-count threshold min_inliers
- residual threshold threshold

Output: the fitted model parameters

1.  best_model = null
2.  best_score = 0
3.  for iterations = 1 to max_iterations:
4.      random_sample = randomly select k samples from D
5.      maybe_model = fit_model_to_samples(random_sample)
6.      consensus_set = empty set
7.      for each data_point in D:
8.          if distance_between(data_point, maybe_model) < threshold:
9.              add data_point to consensus_set
10.     if size of consensus_set > min_inliers:
11.         definitely_model = fit_model_to_samples(consensus_set)
12.         score = evaluate_model(definitely_model)
13.         if score > best_score:
14.             best_model = definitely_model
15.             best_score = score
16. return best_model

Here fit_model_to_samples() fits the model to a set of points, distance_between() computes the distance from a data point to the model, and evaluate_model() scores the quality of a model's fit.
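The pseudo-code maps directly onto a few lines of Python. The sketch below is illustrative only: the fit and distance callables are placeholders supplied by the caller, and the line-fitting helpers show one possible instantiation.

```python
import numpy as np

def ransac(D, fit, distance, k, max_iterations=1000,
           min_inliers=10, threshold=0.1, rng=None):
    """Generic RANSAC loop mirroring the pseudo-code above."""
    rng = rng or np.random.default_rng()
    best_model, best_score = None, 0
    for _ in range(max_iterations):
        sample = D[rng.choice(len(D), size=k, replace=False)]
        maybe_model = fit(sample)
        consensus = D[distance(D, maybe_model) < threshold]
        if len(consensus) > min_inliers:
            model = fit(consensus)          # refit on all tentative inliers
            score = np.sum(distance(D, model) < threshold)
            if score > best_score:
                best_model, best_score = model, score
    return best_model

# Example instantiation: robust 2-D line fit y = a*x + b.
def fit_line(pts):
    a, b = np.polyfit(pts[:, 0], pts[:, 1], 1)
    return np.array([a, b])

def line_distance(pts, m):
    a, b = m                                # point-to-line distance for
    return np.abs(a * pts[:, 0] - pts[:, 1] + b) / np.hypot(a, 1.0)
```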

3D Reconstruction of Single Wood Skeleton Based on Laser Point Cloud Data

Forest Engineering, Vol. 40, No. 1, Jan. 2024. doi: 10.3969/j.issn.1006-8023.2024.01.015

3D Reconstruction of Single-Tree Skeletons Based on Laser Point Cloud Data

ZHAO Yonghui, LIU Xueyan, LYU Yong, WAN Xiaoyu, DOU Huyuan, LIU Shuyu* (College of Computer and Control Engineering, Northeast Forestry University, Harbin 150040, China)

Abstract: In response to the slow processing speed and low reconstruction accuracy encountered during the 3D reconstruction of trees, a method for 3D reconstruction of single-tree skeletons using laser point cloud data is proposed. Firstly, a combination filtering method is determined based on the point cloud data type to remove outliers and ground points. Secondly, a hybrid registration algorithm based on ISS (Intrinsic Shape Signatures) and CPD (Coherent Point Drift), called IS-CPD, is employed to obtain complete point cloud data for individual trees. Finally, a method combining Laplace contraction of the point set and topological refinement is used to extract the skeleton, and branch models are constructed from cylinders to achieve 3D skeleton reconstruction. Experimental results show that, compared with the traditional CPD algorithm, the proposed registration scheme improves accuracy and execution speed by 50% and 95.8% respectively, and the final reconstruction error does not exceed 2.48%. The results demonstrate that the 3D skeleton of an individual tree can be reconstructed effectively, with results close to the original tree, providing a reference for building digital-twin environments for forest trees and for forestry resource management.

Keywords: LiDAR; tree point cloud; key point extraction; tree skeleton; geometry model

Received 2023-02-10; supported by the National Natural Science Foundation of China (31700643). First author: ZHAO Yonghui, M.Sc., engineer, working on the Internet of Things and artificial intelligence. Corresponding author: LIU Shuyu, M.Sc., lecturer, working on communication and signal processing.

Citation: ZHAO Y H, LIU X Y, LYU Y, et al. 3D reconstruction of single wood skeleton based on laser point cloud data[J]. Forest Engineering, 2024, 40(1): 128-134.

0. Introduction

LiDAR can acquire dense point clouds of a target and is an important means of realizing autonomous driving and 3D reconstruction. With airborne or terrestrial LiDAR, quantitative information such as tree height, diameter at breast height (DBH) and canopy structure can be obtained for 3D tree reconstruction, providing a basis for inferring the ecological structure parameters of trees and for carbon-stock inversion, and supplying data for forestry digital twins. Mainstream point cloud denoising methods fall into three categories: density-based, clustering-based and statistics-based [1]. Separating ground from non-ground points is the first step of point cloud processing, and many algorithms have been proposed for it; even state-of-the-art filtering algorithms, however, require many complex parameters. Zhang et al. [2] proposed the novel Cloth Simulation Filter (CSF), which needs only a few parameters, but it is very sensitive to point cloud noise. For point cloud registration, the classical method is the Iterative Closest Point (ICP) algorithm of Besl et al. [3], which easily falls into local optima, limiting its application. Many researchers have therefore adopted probabilistic registration, typified by Coherent Point Drift (CPD) [4-5], which suffers from long run times and high computational complexity. Shi et al. [6] combined curvature features with CPD to obtain a fast registration method whose speed is greatly improved at the cost of detail accuracy. Lu et al. [7], Xia [8] and Shi et al. [9] studied point cloud registration based on key-point feature matching in depth. 3D tree geometric reconstruction has evolved from traditional rule-, sketch- and image-based reconstruction to LiDAR-based construction of topologically correct tree geometry. Zhai et al. [10] reconstructed trees from point clouds, but, constrained by the LiDAR field of view, canopy information was hard to obtain and only the trunk was reconstructed. Lin et al. [11] and You et al. [12] studied skeleton extraction from point clouds and built the geometry and topology of trees, but the reconstructed models are not sufficiently realistic. Cao et al. [13] used Laplacian-based modeling to extract the geometric information of the main branches with correct topological connections, retaining some fine branches. Cao et al. [14] surveyed the development and prospects of point cloud tree modeling, but research that combines point cloud skeleton extraction with reconstruction remains limited. This paper proposes a skeleton-based method for accurately reconstructing a 3D model from the point cloud data of a single tree. The raw point cloud is preprocessed with a combined filter built from the CSF algorithm and
a Kd-Tree nearest-neighbor search, which extracts accurate single-tree data. A hybrid registration algorithm based on tree feature points (IS-CPD, Intrinsic Shape-Coherent Point Drift) markedly improves registration efficiency. Finally, skeleton points of the single tree are extracted, their connectivity is constructed, and cylinders are fitted to the branches, completing the 3D modeling of the tree.

1. Data acquisition and preprocessing

1.1 Data acquisition

The data were collected from a ginkgo tree, about 8.5 m tall and about 20 years old, in the botanical garden of Kuiwen District, Weifang, Shandong Province. A RoboSense LiDAR was used to collect point cloud data from two different angles, with the scanner 1.5 m high and about 10 m from the tree horizontally. Two sets of point clouds were acquired, from due east and due north of the tree, as shown in Fig. 1.

[Fig. 1: Initial scan results of the two point cloud sets; (a) angle 1 (due east), (b) angle 2 (due north).]

1.2 Point cloud preprocessing

To improve the accuracy and efficiency of subsequent processing, the data must be preprocessed. First, the CSF filtering algorithm removes the redundant ground background; the algorithm has few parameters and separates quickly. By draping a simulated cloth under gravity to obtain a physical representation of the terrain, the single-tree point cloud can be separated out. Because of the scanning environment and LiDAR hardware errors, outliers may appear, so a Kd-Tree algorithm is used to denoise the extracted point cloud and improve the accuracy of the single-tree data for the algorithms that follow. For each point $p_i(x_i, y_i, z_i)$ of the point cloud to be filtered, its spatial neighbors $p_j(x_j, y_j, z_j)$ are searched, and the mean neighbor distance $d_i$, global mean $\mu$ and standard deviation $\sigma$ are computed. Points within the range $\mu - \alpha\sigma \le d_i \le \mu + \alpha\sigma$ are kept and the outliers are filtered away ($\alpha$ is a parameter governing the assumed spatial spread of the point cloud). $d_i$, $\mu$ and $\sigma$ are computed as

$d_i = \frac{1}{k}\sum_{j=1}^{k}\|x_i - y_j\|, \qquad \mu = \frac{1}{n}\sum_{i=1}^{n} d_i, \qquad \sigma = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(d_i - \mu)^2} \quad (1)$

where k is a parameter governing the neighborhood density and n is the number of points. Experiments showed that k = 20 and α = 1.2 give the best results; the denoising result in Fig. 2 shows that outlier noise and ground points are essentially removed while the contour of the point cloud model is preserved.

[Fig. 2: Filtering and denoising results of the two point cloud sets.]

2. Single-tree skeleton reconstruction method

The reconstruction proceeds in the following steps, as shown in Fig. 3. First, features are extracted from the two preprocessed point clouds and the clouds are registered precisely; next, the point cloud is geometrically contracted to a zero-volume point set, which topological refinement thins into a one-dimensional curve, giving a skeleton curve that closely matches the point cloud model; finally, cylinders are fitted to the branches along the skeleton to build the 3D branch model.

[Fig. 3: Process diagram of the single-tree skeleton reconstruction method.]

2.1 3D point cloud registration

CPD is a probabilistic point-set registration algorithm. One point set serves as the centroids of a Gaussian Mixture Model (GMM); its coordinates form the template set $X_{M\times D} = (y_1, \dots, y_M)^T$. The other point set is the data set of the GMM; its coordinates form the target set $X_{N\times D} = (x_1, \dots, x_N)^T$. N and M are the numbers of points, D is the dimension, and T denotes the matrix transpose. The correspondence between the two sets is obtained from the maximum posterior probability of the GMM, whose probability density function is

$p(x) = \omega\,\frac{1}{N} + (1-\omega)\sum_{m=1}^{M}\frac{1}{M}\,p(x \mid m) \quad (2)$

where $p(x \mid m) = \frac{1}{(2\pi\sigma^2)^{D/2}}\exp\!\left(-\frac{\|x - y_m\|^2}{2\sigma^2}\right)$, p(x) is the probability density and ω (0 ≤ ω ≤ 1) weights the outlier component; m ranges over 1, ..., M. The positions of the GMM centroids are moved by adjusting the transformation parameters θ, whose values are found by minimizing the negative log-likelihood

$E(\theta, \sigma^2) = -\sum_{n=1}^{N}\log\sum_{m=1}^{M} p(m)\,p(x_n \mid m) \quad (3)$

The correspondence between $x_n$ and $y_m$ is defined by the posterior probability of the GMM centroids, $p(m \mid x_n) \propto p(m)\,p(x_n \mid m)$. Expectation-maximization is iterated to optimize the maximum-likelihood estimate, stopping at convergence; solving for θ and σ² completes the registration of the template points to the target points.

Point clouds from scanning devices are usually very large, so not all points are useful for registration, and CPD has high computational complexity and slow matching. We therefore use the ISS (Intrinsic Shape Signatures) algorithm [15] to extract key points and reduce the number of points with insignificant geometric information; registering these feature points precisely improves registration efficiency. Fig. 4 shows the IS-CPD registration process.

[Fig. 4: Registration process based on feature-point extraction.]

The IS-CPD registration pipeline is as follows.

(1) Select the overlapping region of the two point cloud views.
(2) Extract the feature point set with ISS. Let the point cloud have n points $P_i = (x_i, y_i, z_i)$, i = 0, 1, ..., n − 1.
  (i) Build a spherical neighborhood of radius r around each input point and compute each point's weights per Eq. (4):
  $w_{ij} = \frac{1}{\|p_i - p_j\|}, \quad \|p_i - p_j\| < r \quad (4)$
  (ii) Compute each point's covariance matrix and its eigenvalues $\{\lambda_i^1, \lambda_i^2, \lambda_i^3\}$, sorted in ascending order, per Eq. (5):
  $\mathrm{cov}(p_i) = \frac{\sum_{\|p_i - p_j\| < r} w_{ij}\,(p_i - p_j)(p_i - p_j)^T}{\sum_{\|p_i - p_j\| < r} w_{ij}} \quad (5)$
  (iii) Set thresholds ε1 and ε2; the points satisfying $\lambda_i^1/\lambda_i^2 \le \varepsilon_1$ and $\lambda_i^2/\lambda_i^3 \le \varepsilon_2$ are the key points.
(3) Initialize the CPD parameters.
(4) Compute the correspondence probability matrix and the posterior $p(m \mid x_n)$.
(5) Obtain the parameter values by minimizing the negative log-likelihood.
(6) Check convergence of p; if not converged, repeat from step (4).
(7) Apply the resulting transformation matrix to the point sets to complete registration.

2.2 Branch reconstruction from the point cloud

Traditional branch construction reconstructs directly on the point cloud surface, which produces many distorted structures. We therefore first extract the skeleton curve and then fit cylinders to build the geometric model. Fig. 5 shows the skeleton extraction and branch reconstruction process.

[Fig. 5: Flow chart of branch model construction.]

To extract trunk and branches precisely, Laplace contraction is used. The vertex neighborhoods of the point cloud model are first triangulated to obtain one-ring neighborhoods; the corresponding cotangent Laplacian matrix is computed, and the point cloud is contracted accordingly until the model shrinks to 1% of its initial volume; topological refinement then thins the point set into a one-dimensional curve. The contracted points are sampled with farthest-distance spheres, the sampled points are connected into an initial skeleton using one-ring adjacency, and unnecessary edges are collapsed until no triangles remain, giving a skeleton curve that closely matches the point cloud model.

To model the branch geometry accurately, cylinder fitting is used. In the tree-base region, an optimization method recovers the trunk geometry [16]. Because the point clouds of small branches near the crown and branch tips are cluttered, allometric growth theory is used to control the branch radii. Finally, cylinders are fitted to obtain the 3D geometric model of the tree point cloud [17]; the principle is shown in Fig. 6. Multiple circular cross-sections of radius R are generated with the upper and lower end points M and N as centers, and the cylinder is drawn by connecting the circle points along the skeleton line; each branch is represented this way until the branches of the whole tree are drawn.

[Fig. 6: Principle of drawing the trunk geometry; (a) example cylinder model, (b) example of drawing a local branch.]
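For illustration, the statistical filter of Eq. (1) in Section 1.2 can be sketched in a few lines. This is not the authors' implementation; SciPy's cKDTree stands in for the Kd-Tree neighbor search.

```python
import numpy as np
from scipy.spatial import cKDTree

def statistical_outlier_filter(points, k=20, alpha=1.2):
    """Kd-tree statistical outlier removal following Eq. (1): keep points
    whose mean k-neighbor distance d_i lies in [mu - alpha*sigma, mu + alpha*sigma]."""
    tree = cKDTree(points)
    # query k+1 neighbors because the nearest neighbor of a point is itself
    dists, _ = tree.query(points, k=k + 1)
    d = dists[:, 1:].mean(axis=1)       # mean distance to the k neighbors
    mu, sigma = d.mean(), d.std()
    keep = (d >= mu - alpha * sigma) & (d <= mu + alpha * sigma)
    return points[keep]
```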
3. Experimental results and analysis

3.1 Point cloud registration results and analysis

To verify the effectiveness of IS-CPD, experiments were run on the filtered point clouds, comparing the algorithm with the original CPD and with the method of Shi et al. [6] on the same data in terms of running time and root-mean-square error (RMSE, written $R_{MSE}$ below); the smaller the value, the more precise the registration:

$R_{MSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(x_i - \hat{x}_i)^2} \quad (6)$

where n is the number of points and $x_i$, $\hat{x}_i$ are the Euclidean distances between corresponding points before and after registration. Fig. 7 and Table 1 compare the three registration algorithms. From them, the algorithm of Shi et al. [6] raises registration speed but loses detail accuracy, and its registration result is poor. In contrast, both CPD and IS-CPD successfully fuse the two viewing angles with millimeter-level accuracy, and the two can be regarded as practically equivalent in quality, while the time complexity of our algorithm is far smaller. As Table 1 also shows, the registration time shrinks to 10.77 s and the mean registration accuracy improves by about 50% relative to CPD.

[Fig. 7: Visual comparison of point cloud registration.]

Table 1: Comparison of point cloud registration results (point cloud sizes: 37,956 points for angle 1, 37,647 points for angle 2)

  Algorithm       | Time/s | RMSE/m
  CPD             | 261.74 | 8.3×10⁻³
  Shi et al. [6]  | 86.58  | 1.6×10⁻²
  Proposed        | 10.77  | 4.1×10⁻³

Note: the time required by IS-CPD to extract the key points is negligible.

3.2 Branch reconstruction results and analysis

In the geometric reconstruction stage (Fig. 8), the Laplace-contraction skeleton extraction needs fewer than five iterations to contract the points to good positions, as in Fig. 8(b). Topological refinement of the contracted zero-volume point set yields a skeleton curve that closely matches the point cloud model, as in Fig. 8(c); cylinders are then fitted to the branches, completing the reconstruction of the tree point cloud. Fig. 8(d) shows the final geometric reconstruction of the tree skeleton.

[Fig. 8: Single-tree geometric reconstruction: (a) input point cloud, (b) contracted point cloud, (c) connected skeleton curve, (d) geometric model of the tree point cloud.]

Tree height and DBH are used as accuracy metrics for the reconstructed model. First, a cylinder is fitted to the trunk points and the points are projected along the cylinder axis; the tree height is obtained from the maximum and minimum of this axial projection. Following Pitkanen et al. [18], the trunk point cloud is sliced into layers, the layered points are projected onto a 2D plane, and circle fitting yields a more precise DBH. To validate the accuracy of the reconstruction, twenty trials were run and the results were compared with the reconstruction method of Nurunnabi et al. [16]. Table 2 lists the mean tree height and DBH obtained by the two methods against the field-measured values. Our algorithm is more accurate than the method of Nurunnabi et al. [16], with a mean DBH error of only 2.48% and a mean height error of only 1.64%.

Table 2: Tree reconstruction accuracy

  Method                | DBH/m     | Height/m | Mean DBH error (%) | Mean height error (%)
  Nurunnabi et al. [16] | 2.13×10⁻¹ | 8.26     | 5.97               | 3.17
  Proposed              | 1.96×10⁻¹ | 8.39     | 2.48               | 1.64
  Measured              | 2.01×10⁻¹ | 8.53     |                    |

4. Conclusions

This study discussed the pipeline for reconstructing a single tree with LiDAR and analyzed and improved its key steps. Exploiting the strengths of CSF filtering and the Kd-Tree algorithm, single-tree data are separated accurately and processing is accelerated. The proposed IS-CPD registration algorithm raises registration efficiency by about 95.8%. From the precisely registered point cloud the skeleton tree is successfully extracted, with the final reconstruction error kept within 2.48%. The experiments show that the method is feasible for tree point cloud filtering, registration and skeleton extraction, that the branch structure is reconstructed well, and that the reconstructed model can support the assessment of agro-forestry crops and of forest ecological structure and health.

References

[1] LU D D, ZOU J G. Comparative research on denoising algorithms of 3D laser point cloud[J]. Survey and Mapping Bulletin, 2019(S2): 102-105.
[2] ZHANG W, QI J, WAN P, et al. An easy-to-use airborne LiDAR data filtering method based on cloth simulation[J]. Remote Sensing, 2016, 8(6): 501.
[3] BESL P J, MCKAY H D. A method for registration of 3-D shapes[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1992, 14(2): 239-256.
[4] MYRONENKO A, SONG X. Point set registration: coherent point drift[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010, 32(12): 2262-2275.
[5] WANG A L, ZHANG Y X, WU H B, et al. LiDAR data classification based on ensembled convolutional neural networks[J]. Journal of Harbin University of Science and Technology, 2021, 26(4): 138-145.
[6] SHI X, REN J, REN X K, et al. Drift registration based on curvature characteristics[J]. Laser & Optoelectronics Progress, 2018, 55(8): 248-254.
[7] LU J, SHAO H X, WANG W, et al. Point cloud registration method based on key point extraction with small overlap[J]. Transactions of Beijing Institute of Technology, 2020, 40(4): 409-415.
[8] XIA K Q. Research on point cloud algorithm based on ISS feature points and improved descriptor[J]. Software Engineering, 2022, 25(1): 1-5.
[9] SHI F B, CAO Q, WEI J, et al. Surface point cloud registration method based on feature points[J]. Beijing Surveying and Mapping, 2022, 36(10): 1345-1349.
[10] ZHAI X X, SHAO J, ZHANG W M, et al. Three-dimensional reconstruction of trees using mobile laser scanning point cloud[J]. China Agricultural Information, 2019, 31(5): 84-89.
[11] LIN G, TANG Y, ZOU X, et al. Three-dimensional reconstruction of guava fruits and branches using instance segmentation and geometry analysis[J]. Computers and Electronics in Agriculture, 2021, 184: 106107.
[12] YOU A, GRIMM C, SILWAL A, et al. Semantics-guided skeletonization of upright fruiting offshoot trees for robotic pruning[J]. Computers and Electronics in Agriculture, 2022, 192: 106622.
[13] CAO J J, TAGLIASACCHI A, OLSON M, et al. Point cloud skeletons via Laplacian based contraction[C]// Proceedings of the Shape Modeling International Conference. Los Alamitos: IEEE Computer Society Press, 2010: 187-197.
[14] CAO W, CHEN D, SHI Y F, et al. Progress and prospect of LiDAR point clouds to 3D tree models[J]. Geomatics and Information Science of Wuhan University, 2021, 46(2): 203-220.
[15] YU Z. Intrinsic shape signatures: a shape descriptor for 3D object recognition[C]// IEEE International Conference on Computer Vision Workshops. IEEE, 2010.
[16] NURUNNABI A, SADAHIRO Y, LINDENBERGH R, et al. Robust cylinder fitting in laser scanning point cloud data[J]. Measurement, 2019, 138: 632-651.
[17] GUO J W, XU S B, YAN D M, et al. Realistic procedural plant modeling from multiple view images[J]. IEEE Transactions on Visualization and Computer Graphics, 2020, 26(2): 1372-1384.
[18] PITKANEN T P, RAUMONEN P, KANGAS A. Measuring stem diameters with TLS in boreal forests by complementary fitting procedure[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2019, 147: 294-306.

Solving the Pareto Front Surface with Particle Swarm Optimization

Outline:
I. Introduction: 1. Overview of particle swarm optimization; 2. Overview of the Pareto front; 3. Scope of this paper.
II. Principles of particle swarm optimization: 1. Basic idea; 2. Mathematical model; 3. Parameter settings.
III. Mathematical model of the Pareto front: 1. Definition; 2. Properties; 3. Solution methods.
IV. Solving the Pareto front surface with particle swarms: 1. Problem statement; 2. PSO parameter settings; 3. Solution process and implementation.
V. Experiments and analysis: 1. Data and parameter settings; 2. Result analysis and comparison; 3. Discussion and conclusions.
VI. Outlook and future work: 1. Directions for improving the algorithm; 2. Broader application domains; 3. Open problems and challenges.

Main text

I. Introduction

With the continuous development of science and technology, optimization problems are widely encountered in every field. Particle Swarm Optimization (PSO) is an optimization algorithm based on collective behavior, with good global search capability. The Pareto front (Pareto frontier) is a concept that represents the set of optimal solutions of a system and can be used to solve multi-objective optimization problems. This paper studies methods and applications for solving the Pareto front of a surface with particle swarm optimization.

II. Principles of particle swarm optimization

PSO imitates the foraging behavior of bird flocks in nature and searches for the optimum through a search strategy combined with a local update strategy. Its mathematical model comprises the following parts (a minimal implementation sketch follows this list):
1. The particle swarm: a population of particles, each representing a candidate solution.
2. The objective function: used to evaluate solution quality.
3. The inertia weight: an important parameter governing the swarm's search strategy.
4. The learning factors: important parameters governing the swarm's local update strategy.
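As referenced above, here is a minimal single-objective PSO sketch (ours, for illustration; all names are ours). A Pareto-front variant would additionally maintain an archive of non-dominated solutions and pick leaders from it instead of a single global best.

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, rng=None):
    """Minimal PSO with inertia weight w and learning factors c1, c2."""
    rng = rng or np.random.default_rng()
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)              # keep particles in bounds
        vals = np.apply_along_axis(f, 1, x)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()]
    return gbest, pbest_val.min()
```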

III. Mathematical model of the Pareto front

The Pareto front represents the set of optimal solutions of a multi-objective optimization problem and has the following properties:
1. Every solution on the Pareto front is a feasible solution satisfying the constraints.
2. For any two solutions on the Pareto front there exists no third solution that is simultaneously better than both.
Many methods exist for computing the Pareto front, such as genetic algorithms and simulated annealing.

IV. Solving the Pareto front surface with particle swarms

This paper proposes a PSO-based method for solving the Pareto front of a surface. First, the Pareto-front problem is cast as a multi-objective optimization problem and the objective functions are defined.

Tectonics of the Giant Basin-Mountain System Surrounding the Qinghai-Tibet Plateau and the Regularity of Oil-Gas Distribution in the Tarim Basin

Geotectonica et Metallogenia, Vol. 33, No. 1 (Serial 120), pp. 1-9, February 2009. Received 2008-12-02; supported by the National Oil and Gas Special Science and Technology Project (No. 2008ZX0032005201). Author: JIA Chengzao (b. 1948), academician of the Chinese Academy of Sciences and deputy editor-in-chief of this journal, long engaged in petroleum geology and structural geology research (Email: jiacz@petrochina.com.cn).

Tectonics of the Giant Basin-Mountain System Surrounding the Qinghai-Tibet Plateau and the Regularity of Oil-Gas Distribution in the Tarim Basin

JIA Chengzao (PetroChina Company Limited, Beijing 100011, China)

Abstract: Central and western China was controlled during the Himalayan period by the uplift of the Qinghai-Tibet Plateau and its northward and eastward push, which formed a giant basin-mountain tectonic system around the plateau's periphery. This system consists of three basic structural units: rejuvenated old orogenic belts, foreland thrust belts, and small cratonic basins, among which the Paleozoic small cratons and the Meso-Cenozoic foreland thrust belts are the important petroliferous units. This dictates that the oil and gas distribution in central and western China is controlled mainly by Paleozoic cratonic paleo-uplifts and Meso-Cenozoic foreland thrust belts.

Vertically, the Tarim Basin is a stack of structural sequences comprising a fully developed Lower Paleozoic carbonate succession, Upper Paleozoic marine to marine-continental transitional clastic deposits, and Meso-Cenozoic continental clastics; in plan view it has a relatively stable small craton at its core, rimmed by folded or thrust-deformed foreland belts such as Kuqa, Kashgar, southwestern Tarim and southeastern Tarim.

The superimposed-composite structure of the Paleozoic small cratonic basin and the Meso-Cenozoic foreland thrust belts in the Tarim Basin, together with its multi-stage evolution, gives this type of basin the character of a superimposed-composite petroleum system with multiple source-rock intervals, multiple reservoir-seal assemblages and multiple petroleum systems. The oil and gas distribution is controlled by the paleo-uplifts within the small cratonic basin, forming extensive lithologic-stratigraphic reservoirs, while the thrust-belt structures in the foreland basins control the formation of anticlinal reservoirs, with multiple accumulation episodes coexisting and late-stage accumulation dominant.

Keywords: giant basin-mountain system surrounding the Qinghai-Tibet Plateau; small cratonic basin; foreland thrust belt; oil-gas distribution regularity. CLC number: P542; document code: A; article ID: 1001-1552(2009)01-0001-09.

The basins of central and western China are rich in oil and gas resources and geologically complex, differing markedly from the basins of eastern China and from the world's other major petroliferous basins.

Optimal Configuration of the Capacity of Rooftop Photovoltaic Thermal System

Electric Drive, 2023, Vol. 53, No. 6. DOI: 10.19457/j.1001-2095.dqcd24210

Optimal Configuration of the Capacity of Rooftop Photovoltaic Thermal System

WANG Ru, WANG Haiyun, FAN Tianyuan, ZHANG Shengnan (School of Electrical Engineering, Xinjiang University, Urumqi 830047, Xinjiang, China)

Abstract: University buildings are major energy users, with electricity and heat as the two main forms of energy use. Making full use of university rooftop resources to develop solar projects and configuring energy storage rationally can effectively improve the economics of energy use. With economy as the objective, a rooftop photovoltaic-thermal system comprising photovoltaics, solar collectors, heat pumps and electricity storage was designed; taking the purchase and operation-and-maintenance costs of the equipment into account, a differential evolution algorithm solves for the optimal capacity configuration of the equipment in the system, minimizing the unit electricity cost on the user side. Finally, taking a university dormitory building in northwest China as a practical example, the minimum unit electricity cost of the photovoltaic-thermal system and the configuration scheme under the optimal objective were obtained, providing a reference for the construction of green, low-carbon campuses.

Keywords: university buildings; rooftop photovoltaic thermal system; economy; differential evolution algorithm; capacity allocation. CLC number: TM615; document code: A.

Funded by the National Natural Science Foundation of China (51667020) and the Key R&D Program of the Xinjiang Uygur Autonomous Region (2020B02001). First author: WANG Ru (b. 1996), M.Sc.

The vigorous development of new energy sources such as photovoltaics and wind power has greatly alleviated global energy and environmental problems [1].
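The capacity search described above can be driven by a generic differential evolution loop. The sketch below is illustrative only; the cost callable standing in for the unit-electricity-cost model is a placeholder, not the authors' formulation.

```python
import numpy as np

def differential_evolution(cost, bounds, pop=30, iters=300, F=0.5, CR=0.9, rng=None):
    """Minimal DE/rand/1/bin sketch for bounded capacity optimization."""
    rng = rng or np.random.default_rng()
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    X = rng.uniform(lo, hi, (pop, lo.size))
    f = np.array([cost(x) for x in X])
    for _ in range(iters):
        for i in range(pop):
            a, b, c = X[rng.choice([j for j in range(pop) if j != i], 3,
                                   replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)       # mutation
            cross = rng.random(lo.size) < CR                # binomial crossover
            cross[rng.integers(lo.size)] = True             # keep >= 1 gene
            trial = np.where(cross, mutant, X[i])
            ft = cost(trial)
            if ft < f[i]:                                   # greedy selection
                X[i], f[i] = trial, ft
    return X[f.argmin()], f.min()
```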

Chapter 9: Image Stitching and Blending

• Different approaches for them

Parallax Removal
• Based on bundle adjustment: compute the 3D point locations, then reproject them into the images.

[Figure: Deghosting a mosaic with motion parallax (Shum and Szeliski 2000) © 2000 IEEE: (a) composite with parallax; (b) after a single deghosting step (patch size 32); (c) after multiple steps (sizes 32, 16 and 8).]

Bundle Adjustment
• x_ik: observed location of feature i in image k.
• x_ik depends on x_ij (an error-in-variables problem); a feature observed many times is overweighted.
• True bundle adjustment estimates both the camera poses and the 3D points.

Parallax Removal
• Blurry or ghosting artifacts arise from:
  - unmodeled radial distortion;
  - 3D parallax: failure to rotate the camera around its optical center;
  - small scene motion, large-scale scene motion.
Gauss-Newton normal equations for the least-squares objective:
$2J^T J\,\delta p + 2J^T e_0 = 0 \;\Rightarrow\; \delta p = -(J^T J)^{-1} J^T e_0$
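A one-line solver for this normal-equation step, as an illustrative sketch (the optional Levenberg-style damping term is our addition, not part of the slide):

```python
import numpy as np

def gauss_newton_step(J, e0, damping=0.0):
    """Solve (J^T J + damping*I) dp = -J^T e0 for the update dp."""
    A = J.T @ J + damping * np.eye(J.shape[1])
    return np.linalg.solve(A, -J.T @ e0)
```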

Matching with PROSAC: Progressive Sample Consensus
¹ The authors were supported by the Czech Science Foundation under project GACR 102/03/0440 and by the European Commission under project IST-004176.

Figure 1: The Great Wall image pair with an occlusion. Given 250 tentative correspondences as input, both PROSAC and RANSAC found 57 correct correspondences (inliers). To estimate the epipolar geometry, RANSAC tested 106,534 seven-tuples of correspondences in 10.76 seconds, while PROSAC tested only 9 seven-tuples in 0.06 s (on average, over a hundred runs). Inlier correspondences are marked by a line segment joining the corresponding points.

Standard RANSAC does not model the local matching process. It is viewed as a black box that generates N tentative correspondences, i.e. the error-prone matches established by comparing local descriptors. The set U of tentative correspondences contains an a priori unknown number I of inliers.
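The core idea behind PROSAC, ordering the tentative correspondences by match quality and drawing early hypotheses only from the top-ranked subset, can be sketched as follows. This illustrates the ordering principle only; the real PROSAC growth schedule for the pool size is more refined.

```python
import numpy as np

def prosac_pools(quality, k):
    """Yield progressively growing sampling pools: correspondences sorted
    by matching quality, best first, starting from the top-k subset."""
    order = np.argsort(-np.asarray(quality))   # best quality first
    for m in range(k, len(order) + 1):
        yield order[:m]                        # draw minimal sets from here
```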

Intelligent Recognition of Coal Gangue Based on Residual Network and Near Infrared Spectroscopy Technology

Journal of Instrumental Analysis (Fenxi Ceshi Xuebao), Vol. 43, No. 4, April 2024, pp. 607-613. Article ID: 1004-4957(2024)04-0607-07; CLC numbers: O657.3, TQ533; document code: A.

Intelligent Recognition of Coal Gangue Based on Residual Network and Near Infrared Spectroscopy Technology

WANG Ya-dong¹, JIA Jun-wei¹, TAN Wei-jun², LEI Meng³* (1. Shanxi Tiandi Wangpo Coal Industry Co., Ltd, Jincheng 048000, China; 2. Tiandi (Changzhou) Automation Co., Ltd, Changzhou 213125, China; 3. School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China)

Abstract: This study developed a rapid classification method for coal and gangue that integrates near-infrared spectroscopy with a one-dimensional residual network (1D-ResNet). To ensure the diversity of the experimental samples, 430 coal and gangue samples were collected from multiple coal mines in Henan, Hebei and Shandong provinces, and abnormal samples were eliminated based on Euclidean distance to obtain a high-quality modeling dataset. On this basis, a 1D-ResNet classification model was built to accurately capture the complex mapping between coal, gangue and their spectral characteristics; it effectively mitigates the vanishing-gradient problem while deeply mining the spectral features of coal and gangue, yielding highly accurate analysis. Five-fold cross-validation gave a mean accuracy of 96.26%, significantly outperforming traditional machine learning algorithms such as support vector machines and random forests. The loss curves on the training and test sets follow highly consistent trends, indicating good generalization ability. Tests showed an inference time of only 16.230 ms per hundred samples, further highlighting the model's advantages and potential application value for online coal-gangue sorting.

Keywords: coal gangue identification; near infrared spectroscopy; deep learning; residual network

Coal gangue, a by-product of coal mining and processing, has low carbon content and poor combustion properties.
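For readers unfamiliar with the architecture, a basic 1-D residual block of the kind such a 1D-ResNet stacks can be sketched in PyTorch. This is an illustration of the skip-connection idea only, not the authors' exact network; layer sizes are assumptions.

```python
import torch.nn as nn

class ResidualBlock1D(nn.Module):
    """Basic 1-D residual block for spectral sequences."""
    def __init__(self, channels, kernel_size=7):
        super().__init__()
        pad = kernel_size // 2
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size, padding=pad),
            nn.BatchNorm1d(channels),
            nn.ReLU(inplace=True),
            nn.Conv1d(channels, channels, kernel_size, padding=pad),
            nn.BatchNorm1d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # The identity skip connection eases gradient flow in deep stacks,
        # which is the property the paper relies on to avoid vanishing gradients.
        return self.act(self.body(x) + x)
```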

Research on Welding Path Planning of Medium-thick Plate Based on the Data Driven of Point Cloud

Electric Welding Machine, Vol. 53, No. 9, Sept. 2023. Article ID: 1001-2303(2023)09-0078-06; CLC number: TG409; document code: A.

Research on Welding Path Planning of Medium-thick Plate Based on the Data Driven of Point Cloud

LI Bingcong¹, XIA Weisheng¹, XU Xiaoqun², LV Weiwen¹ (1. State Key Laboratory of Materials Processing and Die & Mould Technology, School of Materials Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074, China; 2. China Tobacco Hubei Industrial Corporation Limited Wuhan Cigarette Factory, Wuhan 430051, China)

Abstract: To meet the demands of automated multi-layer, multi-pass welding of medium-thick plates, an adaptive robot welding path planning algorithm driven by point cloud data is proposed. Taking V-groove welds as the example, the weld surface is first scanned with line-structured light to collect point cloud data; the data are filtered to reduce their volume while removing noise points, after which point cloud segmentation and edge extraction algorithms successfully extract the weld feature points. Finally, welding experiments verify the accuracy and feasibility of the algorithm: the deviation between the weld feature-point coordinates obtained by the proposed algorithm and the actual coordinates obtained by manual teaching is less than 0.17 mm, which meets practical application requirements.

Keywords: robot welding; point cloud data-driven; multi-layer multi-pass welding; path planning; line structured light

Citation: LI Bingcong, XIA Weisheng, XU Xiaoqun, et al. Research on Welding Path Planning of Medium-thick Plate Based on the Data Driven of Point Cloud[J]. Electric Welding Machine, 2023, 53(9): 78-83.

0. Introduction

Medium-thick plates are widely used in key structural designs of large vehicles, and multi-layer, multi-pass welding is one of the mainstream methods for joining them.
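As a rough illustration of the profile-based feature extraction idea, the sketch below locates the root and the two groove corners in a single line-structured-light height profile. It is a simplified stand-in, not the paper's segmentation-plus-edge-extraction pipeline, and it assumes the groove root lies in the interior of the scanned profile.

```python
import numpy as np

def vgroove_feature_points(y, z):
    """Root = height minimum; groove edges = strongest concave-down corners
    (most negative second derivative) on either side of the root."""
    curv = np.gradient(np.gradient(z, y), y)   # discrete second derivative
    root = int(np.argmin(z))
    left = int(np.argmin(curv[:root]))          # corner left of the root
    right = root + int(np.argmin(curv[root:]))  # corner right of the root
    return [(y[i], z[i]) for i in (left, root, right)]
```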

MATLAB Notes: Least-Squares Fitting, RANSAC-Based Line Fitting, and Ellipse Fitting

1. Least-squares fitting. Least-squares fitting is a mathematical approximation and optimization that derives a line or curve from known data such that the sum of squared distances between the curve and the data points is minimized in the coordinate system.

2. The RANSAC algorithm: see Prof. Wang Rongxian's blog post.

3. Line fitting. The model uses the general line equation AX + BY + C = 0. Two points are selected at random to build a line model, and the total least squares (TLS) distance of every point to this line is computed; points whose TLS is below a threshold agree with the model, and the model with the most agreeing points is the best line model. The final fitted line is then drawn from those line parameters.

4. Ellipse fitting. The model uses the defining equation of an ellipse, dist(P, A) + dist(P, B) = DIST, where P is a point on the ellipse and A and B are the two foci. Three points A, B, P are selected at random to build an ellipse model; for every point, the difference between its distance sum to the two foci and DIST is computed, points whose difference is below a threshold agree with the model, and the model with the most agreeing points is the best ellipse model. The agreeing points are then used to fit the coefficients of the general ellipse equation Ax² + Bxy + Cy² + Dx + Ey + F = 0, and the final fitted ellipse is drawn from that function. (A Python sketch of this scheme follows the MATLAB listing below.)

5. MATLAB code. (1) Least-squares fitting (the excerpt is truncated after the least-squares part):

    %% ----------------------------------------------------------------
    % FILENAME  LSF.m
    % FUNCTION  Input points with mouse, least-squares fit of lines to
    %           2D points
    % DATE      2012-10-12
    % AUTHOR    zhangying
    %% ----------------------------------------------------------------
    clc;
    %% Input points with the mouse; press Enter to finish
    axis([-10 10 -10 10]);
    [x, y] = ginput;     % read coordinates until Enter is pressed
    num = length(x);     % number of points
    %% Direct least-squares fitting
    % Find the best function match by minimizing the sum of squared errors
    [p1, s1] = polyfit(x, y, 1);        % n = 1: line fit; p holds the
                                        % coefficients, highest power first
    [p2, s2] = polyfit(x, y, num - 2);  % n > 1: degree-n curve fit that
                                        % minimizes the squared error
    [p3, s3] = polyfit(x, y, num - 1);  % x must be monotonic
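The RANSAC ellipse scheme of item 4 can be sketched in Python as follows (our illustration; the original post's RANSAC code is not included in this excerpt). The final conic refit on the consensus set is left to a general least-squares step, as described above.

```python
import numpy as np

def ransac_ellipse_foci(pts, iters=2000, tol=0.05, rng=None):
    """Two-focus ellipse RANSAC: sample foci A, B and a point P, set
    DIST = |PA| + |PB|, and count points whose distance sum to the foci
    deviates from DIST by less than tol."""
    rng = rng or np.random.default_rng()
    best_inliers, best_model = None, None
    for _ in range(iters):
        A, B, P = pts[rng.choice(len(pts), 3, replace=False)]
        dist_sum = np.linalg.norm(P - A) + np.linalg.norm(P - B)
        if dist_sum <= np.linalg.norm(A - B):   # degenerate: not an ellipse
            continue
        d = (np.linalg.norm(pts - A, axis=1) +
             np.linalg.norm(pts - B, axis=1))
        inliers = np.abs(d - dist_sum) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (A, B, dist_sum)
    return best_model, best_inliers
```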

A Robust Point Cloud Plane Fitting Method Based on the RANSAC Algorithm

A Robust Point Cloud Plane Fitting Method Based on the RANSAC Algorithm. YANG Junjian, WU Liangcai.

[Abstract] To address gross errors and outliers in point cloud plane fitting, the random sample consensus (RANSAC) plane fitting algorithm combined with the eigenvalue method is improved. Building on RANSAC and the eigenvalue method, the improved method uses the standard deviation of point-to-plane-model distances to select the threshold t automatically; outlier data points are detected and removed via t, so that ideal plane fitting parameters are obtained. Point cloud data were processed with both the improved algorithm and the traditional eigenvalue method; the results show that the improved algorithm suits the fitting of point cloud data containing errors and outliers, stably obtains good plane parameter estimates, and is strongly robust.

[Journal] Beijing Surveying and Mapping, 2016(2), 4 pages. [Keywords] point cloud data; random sample consensus (RANSAC); eigenvalue method; plane fitting; standard deviation.

1. Introduction

Compared with traditional surveying methods, 3D laser scanning acquires point clouds quickly, efficiently and with high accuracy, and plays an increasingly important role in surveying and mapping. 3D laser scanning can rapidly capture information reflecting the real-time, dynamically changing, true morphology of a target and is an effective technical means of acquiring spatial data.

Fitting point cloud data means matching a specific surface model to the points in the scanned point set and solving for the best model parameters, so that the point cloud subset and the model parameters agree closely. 3D point cloud datasets contain a large number of planar features, which can be used to register point clouds in matching computations, to simplify data in target modeling, and so on; accurate plane fitting of 3D point clouds is therefore of great importance [1-2].

Because of the instrument itself or external factors, the point clouds obtained by 3D laser scanning contain various errors. References [3-7] mainly describe methods for extracting planar features from point clouds, such as the least-squares method and the eigenvalue method, which solve for the best model parameters from a given objective equation. None of these methods can remove outliers, however; in particular, when the outliers are large and numerous, the fitted plane is unstable and the algorithms are not robust.

For point cloud data containing large errors or outliers, the random sample consensus (RANSAC) algorithm combined with the eigenvalue method is generally used to fit point cloud planes, and it can obtain fairly good fits in the presence of outliers, but the method is sensitive to the choice of the threshold t. This paper improves the algorithm: the threshold t is selected automatically to detect and remove outliers and obtain the best parameter estimates.
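A compact sketch of the described scheme, combining the eigenvalue plane fit with a RANSAC loop whose threshold t is taken automatically from the standard deviation of point-to-plane distances. This is our illustration, not the authors' code.

```python
import numpy as np

def fit_plane_eig(pts):
    """Eigenvalue-method plane fit: the plane passes through the centroid
    and its normal is the eigenvector of the covariance matrix belonging
    to the smallest eigenvalue."""
    c = pts.mean(axis=0)
    _, v = np.linalg.eigh(np.cov((pts - c).T))   # eigenvalues ascending
    return v[:, 0], c                            # (normal, point on plane)

def ransac_plane_auto_threshold(pts, iters=500, rng=None):
    rng = rng or np.random.default_rng()
    best_inliers, best_count = None, -1
    for _ in range(iters):
        n, c = fit_plane_eig(pts[rng.choice(len(pts), 3, replace=False)])
        d = np.abs((pts - c) @ n)                # point-to-plane distances
        t = d.std()                              # automatic threshold choice
        inliers = d < t
        if inliers.sum() > best_count:
            best_inliers, best_count = inliers, inliers.sum()
    return fit_plane_eig(pts[best_inliers])      # refit on the consensus set
```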

RANSAC Algorithm Explained

RANSAC is short for "Random Sample Consensus". It estimates the parameters of a mathematical model iteratively from a set of observations that contains outliers. It is a non-deterministic algorithm: it produces a reasonable result only with a certain probability, and to raise that probability the number of iterations must be increased.

RANSAC's basic assumptions are: (1) the data consist of "inliers", i.e. data whose distribution can be explained by some model parameters; (2) "outliers" are data that cannot fit that model; (3) any remaining data are noise. Outliers arise, for example, from extreme noise values, erroneous measurement methods, or wrong assumptions about the data. RANSAC also assumes that, given a (usually small) set of inliers, there exists a procedure that can estimate the model parameters, and that this model can explain or fit the inliers.

I. Example

A simple example is finding a suitable 2D line in a set of observations. Assume the observations contain inliers and outliers, where the inliers approximately lie on the line and the outliers lie far from it. Simple least squares cannot find a line adapted to the inliers, because least squares tries to fit all the points, outliers included. RANSAC, by contrast, can produce a model computed only from the inliers, with sufficiently high probability. RANSAC does not guarantee a correct result, however; to keep the probability of a reasonable result high enough, the algorithm's parameters must be chosen carefully.

II. Overview

The input to RANSAC is a set of observations, a parameterized model that can explain or fit the observations, and some trusted parameters. RANSAC reaches its goal by repeatedly selecting random subsets of the data. The selected subset is hypothesized to consist of inliers and is verified as follows:
1. A model is fitted to the hypothesized inliers, i.e. all the unknown parameters are computed from them.
2. The model from step 1 is used to test all the other data; a point that fits the estimated model is also considered an inlier.
3. If enough points are classified as hypothesized inliers, the estimated model is reasonable enough.
4. The model is then re-estimated from all the hypothesized inliers, since it was estimated only from the initial ones.
5. Finally, the model is evaluated by estimating the error rate of the inliers with respect to it.
This process is repeated a fixed number of times; each model produced is either discarded because it has too few inliers, or adopted because it is better than the current one.

The 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems October 11-15, 2009 St. Louis, USA
1-Point RANSAC for EKF-Based Structure from Motion
Javier Civera, Oscar G. Grasa, Andrew J. Davison and J. M. M. Montiel

Abstract— Recently, classical pairwise Structure From Motion (SfM) techniques have been combined with non-linear global optimization (Bundle Adjustment, BA) over a sliding window to recursively provide camera pose and feature location estimation from long image sequences. Normally called Visual Odometry, these algorithms are nowadays able to estimate with impressive accuracy trajectories of hundreds of meters, either from an image sequence (usually stereo) as the only input, or combining visual and proprioceptive information from inertial sensors or wheel odometry. This paper has a double objective. First, we aim to illustrate for the first time how similar accuracy and trajectory length can be achieved by filtering-based visual SLAM methods. Specifically, a camera-centered Extended Kalman Filter is used here to process a monocular sequence as the only input, with 6 DOF motion estimated. Features are kept live in the filter while visible as the camera explores forward, and are deleted from the state once they go out of view. This permits an increase in the number of tracked features per frame from tens to around a hundred. While improving the accuracy of the estimation, it makes computationally infeasible the exhaustive Branch and Bound search performed by standard JCBB for match outlier rejection. As a second contribution that overcomes this problem, we present here a RANSAC-like algorithm that exploits the probabilistic prediction of the filter. This use of prior information makes it possible to reduce the size of the minimal data subset to instantiate a hypothesis to the minimum possible of 1 point, greatly increasing the efficiency of the outlier rejection stage. Experimental results from real image sequences covering trajectories of hundreds of meters are presented and compared against RTK GPS ground truth. Estimation errors are about 1% of the trajectory for trajectories up to 650 metres.

Javier Civera, Oscar G. Grasa and J. M. M. Montiel are with Instituto de Investigación e Ingeniería de Aragón (I3A), Universidad de Zaragoza, Spain. {jcivera, oscgg, josemari}@unizar.es. Andrew J. Davison is with the Department of Computing, Imperial College, London, UK. ajd@

I. INTRODUCTION

Classical Structure from Motion (SfM) [12] and Bundle Adjustment (BA) [27] techniques have recently been adapted to sequential and real-time processing of long image sequences. Real-time performance has been achieved by performing live optimization over only a limited number of frames of the sequence. If these frames are chosen to be 'keyframes' sparsely distributed around a working volume, this permits accurate and drift-free room-sized mapping as in [13]. Or, in the case we are considering in this paper, choosing a sliding window of the most recently acquired frames permits accurate camera motion estimation for long trajectories (e.g. [16] for a monocular camera), generically known as Visual Odometry. It has also been demonstrated that the trajectory estimates such methods produce can be combined with loop closure techniques to construct large but consistent maps (e.g. [26], or [14] with stereo vision). The front-end image processing algorithms in all of the previously mentioned approaches are similar. In a few words, salient features are extracted and correspondences are searched for between a window of currently live frames. Scene structure and camera poses are estimated for the selected live frames via standard SfM methods. Finally, the solution is refined in a Bundle Adjustment optimization step.

Sequential 3D motion and structure estimation from a monocular camera has also been tackled using filtering schemes [1], [4], [9] which propagate a probabilistic state estimate. Accurate estimation for large trajectories has only been achieved after the addition of other estimation techniques, like local mapping and loop closing in [8], [10], [20], [22]. The first aim of this paper is to show how filtering algorithms can reach similar accuracy to current Visual Odometry methods. To achieve this, we used the sensor-centered Extended Kalman Filter, introduced first in the context of laser-based SLAM [3]. Contrary to the standard EKF, where estimates are always referred to a world reference frame, the sensor-centered approach represents all feature locations and the camera motion in a reference frame local to the current camera. The typical correlated feature-sensor uncertainty arising from the uncertain camera location is transferred into uncertainty in the world reference frame, resulting in lower variances for the camera and map features, and thus in smaller linearization errors. Another key difference of our approach compared with usual EKF visual SLAM is the number of measured features, which we increase from tens to more than a hundred (see Figure 1). It has been observed experimentally that this increase greatly improves the accuracy of the estimation and also makes scale drift, previously reported in [4], [8], [20], almost vanish for the trajectory lengths considered.

The second contribution of this paper is the proposal of a new RANSAC algorithm that exploits the probabilistic prediction obtained from the EKF in order to increase the efficiency of the spurious match rejection step. This need for an increase in efficiency is motivated by the high computational cost of the Joint Compatibility Branch and Bound algorithm (JCBB) [17], which is exponential in the number of measurements. Although its use has been proven feasible and advisable in [8] for one or two tens of matches, the algorithm quickly becomes computationally intractable when the number of matches grows near a hundred. In recent years, an important stream of research has
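The efficiency argument behind 1-point hypotheses is easy to quantify with the standard RANSAC trial-count formula (an illustrative calculation of ours, not from the paper): for a 50% inlier ratio and 99% confidence, 7-point hypotheses as used for epipolar geometry require 588 random trials, while 1-point hypotheses, made possible by the EKF prior, require only 7.

```python
import math

def trials(p_t, w, k):
    """RANSAC iterations for confidence p_t, inlier ratio w, and
    minimal-set size k (standard formula)."""
    return math.ceil(math.log(1 - p_t) / math.log(1 - w ** k))

# trials(0.99, 0.5, 7) -> 588   (7-point epipolar hypotheses)
# trials(0.99, 0.5, 1) -> 7     (1-point hypotheses with an EKF prior)
```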