

Point Cloud Registration Method Based on Kinect


LI Ruo-Bai 1,2, CHEN Jin-Guang 1,2
1 (School of Computer Science, Xi'an Polytechnic University, Xi'an 710048, China)
2 (Shaoxing Keqiao West-Tex Textile Industry Innovative Institute, Shaoxing 312030, China)
Corresponding author: CHEN Jin-Guang

Computer Systems & Applications, 2021, 30(3): 158–163. ISSN 1003-3254. [doi: 10.15888/j.cnki.csa.007733] © Institute of Software, Chinese Academy of Sciences.
Foundation item: Industry-University-Research Cooperation Innovation Project of Keqiao Textile Industry Innovation Research Institute (19KQYB24)
Received: 2020-05-22; revised: 2020-06-16; accepted: 2020-06-28; published online: 2021-03-03

Abstract: The point clouds collected by Kinect contain a large number of points as well as position errors, so directly applying the Iterative Closest Point (ICP) algorithm to register them is inefficient. To solve this problem, we propose an improved point cloud registration algorithm based on the angle between the normal vectors of feature points. First, a voxel grid is used to downsample the original point clouds collected by Kinect, reducing the number of points, and a filter is applied to remove outliers. Then, the Scale-Invariant Feature Transform (SIFT) algorithm is employed to extract feature points from the common region of the target point cloud and the point cloud to be registered, and the angle between the normal vectors of corresponding feature points is computed to adjust the point cloud pose. Thus, the initial registration of the point clouds is completed.
Finally, the ICP algorithm is applied to complete the fine registration of the point clouds. The experimental results show that, compared with the traditional ICP algorithm, the proposed algorithm improves registration efficiency while preserving registration accuracy, and has high applicability and robustness.

Key words: Kinect; point cloud registration; normal vector angle; point cloud filtering; ICP algorithm

1 Introduction

Three-dimensional reconstruction of real objects from point cloud data is an important technique in computer vision and is widely used across many industries. As a device capable of collecting point clouds, Kinect [1] has been applied in fields such as the digitization of cultural relics [2], virtual reality [3], 3D human body reconstruction and measurement [4], and reverse engineering [5]. By scanning the surface of a real object with Kinect from multiple angles, point cloud data are obtained; point cloud registration then merges the data captured from different angles into a unified 3D coordinate system, forming a complete point cloud data set from which a digital 3D model of the object can be built.

Point cloud registration is a key step of 3D reconstruction. The most commonly used registration algorithm is the Iterative Closest Point (ICP) algorithm proposed by Besl et al. in 1992 [6]. It is accurate and easy to implement, but it places strong requirements on the initial relative pose of the target cloud and the cloud to be registered, and the registration becomes very time-consuming when the number of points is large. Researchers have therefore proposed improvements such as PICP (Probability ICP) [7], MICP (Modified ICP) [8], and CICP (Cluster ICP) [9]. These algorithms overcome some limitations of ICP, but they are less generally applicable and less robust when registering point clouds captured by Kinect in real time. Chen et al. [10] and Zhao et al. [11] proposed the Scaling Iterative Closest Point (SICP) algorithm, but its registration efficiency is low for large point clouds, so it is likewise unsuitable for directly registering the raw point clouds collected by Kinect.

To address these problems, this paper proposes an improved point cloud registration algorithm based on the angle between feature point normal vectors. Before registration, the raw Kinect point clouds are downsampled and filtered, reducing the number of points while preserving the shape features and spatial information of the cloud. In the initial registration stage, the pose of the cloud to be registered is adjusted by computing the angles between the normal vectors of corresponding feature points, so that the target cloud and the cloud to be registered reach consistent poses in space. Finally, in the fine registration stage, the ICP algorithm completes the registration of the two clouds.

2 Point Cloud Acquisition and Processing

2.1 Acquiring Point Clouds with Kinect

Kinect is a 3D motion-sensing device developed by Microsoft. It consists of a multi-array microphone, an RGB camera, an infrared camera, and an infrared emitter, and is used in motion-sensing games, 3D reconstruction, virtual fitting, cultural relic preservation, and other fields. Kinect 1.0 acquires depth images using Light Coding [12]: infrared light is projected uniformly onto the measured object and the surrounding space, the infrared camera records each speckle on the object and in the scene, and the device's PS1080 chip then computes a depth image from the raw data.
Kinect 2.0 instead acquires depth images with Time-of-Flight (TOF) technology: continuous pulses of infrared light at a specific wavelength are emitted toward the target, a sensor receives the light returned by the measured object, and the 3D depth information is computed from the round-trip flight time or phase difference of the light. Compared with Kinect 1.0, the TOF-based Kinect 2.0 produces more accurate depth images, is less likely to be affected by external light, and is more resistant to interference from ambient illumination, so this paper uses Kinect 2.0 to collect point cloud data.

To obtain the point cloud of an object, its depth image is usually acquired first, and the 2D image information is then converted into 3D point cloud data according to Eq. (1) and Eq. (2), reconstructed here from the standard pinhole model:

z_c [u, v, 1]^T = K [R T] [x_w, y_w, z_w, 1]^T,  with K = [[f_x, 0, u_0], [0, f_y, v_0], [0, 0, 1]]   (1)

x_w = (u − u_0) z_c / f_x,  y_w = (v − v_0) z_c / f_y,  z_w = z_c   (2)

where f_x, f_y, u_0, v_0 are the Kinect intrinsic parameters, obtainable by camera calibration; (u, v) is a point in the depth image; z_c is the distance from the object to the camera; (x_w, y_w, z_w) are the coordinates in the world coordinate system, i.e. the point cloud coordinates; and R, T are the 3×3 rotation matrix and 3×1 translation vector, here taken as [R T] = [I 0] since the world frame coincides with the camera frame.

2.2 Point Cloud Filtering

The number of points acquired by Kinect depends on the camera resolution of the device; Kinect 2.0 in particular yields roughly 210 000 points per depth frame. Moreover, because of the limited accuracy of the device, the operator's experience, and environmental factors, noise points inevitably appear in the data, which directly affects the accuracy and efficiency of registration. The raw point clouds collected by Kinect must therefore be downsampled and filtered before registration.

2.2.1 Point Cloud Downsampling

Because each frame acquired by Kinect contains a huge number of points, registering the raw data directly would reduce registration efficiency, so the raw cloud is downsampled first to reduce the number of points. We use the voxel grid method [13], whose steps are:

(1) Create a 3D voxel grid over the original point cloud; the voxel size is determined by the number of points in the original cloud;
(2) Compute the centroid of each voxel, replace all points inside the voxel by the point closest to the centroid, and delete the remaining points;
(3) Traverse all points of the original cloud, repeating steps (1) and (2), to obtain the downsampled cloud.

2.2.2 Removing Outliers

When collecting point clouds with Kinect, sparse outliers arise from device and measurement errors; if they are not removed, they degrade registration accuracy. Outliers are removed here by examining the average distance from each point to its K nearest neighbors:

(1) Search the point cloud data set with a kd-tree [14] and find the K points in the neighborhood of each target point as its nearest neighbors;
(2) Compute the mean value n and standard deviation m of the distances between the target point and its neighbors, and set a threshold τ (τ > 0); if n > τ × m, the point is an outlier and is removed from the data set.

3 Improved Point Cloud Registration Algorithm

Although the ICP algorithm achieves high registration accuracy, it places strong requirements on the initial poses of the target cloud and the cloud to be registered, and it must match the feature points of the common region of the two clouds one by one, which increases the registration time. The registration algorithm is therefore improved as follows: in the initial registration stage, the pose of the cloud to be registered is adjusted by computing the angles between the normal vectors of corresponding feature points of the two clouds; on this basis, the ICP algorithm completes the fine registration.

3.1 Initial Registration

3.1.1 Feature Point Extraction and Matching

Feature points are stable, distinctive, and representative points of a point cloud data set; they are far fewer than the original points and play an important role in point cloud registration, surface reconstruction, and 3D human body measurement. Many extraction algorithms exist, such as NARF (Normal Aligned Radial Feature) [15], Fast Point Feature Histograms (FPFH) [16], the Scale-Invariant Feature Transform (SIFT) [17], and Harris [18]. SIFT is invariant to translation, rotation, and scaling of the point cloud [19] and is well suited to Kinect data, so this paper uses SIFT to extract feature points.

The principle of SIFT can be summarized as follows: images at adjacent scales are subtracted to obtain a difference-of-Gaussians (DoG) pyramid, which is then expanded in a Taylor series to locate the precise positions of the feature points.

A scale-variable Gaussian function G(x, y, σ) is convolved with the original image I(x, y) to obtain its scale space L(x, y, σ) at different scales:

L(x, y, σ) = G(x, y, σ) * I(x, y)   (3)

G(x, y, σ) = (1 / (2πσ²)) exp(−(x² + y²) / (2σ²))   (4)

where σ is the scale-space factor and (x, y) are the pixel coordinates of the image. Subtracting images at adjacent scales gives the difference-of-Gaussians operator D(x, y, σ), used to detect the extrema of the scale images:

D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y) = L(x, y, kσ) − L(x, y, σ)   (5)

where k is a constant distinguishing adjacent scales. The extrema of the scale images form the candidate feature points; by fitting a 3D surface, the coordinates and scale of each feature point can be determined while unstable, low-contrast candidates are discarded.
To screen the feature points, the DoG function is expanded in a Taylor series over scale space:

D(X) = D + (∂D/∂X)^T X + (1/2) X^T (∂²D/∂X²) X   (6)

where X = (x, y, σ)^T. Differentiating Eq. (6) and setting the derivative equal to zero gives the precise coordinates of the feature point:

X̂ = −(∂²D/∂X²)^{−1} (∂D/∂X)   (7)

Substituting Eq. (7) into Eq. (6) and simplifying yields:

D(X̂) = D + (1/2) (∂D/∂X)^T X̂   (8)

If an extremum satisfies |D(X̂)| ≥ ψ, it is retained as a feature point, where ψ is the threshold on the extremum offset.

After SIFT extracts the feature points of the common overlapping region of the target cloud and the cloud to be registered, the feature points of the two clouds must be matched; here the corresponding feature point sets are determined by the smallest Euclidean distance. Let the feature point set of the target cloud be P_f = {p_1, p_2, …, p_i} and that of the cloud to be registered be Q_f = {q_1, q_2, …, q_j}. For a point p_i in P_f, the Fast Library for Approximate Nearest Neighbors (FLANN) algorithm [20] finds the point q_j in Q_f with the smallest Euclidean distance to p_i, producing the corresponding feature point pair (p_i, q_j).

3.1.2 Adjusting the Point Cloud Pose with Feature Point Normal Angles

Because ICP registration places strong requirements on the initial poses of the target cloud P and the cloud to be registered Q, the initial registration stage adjusts the pose of Q by computing the angles between the normal vectors of the feature points of the two clouds. The procedure is:

(1) For each corresponding feature point pair of the two clouds, compute the normal vectors n_{p_i} and n_{q_j};
(2) Compute the cosine of the angle between the corresponding normals and compare it with a preset threshold λ:

(n_{p_i} · n_{q_j}) / (‖n_{p_i}‖ ‖n_{q_j}‖) ≥ λ   (9)

(3) If Eq. (9) does not hold, compute the angle θ between the normals with the arccosine function, estimate a rigid transformation matrix of the cloud to be registered from this angle, adjust its pose, and repeat steps (2) and (3) until Eq. (9) holds.

3.2 Fine Registration

After the initial registration, the target cloud and the cloud to be registered are in a favorable relative pose, and on this basis the ICP algorithm completes the fine registration of the two clouds. ICP computes the transformation parameters from the geometric relationship between the target cloud and the cloud to be registered, determines the corresponding rotation and translation matrices, applies them to the cloud to be registered to obtain its new position, and repeats this process until the convergence condition—minimization of the objective function [21]—is satisfied. The objective function is:

f(R, T) = (1/n) Σ_{i=1}^{n} ‖ Q'_i − (R P'_i + T) ‖²   (10)

where P'_i and Q'_i are the target cloud and the cloud to be registered obtained from the initial registration, and R and T are the rotation and translation matrices.

4 Experimental Results and Analysis

To verify the effectiveness and feasibility of the proposed algorithm, three experiments were carried out on the Stanford Bunny point cloud model and on point cloud models collected with Kinect, and the results were compared with the traditional ICP algorithm. The experiments were run on an Intel Core i5-4210U CPU @ 1.70 GHz–2.40 GHz with 8 GB RAM under 64-bit Windows 10 Professional, using Visual Studio 2019, PCL 1.8.1, and Kinect for Windows SDK v2.0, with C++ as the programming language. Figure 1 shows the flowchart of the algorithm for experiments 2 and 3.

Experiment 1 uses the classic Stanford Bunny point cloud model to verify the feasibility of the proposed algorithm; the results are shown in Fig. 2. Point clouds of the model from different angles were registered; the target cloud and the cloud to be registered are shown in Fig. 2(a) and Fig. 2(b). Fig. 2(c) and Fig. 2(d) show the registration results of the traditional ICP algorithm and of the proposed algorithm. As Fig. 2(c) shows, after registering the Bunny model directly with ICP, the two clouds are misaligned and not fully stitched together. The improved algorithm registers the model well: as Fig. 2(d) shows, both the overlap of the two clouds and the registered pose are better than in Fig. 2(c).

Figure 1. Algorithm flowchart.
Figure 2. Registration of the Stanford Bunny point cloud model: (a) target cloud; (b) cloud to be registered; (c) traditional ICP; (d) proposed algorithm.

Experiment 2 uses point clouds of the upper half of a human body collected with Kinect to verify the effectiveness of the proposed algorithm on Kinect data; the results are shown in Fig. 3. The target cloud was captured with the subject directly in front of the Kinect, as in Fig. 3(a); the cloud to be registered was captured after rotating the Kinect by 15°, as in Fig. 3(b).
Figures 3(c) and 3(d) show the results of the traditional ICP algorithm and of the proposed algorithm, respectively. The traditional ICP result is clearly poor—the arms in particular do not fully overlap—while the proposed algorithm improves the registration markedly, with the two clouds coinciding completely.

Figure 3. Registration of the human body model collected by Kinect: (a) target cloud; (b) cloud to be registered; (c) traditional ICP; (d) proposed algorithm.

Experiment 3 registers point clouds of a complex indoor scene to verify the general applicability of the proposed algorithm. The scene was captured with Kinect from different angles to obtain the target cloud and the cloud to be registered, shown in Fig. 4(a) and Fig. 4(b). Figures 4(c) and 4(d) show the registration results of the traditional ICP algorithm and of the proposed algorithm. The proposed algorithm registers the clouds better than traditional ICP: in Fig. 4(d) the indoor scene coincides completely, whereas the traditional ICP result in Fig. 4(c) is poor, with the two clouds misaligned and low registration accuracy.

Figure 4. Registration of the indoor scene collected by Kinect: (a) target cloud; (b) cloud to be registered; (c) traditional ICP; (d) proposed algorithm.

Because the point clouds collected by Kinect contain many noise points that degrade registration accuracy, the clouds were filtered for noise before registration. To improve registration efficiency, the collected clouds were downsampled with voxel grids of 1 cm³, reducing the number of points while preserving the shape features and spatial structure contained in the data. In the initial registration stage, the initial pose of the cloud to be registered is adjusted by computing the angles between feature point normals; the improved initial pose reduces the amount of rotation-matrix computation between the target cloud and the cloud to be registered in the traditional ICP stage, which lowers the number of iterations of the registration algorithm, reduces its computational complexity, and shortens the registration time.

Registration time and mean square error are used as the evaluation criteria of the registration result, as shown in Table 1. The proposed algorithm takes markedly less time than traditional ICP and also achieves a lower mean square error. For the complex scene, for example, it is 18.205 s faster than traditional ICP—an efficiency gain of about 29%—and its mean square error is 0.177 lower, an accuracy gain of about 19%. The proposed algorithm thus reduces registration time and improves registration accuracy.

Table 1. Performance comparison of the two registration algorithms

                                   Traditional ICP          Proposed algorithm
Point cloud     Original points    Time (s)    MSE (mm)     Points after simplification    Time (s)    MSE (mm)
Bunny           35 947             32.076      0.328        15 805                         9.594       0.212
Human body      49 640             39.152      0.386        13 088                         7.734       0.177
Complex scene   94 980             62.681      0.916        62 556                         44.476      0.739

5 Conclusion

To address the low registration efficiency of point clouds collected by Kinect, an improved point cloud registration algorithm based on the angle between feature point normal vectors was proposed. The algorithm reduces the number of points and adjusts the initial pose of the cloud by computing the angles between feature point normals, lowering the number of iterations of the registration algorithm and thereby improving registration efficiency. Experimental results show that, while keeping the shape features of the object's point cloud unchanged, the proposed algorithm reduces the number of points and improves registration accuracy and efficiency. However, when registering multi-view point clouds of different dimensions, the SIFT-based feature extraction and matching introduces errors and consumes considerable time, reducing both accuracy and efficiency; this is a direction for future optimization, together with fusing multi-view registration data to refine the results and complete the 3D reconstruction of objects.

References
[1] Wu JF, Jiang MT, Ma MX, et al. Kinect 3D reconstruction based on a point cloud fusion algorithm and its applications. Computer Applications and Software, 2018, 35(8): 260–264. [doi: 10.3969/j.issn.1000-386x.2018.08.047]
[2] Shu H. Application of 3D reconstruction and 3D printing in the restoration of the Terracotta Warriors. Electronic Science & Technology, 2017, 4(4): 160–163. [doi: 10.16453/j.issn.2095-8595.2017.04.036]
[3] Lu JF, Wang ZP, Jin HJ. Application of 3D laser scanning and virtual reality technology in urban landscape. Laser Journal, 2019, 40(7): 174–178. [doi: 10.14016/j.cnki.jgzz.2019.07.174]
[4] Xie HY, Zhong YQ. Structure-consistent customized virtual mannequin reconstruction from 3D scans based on optimization. Textile Research Journal, 2020, 90(7–8): 937–950. [doi: 10.1177/0040517519883957]
[5] Zhang DH, Li YQ, Xie GZ, et al. Application of 3D optical scanning technology in reverse engineering. Journal of Applied Optics, 2015, 36(4): 519–525. [doi: 10.5768/JAO201536.0401005]
[6] Besl PJ, McKay ND. A method for registration of 3-D shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1992, 14(2): 239–256. [doi: 10.1109/34.121791]
[7] Du SY, Liu J, Zhang CJ, et al. Probability iterative closest point algorithm for m-D point set registration with noise. Neurocomputing, 2015, 157: 187–198. [doi: 10.1016/j.neucom.2015.01.019]
[8] Marani R, Renò V, Nitti M, et al. A modified iterative closest point algorithm for 3D point cloud registration. Computer-Aided Civil and Infrastructure Engineering, 2016, 31(7): 515–534. [doi: 10.1111/mice.12184]
[9] Tazir ML, Gokhool T, Checchin P, et al. CICP: Cluster iterative closest point for sparse-dense point cloud registration. Robotics and Autonomous Systems, 2018, 108: 66–86. [doi: 10.1016/j.robot.2018.07.003]
[10] Chen ECS, McLeod AJ, Baxter JSH, et al. Registration of 3D shapes under anisotropic scaling: Anisotropic-scaled iterative closest point algorithm. International Journal of Computer Assisted Radiology and Surgery, 2015, 10(6): 867–878. [doi: 10.1007/s11548-015-1199-9]
[11] Zhao L, Shen XK, Long X. Robust wrinkle-aware non-rigid registration for triangle meshes of hand with rich and dynamic details. Computers & Graphics, 2012, 36(5): 577–583. [doi: 10.1016/j.cag.2012.03.035]
[12] Nitzan D, Brain AE, Duda RO. The measurement and use of registered reflectance and range data in scene analysis. Proceedings of the IEEE, 1977, 65(2): 206–220. [doi: 10.1109/PROC.1977.10458]
[13] Wang H, Wang TQ, Li Y. Research on 3D point cloud registration using Kinect depth information. Computer Engineering and Applications, 2016, 52(12): 153–157. [doi: 10.3778/j.issn.1002-8331.1407-0506]
[14] Ma J, Wang XJ, Ma PF, et al. Deep learning point cloud classification network integrating kd-tree neighborhood query. Journal of Shenzhen University (Science and Engineering), 2020, 37(1): 79–83.
[15] Steder B, Rusu RB, Konolige K, et al. Point feature extraction on 3D range scans taking into account object boundaries. Proceedings of the 2011 IEEE International Conference on Robotics and Automation. Shanghai, China. 2011. 2601–2608.
[16] Rusu RB, Blodow N, Beetz M. Fast Point Feature Histograms (FPFH) for 3D registration. Proceedings of the 2009 IEEE International Conference on Robotics and Automation. Kobe, Japan. 2009. 3212–3217.
[17] Lowe DG. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 2004, 60(2): 91–110. [doi: 10.1023/B:VISI.0000029664.99615.94]
[18] Sipiran I, Bustos B. A robust 3D interest points detector based on Harris operator. Proceedings of the 3rd Eurographics Conference on 3D Object Retrieval. Norrköping, Sweden. 2010. 7–14.
[19] Sun PQ, Bu JZ, Tao TY, et al. Point cloud registration algorithm based on feature point normal vectors. Bulletin of Surveying and Mapping, 2019, (8): 48–53. [doi: 10.13474/j.cnki.11-2246.2019.0250]
[20] Wang JL, Zhou ZF. Research on SIFT image feature extraction and FLANN matching algorithm. Computer Measurement & Control, 2018, 26(2): 175–178. [doi: 10.16526/j.cnki.11-4762/tp.2018.02.043]
[21] Yang F, Tang WZ, Wu H. Automatic fine registration of point clouds with an improved iterative closest point algorithm. Remote Sensing Information, 2018, 33(2): 40–45. [doi: 10.3969/j.issn.1000-3177.2018.02.006]

A Fast and Accurate Plane Detection Algorithm for Large Noisy Point Clouds Using Filtered Normals and Voxel Growing

Jean-Emmanuel Deschaud    François Goulette
Mines ParisTech, CAOR-Centre de Robotique, Mathématiques et Systèmes
60 Boulevard Saint-Michel, 75272 Paris Cedex 06
jean-emmanuel.deschaud@mines-paristech.fr    francois.goulette@mines-paristech.fr

Abstract

With the improvement of 3D scanners, we produce point clouds with more and more points, often exceeding millions of points. We then need a fast and accurate plane detection algorithm to reduce the data size. In this article, we present a fast and accurate algorithm that detects planes in unorganized point clouds using filtered normals and voxel growing. Our work is based on a first step that estimates better normals at the data points, even in the presence of noise. In a second step, we compute a score of local plane fit at each point. We then select the best local seed plane and, in a third step, start a fast and robust region growing by voxels that we call voxel growing. We have evaluated and tested our algorithm on different kinds of point clouds and compared its performance to other algorithms.

1. Introduction

With the growing availability of 3D scanners, we are now able to produce large datasets with millions of points. It is necessary to reduce the data size, to decrease the noise, and at the same time to increase the quality of the model. It is interesting to model the planar regions of these point clouds by planes. In fact, plane detection is generally a first step of segmentation, but it can be used for many applications. It is useful in computer graphics to model the environment with basic geometry. It is used, for example, in modeling to detect building facades before classification. Robots perform Simultaneous Localization and Mapping (SLAM) by detecting planes of the environment. In our laboratory, we wanted to detect small and large building planes in point clouds of urban environments with millions of points for modeling.
As mentioned in [6], the accuracy of the plane detection is important for later steps of the modeling pipeline. We also want to be fast, to be able to process point clouds with millions of points. We present a novel algorithm based on region growing, with improvements in normal estimation and in the growing process. Our method is generic enough to work on different kinds of data, such as point clouds from fixed scanners or from Mobile Mapping Systems (MMS). We also aim at detecting building facades in urban point clouds, or small planes like doors, even in very large data sets. Our input is an unorganized noisy point cloud, and with only three "intuitive" parameters we generate a set of connected components of planar regions. We evaluate our method as well as explain and analyse the significance of each parameter.

2. Previous Works

Although there are many methods of segmentation in range images, as in [10] or [3], three have been thoroughly studied for 3D point clouds: region growing, the Hough transform [14], and Random Sample Consensus (RANSAC) [9].

The application of recognising structures in urban laser point clouds is frequent in the literature. Bauer in [4] and Boulaassal in [5] detect facades in dense 3D point clouds with a RANSAC algorithm. Vosselman in [23] reviews surface growing and 3D Hough transform techniques for detecting geometric shapes. Tarsh-Kurdi in [22] detects roof planes in 3D building point clouds by comparing results of the Hough transform and RANSAC algorithms; they found that RANSAC is more efficient than the first one. Chao Chen in [6] and Yu in [25] present segmentation algorithms in range images for the same application of detecting planar regions in an urban scene. The method in [6] is based on a region growing algorithm in range images and merges the results into one labelled 3D point cloud. [25] uses a method different from the three we have cited: they extract a hierarchical subdivision of the input image built like a graph, where leaf nodes represent planar regions. There are also other methods like
Bayesian techniques. In [16] and [8], smoothed surfaces are obtained from noisy point clouds by modeling objects with probability distributions, and it seems possible to extend this idea to point cloud segmentation. But techniques based on Bayesian statistics need to optimize a global statistical model, and it is then difficult to process point clouds larger than one million points.

We present below an analysis of the two main methods used in the literature: RANSAC and region growing. The Hough transform algorithm is too time-consuming for our application. To compare the complexity of the algorithms, we take a point cloud of size N with only one plane P of size n. We suppose that we want to detect this plane P, and we define n_min as the minimum size of the planes we want to detect. The size of a plane is its area; if the data density is uniform in the point cloud, then the size of a plane can be specified by its number of points.

2.1. RANSAC

RANSAC is an algorithm initially developed by Fischler and Bolles in [9] that allows the fitting of models without trying all possibilities. RANSAC is based on the probability of detecting a model using the minimal set required to estimate it. To detect a plane with RANSAC, we choose 3 random points (enough to estimate a plane) and compute the plane parameters from them. A score function is then used to determine how good the model is for the remaining points. Usually, the score is the number of points belonging to the plane. With noise, a point belongs to a plane if the distance from the point to the plane is less than a parameter γ. In the end, we keep the plane with the best score. The probability of getting the plane in the first trial is p = (n/N)^3, so the probability of getting it in T trials is p = 1 − (1 − (n/N)^3)^T. Using Equation 1 and supposing n_min/N ≪ 1, we know the minimal number of trials T_min needed to have a probability p_t of getting planes of size at least n_min:

T_min = log(1 − p_t) / log(1 − (n_min/N)^3) ≈ log(1/(1 − p_t)) (N/n_min)^3.   (1)

For each trial, we test all data points to compute the
score of a plane. The RANSAC algorithm complexity therefore lies in O(N (N/n_min)^3) when n_min/N ≪ 1, and T_min → 0 when n_min → N. RANSAC is thus very efficient at detecting large planes in noisy point clouds, i.e. when the ratio n_min/N is close to 1, but very slow at detecting small planes in large point clouds, i.e. when n_min/N ≪ 1. After selecting the best model, a further step is to extract the largest connected component of each plane. "Connected component" means that the minimum distance between each point of the plane and the other points is smaller than a fixed parameter.

Schnabel et al. [20] bring two optimizations to RANSAC: the point selection is done locally, and the score function is improved. An octree is first created from the point cloud. Points used to estimate the plane parameters are chosen locally at a random depth of the octree. The score function also differs from RANSAC: instead of testing all points for one model, they test only a random subset and find the score by interpolation. The algorithm complexity lies in O(N r 4^d N/n_min), where r is the number of random subsets for the score function and d is the maximum octree depth.
Their algorithm improves the plane detection speed, but its complexity lies in O(N²) and it becomes slow on large data sets. And again, we have to extract the largest connected component of each plane.

2.2. Region Growing

Region growing algorithms work well in range images, as in [18]. The principle of region growing is to start with a seed region and to grow it by neighborhood as long as the neighbors satisfy some conditions. In range images, the neighbors of each point are given by the pixel coordinates. In the case of unorganized 3D data, there is no information about the neighborhood in the data structure. The most common method to compute neighbors in 3D is to build a Kd-tree and search the k nearest neighbors. The creation of a Kd-tree lies in O(N log N), and the search of the k nearest neighbors of one point lies in O(log N). The advantage of these region growing methods is that they are fast when there are many planes to extract, robust to noise, and extract the largest connected component immediately. But they only use the distance from point to plane to extract planes and, as we will see later, this is not accurate enough to detect correct planar regions. Rabbani et al. [19] developed a method of smooth area detection that can be used for plane detection. They first estimate the normal of each point as in [13]. The point with the minimum residual starts the region growing. They test the k nearest neighbors of the last point added: if the angle between the normal of the point and the current normal of the plane is smaller than a parameter α, then they add this point to the smooth region. With a Kd-tree for the k nearest neighbors, the algorithm complexity is in O(N + n log N).
The complexity seems low, but in the worst case, when n/N ≈ 1 (for example for facade detection in point clouds), the complexity becomes O(N log N).

3. Voxel Growing

3.1. Overview

In this article, we present a new algorithm adapted to large data sets of unorganized 3D points and optimized to be accurate and fast. Our plane detection method works in three steps. In the first part, we compute a better estimation of the normal at each point by a filtered weighted plane fitting. In a second step, we compute the score of local planarity at each point. We select the best seed point, which represents a good seed plane, and in the third part we grow this seed plane by adding all points close to the plane. The growing step is based on a voxel growing algorithm. The filtered normals, the score function, and the voxel growing are the innovative contributions of our method.

As input, we need dense point clouds related to the level of detail we want to detect. As output, we produce the connected components of the planes in the point cloud. This notion of connected components is linked to the data density. With our method, the connected components of the detected planes are linked to the parameter d of the voxel grid.

Our method has 3 "intuitive" parameters: d, area_min, and γ—"intuitive" because they are linked to physical measurements. d is the voxel size used in voxel growing and also represents the connectivity of points in detected planes. γ is the maximum distance between a point of a plane and the plane model; it represents the plane thickness and is linked to the point cloud noise. area_min represents the minimum area of the planes we want to keep.

3.2. Details

3.2.1 Local Density of Point Clouds

In a first step, we compute the local density of the point cloud as in [17]. For that, we find the radius r_i of the sphere containing the k nearest neighbors of point i. Then we calculate ρ_i = k/(π r_i²). In our experiments, we find that k = 50 is a good number of neighbors. It is important to know the local density because many laser point clouds are made
with a fixed resolution angle scanner and are therefore not evenly distributed. We use the local density in Section 3.2.3 for the score calculation.

3.2.2 Filtered Normal Estimation

Normal estimation is an important part of our algorithm. The paper [7] presents and compares three normal estimation methods. They conclude that weighted plane fitting (WPF) is the fastest and the most accurate for large point clouds. WPF is an idea of Pauly et al. in [17]: the fitting plane of a point p must take into consideration the nearby points more than the distant ones. The normal least squares fit is explained in [21] and is the minimum of Σ_{i=1}^{k} (n_p · p_i + d)². WPF is the minimum of Σ_{i=1}^{k} ω_i (n_p · p_i + d)², where ω_i = θ(‖p_i − p‖) and θ(r) = e^{−2r²/r_i²}. To solve for n_p, we compute the eigenvector corresponding to the smallest eigenvalue of the weighted covariance matrix C_w = Σ_{i=1}^{k} ω_i (p_i − b_w)(p_i − b_w)^T, where b_w is the weighted barycenter. With the three methods explained in [7], we get a good approximation of the normals in smooth areas, but we have errors at sharp corners. In Figure 1, we have tested the weighted normal estimation on two planes with uniform noise forming an angle of 90°. We can see that the normal is not correct at the corners of the planes and in the red circle.

To improve the normal calculation, which improves the plane detection especially at the borders of planes, we propose a filtering process in two phases. In a first step, we compute the weighted normals (WPF) of each point as described above, by minimizing Σ_{i=1}^{k} ω_i (n_p · p_i + d)². In a second step, we compute the filtered normal using an adaptive local neighborhood: we compute the new weighted normal with the same sum minimization, but keeping only the points of the neighborhood whose normals from the first step satisfy |n_p · n_i| > cos(α). With this filtering step, we get the same results in smooth areas and better results at sharp corners. We call our normal estimation filtered weighted plane
fitting (FWPF).

Figure 1. Weighted normal estimation of two planes with uniform noise and a 90° angle between them.

We have tested our normal estimation by computing normals on synthetic data with two planes at different angles between them and with different values of the parameter α. Figure 2 shows the mean error in normal estimation for WPF and FWPF with α = 20°, 30°, 40°, and 90°. Using α = 90° is the same as not doing the filtering step. We see in Figure 2 that α = 20° gives a smaller error in normal estimation when the angle between the planes is smaller than 60°, and α = 30° gives the best results when the angle between the planes is greater than 60°. We have considered the value α = 30° as the best, because it gives the smallest mean error in normal estimation when the angle between the planes varies from 20° to 90°. Figure 3 shows the normals of the planes with the 90° angle and better results in the red circle (normals are at 90° to the plane).

Figure 2. Comparison of the mean error in normal estimation of two planes with α = 20°, 30°, 40°, and 90° (= no filtering).
Figure 3. Filtered weighted normal estimation of two planes with uniform noise and a 90° angle between them (α = 30°).

3.2.3 The Score of Local Planarity

In many region growing algorithms, the criterion used for the score of the local fitting plane is the residual, as in [18] or [19], i.e. the sum of the squared distances from the points to the plane. We use a different score function to estimate local planarity. For that, we first compute the neighbors N_i of a point p: the points i whose normals n_i are close to the normal n_p. More precisely, we compute N_i = {p in k neighbors of i / |n_i · n_p| > cos(α)}. It is a way to keep only the points that are probably on the local plane before the least squares fitting. Then, we compute the local plane fit of point p with the neighbors N_i by least squares, as in [21]. The set N'_i is the subset of N_i of points belonging to the plane, i.e. the points for which the distance to the local plane is smaller than the parameter γ (to consider the
noise). The score s of the local plane is the area of the local plane, i.e. the number of points "in" the plane divided by the local density ρ_i (see Section 3.2.1): s = card(N'_i)/ρ_i. We take the area of the local plane as the score function, and not the number of points or the residual, in order to be more robust to the sampling distribution.

3.2.4 Voxel Decomposition

We use a data structure that is the core of our region growing method: a voxel grid that speeds up the plane detection process. Voxels are small cubes of length d that partition the point cloud space. Every data point belongs to a voxel, and a voxel contains a list of points. We use the Octree Class Template in [2] to compute an octree of the point cloud; the leaf nodes of the graph built are voxels of size d. Once the voxel grid has been computed, we start the plane detection algorithm.

3.2.5 Voxel Growing

With the estimator of local planarity, we take the point p with the best score, i.e. the point with the maximum area of local plane. We have the model parameters of this best seed plane, and we start with an empty set E of points belonging to the plane. The initial point p is in a voxel v_0. All the points in the initial voxel v_0 whose distance from the seed plane is less than γ are added to the set E. Then, we compute new plane parameters by least squares refitting with the set E. Instead of growing with the k nearest neighbors, we grow with voxels: we test the points in the 26 neighboring voxels. This is a way to search the neighborhood in constant time instead of O(log N) for each neighbor as with a Kd-tree. In a neighboring voxel, we add to E the points for which the distance to the current plane is smaller than γ and the angle between the normal computed at each point and the normal of the plane is smaller than a parameter α: |cos(n_p, n_P)| > cos(α), where n_p is the normal of the point p and n_P is the normal of the plane P. We have tested different values of α, and we empirically found that 30° is a
good value for all point clouds. If we added at least one point to E for this voxel, we compute new plane parameters from E by least squares fitting and test its 26 neighboring voxels. It is important to perform the plane least squares fitting at each voxel addition because, with noise, the seed plane model is not good enough to be used throughout the voxel growing, but only in the surrounding voxels. This growing process is faster than classical region growing because we do not compute a least squares fit for each point added, but only for each voxel added.

The least squares fitting step must be computed very fast. We use the same method as explained in [18], with an incremental update of the barycenter b and covariance matrix C as in Equation 2. We know from [21] that the barycenter b belongs to the least squares plane and that the normal of the least squares plane n_P is the eigenvector of the smallest eigenvalue of C.

b_0 = 0_{3×1},  C_0 = 0_{3×3},
b_{n+1} = (1/(n+1)) (n b_n + p_{n+1}),
C_{n+1} = C_n + (n/(n+1)) (p_{n+1} − b_n)(p_{n+1} − b_n)^T.   (2)

where C_n is the covariance matrix of a set of n points, b_n is the barycenter vector of the set, and p_{n+1} is the (n+1)-th point vector added to the set.

This voxel growing method leads to a connected component set E because the points have been added through connected voxels. In our case, the minimum distance between one point and E is less than the parameter d of our voxel grid. That is why the parameter d also represents the connectivity of points in detected planes.

3.2.6 Plane Detection

To get all planes with an area of at least area_min in the point cloud, we repeat these steps (best local seed plane choice and voxel growing) over all points in descending order of their score. Once we have a set E whose area is bigger than area_min, we keep it and classify all points in E.

4. Results and Discussion

4.1. Benchmark Analysis

To test the improvements of our method, we have employed the comparative framework of [12] based on range images. For that, we have converted all images into 3D point clouds. All the point clouds created have 260k points.
After our segmentation, we project the labelled points onto a segmented image and compare it with the ground truth image. We have chosen our three parameters d, area_min, and γ by optimizing the segmentation result on the 10 perceptron training images (the perceptron is a portable scanner that produces a range image of its environment). The best results have been obtained with area_min = 200, γ = 5, and d = 8 (units are not provided in the benchmark). We show the results of the segmentation of the 30 perceptron images in Table 1. GT Regions is the mean number of ground truth planes over the 30 ground truth range images. Correct detection, over-segmentation, under-segmentation, missed, and noise are the mean numbers of correctly detected, over-segmented, under-segmented, missed, and noise planes for each method. The tolerance of 80% is the minimum percentage of points we must have detected, compared with the ground truth, to count a correct detection. More details are in [12]. UE is a method from [12]; UFPR is a method from [10]. It is important to notice that UE and UFPR are range image methods, and our method is not specially suited for range images but for 3D point clouds. Nevertheless, it is a good benchmark for comparison, and we see in Table 1 that the accuracy of our method is very close to the state of the art in range image segmentation.

To evaluate the different improvements of our algorithm, we have tested several variants of our method: without normals (only with the distance from points to the plane), without voxel growing (with a classical region growing by k neighbors), without our FWPF normal estimation (with WPF normal estimation), and without our score function (with the residual score function). The comparison is visible in Table 2. We can see the difference in computing time between region growing and voxel growing. We have tested our algorithm with and without normals, and we found that the accuracy cannot be achieved without normal computation. There is also a big difference in the correct detection between WPF and our FWPF normal estimation, as we
can see in figure 4. Our FWPF normal brings a real improvement in border estimation of planes. Black points in the figure are non-classified points.

Figure 5. Correct detection of our segmentation algorithm when the voxel size d changes.

We would like to discuss the influence of parameters on our algorithm. We have three parameters: area_min, which represents the minimum area of the plane we want to keep; γ, which represents the thickness of the plane (it is generally closely tied to the noise in the point cloud, and especially the standard deviation σ of the noise); and d, which is the minimum distance from a point to the rest of the plane. These three parameters depend on the point cloud features and the desired segmentation. For example, if we have a lot of noise, we must choose a high γ value. If we want to detect only large planes, we set a large area_min value. We also focus our analysis on the robustness of the voxel size d in our algorithm, i.e. the ratio of points vs voxels. We can see in figure 5 the variation of the correct detection when we change the value of d. The method seems to be robust when d is between 4 and 10, but the quality decreases when d is over 10. This is due to the fact that, for a large voxel size d, some planes from different objects are merged into one plane.

Table 1. Average results of different segmenters at 80% compare tolerance.
Method       GT Regions  Correct detection  Over-segmentation  Under-segmentation  Missed  Noise  Duration (s)
UE           14.6        10.0               0.2                0.3                 3.8     2.1    -
UFPR         14.6        11.0               0.3                0.1                 3.0     2.5    -
Our method   14.6        10.9               0.2                0.1                 3.3     0.7    308

Table 2. Average results of variants of our segmenter at 80% compare tolerance.
Variant (our method)         GT Regions  Correct detection  Over-segmentation  Under-segmentation  Missed  Noise  Duration (s)
without normals              14.6        5.67               0.1                0.1                 9.4     6.5    70
without voxel growing        14.6        10.7               0.2                0.1                 3.4     0.8    605
without FWPF                 14.6        9.3                0.2                0.1                 5.0     1.9    195
without our score function   14.6        10.3               0.2                0.1                 3.9     1.2    308
with all improvements        14.6        10.9               0.2                0.1                 3.3     0.7    308

4.1.1 Large scale data

We have tested our method on different kinds of data. We have
segmented urban data in figure 6 from our Mobile Mapping System (MMS) described in [11]. The mobile system generates 10k pts/s with a density of 50 pts/m² and very noisy data (σ = 0.3 m). For this point cloud, we want to detect building facades. We have chosen area_min = 10 m², d = 1 m to have large connected components, and γ = 0.3 m to cope with the noise. We have tested our method on a point cloud from the Trimble VX scanner in figure 7. It is a point cloud of 40k points with only 20 pts/m², with less noise because it is a fixed scanner (σ = 0.2 m). In that case, we also wanted to detect building facades and kept the same parameters except γ = 0.2 m, because we had less noise. We see in figure 7 that we have detected two facades. By setting a larger voxel size d value like d = 10 m, we detect only one plane. We choose d, like area_min and γ, according to the desired segmentation and to the level of detail we want to extract from the point cloud. We also tested our algorithm on the point cloud from the LEICA Cyrax scanner in figure 8. This point cloud has been taken from the AIM@SHAPE repository [1]. It is a very dense point cloud from multiple fixed positions of the scanner, with about 400 pts/m² and very little noise (σ = 0.02 m). In this case, we wanted to detect all the little planes to model the church in planar regions. That is why we have chosen d = 0.2 m, area_min = 1 m² and γ = 0.02 m. In figures 6, 7 and 8 we have, on the left, the input point cloud and, on the right, only the points detected in a plane (planes are in random colors). The red points in these figures are seed plane points. We can see in these figures that planes are very well detected even with high noise.
Table 3 shows the information on the point clouds and the results, with the number of planes detected and the duration of the algorithm. The time includes the computation of the FWPF normals of the point cloud. We can see in table 3 that our algorithm performs linearly in time with respect to the number of points. The choice of parameters has little influence on computing time. The computation time is about one millisecond per point whatever the size of the point cloud (we used a PC with a QuadCore Q9300 and 2 GB of RAM). The algorithm has been implemented using only one thread and in-core processing. Our goal is to compare the improvement of plane detection between classical region growing and our region growing, with better normals for more accurate planes and voxel growing for faster detection. Our method seems to be compatible with out-of-core implementations like those described in [24] or in [15].

Table 3. Results on different data.
                    MMS Street   VX Street   Church
Size (points)       398k         42k         7.6M
Mean density        50 pts/m²    20 pts/m²   400 pts/m²
Number of planes    20           21          42
Total duration      452 s        33 s        6900 s
Time/point          1 ms         1 ms        1 ms

5. Conclusion

In this article, we have proposed a new method of plane detection that is fast and accurate even in presence of noise.
We demonstrate its efficiency with different kinds of data and its speed on large data sets with millions of points. Our voxel growing method has a complexity of O(N); it is able to detect large and small planes in very large data sets and can extract them directly as connected components.

Figure 4. Ground truth; our segmentation without and with filtered normals.
Figure 6. Plane detection in a street point cloud generated by MMS (d = 1 m, area_min = 10 m², γ = 0.3 m).

References
[1] Aim@shape repository.
[2] Octree class template. /code/octree.html.
[3] A. Bab-Hadiashar and N. Gheissari. Range image segmentation using surface selection criterion. 2006. IEEE Transactions on Image Processing.
[4] J. Bauer, K. Karner, K. Schindler, A. Klaus, and C. Zach. Segmentation of building models from dense 3d point-clouds. 2003. Workshop of the Austrian Association for Pattern Recognition.
[5] H. Boulaassal, ndes, P. Grussenmeyer, and F. Tarsha-Kurdi. Automatic segmentation of building facades using terrestrial laser data. 2007. ISPRS Workshop on Laser Scanning.
[6] C. C. Chen and I. Stamos. Range image segmentation for modeling and object detection in urban scenes. 2007. 3DIM 2007.
[7] T. K. Dey, G. Li, and J. Sun. Normal estimation for point clouds: A comparison study for a voronoi based method. 2005. Eurographics Symposium on Point-Based Graphics.
[8] J. R. Diebel, S. Thrun, and M. Brunig. A bayesian method for probable surface reconstruction and decimation. 2006. ACM Transactions on Graphics (TOG).
[9] M. A. Fischler and R. C. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM.
[10] P. F. U. Gotardo, O. R. P. Bellon, and L. Silva. Range image segmentation by surface extraction using an improved robust estimator. 2003. Proceedings of Computer Vision and Pattern Recognition.
[11] F. Goulette, F. Nashashibi, I. Abuhadrous, S. Ammoun, and urgeau. An integrated on-board laser range sensing system for on-the-way city and road modelling. 2007. International Archives of the
Photogrammetry, Remote Sensing and Spatial Information Sciences.
[12] A. Hoover, G. Jean-Baptiste, et al. An experimental comparison of range image segmentation algorithms. 1996. IEEE Transactions on Pattern Analysis and Machine Intelligence.
[13] H. Hoppe, T. DeRose, T. Duchamp, J. McDonald, and W. Stuetzle. Surface reconstruction from unorganized points. 1992. International Conference on Computer Graphics and Interactive Techniques.
[14] P. Hough. Method and means for recognizing complex patterns. 1962. US Patent.
[15] M. Isenburg, P. Lindstrom, S. Gumhold, and J. Snoeyink. Large mesh simplification using processing sequences. 2003.

Point Cloud Registration Methods
Outline
Two point cloud sets A and B are the same set of points expressed in different coordinate systems
Rigid transformation:
Objective function:
Or
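The rigid-registration objective, min over R and t of Σ_i ||R a_i + t − b_i||², has a well-known closed-form solution via SVD (the Kabsch/Umeyama method). The slides' own equations are not reproduced in this extract, so the following is a generic sketch assuming numpy:

```python
import numpy as np

def rigid_transform(A, B):
    """Closed-form least-squares R, t minimizing sum ||R a_i + t - b_i||^2
    (Kabsch/Umeyama).  A and B are (n, 3) arrays of corresponding points."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)              # 3x3 cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflections
    R = Vt.T @ S @ U.T
    t = cb - R @ ca
    return R, t
```

With exact correspondences this recovers the ground-truth rigid motion up to numerical precision, which is why it is the standard inner solver inside ICP.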
1. Point Cloud Registration With Target Control
Rotation matrix:
1. Point Cloud Registration With Target Control
Taylor expansion: Error equation:
Figure 1. Different kinds of planar targets
1. Point Cloud Registration With Target Control
6 spatial similarity transformation parameters:
3 angle elements;
3 translation elements.
Adjustment model:
What is a quaternion? Unit quaternions
Rotation matrix R
3. Quaternion Method
Define the translation vector:
Complete registration station:
Assume that: minimize the objective function:
3. Quaternion Method
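The rotation matrix R built from a unit quaternion q = (w, x, y, z) follows the standard expansion; as a sketch (with numpy, not tied to any particular slide):

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix for a unit quaternion q = (w, x, y, z);
    q is normalized first so any nonzero quaternion is accepted."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
```

For example, q = (cos 45°, 0, 0, sin 45°) yields the 90° rotation about the z-axis, and the result is always orthogonal with determinant +1.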

3D is here: Point Cloud Library (PCL)


3D is here: Point Cloud Library (PCL)
Radu Bogdan Rusu and Steve Cousins
Willow Garage, 68 Willow Rd., Menlo Park, CA 94025, USA, {rusu,cousins}@

Abstract: With the advent of new, low-cost 3D sensing hardware such as the Kinect, and continued efforts in advanced point cloud processing, 3D perception gains more and more importance in robotics, as well as other fields. In this paper we present one of our most recent initiatives in the area of point cloud perception: PCL (Point Cloud Library). PCL presents an advanced and extensive approach to the subject of 3D perception, and it is meant to provide support for all the common 3D building blocks that applications need. The library contains state-of-the-art algorithms for: filtering, feature estimation, surface reconstruction, registration, model fitting and segmentation. PCL is supported by an international community of robotics and perception researchers. We provide a brief walkthrough of PCL including its algorithmic capabilities and implementation strategies.

I. INTRODUCTION

For robots to work in unstructured environments, they need to be able to perceive the world. Over the past 20 years, we've come a long way, from simple range sensors based on sonar or IR providing a few bytes of information about the world, to ubiquitous cameras, to laser scanners. In the past few years, sensors like the Velodyne spinning LIDAR used in the DARPA Urban Challenge and the tilting laser scanner used on the PR2 have given us high-quality 3D representations of the world: point clouds. Unfortunately, these systems are expensive, costing thousands or tens of thousands of dollars, and therefore out of the reach of many robotics projects. Very recently, however, 3D sensors have become available that change the game. For example, the Kinect sensor for the Microsoft XBox 360 game system, based on underlying technology from PrimeSense, can be purchased for under $150, and provides real-time point clouds as well as 2D images. As a result, we can expect that most robots in the future will be able
to "see" the world in 3D. All that's needed is a mechanism for handling point clouds efficiently, and that's where the open source Point Cloud Library, PCL, comes in. Figure 1 presents the logo of the project. PCL is a comprehensive free, BSD-licensed library for n-D point clouds and 3D geometry processing. PCL is fully integrated with ROS, the Robot Operating System, and has already been used in a variety of projects in the robotics community.

II. ARCHITECTURE AND IMPLEMENTATION

PCL is a fully templated, modern C++ library for 3D point cloud processing. Written with efficiency and performance in mind on modern CPUs, the underlying data structures in PCL make heavy use of SSE optimizations. Most mathematical operations are implemented with and based on Eigen, an open-source template library for linear algebra [1]. In addition, PCL provides support for OpenMP and the Intel Threading Building Blocks (TBB) library [2] for multi-core parallelization. The backbone for fast k-nearest neighbor search operations is provided by FLANN (Fast Library for Approximate Nearest Neighbors) [3]. All the modules and algorithms in PCL pass data around using Boost shared pointers (see Figure 2), thus avoiding the need to re-copy data that is already present in the system. As of version 0.6, PCL has been ported to Windows, MacOS, and Linux, and Android ports are in the works.

Fig. 1. The Point Cloud Library logo.

From an algorithmic perspective, PCL is meant to incorporate a multitude of 3D processing algorithms that operate on point cloud data, including: filtering, feature estimation, surface reconstruction, model fitting, segmentation, registration, etc. Each set of algorithms is defined via base classes that attempt to integrate all the common functionality used throughout the entire pipeline, thus keeping the implementations of the actual algorithms compact and clean. The basic interface for such a processing pipeline in PCL is:
• create the processing object (e.g., filter, feature estimator, segmentation);
• use setInputCloud to
pass the input point cloud dataset to the processing module;
• set some parameters;
• call compute (or filter, segment, etc.) to get the output.

The sequence of pseudo-code presented in Figure 2 shows a standard feature estimation process in two steps, where a NormalEstimation object is first created and passed an input dataset, and the results together with the original input are then passed to an FPFH [4] estimation object.

Fig. 2. An example of the PCL implementation pipeline for Fast Point Feature Histogram (FPFH) [4] estimation.

To further simplify development, PCL is split into a series of smaller code libraries that can be compiled separately:
• libpcl_filters: implements data filters such as downsampling, outlier removal, indices extraction, projections, etc.;
• libpcl_features: implements many 3D features such as surface normals and curvatures, boundary point estimation, moment invariants, principal curvatures, PFH and FPFH descriptors, spin images, integral images, NARF descriptors, RIFT, RSD, VFH, SIFT on intensity data, etc.;
• libpcl_io: implements I/O operations such as writing to/reading from PCD (Point Cloud Data) files;
• libpcl_segmentation: implements cluster extraction, model fitting via sample consensus methods for a variety of parametric models (planes, cylinders, spheres, lines, etc.), polygonal prism extraction, etc.;
• libpcl_surface: implements surface reconstruction techniques, meshing, convex hulls, Moving Least Squares, etc.;
• libpcl_registration: implements point cloud registration methods such as ICP, etc.;
• libpcl_keypoints: implements different keypoint extraction methods that can be used as a preprocessing step to decide where to extract feature descriptors;
• libpcl_range_image: implements support for range images created from point cloud datasets.

To ensure the correctness of operations in PCL, the methods and classes in each of the above mentioned libraries contain unit and regression tests. The suite of unit tests is compiled on demand and verified frequently by a dedicated build farm, and the
respective authors of a specific component are informed immediately when that component fails its tests. This ensures that any changes in the code are tested thoroughly and any new functionality or modification will not break already existing code that depends on PCL. In addition, a large number of examples and tutorials are available either as C++ source files, or as step-by-step instructions on the PCL wiki web pages.

III. PCL AND ROS

One of the cornerstones of the PCL design philosophy is represented by Perception Processing Graphs (PPG). The rationale behind PPGs is that most applications that deal with point cloud processing can be formulated as a concrete set of building blocks that are parameterized to achieve different results. For example, there is no algorithmic difference between a wall detection algorithm, a door detection, or a table detection; all of them share the same building block, which is in this case a constrained planar segmentation algorithm. What changes in the above mentioned cases is a subset of the parameters used to run the algorithm. With this in mind, and based on the previous experience of designing other 3D processing libraries, and most recently ROS, we decided to make each algorithm from PCL available as a standalone building block that can be easily connected with other blocks, thus creating processing graphs, in the same way that nodes connect together in a ROS ecosystem.
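The create / setInputCloud / set-parameters / compute interface described above can be illustrated with a toy stand-in (Python with numpy, purely to show the pattern; PCL's real interface is the templated C++ one described in the text):

```python
import numpy as np

class NormalEstimation:
    """Toy stand-in for a PCL-style processing object: construct it,
    pass data with set_input_cloud, set parameters, then call compute.
    It mimics the interface pattern only, not PCL's actual C++ API."""
    def __init__(self):
        self.cloud, self.k = None, 10

    def set_input_cloud(self, cloud):
        self.cloud = np.asarray(cloud, float)

    def set_k_search(self, k):
        self.k = k

    def compute(self):
        normals = []
        for p in self.cloud:
            d = np.linalg.norm(self.cloud - p, axis=1)
            nbrs = self.cloud[np.argsort(d)[:self.k]]
            # normal = eigenvector of the smallest eigenvalue of the
            # neighborhood covariance matrix
            w, v = np.linalg.eigh(np.cov(nbrs.T))
            normals.append(v[:, 0])
        return np.array(normals)
```

Usage follows the four-step pipeline exactly: `ne = NormalEstimation(); ne.set_input_cloud(cloud); ne.set_k_search(5); normals = ne.compute()`.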
Furthermore, because point clouds are extremely large in nature, we wanted to guarantee that there would be no unnecessary data copying or serialization/deserialization for critical applications that can afford to run in the same process. For this we created nodelets, which are dynamically loadable plugins that look and operate like ROS nodes, but in a single process (as single or multiple threads). A concrete nodelet PPG example for the problem of identifying a set of point clusters supported by horizontal planar areas is shown in Figure 3.

Fig. 3. A ROS nodelet graph for the problem of object clustering on planar surfaces.

IV. VISUALIZATION

PCL comes with its own visualization library, based on VTK [5]. VTK offers great multi-platform support for rendering 3D point cloud and surface data, including visualization support for tensors, texturing, and volumetric methods. The PCL visualization library is meant to integrate PCL with VTK, by providing a comprehensive visualization layer for n-D point cloud structures. Its purpose is to enable quick prototyping and visualization of the results of algorithms operating on such hyper-dimensional data. As of version 0.2, the visualization library offers:
• methods for rendering and setting visual properties (colors, point sizes, opacity, etc.) for any n-D point cloud dataset;
• methods for drawing basic 3D shapes on screen (e.g., cylinders, spheres, lines, polygons, etc.) either from sets of points or from parametric equations;
• a histogram visualization module (PCLHistogramVisualizer) for 2D plots;
• a multitude of geometry and color handlers. Here, the user can specify what dimensions are to be used for the point positions in a 3D Cartesian space (see Figure 4), or what colors should be used to render the points (see Figure 5);
• RangeImage visualization modules (see Figure 6).

The handler interactors are modules that describe how colors and the 3D geometry at each point in space are computed, displayed on screen, and how the user interacts with the data. They are designed
with simplicity in mind, and are easily extendable. A code snippet that produces results similar to the ones shown in Figure 4 is presented in Algorithm 1.

Algorithm 1. Code example for the results shown in Figure 4.
using namespace pcl_visualization;
PCLVisualizer p ("Test");
PointCloudColorHandlerRandom handler (cloud);
p.addPointCloud (cloud, handler, "cloud_random");
p.spin ();
p.removePointCloud ("cloud_random");
PointCloudGeometryHandlerSurfaceNormal handler2 (cloud);
p.addPointCloud (cloud, handler2, "cloud_random");
p.spin ();

The library also offers a few general purpose tools for visualizing PCD files, as well as for visualizing streams of data from a sensor in real time in ROS.

Fig. 4. An example of two different geometry handlers applied to the same dataset. Left: the 3D Cartesian space represents XYZ data, with the arrows representing surface normals estimated at each point in the cloud; right: the Cartesian space represents the 3 dimensions of the normal vector at each point for the same dataset.

Fig. 5. An example of two different color handlers applied to the same dataset. Left: the colors represent the distance from the acquisition viewpoint; right: the colors represent the RGB texture acquired at each point.

Fig. 6. An example of a RangeImage display using PCL visualization (bottom) for a given 3D point cloud dataset (top).

V. USAGE EXAMPLES

In this section we present two code snippets that exhibit the flexibility and simplicity of using PCL for filtering and segmentation operations, followed by three application examples that make use of PCL for solving the perception problem: i) navigation and mapping, ii) object recognition, and iii) manipulation and grasping. Filtering constitutes one of the most important operations that any raw point cloud dataset usually goes through before any higher level operations are applied to it. Algorithm 2 and Figure 7 present a code snippet and the results obtained after running it on the point cloud dataset from the left part of the figure. The filter is based on
estimating a set of statistics for the points in a given neighborhood (k = 50 here), and using them to select all points within 1.0·σ distance from the mean distance µ as inliers (see [6] for more information).

Algorithm 2. Code example for the results shown in Figure 7.
pcl::StatisticalOutlierRemoval<pcl::PointXYZ> f;
f.setInputCloud (input_cloud);
f.setMeanK (50);
f.setStddevMulThresh (1.0);
f.filter (output_cloud);

Fig. 7. Left: a raw point cloud acquired using a tilting laser scanner; middle: the resultant filtered point cloud (i.e., inliers) after a StatisticalOutlierRemoval operator was applied; right: the rejected points (i.e., outliers).

The second example constitutes a segmentation operation for planar surfaces, using a RANSAC [7] model, as shown in Algorithm 3. The input and output results are shown in Figure 8. In this example, we are using a robust RANSAC estimator to randomly select 3 non-collinear points and calculate the best possible model in terms of the overall number of inliers. The inlier thresholding criterion is set to a maximum distance of 1 cm from each point to the plane model.

Algorithm 3. Code example for the results shown in Figure 8.
pcl::SACSegmentation<pcl::PointXYZ> s;
s.setInputCloud (input_cloud);
s.setModelType (pcl::SACMODEL_PLANE);
s.setMethodType (pcl::SAC_RANSAC);
s.setDistanceThreshold (0.01);
s.segment (output_cloud);

Fig. 8. Left: the input point cloud; right: the segmented plane represented by the inliers of the model, marked with purple color.

An example of a more complex navigation and mapping application is shown in the left part of Figure 9, where the PR2 robot had to autonomously identify doors and their handles [8], in order to explore rooms and find power sockets [9]. Here, the modules used included constrained planar segmentation, region growing methods, convex hull estimation, and polygonal prism extraction. The results of these methods were then used to extract certain statistics about the shape and size of the door and the handle, in order to uniquely identify them and to
reject false positives. The right part of Figure 9 shows an experiment with real-time object identification from complex 3D scenes [10]. Here, a set of complex 3D keypoints and feature descriptors are used in a segmentation and registration framework that aims to identify previously seen objects in the world. Figure 10 presents a grasping and manipulation application [11], where objects are first segmented from horizontal planar tables, clustered into individual units, and a registration operation is applied that attaches semantic information to each cluster found.

Fig. 9. Left: example of door and handle identification [8] during a navigation and mapping experiment [9] with the PR2 robot. Right: object recognition experiments (chair, person sitting down, cart) using Normal Aligned Radial Features (NARF) [10] with the PR2 robot.

Fig. 10. Experiments with PCL in grasping applications [11], from left to right: a visualization of the collision environment, including points associated with unrecognized objects (blue) and obstacles with semantic information (green); detail showing 3D point cloud data (grey) with 3D meshes superimposed for recognized objects; successful grasping showing the bounding box associated with an unrecognized object (brown) attached to the gripper.

VI. COMMUNITY AND FUTURE PLANS

PCL is a large collaborative effort, and it would not exist without the contributions of several people. Though the community is larger, and we accept patches and improvements from many users, we would like to acknowledge the following institutions for their core contributions to the development of the library: AIST, UC Berkeley, University of Bonn, University of British Columbia, ETH Zurich, University of Freiburg, Intel Research Seattle, LAAS/CNRS, MIT, University of Osnabrück, Stanford University, University of Tokyo, TUM, Vienna University of Technology, and Washington University in St. Louis. Our current plan for PCL is to improve the documentation, unit tests, and tutorials and release a 1.0 version. We will
continue to add functionality and make the system available on other platforms such as Android, and we plan to add support for GPUs using CUDA and OpenCL. We welcome any new contributors to the project, and we hope to emphasize the importance of code sharing for 3D processing, which is becoming crucial for advancing the robotics field.

References
[1] G. Guennebaud, B. Jacob, et al., "Eigen v3," 2010.
[2] J. Reinders, Intel Threading Building Blocks: Outfitting C++ for Multi-core Processor Parallelism. O'Reilly, 2007.
[3] M. Muja and D. G. Lowe, "Fast approximate nearest neighbors with automatic algorithm configuration," in International Conference on Computer Vision Theory and Application (VISSAPP'09). INSTICC Press, 2009, pp. 331-340.
[4] R. B. Rusu, N. Blodow, and M. Beetz, "Fast Point Feature Histograms (FPFH) for 3D registration," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Kobe, Japan, May 12-17 2009.
[5] W. Schroeder, K. Martin, and B. Lorensen, Visualization Toolkit: An Object-Oriented Approach to 3D Graphics, 4th Edition. Kitware, December 2006.
[6] R. B. Rusu, Z. C. Marton, N. Blodow, M. Dolha, and M. Beetz, "Towards 3D point cloud based object maps for household environments," Robotics and Autonomous Systems Journal (Special Issue on Semantic Knowledge), 2008.
[7] A. M. Fischler and C. R. Bolles, "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography," Communications of the ACM, vol. 24, no. 6, pp. 381-395, June 1981.
[8] R. B. Rusu, W. Meeussen, S. Chitta, and M. Beetz, "Laser-based perception for door and handle identification," in International Conference on Advanced Robotics (ICAR), June 22-26 2009.
[9] W. Meeussen, M. Wise, S. Glaser, S. Chitta, C. McGann, P. Mihelich, E. Marder-Eppstein, M. Muja, V. Eruhimov, T. Foote, J. Hsu, R. Rusu, B. Marthi, G. Bradski, K. Konolige, B. Gerkey, and E. Berger, "Autonomous door opening and plugging in with a personal robot," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Anchorage, Alaska, May 3-8 2010.
[10]
B. Steder, R. B. Rusu, K. Konolige, and W. Burgard, "Point feature extraction on 3D range scans taking into account object boundaries," in Submitted to the IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, May 9-13 2010.
[11] M. Ciocarlie, K. Hsiao, E. G. Jones, S. Chitta, R. B. Rusu, and I. A. Sucan, "Towards reliable grasping and manipulation in household environments," New Delhi, India, 12/2010.

A New Point Cloud Registration Algorithm (Zuo Chao)


Abstract: The Iterative Closest Point (ICP) algorithm is widely used in multi-view fine registration of 3D point clouds, but its accuracy and convergence to the global optimum depend on the initial registration position. It fails when there is a great difference in the initial positions of the point clouds to be registered. Coarse registration aims to provide a good initial registration position for ICP. A new coarse registration algorithm, iterative least space distribution entropy, is proposed based on the spatial distribution of point clouds, and the concept of entropy from information theory is used to describe this distribution law. Experiments show that the proposed algorithm can offer a good initial registration position for ICP; it has high efficiency and, where the precision requirement permits, can achieve registration without using ICP.
Key words: remote sensing; 3D point cloud; iterative closest point; registration; entropy
OCIS codes: 280.3640; 100.6890; 100.5010
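The abstract does not spell out the paper's exact entropy definition, so as a generic illustration of describing a cloud's spatial distribution with Shannon entropy (an assumption, not the paper's formula), one can measure the entropy of a voxel-occupancy histogram:

```python
import numpy as np

def occupancy_entropy(points, voxel=1.0):
    """Shannon entropy (bits) of a voxel-occupancy histogram: a cloud
    concentrated in one voxel scores 0, an evenly spread cloud scores
    high.  Illustrative only, not the paper's 'space distribution
    entropy' definition."""
    keys = np.floor(points / voxel).astype(int)
    _, counts = np.unique(keys, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())
```

Intuitively, two point clouds in good alignment produce a joint distribution close to each cloud's own, which is the kind of spatial-distribution signal an entropy-based coarse alignment can exploit.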

Lightweight and Fast Automatic Point Cloud Registration for Urban Road Scenes Considering Pole-Like Objects and Lane Lines


Optics and Precision Engineering, Vol. 32, No. 4, Feb. 2024

Lightweight and fast automatic point cloud registration for urban road scenes considering pole-like objects and lane lines
ZHAO Huiyou, WU Xuequn*, XIA Yonghua (Faculty of Land Resources Engineering, Kunming University of Science and Technology, Kunming 600093, China)

Abstract: Laser scans of urban scenes acquired at different times show positional deviations, and traditional point cloud registration methods suffer from low efficiency and low robustness. This paper proposes an improved point cloud registration method that takes pole-like objects and lane lines into account.

First, the filtered point cloud is down-sampled with a voxel grid; cloth simulation filtering then separates out the ground points; K-means unsupervised classification is applied to the non-ground point cloud; a prior-guided random sample consensus method extracts pole-like objects as target features; and a point cloud grayscale image and spatial density segmentation method, built from the point cloud reflection intensity, is proposed to extract the lane lines.
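The voxel-grid down-sampling step can be sketched as a centroid-per-voxel reduction (a generic numpy sketch, assumed rather than taken from the paper):

```python
import numpy as np

def voxel_downsample(points, voxel):
    """Voxel-grid down-sampling: every point is assigned to a cubic
    voxel of edge length `voxel`, and each occupied voxel is replaced
    by the centroid of its points."""
    keys = np.floor(points / voxel).astype(int)
    uniq, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()                 # guard against inverse-shape changes across numpy versions
    sums = np.zeros((len(uniq), points.shape[1]))
    np.add.at(sums, inv, points)      # accumulate per-voxel point sums
    counts = np.bincount(inv, minlength=len(uniq)).astype(float)
    return sums / counts[:, None]
```

This shrinks the cloud while keeping its overall shape, which is why it is a standard first step before filtering and feature extraction.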

An improved iterative closest point (ICP) algorithm with normal vector constraints then takes the pole-like objects and lane lines as registration primitives; a geometric consistency algorithm eliminates wrong point pairs, and a bidirectional KD-tree quickly matches the corresponding feature points, speeding up registration and improving accuracy.
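A minimal point-to-point ICP loop can be sketched as follows (brute-force nearest-neighbour matching here; the paper's version adds normal constraints, geometric consistency checks, and bidirectional KD-tree matching, none of which are reproduced):

```python
import numpy as np

def icp(src, dst, iters=20):
    """Minimal point-to-point ICP: match each source point to its
    nearest destination point, then apply the closed-form SVD update
    of R, t; repeat.  Illustrative sketch, not the paper's algorithm."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        cur = src @ R.T + t
        # brute-force nearest neighbours (real implementations use KD-trees)
        nn = dst[np.argmin(((cur[:, None, :] - dst[None]) ** 2).sum(-1), axis=1)]
        ca, cb = cur.mean(0), nn.mean(0)
        U, _, Vt = np.linalg.svd((cur - ca).T @ (nn - cb))
        S = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
        dR = Vt.T @ S @ U.T
        R, t = dR @ R, dR @ t + (cb - dR @ ca)   # compose the incremental update
    return R, t
```

With a small initial misalignment the nearest-neighbour matches are already correct, so the loop converges in a few iterations; a poor initial position is exactly the failure mode coarse registration is meant to avoid.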

Experiments show that, in low-overlap urban point cloud scenes, the method takes less than 20 s and only 20 iterations, with an accuracy of 1.9877×10⁻⁵ m, achieving efficient and accurate registration of urban road scene point clouds.

Keywords: vehicle-borne laser scanning; pole-like object; ground point filtering; K-means; lane line; improved ICP
CLC number: TP394.1; TH691.9   Document code: A   doi: 10.37188/OPE.20243204.0535

Urban road scenes utilize lightweight fast point cloud auto-registration of poles-like and lane lines
ZHAO Huiyou, WU Xuequn*, XIA Yonghua (College of Mechanical Engineering, Kunming University of Science and Technology, Kunming 600093, China)
* Corresponding author, E-mail: wuxuequn520@163.com

Abstract: In view of the position deviation of vehicle laser scanning used to obtain urban scenes in different periods, and since traditional point cloud registration methods still have the limitations of low efficiency and low robustness, an improved point cloud registration method using rods and lane lines is proposed in this paper. First, the filtered point cloud is voxel-grid down-sampled, then the cloth model is used to filter the ground points, then K-means unsupervised classification of the non-ground point clouds is used, then the rods are extracted as the target features, and a point cloud grayscale map and spatial density segmentation method based on the reflection intensity of the point cloud is proposed. Then, the improved iterative closest point (ICP) algorithm with normal constraints uses rods and lane lines as registration primitives, geometric consistency algorithms are used to eliminate wrong point pairs, and bidirectional KD-trees are used to quickly match the corresponding feature points, so as to accelerate the registration speed and improve accuracy.

Article ID: 1004-924X(2024)04-0535-14. Received 2023-08-11; revised 2023-09-08.
Foundation items: Regional Program of the National Natural Science Foundation of China (No. 42161067, No. 42261074)
Experiments show that it takes less than 20 s in urban point cloud scenarios with low overlap, and only 20 iterations, and the accuracy can reach 1.9877×10⁻⁵ m, which realizes the efficient and accurate registration of laser point clouds in urban road scenes.
Key words: vehicle laser scan; pole-like object; ground point filtering; K-means; lane lines; improved Iterative Closest Point (ICP)

1 Introduction

With the rapid development of smart cities and digital cities, higher efficiency and accuracy are demanded when acquiring 3D information of large scenes, and many methods for obtaining 3D information have emerged, such as Fourier Transform Profilometry (FTP), Phase Shifting Profilometry (PSP), Modulation Measurement Profilometry (MMP), Spatial Phase Detection (SPD), Phase Lock Loop Profilometry (PLLP), Computer-Generated Moiré Profilometry (CGMP), and LiDAR range measurement (Light Detection and Ranging) [1-4].

Low-Overlap Preoperative-to-Intraoperative Point Cloud Registration Based on SHOT and Symmetric-Objective ICP


Beijing Biomedical Engineering, Vol. 42
registration more challenging. This paper adopts the combination of local geometric features and distance
The algorithm can cope with the low-overlap registration problem and has high robustness and computational efficiency.
Feature-based registration methods [14-20] can robustly find correspondences between point clouds, and local descriptors are more robust in scenes with occlusion;
criterion; used alone, it is ambiguous during the iterative process.
Therefore, this paper proposes combining local geometric features with a distance metric to cope with the low-overlap registration problem. The orientation-histogram descriptor
is optimal. Raposo et al. [11] used two points and their normal vectors in place of Super 4PCS
0 Introduction

Pedicle screw internal fixation is widely used in the treatment of thoracolumbar fractures, spinal degeneration, trauma, and other conditions [1]; accurate screw placement improves fixation strength and reduces damage to important surrounding anatomical structures. Previous operations relied mainly on anatomy-assisted imaging localization, determining the screw position from the surgeon's experience and intraoperative two-dimensional fluoroscopy.
by adding angle constraints to reduce the generation of invalid point pairs. However, these two methods are only suitable for scenes with relatively high overlap; otherwise they still fall into local optima.
a good balance; the symmetric-objective ICP proposed by Rusinkiewicz [27]
vectors, allowing a larger set of positions. For preoperative and intraoperative point cloud data with low overlap, noise, and outliers, this paper combines the SHOT feature descriptor with symmetric-objective ICP to achieve
symmetric point-to-surface objective function. Registration experiments were performed on five groups of lumbar

Point Cloud Extraction for Infrastructure Projects (class handout)


WF20432: Point Cloud Extraction for Infrastructure Projects
Kevin Grover, Regional Technology Lead, Stantec Consulting Ltd.
Ramesh Sridharan, Principal Research Engineer, Autodesk

Description
Point clouds are an important component in current infrastructure projects, and it is vital to know how to handle them for successful project execution. This class will provide all the necessary information to import, process, and extract information from point cloud data in Autodesk, Inc., architecture, engineering, and construction products. More specifically, this class will teach surface/terrain generation from high-resolution point cloud data in InfraWorks software, and it will demonstrate how to use it in AutoCAD Civil 3D software. The class will also cover how to model city asset features to create a virtual city model, and how to bring those city assets into a design product, such as AutoCAD Civil 3D software, as Coordinate Geometry (COGO) points. This class has no prerequisite. Any knowledge of InfraWorks software, AutoCAD Civil 3D software, and point clouds will be helpful in understanding the class material. At the end of this class, attendees will be able to use point cloud data with ease and confidence for their modeling and design projects. This session features InfraWorks 360 and AutoCAD Civil 3D.

Your AU Expert(s)
Kevin Grover is the regional technology lead and unmanned aircraft systems (UAS) operations manager at Stantec. His group has been collecting, processing, and extracting data from terrestrial and mobile laser scanning for the past 8 years, and with UAS for the past 3. With the amount of data collection technology being utilized at Stantec, Grover is always looking for better ways to capitalize on processing software to enhance and improve data extraction methods. He is leading the growth of UAS operations at Stantec for survey, mapping, and inspection purposes.
Ramesh Sridharan is a product owner in the AEC Product Development Group (PDG) at Autodesk, Inc.
His team focuses on creating solutions that enable users to utilize reality-capture data for major infrastructure designs and models. To realize this goal, they work on 3D point-cloud processing, machine learning, and image processing that extract and model core information content from large reality-capture point-cloud data used for modeling and design purposes. Before joining Autodesk, he was the chief technology officer at Virtual Geomatics, which was acquired by Autodesk in 2014. As CTO, he was in charge of product development focused on point-cloud information extraction for different infrastructure applications. He has been working in the reality-capture field for more than a decade and a half, and he received his postgraduate degree from the Indian Institute of Technology, with a research focus on image processing and artificial intelligence.

How to use point cloud visualization for analysis
Autodesk has advanced the ability to utilize point cloud information over the last several years. Using programs like Autodesk Recap, users can import large point cloud datasets with tools for managing, cropping, and visualizing them in an efficient way. This class won't focus on importing data into Recap; it assumes familiarity with that workflow.

Autodesk Recap
Recap offers a wide range of visual styling options for better viewing of point cloud data. How the data was collected determines whether all of the visual styles are available.

RGB – When a point cloud is collected along with high-resolution imagery, Recap has the ability to assign a color index value to each 3D point. This visual style is ideal for the most realistic view of a point cloud.

Intensity – Laser scanners have the ability to collect a reflectance value for each 3D point. This is the native viewing setting for point clouds when imagery is not collected.

Elevation – As each 3D point has an elevation value, a point cloud can be viewed by a color elevation range.
This visual style is ideal for surface topography.

Normals – During the preprocessing stage in Recap on import, normals are calculated to determine the planarity of surfaces and how surfaces relate to each other. This visual style is ideal for modeling in 3D to determine the relationship between surfaces.

Scan Location – For a dataset derived from terrestrial laser scanning, scans can be colored by scan location to determine data overlap. This isn't a commonly used visual setting.

Autodesk Infraworks
Infraworks has been able to ingest point cloud data for the last few years, and the visual quality in the software has advanced substantially. On first import of a point cloud, it will default to RGB if color imagery was collected, or to a black-and-white intensity view when RGB is not present.

Importing – Point clouds can be imported similarly to any other data format in Infraworks, but they must be in the Recap scan project format (RCP) before import. Once imported, the data can be assigned a coordinate system, scale, translation, etc.

Styling – Upon first import, RGB is the default style assigned in Infraworks. If color was not collected during the scan, a black-and-white intensity view of the point cloud is shown. Infraworks does have functionality to style a point cloud like Recap; however, these styles must be defined manually.
•Choose Point Cloud Themes
•Press the icon to add a new theme
•Now you can customize by elevation, normal, single color, classification, intensity, or elevation and intensity

Usability of Point Cloud in Infraworks
Even though it is possible to import a point cloud into Infraworks, there are still limited tools to work with the point cloud:
•Primarily for visualization purposes
•Limited tools to manually work with the point cloud
•No editing of the point cloud in Infraworks
The automated extraction tools recently introduced will be discussed in the next section.
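The elevation theme described above amounts to mapping each point's z value onto a color ramp. As a plain illustration of the idea (generic Python, not Recap or InfraWorks API code; the ramp endpoints are an assumption for the sketch):

```python
# Minimal sketch of an "elevation" visual style: map each point's z value
# onto a blue-to-red ramp. Illustrative only, not Autodesk product code.

def elevation_colors(points):
    """points: list of (x, y, z); returns a list of (r, g, b) in 0-255."""
    zs = [p[2] for p in points]
    z_min, z_max = min(zs), max(zs)
    span = (z_max - z_min) or 1.0   # avoid division by zero on flat data
    colors = []
    for _, _, z in points:
        t = (z - z_min) / span       # 0.0 at the lowest point, 1.0 at the highest
        r = int(255 * t)             # low elevations blue, high elevations red
        b = int(255 * (1.0 - t))
        colors.append((r, 64, b))
    return colors

cloud = [(0.0, 0.0, 10.0), (1.0, 0.0, 15.0), (2.0, 1.0, 20.0)]
print(elevation_colors(cloud))
```

A real theming engine would add user-defined ramp stops and classification filters, but the per-point mapping is the same idea.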
Items to consider with point cloud data in Infraworks:
•Requires disk space – on import of an RCP project, a copy of the entire point cloud data is placed in the Infraworks working directory structure. Ensure there is adequate local storage depending on the size of the point cloud file.
•Unified data – this will enhance the performance of the point cloud in Infraworks. Ensure the scan locations are not needed for further functionality, as they are all merged into one large scan file.

Autodesk Civil 3D
Similarly to Infraworks, a point cloud file can be imported into Civil 3D. Once the point cloud is inserted into Civil 3D it can be styled:
•Select the point cloud, and the ribbon will change to the point cloud ribbon
•Once selected, under Scan Colors, you can change the style as needed
•Under Color Mapping in the ribbon, these styles can be customized as well

Tips for better performance of point clouds:
•Unified data – this will enhance the performance of the point cloud. Ensure the scan locations are not needed for further functionality, as they are all merged into one large scan file.
•Split large datasets into multiple projects – tile datasets prior to importing
•Assign regions to the Recap project file

How to generate a surface from point cloud data
Point clouds are a great dataset for infrastructure projects; however, they can be challenging to work with efficiently. As these types of projects can be quite large, the point cloud files become even more difficult to handle due to their file sizes. Because the point cloud itself is non-intelligent (only a mass of 3D points), it is necessary to extract usable data from it for design purposes. The level of information to be extracted (e.g., ground surface, curb and gutter, utilities, paint lines) determines how much time and effort is needed to do so.
As there are many ways to do this, the next section will focus on various methods.

Manual Workflow
Typically, manual extraction of data from point clouds is done when limited information is needed from them for design. It is a time-consuming process to fully extract data manually, but it is necessary on many occasions for certain features.

Manual extraction is typically best for the following:
•Utility extraction (manholes, valves, power poles, etc.)
•Trees (trunk location and drip line)
•Power lines
•Surface points (noisy/occluded areas)

Autodesk offers some basic tools for the manual extraction of features from point cloud data using the 3D Object Snaps for point clouds in AutoCAD. AutoCAD (and all verticals, including Civil 3D) has the following snap tools:
•Node
•Intersection
•Edge
•Corner
•Nearest to plane
•Perpendicular to plane
•Perpendicular to edge
•Centerline
Using these tools in AutoCAD products, objects such as blocks, Civil 3D points, or 3D linework can be manually placed using the point cloud.

Some challenges using these tools include:
•Incorrect snap locations
•Can't snap to the center of an object (e.g., power pole, tree trunk)
•Time consuming

Tips for manually placing features include:
•Crop the point cloud to view smaller areas of data
•Crop horizontal planes of the point cloud to aid in extracting building outlines and vertical features (power poles, trees, etc.)
•Change the visual style to intensity or normal view
Infraworks does not offer any tools to work manually with point clouds; however, 3D objects can be manually placed using the point cloud.

Semi-Automated Workflow
As manual extraction can be time consuming, a semi-automated process can be a preferred method for extracting features from point clouds. As a note, Autodesk does not offer these tools natively in the software; they are typically provided through third-party add-on packages.
These methods will only be discussed in general.

Cross Section Method
The cross-section method is typically the preferred method for linear corridor extraction from point clouds for curbs, road crowns, sidewalks, etc. The benefit of this method is that much faster extraction can be completed when similar features are needed. The process involves the following:
•Create an alignment along the corridor
•Create sections of the point cloud along the alignment at the required interval
  o Typically a 0.3 meter (1 foot) width at 10 meter (30 feet) intervals
•Define point or linework codes for the features to extract
•Move to the next station along the alignment and define the same extraction points. Continue along the alignment until each section is extracted.

The benefits of using this method include:
•Great for consistent corridor extraction of features
•Simplified method of extraction
•Generates 3D polylines (or survey figures/feature lines)
  o These 3D polylines can be used to create a simpler surface in Civil 3D
(Figure: example extraction codes – Edge asphalt 1 through Edge asphalt 4, Barrier)

Automated Workflow
This is the method of extraction that has seen increased development focus of late, as end users of point cloud data are looking for the "easy button" solution. This type of automated extraction is challenging for many reasons. It requires the following to be successful:
•Seamless point cloud datasets
  o Minimal occluded areas (lack of point cloud data)
  o Consistent density of data (not dense in some areas and sparse in others)
•Consideration of the type of sensor used for collection. Point cloud datasets vary in accuracy (terrestrial, mobile, aerial, or photogrammetry), which can make it harder to extract various features.
  o Aerial Lidar – typically collected very high with a less dense point cloud (5-20 points per sq. meter). Good for general topography but not for accurate extraction of defined features (i.e.
curbs).
  o Mobile Lidar – more accurate than aerial; however, results do vary based on the quality of the system used, collection methodology, the amount and quality of ground control points, and the quality of the vendor collecting. With a high-quality dataset, this is the preferred method of data collection for infrastructure projects due to the consistently spaced data, but it has challenges with line of sight.
  o Terrestrial – the most common method of point cloud data collection, but results vary based on the quality of the system used, data registration methods (targeted vs. cloud to cloud), the amount and quality of ground control points, and the quality of the vendor. With a high-quality dataset, automated extraction can be successful but still has challenges with occluded areas, vegetation, and line-of-sight issues.
  o Photogrammetry – with the recent advancements and interest in drones, photogrammetric processing is allowing more use of this data for infrastructure work. It is not the preferred method of collection due to regulatory hurdles (flights over people and roads, etc.) as well as heavy data processing requirements.

Civil 3D Surface Creation
Civil 3D has had tools for a while now to create a surface from a point cloud, including a way to filter a point cloud in order to derive a ground surface. The automation of this workflow can be successful for general terrain, but it does struggle with accurately defining hard features (edges, curbs, etc.).

The steps to create a surface from a point cloud are:
•Choose "create surface from point cloud"
•Name the surface and choose the surface style to apply
•Select the area in which to create the surface
•Change the density of the point cloud for surface creation
•Choose noise filtering (Kriging is best for a ground terrain model)

The benefit of this workflow is that any user can create a 3D terrain surface directly from a point cloud. The challenges with this workflow are:
•Lack of definition of hard edges (curbs, edges, etc.)
•Noisy surfaces.
As the process relies on algorithms, there will be areas with noise or spikes in the surface that will have to be manually edited.
•Heavy surfaces. As the surface is generated from a dense point cloud, the surface tends to be very dense on its own, even with filtering and segmenting options chosen. This may slow down Civil 3D when viewing in 3D or with certain surface visual styles (3D faces, dense contours, etc.).

Infraworks Surface Creation
Autodesk has taken large strides in advancing the functionality of Infraworks, including the ability to classify and automatically extract features from point clouds. Infraworks is the ideal product for viewing point clouds, as it is natively 3D, as are point clouds.

Here are the steps to work with point clouds in Infraworks:
•Once the point cloud data is imported and set up (map projection, scale, etc.), it is ready for the extraction workflow
•Choose the Point Cloud Terrain Generation tool
•This brings up the dialog with settings for extraction
•Settings can be customized as follows:
  o Selected datasets – multiple point clouds can be chosen if they exist
  o Processing Rules – the Optimum setting is recommended
    ▪Ground – settings for "bare earth" ground surface extraction
    ▪Linear Feature – settings for linear features such as curbs and paint lines
    ▪Vertical Feature – settings for buildings, power poles, and signs
  o Results Options
    ▪Override Model Point Cloud – as the point cloud file is being modified during this processing (filtered, classified, etc.), there is the option to override the existing cloud with the edited one
    ▪Generate Data – option to generate a new point cloud file: all points, lightweight, or key points
    ▪Export Processed File – option to export the points to a new file; options are all points, only ground points, or do not export
    ▪Save File – set the save location for the new point cloud file
•Start Processing. This process can take some time depending on computer speed, so go grab a coffee!!
Or a nap!

Once the processing is complete, the point cloud has been classified to segment the dataset into common features. Under the Point Cloud Style options, one can set up a style for Classification in order to see how the data was processed. The coloring (as shown in the image below) is how the algorithms in Infraworks have been able to intelligently segment the data.

How to model infrastructure assets in InfraWorks using point clouds
After the point cloud has been classified, the next steps are to run the processes to intelligently interpret the data into usable features:
•Run the Point Cloud Modelling tool
•Once features are extracted, a wizard interface is provided for further creation and editing of features
  o Adjust locations of detected features
  o Delete false-positive features
  o Change the type of a feature
For a typical example, the images below show the progression from raw point cloud to extracted 3D features and a terrain model.

Figure 1 - Raw point cloud (intensity view)
Figure 2 - Classified point cloud
Figure 3 - Terrain surface
Figure 4 - Terrain with image overlay
Figure 5 - Terrain with 3D modeled features

Note: This last section will be edited for the course at AU, as there have been very recent enhancements that should be reflected.
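The "bare earth" ground rule mentioned in the terrain-generation settings conceptually separates ground points from everything above them. Production classifiers are far more sophisticated, but the core idea can be conveyed with a deliberately crude sketch (plain Python, unrelated to the actual InfraWorks implementation): keep only the lowest point in each horizontal grid cell.

```python
# Crude "bare earth" illustration: keep the lowest point per horizontal
# grid cell. Real ground classification handles slopes, vegetation, and
# occlusions; this sketch only conveys the basic idea.

def lowest_per_cell(points, cell=1.0):
    """points: iterable of (x, y, z); returns one lowest point per cell, sorted."""
    cells = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))       # which grid cell the point falls in
        if key not in cells or z < cells[key][2]:    # keep the lowest z seen so far
            cells[key] = (x, y, z)
    return sorted(cells.values())

pts = [(0.2, 0.3, 5.0), (0.8, 0.1, 4.2), (3.5, 0.4, 6.1)]
print(lowest_per_cell(pts, cell=1.0))
```

The cell size plays the same role as the density/filtering settings in the dialogs above: larger cells give a sparser, smoother ground estimate.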

基于激光点云数据的单木骨架三维重构

第40卷第1期 2024年1月
森林工程 FOREST ENGINEERING Vol.40 No.1 Jan., 2024
doi: 10.3969/j.issn.1006-8023.2024.01.015

基于激光点云数据的单木骨架三维重构
赵永辉,刘雪妍,吕勇,万晓玉,窦胡元,刘淑玉*
(东北林业大学计算机与控制工程学院,哈尔滨 150040)

摘要: 针对树木三维重构过程中面临的处理速度慢、重构精度低等问题,提出一种采用激光点云数据的单木骨架三维重构方法。首先,根据点云数据类型确定组合滤波方式,以去除离群点和地面点;其次,采用一种基于内部形态描述子(ISS)和相干点漂移算法(CPD)的混合配准算法(Intrinsic Shape-Coherent Point Drift, IS-CPD),以获取单棵树木的完整点云数据;最后,采用Laplace收缩点集和拓扑细化相结合的方法提取骨架,并通过柱体构建枝干模型,实现骨架三维重构。试验结果表明,相比传统CPD算法,研究设计的配准方案精度和执行速度分别提高50%和95.8%,最终重构误差不超过2.48%。研究结果证明可有效地重构单棵树木的三维骨架,效果接近树木原型,为构建林木数字孪生环境和林业资源管理提供参考。
关键词: 激光雷达;树木点云;关键点提取;树木骨架;几何模型
中图分类号: S792.95; TN958.98    文献标识码: A    文章编号: 1006-8023(2024)01-0128-07

3D Reconstruction of Single Wood Skeleton Based on Laser Point Cloud Data
ZHAO Yonghui, LIU Xueyan, LYU Yong, WAN Xiaoyu, DOU Huyuan, LIU Shuyu*
(College of Computer and Control Engineering, Northeast Forestry University, Harbin 150040, China)
Abstract: In response to the slow processing speed and low reconstruction accuracy encountered during the 3D reconstruction of trees, a method for 3D reconstruction of single-tree skeletons using laser point cloud data is proposed. Firstly, a combination filtering method is determined based on the point cloud data type to remove outliers and ground points. Secondly, a hybrid registration algorithm based on ISS (Intrinsic Shape Signatures) and CPD (Coherent Point Drift), called IS-CPD, is employed to obtain complete point cloud data for individual trees. Finally, a method combining Laplace contraction of point sets and topological refinement is used to obtain the skeleton, and branch models are constructed using cylinders to achieve 3D skeleton reconstruction. Experimental results show that compared with the traditional CPD algorithm, the proposed registration scheme improves accuracy and execution speed by 50% and 95.8% respectively, with a final reconstruction error of no more than 2.48%. The research demonstrates effective reconstruction of the 3D skeleton of individual trees, with
results close to the original trees, providing a reference for building digital twin environments of forest trees and for forestry resource management.
Keywords: LiDAR; tree point cloud; key point extraction; tree skeleton; geometry model

收稿日期: 2023-02-10
基金项目: 国家自然科学基金(31700643)。
第一作者简介: 赵永辉,硕士,工程师。研究方向为物联网与人工智能。E-mail: hero9968@
*通信作者: 刘淑玉,硕士,讲师。研究方向为通信与信号处理。E-mail: 1000002605@
引文格式: 赵永辉,刘雪妍,吕勇,等.基于激光点云数据的单木骨架三维重构[J].森林工程,2024,40(1):128-134. ZHAO Y H, LIU X Y, LYU Y, et al. 3D reconstruction of single wood skeleton based on laser point cloud data[J]. Forest Engineering, 2024, 40(1): 128-134.

0 引言
激光雷达可用于获取目标稠密点云数据,是实现自动驾驶和三维重建的重要手段。使用机载或地基激光雷达可以获取树高、胸径和冠层等量化信息,用于树木的三维重建,为推断树木的生态结构参数和碳储量反演提供依据,也可为林业数字孪生提供数据支撑。
主流的点云数据去噪方法主要有基于密度、基于聚类和基于统计3种[1]。分离地面点和非地面点是点云数据处理的第一步,学者提出多种算法用于地面点分离。然而,即使是最先进的滤波算法,也需要设置许多复杂的参数才能实现。Zhang等[2]提出了一种新颖的布料模拟滤波算法(Cloth Simulation Filter, CSF),该算法只需调整几个参数即可实现地面点的过滤,但该算法对于点云噪声非常敏感。在点云配准方面,经典的算法是Besl等[3]提出的迭代最近点算法(Iterative Closest Point, ICP),但易出现局部最优解,从而限制了该算法的应用。因此,许多学者采用概率统计方法进行点云配准,典型的方法是相干点漂移算法(Coherent Point Drift, CPD)[4-5]等,但该方法存在运行时间长和计算复杂的问题。石珣等[6]结合曲率特征与CPD提出了一种快速配准方法,速度大大提高,但细节精确度有所下降。陆军等[7]、夏坎强[8]、史丰博等[9]对基于关键点特征匹配的点云配准方法进行了深入研究。
三维树木几何重建从传统的基于规则、草图和影像重建,发展到如今借助激光雷达技术,可以构建拓扑正确的三维树木几何形态。翟晓晓等[10]以点云数据进行树木重建,由于受激光雷达视场角的约束,难以获得树冠结构的信息,因此仅重建了树干。Lin等[11]、You等[12]涉及点云骨架提取的研究,构建了树的几何和拓扑结构,但重构模型的真实感不够强。Cao等[13]使用基于Laplace算子的建模方法提取主要枝干的几何信息,拓扑连接正确,并保留了部分细枝。曹伟等[14]对点云树木建模的发展和前景进行了综述,但在结合点云数据提取骨架并重建等方面研究不足。
本研究提出一种基于骨架的方法,旨在准确地从单木的点云数据中重建三维模型。原始点云数据经过CSF算法和K维树(Kd-Tree)近邻搜索法的组合滤波后,提取了准确的单木数据。同时,基于树木特征点云的混合配准算法(Intrinsic Shape-Coherent Point Drift, IS-CPD),可显著提高配准效率。最后,通过提取单棵树木的骨架点,构造连接性,并用圆柱拟合枝干,实现了单木的三维建模。

1 数据采集及预处理
1.1 数据获取
数据采集自山东省潍坊市奎文区植物园内一株高约8.5 m、树龄约20 a的银杏树。使用RoboSense雷达从2个不同角度进行点云数据采集,雷达高为1.5 m,与树木水平距离约为10 m。对来自树木正东方向和正北方向的2组点云数据进行采集,如图1所示。
图1 2组点云的最初扫描结果((a)角度1点云数据(正东方向) Angle 1 point cloud data (east direction);(b)角度2点云数据(正北方向) Angle 2 point cloud data (north direction))
Fig.1 Initial scan results of two sets of point
clouds

1.2 点云预处理
为了提高后续处理点云数据的准确性和时效性,需要对数据进行预处理。首先,利用CSF滤波算法去除冗余的地面背景信息,该算法参数较少,分离速度快。通过使用落在重力下的布来获取地形的物理表示,单木点云可以被分离出来。由于扫描环境和激光雷达硬件误差的影响,可能会出现离群点。因此,采用Kd-Tree算法对提取的点云进行降噪处理,提高单个树木数据的精度,以备在后续的算法使用中得到更准确的结果。通过搜索待滤波点云p_i(x_i, y_i, z_i)中每个点的空间邻近点p_j(x_j, y_j, z_j),计算之间的平均距离d_i、全局均值μ以及标准差σ,筛选符合范围(μ−α·σ ≤ d_i ≤ μ+α·σ)的点并过滤掉离群值(α为决定点云空间分布的参数)。d_i、μ、σ的计算公式如下。

d_i = (1/k)·Σ_{j=1}^{k} ||x_i − y_j||,
μ = (1/n)·Σ_{i=1}^{n} d_i,
σ = sqrt( (1/n)·Σ_{i=1}^{n} (d_i − μ)² )。 (1)

式中: k为决定点云密集度的参数; n为点云数量。
通过试验发现,最终选定参数k=20、α=1.2时,对点云数据进行处理结果最优,滤噪结果如图2所示,基本去除了离群噪声点和地面点,同时又确保对点云模型轮廓的保护。
图2 2组点云滤噪结果图 Fig.2 Two sets of point cloud filtering and denoising results

2 单木骨架重构方法
单木骨架重构方法的过程主要包括以下几个步骤,如图3所示。首先,对预处理的2组点云数据进行特征提取,并进行精确的配准;其次,对点云进行几何收缩,获取零体积点集,并通过拓扑细化将点集细化成一维曲线,得到与点云模型基本吻合的骨架线;最后,基于骨架线对树木枝干进行圆柱拟合,以构建枝干的三维模型。
图3 单木骨架重构方法过程图 Fig.3 Process diagram of single wood skeleton reconstruction method

2.1 三维点云配准
CPD配准是一种基于概率的点集配准算法。在对点集进行配准时,一组点集作为高斯混合模型(Gaussian Mixture Model, GMM)的质心,假设模板点集坐标为Y_{M×D} = (y_1, y_2, …, y_M)^T;另一组点集作为混合高斯模型的数据集,假设目标点集坐标为X_{N×D} = (x_1, x_2, …, x_N)^T。N、M分别代表2组点的数目,D为点集的维度,T为矩阵转置。通过GMM的最大后验概率得到点集之间的匹配对应关系。GMM概率密度函数如下。

p(x) = ω·(1/N) + (1−ω)·Σ_{m=1}^{M} (1/M)·p(x|m) (2)

式中: p(x|m) = 1/(2πσ²)^{D/2} · exp(−||x − y_m||²/(2σ²)); p(x)是概率密度函数; ω(0≤ω≤1)为溢出点的权重参数; m为1~M中的任何一个数。
GMM质心的位置通过调整变换参数θ的值进行改变,而变换参数的值可以通过最小化负对数似然函数来求解。

E(θ, σ²) = −Σ_{n=1}^{N} log Σ_{m=1}^{M} p(m)·p(x_n|m) (3)

式中,x_n与y_m之间的匹配关系可以由GMM质心的后验概率p(m|x_n) ∝ p(m)·p(x_n|m)来定义。采用期望最大值算法进行迭代循环,从而对最大似然估计进行优化,当收敛时迭代停止。得到θ和σ²的解,即完成模板网格点集向目标网格点集的配准。
扫描设备采集的点云数据通常数量庞大,因此并非所有点云信息都对配准有效。此外,CPD算法的计算复杂度较高,匹配速度较慢。因此,本研究采用ISS(Intrinsic Shape Signatures)算法[15]提取关键点,以降低几何信息不显著点的数量。通过对这些特征点进行精确配准,可以提高点云配准的效率。图4给出了IS-CPD配准过程。
图4 基于特征点提取的配准过程图 Fig.4 Registration process diagram based on feature point extraction

IS-CPD点云配准算法流程如下。
(1)选择2个视角点云重叠区域。
(2)采用ISS算法提取特征点集。设点云数据有n个点,(x_i, y_i, z_i),i=0,1,…,n−1,记P_i=(x_i, y_i, z_i)。
①针对输入点云的每个点构建一个半径为r的球形邻域,并根据式(4)计算每个点的权重。

W_ij = 1/||p_i − p_j||, ||p_i − p_j|| < r (4)

②根据式(5)计算各点的协方差矩阵cov及其特征值{λ_i1, λ_i2, λ_i3},并按从小到大的次序进行排列。

cov(p_i) = Σ_{||p_i−p_j||<r} w_ij·(P_i − P_j)(P_i − P_j)^T / Σ_{||P_i−P_j||<r} w_ij (5)

③设置阈值ε_1与ε_2,满足 λ_i1/λ_i2 ≤ ε_1、λ_i2/λ_i3 ≤ ε_2 的点即为关键点。
(3)初始化CPD算法参数。
(4)求出相关概率矩阵与后验概率p(m|x_n)。
(5)利用最小负对数似然函数求出各参数的值。
(6)判断p的收敛性,若不收敛,则重复步骤(4)直到收敛。
(7)在点集数据中,利用所得到的转换矩阵,完成配准。

2.2 点云枝干重建
传统的构建枝干的方法是直接在点云表面上进行重构,这种方法会导致大量畸变结构。因此,本研究先提取单木骨架线,再通过拟合圆柱来构建几何模型。图5为骨架提取并重建枝干的过程。
为精确提取树干和树枝,采用Laplace收缩法提取骨架。首先,对点云模型进行顶点邻域三角化,得到顶点的单环邻域关系。然后,计算相应的余切形式的拉普拉斯矩阵,并以此为依据收缩点云,直至模型收缩比例占初始体积的1%,再通过拓扑细化将点集细化成一维曲线。采用最远距离点球对收缩点进行采样,利用一环邻域相关性将采样点连接成初始骨架,折叠不必要的边,直到不存在三角形,得到与点云模型基本吻合的骨架线。
为准确地模拟树枝的几何形状,采用圆柱拟合方法。在树基区域,使用优化方法来获得主干的几何结构[16]。由于靠近树冠和树枝尖端的小树枝的点云数据较为杂乱,使用树木异速生长理论来控制枝干半径。最终,拟合圆柱体来得到树木点云的3D几何模型[17],原理如图6所示。以粗度R为半径,以上端点M和下端点N为圆心生成多个圆截面,并沿着骨架线连接圆周点绘制出圆柱体,以此代表每个树枝,最终完成整棵树的枝干的绘制。
图5 构建枝干模型流程图 Fig.5 Flow chart for building branch model
图6 绘制树干几何形状原理((a)圆柱模型示例 Example of a cylindrical model;(b)绘制局部树枝示例 Example of drawing a partial tree branch) Fig.6 Principle of drawing tree trunk geometry

3 试验结果与分析
3.1 点云配准结果与分析
为验证IS-CPD配准算法的有效性,对滤波后的点云进行试验,比较该算法与原始CPD算法及石珣等[6]提出的方法在同一数据下的运行时间及均方根误差(RMSE,式中记为R_MSE),其表达式见式(6),值越小表示配准效果越精确。图7及表1给出了3种配准算法的对比结果。

R_MSE = sqrt( Σ_{i=1}^{n} (x_i − x̂_i)² / n ) (6)

式中: n为点云数量; x_i和x̂_i分别为配准前后对应点之间的欧氏距离。
经过配准结果图7和表1的分析,石珣等[6]算法虽提高了配准速度,但其细节精度下降,配准结果不佳。相比之下,CPD和IS-CPD算法均能成功地融合2个不同角度的点云,达到毫米级的精度,2种方法可视为效果近乎一致,而本研究算法的时间复杂度要小得多。此外,由表1可知,配准时间缩短至10.77 s,平均配准精度相较CPD提高了约50%。
图7 点云配准可视化对比 Fig.7 Point cloud registration visualization comparison

表1 点云配准结果对比 Tab.1 Comparison of point cloud registration results
配准算法        历时/s    均方根误差/m
CPD            261.74    8.3×10⁻³
石珣等[6]       86.58     1.6×10⁻²
本算法          10.77     4.1×10⁻³
(点云总数: 角度1为37956个,角度2为37647个,三种算法使用相同数据。)
注: IS-CPD算法提取关键点所需的时间可以忽略不计。Note: The time required for the IS-CPD algorithm to extract key points can be ignored.

3.2 点云枝干重建结果与分析
在几何重建部分(图8),采用基于Laplace收缩的骨架提取方法,仅需不到5次迭代,就可以将点收缩到较好的位置,如图8(b)所示。对收缩后的零体积点集进行拓扑细化,得到与点云模型基本吻合的骨架线,如图8(c)所示。随后,对枝干进行圆柱拟合。至此,树木点云重建工作全部完成。图8(d)为树木骨架几何重建的最终结果。
图8 单木几何重建过程((a)输入点云 Input point cloud;(b)点云收缩 Point cloud shrinkage;(c)连接骨架线 Connecting skeleton lines;(d)树木点云的几何模型 Geometric model of tree point cloud) Fig.8 Single wood geometry reconstruction process
本研究使用单棵树木的树高和胸径作为重建模型的精度评价指标。首先,采用树干点拟合圆柱的方法来将点云投影至圆柱轴向方向,通过求取该轴向投影的最大值和最小值来获取树高信息。同时,在Pitkanen等[18]研究方法的基础上,对树干点云进行分层切片处理,将二维平面上的分层点云进行投影,再通过圆拟合方法得到更为精确的胸径尺寸。
为验证该算法重建模型的准确性,进行20次试验,并将其与Nurunnabi等[16]的重建方法进行了比较。表2为2种方法分别获得的树高和胸径的平均值,并将其与真实测量值进行了对比。结果表明,该算法相较于Nurunnabi等[16]的重建方法具有更高的精度,胸径平均误差仅为2.48%,树高平均误差仅为1.64%。

表2 树木重构精度分析 Tab.2 Tree reconstruction accuracy analysis
评估方法            胸径/m       树高/m    平均误差(胸径)/%    平均误差(树高)/%
Nurunnabi等[16]    2.13×10⁻¹   8.26      5.97               3.17
本算法              1.96×10⁻¹   8.39      2.48               1.64
实测值              2.01×10⁻¹   8.53

4 结论
本研究讨论了激光雷达重建单棵树木的流程,分析并改进了关键问题。充分发挥CSF滤波和Kd-Tree算法的优势,从而精准地分离出了单棵树木的数据,提高了处理速度。提出IS-CPD配准算法,可将点云配准的效率提高约95.8%。通过精确配准后的点云数据,成功提取骨架树,最终重构误差控制在2.48%以内。试验结果表明,研究方法在树木点云数据滤波、配准和骨架提取方面具有可行性,树木枝干结构重建效果良好,且重构模型可为评估农林作物、森林生态结构健康等提供支持。

[参考文献]
[1]鲁冬冬,邹进贵.三维激光点云的降噪算法对比研究[J].测绘通报,2019(S2):102-105. LU D D, ZOU J G. Comparative research on denoising algorithms of 3D laser point cloud[J]. Survey and Mapping Bulletin, 2019(S2): 102-105.
[2]ZHANG W, QI J, WAN P, et al. An easy-to-use airborne LiDAR data filtering method based on cloth simulation[J]. Remote Sensing, 2016, 8(6): 501.
[3]BESL P J, MCKAY H D. A method for registration of 3-D shapes[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1992, 14(2): 239-256.
[4]MYRONENKO A, SONG X. Point set registration: coherent point drift[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010, 32(12): 2262-2275.
[5]王爱丽,张宇枭,吴海滨,等.基于集成卷积神经网络的LiDAR数据分类[J].哈尔滨理工大学学报,2021,26(4):138-145. WANG A L, ZHANG Y X, WU H B, et al. LiDAR data classification based on ensembled convolutional neural networks[J]. Journal of Harbin University of Science and Technology, 2021, 26(4): 138-145.
[6]石珣,任洁,任小康,等.基于曲率特征的漂移配准方法[J].激光与光电子学进展,2018,55(8):248-254. SHI X, REN J, REN X K, et al. Drift registration based on curvature characteristics[J]. Laser & Optoelectronics Progress, 2018, 55(8): 248-254.
[7]陆军,邵红旭,王伟,等.基于关键点特征匹配的点云配准方法[J].北京理工大学学报,2020,40(4):409-415. LU J, SHAO H X, WANG W, et al. Point cloud registration method based on key point extraction with small overlap[J]. Transactions of Beijing Institute of Technology, 2020, 40(4): 409-415.
[8]夏坎强.基于ISS特征点和改进描述子的点云配准算法研究[J].软件工程,2022,25(1):1-5. XIA K Q. Research on point cloud algorithm based on ISS feature points and improved descriptor[J]. Software Engineering, 2022, 25(1): 1-5.
[9]史丰博,曹琴,魏军,等.基于特征点的曲面点云配准方法[J].北京测绘,2022,36(10):1345-1349. SHI F B, CAO Q, WEI J, et al. Surface point cloud registration method based on feature points[J]. Beijing Surveying and Mapping, 2022, 36(10): 1345-1349.
[10]翟晓晓,邵杰,张吴明,等.基于移动LiDAR点云的树木三维重建[J].中国农业信息,2019,31(5):84-89. ZHAI X X, SHAO J, ZHANG W M, et al. Three-dimensional reconstruction of trees using mobile laser scanning point cloud[J]. China Agricultural Information, 2019, 31(5): 84-89.
[11]LIN G, TANG Y, ZOU X, et al. Three-dimensional reconstruction of guava fruits and branches using instance segmentation and geometry analysis[J]. Computers and Electronics in Agriculture, 2021, 184: 106107.
[12]YOU A, GRIMM C, SILWAL A, et al. Semantics-guided skeletonization of upright fruiting offshoot trees for robotic pruning[J]. Computers and Electronics in Agriculture, 2022, 192: 106622.
[13]CAO J J, TAGLIASACCHI A, OLSON M, et al. Point cloud skeletons via Laplacian based contraction[C]//Proceedings of the Shape Modeling International Conference. Los Alamitos: IEEE Computer Society Press, 2010: 187-197.
[14]曹伟,陈动,史玉峰,等.激光雷达点云树木建模研究进展与展望[J].武汉大学学报(信息科学版),2021,46(2):203-220. CAO W, CHEN D, SHI Y F, et al. Progress and prospect of LiDAR point clouds to 3D tree models[J]. Geomatics and Information Science of Wuhan University, 2021, 46(2): 203-220.
[15]YU Z. Intrinsic shape signatures: a shape descriptor for 3D object recognition[C]//IEEE International Conference on Computer Vision Workshops. IEEE, 2010.
[16]NURUNNABI A, SADAHIRO Y, LINDENBERGH R, et al. Robust cylinder fitting in laser scanning point cloud data[J]. Measurement, 2019, 138: 632-651.
[17]GUO J W, XU S B, YAN D M, et al. Realistic procedural plant modeling from multiple view images[J]. IEEE Transactions on Visualization and Computer Graphics, 2020, 26(2): 1372-1384.
[18]PITKANEN T P, RAUMONEN P, KANGAS A. Measuring stem diameters with TLS in boreal forests by complementary fitting procedure[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2019, 147: 294-306.
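文中式(1)的统计滤波思路可以用如下Python示意代码说明(非原文实现:近邻搜索以暴力法代替Kd-Tree,仅为说明原理;参数k与α对应文中的k=20、α=1.2):

```python
# 式(1)统计滤波示意: 计算每点到其k个最近邻的平均距离d_i,
# 保留满足 μ-α·σ ≤ d_i ≤ μ+α·σ 的点。暴力近邻仅为演示,
# 实际应使用Kd-Tree加速。
import math

def statistical_filter(points, k=20, alpha=1.2):
    d = []
    for p in points:
        # 到其余各点的距离, 取最小的k个求平均, 即式(1)中的d_i
        neighbors = sorted(math.dist(p, q) for q in points if q is not p)[:k]
        d.append(sum(neighbors) / k)
    mu = sum(d) / len(d)                                   # 全局均值μ
    sigma = math.sqrt(sum((di - mu) ** 2 for di in d) / len(d))  # 标准差σ
    lo, hi = mu - alpha * sigma, mu + alpha * sigma
    return [p for p, di in zip(points, d) if lo <= di <= hi]

pts = [(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0), (10, 10, 10)]
print(statistical_filter(pts, k=2, alpha=1.2))   # 离群点(10,10,10)被滤除
```

该示意与文中做法一致: 离群点因其近邻平均距离显著偏离全局统计量而被剔除。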

3D软件介绍: point_cloud_software

CHAPTER 3 – POINT CLOUD PROCESSING SOFTWARE

Point clouds by themselves are not useful without software to process the data and make measurements and other calculations. Also, in order to be useful, the point cloud data needs to interface easily with Computer Aided Design/Drafting (CADD) and slope stability programs. This section discusses the point cloud file format, point cloud processing software, and interfacing between point cloud software and other CADD and slope stability software.

THE POINT CLOUD FILE
As discussed in Chapter 2, the point cloud is the basic output from a 3D laser scanner. The most generic point cloud file format is a 3D coordinate file (often referred to as an xyz file). The format for this file is ASCII and can therefore be read by all post-processing software. The comma- or tab-separated format for a grayscale 3D coordinate file is as follows, with one line for each laser point:

Grayscale point cloud:
x1 y1 z1 intensity1
x2 y2 z2 intensity2
…

The x, y and z values refer to a specific coordinate system. If the point cloud is not registered, then by default the y direction is most often set to the instrument direction. After registration, the x, y and z directions are most often set to East, North and up, respectively. However, these systems are not universal, and the scanner or software manufacturer should be contacted for information on their specific 3D coordinate formats. The intensity for each point has a value that ranges from 0 (black) to 255 (white). Similarly, the comma- or tab-separated format for an rgb (red, green, blue) 3D coordinate file is as follows:

Color point cloud:
x1, y1, z1, r1, g1, b1
x2, y2, z2, r2, g2, b2
…

Here r, g and b each have values that range from 0 to 255. Because the xyz file is ASCII, these files are slow to read and write; they also contain only the basic point cloud information. In general, each scanner manufacturer, and also each point cloud processing software manufacturer, has their own specialized binary format.
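Because both layouts are plain ASCII, a reader is only a few lines in any language. The sketch below (Python, illustrative only) accepts either comma- or tab-separated lines and returns the trailing columns, (intensity,) or (r, g, b), untouched:

```python
# Minimal reader for the ASCII xyz formats described above. The first
# three columns are x, y, z; any remaining columns are kept as integer
# attributes: (intensity,) for grayscale files, (r, g, b) for color files.

def read_xyz(text):
    points = []
    for line in text.splitlines():
        # normalize comma- and tab-separated variants to whitespace
        parts = line.replace(",", " ").replace("\t", " ").split()
        if not parts:
            continue  # skip blank lines
        values = [float(v) for v in parts]
        x, y, z = values[:3]
        attrs = tuple(int(v) for v in values[3:])
        points.append((x, y, z, attrs))
    return points

grayscale = "1.0,2.0,3.0,128\n4.0\t5.0\t6.0\t10\t20\t30\n"
print(read_xyz(grayscale))
```

A production reader would stream the file rather than hold it in memory, since scans routinely contain millions of points; this is exactly why the binary formats below exist.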
Some examples of file extensions associated with different binary formats are given below.

Scanner manufacturer:
Leica: .coe
Riegl: .3dd

Point cloud processing software manufacturer:
Polyworks: .pif file format
Split FX: .fx file format

At the present time the ASCII 3D coordinate file is the standard format for point clouds. However, because it is ASCII and contains only point cloud information (that is, no digital image or TIN surface information), other formats have been discussed by both manufacturers and users as better standard file formats for ground-based LiDAR output. These formats include the LiDAR Exchange Format (LAS) and the Virtual Reality Modeling Language (VRML). Additional details on these file formats are discussed in Chapter 6.

POINT CLOUD REGISTRATION
The first step in point cloud processing is to orient the point cloud into the real-world coordinate system based on data taken in the field. Point cloud software usually includes several methods for point cloud registration. The most common method is to register the point cloud based on three or more targets of known position (3D similarity transformation). However, for some applications (such as slope stability), only the orientation registration is required. This means that the point cloud is oriented correctly, but the 3D coordinates are not registered to a known coordinate system (the Universal Transverse Mercator coordinate system, for example). In these instances, simpler registration methods are possible, such as only measuring the orientation of the scanner (orient-by-scanner method) without any position surveying. In this case the scanner's orientation is defined by the bearing or direction of its line of sight, its inclination in the direction of the line of sight, and its inclination perpendicular to the line of sight.
This provides enough information to correctly georeference the orientation of the scan (but not the position).

POINT CLOUD PROCESSING SOFTWARE
Most of the scanner manufacturers have developed their own point cloud processing software. In addition, several other companies have developed point cloud processing software. By exporting point clouds in the xyz file format, point clouds from any scanner can be analyzed with any of the software packages. Point cloud processing software includes:
•Cyclone and Cyclone Cloudworx (Leica)
•Polyworks (Innovmetric)
•Riscan Pro (Riegl)
•Isite Studio (Isite)
•LFM Software (Zoller+Fröhlich, www.zofre.de)
•Split FX (Split Engineering)
•RealWorks Survey (Trimble)
Details on some of the software listed above are given in Appendix B (from POB, 2008).

The following editing/analysis features are found in most of the software packages:
•General point cloud visualization, including pan, tilt, and zoom
•General point cloud editing, including adding and deleting points, noise removal, and point decimation
•Ability to make measurements such as distances, angles, areas and volumes
•Ability to register scans, including the automatic detection of targets
•Ability to stitch together multiple scans, either using survey control or Iterative Closest Point (ICP) type algorithms
•Ability to create a triangulated surface (Triangulated Irregular Network, or TIN)
•Ability to best-fit lines, planes, and other shapes to point cloud clusters
•Ability to make profiles and cross sections through a point cloud
•Ability to handle various import and export formats (to CADD programs, for example)

The following advanced features are found in some, but not all, of the software packages:
•Perform solid modeling (volume generation) based on user-defined lines, planes and other surfaces as bounds
•Perform automatic extraction of standard shapes from the cloud (e.g.
pipe fittings, structural steel members, etc.);
- Edge detection technology to determine boundaries of solids, planes, and other shapes;
- Ability to drape a digital image over a triangulated surface;
- Ability to automatically compute a full 3D polygonal mesh (not 2.5D) from a point cloud;
- Ability to integrate scans with floor plans, engineering drawings of objects, and surveyed information; and
- Ability to make fly-throughs and other types of advanced visualizations.

The focus of this report is on the use of ground-based LiDAR for highway rock slope stability. Therefore, rather than describe all of the items in the above lists, this report focuses on specific features in point cloud software that allow geotechnical information to be extracted from point clouds. It should be noted that most of the point cloud software is generic in nature and is able to perform analyses for a number of applications including mechanical design, architecture, construction, and mining. The Split FX software, on the other hand, was developed specifically for extracting geotechnical information from point clouds of exposed rock surfaces and has the following features:
- Ability to automatically delineate fracture surfaces in a point cloud and determine the orientation, area, and roughness of each fracture;
- Ability to plot fracture orientations on a stereonet (pole and contour plots);
- Ability to pick joint sets and determine the statistical properties of each set;
- Ability to delineate joint traces (automatic and manual) and determine joint spacing, length, and orientation (true spacing and orientation if a digital image is draped);
- Ability to trace fractures on draped photos to determine fracture orientations;
- Ability to subtract two point clouds to determine rockfall volume and rate; and
- Ability to estimate a rockfall hazard rating from a point cloud.

Many of the above items can still be analyzed using the "generic" point cloud software.
For instance, to determine the orientation of a fracture in a point cloud, the points making up the fracture can be selected by hand, and the software will determine the orientation of the best-fit plane through the points. This can be done many times throughout the point cloud, and the orientations can be plotted using a separate stereonet program. In a similar fashion, the generic software can be used to estimate fracture length and spacing, roughness, etc. This is discussed in more detail in Chapter 4.

INTEROPERABILITY WITH CADD SOFTWARE

CADD software principally includes Microstation (Bentley, ) and AutoCAD (Autodesk, ), though many other programs are available. It also includes highway-specific CADD software, such as Inroads and Geopak.

The interoperability between point cloud and CADD software is very important, and in the past this has been an issue with using LiDAR in highway applications. It still is an issue, as will be shown in Chapter 6; however, as the point cloud software has gained many new features in the past few years, interoperability is now greatly improved. Importing a point cloud with a high density of points directly into a CADD program is still not recommended, since CADD programs are not set up to efficiently handle the large number of points and the large file size. However, many options now exist for exporting 3D information to the CADD environment, and programs such as Cyclone Cloudworx have been designed specifically for manipulating point clouds within a CADD environment. First, point clouds can be cropped and the density of points can be decimated so the file size is optimized. Second, specific 3D shapes (pipe fittings, steel members) can be extracted from the point cloud, which are much easier to work with in CADD programs than the points themselves. Third, two-dimensional plans and sections can be created in the point cloud software and exported to CADD programs.
INTEROPERABILITY WITH SLOPE STABILITY SOFTWARE

Slope stability software used for highway applications includes Rockpack III (RockWare, Inc., ), the Rocscience suite (Dips, Swedge, Rocplane, Slide, Phase2; ), the Itasca suite (FLAC, FLAC3D, UDEC, 3DEC; ), Slope/W (Geo-Slope International, ) and many others. Two of the advantages of using LiDAR for highway geotechnical investigations are the ease and speed with which scans can be made and rock characterization information extracted from point clouds. Along these lines, it is important that LiDAR-generated data can be easily exported to the slope stability programs mentioned above. There are three basic kinds of information that the slope stability programs import, and the ability of point cloud processing software to export each kind is discussed below.

Export Individual Fracture Information

Many slope stability programs (Rockpack III, Swedge) are able to directly input individual fracture information in a spreadsheet format. For each discontinuity, this information includes orientation, size or length, roughness, etc. The specific position of the discontinuity can also be input into some of the programs (3DEC). In general, exporting this kind of information is straightforward for the point cloud processing programs, assuming that the point cloud programs can calculate the information in the first place. Most point cloud programs can fit a plane through a selected set of points and calculate the orientation and size.

Export Fracture Set Information

Some of the slope stability programs (Swedge, 3DEC) use statistical information about the number of fracture sets and the statistical properties of each set (such as the mean orientation and the Fisher constant). Once the orientation of individual fractures has been determined from LiDAR, this information is relatively easy to calculate in a spreadsheet.
It is also very easy to export to slope stability programs, since it only involves a few numbers for each discontinuity set.

Export Rock Mass Strength and Modulus

Many of the slope stability programs (Slide, FLAC, FLAC3D, Slope/W, Phase2) use rock mass properties (Hoek-Brown rock mass parameters or Mohr-Coulomb rock mass parameters, for example) rather than individual fracture information. To date, none of the point cloud programs have the capability to make the necessary calculations. However, these rock mass properties can be calculated from the information extracted from the point clouds using the procedures described in Hoek (2007) and others.

Point cloud stitching for robotic mobile measurement of large curved surfaces

Pros and cons of the ICP algorithm

The ICP algorithm offers high accuracy and robustness, but it converges slowly and is prone to getting stuck in local optima.
How the NDT algorithm works

The Normal Distributions Transform (NDT) is a registration method based on probability statistics. It divides the target cloud into voxels, models the points inside each voxel with a normal distribution, and then searches for the transformation matrix that maximizes the likelihood of the transformed source points under those distributions.
NDT algorithm workflow

Like ICP, the workflow includes point cloud acquisition, preprocessing, and registration. In the registration step the two clouds to be registered are treated as the source and the target, and iterative computation gradually brings the source cloud into agreement with the target cloud.
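The voxel-Gaussian idea behind NDT can be sketched in a few lines. This is an illustrative toy, not the pipeline from the text: the function names, the cell size, and the miss penalty are all assumptions of this sketch, and a real NDT implementation would optimize the transform (for example with Newton iterations) rather than only score candidate poses.

```python
import numpy as np

def build_ndt_grid(points, cell=1.0):
    """NDT target model: one Gaussian (mean, covariance) per occupied voxel."""
    cells = {}
    for p in np.asarray(points, float):
        cells.setdefault(tuple(np.floor(p / cell).astype(int)), []).append(p)
    model = {}
    for key, pts in cells.items():
        pts = np.array(pts)
        if len(pts) >= 5:                          # need enough points for a covariance
            model[key] = (pts.mean(0), np.cov(pts.T) + 1e-6 * np.eye(3))
    return model

def ndt_score(points, model, cell=1.0, miss_penalty=9.0):
    """Sum of squared Mahalanobis distances; lower means better alignment."""
    score = 0.0
    for p in np.asarray(points, float):
        key = tuple(np.floor(p / cell).astype(int))
        if key in model:
            mu, cov = model[key]
            d = p - mu
            score += d @ np.linalg.solve(cov, d)   # Mahalanobis distance to the voxel Gaussian
        else:
            score += miss_penalty                  # point fell into an empty voxel
    return score

# deterministic toy target: a 6x6x6 grid of points inside the unit cube
axis = np.linspace(0.1, 0.9, 6)
target = np.array([[x, y, z] for x in axis for y in axis for z in axis])
model = build_ndt_grid(target)
s_aligned = ndt_score(target, model)
s_shifted = ndt_score(target + 0.4, model)
print(s_aligned < s_shifted)                       # a shifted copy scores worse
```

The score drops as the source approaches the target, which is the quantity an NDT optimizer would minimize over the pose parameters.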
Scan setup

Data acquisition: acquire point cloud data of the curved surface in real time while the robot moves.

Data format: store the acquired point cloud data and convert its format for subsequent processing.

Data denoising: remove noise and redundant information from the point cloud to improve data quality.

Data segmentation: partition the large-surface point cloud into multiple sub-regions to ease subsequent processing and stitching.

Sub-region registration: apply coordinate transformations and registration to the segmented sub-regions to ensure the alignment accuracy of the point cloud data.
Pros and cons of the NDT algorithm

The NDT algorithm offers high accuracy and robustness and converges faster than ICP, but its computational cost grows significantly when the point cloud density is high.
Evaluation metrics

Common metrics include the root-mean-square error (RMSE) and the alignment error.

Result refinement

Optimization strategies can be adopted, such as RANSAC (random sample consensus) for robust estimation and Bayesian methods for post-processing refinement.
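As a concrete illustration of the RMSE metric mentioned above, here is a plain-stdlib sketch over hypothetical corresponding point pairs (the pairs and function name are this sketch's assumptions):

```python
import math

def rmse(pairs):
    """Root-mean-square distance over corresponding point pairs."""
    sq = [sum((a - b) ** 2 for a, b in zip(p, q)) for p, q in pairs]
    return math.sqrt(sum(sq) / len(sq))

pairs = [((0.0, 0.0, 0.0), (0.3, 0.0, 0.4)),   # pair at distance 0.5
         ((1.0, 1.0, 1.0), (1.0, 1.0, 1.0))]   # perfectly aligned pair
print(rmse(pairs))                             # sqrt((0.25 + 0) / 2)
```

The same quantity evaluated on nearest-neighbour pairs after registration is what the "registration error" figures in this document typically report.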

An introduction to point cloud registration


Here (p_i, q_i) denotes a pair of corresponding points from the source cloud and the target cloud, and what we want to solve for is the rotation matrix R and the translation vector T. The catch is that we do not know the correspondences between the two point sets in advance; that is the core problem of registration.

2. Registration is done in two steps: coarse registration and fine registration. Coarse registration finds an approximate rotation and translation between two clouds whose relative pose is completely unknown (not necessarily precise, but roughly correct). Fine registration then starts from that rough initial transform and iterates to a more accurate rotation and translation.

A brief outline of point-to-point ICP registration:

1) The core of ICP is minimizing an objective function:

    E(R, T) = Σ_{i=1}^{n} || q_i − (R p_i + T) ||²

where (p_i, q_i) is a pair of corresponding points from the target cloud and the source cloud and there are n such pairs in total. The objective is simply the sum of squared Euclidean distances over all corresponding pairs. (The wording here differs slightly from the original paper, which expresses the rigid transform as a quaternion plus an offset vector.)

2) Finding correspondences. We do not yet know which points correspond. Given an initial guess, we apply the initial rotation and translation to the source cloud and obtain a transformed cloud. We then compare this transformed cloud with the target cloud: whenever a point from each cloud lies within a distance threshold of the other (this is the ICP parameter the question asks about), we treat the two points as a corresponding pair. This is where the name "closest point" comes from.

3) Optimizing R and T. With correspondences in hand, we can estimate R and T. The transform has only 6 degrees of freedom, while the number of corresponding pairs is huge (we have redundant observations), so the optimal rotation and translation can be solved by least squares or similar methods. This is a standard numerical optimization problem, so the details are omitted here.

4) Iterating. The newly optimized R and T move some points, so some nearest-neighbour pairs change accordingly. We therefore return to step 2) and repeat until convergence.
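The four steps above can be sketched in a short NumPy program. This is a minimal illustration rather than a production implementation: the brute-force nearest-neighbour search (a k-d tree would normally be used), the toy grid data, and the SVD-based solver in `best_rigid` are all assumptions of this sketch.

```python
import numpy as np

def best_rigid(src, dst):
    """Least-squares R, T such that dst ~= src @ R.T + T (SVD / Kabsch solution)."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # avoid a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def icp(source, target, iters=30, tol=1e-10):
    """Point-to-point ICP: alternate nearest-neighbour matching and R, T estimation."""
    src = np.asarray(source, float)
    prev = err = np.inf
    for _ in range(iters):
        # step 2: nearest neighbour in target for every transformed source point
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        nn = target[d2.argmin(1)]
        err = d2.min(1).mean()
        if prev - err < tol:                       # step 4: stop when the error stalls
            break
        prev = err
        R, T = best_rigid(src, nn)                 # step 3: optimise R, T on the pairs
        src = src @ R.T + T
    return src, err

# toy target: a 4x4x4 grid; source is a slightly rotated and shifted copy
axis = np.linspace(0.0, 0.75, 4)
target = np.array([[x, y, z] for x in axis for y in axis for z in axis])
a = 0.05
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
source = (target - 0.02) @ Rz.T
aligned, err = icp(source, target)
print(err < 1e-12)
```

Because the perturbation is small compared with the grid spacing, every nearest-neighbour match is correct from the first iteration and the sketch converges essentially exactly; with a poor initial pose, ICP can instead settle in a local optimum, which is why a coarse registration step comes first.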

Research on an improved PCA-based ICP point cloud registration algorithm


Industrial Control Computer, 2020, Vol. 33, No. 6. Three-dimensional point cloud registration [1] is a key technology in today's digital inspection field, with wide applications in shape inspection, computer vision, and other areas.

The criterion of point cloud registration is to merge point clouds acquired from different viewpoints into a single common coordinate system.

The key problem is solving for the coordinate transformation parameters: the rotation matrix R and the translation vector T.

At present, point cloud registration at home and abroad generally proceeds in two steps: initial registration and fine registration.

Initial registration roughly aligns point clouds from different coordinate systems into a common one. Common methods include: (1) the center-alignment method [2], which computes the centroids of the two clouds and makes them coincide, but cannot resolve rotational offsets; (2) the marker method [3], which attaches artificial feature targets during measurement and uses the spatial invariance of these features for localization; (3) the feature extraction method [4], which extracts surface features such as curvature and matches corresponding feature points for registration.

Fine registration then uses the known initial transformation matrix and iterative optimization to reach the global optimum.

The registration method most widely used at home and abroad is the Iterative Closest Point (ICP) algorithm proposed by Besl [5].

The ICP algorithm has two main limitations [6]: (1) it requires the two point clouds to start from a reasonably good relative initial position, to avoid falling into a local optimum; (2) searching for corresponding point pairs is time-consuming, so registration is inefficient for large point clouds.

Reference [7] sets a dynamic threshold in the iteration, uses normal-vector consistency as a constraint to reject incorrect point pairs, and accelerates the correspondence search with a k-d tree. Reference [8] proposes a direct registration algorithm for scattered point clouds: it first finds a set of valid matching point pairs by geometric hashing based on the curvature and normal features of the measured points, and then refines the result with the iterative closest point method. Reference [9] exploits the low-dimensional nature of rigid transformations to propose an automatic region-level registration algorithm that can correctly register point clouds with a low overlap ratio.

Building on this state of the art, this paper proposes an improved PCA-based ICP registration algorithm. Correcting the principal axes during initial registration provides a good starting position for the subsequent fine registration, which effectively avoids falling into a local optimum [10], and a k-d tree is used to speed up the correspondence search, achieving fast, stable, and accurate point cloud registration.

1 Initial point cloud registration

Initial registration uses an initial alignment algorithm to rotate the two sets of 3D point cloud data into rough coincidence, providing the starting value for the successive iterations of fine registration.
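A minimal sketch of the principal-axis idea behind PCA-based initial registration: rotate the source cloud so that its principal axes coincide with the target's. The paper's own axis-correction procedure is not reproduced here; resolving the eigenvector sign ambiguity with third-moment (skewness) tests, and all function names and toy data, are assumptions of this sketch.

```python
import numpy as np

def pca_axes(points):
    """Principal axes as columns (major axis first), signs fixed by projection skewness."""
    centered = points - points.mean(0)
    _, vecs = np.linalg.eigh(np.cov(centered.T))
    vecs = vecs[:, ::-1].copy()                 # eigh sorts ascending; put major axis first
    for i in range(3):
        if np.mean((centered @ vecs[:, i]) ** 3) < 0:
            vecs[:, i] = -vecs[:, i]            # resolve the +/- sign ambiguity
    if np.linalg.det(vecs) < 0:
        vecs[:, 2] = -vecs[:, 2]                # keep a proper rotation, not a reflection
    return vecs

def pca_initial_align(source, target):
    """Coarse registration: map the source's principal axes onto the target's."""
    Es, Et = pca_axes(source), pca_axes(target)
    R = Et @ Es.T
    T = target.mean(0) - R @ source.mean(0)
    return source @ R.T + T

# skewed, anisotropic toy cloud; target is a rigidly moved copy of the source
rng = np.random.default_rng(2)
source = rng.exponential(scale=[3.0, 1.5, 0.5], size=(300, 3))
a = 2.0
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
target = source @ Rz.T + [5.0, -2.0, 1.0]
aligned = pca_initial_align(source, target)
print(np.abs(aligned - target).max() < 1e-6)
```

Even for a large rotation, the principal-axis alignment recovers a good starting pose, which is exactly what the fine ICP stage needs to avoid local optima.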

PointCloud-Manual


PointCloud: process billions of points in AutoCAD. User manual for professional post-processing software for 3D laser scanning data.

About this manual

Dear PointCloud user: thank you very much for your trust, and we wish you every success using PointCloud in your future work.

Rest assured that we will do our best to develop PointCloud into a software system that makes your work simpler and easier.

To give you the best possible introduction to working with PointCloud, this manual includes examples that walk you step by step through describing and analyzing point clouds in AutoCAD.

Chapter 3 explains how to import point clouds into AutoCAD and how to evaluate them with simple tools.

The following chapters introduce the most important extended functions of PointCloud Pro.

Chapter 5 covers the most important details of the individual commands and enables you to work on your own projects.

We recommend that you take some time to read these chapters in advance.

If you have any questions, please feel free to contact us.

Your questions and suggestions about PointCloud are always welcome and very important to us.

The software improves through the ideas and creativity of its users.

Finally, we look forward to working with you and sincerely wish you success in your work.

Contents

1 What can PointCloud do?
2 Installing and starting PointCloud: system requirements; installation from CD; hardware dongle and network dongle; loading PointCloud into AutoCAD (including AutoCAD 2009 and later); CodeMeter license management (license status, activation files, activating/updating licenses, dongle time certification)
3 Working with PointCloud: PCG point clouds in AutoCAD 2011; PCG vs. PTC (AutoCAD 2011 and later); AutoCAD point cloud commands; point cloud objects and their states (PTC only); first steps; working with point cloud slices
4 Working with PointCloud Pro and images: oriented images (orienting images with PointCloud, optimizing image display in AutoCAD); planes; cylinders; 3D drawing with oriented images and surfaces (further operations, working with multiple viewports); 3D drawing with two oriented images
5 PointCloud command reference: inserting point clouds and setting parameters; point cloud management for kubit PTC clouds (load, unload, merge, clip, handles, display performance, bounding box, point size) and AutoCAD PCG clouds (PTC-to-PCG conversion, rebuilding, modeling point density, display optimization); importing other formats (ASCII, Leica PTZ, Riegl RiSCAN Pro projects, Trimble RealWorks orthoimages, Reconstructor orthoimages); defining sections (slices, multiple slices, clipping boxes, polygon clipping and exclusion, section managers); digitizing tools; collision detection; modeling (fitting contours, polygons, circles, arcs, cylinders with connections, reducers, intersections, pipes, and planes with edges); working with images (oriented images, raster images, control points, image orientation, camera navigation, 3D drawing with one or two images); manual, license management, about, move/rotate point cloud
6 Customizing and configuring PointCloud: installed file list; block definitions; useful AutoCAD settings
7 License agreement

1 What can PointCloud do?

PointCloud is an AutoCAD application for displaying, analyzing, and processing billions of 3D points inside the familiar AutoCAD environment.

A laser point cloud precise registration method with improved ICP


LI Huihui, LIU Chao, TAO Yuan
School of Geomatics, Anhui University of Science and Technology, Huainan, Anhui 232000, China
Laser Journal, 2021, Vol. 42, No. 1, p. 84

Abstract: When the initial positions of two point clouds differ greatly, the traditional Iterative Closest Point (ICP) algorithm easily falls into a local optimum; in addition, it cannot handle the registration of partially overlapping point clouds.

In view of this, an improved ICP method for precise registration of laser point clouds is proposed.

First, principal component analysis of the two point clouds, followed by correction of the principal axis directions, completes the initial registration and yields a good initial position.

Then two nearest-distance searches are used to obtain a probability value for each point, and these values are embedded in the least-squares function to improve the ICP algorithm, so that partially overlapping point clouds can be registered.

Experimental results show that on data with different overlap ratios the registration errors of the proposed method are 0.3078 mm and 0.2872 mm, with running times of only 4.4 s and 4.2 s.

The method can precisely register point clouds with large initial position differences and partial overlap, while improving efficiency and remaining robust to noise.

Keywords: point cloud registration; ICP; principal component analysis; overlap ratio; probability value

1 Introduction

3D laser scanning technology can capture point cloud information quickly and in large volumes, and point cloud registration is a fundamental problem in 3D computer vision.

Point cloud registration methods


Point cloud registration computes, subject to given conditions, the aligning transformation between two or more point cloud models, so that the models are transformed into a consistent coordinate system.

Point cloud registration is an important step in 3D spatial reconstruction and a basic ingredient of many deep-learning applications.

In essence, registration analyzes the features of the two point cloud models and transfers the model coordinates into a consistent coordinate system.

Points with similar features are easier to match; for example, surface regions that share the same point normal vectors can be used to match the two models.

Common point cloud registration methods include the following:
1. Transformation-model-based methods: Euclidean-distance-based registration, the Kabsch algorithm, the Iterative Closest Point (ICP) algorithm, etc.;
2. Feature-based methods: visual-feature registration, the Normal Directional Descriptor (NDD), Point Feature Histograms (PFH), etc.;
3. Deep-learning-based methods: PointNetLK, Feature Learning Point Cloud Registration (FLPCR), etc.;
4. Statistical-model-based methods: probabilistic-graph-based registration, probabilistic-model-based registration, etc.

Depending on the situation, a suitable registration method can be chosen to meet the application's accuracy and efficiency requirements.


Registering point clouds of the same object from different viewpoints


CHEN Zhuo, SONG Limei, CHEN Changman, TANG Huan
(Tianjin Key Laboratory of Advanced Electrical Engineering and Energy Technology, Tianjin Polytechnic University, Tianjin 300387, China)
Journal of Tianjin Polytechnic University, 2014, Vol. 33, No. 1, pp. 45-49

Abstract: Two point clouds captured from two different viewpoints are managed with the face-table method, and an ICP algorithm improved with curvature features is then used for registration. First the two massive point clouds are organized with the face-table method. The approximate curvature of each point is computed from the face table, the points of one cloud are classified by curvature magnitude, and each class is sampled randomly. For each sampled point, a corresponding point is searched for in the other cloud using a curvature-first, distance-second rule. Finally the rotation and translation matrices of the first cloud are computed to complete the registration. Experiments show that for the same massive point clouds, face-table generation is on average 406 ms faster than octree generation and lookup is on average 53.5 ms faster, and the curvature-feature ICP converges in 25 iterations to a precision of 0.241625 mm, which basically satisfies the requirements of multi-view point cloud registration for high precision and fast computation.

Keywords: object; vision; point cloud registration; ICP algorithm; point cloud management

Since the beginning of the 21st century, many researchers in China have studied point cloud registration algorithms. In 1987 Horn, Arun and others proposed a quaternion-based point-set-to-point-set registration method, which practice has shown to be key to solving complex registration problems. In 1992 the computer vision researchers Besl and McKay introduced a high-level registration method based on free-form surfaces, known as the Iterative Closest Point (ICP) method. Over more than a decade ICP has been continuously refined and extended: Senin et al. [1] enhanced ICP registration accuracy; Martínez et al. [2] combined ICP with laser range scanning; Maier-Hein et al. [3] analyzed and corrected ICP matching errors; Sacharow et al. [4] proposed the improved ICP-NI algorithm; Wang et al. [5] and Wu et al. [6] proposed feature-based registration algorithms; Cheng et al. [7] studied the propagation of registration errors; Wang et al. [8] proposed SIFT-based registration; Zhang et al. [9] and Zheng et al. [10] improved registration with constraints; Wang et al. [11] proposed an improved registration method; Zhang [12] proposed a curvature-based local registration algorithm; Zhou et al. [13] studied improvements to the 3D ICP algorithm. Building on these results and on our previous work [14-15], this paper proposes a new multi-view point cloud registration method.

Classic point cloud storage structures include the octree and the 3D grid. An octree arranges the points in a tree structure, but it cannot directly determine the two closest points inside the cloud, cannot quickly decide whether a point is a boundary point, and is relatively slow to access. To retrieve all points within a spatial region, either the octree or the triangular-grid algorithm can be used: the octree must be traversed entirely, which is slow, while the grid divides space into boxes, each storing its points in order, and its memory consumption grows with the inverse cube of the box size (the smaller the box, the more memory is consumed). This paper proposes an improved 3D-grid structure for managing point clouds, the face-table method, which reduces the memory growth to an inverse-square relationship without slowing down construction or access.

The face table is built as follows. First compute the bounding box of the cloud (xmin, ymin, zmin, xmax, ymax, zmax) and choose any two of the x, y, z axes as the base, for example x and y. Given a threshold T, divide the xy plane into (xmax − xmin) × (ymax − ymin) / (T × T) squares, then traverse the whole cloud so that each point finds its square, storing the points of each square in a linked list, as shown in Fig. 1. A plain face table has one significant drawback: it knows nothing about extent in the z direction, so the points in each square are further classified into groups along z using the threshold, as shown in Fig. 2.

To look up a point (x0, y0, z0) with the face table, first extract its (x0, y0) coordinates and map them to a square with a user-defined rule, for example x = [(x0 − xmin + 0.5T)/T], y = [(y0 − ymin + 0.5T)/T]. After the face table is generated, this rule locates the corresponding (x, y) square; then the z group containing z0 is found, and traversing that group gives the position of (x0, y0, z0). To traverse a spatial region, enumerate all (x, y) squares it contains, scanning from the low x table to the high x table, then incrementing the y table and resetting x to the low table until the highest y position is reached; the z groups are restricted to the given z range, and groups above or below it are not visited.

Overview of the curvature-feature ICP algorithm: the curvature features of both clouds must first be obtained. Because curvature is defined on a uniformly smooth surface, local surface patches must be fitted and the curvature estimated from the fitted surface. A quadric surface model is used here, so each patch to be fitted must contain no fewer than 4 points. The face-table storage structure supports this computation efficiently: since the cloud is first partitioned into (x, y) squares and each square is then ordered into z groups, every z group represents a spatial cell. Cells contain different numbers of points, and a cell with fewer than 4 points cannot be fitted, so cells with few points are merged into nearby cells with more points, or deleted when no such neighbor exists. This merge-and-delete step is the preprocessing stage of the registration algorithm.

After preprocessing, the quadric surface model of formula (1) is chosen for each cell. Formula (1) includes no rotation or translation, so the model can only be fitted after the points have been moved near the origin; the cell's points therefore cannot be fitted with formula (1) directly. The points of a cell are first fitted with a plane as in formula (2). For a cell Q = {q1, q2, ..., qn} (n > 3), writing xq1 for the x coordinate of q1, the fitting of formula (2) can be expressed as formula (3). Solving it yields the values A and B and hence the plane normal (A, B, −1), which after normalization must be rotated onto (0, 0, 1). Using the curvature estimated near the origin from the quadric model, an estimated reference point is found in the cell: all points of Q are substituted into the plane equation and split by sign into a set Qu (above the plane) and a set Qd (below it). The variances Δu, Δd and means Eu, Ed of Qu and Qd are computed, and the set Qx whose variance Δx has the smaller norm is selected together with its mean Ex. Since the mean Ex is a spatial point, a line l with direction (A, B, −1) is drawn through it; for a point t of Qx, the distance from t to l is computed from the plane through t (formula (4), solving for D by substituting t into (x, y, z)), the line through the mean point (formula (5)), their intersection s (the projection of the mean point onto the plane), and the Euclidean distance d between t and s, which is precisely the distance from t to l. The point tm of Qx whose d is no larger than that of any other point is found, its coordinates are subtracted from all points of Q to obtain the cell Q', and Q' is rotated using a quaternion-based rotation matrix. The elements of the quaternion q are defined as in formula (6), which for a unit rotation axis (nx, ny, nz) and a positive angle θ about that axis takes the standard form

    q = (cos(θ/2), nx sin(θ/2), ny sin(θ/2), nz sin(θ/2)),

so only the rotation axis and the angle are needed. With the normalized normal T = (a, b, c) to be rotated onto R = (0, 0, 1), the axis follows from U = T × R, i.e. U = (b, −a, 0), and θ = arccos(c) (0 ≤ θ ≤ π). Substituting into the quaternion gives the rotation matrix that takes Q' to Q''. A quadric surface is then fitted to Q'' as in formula (7), and the Gaussian curvature of these points is computed as in formula (8).

Once the Gaussian curvature of every point of cloud A has been computed, cloud A is divided into n classes containing approximately equal numbers of points, such that the curvature of any point in one class exceeds that of any point in the previous class. Each curvature class is then sampled randomly, and the sampled result forms cloud C. Point-to-point search: take a point g from cloud C, search cloud B according to the face-table lookup rules, and select the few points whose curvature is close to that of g; among these, the point h closest to g completes the pairing. Point-to-plane search: having found h, find the two points m and n of cloud B closest to h, take the plane through h, m, and n, and project g onto this plane to obtain the point k, completing the new pairing. Because a curvature constraint is added to the nearest-distance criterion, the algorithm improves at every level in stability, applicability, and iteration speed.

Table 1 lists the face-table generation and lookup times. The results confirm the speed of the face table relative to the octree: the octree algorithm must search the entire cloud, whereas the face table can search via the relationships between bounding boxes, greatly reducing lookup time. Tables 2 and 3 compare the iteration counts of the classic ICP algorithm and the proposed algorithm. In Fig. 3, a coarse marker-based alignment was performed on the two clouds before ICP, so the algorithm converges quickly to a high-precision match; with an initial error of 0.561, classic ICP used 10 iterations while the proposed algorithm used only 7. Fig. 4 shows the 3D scanner output matched directly with the proposed algorithm, without coarse marker alignment; thanks to the curvature-based matching, the two clouds still converge to the correct position. As Table 3 shows, with a poor initial value ICP used 122 iterations yet reached a precision of only 0.670429, which has no practical meaning, since the unregistered precision in Fig. 3 was already 0.56157 and Fig. 3(a) shows a large gap between the two clouds; the proposed algorithm instead reached a precision of 0.241625 in 24 iterations. The "search-domain change" noted in the tables for both algorithms is specific to the face table: once the two clouds have been rotated to a suitable precision, the search domain can be tightened to accelerate the convergence of the whole algorithm.

In summary, this paper proposed the face-table method and a curvature-feature ICP algorithm to meet the requirements of multi-view point cloud registration, and described in detail how to construct the face table for fast point cloud access. According to the results recorded in Table 1, face-table generation is on average 406 ms faster than octree generation and lookup is on average 53.5 ms faster. Experiments show that the proposed improved ICP algorithm needs on average 8.5 fewer iterations than the classic ICP algorithm, and that with a poor initial value its convergence precision can exceed that of classic ICP.

References
[1] Senin N, Colosimo B M, Pacella M. Point set augmentation through fitting for enhanced ICP registration of point clouds in multisensor coordinate metrology. Robotics and Computer-Integrated Manufacturing, 2013(29): 39-52.
[2] Martínez J L, Reina A J, Mandow A, et al. 3D registration of laser range scenes by coincidence of coarse binary cubes. Springer-Verlag, 2012(23): 857-867.
[3] Maier-Hein L, Franz A M, dos Santos T R, et al. Convergent iterative closest-point algorithm to accommodate anisotropic and inhomogeneous localization error. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(8): 1520-1531.
[4] Sacharow A, Balzer J, Biermann D, et al. Non-rigid isometric ICP: a practical registration method for the analysis and compensation of form errors in production engineering. Computer-Aided Design, 2011(43): 1758-1768.
[5] Wang R, Li J S, Liu L X, et al. Point cloud registration algorithm based on geometric features. Journal of East China University of Science and Technology (Natural Science Edition), 2009(5): 768-773. (in Chinese)
[6] Wu L S, Kong W J. Improved feature-point-based ICP registration of 3D point clouds. Journal of Nanchang University (Engineering & Technology), 2008(3): 294-297. (in Chinese)
[7] Cheng X J, Shi G G, Wang F, et al. Research on the error propagation of point cloud registration. Journal of Tongji University (Natural Science), 2009(12): 1668-1672. (in Chinese)
[8] Wang C D, Cheng X S, Cui H H, et al. Application of the SIFT algorithm in point cloud registration. Transducer and Microsystem Technologies, 2012(2): 149-152. (in Chinese)
[9] Zhang L, Ji Z H, Pu J X, et al. Constrained improved ICP point cloud registration method. Computer Engineering and Applications, 2012(18): 197-200. (in Chinese)
[10] Zheng D H, Yue D J, Yue J P. Registration algorithm for building point clouds based on geometric feature constraints. Acta Geodaetica et Cartographica Sinica, 2008(4): 464-468. (in Chinese)
[11] Wang X, Zhang M M, Yu X, et al. Point cloud registration with an improved iterative closest point method. Optics and Precision Engineering, 2012(9): 2068-2077. (in Chinese)
[12] Zhang J K. A surface-based local registration algorithm. Science & Technology Information, 2011(9): 68-69. (in Chinese)
[13] Zhou C Y, Li Y, Zou Z R. Research on improving the 3D point cloud ICP algorithm. Computer Technology and Development, 2011(8): 75-77. (in Chinese)
[14] Song L M, Dong X X, Xi J T, et al. A new phase unwrapping algorithm based on three wavelength phase shift profilometry method. Optics & Laser Technology, 2013, 45: 319-329.
[15] Song L M, Chen C M, Chen Z, et al. Essential parameters calibration for the 3D scanner using only single camera and projector. Optoelectronics Letters, 2013, 9(2): 143-147.
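The face-table bucketing described above can be illustrated with a small sketch. The dictionary-based table, the function names, and the candidate query are assumptions of this illustration; the paper's actual structure uses linked lists over a pre-allocated grid and a rounding-based index mapping.

```python
from collections import defaultdict

def build_face_table(points, cell):
    """Bucket points by their (x, y) grid square; keep each bucket sorted by z."""
    table = defaultdict(list)
    for p in points:
        key = (int(p[0] // cell), int(p[1] // cell))
        table[key].append(p)
    for bucket in table.values():
        bucket.sort(key=lambda q: q[2])         # z-ordering inside each square
    return table

def candidates(table, p, cell, dz):
    """Points in p's (x, y) square whose z lies within +/- dz of p's z."""
    key = (int(p[0] // cell), int(p[1] // cell))
    return [q for q in table.get(key, []) if abs(q[2] - p[2]) <= dz]

points = [(0.1, 0.1, 0.0), (0.2, 0.3, 0.05), (0.15, 0.25, 2.0), (1.4, 0.2, 0.0)]
table = build_face_table(points, cell=1.0)
near = candidates(table, (0.12, 0.2, 0.02), cell=1.0, dz=0.1)
print(near)   # the two points in square (0, 0) whose z lies within 0.1
```

Restricting a neighbour query to one square and one z band is what makes the face-table lookup faster than traversing a whole octree, at the cost of memory that grows only with the number of squares rather than the number of 3D boxes.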

Application of the ICP Algorithm to Blade-Profile Point Cloud Registration

Liu Junfeng; He Xiaomei; Huang Xiang; Wang Yizhang

Abstract: To address the registration of measured point cloud data of blade section profiles against theoretical point cloud data, this paper studies the application of the ICP algorithm to blade-profile point cloud registration. First, the matching problem between measured and theoretical blade-profile data is outlined. Then an ICP-based registration method for blade-profile point cloud data is described and implemented on the MATLAB platform. Finally, a registration example is given, and the result is compared with the matching result of the point cloud processing software CloudCompare, verifying the accuracy of the method.

Journal: Aeronautical Manufacturing Technology
Year (volume), issue: 2019, (062)012
Pages: 4 (pp. 79-82)
Keywords: blade; section profile; ICP algorithm; registration; rigid-body transformation
Authors: Liu Junfeng; He Xiaomei; Huang Xiang; Wang Yizhang
Affiliations: Changcheng Institute of Metrology and Measurement, AVIC, Beijing 100095, China; College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
Language: Chinese

Blades are key components of an aero-engine, and their design and manufacturing quality directly affects engine performance.

At present, blade manufacturing quality is evaluated mainly by the standard template method, automatic plotting measurement, optical projection measurement, inductance measurement, and coordinate measurement [1]. Among these, coordinate measurement is widely used by domestic engine manufacturers because of its high accuracy [2].

Figure 1 illustrates isometric cross-section measurement of an engine blade on a coordinate measuring machine. Following aviation standard HB 5647-98, "Standards, tolerances of blade profile and surface roughness of blade body", measurements are first taken around each inspection section along the blade stacking axis, and parameters such as profile tolerance, stacking-point position and twist are then evaluated to judge conformity.

Blade section measurement and parameter evaluation involve two problems: aligning the blade measurement coordinate system with the model coordinate system, and matching the measured section-profile data with the theoretical data. Handling these two problems well is key to evaluating the blade parameters listed above.

Figure 1: Schematic diagram of blade isometric cross-section measurement. Methods for establishing the CMM coordinate system, such as the common "3-2-1" method and the 6-point iterative method, have been studied extensively. The matching of measured and theoretical section-profile data, by contrast, is an intermediate step inside commercial blade-evaluation software and has received little attention, yet it is crucial for the subsequent evaluation of the blade profile tolerance error.
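The rigid-body transformation at the core of such profile registration can be estimated in closed form once point correspondences are fixed. Below is a minimal sketch in Python/numpy rather than the authors' MATLAB implementation; the SVD-based closed-form solution and all names are illustrative, not taken from the paper:

```python
import numpy as np

def rigid_transform_2d(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q.

    P, Q: (n, 2) arrays of corresponding 2D section-profile points.
    Uses the standard SVD-based closed-form solution."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)      # centroids
    H = (P - cP).T @ (Q - cQ)                    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# toy check: rotate a synthetic profile by 30 degrees, then recover the motion
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
P = np.random.default_rng(0).random((50, 2))
Q = P @ R_true.T + np.array([0.5, -0.2])
R, t = rigid_transform_2d(P, Q)
```

With noise-free correspondences the rotation and translation are recovered exactly; inside an ICP loop the correspondences themselves are re-estimated at every iteration.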


1 INTRODUCTION

For 3D object modeling, data acquisition must be performed from different standpoints. The derived local point clouds must be transformed into a common system. This procedure is usually referred to as registration. In the past, several efforts have been made concerning the registration of 3D point clouds. One of the most popular methods is the Iterative Closest Point (ICP) algorithm developed by Besl & McKay (1992), Chen & Medioni (1992), and Zhang (1994). The ICP is based on the search of pairs of nearest points in the two sets, and estimates the rigid body transformation which aligns them. Then the rigid body transformation is applied to the points of one set, and the procedure is iterated until convergence. Several variations and improvements of the ICP method have been made (Bergevin et al. 1996, Masuda & Yokoya 1995). In Besl & McKay (1992) and Zhang's (1994) works the ICP requires every point in one surface to have a corresponding point on the other surface. An alternative approach to this search scheme was proposed by Chen & Medioni (1992). They used the distance between the surfaces in the direction normal to the first surface as a registration evaluation function instead of the point-to-nearest-point distance. This point-to-tangent-plane distance idea was originally proposed by Potmesil (1983). In Dorai et al. (1997) the method of Chen & Medioni was extended to an optimal weighted least-squares framework. Zhang (1994) proposed a thresholding technique using robust statistics to limit the maximum distance between points. Masuda & Yokoya (1995) used the ICP with random sampling and a least median square error measurement that is robust to a partially overlapping scene.
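As a concrete reference for the loop just described (nearest-point pairing, rigid-body estimation, iterate until convergence), here is a minimal point-to-point ICP sketch in numpy. The brute-force nearest-neighbour search and all names are illustrative assumptions, not taken from any of the cited implementations:

```python
import numpy as np

def best_rigid(P, Q):
    """Closed-form rigid transform aligning P to Q, given known pairs."""
    cP, cQ = P.mean(0), Q.mean(0)
    U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # reflection guard
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

def icp(src, dst, iters=30, tol=1e-9):
    """Point-to-point ICP: pair each src point with its nearest dst point,
    estimate the rigid motion, apply it, and repeat until convergence."""
    cur, prev_err = src.copy(), np.inf
    for _ in range(iters):
        # brute-force nearest-neighbour correspondence search
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        R, t = best_rigid(cur, dst[d2.argmin(1)])
        err = np.sqrt(d2.min(1)).mean()         # mean distance before update
        cur = cur @ R.T + t
        if abs(prev_err - err) < tol:           # monotone decrease, stop at plateau
            break
        prev_err = err
    return cur, err

# demo: a slightly rotated and shifted copy of a random cloud
rng = np.random.default_rng(1)
dst = rng.random((40, 3))
th = np.radians(2.0)
Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0, 0.0, 1.0]])
src = dst @ Rz.T + 0.01
aligned, err = icp(src, dst)
```

The monotone convergence to a local minimum noted by Besl & McKay is visible here: `err` never increases between iterations, but a poor initial pose can still trap the loop in a wrong minimum.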
Okatani & Deguchi (2002) proposed the best transformation of two range images to align each other by taking into account the measurement error properties, which are mainly dependent on both the viewing direction and the distance to the object surface. The ICP algorithm always converges monotonically to a local minimum with respect to the mean-square distance objective function (Besl & McKay 1992). It does not use the local surface gradients in order to direct the solution to a minimum. Several reviews and comparison studies of the ICP variant methods are available in the literature (Jokinen & Haggren 1998, Williams et al. 1999, Campbell & Flynn 2001, Rusinkiewicz & Levoy 2001).

Registration of point clouds using range and intensity information
D. Akca
Institute of Geodesy and Photogrammetry, ETH Zurich, Switzerland
Recording, Modeling and Visualization of Cultural Heritage, Ascona, Switzerland, May 22-27, 2005, pp. 115-126.
http://www.photogrammetry.ethz.ch/general/persons/devrim_publ.html

ABSTRACT: An algorithm for the least squares matching of overlapping 3D surfaces is presented. It estimates the transformation parameters between two or more fully 3D surfaces, using the Generalized Gauss-Markoff model, minimizing the sum of squares of the Euclidean distances between the surfaces. This formulation gives the opportunity of matching arbitrarily oriented 3D surfaces simultaneously, without using explicit tie points. Besides the mathematical model and execution aspects we give further extensions of the basic model: simultaneous matching of multi sub-surface patches, and matching of surface geometry and its attribute information, e.g. reflectance, color, temperature, etc., under a combined estimation model. We give practical examples for the demonstration of the basic method and the extensions.

In Szeliski & Lavallee (1996) and Neugebauer (1997) two gradient-descent-type algorithms were given.
They adopt the Levenberg-Marquardt method for the estimation. Since 3D point clouds derived by any method or device represent an object surface, the problem should be defined as a surface matching problem. In Photogrammetry, the problem statement of surface patch matching and its solution method was first addressed by Gruen (1985) as a straight extension of Least Squares Matching (LSM).

There have been some studies on the absolute orientation of stereo models using Digital Elevation Models (DEM) as control information (Ebner & Strunz 1988, Rosenholm & Torlegard 1988). This work is known as DEM matching. This method basically estimates the 3D similarity transformation parameters between two DEM patches, minimizing the sum of squares of differences along the z-axes. Maas (2002) successfully applied a similar method to register airborne laser scanner strips, among which vertical and horizontal discrepancies generally show up due to GPS/INS inaccuracy problems. Another similar method has been presented in Postolov et al. (1999) for registering surfaces acquired using different methods, in particular laser altimetry and photogrammetry. Furthermore, techniques for 2.5D DEM surface matching have been developed, which correspond mathematically to Least Squares Image Matching. The DEM matching concept can only be applied to 2.5D surfaces, whose analytic function is described in the explicit form as a single-valued function, i.e. z = f(x, y). 2.5D surfaces are of limited value in the case of generally formed objects.

When the surface curvature is either homogeneous or isotropic, as is the case with all first or second order surfaces, e.g. plane or sphere, the geometry-based registration techniques will fail. In some studies surface geometry and intensity (or color) information has been combined in order to solve this problem.
Maas (2002) used the airborne laser scanner reflectance images as complementary to the height data for the determination of horizontal shift parameters between the laser scanner strips of flat areas. Roth (1999) and Vanden Wyngaerd & Van Gool (2003) used feature-based methods in which interest points and regions are extracted from the intensity images. More often the intensity information is processed as a metric under an ICP algorithm in order to reduce the search effort for correspondent point pairs, or in order to eliminate the ambiguities due to inadequate geometric information on the object surface (Weik 1997, Johnson & Kang 1999, Godin et al. 2001, Yoshida & Saito 2002).

The LSM (Gruen 1985) concept has been applied to many different types of measurement and feature extraction problems due to its high level of flexibility and its powerful mathematical model: Adaptive Least Squares Image Matching, Geometrically Constrained Multiphoto Matching, Image Edge Matching, Multiple Patch Matching with 2D images, Multiple Cuboid (voxel) Matching with 3D images, Globally Enforced Least Squares Template Matching, and Least Squares B-spline Snakes. For a detailed survey the author refers to Gruen (1996). If 3D point clouds derived by any device or method represent an object surface, the problem should be defined as a surface matching problem instead of a point cloud matching. In particular, we treat it as least squares matching of overlapping 3D surfaces, which are digitized/sampled point by point using a laser scanner device, the photogrammetric method or other surface measurement techniques. Our mathematical model is another generalization of the LSM (Gruen 1985), as for the case of multiple cuboid matching in 3D voxel space (Maas & Gruen 1995).

The details of the mathematical modeling of the proposed method and the execution aspects are explained in the following section. The further extensions to the basic model are given in the third section.
Two practical examples for the demonstration of the basic model and the extensions are presented in the fourth section.

2 LEAST SQUARES 3D SURFACE MATCHING (LS3D)

2.1 The basic estimation model

Assume that two different partial surfaces of the same object are digitized/sampled point by point, at different times (temporally) or from different viewpoints (spatially). Although the conventional sampling pattern is point based, any other type of sampling pattern is also accepted. f(x, y, z) and g(x, y, z) are discrete 3D representations of the conjugate regions of the object in the left and right surfaces respectively. The problem statement is estimating the final location, orientation and shape of the search surface g(x, y, z), which satisfies the minimum condition of Least Squares Matching with respect to the template f(x, y, z). In an ideal situation one would have

f(x, y, z) = g(x, y, z)    (1)

Taking into account the noise and assuming that the template noise is independent of the search noise, Equation 1 becomes

f(x, y, z) - e(x, y, z) = g(x, y, z)    (2)

where e(x, y, z) is a true error vector. Equations 2 are observation equations, which functionally relate the observations f(x, y, z) to the parameters of g(x, y, z). The matching is achieved by least squares minimization of a goal function, which represents the sum of squares of the Euclidean distances between the template and the search surface elements: ||d||^2 = min, where d stands for the Euclidean distance.
The final location is estimated with respect to an initial position of g(x, y, z), the approximation of the conjugate search surface g^0(x, y, z). To express the geometric relationship between the conjugate surface patches, a 7-parameter 3D similarity transformation is used:

x = t_x + m (r_11 x_0 + r_12 y_0 + r_13 z_0)
y = t_y + m (r_21 x_0 + r_22 y_0 + r_23 z_0)    (3)
z = t_z + m (r_31 x_0 + r_32 y_0 + r_33 z_0)

where r_ij = R(\omega, \varphi, \kappa) are the elements of the orthogonal rotation matrix, [t_x t_y t_z]^T is the translation vector, and m is the uniform scale factor. Depending on the deformation between the template and the search surfaces, any other type of 3D transformation could be used, e.g. the 12-parameter affine, 24-parameter tri-linear, or 30-parameter quadratic family of transformations. In order to perform least squares estimation, Equation 2 must be linearized by Taylor expansion:

f(x, y, z) - e(x, y, z) = g^0(x, y, z) + (\partial g^0/\partial x) dx + (\partial g^0/\partial y) dy + (\partial g^0/\partial z) dz    (4)

with

dx = (\partial x/\partial p_i) dp_i,   dy = (\partial y/\partial p_i) dp_i,   dz = (\partial z/\partial p_i) dp_i    (5)

where p_i \in {t_x, t_y, t_z, m, \omega, \varphi, \kappa} is the i-th transformation parameter in Equation 3. Differentiation of Equation 3 gives:

dx = dt_x + a_10 dm + a_11 d\omega + a_12 d\varphi + a_13 d\kappa
dy = dt_y + a_20 dm + a_21 d\omega + a_22 d\varphi + a_23 d\kappa    (6)
dz = dt_z + a_30 dm + a_31 d\omega + a_32 d\varphi + a_33 d\kappa

where a_ij are the coefficient terms, whose expansions are trivial. Using the notation

g_x = \partial g^0(x, y, z)/\partial x,   g_y = \partial g^0(x, y, z)/\partial y,   g_z = \partial g^0(x, y, z)/\partial z    (7)

and substituting Equations 6, Equation 4 results in the following:

-e(x, y, z) = g_x dt_x + g_y dt_y + g_z dt_z
            + (g_x a_10 + g_y a_20 + g_z a_30) dm
            + (g_x a_11 + g_y a_21 + g_z a_31) d\omega
            + (g_x a_12 + g_y a_22 + g_z a_32) d\varphi
            + (g_x a_13 + g_y a_23 + g_z a_33) d\kappa
            - (f(x, y, z) - g^0(x, y, z))    (8)

In the context of the Gauss-Markoff model, each observation is related to a linear combination of the parameters, which are variables of a deterministic unknown function.
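The 7-parameter similarity transformation of Equation 3 is easy to exercise numerically. A small numpy sketch follows; the Rz·Ry·Rx rotation order is one common convention and an assumption on my part, since the text does not fix it here:

```python
import numpy as np

def rot_matrix(om, ph, ka):
    """R(omega, phi, kappa) as the product of rotations about x, y, z."""
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(om), -np.sin(om)],
                   [0.0, np.sin(om),  np.cos(om)]])
    Ry = np.array([[ np.cos(ph), 0.0, np.sin(ph)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(ph), 0.0, np.cos(ph)]])
    Rz = np.array([[np.cos(ka), -np.sin(ka), 0.0],
                   [np.sin(ka),  np.cos(ka), 0.0],
                   [0.0, 0.0, 1.0]])
    return Rz @ Ry @ Rx

def similarity_7p(pts, t, m, om, ph, ka):
    """Equation 3: x = t + m * R @ x0, applied to an (n, 3) point array."""
    return t + m * pts @ rot_matrix(om, ph, ka).T

# demo: a quarter turn about z, and a pure scale-plus-shift
p = np.array([[1.0, 0.0, 0.0]])
rotated = similarity_7p(p, np.zeros(3), 1.0, 0.0, 0.0, np.pi / 2)
scaled = similarity_7p(p, np.array([1.0, 2.0, 3.0]), 2.0, 0.0, 0.0, 0.0)
```

In the estimation itself only the differential form of Equation 6 enters the design matrix; this forward transform is what is re-applied to g^0 after each iteration.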
The terms {g_x, g_y, g_z} are numeric first derivatives of this function g(x, y, z). Equation 8 gives, in matrix notation,

-e = A x - l,   P    (9)

where A is the design matrix, x^T = [dt_x dt_y dt_z dm d\omega d\varphi d\kappa] is the parameter vector, and l = f(x, y, z) - g^0(x, y, z) is the constant vector that consists of the Euclidean distances between the template and correspondent search surface elements. The template surface elements are approximated by the data points; the search surface elements, on the other hand, are optionally represented in two different kinds of piecewise forms (planar and bi-linear), which will be explained in the following. In general both surfaces can be represented in any kind of piecewise form. With the statistical expectation operator E{} and the assumptions e ~ N(0, \sigma_0^2 Q_ll) and \sigma_0^2 Q_ll = \sigma_0^2 P_ll^{-1} = K_ll = E{e e^T}, Equation 9 is a Gauss-Markoff estimation model. Q_ll, P = P_ll and K_ll stand for the a priori cofactor, weight and covariance matrices respectively.

The unknown transformation parameters are treated as stochastic quantities using proper a priori weights. This extension gives advantages of control over the estimated parameters. We introduce the additional observation equations on the system parameters as

-e_b = I x - l_b,   P_b    (10)

where I is the identity matrix, l_b is the (fictitious) observation vector for the system parameters, and P_b is the associated weight coefficient matrix. The weight matrix P_b has to be chosen appropriately, considering the a priori information of the parameters.
An infinite weight value ((P_b)_ii -> infinity) excludes the i-th parameter from the system, assigning it as constant, whereas zero weight ((P_b)_ii = 0) allows the i-th parameter to vary freely, assigning it as an unknown parameter in the classical meaning. The least squares solution of the joint system of Equations 9 and 10 gives, as the Generalized Gauss-Markoff model, the unbiased minimum variance estimation for the parameters:

\hat{x} = (A^T P A + P_b)^{-1} (A^T P l + P_b l_b)    (solution vector)    (11)

\hat{\sigma}_0^2 = (v^T P v + v_b^T P_b v_b) / r    (variance factor)    (12)

v = A \hat{x} - l    (residual vector for surface observations)    (13)

v_b = I \hat{x} - l_b    (residual vector for parameter observations)    (14)

where ^ stands for the Least Squares Estimator, and r is the redundancy. When the system converges, the solution vector converges to zero (\hat{x} -> 0). Then the residuals of the surface observations (v)_i become the final Euclidean distances between the estimated search surface and the template surface elements:

(v)_i = \hat{g}(x, y, z)_i - f(x, y, z)_i,   i \in {1, ..., n}    (15)

The function values g^0(x, y, z) in Equation 2 are actually stochastic quantities. This fact is neglected here to allow for the use of the Gauss-Markoff model and to avoid unnecessary complications, as typically done in LSM (Gruen 1985). Since the functional model is non-linear, the solution is obtained iteratively. In the first iteration the initial approximations for the parameters must be provided: p_i^0 \in {t_x^0, t_y^0, t_z^0, m^0, \omega^0, \varphi^0, \kappa^0}. After the solution vector (Equation 11) is solved, the search surface g^0(x, y, z) is transformed to a new state using the updated set of transformation parameters, and the design matrix A and the discrepancies vector l are re-evaluated. The iteration stops if each element of the alteration vector \hat{x} falls below a certain limit:

|dp_i| < c_i,   dp_i \in {dt_x, dt_y, dt_z, dm, d\omega, d\varphi, d\kappa}    (16)

The numerical derivative terms {g_x, g_y, g_z} are defined as local surface normals n.
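Equations 9-14 amount to a small weighted normal-equations solve. The following numpy sketch is illustrative only (a production implementation would use a Cholesky factorization, as the paper notes for its own solver); the toy line-fit demo shows how the fictitious parameter observations of Equation 10 switch a parameter between free and effectively fixed:

```python
import numpy as np

def ls3d_solve(A, l, P, l_b, P_b):
    """Generalized Gauss-Markoff solution, Equations 11-14.

    A: (n, u) design matrix, l: (n,) discrepancies, P: (n, n) weights,
    l_b: (u,) fictitious parameter observations, P_b: (u, u) their weights."""
    x = np.linalg.solve(A.T @ P @ A + P_b, A.T @ P @ l + P_b @ l_b)  # Eq. 11
    v = A @ x - l                                                    # Eq. 13
    v_b = x - l_b                                                    # Eq. 14
    r = A.shape[0]          # redundancy: (n + u) observations minus u params
    s0_sq = (v @ P @ v + v_b @ P_b @ v_b) / r                        # Eq. 12
    return x, v, s0_sq

# toy system: fit f = x0 + x1 * u through three "observations"
A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
l = np.array([0.0, 1.0, 2.0])
P = np.eye(3)
# zero weight on the fictitious observations: both parameters vary freely
x, v, s0_sq = ls3d_solve(A, l, P, np.zeros(2), np.zeros((2, 2)))
# a huge weight pins the first parameter near its "observed" value 0.5
x_fixed, _, _ = ls3d_solve(A, l, P, np.array([0.5, 0.0]),
                           np.diag([1e12, 0.0]))
```

The free solve reproduces the ordinary least squares answer, while the heavily weighted variant behaves like the paper's ((P_b)_ii -> infinity) case that fixes a parameter (here, the intercept) to a constant.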
Their calculation depends on the analytical representation of the search surface elements. Two first degree C^0 continuous surface representations are implemented: the triangle mesh form (Fig. 1a), which gives planar surface elements, and optionally the grid mesh form (Fig. 1b), which gives bi-linear surface elements. The derivative terms are given as the x-y-z components of the local normal vectors: [g_x g_y g_z]^T = n = [n_x n_y n_z]^T.

Figure 1. Representation of surface elements in planar (a) and bi-linear (b) forms. Note that T{ } stands for the transformation operator, and g^0 = g^0(x, y, z), f = f(x, y, z).

Basically the derivative terms {g_x, g_y, g_z} constitute a normal vector field with unit magnitude ||n|| = 1 on the search surface. This vector field slides over the template surface towards the final solution, minimizing the least squares objective function.

2.2 Execution aspects

The standard deviations of the estimated transformation parameters and the correlations between themselves may give useful information concerning the stability of the system and the quality of the data content (Gruen 1985):

\hat{\sigma}_p = \hat{\sigma}_0 \sqrt{q_pp},   q_pp \in Q_xx = (A^T P A + P_b)^{-1}    (17)

where Q_xx is the cofactor matrix for the estimated parameters. In order to localize and eliminate the occluded parts and the outliers, a simple weighting scheme adapted from the Robust Estimation Methods is used:

(P)_ii = 1 if |(v)_i| < K \sigma_0, else 0    (18)

In our experiments K is selected as > 10, since the aim is to suppress only the large outliers. Because of the high redundancy of a typical data arrangement, a certain amount of occlusions and/or smaller outliers do not have a significant effect on the estimated parameters. The computational effort increases with the number of points in the matching process.
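The outlier cut of Equation 18 is essentially a one-liner; a sketch with illustrative names:

```python
import numpy as np

def robust_weights(v, sigma0, K=10.0):
    """Equation 18: weight 1 for residuals inside K*sigma0, 0 outside,
    so occluded parts and large outliers drop out of the next iteration."""
    return (np.abs(v) < K * sigma0).astype(float)

# residuals of 0.05 and -0.2 survive the K*sigma0 = 1.0 cut; 5.0 does not
w = robust_weights(np.array([0.05, -0.2, 5.0]), sigma0=0.1, K=10.0)
```

Because the cut is re-evaluated each iteration against the current residuals, points rejected early can re-enter the adjustment once the alignment improves.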
The main portion of the computational complexity is the search for the corresponding elements of the template surface on the search surface, whereas the parameter estimation part is a small system, and can quickly be solved using Cholesky decomposition followed by back-substitution. Searching the correspondence is guided by an efficient boxing structure (Chetverikov 1991), which partitions the search space into cuboids. For a given surface element, the correspondence is searched only in the box containing this element and in the adjacent boxes. Since the method needs initial approximations of the parameters due to the non-linear functional model, one of the pre-alignment methods in the literature (Murino et al. 2001, Lucchese et al. 2002, Vanden Wyngaerd & Van Gool 2002) can be utilized. Two first degree C^0 continuous surface representations are implemented. In the case of multi-resolution data sets, in which point densities are significantly different on the template and search surfaces, higher degree C^1 continuous composite surface representations, e.g. the bi-cubic Hermite surface (Peters 1974), should give better results, of course at increased computational expense.

3 FURTHER EXTENSIONS TO THE BASIC MODEL

3.1 Simultaneous multi-subpatch matching

The basic estimation model can be implemented in a multi-patch mode, that is, the simultaneous matching of two or more search surfaces g_i(x, y, z), i = 1, ..., k, to one template surface f(x, y, z):

-e_i = A_i x_i - l_i,   P_i    (19)

Since the parameter vectors x_1, ..., x_k do not have any joint components, the sub-systems of Equation 19 are orthogonal to each other. In the presence of auxiliary information those sets of equations could be connected via functional constraints, e.g. as in the Geometrically Constrained Multiphoto Matching (Gruen 1985, Gruen & Baltsavias 1988), or via an appropriate formulation of multiple (>2) overlap conditions. An ordinary point cloud includes enormously redundant information.
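Returning briefly to the correspondence search of Section 2.2: the boxing structure can be sketched as a uniform cell grid in which a query inspects only its own and the adjacent boxes. This is an illustrative reconstruction in numpy, not Chetverikov's original implementation:

```python
import numpy as np
from collections import defaultdict

class BoxGrid:
    """Uniform boxing structure: points are binned into cubic cells, and a
    nearest-neighbour query inspects only the query cell and its neighbours."""

    def __init__(self, pts, cell):
        self.pts, self.cell = pts, cell
        self.boxes = defaultdict(list)
        for i, p in enumerate(pts):
            self.boxes[tuple((p // cell).astype(int))].append(i)

    def nearest(self, q):
        cx, cy, cz = (q // self.cell).astype(int)
        cand = [i for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)
                for i in self.boxes.get((cx + dx, cy + dy, cz + dz), [])]
        if not cand:                 # fall back to full search if cells are empty
            cand = range(len(self.pts))
        cand = np.fromiter(cand, dtype=int)
        d2 = ((self.pts[cand] - q) ** 2).sum(1)
        return cand[d2.argmin()]

# demo: the boxed lookup agrees with brute force for a nearby query
rng = np.random.default_rng(0)
pts = rng.random((200, 3))
grid = BoxGrid(pts, cell=0.25)
q = pts[17] + 0.001
idx = grid.nearest(q)
```

The cell size trades memory against candidate-list length; the text's later remark about adapting the "search domain" as the alignment tightens corresponds to shrinking this neighbourhood.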
A straightforward way to register two such point clouds could be matching of the whole overlapping areas. This is computationally expensive. We propose the multi-subpatch mode as a further extension to the basic model, which is capable of simultaneous matching of sub-surface patches selected in cooperative surface areas. They are joined to the system by the same transformation parameters. This leads to the observation equations

-e_i = A_i x - l_i,   P_i    (20)

with i = 1, ..., k subpatches. They can be combined as in Equation 9, since the common parameter vector x joins them to each other. The individual subpatches may not include sufficient information for the matching of whole surfaces, but together they provide a computationally effective solution, since they consist of only the relevant information rather than the full data set. One must carefully select the distribution and size of the subpatches in order to get a homogeneous quality of the transformation parameters in all directions of the 3D space.

3.2 Simultaneous matching of surface geometry and intensity

In case of a lack of sufficient geometric information (homogeneity or isotropicity of curvatures), the procedure may fail, since there is no geometrically unique solution, e.g. in the case of matching two planes or spherical objects. An object surface may have some attribute information attached to it. Intensity, color, and temperature are well known examples. Most laser scanners can supply intensity information in addition to the Cartesian coordinates for each point, or an additional camera may be used to collect texture. We propose another further extension that can simultaneously match intensity information and geometry under a combined estimation model.
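Because the subpatches of Equation 20 share the single parameter vector x, their contributions simply accumulate in the normal equations. A numpy sketch (illustrative names; unit weights in the demo):

```python
import numpy as np

def solve_multi_subpatch(As, ls, Ps):
    """Stack subpatch observation equations -e_i = A_i x - l_i (Eq. 20)
    that share one parameter vector x, and solve the joint normal equations."""
    N = sum(A.T @ P @ A for A, P in zip(As, Ps))          # accumulated normals
    rhs = sum(A.T @ P @ l for A, P, l in zip(As, Ps, ls))
    return np.linalg.solve(N, rhs)

# two "subpatches" of a common line-fit problem give the same answer as
# solving the pooled system directly
A1, l1 = np.array([[1.0, 0.0], [1.0, 1.0]]), np.array([0.1, 1.0])
A2, l2 = np.array([[1.0, 2.0], [1.0, 3.0]]), np.array([2.0, 3.1])
x_joint = solve_multi_subpatch([A1, A2], [l1, l2], [np.eye(2), np.eye(2)])
x_pooled, *_ = np.linalg.lstsq(np.vstack([A1, A2]),
                               np.concatenate([l1, l2]), rcond=None)
```

This is why well-distributed subpatches suffice: each contributes a rank-deficient slice of information, but their accumulated normal matrix is full rank when the patches constrain all directions of the transformation.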
In this approach the intensity image of the point cloud also contributes observation equations to the system, considering the intensities as supplementary information to the range image. Rather than adopting a feature-based or step-wise approach, our method sets up quasi-surfaces from intensity information in addition to the actual surfaces. A hypothetical example of forming the quasi-surfaces is given in Figure 2. The procedure starts with the calculation of surface normal vectors at each data point. The actual surface will include noise and surface spikes (Fig. 2b), which lead to unrealistic calculation of the normal vectors. To cope with this problem a moving average or median type filtering process could be employed, but some noise would still remain, depending on the window size. An optimum solution is the least squares fitting of a global trend surface to the whole point cloud (Fig. 2c). It suppresses the noise component and preserves the global continuity of the normal vectors along the surface. We opt for the parametric bi-quadratic trend surface, which is sufficient to model the quadric type of surfaces, e.g. plane, sphere, ellipsoid, etc. For the template surface f(x, y, z) we may write:

F(u, w) = \sum_{i=0}^{2} \sum_{j=0}^{2} b_ij u^i w^j    (21)

where u, w \in [0, 1]^2, F(u, w) \in R^3 is the position vector of any point on the trend surface, and b_ij \in R^3 are the algebraic coefficients, which are estimated by the least squares fitting. For each point the normal vector n_f is calculated on the trend surface F(u, w) and attached to the actual surface f(x, y, z) (Fig. 2d):

n_f = (F_u \times F_w) / ||F_u \times F_w||    (22)

where F_u and F_w are the tangent vectors along the u- and w-axes respectively.
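Fitting the bi-quadratic trend surface of Equation 21 is a linear least squares problem in the nine vector coefficients b_ij. A numpy sketch, with illustrative names and an assumed parameterization already in hand:

```python
import numpy as np

def fit_biquadratic(u, w, pts):
    """Least squares fit of the Equation 21 trend surface
    F(u, w) = sum_{i,j=0..2} b_ij u^i w^j, with b_ij in R^3.

    u, w: (n,) parameter values in [0, 1]; pts: (n, 3) surface points.
    Returns the (9, 3) coefficient array b."""
    # design matrix: one column per monomial u^i w^j
    M = np.column_stack([u**i * w**j for i in range(3) for j in range(3)])
    b, *_ = np.linalg.lstsq(M, pts, rcond=None)
    return b

def eval_biquadratic(b, u, w):
    """Evaluate the fitted trend surface at parameter values (u, w)."""
    M = np.column_stack([u**i * w**j for i in range(3) for j in range(3)])
    return M @ b

# demo: a surface that lies exactly in the bi-quadratic span is recovered
uu, ww = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
u, w = uu.ravel(), ww.ravel()
pts = np.column_stack([u, w, 1 + 2 * u + 3 * w**2])
b = fit_biquadratic(u, w, pts)
out = eval_biquadratic(b, np.array([0.5]), np.array([0.5]))
```

The tangent vectors F_u and F_w of Equation 22 follow by differentiating the same monomial basis, which is what makes the trend-surface normals noise-free compared with per-point mesh normals.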
Finally the quasi-surface f_c(x, y, z) is formed in such a way that each point of the actual surface f(x, y, z) is mapped along its normal vector n_f up to a distance proportional to its intensity value c_f (Fig. 2e):

f_c(x, y, z) = f(x, y, z) + \lambda c_f n_f    (23)

where \lambda is an appropriate scalar factor for the conversion from the intensity range to the Cartesian space.

Figure 2. Forming the quasi-surface. (a) Point cloud with intensity information, (b) meshed surface of the point cloud, (c) trend surface fitted to the point cloud, (d) noise-free normal vectors, (e) generated quasi-surface in addition to the actual one.

Rather than the actual surface f(x, y, z), the trend surface F(u, w) can also be defined as the datum, which leads to

f_c(x, y, z) = F(u, w) + \lambda c_f n_f    (24)

This isolates the geometric noise component from the quasi-surface f_c(x, y, z), but strongly smooths the geometry. Equations 23 and 24 assume a fairly simplistic radiometric model (intensities are mapped perpendicular to the geometric surface). We will refine this model in subsequent work. The same procedure is performed for the search surface g(x, y, z) as well:

g_c(x, y, z) = g(x, y, z) + \lambda c_g n_g    (25)

Equation 2 should also be valid for the quasi-surfaces under the assumption that similar illumination conditions exist for both the template and search surfaces:

f_c(x, y, z) - e_c(x, y, z) = g_c(x, y, z)    (26)

The random errors of the template and search quasi-surfaces are assumed to be uncorrelated. Contrast and brightness differences, or in the extreme case specular reflection, will cause model errors and deteriorate the reliability of the estimation. The radiometric variations between the template and search surface intensities should be adapted before matching by pre-processing, or by appropriate modeling in the estimation process with extra parameters. For two images of an object acquired by an optical-passive sensor, e.g.
a CCD camera, such an intensity transfer function (c_f = r_0 + c_g r_1) could be suitable for the radiometric adaptation, where r_0 (shift) and r_1 (scale) are radiometric correction parameters. In the case of laser scanner derived intensity images, the radiometric variations are strongly dependent on both the incident angle of the signal path with respect to the object surface normal and the object-to-sensor distance. Then, for a plane type of object, the radiometric variations can be modeled in a first approximation as follows:

f_c(x, y, z) - e_c(x, y, z) = g_c(x, y, z) + r_0 + u r_1    (27)

where u is the abscissa of the search trend surface G(u, w), considering that the u-axis is the horizontal direction. In other words, the u-axis is the principal direction of change of the incident angles. Depending on the characteristics of the scan data it can be replaced by the ordinate value w, or another type of parameterization. In general a second order bivariate polynomial (r_0 + u r_1 + w r_2 + u w r_3 + u^2 r_4 + w^2 r_5) or an appropriate subpart of it can be used. Although the radiometric parameters r_0 and r_1 are linear a priori, we expand them to Taylor series. Equation 27 in linearized form gives:

-e_c(x, y, z) = g_cx dx + g_cy dy + g_cz dz + dr_0 + u dr_1 - (f_c(x, y, z) - {g_c^0(x, y, z) + (r_0^0 + u r_1^0) n_g})    (28)

where g_cx, g_cy, and g_cz stand for the derivative terms, as given in Equation 7 for the actual surface observations. The first approximations of the radiometric parameters are r_0^0 = r_1^0 = 0. At the end of each iteration the quasi search surface g_c^0(x, y, z) is transformed to a new state using the updated set of transformation parameters, and subsequently re-shaped by the current set of radiometric parameters r_0^0 + u r_1^0 along the normal vectors n_g, which are calculated on the search trend surface G(u, w). The quasi-surfaces are treated like actual surfaces in the estimation model.
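The quasi-surface construction of Equations 23 and 25 maps each point along its trend-surface normal by a distance proportional to its intensity. A numpy sketch; the normals are assumed to be already computed via Equation 22, and the names are illustrative:

```python
import numpy as np

def quasi_surface(pts, normals, intensity, lam):
    """Equations 23/25: f_c = f + lam * c * n, with n normalized to unit length."""
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    return pts + lam * intensity[:, None] * n

# demo: a flat patch with (un-normalized) vertical normals, so the
# intensities become small vertical offsets scaled by lam
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
normals = np.array([[0.0, 0.0, 2.0], [0.0, 0.0, 2.0]])
fc = quasi_surface(pts, normals, np.array([100.0, 50.0]), lam=0.001)
```

The factor lam plays exactly the role described in the text: it converts the intensity range into Cartesian units, and choosing it too large lets intensity noise dominate the geometry of the quasi-surface.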
They contribute observation equations to the design matrix, joining the system by the same set of transformation parameters. After the further expansion of Equation 28 and with the assumptions E{e_c} = 0 and E{e_c e_c^T} = \sigma_0^2 P_c^{-1}, the total system becomes

-e   = A x - l,      P
-e_b = I x - l_b,    P_b    (29)
-e_c = A_c x - l_c,  P_c

where e_c, A_c, and P_c are the true error vector, the design matrix, and the associated weight coefficient matrix for the quasi-surface observations respectively, and l_c is the constant vector that contains the Euclidean distances between the template and correspondent search quasi-surface elements. The hybrid system in Equation 29 is of the combined adjustment type that allows simultaneous matching of geometry and intensity. In our experiments, weights for the quasi-surface observations are selected as (P_c)_ii < (P)_ii, and the intensity measurements of the (laser) sensor are considered to be uncorrelated with the distance measurements (E{e_c e^T} = 0) for the sake of simplicity of the stochastic model.

4 EXPERIMENTAL RESULTS

Two practical examples are given to show the capabilities of the method. All experiments were carried out using our own self-developed C/C++ software running on the Microsoft Windows OS. The processing times were measured on a PC with an Intel P4 2.53 GHz CPU and 1 GB RAM. In all experiments the initial approximations of the unknowns were provided by interactively selecting 3 common points on both surfaces before matching. Since in all data sets there was no scale difference, the scale factor m was fixed to unity by an infinite weight value ((P_b)_ii -> infinity). The first example is the registration of two point clouds of a room (Fig. 3) in Neuschwanstein Castle in Bavaria, Germany. The scanning was performed using the IMAGER 5003 (Zoller+Fröhlich) terrestrial laser scanner. The average point spacing is 5 millimeters. The point cloud of Figure 3b was matched to Figure 3a by use of the LS3D surface matching.
The iteration criteria values c_i were selected as 1.0e-4 meters for the elements of the translation vector and 1.0e-3 grad for the rotation angles. In this experiment the whole overlapping areas were matched. The numerical result of the matching is given in part I of Table 1. The relatively homogeneous and small magnitudes of the theoretical precision values of the parameters show the success of the matching.

Figure 3. Example "room". (a) Template and (b) search point clouds, (c) composite point cloud after the LS3D surface matching. Note that laser scanner derived intensities are back-projected onto the point clouds for visualization purposes only.

A further matching process was carried out using the simultaneous multi-subpatch approach of the LS3D. Seven occlusion-free cooperative subpatches were selected. The result of the matching is given in part II of Table 1. The simultaneous multi-subpatch approach apparently decreases the computation times.

Table 1. Numerical results of the "room" example.
_______________________________________________________________________________________________________
#   Surface  No. of     Iter.  Time    Sigma        Std. dev. of        Std. dev. of
    mode     points            (sec.)  naught (mm)  t_x/t_y/t_z (mm)    omega/phi/kappa (grad/100)
_______________________________________________________________________________________________________
I   P        1,155,502  10     142.5   3.69         0.01 / 0.01 / 0.01  0.03 / 0.04 / 0.02
    B                   11     209.9   3.70         0.01 / 0.01 / 0.01  0.03 / 0.04 / 0.02
II  P        279,088    8      25.8    3.60         0.01 / 0.02 / 0.03  0.07 / 0.06 / 0.04
    B                   11     43.9    3.65         0.01 / 0.02 / 0.03  0.07 / 0.06 / 0.04
_______________________________________________________________________________________________________
P: planar, B: bi-linear surface representation.
