基于Markov链随机脉宽调制的永磁同步电机高频边带谐波与声振响应抑制
第27卷㊀第9期2023年9月㊀电㊀机㊀与㊀控㊀制㊀学㊀报Electri c ㊀Machines ㊀and ㊀Control㊀Vol.27No.9Sep.2023㊀㊀㊀㊀㊀㊀基于Markov 链随机脉宽调制的永磁同步电机高频边带谐波与声振响应抑制陈勇1,㊀邱子桢2,㊀马凯2,㊀孔治国2,㊀黄炘2(1.广西大学机械工程学院,广西南宁530004;2.中汽研新能源汽车检验中心(天津)有限公司,天津300300)摘㊀要:基于Markov 链算法的随机脉宽调制(RPWM )技术,以电动汽车驱动用永磁同步电机(PMSM )为研究对象,对高频边带电流谐波及声振响应的抑制效果进行研究㊂首先,基于dSPACE 半实物仿真系统和12极/10槽永磁同步样机,对空间矢量脉宽调制(SVPWM )所引入的边带电流谐波与径向电磁力进行解析与频谱阶次分布特征识别㊂其次,对常规RPWM 的随机数性能进行分析,提出多状态Markov 链随机数产生原理与实现方法,并结合粒子群(PSO )算法对转移概率和随机增益两种随机参数进行快速寻优㊂最后,进行稳态工况与不同转速工况的实验验证,相比于RP-WM 结果表明,经过多状态Markov 链优化后所生成的随机数分布更加均匀且对称,边带电流谐波与声振响应均呈现出更好的抑制效果㊂关键词:永磁同步电机;随机脉宽调制;Markov 链;边带电流谐波;振动噪声抑制;粒子群优化DOI :10.15938/j.emc.2023.09.012中图分类号:TM341文献标志码:A文章编号:1007-449X(2023)09-0109-10㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀收稿日期:2022-01-05作者简介:陈㊀勇(1954 ),男,博士,教授,研究方向为新能源汽车动力传动系统设计及CAE 分析;邱子桢(1992 ),男,博士,工程师,研究方向为电驱动系统噪声与振动控制;马㊀凯(1992 ),男,硕士,工程师,研究方向为电驱动系统功能安全测试技术;孔治国(1977 ),男,博士,教授级高级工程师,研究方向为电驱动系统测试技术;黄㊀炘(1984 ),男,硕士,正高级工程师,研究方向为电驱动系统测试技术㊂通信作者:邱子桢Investigation into Markov-chain random modulation for suppression high-frequency sideband vibro-acoustics in permanent magnetsynchronous motorCHEN Yong 1,㊀QIU Zizhen 2,㊀MA Kai 2,㊀KONG Zhiguo 2,㊀HUANG Xin 2(1.College of Mechanical Engineering,Guangxi University,Nanning 530004,China;2.CATARC New Energy Vehicle Test Center (Tianjin)Co.,Ltd.,Tianjin 300300,China)Abstract :An investigation associated with the sideband vibro-acoustics suppression was presented with multi-state Markov-chain random pulse-width modulation (RPWM)in permanent magnet synchronous motor(PMSM),together with the optimization method for the random parameters.Firstly,the feature i-dentifications of sideband current harmonics and the radial electromagnetic force,including spectral and order distributions introduced by the space vector pulse-width modulation(SVPWM),were undertaken on a 12-pole /10-slot prototype PMSM and its dSPACE hardware-in-the-loop control system.Then,the prin-ciple and implementation 
of different states Markov-chain were presented,while the random performances of conventional RPWM were conducted.The key parameters,i.e.transition probability and random gain,were optimized by using the particle swarm optimization (PSO)method.At last,the sideband cur-rent harmonics and vibro-acoustic results were conducted and evaluated under steady state and differentspeed conditions.Since the random numbers generated by the m Markov chain are evenly and continuous-ly,the suppression effects are better than those in conventional RPWM.Keywords:permanent magnet synchronous motor;random pulse width modulation;Markov chain;side-band current harmonic;vibro-acoustics control;particle swarm optimization0㊀引㊀言当前,在环境保护与清洁能源快速发展的背景下,以电动汽车和混合动力汽车为代表的新能源汽车成为未来汽车产业发展的趋势㊂永磁同步电机(permanent magnet synchronous motor,PMSM)以其高转矩/功率密度㊁高转速㊁操作与控制灵活等优点,广泛用于电驱动总成[1]㊂然而,由于缺少了传统发动机的掩蔽效应,电机本体振动及其辐射噪声对动力总成系统的运行稳定性㊁可靠性及整车层面的NVH (noise,vibration,harshness)性能均有重要的影响[2]㊂特别是,由电压源逆变器(voltage source in-verter,VSI)及空间矢量脉宽调制策略(space vector pulse-width modulation,SVPWM)所引入的高频边带谐波成分,导致边带电流谐波集中在载波频率及其整数倍范围内,从而导致电机辐射出高频率㊁令人感到不适的 啸叫 [3-4]㊂诸多研究文献中采用解析与有限元数值计算的方法[5],分析并识别了边带谐波成分的时空分布与幅值特征㊂通过构建电磁场㊁电机本体结构㊁振动响应与声辐射的多物理场预测模型,基于模态叠加与边界元法的声振有限元与半解析仿真模型[6],实现了从电磁谐波到机械响应之间的多物理场耦合,能够实现高精度的电机声振响应预测㊂解析法[7]可以更快㊁更直接地反映 机电磁控 多物理量之间的耦合关系㊂通过对VSI所输出的PWM波进行傅里叶级数分解,考虑基本的电磁参数与结构尺寸,可以构建定子与转子坐标系下的边带谐波电流解析模型,并运用麦克斯韦张量法以实现对径向电磁力幅值㊁频率次数与空间阶次的完整解析[8]㊂通常,SVPWM的载波频率被设定为固定值,对于边带谐波及声振响应的抑制主要围绕基于Parse-val原理的扩频调制技术,即令信号在时域和频域内的能量保持不变,通过扩大谐波分布频谱范围,达到降低谐波幅值的效果[9-10]㊂根据信号种类,扩频调制策略可以分为基于周期性信号和离散随机性信号两种方式㊂文献[11-12]中,对基于三角波和正弦波的两种扩频调制效果进行了分析,尽管周期性扩频调制技术可以使原先固定的载波频率以确定且可控的方式进行变化,但边带谐波与声振抑制效果有限㊂在诸多随机性调制技术中,以离散随机信号的脉宽调制(random pulse-width modulation,RPWM)技术应用最为广泛,离散的随机信号可以与SVPWM技术相结合,使PWM输出脉冲宽度呈现随机化,使原本集中的边带谐波能量扩展至较宽的频域范围,从而实现抑制边带谐波和声振响应幅值的效果[13]㊂相比于传统的PWM和SVPWM,RPWM能够有效降低谐波畸变率(total harmonic 
distortion,THD)[12]㊁抑制转矩脉动[9]㊁降低电机铁损等,特别是有效降低低频侧与高频侧边带谐波成分[14]㊂然而,考虑到离散的随机数序列在硬件上实现的难易程度,常规RPWM通常采用以线性同余法或查表法为主的伪随机数生成方法[15]㊂与理想随机数相比,伪随机数在局部时间段内会大于或小于数学期望值,从而导致所生成的随机化载波频率呈现出连续大于或小于初始载波频率的现象[16]㊂RP-WM所生成的不均衡随机数会直接影响谐波抑制效果,使边带谐波不能最大化抑制,而且会使系统输出信号中产生较大的电流纹波,从而导致输出转矩波动及产生额外的开关损耗[14]㊂为改善随机数性能以进一步优化边带谐波电流与振动噪声的抑制效果,本文引入多状态Markov链算法㊂首先,对SVPWM所引入的边带电流谐波与径向电磁力进行解析分析,并通过样机实验对相电流与声振响应的频谱进行特征识别㊂其次,分别建立多状态Markov链算法的随机数生成策略模型;计及转移概率与随机增益参数对抑制效果的影响,结合粒子群算法进行随机参数寻优㊂最后,通过策略搭载与样机实验,对比分析稳态工况与不同转速工况下边带电流谐波与声振响应实验结果,进一步验证多状态Markov链优化算法的有效性㊂1㊀基于SVPWM的边带成分特征识别1.1㊀边带电流谐波与径向电磁力特征解析常规SVPWM可以被等效为对称放置零矢量的011电㊀机㊀与㊀控㊀制㊀学㊀报㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀第27卷㊀常规采样PWM,其输出相电压被视作调制波与载波频率调制的结果[17],如图1所示,y (t )为调制波,T s 为采样周期㊂以第一载波频段为例,边带电流谐波可以通过在定子坐标系框架中构建以调制波频率f 0和载波频率f c 为变量的双重傅里叶级数,再通过坐标变换重新排列转子坐标系中的边带谐波分量直接解析[5]㊂位于第一载波频率频带范围内的边带电流谐波可以表示为i sideband_1(t )ʈi 1_2cos(f c ʃ2f 0)t +i 1_4cos(f c ʃ4f 0)t ㊂(1)式中i 1_2和i 1_4为边带电流谐波分量的幅值㊂图1㊀SVPWM 对称规则采样Fig.1㊀Symmetrical regular sampled in SVPWM对于径向磁通电机,作用于定子齿面上的电磁激励力是引起电机振动及辐射噪声的主要原因㊂由SVPWM 所引入的边带电流谐波会使气隙磁场感应产生分布于载波频率及其整数倍频带范围的边带磁场分量,进而产生相应频带的边带电磁力㊂通常,气隙电磁力可以分为径向分量与切向分量,他们具有相同的时间与空间特征,通过Maxwell 应力张量法,气隙处的电磁力可以表示为f n (θ,t )=12μ0[B 2n (θ,t )-B 2t (θ,t )]ʈ12μ0B 2n(θ,t )㊂(2)式中:μ0为真空磁导率;B n 和B t 分别代表气隙磁密的径向与切向分量,而切向分量的幅值远小于径向分量,通常予以省略㊂忽略磁饱和效应并考虑磁场叠加原理,径向气隙磁密B n 可以分解为定子电枢磁场B arm 和转子永磁体磁场B mag ,即B n (θ,t )=B arm (θ,t )+B mag (θ,t )㊂(3)式中电枢磁场可以被视为基波磁场B 0与谐波磁场B h 的叠加,B mag 和B arm 的表达式[8]为:B mag (θ,t )=ðɕμB μcos(μpθ-μf 0t );(4)B arm (θ,t )=ðɕvB 0cos(vN t θ+f 0t )+ðɕv ðɕhB h cos(f h t -vN t θ)㊂(5)式中:μ为永磁体磁场阶次,考虑到永磁体基波磁场是振动噪声的主要贡献量,μ取值为1;v 表示电枢磁的谐波次数;θ为空间机械角度;f 0为电信号基波频率;f h 为谐波电流频率;N t 为单元电机,数值上取电机极对数p 与槽数z 的最大公约数㊂将式(3)~式(5)代入式(2)中,可以得到径向电磁力密度的解析表达式,其中,全部表达式有12项成分㊂为了简化体现出边带谐波分量,只考虑永磁体边带谐波与电枢谐波磁场的相互作用,具体表示[8]为f n (θ,t )ʈλ202μ0B μðɕv ðɕh B h{cos[(p -vN t )θ-(f 0-f h )t ]+cos[(p +vN t )θ-(f 0+f h )t ]}㊂(6)由式(6)可以看出,边带电磁分量与机械响应之间的多物理场解析模型可以通过时空特征的耦合关系建立㊂本文以时间频率特征为识别对象,在机械响应中的频率特征为f h ʃf 0次,结合第一载波频率附近的边带电流谐波特征频率,f c ʃ2f 0和f c ʃ4f 0次,边带径向电磁力的主要阶次为f c ʃf 0,f c ʃ3f 0和f c ʃ5f 0次㊂由上,常规SVPWM 
技术所引入的边带谐波成分及径向电磁力特征解析模型被完整建立㊂1.2㊀边带电流谐波与声振响应特征识别为了识别常规SVPWM 边带电流谐波与声振响应频谱特征,本文采用由电机样机㊁测试平台㊁控制与测试系统组成的样机实验平台㊂电机样机选用一款小型电动汽车后桥驱动12槽/10极永磁同步电机,具体参数如表1所示㊂实验中的测试平台基于20N㊃m 磁粉测功机实现转矩加载,如图2所示㊂表1㊀永磁同步电机样机参数Table 1㊀Key parameters of the prototype PMSM111第9期陈㊀勇等:基于Markov 链随机脉宽调制的永磁同步电机高频边带谐波与声振响应抑制图2㊀样机实验与测试平台Fig.2㊀Experimental setup with instrumentation ㊀㊀直流电源采用20kW电池模拟器输出311V-4kW,驱动电路与三相全桥功率模块选用Infineon-BSM-75GB120DN2㊂本文所提出的PWM策略模型均在MATLAB/Simulink中建立,并基于dSPACE1103半实物仿真平台及其实时监控上位机系统,实现PWM信号输出㊁电流与位置信号的反馈㊁策略切换与参数实时更替㊂在测量相电流㊁壳体振动与辐射噪声信号时,本文将ICP型三向加速度传感器安装在电机冷却壳体上,传感器灵敏度为42.32mV/g;参考了声学测试标准,ICP麦克风测点的布置结合了电机结构参数,采用5个麦克风测点的半球面测试方法;振动噪声信号使用朗德SQuadriga II数据采集仪进行采集与运算;电流传感器选用霍尔电流钳,相电流信号使用YOKOGAWA ScopeCorder采集㊂诸多参考文献中指出[8,14],边带谐波成分产生机理的本质为频率调制过程,相比于幅值调制与相位调制,边带谐波成分的幅值变化取决于电机运行速度而不是转矩负载状态㊂为了清楚地验证边带电流谐波与其相关的振动噪声响应,本文将样机的运行条件设置为1000r/min和4N㊃m的高效率稳态工况区间,其中电机转速频率f r为16.67Hz,电流基波频率f0为83.34Hz㊂边带电流谐波通过功率谱密度(power spectral density,PSD)方法进行处理,如图3所示,边带谐波成分在载波频率附近出现明显的阶次分布,特征频率为7667Hz(f c-4f0)㊁7833Hz(f c-2f0)㊁8167Hz (f c+2f0)和8334Hz(f c+4f0),验证了式(1)中的解析模型㊂此外,边带电流谐波的峰值出现在f cʃ2f0,幅值为-27.68dB/Hz;f cʃ4f0的谐波幅值相对较低,幅值为-39.12dB/Hz㊂图3㊀常规SVPWM相电流波形与边带电流谐波成分Fig.3㊀Phase current wave and sideband current har-monic components in conventional SVPWM 边带声振响应频谱结果如图4所示,其中,图4(b)中的噪声频谱为5个麦克风数据均方根处理的结果,并且为了计算动态声压级,所选数据经过A计权(SPL-A)处理㊂类似于边带电流谐波,声振响应频谱中可以明显地看出阶次分布,特征频率分别为7583Hz(f c-5f0)㊁7750Hz(f c-3f0)㊁7917Hz (f c-f0)㊁8084Hz(f c+f0)㊁8250Hz(f c+3f0)和8416Hz(f c+5f0),再次验证了声振响应与边带电流谐波成分的相关性㊂声振响应的峰值出现在f c-3f0次处,幅值分别为0.175m/s2及53.76dBA㊂图4㊀常规SVPWM边带成分声振响应Fig.4㊀Sideband vibro-acoustics in conventional SVPWM211电㊀机㊀与㊀控㊀制㊀学㊀报㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀第27卷㊀2㊀Markov 链随机脉宽调制技术基于谐波扩频调制技术中的RPWM,即通过将原本固定的载波频率按照随机形式变化,使得相对集中的边带谐波成分转化为分布在指定频谱范围内的离散谐波成分,谐波能量随之降低,从而达到改善边带声振响应的效果㊂然而,RPWM 依然存在随机数短时间内分布不均匀的问题,导致相电流脉动成分及开关损耗增大等问题㊂因此,本文引入多状态Markov 链模型,优化生成随机数效果,并应用粒子群算法选择最优的转移概率与随机增益㊂2.1㊀RPWM 技术原理与随机数分析RPWM 技术的核心是随机数生成策略,考虑到硬件设备生成随机数的难易程度,通常采用伪随机数生成方法[18]㊂原本固定载波频率f c 随机化后的随机载波频率f n +1可以表示为f n +1=f c +Rs ㊂(7)式中:s 为[-1,1]区间范围内的任意数值;R 
为伪随机数生成策略的随机增益,在频谱中的具体体现为边带谐波成分的扩频宽度㊂由上式可以看出,随机化后的调制效果由某时刻的随机数值s 和随机增益R 共同决定,即载波频率变化范围为[f c -R ,f c +R ]㊂为了反映出脉冲宽度随机化程度,图5给出了基于非对称规则PWM 采样的RPWM 信号示意图,需要说明的是无论对称还是非对称规则采样PWM 策略,其随机化前后对于载波频率附近的边带谐波及其边带声振响应的影响是相当有限的㊂此外,常规RPWM 的随机载波频率生成结果如图6所示,由于RPWM 对随机数生成并未加以限制,因此在随机数生成过程中会无法避免地产生某时间段内随机数值大于或小于平均期望值,其所影响的随机载波频率也会在某段时间内大于或小于中心载波频率㊂图5㊀随机载波频率PWM 示意图Fig.5㊀Schematic diagram of randomPWM图6㊀常规RPWM 的随机载波频率生成结果Fig.6㊀Simulation results of carrier frequency inconventional RPWM2.2㊀Markov 链原理与多状态随机数产生过程Markov 过程被称为无后效过程,即随机事件序列下一时刻的取值(a n +1,t n +1)只与当前时刻的取值(a n ,t n )有关,与之前的取值无关[19]㊂在随机优化算法仿真与实际设计中,需要将连续事件进行离散化,形成离散Markov 链,因此本文引入单一时刻的条件转移概率P ij (m ,n ),具体表示为P ij (m ,n )=P {X n =a j |X m =a i }=P {X n =j |X m =i ),i ,j ɪS ㊂(8)式中S 为状态空间,S ={a 1,a 2, ,a n }㊂转移概率P ij (m ,n )的意义为:m 时刻系统处于状态a i ,经(n -m )时刻后转移到状态a j 的条件概率;亦可以理解为系统m 时刻状态i ,转移到n 时刻状态j 的概率㊂随着状态空间数量的增加,转移矩阵所含元素越多,这也意味着优化后的随机数效果更加理想㊂为了避免状态空间数量所引入额外计算量,本文将未知参量控制在10个以内㊂两状态Markov 链随机载波频率系统设计是将原先载波频率的变化区间[f c -R ,f c +R ]分解为[f c -R ,f c ]和[f c ,f c +R ]上下两个部分,从而构成载波频率的状态1和状态2空间㊂如设状态1的概率为p a ,状态2的概率为p b ,构成的两状态转移概率矩阵为P =p 11p 12p 21p 22éëêêùûúú=p b p a p ap béëêêùûúú㊂(9)式中p a 和p b 的取值为(0,1),且p a +p b =1㊂三状态Markov 链随机载波频率系统的设计是在两状态的基础上,将载波频率的变化区间进一步细分,引入调制系数k ,k ɪ(0,0.33),载波频率变化区间被平均分割为[f c -R ,f c -kR ]㊁(f c -kR ,f c +kR )和[f c +kR ,f c +R ]三部分,分别对应状态1㊁状311第9期陈㊀勇等:基于Markov 链随机脉宽调制的永磁同步电机高频边带谐波与声振响应抑制态2和状态3㊂每个状态所对应的概率分别为p a ㊁p b 和p c ㊂考虑到矩阵中每行每列之和为1,所构成的三状态转移概率矩阵可被进一步简化,即P =p a p b p c p bp c p a p cp ap b éëêêêùûúúú=0p 1p 2p 10p 2p 2p 10éëêêêùûúúú㊂(10)式中p 1和p 2的取值为(0,1),且p 1+p 2=1㊂多状态Markov 链随机载波频率生成结果如图7所示,与常规RPWM 的随机载波频率生成结果相比,随机数分布更加均匀,随机频率的分布得到了很好的优化㊂两状态Markov 链的随机化载波频率的分布仍然有部分时刻大于或小于平均期望值;随着转移概率矩阵的增加,三状态Markov 链的随机数性能进一步提高,随机化载波频率分布更加均匀㊂图7㊀多状态随机载波频率生成结果Fig.7㊀Carrier frequency results with multi-states2.3㊀基于粒子群算法的随机参数寻优从上述分析中可以看出,随机增益R 与转移概率P 对边带谐波抑制效果的影响较大,而且这两个参数在系统参数设计中需要限定取值范围,避免占用主控芯片较大的计算内存㊂因此,本文选用粒子群算法,对R 和P 参数进行快速寻优,从而达到最优的边带声振抑制效果㊂粒子群算法(particle swarm 
optimization,PSO)属于群智能算法,是通过模拟鸟群觅食行为而发展起来的一种基于群体协作的元启发式优化算法[20],相比于传统常用的模拟退火算法(simulated annea-ling,SA)和遗传算法(genetic algorithm,GA),PSO不需要对初始粒子进行种群交叉与变异,仅通过内部迭代即可实现对最优目标解的求解,并且凭借其全局高精度收敛及鲁棒可靠性等优点,在多目标优化㊁自适应控制㊁非线性和多维度空间寻优等领域得到了广泛应用[21]㊂PSO 算法作为随机搜索算法,其核心驱动因素是将全局历史最优解G best 与个体历史最优解P best 进行共享更新迭代,从而实现个体极值与粒子群全局的最优求解,具体流程如图8所示㊂图8㊀粒子群算法流程图Fig.8㊀Flowchart of particle swarm optimization全局最优解G best 的筛选求解过程如下:1)系统初始标定:根据实验采集到的相电流数据,标定仿真模型d 轴电感㊁绕组电阻等参数,确保仿真模型各工况下与实验结果相符㊂2)限定粒子初始位置与极值:R 的范围设定为1000~2000Hz,P 的范围为0.5~1㊂3)初始化粒子群:粒子群中的每个粒子包含2个基本信息,随机增益R 和转移概率P ,在每次迭代过程中,每个粒子的最优解P best 将会与全局最优解G best 进行比较和更新㊂考虑到变量个数较少,本文设定粒子群个数为20个㊂4)运行稳态工况Simulink 仿真程序,并在MATLAB 工作区间生成相电流时域波形数据库㊂R411电㊀机㊀与㊀控㊀制㊀学㊀报㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀第27卷㊀的计算步长为每步100Hz;P 的计算步长为每步0.01;将生成的500个数据文件整理为数据库,再由PSD 程序对数据进行时频转换;由于实际频谱中两端幅值较低,对PSO 算法的收敛过程存在干扰,故将频带范围修正为7750~8250Hz 的频谱范围,最终拾取个体最优值P best 和全局最优值G best ㊂5)边带电流谐波的最小值修正与判定:将修正后频带范围内的值共同减去一个特定值(40dBA)形成中间数据库,再将中间数据库取负数,选取其中的最大值判定为P best ,并更新G best ㊂6)终止条件设定:标准PSO 算法中终止条件通常为迭代步数和收敛判据㊂为了保证群体的多样性或单一性,本文将迭代步数作为终止条件,以防止粒子群早熟收敛或过度迭代不收敛㊂3㊀实验结果3.1㊀RPWM 边带电流谐波与声振响应RPWM 边带电流谐波成分如图9所示,基于RPWM 边带电流谐波的阶次分布得到了有效地抑制,谐波幅值抑制到-40dB /Hz 以下㊂此外,由于随机数性能的非理想因素,边带谐波的峰值略偏小于8000Hz㊂图9㊀RPWM 边带电流谐波成分Fig.9㊀Sideband current harmonic components in RPWM声振响应频谱如图10所示,与边带电流谐波类似,边带声振响应中明显的阶次效应得到了显著的抑制,其中振动幅值抑制到0.06m /s 2以下,A 计权声压级幅值抑制到45dBA 以下㊂RPWM 的边带电流谐波与声振响应呈现出明显的相关性,并且同时表现出不对称性,峰值均分布在小于中心频率8000Hz 的一侧㊂由于RPWM 的本质仍然是频率调制,即边带电流谐波和振动声响应的大小与速度条件正相关,而对负载条件的变化不敏感,因此图11进一步给出了4N㊃m 恒定转矩下不同转速工况的噪声频谱实验结果,可以看出,边带声压级频谱分布仍显示出不均衡性㊂图10㊀RPWM 边带成分声振响应Fig.10㊀Sideband vibro-acoustics inRPWM图11㊀不同转速工况RPWM 噪声频谱Fig.11㊀Acoustic spectrum of RPWM with different speed3.2㊀多状态Markov 随机载波频率调制实验结果为了能够优化随机数性能,本文基于PSO 优化算法对其中的关键参数进行了快速寻优;基于最优的随机参数,引入多状态Markov 链随机脉宽调制改善随机载波频率分布,进一步抑制边带声振响应,以满足最优的边带谐波成分及声振响应抑制效果㊂511第9期陈㊀勇等:基于Markov 链随机脉宽调制的永磁同步电机高频边带谐波与声振响应抑制基于PSO 算法的随机参数寻优结果如图12所示,在迭代过程中最大迭代步数步长设置为60,目标函数的误差值在40个迭代步数内趋于稳定,即边带电流谐波的抑制效果达到最优㊂此时,转移概率P 的值等于0.68;随着随机增益R 值的增加,边带谐波峰值减小,且在R =2000时达到最优效果;同时,图12(c)表明当R 值大于2000Hz 
之后,对边带谐波及声振响应的抑制效果趋于饱和㊂图12㊀基于粒子群算法的随机参数寻优结果Fig.12㊀Results of randomized parameters with PSOalgorithm根据上述的随机参数寻优分析,图13给出了参数最优下的多状态Markov 链边带电流谐波抑制效果㊂从实验数据中可以看出,两状态Markov 链的边带电流谐波峰值下降到了-40dB /Hz 以下,三状态Markov 链的优化效果更为明显,抑制效果甚至达到了-50dB /Hz 以下;此外,相对于常规RPWM,多状态Markov 链优化算法能够较好地改善边带谐波成分的频谱分布㊂图13㊀多状态Markov 边带电流谐波抑制效果Fig.13㊀Suppression of sideband current harmonic components with muti-states Markov-chain图14和图15分别给出了基于最优随机参数的多状态Markov 链边带声振响应,抑制效果得到了进一步提升,两状态Markov 链的边带声振响应峰值分别被抑制到了0.025m /s 2和40dBA 以下,三状态Markov 链的边带声振响应峰值得到进一步抑制,分别为0.02m /s 2和35dBA 以下;相比于常规RPWM,引入多状态Markov 链的频谱分布更加均匀且对称㊂图16给出了不同转速工况的多状态Markov 链全频带声学响应对比结果,相比于RPWM 实验结果,边带噪声频谱分布更加均匀;三状态Markov 的优化效果最优,高频段噪声幅值进一步下降,进一步验证了多状态Markov 链策略的有效性㊂611电㊀机㊀与㊀控㊀制㊀学㊀报㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀第27卷㊀图14㊀两状态Markov 边带声振响应Fig.14㊀Sideband vibro-acoustics with two-statesMarkov-chain图15㊀三状态Markov 边带声振响应Fig.15㊀Sideband vibro-acoustics with three-statesMarkov-chain图16㊀不同转速工况的多状态Markov 链全频带声学响应结果Fig.16㊀Acoustic responses of multi-state Markov chainin full frequency band under with different speed conditions4㊀结㊀论为了抑制SVPWM 所引入的边带电流谐波及声振响应,改善RPWM 边带谐波成分的抑制效果,本文提出了基于多状态Markov 链的随机脉宽调制方法,并利用PSO 算法对其关键随机参数进行寻优㊂结合样机实验验证结果,所得出的结论如下:1)RPWM 可以有效抑制边带电流谐波,在声振响应中也呈现出明显的优化效果,然而,由于RP-WM 的随机数生成性能较差,优化后的边带频谱分布偏小于中心频率㊂2)多状态Markov 链随机脉宽调制可以有效优化随机数性能,进一步优化了边带电流谐波及声振响应,并且边带频谱的分布呈现出较好的对称性与均匀性;最优噪声抑制效果可以达到15dBA 以上㊂3)所提出的PSO 优化算法可以有效对随机增益R 与转移概率P 进行寻优;此外,本文所研究的内容可以面向PWM 供电的电驱动系统,为后续永711第9期陈㊀勇等:基于Markov 链随机脉宽调制的永磁同步电机高频边带谐波与声振响应抑制。
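As an illustrative sketch of the multi-state Markov-chain random carrier-frequency generation described in the paper above, the Python fragment below simulates the three-state scheme: the band [f_c − R, f_c + R] is split by the modulation coefficient k into three sub-bands, and the zero-diagonal transition matrix follows the form of Eq. (10). The values f_c = 8 kHz, R = 2000 Hz and p1 = 0.68 follow the paper's optimized parameters, while k = 0.2, the function name and uniform sampling inside the selected sub-band are assumptions of this sketch, not the authors' implementation.

```python
import random

def three_state_markov_carrier(fc=8000.0, R=2000.0, k=0.2, p1=0.68,
                               n=1000, seed=0):
    """Randomized carrier-frequency sequence from a three-state Markov chain.

    The band [fc - R, fc + R] is split into three sub-bands by the modulation
    coefficient k; the transition matrix has a zero diagonal as in Eq. (10),
    so the chain always changes state. Uniform sampling inside the selected
    sub-band is an assumption of this sketch.
    """
    rng = random.Random(seed)
    p2 = 1.0 - p1
    # Rows: current state; columns: next state (each row sums to 1).
    P = [[0.0, p1, p2],
         [p1, 0.0, p2],
         [p2, p1, 0.0]]
    bands = [(fc - R, fc - k * R),        # state 1: lower sub-band
             (fc - k * R, fc + k * R),    # state 2: middle sub-band
             (fc + k * R, fc + R)]        # state 3: upper sub-band
    state = rng.randrange(3)
    freqs = []
    for _ in range(n):
        # Draw the next state from the current row of the transition matrix.
        u, acc = rng.random(), 0.0
        for j, pij in enumerate(P[state]):
            acc += pij
            if u < acc:
                state = j
                break
        lo, hi = bands[state]
        freqs.append(rng.uniform(lo, hi))
    return freqs

freqs = three_state_markov_carrier()
print(min(freqs), sum(freqs) / len(freqs), max(freqs))
```

The zero diagonal forces the carrier frequency to leave its current sub-band at every switching period, which is what prevents the runs of consecutive values above or below the central frequency that conventional pseudo-random RPWM exhibits.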
基于可变遗忘因子递推最小二乘法的无传感器机器人末端接触力估计李龙;陈智;汪博文;田应仲【摘要】随着人机共融生产模式的推广,人与机器人需要协作完成工作任务,在人机协作的过程中需要估计机器人末端接触力.传统的机器人末端接触力估计主要是基于外部传感器来实现的,这不仅会使机器人本体成本增加,还会使机器人的控制系统变得更加复杂.针对这个问题,研究了无外部传感器的机器人末端接触力估计算法.首先设计数字低通滤波器对机器人动力学方程进行滤波,建立不显含加速度信号的机器人动力学模型,然后将机器人末端接触力看作时变参数,采用递推最小二乘法估计末端接触力,通过动态的改变遗忘因子使算法具有更好的响应特性,达到了较好的效果.最后通过MATLAB和ADAMS联合仿真验证了算法的有效性.%With the development of collaborative production, human and robot need to co-operate in work, so it is necessary to estimate the robot end-effector contact force. However, conventional contact force estimation methods are based on additional sensors, which are not only enhance cost but also increase the complexity of system. To solve this problem, we propose a contact force estimation method without any extra sensors. First, filtering the robot dynamics equation and establish the dynamics equation without acceleration, then we consider the contact force as time-varying parameters and estimate the contact force based on recursive least-squares algorithm, by changing the forgetting factor adaptively, the algorithm achieve good response characteristics. Finally, the results of simulation on MATLAB and ADAMS verify that the algorithm is valid.【期刊名称】《机械设计与制造》【年(卷),期】2017(000)0z1【总页数】5页(P209-212,216)【关键词】接触力估计;最小二乘法;可变遗忘因子【作者】李龙;陈智;汪博文;田应仲【作者单位】上海大学机电工程与自动化学院,上海 200072;上海大学机电工程与自动化学院,上海 200072;上海大学机电工程与自动化学院,上海 200072;上海大学机电工程与自动化学院,上海 200072【正文语种】中文【中图分类】TH161 引言目前越来越多的生产领域要求工人和机器人共享工作区协同进行工作。
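The recursive least-squares estimator with a variable forgetting factor outlined in the abstract above can be sketched as follows. This is a generic scalar RLS tracking a time-varying parameter (a stand-in for the end-effector contact force term), not the paper's filtered dynamics model; the error-driven exponential adaptation rule for the forgetting factor and all numeric values are assumptions of this sketch.

```python
import math
import random

def rls_variable_forgetting(phis, ys, lam_min=0.90, lam_max=0.999, rho=50.0):
    """Scalar recursive least squares with an error-driven forgetting factor.

    lam stays near lam_max while the prediction error is small (smooth
    tracking) and drops toward lam_min when the error spikes, which speeds
    up re-convergence after a parameter jump. The exponential adaptation
    rule is an assumption of this sketch.
    """
    theta, P = 0.0, 1000.0                           # initial estimate, covariance
    estimates = []
    for phi, y in zip(phis, ys):
        e = y - theta * phi                          # a-priori prediction error
        lam = lam_min + (lam_max - lam_min) * math.exp(-rho * e * e)
        K = P * phi / (lam + phi * P * phi)          # RLS gain
        theta += K * e
        P = (P - K * phi * P) / lam                  # covariance update
        estimates.append(theta)
    return estimates

# Synthetic data: the tracked parameter jumps from 2.0 to 5.0 at t = 200.
rng = random.Random(1)
phis = [1.0 + 0.5 * math.sin(0.1 * t) for t in range(400)]
truth = [2.0 if t < 200 else 5.0 for t in range(400)]
ys = [th * ph + 0.01 * rng.gauss(0.0, 1.0) for th, ph in zip(truth, phis)]
est = rls_variable_forgetting(phis, ys)
print(est[190], est[399])
```

A small forgetting factor discounts old data quickly (fast response, larger variance), while one near 1 averages over a long window; tying it to the prediction error, as in the paper's idea, gives both smooth steady-state estimates and fast recovery when the contact force changes abruptly.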
miR-132-3p
面神经是由运动纤维、感觉纤维和交感纤维组成的混合神经,可形成颞、颧、颊、下颌缘以及颈5个主要分支,这些分支负责许多肌肉的神经支配,并与肌肉共同作用于面部表情的复杂功能[1]。
面神经损伤(FNI)以及由其导致的面部瘫痪会影响患者的正常沟通和情绪表达[2]。
因此,研究FNI及其再生机制是临床治疗FNI的关键。
放射性I-125粒子是人工合成同位素,目前I-125粒子植入在临床上用于肿瘤治疗已显示出较好的疗效[3-5]。
相比较于传统的体外放疗,I-125粒子的近距离放射治疗具有易于传送、可长时间持续对肿瘤部位低剂量放疗等优点[6]。
施万细胞是周围神经系统的髓鞘神经胶质细胞,在周围神经损伤后修复中发挥关键作用[7]。
有研究报道I-125辐射可引起FNI损伤[8],但目前I-125对施万

MiR-132-3p negatively regulates CAMTA1 to promote Schwann cell proliferation and migration and alleviates I-125 seeds-induced exacerbation of facial nerve injury in rats
ZHU Jin1, OUYANG Xin2, LIU Yu1, QIAN Yemei1, XIA Bin1, SHI Yanan1, YU Lifu1
1Department of Maxillofacial Surgery, Stomatological Hospital of Kunming Medical University, Kunming 650106, China; 2Stomatology Center, First People's Hospital of Yunnan Province, Kunming 650032, China

摘要:目的 探讨miR-132-3p通过钙调素结合转录因子1(CAMTA1)对I-125粒子处理的面神经损伤(FNI)大鼠中施万细胞的调控作用。
CT影像组学结合临床特征预测活动性耐药肺结核的模型构建与验证
第41卷第10期自动化学报Vol.41,No.10 2015年10月ACTA AUTOMATICA SINICA October,2015直线特征的交互矩阵求取徐德1卢金燕1摘要直线特征在视觉跟踪、视觉伺服中具有重要作用,但目前的直线交互矩阵的求取受到制约,需要已知含有直线的平面在摄像机坐标系中的方程参数.为摆脱含有直线的平面参数的约束,本文利用两点的极坐标推导出直线的交互矩阵,并给出直线交互矩阵求取方法.经分析得知,对于与摄像机光轴接近垂直的直线,其在成像平面上的角度变化主要受摄像机姿态变化的影响,对摄像机的位置变化不敏感.对于与摄像机光轴平行的直线,其在成像平面上的角度变化受摄像机旋转以及垂直于光轴平移的影响较大.实验结果验证了本文方法的有效性.关键词交互矩阵,直线特征,视觉伺服,视觉跟踪,视觉控制引用格式徐德,卢金燕.直线特征的交互矩阵求取.自动化学报,2015,41(10):1762−1771DOI10.16383/j.aas.2015.c150097Determination for Interactive Matrix of Line FeatureXU De1LU Jin-Yan1Abstract Line features are very important for visual tracking and visual servoing.However,determination of the interactive matrix of line feature is possible only if the parameters of the plane containing the line are known in the camera s coordinates.In this paper,the interactive matrix of line feature is derived with two points polar coordinates in order to avoid the parameters requirement for the plane containing the line.Then determination for interactive matrix of line feature is presented.Analysis of the interactive matrix shows that the angle s variation of line feature is mainly influenced by the orientation s variation of the camera and insensitive to translations of camera if the line is almost perpendicular to the camera s optical axis.The angle s variation of line feature for the line parallel to the optical axis is apparently influenced by the camera s translations vertical to the optical axis and rotations.Experimental results verify the effectiveness of the proposed method.Key words Interactive matrix,line feature,visual servoing,visual tracking,visual controlCitation Xu De,Lu Jin-Yan.Determination for interactive matrix of line feature.Acta Automatica Sinica,2015, 
41(10):1762−1771点、边缘、直线特征是机器人视觉跟踪、视觉伺服中的常用特征[1−2].文献[3]和文献[4]分别采用Eye-in-hand和Eye-to-hand视觉系统,利用点特征及其交互矩阵,在对点特征的深度进行估计的基础上,在机器人的关节空间进行视觉伺服控制.与点特征相比,边缘特征具有形状约束,在特征提取时可以具有更好的鲁棒性,在增强现实和视觉伺服等应用中受到重视[5−9].例如,文献[5]提出了一种基于目标模型的位姿估计方法,利用图像中目标的纹理和边缘的点特征,结合点的交互矩阵估计目标的位姿变化,实现视觉跟踪.文献[6]利用航母上的边缘特征,通过与航母模型匹配估计航母与飞机之间的姿收稿日期2015-03-02录用日期2015-07-07Manuscript received March2,2015;accepted July7,2015国家自然科学基金(61227804,61421004)资助Supported by National Natural Science Foundation of China (61227804,61421004)本文责任编委徐昕Recommended by Associate Editor XU Xin1.中国科学院自动化研究所精密感知与控制研究中心北京100190 1.Research Center of Precision Sensing and Control,Institute of Automation,Chinese Academy of Sciences,Beijing100190态.文献[7]跟踪目标边缘点的法向量,利用目标的CAD模型中的可见边缘与图像中的目标边缘进行匹配,将运动估计问题转化为几何问题,使得视觉跟踪问题成为优化问题.文献[8]将几何信息和颜色信息相结合,对基于几何信息和基于颜色边缘的特征分别赋以不同的权重,利用交互矩阵将基于几何信息和基于颜色边缘的估计结合在一起.基于几何信息部分,结合目标的CAD模型,利用边缘点到模型可见边缘投影直线的距离估计位姿偏差.基于颜色边缘部分,利用目标与背景的颜色分界曲线上的点沿法线方向到模型边缘的距离估计位姿偏差.文献[9]利用点特征和点到边缘的距离特征,估计摄像机的位姿,实现目标跟踪.上述基于模型的方法需要已知目标的模型,其应用受到较大的限制.与边缘相比,直线具有更强的约束,直线的提取更加鲁棒和准确.因此,基于直线特征的视觉伺服方法对于提高伺服控制的鲁棒性具有很大的潜力,许多研究人员致力于基于直线的视觉跟踪或视觉伺服[10−17].Espiau 等对点、直线、椭圆等分别推导了交互矩阵,其中直10期徐德等:直线特征的交互矩阵求取1763线的交互矩阵需要用到含有直线的平面在摄像机坐标系中的方程参数[10].Comport 等基于文献[10]的直线交互矩阵,推导了点到直线距离特征的交互矩阵,用于在增强现实中对目标进行跟踪[11−12].Mills 等假设电力线为一组平行线,利用直线的交互矩阵与直线特征,估计固定翼无人机与平行线之间的位姿,用于实现固定翼无人机的电力线巡线[13].由于含有直线的平面在摄像机坐标系中的方程不易获得,所以上述直线交互矩阵的应用受到很大制约.为了避免受到上述直线交互矩阵求取的制约,研究人员提出了一些利用直线特征的新方法.例如,Xie 等提出了平行直线的矩特征并给出了其交互矩阵,用于实现旋翼直升机对电力线的巡线[14].Coutard 等利用平行直线的消失点坐标与角度作为特征,推导出了平行线的角度交互矩阵,利用上述特征和交互矩阵估计飞机与航母飞行甲板之间的相对位姿[15].上述基于平行线的方法是基于直线特征视觉跟踪的特例,不具有普遍性.文献[16]采用立体视觉,利用直线特征实现了姿态与位置的解耦控制.Liu 等采用Eye-in-hand 视觉系统,利用两点之间的距离、两条直线的角度、区域中心、面积作为特征,在进行深度估计的基础上,在机器人的关节空间实现视觉伺服控制[17−19].从这些利用直线特征的新方法可以发现,这些方法增加了对直线的新约束,其普适性受到影响.本文脱离含有直线的平面在摄像机坐标系中的方程参数的约束,利用两个点的极坐标推导直线的交互矩阵,并给出直线交互矩阵的求取方法.1特征的交互矩阵1.1特征点的交互矩阵点的交互矩阵描述笛卡儿空间位姿变化与图像空间特征点变化的关系.假设摄像机镜头畸变很小,可以忽略不计,摄像机的内参数采用小孔模型.对于摄像机坐标系中的点(x c ,y c ,z c ),焦距归一化成像平面上的成像点坐标为(x 1c ,y 1c ,1).x 1c=x c z cy 1c =y c z c(1)将式(1)对时间求导数,改写为矩阵形式˙x 1c ˙y 1c= 1z 
c 0−x 1c z c 01z c −y 1c z c˙x c ˙y c ˙z c(2)式(2)为特征点在笛卡儿空间的平移运动速度与投影到成像平面空间的运动速度之间的关系.摄像机的运动会导致3-D 点在摄像机坐标系中的运动.3-D 点在摄像机坐标系中的运动速度与摄像机在笛卡儿空间运动速度之间的关系为˙x c=−υcax −ωcay z c +ωcaz y c ˙y c =−υcay −ωcaz x c +ωcax z c ˙z c =−υcaz −ωcax y c +ωcay x c(3)式中,X c =[x c ,y c ,z c ]T 是3-D 点的位置向量,υca=[υcax ,υcay ,υcaz ]T 是摄像机的线速度向量,ωca =[ωcax ,ωcay ,v caz ]T 是摄像机的角速度向量.将式(3)代入式(2),并应用式(1),有˙x 1c ˙y 1c =L p υcaωca (4)其中,L p 为点特征的交互矩阵L p =−1z c 0x 1c z c x 1c y 1c−(1+x 21c )y 1c 0−1z cy 1c z c(1+y 21c )−x 1c y 1c−x 1c(5)在矩阵L p 中,计算x 1c 和y 1c 时,涉及摄像机的内参数.z c 的值是该点相对于摄像机坐标系的深度.因此,采用如上形式交互矩阵的任何控制方案必须估计或近似给出z c 的值.1.2特征直线的交互矩阵文献[10]和文献[11]给出的特征直线交互矩阵见式(6): ˙ρ˙θ = λρcos θλρsin θ−λρρ(1+ρ2)sin θλθcos θλθsin θ−λθρ−ρcos θ−(1+ρ2)cos θ0−ρsin θ−1υcaωca(6)式中,θ和ρ为直线在成像平面内的极坐标参数,见图1;λθ和λρ为系数:λθ=a 2sin θ−b 2cos θd 2λρ=a 2ρcos θ+b 2ρsin θ+c 2d 2(7)式中,a 2,b 2,c 2和d 2是含有直线的平面在摄像机坐标系中的方程参数,满足a 2x c +b 2y c +c 2z c +d 2=0,不易获得.1764自动化学报41卷为了摆脱需要已知平面方程参数的制约,下面基于点特征推导直线的交互矩阵.在焦距归一化成像平面的直线可表达为极坐标参数方程形式,即x 1c cos θ+y 1c sin θ=ρ(8)式中,θ和ρ为直线在成像平面内的极坐标参数,见图1.图1直线在焦距归一化成像平面的投影Fig.1The projection of a line on the imaging plane withnormalized focal length如图1所示,成像平面坐标系建立在焦距归一化成像平面上,其原点O 1c 为摄像机坐标系的Z c 轴与焦距归一化成像平面的交点,其X 1c 轴与摄像机坐标系的X c 轴平行,其Y 1c 轴与摄像机坐标系的Y c 轴平行.对于与Z 1c 轴不平行的特征直线,其在焦距归一化成像平面的投影为l 1c ,原点O 1c 到直线l 1c 的垂线为O 1c P 1c 0.O 1c P 1c 0的长度为直线l 1c 的参数ρ,O 1c P 1c 0与x 1c 轴的夹角为直线l 1c 的参数θ.O 1c 为Z 1c 轴的投影,P 1c 0为特征直线上点P 0的投影.因此,对于与Z 1c 轴不平行的直线,根据特征直线上点P 0以及附近点的变化,可以得到特征直线参数的变化.假设特征直线上点P 0在摄像机坐标系中的坐标为(x c 0,y c 0,z c 0),P 1c 0在摄像机坐标系中的坐标为(x 1c 0,y 1c 0,1).P 1c 0点的极坐标参数ρ0,θ0与x 1c 0,y 1c 0的关系为ρ0= x 21c 0+y 21c 0,θ0=arctan y 1c 0x 1c 0(9)将式(9)对时间求导,有˙ρ0=x 1c 0˙x 1c 0+y 1c 0˙y 1c 0ρ0˙θ0=x 1c 0˙y 1c 0−y 1c 0˙x 1c 0ρ02(10)将式(4)代入式(10),并利用ρ0cos θ代替x 1c 0,利用ρ0sin θ代替y 1c 0,得到式(11).˙ρ0˙θ0=−cos θ0z c 0−sin θ0z c 0ρ0z c 0(1+ρ20)sin θ0sin θ0ρ0z c 0−cos θ0ρ0z c 00cos θ0ρ0−(1+ρ20)cos θ00sin θ0ρ0−1νca ωca(11)在P 1c 0点附近沿投影直线l 1c 对称设定两个点P 1c 1和P 1c 
2,其对应的极坐标参数为ρ1,θ1和ρ2,θ2,使之满足如下关系:θ1=θ+∆θθ2=θ−∆θρ1=ρ2(12)式中,∆θ是角度增量,是一个接近于0的正数.由图1可以发现,当摄像机运动时,如果∆θ接近于0,则ρ0,ρ1和ρ2的变化量比较接近.因此,可以采用ρ0的变化率近似表示直线参数ρ的变化率.即使θ0不变,但ρ1和ρ2的变化可以导致直线方向的变化,即导致参数θ的变化.例如,当θ0不变而ρ1增加ρ2减小时,θ变小.因此,不能采用θ0的变化率代替直线参数θ的变化率.将用极坐标表示的P 1c 1和P 1c 2代入式(8)直线方程,则ρ1cos θ1cos θ+ρ1sin θ1sin θ=ρρ2cos θ2cos θ+ρ2sin θ2sin θ=ρ(13)由式(13),推导出利用P 1c 1和P 1c 2的极坐标表示的直线参数θ:θ=arctan ρ1cos θ1−ρ2cos θ2ρ2sin θ2−ρ1sin θ1(14)将式(14)对时间求导数,然后将式(12)代入求导后的表达式化简,得:˙θ=12ρ1(˙ρ2−˙ρ1)cot ∆θ+12(˙θ1+˙θ2)(15)将式(11)对应于P 1c 1和P 1c 2的极坐标参数变化率代入式(15),得到式(16).˙θ=L θνx L θνy 12z c 2−12z c 1−ρ1cos θ−ρ1sin θ−1υcaωca(16)10期徐德等:直线特征的交互矩阵求取1765式中,z c 1和z c 2是对应于P 1c 1和P 1c 2的z c 坐标.L θυx =12ρ1(cos θ1z c 1−cos θ2z c 2)cot ∆θ+12ρ1(sin θ2z c 2+sin θ1z c 1)L θυy =12ρ1(sin θ1z c 1−sin θ2z c 2)cot ∆θ−12ρ1(cos θ1z c 1+cos θ2z c 2)(17)结合式(11)和式(16),得到直线的交互矩阵,见式(18).对于垂直于摄像机光轴的直线,z c 1=z c 2.将z c 1=z c 2代入式(17),得到L θυx =0,L θυy =0.此时,式(16)改写为式(19).˙ρ˙θ =−cos θz c 0−sin θz c 0ρz c 0(1+ρ2)sin θL θυxL θυy12z c 2−12z c 1−ρ1cos θ−(1+ρ2)cos θ0−ρ1sin θ−1υca ωca(18)式(18)交互矩阵求取涉及的z c 0,z c 1和z c 2,可以随着摄像机的运动进行在线估计.但式(19)交互矩阵与深度无关,而且与平移无关.利用图像中直线参数ρ和θ计算出式(19)交互矩阵后,可以用于姿态控制.˙θ=000−ρcos θ−ρsin θ−1 υca ωca(19)对于平行于摄像机光轴的直线,其在焦距归一化平面上的投影为过原点O 1c 的直线,如图2中的l 1c 所示.此时,直线上任意一点P 1ci 的极坐标的θi 都与直线的参数θ相等,即θ=θi .因直线过原点,故ρ=0.对于平行于摄像机光轴的直线,假设直线与光轴之间的距离为d .对于直线上任意一点P 1ci ,ρi =d/z ci 成立,z ci 是P 1ci 的Z c 轴坐标.将其代入式(11),得到直线参数θ的交互矩阵:图2平行于Z 1c 轴的直线在焦距归一化成像平面的投影(P 1c 1是远离摄像机的点,P 1c 2是靠近摄像机的点)Fig.2The projection of the line parallel with Z 1c axis onthe imaging plane with normalized focal length (P 1c 1is a point away from the camera,P 1c 2is a pointnear to the camera.)˙θ=sin θi d−cos θi dcos θi ρi sin θi ρi−1υca ωca(20)式中,θi ,ρi 为点P 1ci 的极坐标.由式(20)可知,当摄像机只沿X c ,Y c 轴平移运动时,投影直线l 1c 上所有点的θ会发生相同变化,表现为l 1c 绕Z 1c 轴旋转.例如,当摄像机沿Y c 轴正向平移运动时,图2中的投影直线l 1c 变为投影直线l 1ct ,点P 1ci 的极坐标ρi 变小,角度θi 变小.当摄像机只沿Z c 轴平移运动时,投影直线参数θ不变.当摄像机只绕X c 或Y c 轴旋转运动时,因投影直线上不同点的ρi 不同,其对应的θi 以不同的速度发生变化.如图2所示,当摄像机只绕X c 轴正向旋转运动时,直线上靠近摄像机的点P 1c 2的ρ2较大,其参数θ2以较低的速度变大;直线上远离摄像机的点P 1c 
1的ρ1较小,其参数θ1以较高的速度变大,投影直线l 1c 变为投影直线l 1cr .当摄像机只绕Z c 轴旋转运动时,投影直线l 1c 绕Z 1c 轴旋转,其旋转方向与摄像机的旋转方向相反,其旋转速度与摄像机的旋转速度相同.平行于摄像机光轴的直线,一旦摄像机绕X c 或Y c 轴旋转运动,将变成不平行于摄像机光轴的直线.然后,其交互矩阵符合式(18).在直线与摄像机光轴接近垂直时,z c 1≈z c 2≈z c 0,此时式(16)可用式(19)近似,可以近似认为直线参数θ的变化只由摄像机的旋转运动产生,从而实现旋转运动独立于平移运动的测量.因此,合理选择目标上的直线特征,有望在基于图像的视觉伺服中实现姿态调整的分离.1766自动化学报41卷在应用中,应充分利用先验知识,对直线特征是否垂直光轴或平行光轴做出判断.此外,如图2所示,平行于光轴的直线,其一个端点处于成像坐标系原点.2实验与结果基于Motoman 机器人实验平台,针对直线与摄像机光轴接近垂直和不垂直的情况,进行了式(19)所示直线角度交互矩阵的验证实验.一方面,利用机器人的运动量和角度交互矩阵计算直线的角度变化量,与视觉检测出的直线角度变化量进行对比.另一方面,利用视觉检测出的直线角度变化量和角度交互矩阵计算摄像机的旋转运动量,与摄像机的实际旋转运动量进行对比.实验用Motoman 机器人的重复定位精度为0.07mm,平移运动精度为0.1mm,旋转运动精度为0.01◦.首先,对摄像机的内外参数进行了预先标定,结果如下:内参数矩阵:1898.200799.6501908.09627.19001外参数矩阵(位置单位mm):−0.8707−0.4836−0.0325189.74690.4922−0.8699−0.0093345.3072−0.0264−0.03500.999130.88560001畸变系数: −0.0951070.1900430.0010940.0011512.1实验1实验1为直线近似垂直于摄像机光轴时的验证实验.调整摄像机姿态使摄像机光轴与地板近似垂直,采集目标图像.以此时的摄像机位姿作为基准位姿,控制摄像机分别沿X c ,Y c ,Z c 轴做50mm 平移运动和绕X c ,Y c ,Z c 轴做3◦旋转运动,采集相应的图像.利用Canny 算子提取出地板砖的边缘点,再利用Hough 变换提取出特征直线.共采集了13幅图像,图3给出了获得的部分图像以及提取出的直线特征.利用直线上任意两点可以计算其参数ρ和θ.为方便计算,实验中利用4条直线的交点分别计算其在焦距归一化成像平面的坐标,如式(21)所示.x 1ci y 1ci 1 = k x 0u 00k y v 0001 −1 u i v i 1(21)式中,(x 1ci ,y 1ci )是第i 个交点在在焦距归一化成像平面上的坐标,(u i ,v i )是第i 个交点的图像坐标,(u 0,v 0)是主点的图像坐标,k x 和k y 是成像平面到图像平面的放大系数.(a)初始位姿(a)Image at initial pose(b)绕X c 旋转(b)Rotation around X c(c)绕Y c 旋转(c)Rotation around Y c(d)绕Z c 旋转(d)Rotation around Z c(e)沿X c 平移(e)Translation along X c(f)沿Y c 平移(f)Translation along Y c图3摄像机运动后的图像与直线特征Fig.3The images and line features after thecamera moves获得4个交点在焦距归一化成像平面的坐标后,分别利用相邻两个点求取一条直线的方程,得到在焦距归一化成像平面上的4条直线的方程.对于焦10期徐德等:直线特征的交互矩阵求取1767距归一化成像平面上的第j条直线,求取过成像坐标系原点的垂线方程,得到第j条直线与其垂线的交点即垂点的坐标.将垂点的坐标转化为极坐标,得到第j条直线的参数ρj和θj.表1给出了在摄像机运动后4条直线在焦距归一化成像平面的直线的参数ρj和θj.摄像机运动后直线的参数θji减去初始状态下直线的参数ρj0,得到参数θj的变化量作为实际值.利用摄像机的运动量,根据式(19)计算出的参数θj的变化量作为计算值.表2和图4给出了4条直线的角度参数变化量.由表2和图4可以发现,式(19)计算出的结果与实际值吻合.以直线角度的变化量作为已知量,结合直线的角度交互矩阵,可以估计出摄像机的旋转运动.由式(19),得:∆θ1i∆θ2i∆θ3i∆θ4i=Lθ∆θxi∆θyi∆θzi(22)表1直线的参数ρ和θTable1Lines 
parametersρandθ序号直线1直线2直线3直线4运动ρ(mm)θ(◦)ρ(mm)θ(◦)ρ(mm)θ(◦)ρ(mm)θ(◦)00.1735−89.920.21450.540.213190.380.1754−179.95初始位置10.1203−90.020.2125−0.190.268890.340.1773−179.50绕X c旋转3◦20.2295−90.110.2128 1.030.158390.260.1759179.33绕X c旋转−3◦30.1744−89.550.15750.400.212589.670.2326179.99绕Y c旋转3◦40.1740−90.470.26870.520.212690.940.1229179.98绕Y c旋转−3◦50.1729−93.010.2128 2.640.214287.320.1768176.97绕Z c旋转3◦60.1757−87.020.2123 3.430.211793.220.1774177.04绕Z c旋转−3◦70.1745−90.120.17990.350.212990.250.2088179.96沿X c平移50mm 80.1744−90.0450.24550.540.213090.310.1450−179.99沿X c平移−50mm 90.2069−90.000.21240.440.181090.300.1772179.99沿Y c平移50mm 100.1419−89.980.21280.380.245190.300.1773179.95沿Y c平移−50mm 110.1805−90.040.21890.430.220090.340.1838179.93沿Z c平移50mm 120.1691−90.050.20670.440.206190.290.1708−179.99沿Z c平移−50mm表2直线参数θ的变化量(◦)Table2The variations of line parameterθ(◦)序号计算值实际值偏差运动∆θ1∆θ2∆θ3∆θ4∆θ1∆θ2∆θ3∆θ4∆θ1∆θ2∆θ3∆θ41−0.00−0.640.000.53−0.10−0.72−0.040.450.010.080.040.07绕X c旋转3◦20.000.64−0.00−0.53−0.190.49−0.12−0.720.190.150.110.19绕X c旋转−3◦30.52−0.01−0.640.000.37−0.14−0.71−0.060.150.130.070.06绕Y c旋转3◦4−0.520.010.64−0.00−0.54−0.020.56−0.060.020.030.080.06绕Y c旋转−3◦5−3.00−3.00−3.00−3.00−3.09−3.17−3.06−3.080.090.170.060.08绕Z c旋转3◦6 3.00 3.00 3.00 3.00 2.90 2.89 2.84 2.910.100.110.160.09绕Z c旋转−3◦70.000.000.000.00−0.20−0.19−0.12−0.090.200.190.120.09沿X c平移50mm 80.000.000.000.00−0.120.00−0.07−0.040.120.000.070.04沿X c平移−50mm 90.000.000.000.00−0.07−0.10−0.08−0.060.070.100.080.06沿Y c平移50mm 100.000.000.000.00−0.06−0.16−0.08−0.100.060.160.080.10沿Y c平移−50mm 110.000.000.000.00−0.11−0.11−0.04−0.120.110.110.040.12沿Z c平移50mm 120.000.000.000.00−0.13−0.10−0.09−0.030.130.100.090.03沿Z c平移−50mm1768自动化学报41卷(a)直线1(a)Line1(b)直线2(b)Line2(c)直线3(c)Line3(d)直线4(d)Line4图44条直线的角度变化Fig.4The angle variations of the four lines其中,Lθ=−ρ1i cosθ1iρ1i sinθ1i1ρ2i cosθ2iρ2i sinθ2i1ρ3i cosθ3iρ3i sinθ3i1ρ4i cosθ4iρ4i 
sinθ4i1式中,ρji和θji是第j条直线第i次运动前的参数,∆θji是第j条直线第i次运动后的变化量,∆θxi,∆θyi,∆θzi是摄像机的第i次旋转运动量,Lθ是4条直线的角度交互矩阵.对式(22)利用最小二乘法求解,得到摄像机的旋转运动量:∆θxi∆θyi∆θzi=(LθT Lθ)−1LθT∆θ1i∆θ2i∆θ3i∆θ4i(23)将表1中初始位置时的ρji和θji代入式(22),计算出Lθ.Lθ=−0.00020.1735−1−0.2145−0.0020−10.0014−0.2131−10.17540.0002−1将Lθ和表2中θ1i∼θ4i的实际值代入式(23),10期徐德等:直线特征的交互矩阵求取1769分别计算出摄像机6次旋转运动的运动量.以式(23)计算出的摄像机的旋转运动量作为估计值,以机器人带动摄像机的旋转运动量作为实际值,得到表3所示的摄像机旋转运动量.由表3可见,摄像机旋转运动的估计值与实际值的最大误差为0.26◦,估计值与实际值吻合.上述实验结果说明,当摄像机光轴与目标直线接近垂直时,直线参数θ的变化只由摄像机的旋转运动产生,摄像机的平移运动对直线参数θ基本不产生影响.2.2实验2实验2为直线与摄像机光轴不垂直时的验证实验.在摄像机光轴与目标垂直的基础上,绕摄像机X c轴旋转15◦,再绕Y c轴旋转−15◦,以此时的位姿作为摄像机的初始位姿.然后,分别使得摄像机绕X c轴旋转5◦,绕Y c轴旋转−5◦,绕Z c轴旋转5◦,采集相应的地面图像.利用Canny算子提取出地板砖的边缘点,再利用Hough变换提取出特征直线.采集的图像及提取出的直线特征见图5.表4给出了在摄像机运动后4条直线在焦距归一化成像平面的直线的参数ρj和θj.摄像机第i次运动后直线的参数θji减去第i−1次运动后的直线参数θj(i−1),得到参数θj的变化量作为实际值.利用摄像机的运动量,根据式(19)计算出的参数θj的变化量作为计算值.参数θj的变化量见表5.由表5可以发现,式(19)计算出的结果与实际值吻合.将表4中第i次运动前的ρji和θji代入式(22),计算出Lθ.将Lθ和表5中θ1i∼θ4i的实际值代入式(23),计算出的摄像机的旋转运动量作为估计值,表3摄像机旋转运动量Table3The rotation angles of the camera序号估计值(◦)实际值(◦)偏差(◦)∆θx∆θy∆θz∆θx∆θy∆θz∆θx∆θy∆θz 1 3.02−0.150.07 3.000.000.000.02−0.150.08 2−3.09−0.210.16−3.000.000.00−0.09−0.210.16 30.18 2.790.100.00 3.000.000.18−0.210.10 4−0.10−2.850.050.00−3.000.00−0.100.150.05 50.26−0.10 3.100.000.00 3.000.26−0.100.10 60.060.16−2.890.000.00−3.000.060.160.11表4旋转运动前后直线的参数ρ和θTable4The lines parametersρandθbefore and after camera rotation序号直线1直线2直线3直线4运动ρ(mm)θ(◦)ρ(mm)θ(◦)ρ(mm)θ(◦)ρ(mm)θ(◦)00.2326−97.650.0845−1.340.104187.670.2477−176.59初始位置10.1427−97.360.0817−1.690.193787.740.2417−175.22绕X c旋转5◦20.1305−98.050.1702−1.730.196788.660.1515−175.28绕Y c旋转−5◦30.1288−103.040.1689−6.790.198783.710.1525179.70绕Z c旋转5◦表5旋转运动后直线参数θ的变化量Table5The variations of line parameterθafter camera rotation序号计算值(◦)实际值(◦)偏差(◦)运动∆θ1∆θ2∆θ3∆θ4∆θ1∆θ2∆θ3∆θ4∆θ1∆θ2∆θ3∆θ410.16−0.42−0.02 1.240.29−0.350.07 1.37−0.14−0.07−0.09−0.13绕X c旋转5◦2−0.65−0.030.98−0.06−0.69−0.050.92−0.060.050.020.060.00绕Y c旋转−5◦3−5.00−5.00−5.00−5.00−4.99−5.06−4.95−5.020.010.06−0.050.02绕Z 
c旋转5◦1770自动化学报41卷表6实验2摄像机旋转运动量Table6The rotation angles of the camera in Experiment2序号估计值(◦)实际值(◦)偏差(◦)∆θx∆θy∆θz∆θx∆θy∆θz∆θx∆θy∆θz1 5.150.15−0.09 5.000.000.000.150.15−0.0920.22−4.830.010.00−5.000.000.220.170.0130.14−0.17 5.000.000.00 5.000.14−0.170.00(a)初始位姿(a)Image at initial pose(b)绕X c旋转5◦(b)5◦rotation around X c(c)绕Y c旋转−5◦(c)−5◦rotation around Y c(d)绕Z c旋转5◦(d)5◦rotation around Z c图5摄像机旋转运动后的图像与直线特征Fig.5The images and line features after thecamera rotation以机器人带动摄像机的旋转运动量作为实际值,得到表6所示的摄像机旋转运动量.由表6可以发现,估计值与实际值的最大误差为0.22◦,估计值和实际值相吻合.该实验说明,即使直线不垂直于摄像机光轴,但只要直线与垂直于摄像机光轴的平面之间的夹角不是很大,例如,本实验中夹角已达15◦,利用式(19)仍然可以表示摄像机运动与直线参数θ变化之间的关系.即在直线的特征点P0以及附近点的深度变化不剧烈的情况下,直线参数θ的变化只由摄像机的旋转运动产生,摄像机的平移运动对直线参数θ基本不产生影响.3结论对于与摄像机光轴不平行的直线,利用成像平面内直线上靠近垂点的两个点的极坐标,推导出了直线特征的交互矩阵,给出了此类直线交互矩阵的一般表达式.针对与摄像机光轴近似垂直的直线,给出了角度交互矩阵的近似表达式,并指出此类直线在成像平面上投影的角度变化主要受摄像机姿态变化的影响.对于与摄像机光轴平行的直线,利用一个点的极坐标导出了此类直线的角度交互矩阵,并指出此类直线在成像平面上投影的角度受摄像机旋转以及垂直于光轴平移的影响较大.本文得到的直线特征的交互矩阵,不受含有直线的平面参数的制约.另外,在交互矩阵中的深度信息可以随着摄像机的运动进行在线估计.因此,本文给出的直线特征的交互矩阵在使用上更加方便.References1Lepetit V,Fua P.Monocular model-based3D tracking of rigid objects:a survey.Foundations and Trends in Com-puter Graphics and Vision,2005,1(1):1−892Mooser J,You S,Neumann U,Wang Q.Applying robust structure from motion to markerless augmented reality.In: Proceedings of the2009IEEE Workshop on Applications of Computer Vision.Snowbird,Utah,USA:IEEE,2009.1−8 3Liu Y H,Wang H S.An adaptive controller for image-based visual servoing of robot manipulators.In:Proceedings of the 8th World Congress on Intelligent Control and Automation (WCICA).Jinan,China:IEEE,2010.988−9934Wang H S,Liu,Y H,Chen W D.Visual tracking of robots in uncalibrated environments.Mechatronics,2012,22(4):390−3975Pressigout M,Marchand E.Real-time3D model-based tracking:combining edge and texture information.In:Pro-ceedings of the2006IEEE International Conference on Robotics and Automation.Orlando,USA:IEEE,2006.2726−27316Coutard L,Chaumette F.Visual detection and3D model-based tracking for landing on an 
特征更新的动态图卷积表面损伤点云分割方法
第41卷 第4期吉林大学学报(信息科学版)Vol.41 No.42023年7月Journal of Jilin University (Information Science Edition)July 2023文章编号:1671⁃5896(2023)04⁃0621⁃10特征更新的动态图卷积表面损伤点云分割方法收稿日期:2022⁃09⁃21基金项目:国家自然科学基金资助项目(61573185)作者简介:张闻锐(1998 ),男,江苏扬州人,南京航空航天大学硕士研究生,主要从事点云分割研究,(Tel)86⁃188****8397(E⁃mail)839357306@;王从庆(1960 ),男,南京人,南京航空航天大学教授,博士生导师,主要从事模式识别与智能系统研究,(Tel)86⁃130****6390(E⁃mail)cqwang@㊂张闻锐,王从庆(南京航空航天大学自动化学院,南京210016)摘要:针对金属部件表面损伤点云数据对分割网络局部特征分析能力要求高,局部特征分析能力较弱的传统算法对某些数据集无法达到理想的分割效果问题,选择采用相对损伤体积等特征进行损伤分类,将金属表面损伤分为6类,提出一种包含空间尺度区域信息的三维图注意力特征提取方法㊂将得到的空间尺度区域特征用于特征更新网络模块的设计,基于特征更新模块构建出了一种特征更新的动态图卷积网络(Feature Adaptive Shifting⁃Dynamic Graph Convolutional Neural Networks)用于点云语义分割㊂实验结果表明,该方法有助于更有效地进行点云分割,并提取点云局部特征㊂在金属表面损伤分割上,该方法的精度优于PointNet ++㊁DGCNN(Dynamic Graph Convolutional Neural Networks)等方法,提高了分割结果的精度与有效性㊂关键词:点云分割;动态图卷积;特征更新;损伤分类中图分类号:TP391.41文献标志码:A Cloud Segmentation Method of Surface Damage Point Based on Feature Adaptive Shifting⁃DGCNNZHANG Wenrui,WANG Congqing(School of Automation,Nanjing University of Aeronautics and Astronautics,Nanjing 210016,China)Abstract :The cloud data of metal part surface damage point requires high local feature analysis ability of the segmentation network,and the traditional algorithm with weak local feature analysis ability can not achieve the ideal segmentation effect for the data set.The relative damage volume and other features are selected to classify the metal surface damage,and the damage is divided into six categories.This paper proposes a method to extract the attention feature of 3D map containing spatial scale area information.The obtained spatial scale area feature is used in the design of feature update network module.Based on the feature update module,a feature updated dynamic graph convolution network is constructed for point cloud semantic segmentation.The experimental results show that the proposed method is helpful for more effective point cloud segmentation to extract the local features of point cloud.In metal surface damage 
segmentation,the accuracy of this method is better than pointnet++,DGCNN(Dynamic Graph Convolutional Neural Networks)and other methods,which improves the accuracy and effectiveness of segmentation results.Key words :point cloud segmentation;dynamic graph convolution;feature adaptive shifting;damage classification 0 引 言基于深度学习的图像分割技术在人脸㊁车牌识别和卫星图像分析领域已经趋近成熟,为获取物体更226吉林大学学报(信息科学版)第41卷完整的三维信息,就需要利用三维点云数据进一步完善语义分割㊂三维点云数据具有稀疏性和无序性,其独特的几何特征分布和三维属性使点云语义分割在许多领域的应用都遇到困难㊂如在机器人与计算机视觉领域使用三维点云进行目标检测与跟踪以及重建;在建筑学上使用点云提取与识别建筑物和土地三维几何信息;在自动驾驶方面提供路面交通对象㊁道路㊁地图的采集㊁检测和分割功能㊂2017年,Lawin等[1]将点云投影到多个视图上分割再返回点云,在原始点云上对投影分割结果进行分析,实现对点云的分割㊂最早的体素深度学习网络产生于2015年,由Maturana等[2]创建的VOXNET (Voxel Partition Network)网络结构,建立在三维点云的体素表示(Volumetric Representation)上,从三维体素形状中学习点的分布㊂结合Le等[3]提出的点云网格化表示,出现了类似PointGrid的新型深度网络,集成了点与网格的混合高效化网络,但体素化的点云面对大量点数的点云文件时表现不佳㊂在不规则的点云向规则的投影和体素等过渡态转换过程中,会出现很多空间信息损失㊂为将点云自身的数据特征发挥完善,直接输入点云的基础网络模型被逐渐提出㊂2017年,Qi等[4]利用点云文件的特性,开发了直接针对原始点云进行特征学习的PointNet网络㊂随后Qi等[5]又提出了PointNet++,针对PointNet在表示点与点直接的关联性上做出改进㊂Hu等[6]提出SENET(Squeeze⁃and⁃Excitation Networks)通过校准通道响应,为三维点云深度学习引入通道注意力网络㊂2018年,Li等[7]提出了PointCNN,设计了一种X⁃Conv模块,在不显著增加参数数量的情况下耦合较远距离信息㊂图卷积网络[8](Graph Convolutional Network)是依靠图之间的节点进行信息传递,获得图之间的信息关联的深度神经网络㊂图可以视为顶点和边的集合,使每个点都成为顶点,消耗的运算量是无法估量的,需要采用K临近点计算方式[9]产生的边缘卷积层(EdgeConv)㊂利用中心点与其邻域点作为边特征,提取边特征㊂图卷积网络作为一种点云深度学习的新框架弥补了Pointnet等网络的部分缺陷[10]㊂针对非规律的表面损伤这种特征缺失类点云分割,人们已经利用各种二维图像采集数据与卷积神经网络对风扇叶片㊁建筑和交通工具等进行损伤检测[11],损伤主要类别是裂痕㊁表面漆脱落等㊂但二维图像分割涉及的损伤种类不够充分,可能受物体表面污染㊁光线等因素影响,将凹陷㊁凸起等损伤忽视,或因光照不均匀判断为脱漆㊂笔者提出一种基于特征更新的动态图卷积网络,主要针对三维点云分割,设计了一种新型的特征更新模块㊂利用三维点云独特的空间结构特征,对传统K邻域内权重相近的邻域点采用空间尺度进行区分,并应用于对金属部件表面损伤分割的有用与无用信息混杂的问题研究㊂对邻域点进行空间尺度划分,将注意力权重分组,组内进行特征更新㊂在有效鉴别外邻域干扰特征造成的误差前提下,增大特征提取面以提高局部区域特征有用性㊂1 深度卷积网络计算方法1.1 包含空间尺度区域信息的三维图注意力特征提取方法由迭代最远点采集算法将整片点云分割为n个点集:{M1,M2,M3, ,M n},每个点集包含k个点:{P1, P2,P3, ,P 
k},根据点集内的空间尺度关系,将局部区域划分为不同的空间区域㊂在每个区域内,结合局部特征与空间尺度特征,进一步获得更有区分度的特征信息㊂根据注意力机制,为K邻域内的点分配不同的权重信息,特征信息包括空间区域内点的分布和区域特性㊂将这些特征信息加权计算,得到点集的卷积结果㊂使用空间尺度区域信息的三维图注意力特征提取方式,需要设定合适的K邻域参数K和空间划分层数R㊂如果K太小,则会导致弱分割,因不能完全利用局部特征而影响结果准确性;如果K太大,会增加计算时间与数据量㊂图1为缺损损伤在不同参数K下的分割结果图㊂由图1可知,在K=30或50时,分割结果效果较好,K=30时计算量较小㊂笔者选择K=30作为实验参数㊂在分析确定空间划分层数R之前,简要分析空间层数划分所应对的问题㊂三维点云所具有的稀疏性㊁无序性以及损伤点云自身噪声和边角点多的特性,导致了点云处理中可能出现的共同缺点,即将离群值点云选为邻域内采样点㊂由于损伤表面多为一个面,被分割出的损伤点云应在该面上分布,而噪声点则被分布在整个面的两侧,甚至有部分位于损伤内部㊂由于点云噪声这种立体分布的特征,导致了离群值被选入邻域内作为采样点存在㊂根据采用DGCNN(Dynamic Graph Convolutional Neural Networks)分割网络抽样实验结果,位于切面附近以及损伤内部的离群值点对点云分割结果造成的影响最大,被错误分割为特征点的几率最大,在后续预处理过程中需要对这种噪声点进行优先处理㊂图1 缺损损伤在不同参数K 下的分割结果图Fig.1 Segmentation results of defect damage under different parameters K 基于上述实验结果,在参数K =30情况下,选择空间划分层数R ㊂缺损损伤在不同参数R 下的分割结果如图2所示㊂图2b 的结果与测试集标签分割结果更为相似,更能体现损伤的特征,同时屏蔽了大部分噪声㊂因此,选择R =4作为实验参数㊂图2 缺损损伤在不同参数R 下的分割结果图Fig.2 Segmentation results of defect damage under different parameters R 在一个K 邻域内,邻域点与中心点的空间关系和特征差异最能表现邻域点的权重㊂空间特征系数表示邻域点对中心点所在点集的重要性㊂同时,为更好区分图内邻域点的权重,需要将整个邻域细分㊂以空间尺度进行细分是较为合适的分类方式㊂中心点的K 邻域可视为一个局部空间,将其划分为r 个不同的尺度区域㊂再运算空间注意力机制,为这r 个不同区域的权重系数赋值㊂按照空间尺度多层次划分,不仅没有损失核心的邻域点特征,还能有效抑制无意义的㊁有干扰性的特征㊂从而提高了深度学习网络对点云的局部空间特征的学习能力,降低相邻邻域之间的互相影响㊂空间注意力机制如图3所示,计算步骤如下㊂第1步,计算特征系数e mk ㊂该值表示每个中心点m 的第k 个邻域点对其中心点的权重㊂分别用Δp mk 和Δf mk 表示三维空间关系和局部特征差异,M 表示MLP(Multi⁃Layer Perceptrons)操作,C 表示concat 函数,其中Δp mk =p mk -p m ,Δf mk =M (f mk )-M (f m )㊂将两者合并后输入多层感知机进行计算,得到计算特征系数326第4期张闻锐,等:特征更新的动态图卷积表面损伤点云分割方法图3 空间尺度区域信息注意力特征提取方法示意图Fig.3 Schematic diagram of attention feature extraction method for spatial scale regional information e mk =M [C (Δp mk ‖Δf mk )]㊂(1) 第2步,计算图权重系数a mk ㊂该值表示每个中心点m 的第k 个邻域点对其中心点的权重包含比㊂其中k ∈{1,2,3, ,K },K 表示每个邻域所包含点数㊂需要对特征系数e mk 进行归一化,使用归一化指数函数S (Softmax)得到权重多分类的结果,即计算图权重系数a mk =S (e mk )=exp(e mk )/∑K g =1exp(e mg )㊂(2) 第3步,用空间尺度区域特征s mr 表示中心点m 的第r 个空间尺度区域的特征㊂其中k r ∈{1,2,3, ,K r },K r 表示第r 个空间尺度区域所包含的邻域点数,并在其中加入特征偏置项b r ,避免权重化计算的特征在动态图中累计单面误差指向,空间尺度区域特征s mr =∑K r k r =1[a mk r M (f mk r )]+b r ㊂(3) 在r 个空间尺度区域上进行计算,就可得到点m 
在整个局部区域的全部空间尺度区域特征s m ={s m 1,s m 2,s m 3, ,s mr },其中r ∈{1,2,3, ,R }㊂1.2 基于特征更新的动态图卷积网络动态图卷积网络是一种能直接处理原始三维点云数据输入的深度学习网络㊂其特点是将PointNet 网络中的复合特征转换模块(Feature Transform),改进为由K 邻近点计算(K ⁃Near Neighbor)和多层感知机构成的边缘卷积层[12]㊂边缘卷积层功能强大,其提取的特征不仅包含全局特征,还拥有由中心点与邻域点的空间位置关系构成的局部特征㊂在动态图卷积网络中,每个邻域都视为一个点集㊂增强对其中心点的特征学习能力,就会增强网络整体的效果[13]㊂对一个邻域点集,对中心点贡献最小的有效局部特征的边缘点,可以视为异常噪声点或低权重点,可能会给整体分割带来边缘溢出㊂点云相比二维图像是一种信息稀疏并且噪声含量更大的载体㊂处理一个局域内的噪声点,将其直接剔除或简单采纳会降低特征提取效果,笔者对其进行低权重划分,并进行区域内特征更新,增强抗噪性能,也避免点云信息丢失㊂在空间尺度区域中,在区域T 内有s 个点x 被归为低权重系数组,该点集的空间信息集为P ∈R N s ×3㊂点集的局部特征集为F ∈R N s ×D f [14],其中D f 表示特征的维度空间,N s 表示s 个域内点的集合㊂设p i 以及f i 为点x i 的空间信息和特征信息㊂在点集内,对点x i 进行小范围内的N 邻域搜索,搜索其邻域点㊂则点x i 的邻域点{x i ,1,x i ,2, ,x i ,N }∈N (x i ),其特征集合为{f i ,1,f i ,2, ,f i ,N }∈F ㊂在利用空间尺度进行区域划分后,对空间尺度区域特征s mt 较低的区域进行区域内特征更新,通过聚合函数对权重最低的邻域点在图中的局部特征进行改写㊂已知中心点m ,点x i 的特征f mx i 和空间尺度区域特征s mt ,目的是求出f ′mx i ,即中心点m 的低权重邻域点x i 在进行邻域特征更新后得到的新特征㊂对区域T 内的点x i ,∀x i ,j ∈H (x i ),x i 与其邻域H 内的邻域点的特征相似性域为R (x i ,x i ,j )=S [C (f i ,j )T C (f i ,j )/D o ],(4)其中C 表示由输入至输出维度的一维卷积,D o 表示输出维度值,T 表示转置㊂从而获得更新后的x i 的426吉林大学学报(信息科学版)第41卷特征㊂对R (x i ,x i ,j )进行聚合,并将特征f mx i 维度变换为输出维度f ′mx i =∑[R (x i ,x i ,j )S (s mt f mx i )]㊂(5) 图4为特征更新网络模块示意图,展示了上述特征更新的计算过程㊂图5为特征更新的动态图卷积网络示意图㊂图4 特征更新网络模块示意图Fig.4 Schematic diagram of feature update network module 图5 特征更新的动态图卷积网络示意图Fig.5 Flow chart of dynamic graph convolution network with feature update 动态图卷积网络(DGCNN)利用自创的边缘卷积层模块,逐层进行边卷积[15]㊂其前一层的输出都会动态地产生新的特征空间和局部区域,新一层从前一层学习特征(见图5)㊂在每层的边卷积模块中,笔者在边卷积和池化后加入了空间尺度区域注意力特征,捕捉特定空间区域T 内的邻域点,用于特征更新㊂特征更新会降低局域异常值点对局部特征的污染㊂网络相比传统图卷积神经网络能获得更多的特征信息,并且在面对拥有较多噪声值的点云数据时,具有更好的抗干扰性[16],在对性质不稳定㊁不平滑并含有需采集分割的突出中心的点云数据时,会有更好的抗干扰效果㊂相比于传统预处理方式,其稳定性更强,不会发生将突出部分误分割或漏分割的现象[17]㊂2 实验结果与分析点云分割的精度评估指标主要由两组数据构成[18],即平均交并比和总体准确率㊂平均交并比U (MIoU:Mean Intersection over Union)代表真实值和预测值合集的交并化率的平均值,其计算式为526第4期张闻锐,等:特征更新的动态图卷积表面损伤点云分割方法U =1T +1∑Ta =0p aa ∑Tb =0p ab +∑T b =0p ba -p aa ,(6)其中T 表示类别,a 表示真实值,b 表示预测值,p ab 表示将a 预测为b ㊂总体准确率A (OA:Overall Accuracy)表示所有正确预测点p c 占点云模型总体数量p all 的比,其计算式为A =P c /P all 
,(7)其中U 与A 数值越大,表明点云分割网络越精准,且有U ≤A ㊂2.1 实验准备与数据预处理实验使用Kinect V2,采用Depth Basics⁃WPF 模块拍摄金属部件损伤表面获得深度图,将获得的深度图进行SDK(Software Development Kit)转化,得到pcd 格式的点云数据㊂Kinect V2采集的深度图像分辨率固定为512×424像素,为获得更清晰的数据图像,需尽可能近地采集数据㊂选择0.6~1.2m 作为采集距离范围,从0.6m 开始每次增加0.2m,获得多组采量数据㊂点云中分布着噪声,如果不对点云数据进行过滤会对后续处理产生不利影响㊂根据统计原理对点云中每个点的邻域进行分析,再建立一个特别设立的标准差㊂然后将实际点云的分布与假设的高斯分布进行对比,实际点云中误差超出了标准差的点即被认为是噪声点[19]㊂由于点云数据量庞大,为提高效率,选择采用如下改进方法㊂计算点云中每个点与其首个邻域点的空间距离L 1和与其第k 个邻域点的空间距离L k ㊂比较每个点之间L 1与L k 的差,将其中差值最大的1/K 视为可能噪声点[20]㊂计算可能噪声点到其K 个邻域点的平均值,平均值高出标准差的被视为噪声点,将离群噪声点剔除后完成对点云的滤波㊂2.2 金属表面损伤点云关键信息提取分割方法对点云损伤分割,在制作点云数据训练集时,如果只是单一地将所有损伤进行统一标记,不仅不方便进行结果分析和应用,而且也会降低特征分割的效果㊂为方便分析和控制分割效果,需要使用ArcGIS 将点云模型转化为不规则三角网TIN(Triangulated Irregular Network)㊂为精确地分类损伤,利用图6 不规则三角网模型示意图Fig.6 Schematic diagram of triangulated irregular networkTIN 的表面轮廓性质,获得训练数据损伤点云的损伤内(外)体积,损伤表面轮廓面积等㊂如图6所示㊂选择损伤体积指标分为相对损伤体积V (RDV:Relative Damege Volume)和邻域内相对损伤体积比N (NRDVR:Neighborhood Relative Damege Volume Ratio)㊂计算相对平均深度平面与点云深度网格化平面之间的部分,得出相对损伤体积㊂利用TIN 邻域网格可获取某损伤在邻域内的相对深度占比,有效解决制作测试集时,将因弧度或是形状造成的相对深度判断为损伤的问题㊂两种指标如下:V =∑P d k =1h k /P d -∑P k =1h k /()P S d ,(8)N =P n ∑P d k =1h k S d /P d ∑P n k =1h k S ()n -()1×100%,(9)其中P 表示所有点云数,P d 表示所有被标记为损伤的点云数,P n 表示所有被认定为损伤邻域内的点云数;h k 表示点k 的深度值;S d 表示损伤平面面积,S n 表示损伤邻域平面面积㊂在获取TIN 标准包络网视图后,可以更加清晰地描绘损伤情况,同时有助于量化损伤严重程度㊂笔者将损伤分为6种类型,并利用计算得出的TIN 指标进行损伤分类㊂同时,根据损伤部分体积与非损伤部分体积的关系,制定指标损伤体积(SDV:Standard Damege Volume)区分损伤类别㊂随机抽选5个测试组共50张图作为样本㊂统计非穿透损伤的RDV 绝对值,其中最大的30%标记为凹陷或凸起,其余626吉林大学学报(信息科学版)第41卷标记为表面损伤,并将样本分类的标准分界值设为SDV㊂在设立以上标准后,对凹陷㊁凸起㊁穿孔㊁表面损伤㊁破损和缺损6种金属表面损伤进行分类,金属表面损伤示意图如图7所示㊂首先,根据损伤是否产生洞穿,将损伤分为两大类㊂非贯通伤包括凹陷㊁凸起和表面损伤,贯通伤包括穿孔㊁破损和缺损㊂在非贯通伤中,凹陷和凸起分别采用相反数的SDV 作为标准,在这之间的被分类为表面损伤㊂贯通伤中,以损伤部分平面面积作为参照,较小的分类为穿孔,较大的分类为破损,而在边缘处因腐蚀㊁碰撞等原因缺角㊁内损的分类为缺损㊂分类参照如表1所示㊂图7 金属表面损伤示意图Fig.7 Schematic diagram of metal surface damage表1 损伤类别分类Tab.1 Damage classification 损伤类别凹陷凸起穿孔表面损伤破损缺损是否形成洞穿××√×√√RDV 绝对值是否达到SDV √√\×\\S d 是否达到标准\\×\√\2.3 实验结果分析为验证改进的图卷积深度神经网络在点云语义分割上的有效性,笔者采用TensorFlow 
神经网络框架进行模型测试㊂为验证深度网络对损伤分割的识别准确率,采集了带有损伤特征的金属部件损伤表面点云,对点云进行预处理㊂对若干金属部件上的多个样本金属面的点云数据进行筛选,删除损伤占比低于5%或高于60%的数据后,划分并装包制作为点云数据集㊂采用CloudCompare 软件对样本金属上的损伤部分进行分类标记,共分为6种如上所述损伤㊂部件损伤的数据集制作参考点云深度学习领域广泛应用的公开数据集ModelNet40part㊂分割数据集包含了多种类型的金属部件损伤数据,这些损伤数据显示在510张总点云图像数据中㊂点云图像种类丰富,由各种包含损伤的金属表面构成,例如金属门,金属蒙皮,机械构件外表面等㊂用ArcGIS 内相关工具将总图进行随机点拆分,根据数据集ModelNet40part 的规格,每个独立的点云数据组含有1024个点,将所有总图拆分为510×128个单元点云㊂将样本分为400个训练集与110个测试集,采用交叉验证方法以保证测试的充分性[20],对多种方法进行评估测试,实验结果由单元点云按原点位置重新组合而成,并带有拆分后对单元点云进行的分割标记㊂分割结果比较如图8所示㊂726第4期张闻锐,等:特征更新的动态图卷积表面损伤点云分割方法图8 分割结果比较图Fig.8 Comparison of segmentation results在部件损伤分割的实验中,将不同网络与笔者网络(FAS⁃DGCNN:Feature Adaptive Shifting⁃Dynamic Graph Convolutional Neural Networks)进行对比㊂除了采用不同的分割网络外,其余实验均采用与改进的图卷积深度神经网络方法相同的实验设置㊂实验结果由单一损伤交并比(IoU:Intersection over Union),平均损伤交并比(MIoU),单一损伤准确率(Accuracy)和总体损伤准确率(OA)进行评价,结果如表2~表4所示㊂将6种不同损伤类别的Accuracy 与IoU 进行对比分析,可得出结论:相比于基准实验网络Pointet++,笔者在OA 和MioU 方面分别在贯通伤和非贯通伤上有10%和20%左右的提升,在整体分割指标上,OA 能达到90.8%㊂对拥有更多点数支撑,含有较多点云特征的非贯通伤,几种点云分割网络整体性能均能达到90%左右的效果㊂而不具有局部特征识别能力的PointNet 在贯通伤上的表现较差,不具备有效的分辨能力,导致分割效果相对于其他损伤较差㊂表2 损伤部件分割准确率性能对比 Tab.2 Performance comparison of segmentation accuracy of damaged parts %实验方法准确率凹陷⁃1凸起⁃2穿孔⁃3表面损伤⁃4破损⁃5缺损⁃6Ponitnet 82.785.073.880.971.670.1Pointnet++88.786.982.783.486.382.9DGCNN 90.488.891.788.788.687.1FAS⁃DGCNN 92.588.892.191.490.188.6826吉林大学学报(信息科学版)第41卷表3 损伤部件分割交并比性能对比 Tab.3 Performance comparison of segmentation intersection ratio of damaged parts %IoU 准确率凹陷⁃1凸起⁃2穿孔⁃3表面损伤⁃4破损⁃5缺损⁃6PonitNet80.582.770.876.667.366.9PointNet++86.384.580.481.184.280.9DGCNN 88.786.589.986.486.284.7FAS⁃DGCNN89.986.590.388.187.385.7表4 损伤分割的整体性能对比分析 出,动态卷积图特征以及有效的邻域特征更新与多尺度注意力给分割网络带来了更优秀的局部邻域分割能力,更加适应表面损伤分割的任务要求㊂3 结 语笔者利用三维点云独特的空间结构特征,将传统K 邻域内权重相近的邻域点采用空间尺度进行区分,并将空间尺度划分运用于邻域内权重分配上,提出了一种能将邻域内噪声点降权筛除的特征更新模块㊂采用此模块的动态图卷积网络在分割上表现出色㊂利用特征更新的动态图卷积网络(FAS⁃DGCNN)能有效实现金属表面损伤的分割㊂与其他网络相比,笔者方法在点云语义分割方面表现出更高的可靠性,可见在包含空间尺度区域信息的注意力和局域点云特征更新下,笔者提出的基于特征更新的动态图卷积网络能发挥更优秀的作用,而且相比缺乏局部特征提取能力的分割网络,其对于点云稀疏㊁特征不明显的非贯通伤有更优的效果㊂参考文献:[1]LAWIN F J,DANELLJAN 
M,TOSTEBERG P,et al.Deep Projective 3D Semantic Segmentation [C]∥InternationalConference on Computer Analysis of Images and Patterns.Ystad,Sweden:Springer,2017:95⁃107.[2]MATURANA D,SCHERER S.VoxNet:A 3D Convolutional Neural Network for Real⁃Time Object Recognition [C]∥Proceedings of IEEE /RSJ International Conference on Intelligent Robots and Systems.Hamburg,Germany:IEEE,2015:922⁃928.[3]LE T,DUAN Y.PointGrid:A Deep Network for 3D Shape Understanding [C]∥2018IEEE /CVF Conference on ComputerVision and Pattern Recognition (CVPR).Salt Lake City,USA:IEEE,2018:9204⁃9214.[4]QI C R,SU H,MO K,et al.PointNet:Deep Learning on Point Sets for 3D Classification and Segmentation [C]∥IEEEConference on Computer Vision and Pattern Recognition (CVPR).Hawaii,USA:IEEE,2017:652⁃660.[5]QI C R,SU H,MO K,et al,PointNet ++:Deep Hierarchical Feature Learning on Point Sets in a Metric Space [C]∥Advances in Neural Information Processing Systems.California,USA:SpringerLink,2017:5099⁃5108.[6]HU J,SHEN L,SUN G,Squeeze⁃and⁃Excitation Networks [C ]∥IEEE Conference on Computer Vision and PatternRecognition.Vancouver,Canada:IEEE,2018:7132⁃7141.[7]LI Y,BU R,SUN M,et al.PointCNN:Convolution on X⁃Transformed Points [C]∥Advances in Neural InformationProcessing Systems.Montreal,Canada:NeurIPS,2018:820⁃830.[8]ANH VIET PHAN,MINH LE NGUYEN,YEN LAM HOANG NGUYEN,et al.DGCNN:A Convolutional Neural Networkover Large⁃Scale Labeled Graphs [J].Neural Networks,2018,108(10):533⁃543.[9]任伟建,高梦宇,高铭泽,等.基于混合算法的点云配准方法研究[J].吉林大学学报(信息科学版),2019,37(4):408⁃416.926第4期张闻锐,等:特征更新的动态图卷积表面损伤点云分割方法036吉林大学学报(信息科学版)第41卷REN W J,GAO M Y,GAO M Z,et al.Research on Point Cloud Registration Method Based on Hybrid Algorithm[J]. Journal of Jilin University(Information Science Edition),2019,37(4):408⁃416.[10]ZHANG K,HAO M,WANG J,et al.Linked Dynamic Graph CNN:Learning on Point Cloud via Linking Hierarchical Features[EB/OL].[2022⁃03⁃15].https:∥/stamp/stamp.jsp?tp=&arnumber=9665104. 
[11]林少丹,冯晨,陈志德,等.一种高效的车体表面损伤检测分割算法[J].数据采集与处理,2021,36(2):260⁃269. LIN S D,FENG C,CHEN Z D,et al.An Efficient Segmentation Algorithm for Vehicle Body Surface Damage Detection[J]. Journal of Data Acquisition and Processing,2021,36(2):260⁃269.[12]ZHANG L P,ZHANG Y,CHEN Z Z,et al.Splitting and Merging Based Multi⁃Model Fitting for Point Cloud Segmentation [J].Journal of Geodesy and Geoinformation Science,2019,2(2):78⁃79.[13]XING Z Z,ZHAO S F,GUO W,et al.Processing Laser Point Cloud in Fully Mechanized Mining Face Based on DGCNN[J]. ISPRS International Journal of Geo⁃Information,2021,10(7):482⁃482.[14]杨军,党吉圣.基于上下文注意力CNN的三维点云语义分割[J].通信学报,2020,41(7):195⁃203. YANG J,DANG J S.Semantic Segmentation of3D Point Cloud Based on Contextual Attention CNN[J].Journal on Communications,2020,41(7):195⁃203.[15]陈玲,王浩云,肖海鸿,等.利用FL⁃DGCNN模型估测绿萝叶片外部表型参数[J].农业工程学报,2021,37(13): 172⁃179.CHEN L,WANG H Y,XIAO H H,et al.Estimation of External Phenotypic Parameters of Bunting Leaves Using FL⁃DGCNN Model[J].Transactions of the Chinese Society of Agricultural Engineering,2021,37(13):172⁃179.[16]柴玉晶,马杰,刘红.用于点云语义分割的深度图注意力卷积网络[J].激光与光电子学进展,2021,58(12):35⁃60. CHAI Y J,MA J,LIU H.Deep Graph Attention Convolution Network for Point Cloud Semantic Segmentation[J].Laser and Optoelectronics Progress,2021,58(12):35⁃60.[17]张学典,方慧.BTDGCNN:面向三维点云拓扑结构的BallTree动态图卷积神经网络[J].小型微型计算机系统,2021, 42(11):32⁃40.ZHANG X D,FANG H.BTDGCNN:BallTree Dynamic Graph Convolution Neural Network for3D Point Cloud Topology[J]. Journal of Chinese Computer Systems,2021,42(11):32⁃40.[18]张佳颖,赵晓丽,陈正.基于深度学习的点云语义分割综述[J].激光与光电子学,2020,57(4):28⁃46. 
ZHANG J Y,ZHAO X L,CHEN Z.A Survey of Point Cloud Semantic Segmentation Based on Deep Learning[J].Lasers and Photonics,2020,57(4):28⁃46.[19]SUN Y,ZHANG S H,WANG T Q,et al.An Improved Spatial Point Cloud Simplification Algorithm[J].Neural Computing and Applications,2021,34(15):12345⁃12359.[20]高福顺,张鼎林,梁学章.由点云数据生成三角网络曲面的区域增长算法[J].吉林大学学报(理学版),2008,46 (3):413⁃417.GAO F S,ZHANG D L,LIANG X Z.A Region Growing Algorithm for Triangular Network Surface Generation from Point Cloud Data[J].Journal of Jilin University(Science Edition),2008,46(3):413⁃417.(责任编辑:刘俏亮)。
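上文式(1)~式(3)给出的空间尺度区域注意力计算(特征系数、softmax归一化图权重、按尺度区域加权求和)可用如下NumPy示意代码概括。代码中以单层线性变换代替论文中的多层感知机M(·),演示数据、权重初始化与按距离划分尺度区域的方式均为示意性假设,并非论文原始实现:

```python
import numpy as np

rng = np.random.default_rng(0)

K, R = 30, 4            # 邻域点数与空间尺度划分层数(取文中实验参数)
D_in, D_out = 6, 16     # 输入/输出特征维度(示意值)

# 演示数据:中心点 m 及其 K 个邻域点的坐标 p 与特征 f
p_m,  f_m  = rng.normal(size=3), rng.normal(size=D_in)
p_mk, f_mk = rng.normal(size=(K, 3)), rng.normal(size=(K, D_in))

# 以单层线性变换代替多层感知机 M(·),权重随机初始化仅作占位
W_f = rng.normal(size=(D_in, D_out)) * 0.1        # 特征变换 M(·)
W_e = rng.normal(size=(3 + D_out, 1)) * 0.1       # 注意力打分层
b_r = rng.normal(size=(R, D_out)) * 0.01          # 各尺度区域偏置项 b_r

dp = p_mk - p_m                                   # Δp_mk = p_mk - p_m
df = f_mk @ W_f - f_m @ W_f                       # Δf_mk = M(f_mk) - M(f_m)

# 式(1):特征系数 e_mk = M[C(Δp_mk ‖ Δf_mk)]
e = np.concatenate([dp, df], axis=1) @ W_e        # 形状 (K, 1)
# 式(2):softmax 归一化得到图权重系数 a_mk
a = np.exp(e) / np.exp(e).sum()

# 按到中心点的距离把 K 邻域划分为 R 个空间尺度区域
dist = np.linalg.norm(dp, axis=1)
region = (np.argsort(np.argsort(dist)) * R) // K  # 每点所属区域编号 0..R-1
# 式(3):s_mr = Σ_{k∈r} a_mk·M(f_mk) + b_r
s = np.stack([(a[region == r] * (f_mk[region == r] @ W_f)).sum(axis=0) + b_r[r]
              for r in range(R)])
print(s.shape)                                    # (4, 16):R 个空间尺度区域特征
```

实际网络中上述各权重由端到端训练获得,此处仅演示式(1)~(3)的数据流与张量形状变化。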
基于无迹卡尔曼滤波算法的喷涂机器人末端位姿补偿系统
2024年第48卷第1期Journal of Mechanical Transmission基于无迹卡尔曼滤波算法的喷涂机器人末端位姿补偿系统丁江1崔家旭1左启阳2,3陈海伦2,3周伟2何凯2,3(1 广西大学机械工程学院,广西南宁530004)(2 中国科学院深圳先进技术研究院,广东深圳518055)(3 深圳市精密工程重点实验室,广东深圳518055)摘要喷涂建筑机器人在进行建图时无法将地面平整度的信息包含在地图中。
当机器人按照所建地图运行时,由于地面信息的缺失,喷涂建筑机器人工作末端的喷涂夹具无法与墙面平行。
为补偿喷涂夹具与墙面之间的位姿误差,提出一种基于无迹卡尔曼滤波的多传感器融合喷涂夹具位姿补偿方法:以位移测量传感器测得的数据构建夹具位姿状态方程,以陀螺仪测得的数据构建夹具位姿测量方程;利用无迹卡尔曼滤波算法获得夹具姿态的最优估计并将其传递给机器人,从而实现对喷涂夹具位姿误差的补偿。
搭建实验平台验证了误差补偿系统的可行性。
实验结果表明,误差补偿后喷涂夹具与墙面之间的角度误差减小至0.005°。
关键词无迹卡尔曼滤波位移测量传感器陀螺仪误差补偿喷涂夹具End Pose Compensation System of Spraying Robots Based on UnscentedKalman Filter AlgorithmDing Jiang1Cui Jiaxu1Zuo Qiyang2,3Chen Hailun2,3Zhou Wei2He Kai2,3(1 School of Mechanical Engineering, Guangxi University, Nanning 530004, China)(2 Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China)(3 Shenzhen Key Laboratory of Precision Engineering, Shenzhen 518055, China)Abstract The spraying construction robot is unable to include information about ground leveling in the map when it is created, and when the robot operates according to the built map, the spraying clamping fixture at the working end of the spraying construction robot can not be parallel to the wall due to the lack of ground information. In order to compensate the posture error of the spraying fixture relative to the wall, a multi-sensor fusion method is proposed based on the unscented Kalman filter to compensate the posture of the spraying fixture.The state equation of the fixture posture is constructed from the data measured by the displacement measurement sensor, the equation of fixture posture measurement is constructed from the data measured by the gyroscope, and the optimal estimation of the fixture posture is obtained by using the unscented Kalman filter algorithm and transferring them to the robot, so as to achieve the purpose of compensating the posture error of the spraying fixture. Finally, the experimental platform is built to verify the feasibility of the error compensation system. The experimental results show that the positional error between the spraying fixture and the wall after error compensation is reduced to 0.005°.Key words Unscented Kalman filter Displacement measurement sensor Gyroscope Error compensa⁃tion Spraying fixture0 引言随着装修成本的增加和机器人自动化技术的快速发展,使用机器人代替人工自主进行装修工作将成为必然趋势[1]。
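摘要所述"以位移传感器数据构建状态方程、以陀螺仪数据构建量测方程、再用无迹卡尔曼滤波求姿态最优估计"的融合流程,可用下面的一维倾角估计示意代码说明。其中随机游走状态方程、量测模型h(x)=[x, x]以及各噪声参数均为演示性假设,与论文实际系统参数无关:

```python
import numpy as np

def sigma_points(x, P, alpha=0.1, kappa=0.0, beta=2.0):
    """标量状态的无迹变换 sigma 点与权重(标准参数化)。"""
    n = 1
    lam = alpha ** 2 * (n + kappa) - n
    s = np.sqrt((n + lam) * P)
    pts = np.array([x, x + s, x - s])
    wm = np.full(3, 0.5 / (n + lam)); wm[0] = lam / (n + lam)
    wc = wm.copy();                   wc[0] += 1 - alpha ** 2 + beta
    return pts, wm, wc

def ukf_step(x, P, z, Q=1e-8, R=np.diag([4e-4, 1e-4])):
    """一步预测-更新。z = [位移换算出的角度, 陀螺仪角度](弧度)。"""
    # 预测:状态方程取随机游走 x_k = x_{k-1} + w(演示性假设)
    pts, wm, wc = sigma_points(x, P)
    x_pred = wm @ pts
    P_pred = wc @ (pts - x_pred) ** 2 + Q
    # 更新:两路量测均直接观测角度,h(x) = [x, x]
    pts, wm, wc = sigma_points(x_pred, P_pred)
    Z = np.stack([pts, pts], axis=1)              # (3, 2)
    z_pred = wm @ Z
    dZ = Z - z_pred
    S = (wc[:, None, None] * dZ[:, :, None] * dZ[:, None, :]).sum(0) + R
    Pxz = (wc[:, None] * (pts - x_pred)[:, None] * dZ).sum(0)
    K = Pxz @ np.linalg.inv(S)                    # 卡尔曼增益 (2,)
    x_new = x_pred + K @ (z - z_pred)
    P_new = P_pred - K @ S @ K
    return x_new, P_new

# 仿真:真实倾角 0.02 rad;位移计换算角度噪声较大,陀螺仪噪声较小
rng = np.random.default_rng(1)
true_angle, x, P, L = 0.02, 0.0, 1.0, 0.5
for _ in range(200):
    dh = L * np.tan(true_angle) + rng.normal(0, 0.01)  # 两位移计读数高度差
    z = np.array([np.arctan2(dh, L), true_angle + rng.normal(0, 0.01)])
    x, P = ukf_step(x, P, z)
print(round(x, 3))        # 估计值应接近真实倾角 0.02 rad
```

融合后的估计方差远小于任一单路传感器,体现了滤波融合相对单传感器量测的优势。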
计算全息补偿检测自由曲面的高精度位姿测量
第 31 卷第 11 期2023 年 6 月Vol.31 No.11Jun. 2023光学精密工程Optics and Precision Engineering计算全息补偿检测自由曲面的高精度位姿测量李雯研1,2,程强1,2,曾雪锋1,2*,李福坤1,2,薛栋林1,2*,张学军1,2(1.中国科学院长春光学精密机械与物理研究所,吉林长春 130033;2.中国科学院大学,北京100049)摘要:为实现自由曲面的定位与位姿高精度测量,提出了“光学-机械”基准定位法,建立了位姿测量模型,并对该方法的定位误差和基准选择展开研究。
根据三坐标测量机与计算全息提出了“光学-机械”基准定位法。
然后,采用球形安装的回射器(Sphere Mounted Retroreflector ,SMR)、猫眼、基准球作为基准,基于波像差理论与视差效应分别建立了3种基准的位姿测量模型,得到了位置误差与基准区域波前像差的函数关系,并对3种位姿测量模型进行对比。
最后,对3种基准位姿测量方法进行仿真及实验验证,实测结果与模型的残差结果均小于0.05λ,相对误差均小于2.43%,验证了模型的准确性。
实验结果表明,当检测距离为1 000 mm时,猫眼法的轴向定位误差为24 μm;基准球法的轴向定位误差为50 μm;SMR靶球法的轴向定位误差为16 μm,X,Y方向的定位误差为1 μm,滚转角定位误差为3.26″。
SMR靶球法的定位误差最小、检测动态范围最大且检测光学元件的自由度最多,更适用于自由曲面的高精度位姿检测。
关键词:光学检测;光学面形位姿测量;“光学-机械”基准定位法;计算全息;定位误差中图分类号:O439 文献标识码:A doi:10.37188/OPE.20233111.1581High-precise posture measurement for measuring freeform surface with computer generated hologram compensationLI Wenyan1,2,CHENG Qiang1,2,ZENG Xuefeng1,2*,LI Fukun1,2,XUE Donglin1,2*,ZHANG Xuejun1,2(1.Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences,Changchun 130033, China;2.University of Chinese Academy of Sciences, Beijing 100049, China)* Corresponding author, E-mail:zengxf@ciomp. ac. cn;xuedl@ciomp. ac. cn Abstract: To realize the high-precision position measurement of freeform surfaces, this paper proposes an optic-mechanical reference positioning method that employs a position measurement model. First, an opti⁃cal-mechanical reference positioning method based on a coordinate measuring machine and computer-gener⁃ated holography is proposed. Then, using a spherical mounted retroreflector (SMR) target ball, cat eye,and reference ball as the benchmarks,three benchmark position measurement models are established on the basis of wave aberration theory and parallax effect. The functional relationship between the position er⁃ror and the wavefront aberration in the reference area is obtained, and the three position measurement mod⁃文章编号1004-924X(2023)11-1581-12收稿日期:2023-01-11;修订日期:2023-02-14.基金项目:吉林省卓越创新团队(No.20210509067RQ);国家自然科学基金资助项目(No.61975201,No.62075218,No.12003034);国家重大科研仪器研制项目(No.62127901);中科院青促会项目(No.2020224,(No.2022213)第 31 卷光学精密工程els are compared and analyzed. Finally, the three benchmark position measurement methods are simulated and validated via experiments. The residual difference between the measurement results and the model is below 0.05λ, and the relative error is below 2.43%, confirming the accuracy of the model. The experi⁃mental results indicate that the axial positioning error of the cat-eye method is 24 μm when the measure⁃ment distance is 1 000 mm. The axial positioning error of the reference-ball method is 50 μm. 
The SMR target ball positioning error is 16 μm in the axial direction, 1 μm in the X and Y directions, and 3.26″ in clocking.The SMR target ball method has the minimum positioning error,maximum measurement dy⁃namic range, and maximum degree of freedom in detecting optical elements; therefore, it is more suitable for high-precision pose measurement of freeform surfaces.Key words: optical testing;optical surface posture measurement;computer generated holography;optic-mechanical reference positioning method; positioning error1 引言计算全息(Computer Generated Hologram,CGH)补偿检测具有高精度、非接触测量等优点,可实现非球面、自由曲面等光学面形补偿检测[1-2]。
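引言指出位置误差与基准区域波前像差之间存在函数关系。作为量级示意,下述代码采用几何光学中的经典近似W_PV ≈ Δz/(8N²)(f数为N的会聚光束中,轴向离焦Δz引入的峰谷离焦波像差),由实测离焦波像差反推轴向定位误差。该近似为通用光学结论,并非本文所建立的位姿测量模型,波长与f数亦为假设的示例参数:

```python
def axial_shift_from_defocus(w_pv_waves, wavelength_nm=632.8, f_number=10.0):
    """由峰谷离焦波像差(单位:波长数)估算轴向离焦量 Δz,单位 μm。
    采用经典近似 W_PV ≈ Δz / (8·N²),即 Δz ≈ 8·N²·W_PV(N 为 f 数)。"""
    w_pv_um = w_pv_waves * wavelength_nm * 1e-3   # 波长数 -> μm
    return 8.0 * f_number ** 2 * w_pv_um

# 示例:0.05λ 的离焦波像差(He-Ne,632.8 nm)在 f/10 光束下对应的轴向误差
dz = axial_shift_from_defocus(0.05)
print(round(dz, 1))       # 25.3(μm)
```

可见0.05λ量级的波前残差即对应数十微米的轴向定位误差,与正文报道的各基准法定位误差处于同一量级。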
基于深度学习的人工智能技术在结直肠癌医学影像诊断中的应用进展
张凯璇,丁康
(南京中医药大学附属南京中医院肛肠中心,南京 210000)
摘要:结直肠癌(CRC)是我国常见的消化系统恶性肿瘤之一,近年来其发病率和死亡率逐年上升。
目前,CRC 的诊断主要依靠内镜下活检定位和定性诊断,内镜下准确定位和临床分期判断可指导临床医生选择最佳治疗方案,从而改善患者预后。
但结肠镜检查的准确率很大程度上取决于内镜医师的技术水平,主观性较强,存在误诊或漏诊情况。
近年来,随着机器学习、深度学习和深度神经网络技术在计算机视觉领域的快速发展,基于人工智能的计算机辅助系统在结肠镜检查方面取得了显著进展,不仅能显著提高结直肠息肉的检出效率,还能早期识别T1期CRC,从而避免部分患者接受不必要的手术切除,已成为内镜医生不可或缺的辅助诊断工具。
医学影像AI技术虽然在提高CRC诊断效率方面具有良好的应用前景,但在临床实施中仍面临一些困难与挑战,在临床应用之前必须进行大样本、多中心的模型训练。
关键词:结直肠癌;人工智能;医学影像学doi:10.3969/j.issn.1002-266X.2024.04.026中图分类号:R735.3 文献标志码:A 文章编号:1002-266X(2024)04-0107-04结直肠癌(CRC)起源于结直肠黏膜上皮,是临床常见的消化系统恶性肿瘤之一,在全球恶性肿瘤中其发病率居第三位、死亡率居第二位。
CRC的发生大多遵循“息肉-腺瘤-癌”序列,通常由良性癌前息肉的局灶性变化发展而来,从息肉发展到癌症往往需要5~10年。
越来越多的证据表明,早期筛查及早诊早治可以降低CRC的发病率和死亡率[1]。
在当前的临床实践中,结肠镜是CRC筛查的“金标准”[2]。
内镜医师可通过镜头完整地探查整个肠腔,发现可疑病灶后取活体组织进行病理学检查。
近年来,随着人工智能(AI)技术在疾病诊断中得到快速发展,结肠镜下AI计算机辅助诊断技术(CAD)为CRC的精确诊断提供了新的途径[3]。
通信英语缩略语中英文对照
邻频干扰
ACIR  Adjacent Channel Interference Ratio  邻信道干扰比
ACK  ACKnowledgement  确认
ACL  Asynchronous Connectionless Link  异步无连接
ACLR  Adjacent Channel Leakage Ratio
Adaptive Differential Pulse Code Modulation  自适应差分脉冲编码调制
ADR  Analogue-Digital Recorder  模拟-数字记录器
ADR  Address Digit Receiver  数字地址接收器
ADR  Advisory Route  建议路由
ADR  Analogue Data Recognition  模拟数据识别
ADRC  Automatic Call Distribution Remote Capture  自动呼叫分配(分布)远程截获
ADSL  Asymmetric Digital Subscriber Line  不对称数字用户线
ADSL  Asymmetric Digital Subscriber Loop  非对称数字用户环路技术
高级实时处理系统
ARQ  Automatic Repeat Request  自动重复请求
ARS  Address Resolution Server  地址解析服务器
ART  Alarm Reporting Telephone
基于边缘线性拟合的芯片亚像素定位算法
基于边缘线性拟合的芯片亚像素定位算法作者:罗振威丁跃浇甘玉坤来源:《软件》2020年第06期摘要:为解决贴片机由于贴片元件中心定位精度不够导致贴片过程中可能出现的抛料、贴片芯片引脚损坏等情况,本文设计一种基于边缘数据拟合的贴片芯片中心定位算法,实现了对贴片芯片中心的亚像素级高精度定位。
并在此基础上,利用检测出来的中心坐标计算出芯片离吸嘴的基准偏差,得到芯片的中心偏移量。
并且通过线性拟合得到的边缘线,计算出芯片的旋转角。
在算法过程中加入了去倒角算法去除倒角信息对拟合的影响,更好地提高检测结果的稳定性。
由程序运行结果可知该算法提高了贴片机的贴片精度,特别对检测芯片偏转角度有着良好的效果,同时相对其他亚像素检测算法保证了检测过程的效率。
关键词:光学测量;贴片机;中心定位;边缘检测;亚像素;数据拟合中图分类号: TP391;TP23 文献标识码: A DOI:10.3969/j.issn.1003-6970.2020.06.041本文著录格式:罗振威,丁跃浇,甘玉坤. 基于边缘线性拟合的芯片亚像素定位算法[J]. 软件,2020,41(06):204207【Abstract】: In order to solve the problem that the placement machine may not be caused by the centering accuracy of the chip component, the throwing of the chip and the pin of the chip may be damaged. In this paper, a center localization algorithm based on edge data fitting is designed to achieve high-precision sub-pixel positioning of the chip center. Based on this, the measured center deviation is used to calculate the reference deviation of the chip from the nozzle, obtain the center offset of the chip. And calculate the rotation angle of the chip by linearly fitting the obtained edge line. In the algorithm process, the de-chamfering algorithm is added to remove the influence of chamfering information on the fitting, and the stability of the detection result is better improved. It can be seen from the running results of the program that the algorithm improves the placement accuracy of the placement machine, and has a good effect on detecting the deflection angle of the chip, and the efficiency of the detection process is ensured compared with other sub-pixel detection algorithms.【Key words】: Optical measurement; Mounter; Center positioning; Edge detection; Sub-pixel; Data fitting0 引言隨着现在手机、电脑等电子产品的快速更新换代,电子芯片也随着向高速、高集成度、微型等趋势发展。
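摘要所述"对边缘点作线性拟合,由拟合出的边缘线计算芯片旋转角与中心偏移"的思路,可用如下最小二乘示意代码说明。边缘点为人为构造的含噪声演示数据,矩形边缘提取与去倒角步骤从略,并非论文原始算法:

```python
import numpy as np

def fit_line(pts):
    """最小二乘拟合边缘直线 y = k·x + b,返回 (k, b)。"""
    k, b = np.polyfit(pts[:, 0], pts[:, 1], 1)
    return k, b

rng = np.random.default_rng(2)
theta = np.deg2rad(3.0)                  # 构造旋转 3° 的矩形上、下两条边缘
c, s = np.cos(theta), np.sin(theta)
t = np.linspace(-20, 20, 200)
top    = np.stack([t * c - 10 * s, t * s + 10 * c], axis=1)
bottom = np.stack([t * c + 10 * s, t * s - 10 * c], axis=1)
noise = lambda: rng.normal(0, 0.05, size=(200, 2))   # 亚像素级边缘定位噪声

k1, b1 = fit_line(top + noise())
k2, b2 = fit_line(bottom + noise())

angle = np.rad2deg(np.arctan((k1 + k2) / 2))   # 旋转角:两平行边缘斜率均值
center_y = (b1 + b2) / 2                       # 中心偏移(y 向):两边缘中线截距
print(round(angle, 2), round(center_y, 2))     # 约 3.0 与 0.0
```

由于整条边缘的数百个采样点共同参与拟合,单点0.05像素的噪声被平均掉,角度与中心的估计误差可远小于单点定位误差,这正是线性拟合实现亚像素定位的原理。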
沙田柚乙烯应答因子的鉴定和生物信息学分析
LI Lu-lu, CHEN Yu-mei, LUO Qiong, LI Hui-min, QIN Xin-min∗
( College of Life Science, Guangxi Normal University, Guilin 541004, China)
Abstract: The transcriptomes of the self-pollinated style and cross-pollinated style of Citrus grandis var. shatinyu Hort were researched by high-throughput sequencing, and the ethylene response factor gene sequence was obtained from this plant through difference analysis. The sequence characteristics, physicochemical properties, transmembrane domains, advanced structures and functional domains of the proteins encoded by this gene were analyzed and predicted by using bioinformatic method. The results showed that the full length of the gene was 1013 bp ( GenBank login number was MG905213) , the open reading frame ( ORF) was 498 bp in full length, encoding 166 amino acids with deduced molecular weight of 18.45 kDa, and theoretical pI value was 7.90. This protein had a conserved domain, which was the same as that of AP2 superfamily protein; it was a hydrophilic non-se⁃ cretory unstable protein with 24 possible phosphorylation sites, and was not involved in transmembrane movement. The homology analysis of amino acid sequence indicated that the protein encoded by this gene shared high homology with the AP2 protein of Cit⁃ rus sinensis, XP-006467696.1 ( 100%) and Citrus clementina, XP-006449381.1 ( 98%) . The phylogenetic tree indicated that the ethylene response factor in C. grandis var. shatinyu had very close kinship with that in C. sinensis and C. clementina, and they belonged to the same evolutionary branch.
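摘要给出ORF全长498 bp、编码166个氨基酸、推测分子量18.45 kDa。这些数值之间的换算关系可用下面的小段代码核对(平均氨基酸残基质量取常用近似值110 Da,仅作量级验证,并非论文所用的精确分子量计算):

```python
orf_bp = 498                         # ORF 全长(bp)
aa = orf_bp // 3                     # 每个密码子对应 3 bp
avg_residue_da = 110.0               # 平均氨基酸残基质量的常用近似(Da)
mw_kda = aa * avg_residue_da / 1000 + 0.018   # 末端补一个水分子(约 18 Da)
print(aa, round(mw_kda, 2))          # 166 18.28,与报道的 18.45 kDa 量级一致
```

精确分子量需按实际氨基酸序列逐残基累加单残基质量,此处的平均值近似只能用于一致性检查。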
基于Jetson_Nano的人脸口罩识别系统设计
0 引言近年来,人脸识别技术取得了显著进展,已广泛应用于各种场合,包括安全监控、身份验证、人流量统计以及工业自动化等。
其中,检测是否佩戴口罩成为一个特殊但重要的任务,特别是在需要遵守特定安全规定或健康标准的场合。
然而,即便是最先进的人脸识别算法,也常常会受到环境变量(如光照、角度和遮挡物)的影响,导致算法准确性下降。
基于Jetson Nano的人脸口罩识别系统设计
Design of face mask recognition system based on Jetson Nano
叶光泽 童宣科 侯保冀
(桂林电子科技大学光电工程学院,广西 桂林 541000)
摘要:人脸和口罩识别在许多现代应用中有广泛需求,包括安全检查、身份认证以及工业安全等方面。
探讨了基于Jetson Nano的人脸口罩识别系统,旨在实现对个体是否佩戴口罩的准确判断。
该系统使用先进的人脸识别算法来定位图像中的人脸区域,然后应用深度学习模型对人脸区域进行口罩检测。
所有的数据处理和模型推理均在Jetson Nano上实现,从而使系统具有较高的运行效率和可移植性。
在多个不同环境和视角下进行测试,结果显示该系统在各种复杂条件下都展示出较高的识别准确率,并能处理高达30帧/s的实时图像。
不仅证明了Jetson Nano在图像识别和实时数据处理方面的高性能,还为开发多种人脸识别应用提供了有力的技术支持。
关键词:人脸识别;口罩识别;Jetson Nano;深度学习
中图分类号:TP394.41;TP183 文献标识码:A
为了解决这些问题,需要一种低成本、高效率的解决方案,因此本文提出基于Jetson Nano的人脸口罩识别系统。
Jetson Nano 是一款专为边缘计算设计的微型计算平台,具有出色的图像处理和机器学习能力,且功耗和成本都较低。
1 硬件设计1.1 硬件组件本设计的硬件架构主要由3个部分组成:Jetson Nano 微型计算平台、显示屏和摄像头。
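摘要描述的系统采用"先定位人脸区域、再对区域做口罩判别"的两阶段流程。下面用纯NumPy的极简启发式(检查人脸下半部是否被近似单色区域覆盖)示意该流程的接口组织方式;真实系统应替换为论文所用的深度学习分类模型,此处的函数、阈值与演示数据均为示意性假设:

```python
import numpy as np

def has_mask(face_rgb, uniform_thresh=20.0):
    """启发式判别:人脸下半部若颜色高度均匀(各通道标准差很小),
    视为被素色口罩覆盖。仅作流程示意,实际应替换为分类网络。"""
    lower = face_rgb[face_rgb.shape[0] // 2:].astype(float)
    return float(lower.std(axis=(0, 1)).mean()) < uniform_thresh

def pipeline(frame, face_boxes):
    """两阶段流程:face_boxes 为上游人脸检测器输出的 (x, y, w, h) 框。"""
    return [has_mask(frame[y:y + h, x:x + w]) for (x, y, w, h) in face_boxes]

# 演示帧:左侧"人脸"下半部为近似单色的口罩区域,右侧为随机纹理
rng = np.random.default_rng(3)
frame = rng.integers(0, 255, size=(100, 200, 3), dtype=np.uint8)
frame[50:100, 0:100] = (60, 90, 200)
print(pipeline(frame, [(0, 0, 100, 100), (100, 0, 100, 100)]))   # [True, False]
```

两阶段划分的好处是人脸检测与口罩判别可以独立替换与优化,这与文中"定位+分类"的系统结构一致。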
基于变分贝叶斯的视觉散焦光图像快速盲复原算法
第41卷 第3期吉林大学学报(信息科学版)Vol.41 No.32023年5月Journal of Jilin University (Information Science Edition)May 2023文章编号:1671⁃5896(2023)03⁃0552⁃07基于变分贝叶斯的视觉散焦光图像快速盲复原算法收稿日期:2022⁃06⁃01基金项目:山西省自然科学基金资助项目(201901D111061)作者简介:姜新军(1981 ),男,山东济宁人,太原卫星发射中心工程师,主要从事光学测量技术研究,(Tel)86⁃133****8358(E⁃mail)zhangnan684@㊂姜新军(太原卫星发射中心技术部,太原030000)摘要:为实现数字图像的高精度应用,降低外界光线对视觉成像的影响,提出一种基于变分贝叶斯的视觉散焦光图像快速盲复原算法㊂通过梯度和卷积处理,计算视觉散焦光图像后验概率期望,利用Sobolev 空间函数分布方法提取最优初始图像和散焦光模糊函数先验概率㊂以变分贝叶斯的近似分布思想,无限逼近实际的后验概率;使用相对熵计算多个分布间距离,最大程度贴近真实值;将最小损耗代价函数输入双边滤波器内,即以近似清晰图像为指导图,剔除剩余高频噪声,得到最优图像盲复原结果㊂实验结果表明,所提算法盲复原后图像对比度高,边缘细节清晰,复原速度快,具有极高的应用价值㊂关键词:最小损耗代价函数;变分贝叶斯;散焦光图像;后验概率;相对熵;双边滤波器中图分类号:TP391文献标志码:AFast Blind Restoration Algorithm of Visual Defocus Image Based on Variable BayesianJIANG Xinjun(Department of Technical,Taiyuan Satellite Launch Center,Taiyuan 030000,China)Abstract :In order to realize the high⁃precision application of digital images and reduce the influence of external light on visual imaging,a fast blind restoration algorithm for visual defocused light images based on variational Bayesian is proposed.Through gradient and convolution processing,the posterior probability expectation of the visual defocused light image is calculated,and the optimal initial image and the prior probability of the defocused light blur function are extracted by using the Sobolev space function distribution method.The actual posterior probability is reached infinitely,using relative entropy to calculate the distance between multiple distributions,to approximate the true value of the greatest extent,and input the minimum loss cost function into the bilateral filter,that is,take the approximate clear image as the guide map,to remove the remaining high⁃frequency noise.The optimal image blind restoration results are obtained.The experimental results show that the proposed algorithm has high image contrast,clear edge details and fast restoration speed after blind restoration,which has extremely high application value.Key words :minimum loss 
cost function;variable decibels;defocus image;a posteriori probability;relative entropy;bilateral filter 0 引 言人工智能和大数据时代加快了数字图像领域研究进程,并已广泛应用于传感探索㊁军事防御㊁医学可视化等多个领域,使其在降低人类劳动强度的同时,通过机器视觉实现高精细化的重复性工作㊂由于设备在成像过程中,将受到自然光㊁背景环境噪声等多方面干扰,即使器械具有调焦点功能,但也会在长期高强度运行下出现零部件偏移,导致呈现重叠㊁虚影㊁模糊的散焦光图像,不能为使用者提供精准可视化结果㊂且在视场范围小㊁景深长度短的环境中,难以校正微型显像零部件误差㊂因此如何快速有效地实现散焦光图像精准复原成为现阶段科研领域研究重点,为此盲复原算法[1]应运而生,该算法不需要明确散焦光成因或其他干扰因素,就能准确预测原始信息,重构清晰图像㊂目前已有对视觉散焦光图像快速盲复原算法的研究报道,例如郭立文等[2]提出一种分割和拼接图像块的复原方法,划分模糊图像边界,将特征多的图像块合并,分析模糊函数和分割线边界间关系,最后拼接交替迭代实现模糊估计和恢复;涂春梅等[3]考虑到不清晰图像的本质是像素的非稀疏性,即成像中明暗像素的不加权平衡问题,导致暗像素比例大增㊁非稀疏性降低,为此在暗像素先验算法中引入损耗最小线性近似,求出估计最优解,实现盲复原㊂但上述方法都没有考虑先验知识对盲复原的积极作用,导致图像盲复原效果较差㊂为此笔者通过变分贝叶斯提取散焦光图像先验知识,转化为复原后验概率分布信息,充分利用原始图像特征向量,降低复原图像细节误差㊂最后引入双边滤波器最大程度去除高频信息,并将复原结果和散焦光图像比较㊂1 视觉散焦光图像分析1.1 视觉散焦光图像数学建模图像的散焦光模糊成像过程的数学模型为G =F ⊗v +φ,(1)其中F 为初始未散焦光的图像,v 为散焦光模糊函数,G 为散焦光后的图像,φ为干扰噪声,⊗为卷积㊂图像的散焦光过程中,边缘背景特征相比目标区域降质程度更严重[4],呈现的图像细节信息少,所以凭借梯度域进行恢复前处理[5]㊂∇F 与∇G 为初始图像和散焦光图像的梯度值,通过线性卷积处理[6]后,式(1)可转化为∇G =∇f ⊗v +φ㊂(2) 固定散焦光梯度值∇G 的同时,保持∇F 与v 彼此独立,仅根据基础贝叶斯原理进行分析,∇F 与v 的复原后验概率[7]为P (v ,∇G ∇F )=P (∇G v ,∇F )P (v ,∇F )P (∇G )∝P (∇G v ,∇F )P (v )P (∇F )㊂(3)分析式(3)可得出,通过贝叶斯进行盲复原的关键在于详细分析初始图像,提取合适先验概率和散焦光模糊函数先验概率㊂1.2 初始图像先验概率考虑到视觉散焦光对图像各异性扩散的影响[8],使用Sobolev(索伯列夫空间)建立初始图像F 的先验概率模型,作为由函数构建的赋范向量空间,其经典调和模型可最大程度保留图像的边缘特征,降低估计损失[9]㊂通过将初始图像最优估计问题转换为线性优化求解,可知F 的先验概率模型为Sob(F )=‖Δx F ‖22+‖Δy F ‖22,(4)其中Δx 与Δy 分别为在二维空间内水平与垂直方向上一阶差分算式㊂Sob(F )在高斯函数分布[10]中为P (F α)∝αN /2-α2(‖Δx F ‖22+‖Δy F ‖22[]),(5)其中α为初始图像先验概率算子,且α>0,α-1为Sob(F )的高斯方差,通过P (F α)避免解出现局部最优问题;N 为初始图像内像素数量,是不为零的正整数㊂1.3 散焦光模糊函数先验概率散焦光图像降质问题主要是由光线和非正常扩散和衰减产生的,且扩散会随着深度加大图像逐渐降低对比度,导致图像表面物体的亮度增加㊂衰减会随着深度加大光线逐层递减[11],导致成像过程中光线355第3期姜新军:基于变分贝叶斯的视觉散焦光图像快速盲复原算法不足,目标明暗对比不明显,即散焦光的支持域相对较为平滑[12],难以提取出精准特征用于后续盲复原㊂为此,使用Sobolev空间表示散焦光模糊函数先验概率模型为Sob(v)=‖Δx v‖22+‖Δy v‖22㊂(6) Sob(F)在高斯函数分布中为P(vβ)∝βN/2-β2(‖Δx v‖22+‖Δy v‖22[]),(7)其中β为初始图像先验概率算子,且β>0,β-1为Sob(v)的高斯方差,通过P(vβ)避免解出现局部最优问题㊂将β的支持域横纵扩展成r×c,即β∈R N×1㊂2 变分贝叶斯下视觉散焦光图像快速盲复原2.1 
变分贝叶斯估计充分考虑初始图像先验概率和散焦光模糊函数先验概率后,笔者发现存在部分噪声未知参数[13],难以获得其有效先验概率,为此,通过变分贝叶斯的近似分布思想处理复原估计问题㊂凭借近似分布Q(v,∇F)无限逼近实际的后验概率P(v,∇G∇F)[14],并且摒弃了传统共轭后验理念,使用相对熵[15] (又称Kullback⁃Leibler散度)计算多个分布间距离,以最大程度贴近真实值㊂由于干扰噪声φ是变分贝叶斯估计中的未知参数,其方差可描述为σ2,此时近似分布Q(v,∇F)可改写为Q(v,∇F,σ2),定义近似和真实后验概率间相对熵为K KL[Q(v,∇F,σ2)‖P(v,∇G∇F)]=∫Q(v,∇F,σ2)In Q(v,∇F,σ2)P(v,∇G∇F)d∇F d v+In P(∇Q)≥0,(8)其中有且仅有Q(v,∇F,σ2)=P(v,∇G∇F)时等式才成立,后验概率最准确,复原效果最好㊂2.2 盲复原损耗最小化问题求解通过分析式(8)可看出,在全部估计过程中P(∇Q)都为一个常量,为此定义最小损耗代价函数R KL 求出近似分布的最优解,则有R KL=K KL[Q(v,∇F,σ2)‖P(v,∇G∇F)]-In P(∇Q)=∫Q(∇F)In Q(∇F)P(∇F)d∇F+∫Q(v)Q(∇F)Q(v)P(v)d v+∫Q(-σ2)In Q(-σ2)P(-σ2)d(-σ2)㊂(9) 凭借变分贝叶斯最优期望的最大值原则,实现散焦光盲复原估计损耗最小化,得出近似分布的降质函数[16],最后在双边滤波内输入R KL,即以一个近似清晰图像为指导图,进而计算出剩余高频信息[17],获得最优图像盲复原结果㊂F r为经R KL估计后近似清晰图像,^G㊁^F r为散焦光图像G㊁F r的去卷积形式,假设指导图为^G-ω(^G-^Fr),ω为权重系数,且ω∈[0,1],则得到优化图像函数为min E(F)=‖F-(^G-ω(^G-^F r))‖2+λ∑l m=-l∑l n=-l(F-D m,n F)W m,n(F-D m,n F)T,(10)其中λ为平滑因子方差,D m,n为转移系数,用于描述散焦光图像在水平和垂直方向上分别移动了m㊁n个像素,l为像素领域区间,W m,n为减少盲复原中平滑作用权重矩阵[18],若处于图像非对角线上则元素值是0,反之非对角线上元素值是1,元素值和边缘特征像素强度成反比[19],T为平滑作用周期㊂通过高斯双边滤波求出盲复原图像:^F1(i,j)=∑l m=-l∑l n=-l a(m,n,i,j){^G(i-m,j-n)-ω[^G(i-m,j-n)-^F((i-m,l-n))]},(11)其中a(m,n,i,j)=1Z(i,j)exp-m2+n22δ2s -[^G(i,j)-^G(i-m,j-n)2]2δ2()r ,(12)455吉林大学学报(信息科学版)第41卷其中a (m ,n ,i ,j )为双边滤波器,Z 为归一化因子,(i ,j )为任意像素点坐标位置,δ2s 和F 1为图像特征值域和空间域在进行滤波时的高斯标准差㊂最后计算^F 1和^G 之间的灰度方差[20],如果得出方差值小于等于0.001,则输出盲复原图像^F1;反之,用^F 1代替^F r ,再次运算式(10),式(11),直至满足灰度方差小于等于0.001㊂3 盲复原算法性能测试与研究为验证所提基于变分贝叶斯的视觉散焦光图像快速盲复原算法的有效性,在开源图数据库TigerGraph㊁Janus Graph 中随机挑选200张图作为实验样本集合,用于进行算法性能测试实验㊂为使验证结果更具普适性,与分割和拼接图像块盲复原法和暗像素先验盲复原法进行对比分析㊂3.1 盲复原效果评价在样本集中任意挑选2张图像作为复原对象㊂原始Lena 图像与进行散焦光处理后降质图像如图1所示,3种方法复原效果如图2所示㊂图1 Lena 原始和散焦光处理后图像Fig.1 Lena original and defocused images after processing 图2 3种方法的Lena 图像盲复原效果对比Fig.2 Comparison of the blind restoration effects of Lena images of the three methods 通过图2a 可看出,笔者算法图像细粒度高㊁边缘细节保留完好㊁可提取特征多,人物头发丝和帽子上的流苏都清晰可分,能完整辨别人物的五官和姿态动作㊂这是因为所提算法充分考虑到光线衰减和随成像深度的扩展作用,优先计算散焦光模糊函数先验概率,使近似后验分布能引导复原为最优结果㊂图2b 分割和拼接图像块盲复原法没有完全去除不均匀高频噪声,图2c 
暗像素先验盲复原法在此基础上,还存在一层低频模糊度噪声,二者人脸盲复原结果都不理想,难以应用于人脸识别领域㊂原始森林图像与进行散焦光处理后降质图像如图3所示,3种方法复原效果如图4所示㊂森林图像相比人脸的散焦光程度更高,且样本图像内光影㊁树木以及景物深度关系复杂,但通过图4a,图4b 可看出,分割和拼接图像块盲复原法和笔者算法复原结果近乎一致,都能清晰呈现出光线透过树木枝叶投射的光以及光晕,且林间小路也清晰可见,没有被错误复原成落叶或草地;放大观察能看出分割和拼接图像块盲复原法还是存在一定模糊度,复原效果没有笔者算法好㊂图4c 暗像素先验盲复原法方法辨识度最低,纹理和光影细节比较模糊,难以辨识㊂555第3期姜新军:基于变分贝叶斯的视觉散焦光图像快速盲复原算法图3 森林原始和散焦光处理后图像Fig.3 The original and defocused images of the forest after processing 图4 3种方法的森林图像盲复原效果对比Fig.4 Comparison of blind restoration effects of three methodsfor forest image restoration 3.2 盲复原效率评价图5 盲复原算法效率对比Fig.5 Efficiency comparison of blind restoration algorithms 在开源图数据库内200张样本集合中随机挑选11张图像,其中包含森林花草㊁道路车辆和人脸躯干等多个种类,进行盲复原效率测试,3种方法的对比结果如图5所示㊂从图5可看出,因为每个图像编号包含的内容不同,方法在处理先验概率㊁特征等步骤时花费时间也不同,3种方法的盲复原耗时都在上下波动,但是可清晰看出,笔者算法的复原速度最快㊂这是因为充分655吉林大学学报(信息科学版)第41卷考虑了降质图像先验概率,选择最优近似分布,降低了后续因灰度方差不满足要求而需再次进行迭代的次数,实现快速盲复原㊂4 结 语为提高机器视觉的适用范围,降低光线衰减和设备深度对成像结果的影响,提高其对外界干扰的抵抗力,笔者提出一种基于变分贝叶斯的视觉散焦光图像快速盲复原算法㊂通过分析散焦光图像先验概率和模糊函数先验概率,全方面汲取原始图像内有效信息,最大程度提高复原后图像与原始图像间相似度,面对无法完全去除的高频干扰,使用变分贝叶斯近似分布思想,无限逼近真实结果㊂实验证明所提算法复原质量高㊁耗时少,能有效提升复原后视觉散焦光图像的分辨率㊂参考文献:[1]黄彦宁,李伟红,崔金凯,等.强边缘提取网络用于非均匀运动模糊图像盲复原[J].自动化学报,2021,47(11):2637⁃2653.HUANG Y N,LI W H,CUI J K,et al.Strong Edge Extraction Network for Non⁃Uniform Blind Motion Image Deblurring [J].Acta Automatica Sinica,2021,47(11):2637⁃2653.[2]郭立文,廖永忠.应用图像块的运动模糊图像盲恢复算法[J].小型微型计算机系统,2020,41(10):2225⁃2229.GUO L W,LIAO Y Z.Blind Deblurring of Motion Blurred Image Base on Image Patch Mosaic [J].Journal of ChineseComputer Systems,2020,41(10):2225⁃2229.[3]涂春梅,陈国彬,刘超.暗像素先验的模糊图像盲复原方法[J].计算机工程与应用,2020,56(10):213⁃219.TU C M,CHEN G B,LIU C.Dark⁃Pixel⁃Prior Blind Deblurring Method [J].Computer Engineering and Applications,2020,56(10):213⁃219.[4]MENG H,YAN Y,CAI C,et al.A Hybrid Algorithm for Underwater Image Restoration Based on Color Correction and ImageSharpening [J].Multimedia Systems,2020(4):1⁃11.[5]徐宁珊,王琛,任国强,等.混合梯度稀疏先验约束下的图像盲复原[J].光电工程,2021,48(6):61⁃72.XU N S,WANG C,REN G Q,et al.Blind Image Restoration Method Regularized by Hybrid 
Gradient Sparse Prior [J].Opto⁃Electronic Engineering,2021,48(6):61⁃72.[6]何祥宇,李静,杨数强,等.基于ET⁃PHD 滤波器和变分贝叶斯近似的扩展目标跟踪算法[J].计算机应用,2020,40(12):3701⁃3706.HE X Y,LI J,YANG S Q,et al.Extended Target Tracking Algorithm Based on ET⁃PHD Dilter and Variational BayesianApproximation [J].Journal of Computer Applications,2020,40(12):3701⁃3706.[7]马天力,张扬,陈超波.带不确定混合噪声系统的变分贝叶斯期望最大滤波算法[J].中国惯性技术学报,2021,29(4):475⁃481,490.MA T L,ZHANG Y,CHEN C B.Variational Bayesian Expectation⁃maximization Filter for Systems with Uncertain HybridNoises [J].Journal of Chinese Inertial Technology,2021,29(4):475⁃481,490.[8]NAN Y,JI H.Un⁃Supervised Learning for Blind Image Deconvolution via Monte⁃Carlo Sampling [J].Inverse Problems,2022,38(3):035012.[9]龚平,贺杰,刘娜,等.基于稀疏正则化和渐近边界假设的运动模糊图像盲复原[J].济南大学学报(自然科学版),2021,35(1):16⁃23.GONG P,HE J,LIU N,et al.Blind Restoration of Motion Blurred Images Based on Sparse Regularization and Asymptotic Boundary Hypothesis [J].Journal of University of Jinan (Science and Technology),2021,35(1):16⁃23.[10]吴兰,范晋卿,文成林.基于多尺度编解码网络的道路交通模糊图像盲复原[J].郑州大学学报(理学版),2022,54(2):8⁃15.WU L,FAN J Q,WEN C L.Blind Restoration of Road Traffic Blurred Image Based on Multi⁃Scale Encoder⁃Decoder Network[J].Journal of Zhengzhou University (Natural Science Edition),2022,54(2):8⁃15.[11]柳宁,赵焕明,李德平,等.基于L_0稀疏先验的运动模糊标签图像盲复原[J].华南理工大学学报(自然科学版),2021,49(3):8⁃16.LIU N,ZHAO H M,LI D P,et al.Blind Restoration of Motion Blur Label Image Based on L_0Sparse Priors [J].Journal of 755第3期姜新军:基于变分贝叶斯的视觉散焦光图像快速盲复原算法855吉林大学学报(信息科学版)第41卷South China University of Technology(Natural Science Edition),2021,49(3):8⁃16.[12]姚海燕.基于微分方程的退化图像盲复原数学模型构建[J].计算机仿真,2021,38(5):185⁃188,431.YAO H Y.Construction of Blind Restoration Mathematical Model of Degraded Image Based on Differential Equation[J]. 
Computer Simulation,2021,38(5):185⁃188,431.[13]陈荣军,郑志君,赵慧民,等.一种基于模糊成像机理的QR码图像快速盲复原方法[J].光子学报,2021,50(7): 99⁃107.CHEN R J,ZHENG Z J,ZHAO H M,et al.Fast Blind Restoration of QR Code Images Based on Blurred Imaging Mechanism [J].Acta Photonica Sinica,2021,50(7):99⁃107.[14]WANG Y,CHANG D,ZHAO Y.A New Blind Image Denoising Method Based on Asymmetric Generative Adversarial Network [J].IET Image Processing,2021,15(6):1260⁃1272.[15]陈晨,许金鑫,危才华,等.基于显著性强度和梯度先验的多尺度图像盲去模糊[J].激光与光电子学进展,2020,57 (4):271⁃277.CHEN C,XU J X,WEI C C,et al.Multi⁃Scale Image Blind Deblurring Based on Salient Intensity and a Priori Gradient[J]. Laser&Optoelectronics Progress,2020,57(4):271⁃277.[16]李佳田,吴华静,林艳,等.顾及模糊核连通性的无人机图像半盲复原方法[J].武汉大学学报(信息科学版),2021, 46(6):816⁃824.LI J T,WU H J,LIN Y,et al.A Semi⁃Blind Restoration Method of UAV Image Considering the Blurred Kernel Connectivity [J].Geomatics and Information Science of Wuhan University,2021,46(6):816⁃824.[17]吴笑天,杨航,孙兴龙.基于区域选择网络的图像复原及其在计算成像中的应用[J].光学精密工程,2021,29(4): 864⁃876.WU X T,YANG H,SUN X L.Image Restoring Method Based on Region Selection Network and Its Application in Computational Imaging[J].Optics and Precision Engineering,2021,29(4):864⁃876.[18]冯象初,王萍,何瑞强.一种应用博弈和L0约束的盲图像修复方法[J].西安电子科技大学学报(自然科学版), 2021,48(4):103⁃112.FENG X C,WANG P,HE R Q.Method for Inpainting Blind Images Using the Game and L0Constraint[J].Journal of Xidian University(Natural Science),2021,48(4):103⁃112.[19]胡佳成,颜迪新,曹丛,等.基于条件生成式对抗网络的AFM图像盲重构方法[J].计量学报,2021,42 (5):545⁃551.HU J C,YAN D X,CAO C,et al.Blind Reconstruction Method of AFM Image Based on Conditional Generative Adversarial Network[J].Acta Metrologica Sinica,2021,42(5):545⁃551.[20]HOU Y,XU J,LIU M,et al.NLH:A Blind Pixel⁃Level Non⁃Local Method for Real⁃World Image Denoising[J].IEEE Transactions on Image Processing,2020,29:5121⁃5135.(责任编辑:刘东亮)。
An Efficient Intra-Prediction Algorithm Based on Edge Direction Information
Ju Mingye; Pei Zhijun; Zhang Jun. [Abstract] To reduce the complexity of the intra-prediction algorithm in H.264/AVC for scenarios with strict real-time requirements, the macroblock prediction modes intra_4×4 and intra_16×16 are pre-judged from the energy distribution of the macroblock gray-level histogram; the defects of the Pan algorithm are then improved upon, yielding a new intra-prediction algorithm. Experimental results show that, under all-I-frame coding, the algorithm reduces coding time by 68.9% on average compared with full search, and by more than 37% at most compared with the Pan algorithm. While preserving image quality and compression performance, it effectively reduces computational complexity and thus greatly increases coding speed. [Journal] Computer Applications and Software. [Year (Vol.), Issue] 2013(030)007. [Pages] 5 (P75-78, 82). [Keywords] edge direction histogram; mode selection; intra prediction; video compression. [Authors] Ju Mingye; Pei Zhijun; Zhang Jun. [Affiliation] Institute of Sensing Technology and Application, Tianjin University of Technology and Education, Tianjin 300222, China. [Language] Chinese. [CLC] TP391.

0 Introduction
H.264/AVC is the latest video compression standard jointly developed by ITU-T and MPEG. Compared with earlier video compression standards, H.264/AVC pays more attention to the technical details of each functional module [1], effectively improving coding efficiency and network adaptability and improving video quality, at the cost of increased algorithmic complexity. In H.264/AVC intra-prediction coding, Lagrangian rate-distortion optimization (RDO, Rate-Distortion Optimization) is used to find the optimal prediction mode for high-quality video; the method effectively reduces spatial redundancy and improves compression efficiency, but obtaining one optimal intra-prediction mode costs RD (Rate-Distortion) computations over 592 mode combinations [2], which greatly increases computational complexity and makes it unsuitable for real-time communication scenarios. To shorten intra-prediction coding time, [3] pre-judges the intra_4×4 modes using the sum of absolute residuals, excluding unlikely intra_4×4 modes based on the transform-domain sum-of-absolute-residuals feature; [4] screens out unlikely prediction modes based on inter-pixel correlation; [5] proposes a multi-threshold halfway-stop algorithm that substantially reduces the complexity of 4×4 blocks.
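The histogram-energy pre-judgment idea can be illustrated as follows. This is a hypothetical sketch, since the abstract does not give the exact criterion or threshold: a macroblock whose normalized gray-level histogram energy is highly concentrated is flat and suits intra_16×16, while a spread-out histogram suggests detail and favors intra_4×4. The function names and the threshold value are ours, not the paper's.

```python
def histogram_energy(block, bins=256):
    """Normalized gray-level histogram energy sum(p_i^2) of a macroblock.

    Close to 1 for flat blocks (one dominant gray level); small for
    detailed blocks whose pixels spread over many levels.
    """
    hist = [0] * bins
    n = 0
    for row in block:
        for v in row:
            hist[v] += 1
            n += 1
    return sum((h / n) ** 2 for h in hist)

def prejudge_mode(block, threshold=0.5):
    """Hypothetical pre-judgment rule: flat blocks -> intra_16x16, else intra_4x4."""
    return 'intra_16x16' if histogram_energy(block) >= threshold else 'intra_4x4'
```

With such a pre-judgment, the expensive RDO search only has to be run over the modes of the pre-selected partition type rather than all 592 combinations.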
An Iris Inner-Edge Localization Algorithm Based on Adaptive Threshold Segmentation and Edge Detection
Zhao Jing (School of Mathematics and Computer Science, Ningxia University)
[Abstract] To improve the accuracy and speed of iris inner-edge localization, this paper proposes an iris inner-edge localization algorithm based on adaptive threshold segmentation and edge detection. First, an adaptive active window is used to compute the image gray levels, and the point with the minimum gray value is taken as a point inside the pupil; using that minimum gray value as the threshold, the pupil is segmented from the original image. The Laplacian edge-detection operator is then applied to the pupil region to locate the iris inner edge. Finally, the center and radius of the inner edge are computed from the symmetry of the circle. Simulation results show that the algorithm accurately locates the iris inner boundary in an average time of 0.73 s with an accuracy of 100%, so it has high practical value in iris recognition systems.
[Keywords] iris recognition; iris localization; edge detection; Laplacian operator

1. Introduction
As an important feature for identity verification, the iris is unique, stable, easy to acquire, and non-invasive. Iris recognition mainly comprises iris image acquisition, iris localization, feature extraction, and matching and recognition, of which iris localization is the key step. Iris localization means locating the inner edge of the iris (the pupil) and its outer edge in an eye image; locating the inner edge and extracting its parameters is an important prerequisite and basis for locating the outer edge. At present, the main algorithms for iris inner-boundary localization are as follows. John Daugman proposed the integro-differential circular edge-detection operator method [1], which uses an integro-differential operator to search for a circular boundary and locate the inner edge of the iris. Wildes R P proposed a two-step localization algorithm [2] that first coarsely locates the pupil by gray-level projection and then refines the localization with a gray-gradient detection operator. Wang Yunhong et al. of the CAS Institute of Automation segment the pupil with a threshold method and then apply the Canny operator.
(Fig. 1: the Laplacian template.) The Laplacian template in effect computes the sum of the differences between a pixel and the pixels in its eight-neighborhood, which effectively extracts edge, texture, and other detail information of the image.

4. Computing the Center and Radius of the Iris Inner Edge
Having obtained a point P(x, y) inside the pupil and the pupil edge map A_edge, the center and radius of the pupil edge are computed from them. With the resulting center and radius, the iris inner-edge region can be marked fairly accurately on the original image.
Design of Sidelobe-Suppression Filters Based on the Chicken Swarm Optimization Algorithm
Qiu Pan; Wang Zhan. [Abstract] Addressing the range-sidelobe suppression problem of phase-coded radar, this paper proposes a design method for sidelobe-suppression filters based on the chicken swarm optimization (CSO) algorithm. CSO is a new swarm-intelligence optimization algorithm that can solve various complex optimization problems, converges well, and finds the global optimum more easily. Designing the minimum peak-sidelobe-level (PSL) filter and the minimum integrated-sidelobe-level (ISL) filter is exactly such a complex optimization problem, and this paper solves both with CSO. Simulation analysis shows that sidelobe-suppression filters designed with CSO effectively suppress range sidelobes and are applicable to various coded signals. [Journal] Wireless Internet Technology. [Year (Vol.), Issue] 2016(000)018. [Pages] 3 (P71-73). [Keywords] binary phase coding; sidelobe suppression; intelligent optimization algorithm. [Authors] Qiu Pan; Wang Zhan. [Affiliation] College of Electronic Science and Engineering, National University of Defense Technology, Changsha 410073, China. [Language] Chinese.

The range sidelobes of binary phase-coded radar have long drawn attention. Good sidelobe characteristics of binary codes are of great significance for reducing the false-alarm probability and improving radar detection performance, so sidelobe suppression is a problem that must be solved when binary phase codes are used as pulse-compression signals. Two broad classes of methods are generally used to improve the sidelobe characteristics of binary codes: code selection and sidelobe suppression. Code selection designs code elements with good sidelobe performance or chooses codes with excellent autocorrelation properties, such as the common Barker codes and m-sequences; but such methods are heavily constrained, and the resulting codes are hard to apply widely in engineering practice. In practical applications, when the code form is fixed but the mainlobe-to-sidelobe ratio of the echo still fails to meet engineering requirements, sidelobe suppression becomes the main approach. Sidelobe suppression designs a filter that lowers the sidelobes to raise the mainlobe-to-sidelobe ratio; a common approach is the mismatched filter. Within this class, the main methods are least squares [1-2], linear programming [3], and neural networks [4]. Least squares requires many iterations, and a poorly chosen stopping criterion or initial value severely degrades the result. Linear programming is generally unsuitable for complex-valued signals. Neural networks require a sufficient number of training samples and converge slowly, so they cannot be widely applied in practice. Reference [5] proposed a new swarm-intelligence optimization algorithm, which is used here to solve for the coefficients of sidelobe-suppression filters under different criteria.
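As a concrete baseline for the PSL and ISL criteria mentioned above, the sketch below (an illustration, not the paper's CSO implementation) computes the aperiodic autocorrelation of a binary code, which is the matched-filter output, and reports the metrics an optimizer would score candidate filters against. The length-13 Barker code is used because its sidelobe behavior is well known: peak 13, all sidelobes of magnitude at most 1.

```python
def autocorrelation(code):
    """Aperiodic autocorrelation r(k) of a real code, for lags 0..N-1."""
    n = len(code)
    return [sum(code[i] * code[i + k] for i in range(n - k)) for k in range(n)]

def sidelobe_metrics(code):
    """Return (mainlobe peak, peak sidelobe magnitude, integrated sidelobe energy)."""
    r = autocorrelation(code)
    peak = r[0]                            # zero-lag mainlobe
    side = [abs(v) for v in r[1:]]         # one-sided sidelobes
    psl = max(side)                        # peak sidelobe level (linear)
    isl = sum(v * v for v in side)         # integrated sidelobe energy
    return peak, psl, isl

BARKER_13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]
```

A mismatched filter replaces the code itself in the correlation; an optimizer such as CSO would search the filter coefficients to minimize psl or isl subject to a mainlobe-loss constraint.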
I.J. Image, Graphics and Signal Processing, 2017, 11, 29-38Published Online November 2017 in MECS (/)DOI: 10.5815/ijigsp.2017.11.04Estimation of NIIRS Incorporating an Automated Relative Edge Response MethodPranav VDepartment of Computer Science and Engineering, Amrita School of Engineering, Coimbatore, India.Email: v.pranav93@E.Venkateswarlu, Thara Nair, G.P.Swamy, B.Gopala KrishnaData Processing, Products Archival and Web Applications Area,National Remote Sensing Center (NRSC), ISRO, Hyderabad, India.Email:venkateswarlu_e@.in, thara_nair@.in, swamy_gp@.in,bgk@.in Received: 22 June 2017; Accepted: 15 August 2017; Published: 08 November 2017Abstract—The quality of remote sensing satellite images are expressed in terms of ground sample distance, modular transfer function, signal to noise ratio and National Imagery Interpretability Rating Scale (NIIRS) by user community. The proposed system estimates NIIRS of an image, by incorporating a new automated method to calculate the Relative Edge Response (RER). The prominent edges which contribute the most for the estimation of RER are uniquely extracted with a combined application of certain filters and morphological operators. RER is calculated from both horizontal and vertical edges separately and the geometric mean is considered as the final result. Later applying the estimated RER along with other parameters, the system returns the NIIRS value of the input image. This work has proved the possible implementation of automated techniques to estimate the NIIRS from images and specifics in the metafile contents of imagery.Index Terms—Relative Edge Response, NIIRS, SNR, GSD, image qualityI.I NTRODUCTIONToday high resolution satellite images are being widely used in many applications and good quality images are of high demand from the users. 
Since interpretability is one of the most important quality metrics used in the case of satellite images, it is very important to have a system to assess the interpretability of a given image. The inference derived, helps us to understand the quality of the image as well as the performance of the imaging system. It helps us to make appropriate enhancements/corrections. Also, since images of different qualities will be having different applications, we will be able to categorize them based on the result obtained about the image. This was the motivation for the proposed work.There are various factors viz. signal-to-noise ratio, ground sample distance, modular transfer function etc. which expresses the quality of satellite images. But these parameters can provide only a partial indication of image interpretability. Therefore, National imagery Interpretability Rating Scale (NIIRS) was proposed as an image quality metric in terms of interpretability. The proposed system uses NIIRS as the measure of quality.II.R ELATED W ORKA lot of research is going on about the quality analysis of satellite images in the field of remote sensing. Many methods were proposed and new ideas are evolving for the quality analysis of the satellite images.The metric that we discuss here does not exactly depend on the visual quality of the image. But both objective as well as subjective might be considered for Image Quality Assessment. In [1] Subjective versus Objective Picture Quality Assessment is discussed in detail. The calculation of NIIRS is explained in detail in [2]. NIIRS values are assessed by human operators for different high resolution images. This value is validated with that obtained using the image analysis method. Edge detection is a vital phase in the proposed work. Study of various edge detection methods is done in [3] and [4]. [3] Discussed the edge detection techniques which could serve for the meaningful segmentation of an x-ray image. 
The main objective of the edge detection algorithms explained in [4] is to produce a line and to extract important features and reduce the amount of data in the image. Comparison of various edge detectors is carried out in [5] and [6]. [5] Compares the operators such as Canny, Sobel, Laplacian of Gaussian, Robert’s and Prewitt by the manner of checking Peak signal to Noise Ratio (PSNR) and Mean Squared Error (MSE) of resultant image. In [6] apart from the comparison of the traditional operators and methods, a new nature inspired algorithm, which can detect more number of edges is proposed.[7], [8], [9] and [10] discussed edge profiling in different scenarios. In [7] text recognition is performed based on the edge profile. Also it used a set of heuristicrules to eliminate the non-text areas. At the same time [8] introduced a new method to recognize objects at any rotation using clusters that represent edge profiles. High-quality images can be recaptured from a liquid crystal display (LCD) monitor screen with relative ease using a digital camera. An Image Recapture Detection Algorithm based on Learning Dictionaries of Edge Profiles is introduced in [9] as a solution to this recent issue. Two sets of dictionaries are trained using the K-singular value decomposition approach from the line spread profiles of selected edges from single captured and recaptured images. Finally an SVM classifier is used for categorization of query images. Similar dictionary of edge profiles is used in [10] for the identification of image acquisition chains. The processing chain of a query image is identified by feature matching using the maximum inner product criteria.In [11] various image quality parameters such as Modulation Transfer Function (MTF), Ground Resolved Distance (GRD) and NIIRS are measured from the satellite images. 
[12] Demonstrates the effect of image compression on the ground sampling distance and relative edge response, which are the major factors affecting NIIRS rating.Morphological operations play a vital role in the edge extraction phase of the proposed work. In [13] an ancient prominent fast algorithm for performing erosion and dilation is introduced which is independent of the structuring function size and hence suitable for signals and images that have large and slowly varying segments. An application of the morphological operations, detection of breast cancer is discussed in [14]. At the pre-processing stage, adaptive median filter is applied and followed by adaptive thresholding method at the segmentation process. The final stage uses morphological operations.III.P ROPOSED W ORKFive quality metrics which express the interpretability partially are combined together to get the NIIRS value. These parameters include Ground Sample Distance (GSD), Noise Gain (G), Height Overshoot (H), Signal-to-Noise Ratio (SNR) and Relative Edge Response (RER). The final NIIRS level decides the characteristics of the image. For example, an image with NIIRS level 3 distinguishes between natural forest and orchards. Similarly from an image with NIIRS level 4, it is possible to identify farm buildings as barns, silos or residences and that with the level 5 identifies individual rail wagons by type (e.g. flat, box, tank). NIIRS is calculated using the equation in (1) which is known as General Image Quality Equation (GIQE). The GIQE focuses on three main attributes: scale, indicated using GSD; sharpness, measured using RER; and the signal-to-noise ratio.The proposed system uses Relative Edge Response (RER) as a vital parameter for the estimation of NIIRS value. Hence all the prominent edges are extracted and considered for the finalization of RER of the image. Median filters of contrary neighborhood size are applied for the extraction of pure horizontal and vertical edges separately. 
Gradient and morphological operators applied in appropriate combination yield the detection of horizontally and vertically slanted edges. The proposed work makes use of connected component analysis to limit the number of extracted edges. After the edge-extraction phase, edge profiling is done by plotting the intensity values in the direction in which they vary, and RER is calculated as the slope of the plotted edge profile. The NIIRS value is then estimated by incorporating the other parameters:

NIIRS = 10.251 - a * log10(GSD) + b * log10(RER) - (0.656 * H) - (0.344 * G / SNR)    (1)

In the above equation, a = 3.16 and b = 2.817 if RER < 0.9, and a = 3.32 and b = 1.559 if RER >= 0.9.

In remote sensing, Ground Sample Distance is the distance between pixel centers measured on the ground. In other words, GSD is the real distance (in meters) represented by a unit pixel of the satellite image. Signal-to-Noise Ratio gives a physical measure of the sensitivity of the imaging system; the SNR value is calculated as the ratio of the mean of the image to its standard deviation. The Height Overshoot (H) and Noise Gain (G) parameters are relevant when image restoration procedures are applied to the image; the increase in the overall noise of the image as a result of restoration is indicated by G. Relative Edge Response (RER) is the critical parameter in deriving the NIIRS value: RER evaluates how well an edge separates two different regions.

Being one of the prominent deciding parameters of the NIIRS value, accurate assessment of RER is critical. The objective is to develop an automated system that calculates the RER value of a given image, which then contributes to the computation of the NIIRS value. The prominent edges are extracted using a unique combination of filters and morphological operators; connected component analysis also plays a key role in this part.

In this section we discuss the proposed framework in detail. Fig.1 illustrates the architecture of the proposed system.
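Equation (1) can be evaluated directly once the five parameters are known. The sketch below is illustrative (the paper's implementation was in MATLAB); it uses the piecewise coefficients given above and includes the simple whole-image SNR definition stated in the text, mean divided by standard deviation. Function and variable names are ours.

```python
import math

def snr_mean_over_std(pixels):
    """Whole-image SNR as defined in the text: mean / standard deviation."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    return mean / math.sqrt(var)

def giqe_niirs(gsd, rer, h, g, snr):
    """General Image Quality Equation (1).

    gsd : ground sample distance (in the unit convention the metadata supplies)
    rer : relative edge response in (0, 1]
    h   : height overshoot, g : noise gain, snr : signal-to-noise ratio
    """
    a, b = (3.16, 2.817) if rer < 0.9 else (3.32, 1.559)
    return (10.251 - a * math.log10(gsd) + b * math.log10(rer)
            - 0.656 * h - 0.344 * g / snr)
```

When no restoration is applied, H and G reduce the last two terms to constants, and NIIRS is driven by the GSD and RER logarithms, which matches the direct RER / inverse GSD relationship reported in the results section.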
The initial step is the extraction of prominent edges from the input satellite image. It is followed by the profiling and computation of RER for horizontal and vertical edges separately. RERx and RERy represent the relative edge response of horizontal and vertical edges respectively; the net RER value is computed from RERx and RERy. The proposed method involves three phases, viz. edge extraction, profiling of the extracted edges, and computation of RER. These three phases are explained in detail in the following sections.

Fig. 1. Architecture diagram of the proposed work

A. Edge Extraction

Edge response (ER) is a measure of how well an imaging system is able to reproduce sharp edges in a scene. Relative Edge Response (RER) is defined as the geometric mean of normalized edge response differences measured in two directions of image pixels at points distanced from the edge by -0.5 and 0.5 ground sample distance. In an image there may exist edges which distinguish two regions well and edges which separate them only vaguely. That is, edges with different RER values can be present in the same image, and all of them are to be considered to judge the overall RER of the image. Hence, the first step is to extract the edges. The architecture diagram of edge extraction is provided in Fig.2. Since RER involves the evaluation of edges with varying intensity values, edges are categorized as pure horizontal, pure vertical, horizontally slanted and vertically slanted, based on the possibilities of these variations. Fig.3 shows the variation of intensity values across three different types of edges. From the figure, it can be seen that the variation occurs in the vertical direction in the case of a horizontal edge and in the horizontal direction for a vertical edge. For a pure horizontal edge (Fig.3 (a)), the horizontal variation is zero. Similarly, a pure vertical edge (Fig.3 (b)) has no vertical variation of intensity values. But the intensity values vary in both directions across a slanted edge (Fig.3 (c)).

Fig. 2. Architecture diagram of edge extraction

So the ultimate objective of this phase is to extract horizontal, vertical and slanted edges separately. We use the Sobel edge detector in our method, which gives an output with all the sharp edges extracted, as shown in Fig.4 (b). It is evident that the edge detector gives all kinds of edges, many of which do not contribute to the calculation of RER. So the next step is to detect and extract the prominent edges that contribute to RER evaluation. In order to extract all the edges which are purely horizontal, a median filter with neighborhood p×q, where q >> p, is applied to the image obtained in the previous step (Fig.4 (b)). Similarly, a median filter with neighborhood p×q where p >> q is applied to Fig.4 (b) to extract all the pure vertical edges. The extracted horizontal and vertical edges are shown in Fig.5 (a) and (b) respectively. Later, connected component analysis is incorporated if it is necessary to limit the number of edges. Considering the size of a connected component to be an edge prominence metric, ranking the connected components in the order of their size and retrieving the first 'n' of them gives n prominent pure horizontal / vertical edges.

Fig. 3. Variation of intensity values across (a) pure horizontal edge (b) pure vertical edge (c) slanted edge

Fig. 4. (a) Input satellite image (b) Result of applying Sobel edge detector on the input satellite image

Fig. 5 (a) Extracted pure horizontal edges from Fig.4 (a) (b) Extracted pure vertical edges from Fig.4 (a)

After extracting the pure edges, the next objective is to extract the slanted edges. An image OR operation on Fig.5 (a) and (b) gives an image containing the union of the pure horizontal and vertical edges, as illustrated in Fig.6 (a).
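The directional median filtering step above can be sketched as follows. This is an illustrative pure-Python version (the paper used MATLAB): on a binary edge map, a 1×q window (the p×q neighborhood with q >> p, here p = 1) keeps long horizontal runs of edge pixels, because the window along a horizontal edge is mostly ones, while isolated or vertical edge pixels are voted out by the mostly-zero window. Swapping the window shape to q×1 keeps the pure vertical edges instead.

```python
def median_filter(img, p, q):
    """p x q neighborhood median filter with zero padding at the borders."""
    rows, cols = len(img), len(img[0])
    out = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            vals = []
            for di in range(-(p // 2), p // 2 + 1):
                for dj in range(-(q // 2), q // 2 + 1):
                    ii, jj = i + di, j + dj
                    inside = 0 <= ii < rows and 0 <= jj < cols
                    vals.append(img[ii][jj] if inside else 0)
            vals.sort()
            out[i][j] = vals[len(vals) // 2]
    return out
```

For example, median_filter(edge_map, 1, 5) applied to a Sobel edge map retains a full row of edge pixels (a pure horizontal edge) while removing a crossing vertical line.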
On subtracting this union image from the image obtained by applying the Sobel operator, we can extract all the edges which are not pure horizontal/vertical (Fig.6 (b)). The slanted edges that contribute to the decision of the RER value are contained in the image just obtained (Fig.6 (b)). With the objective of extracting them as horizontally and vertically slanted, a gradient operation is performed. It is good if a median filter is applied before the gradient operation, so that the unwanted insignificant edges which appear as salt-and-pepper noise are removed.

Fig. 6 (a) Union of pure horizontal and pure vertical edges (b) Result obtained on subtracting Fig.6 (a) from Fig.4 (b)

Equations (2) and (3) explain the gradient operation done on each pixel I(x, y) in the X and Y directions respectively. The gradient operation performed in the X direction gives an image with discontinuous, broken horizontal edges while retaining the vertical edges. Similarly, the gradient operation in the Y direction retains the horizontal edges.

gradientX(x, y) = I(x+1, y) - I(x, y)    (2)

gradientY(x, y) = I(x, y+1) - I(x, y)    (3)

Fig.7 (a) and (b) illustrate the outcome of applying equations (2) and (3) respectively to the image shown in Fig.6 (b). It is evident from Fig.7 (a) and (b) that the X gradient creates breaks in horizontal edges and the Y gradient creates breaks in vertical edges.

The next step is to incorporate morphological operations. On the images obtained from the previous step, first erosion and then dilation are performed using structuring elements of appropriate size. In effect we execute the morphological operation 'opening', so that each component in the image becomes distinct. As a final step of this phase, the numbers of vertically slanted and horizontally slanted edges are limited by extracting the first 'n' largest connected components, as shown in Fig. 7 (c) and (d).

B.
Edge Profiling

After the first phase, we have all the prominent edges extracted and categorized separately as pure and slanted, horizontal and vertical. The second phase of the proposed method is to profile these edges. Edge profiling is very important for the evaluation of RER. We can define an edge profile as a plot of the intensity values of pixels on both sides of an edge. Fig. 8 illustrates the steps involved in edge profiling.

Fig. 7 (a) Result obtained on applying the X gradient on Fig. 6(b) (b) Result obtained on applying the Y gradient on Fig. 6(b) (c) Extracted vertically slanted edges (d) Extracted horizontally slanted edges

Fig. 8. Architecture diagram of edge profiling

It is necessary to map each edge obtained from the first phase back to the intensity image in order to do the profiling. A reference point is needed to do the mapping. In the proposed method, the central pixel of the edge, which can be calculated using equation (4), is taken as the reference point. Fig.9 (b) illustrates how the edge shown in Fig.9 (a) gets extracted. In the calculation of the reference point, the X coordinate is taken as the row number and the Y coordinate represents the column number of the image.

Fig. 9. (a) An edge in a grayscale image (b) Result of applying edge extraction on (a)

(x, y) = (xmin + (xmax - xmin)/2, ymin + (ymax - ymin)/2)    (4)

After mapping the reference point onto the grayscale image, a fixed number of pixels, say 'k', are chosen on both sides of the edge and their intensity values are plotted. Edge profiling can be defined as the plotting of the intensity values of pixels that lie on a line which passes through the reference point, dividing the edge. In order to increase the accuracy of the plot, we choose two more lines of pixels from both sides of the reference point to plot.
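The mapping and plotting steps can be sketched as follows. This is an illustrative version with our own function names: equation (4)'s reference point from the edge's bounding box, then k pixels on each side of a vertical edge, sampled along the reference row and the two rows on either side of it, and averaged into a single profile.

```python
def reference_point(edge_pixels):
    """Equation (4): center of the edge's bounding box (x = row, y = column)."""
    xs = [x for x, _ in edge_pixels]
    ys = [y for _, y in edge_pixels]
    return (min(xs) + (max(xs) - min(xs)) // 2,
            min(ys) + (max(ys) - min(ys)) // 2)

def edge_profile(img, ref, k):
    """Average five row-plots of intensity across a vertical edge.

    Samples k pixels on each side of the reference column, on the
    reference row and on two rows above and below it, then averages
    the five plots into one profile of length 2k + 1.
    """
    x, y = ref
    profiles = []
    for dx in (-2, -1, 0, 1, 2):
        row = img[x + dx]
        profiles.append([row[y + dy] for dy in range(-k, k + 1)])
    n = len(profiles)
    return [sum(p[i] for p in profiles) / n for i in range(2 * k + 1)]
```

For a horizontal edge the roles of rows and columns are simply exchanged, sampling along columns instead.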
The five plots are standardized to give a single final edge profile. The selection of pixels to be plotted in the case of a horizontal edge is illustrated in Fig.10. It can be seen that (x, y) is the reference point and the line connecting the pixels at (x-5, y) and (x+5, y) is considered the reference line (indicated by red color). In the proposed method, standardization is done by taking the average of the five plots, as shown in Fig. 11 (a) and (b).

Fig. 10. Pixels to be considered for the calculation of RER

Fig. 11 (a) Plots of the set of pixels shown in Fig.10 against intensity value

Fig. 11. (b) Normalized plot of (a)

C. Computation of RER

As mentioned in the previous section, once the edge profile of an edge is estimated, it is possible to calculate the RER of that edge from it. Fig.12 explains how this is done. We can see that the edge response has been normalized by dividing by the largest intensity value of the image, so that it ranges from 0 to 1. The centre of the X axis, say 'c', is taken as the centroid pixel of the edge, and RER, which can also be called the slope of the edge response, is calculated as the difference between the edge responses at positions c+h and c-h. In Fig.12 the value of c is 5 and h is taken as 1.

The next objective is to calculate RER for a whole given image. For that, the RER of all 'n' pure horizontal edges is calculated by considering the variation in the vertical direction alone. Subsequently, the average of these 'n' RER values is calculated, say RERx. Similarly, RERy is calculated as the average of the RER values of all the pure vertical edges, considering the variation in the horizontal direction alone.

There are two ways of dealing with slanted edges. The first is to club both horizontally and vertically slanted edges together as slanted edges and calculate the RER of each as the mean of the RER values obtained while considering the variation in the horizontal and vertical directions separately.
After that, the average of the RER values of these slanted edges is calculated as RERxy. The RER of the whole image is then estimated by finding the arithmetic mean, as shown in equation (5).

RERimage = (RERx + RERy + RERxy) / 3    (5)

The alternative method is to club pure and slanted horizontal edges together as horizontal edges, and RERx is calculated as explained in the previous step by considering only the vertical variation. Similarly, RERy is calculated after clubbing pure and slanted vertical edges. Finally, the RER of the given image is calculated by finding the geometric mean, as shown in equation (6).

RERimage = sqrt(RERx * RERy)    (6)

Fig. 12. Calculation of RER

IV. GUI DESCRIPTION

The algorithm was implemented in MATLAB 2014b on Windows 7 with a 2 GHz CPU and 8 GB memory. In the same satellite image, different edges gave different RER values; the algorithm concludes the final RER of the image by considering the contribution of all the edges.

We provide two options to calculate the RER value of an image. The first is fully automated, in which the user's job is just to load the image: the system calculates the RER using the method explained so far and reads the other parameters from the metafile of the image. The other option is that users can select an edge from the image manually, so that the RER of that particular edge is given as the input parameter to calculate the final NIIRS value.

Fig.13 shows the initial page with which the system greets the user. The two options described above are provided as separate buttons. If the user clicks on the automated button, he/she is redirected to the page shown in Fig.14, in which the calculation of RER and NIIRS is fully automated. The system is redirected to the page shown in Fig.15 if the user opts to calculate RER manually.

Fig. 13. Starting page of the system

Fig. 14. (a) and (b) Page to which the system gets redirected when the button 'AUTOMATED RER' is clicked

Fig. 15.
(a) and (b) Page to which the system gets redirected when the button 'CALCULATE RER MANUALLY' is clicked

In the fully automated calculation of RER, the user can choose the image for which the RER and NIIRS values are to be calculated by clicking on the 'LOAD IMAGE' button, as shown in Fig.16 (a). A click on the 'Calculate RER' button extracts the edges from the image and calculates RER using the proposed method discussed in the previous section. The edges used for the calculation are also displayed on the screen along with the RER value. Once the RER is calculated, the user can go for the estimation of the NIIRS value. When the 'NIIRS' button is clicked, the other parameters such as GSD are read from the metafile of the loaded image and the final NIIRS value is displayed on the screen. Fig. 16 (b) and (c) show the state of the system after the 'NIIRS' button is clicked.

In the automated RER calculation, a further provision is added: the user can select only those edges whose RER values lie in a particular range for the final estimation of the RER of the image. Fig.17 (a) and (b) illustrate an example in which only edges with RER values greater than 0.3 are selected. The buttons 'RER using selected edges' and 'NIIRS (using selected edges)' substitute for 'Calculate RER' and 'NIIRS' respectively in this case.

Fig. 16 (a) Loading the image (b) and (c) NIIRS of the image being calculated using the automated system

Fig. 17. (a) and (b) Estimation of RER and NIIRS using selected edges

Fig.18 explains the estimation of RER and NIIRS with manual intervention by the user. The cropping of the edge from the image on clicking the button 'I want to select the region' is shown in Fig.18(a). The selected horizontal / vertical edge alone is considered for the calculation of the RER of the image. Fig.18 (b) shows the estimation of NIIRS using the calculated RER and other parameters.
It can be seen that the plot of the intensity values of the selected edge is also displayed on the screen.

Fig. 18. (a) Edge being cropped by the user; (b) calculation of NIIRS of the image manually

V. RESULTS

The proposed method was tested on various satellite images through the implemented system, and the results were tabulated and analyzed.

A. Dataset details

Table 1 gives the details of the images used for NIIRS estimation. In certain cases, images from the same satellite have different GSDs owing to differences in viewing angle.

Table 1. Dataset details

B. Experimental results and analysis

The RER and NIIRS values obtained for several satellite images are given in Table 2, along with the other parameters used in the estimation. The Predicted NIIRS (PNIIRS) tabulated in the final column shows the NIIRS values obtained using GIQE image analysis through the proposed method. Since the final NIIRS value depends purely on the edges, and the edge selection may change according to the method used, the actual NIIRS value always lies within a range; the value obtained here is one value in that range, hence the name Predicted NIIRS.

Table 2. Estimation of RER and NIIRS

It is evident from Table 2 that the final NIIRS level depends strongly on the Relative Edge Response. It can also be inferred from Table 2 that the Predicted NIIRS has a direct relation with RER but varies inversely with GSD. This explains the reduction in quality of images taken from a greater height above sea level.

The GSD is recorded in the metadata for the X and Y directions separately; their geometric mean is taken as the final GSD value for the calculations. The SNR estimated for the entire image is used. Results obtained from the automated and manual methods show similar patterns.

VI. CONCLUSION AND FUTURE WORK

The NIIRS values estimated through the GIQE followed the same pattern and conform with the published contemporary sensor imagery scales [15].
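As an illustration of how the tabulated quantities combine into a predicted NIIRS, the sketch below assumes the GIQE 4.0 form and its published coefficients; the paper does not state which GIQE version or coefficient set was used, and the edge-overshoot term H and noise-gain term G are given placeholder defaults here:

```python
import math

def gsd_final(gsd_x, gsd_y):
    """Geometric mean of the per-axis GSDs read from the image metafile."""
    return math.sqrt(gsd_x * gsd_y)

def predicted_niirs(gsd_inches, rer, snr, h=1.0, g=1.0):
    """GIQE 4.0 sketch (assumed form): NIIRS from GSD (in inches), RER,
    edge overshoot H, noise gain G and SNR.  The a/b coefficients
    switch at RER = 0.9 in the published GIQE 4.0."""
    if rer >= 0.9:
        a, b = 3.32, 1.559
    else:
        a, b = 3.16, 2.817
    return (10.251 - a * math.log10(gsd_inches)
            + b * math.log10(rer)
            - 0.656 * h - 0.344 * (g / snr))
```

Consistent with the trends observed in Table 2, the predicted value rises with RER and falls as GSD grows.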
This confirms that the values obtained through the GIQE represent the actual interpretability of the image. Associating a NIIRS rating with an image helps to establish a level of credibility for the IRS data products. An automated algorithm for NIIRS estimation by providing edge points has been implemented in the data products generation system.

The incorporation of General Purpose Graphical Processing Unit (GPGPU) processing in MATLAB into the implemented system is planned to improve the throughput. The RER estimation of several edges can then be performed in parallel, reducing the net execution time of the system in the operational environment. This is the proposed improvement plan for the existing system.

ACKNOWLEDGEMENT

The authors would like to place on record their deep gratitude for the support provided by Dr. Y. V. N. Krishnamurthy, Director, NRSC, for the successful completion of this work. The authors deeply acknowledge the guidance and encouragement provided by Sri. T. Sivannarayana, Manager, C&DA, NRSC, ISRO, and the staff of the IODP group for this project.

The authors would also like to thank the Department of Computer Science and Engineering, Amrita School of Engineering, Coimbatore, for the support throughout the work.

REFERENCES

[1] Suresha D. and H. N. Prakash, "Data content weighing for subjective versus objective picture quality assessment of natural pictures," I.J. Image, Graphics and Signal Processing, vol. 2, pp. 27-36, 2017.
[2] Taejung Kim, Hyunsuk Kim and HeeSeob Kim, "Image-based estimation and validation of NIIRS for high resolution satellite images," The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XXXVII, 2008.
[3] Baishali Goswami and Santanu Kr. Misra, "Analysis of various edge detection methods for x-ray images," International Conference on Electrical, Electronics, and Optimization Techniques (ICEEOT), 2016.
[4] Mouad M. H. Ali, Pravin Yannawar and A. T. Gaikwad, "Study of edge detection methods based on palmprint lines," International Conference on Electrical, Electronics, and Optimization Techniques (ICEEOT), 2016.
[5] D. Poobathy and R. Manicka Chezian, "Edge detection operators: peak signal to noise ratio based comparison," I.J. Image, Graphics and Signal Processing, vol. 10, pp. 55-61, 2014.
[6] Akansha Jain, Mukesh Gupta and S. N. Tazi, "Comparison of edge detectors," International Conference on Medical Imaging, m-Health and Emerging Communication Systems (MedCom), 2014.
[7] Andrej Ikica and Peter Peer, "An improved edge profile based method for text detection in images of natural scenes," International Conference on Computer as a Tool (EUROCON), 2011.
[8] Ryan Anderson, Nick Kingsbury and Julien Fauqueur, "Rotation-invariant object recognition using edge profile clusters," 14th European Signal Processing Conference, 2006.
[9] Thirapiroon Thongkamwitoon, Hani Muammar and Pier-Luigi Dragotti, "An image recapture detection algorithm based on learning dictionaries of edge profiles," IEEE Transactions on Information Forensics and Security, vol. 10, no. 5, 2015.
[10] Thirapiroon Thongkamwitoon, Hani Muammar and Pier-Luigi Dragotti, "Identification of image acquisition chains using a dictionary of edge profiles," 20th European Signal Processing Conference (EUSIPCO), 2012.
[11] Hyeon Kim, Dongwook Kim, Seungyong Kim and Taejung Kim, "Analysis of the effects of image quality on digital map generation from satellite images," International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XXXIX-B4, 2012.
[12] Hua-mei Chen, Erik Blasch, Khanh Pham, Zhonghai Wang and Genshe Chen, "An investigation of image compression on NIIRS rating degradation through automated image analysis," Sensors and Systems for Space Applications IX, vol. 9838, 983811, doi:10.1117/12.2224631, 2016.
[13] C. Jeremy Pye and J. A. Bangham, "A fast algorithm for morphological erosion and dilation," 8th European Signal Processing Conference, 1996.
[14] Muzni Sahar, Hanung Adi Nugroho, Tianur, Igi Ardiyanto and Lina Choridah, "Automated detection of breast cancer lesions using adaptive thresholding and morphological operation," International Conference on Information Technology Systems and Innovation (ICITSI), 2016.