Feature detection and tracking in optical flow on non-flat manifolds
A Survey of Object Detection
Traditional object detection methods generally proceed in three stages: first, candidate regions are selected on the given image; then features are extracted from these regions; finally, a trained classifier assigns each region to a class.
The three stages are introduced in turn below.
(1) Region selection. This step localizes the position of the target.
Because a target may appear anywhere in the image, and its size and aspect ratio are unknown in advance, early methods traversed the entire image with a sliding window, at multiple scales and multiple aspect ratios.
Although this exhaustive strategy covers every position where a target could appear, its drawbacks are obvious: the time complexity is high and far too many redundant windows are produced, which severely degrades the speed and performance of the subsequent feature extraction and classification.
(In practice, because of the time-complexity constraint, sliding windows are usually restricted to a few fixed aspect ratios, so for multi-class detection where aspect ratios vary widely, even an exhaustive sliding-window sweep cannot yield good regions.)
(2) Feature extraction. Designing a robust feature is not easy, given the diversity of target shapes, illumination changes, and backgrounds.
Yet the quality of the extracted features directly affects classification accuracy.
(Commonly used features at this stage include SIFT and HOG.)
(3) Classifiers: mainly SVM, AdaBoost, and the like.
Summary: traditional object detection suffers from two main problems. First, the sliding-window region selection strategy is untargeted, with high time complexity and redundant windows; second, hand-crafted features are not robust to the diversity of appearance variations.
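For concreteness, here is a minimal sketch of the exhaustive sliding-window enumeration just described (the scales, aspect ratios, and stride below are illustrative choices, not values from the text):

```python
import itertools

def sliding_windows(img_w, img_h, scales=(64, 128, 256),
                    ratios=(0.5, 1.0, 2.0), stride=16):
    """Yield (x, y, w, h) candidate boxes covering the whole image."""
    for s, r in itertools.product(scales, ratios):
        w, h = int(s * r ** 0.5), int(s / r ** 0.5)  # area ~ s^2, aspect ratio r
        for y in range(0, img_h - h + 1, stride):
            for x in range(0, img_w - w + 1, stride):
                yield x, y, w, h

# Even a modest 640x480 image produces tens of thousands of windows,
# which is exactly the redundancy problem described above.
n = sum(1 for _ in sliding_windows(640, 480))
print(n)
```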
How can these two problems of traditional detection be solved? For the sliding-window problem, region proposals offer a good solution.
A region proposal is a location, found in advance, where a target is likely to appear in the image.
Because region proposals exploit texture, edge, and color information in the image, they can maintain a high recall while selecting far fewer windows (a few thousand or even a few hundred).
This greatly lowers the time complexity of the subsequent steps, and the candidate windows obtained are of higher quality than sliding windows (which are limited to fixed aspect ratios).
Commonly used region proposal algorithms include Selective Search and Edge Boxes; for a detailed treatment of region proposals, see the PAMI 2015 paper "What makes for effective detection proposals?". Once candidate regions are available, the remaining work is essentially image classification over those regions (feature extraction + classification).
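Selective Search is available in the opencv-contrib-python package; a short sketch of generating proposals with it (the image path and the number of boxes kept are illustrative):

```python
import cv2

img = cv2.imread("scene.jpg")  # example path
ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation()
ss.setBaseImage(img)
ss.switchToSelectiveSearchFast()   # trade some quality for speed
rects = ss.process()               # array of (x, y, w, h) proposals
print(f"{len(rects)} proposals")   # typically a few thousand boxes
for (x, y, w, h) in rects[:100]:   # keep the top-ranked proposals
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 1)
cv2.imwrite("proposals.jpg", img)
```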
Lipopolysaccharide-Stimulated Macrophages Secrete miR-155-5p-Containing Exosomes That Promote Activation and Migration of Hepatic Stellate Cells
Liver fibrosis is a key pathological phenotype in the hepatitis-cirrhosis-liver cancer sequence, and studies have shown that inhibiting liver fibrosis effectively slows the progression from hepatitis to liver cancer [1].
However, there is currently no clinically effective treatment for liver fibrosis [2].
Deeper exploration of the mechanisms of fibrosis is therefore of great significance for finding therapeutic targets that can delay or even reverse fibrosis and for saving medical resources.
Hepatic macrophages play a crucial role in liver fibrosis; their main functions are closely related to inflammation, hepatocyte injury, hepatic stellate cell activation, and fibrogenesis [3].
Hepatic stellate cells play a key role in liver physiology and fibrogenesis and are regulated by hepatic macrophages [4].
Lipopolysaccharide stimulates macrophages to secrete exosomes containing miR-155-5p to promote activation and migration of hepatic stellate cells. LIN Jiayi 1, LOU Anni 1, LI Xu 1,2 (1. Department of Emergency Medicine, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China; 2. Key Laboratory of First Aid and Trauma Research, Ministry of Education, Hainan Medical College, Haikou 571199, China)
Abstract: Objective: To explore the effect of exosomes derived from lipopolysaccharide (LPS)-stimulated macrophages on the activation and migration of hepatic stellate cells, and the underlying molecular mechanism.
Methods: Human THP-1 cells were treated with 100 ng/mL phorbol myristate acetate (PMA) for 24 h to induce their differentiation into macrophages; after LPS stimulation, the macrophage culture supernatant was collected, and exosomes were isolated by ultracentrifugation and characterized.
Quantitative real-time PCR (qRT-PCR) was used to measure miR-155-5p expression in the exosomes.
A Transwell co-culture system was used to observe the effects of macrophage-derived exosomes on the proliferation, oxidative stress, and migration of LX2 hepatic stellate cells, and on their expression of fibrosis markers such as type I collagen.
Object Detection: Key References
Object detection is an important task in computer vision that aims to recognize and localize the objects in an image.
In recent years, with the development of deep learning methods, object detection has made remarkable progress.
Below are some classic object detection references and brief notes on each.
1. R-CNN: Rich feature hierarchies for accurate object detection. One of the classic detection methods, R-CNN introduced the idea of classifying pre-computed candidate object regions (region proposals) to improve detection.
The method uses a convolutional neural network to extract image features, and support vector machines to classify the extracted features and localize objects.
2. Faster R-CNN: Towards real-time object detection with region proposal networks. Faster R-CNN is an improved version of R-CNN that introduces a network module called the Region Proposal Network (RPN) to generate candidate object regions.
Compared with R-CNN, Faster R-CNN greatly increases detection speed while maintaining accuracy.
3. You Only Look Once: Unified, Real-Time Object Detection. YOLO is a real-time detection method: it casts detection as a regression problem and performs classification and localization simultaneously in a single neural network, achieving fast and accurate detection.
The method has a clear speed advantage but still leaves room for improvement on small and densely packed objects.
4. SSD: Single Shot MultiBox Detector. SSD is another real-time detection method; it makes predictions from feature maps at multiple scales, so targets of different sizes can be detected simultaneously.
SSD strikes a good balance between accuracy and speed and has become one of the important methods in object detection.
5. Mask R-CNN. Mask R-CNN extends Faster R-CNN: besides detecting and localizing objects, it also generates a segmentation mask for each detected object.
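A quick way to experiment with one of these detector families is torchvision's pretrained Faster R-CNN; a sketch (the image path and score threshold are illustrative, and the weights argument name varies with the torchvision version):

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Older torchvision releases use pretrained=True; newer ones use weights=.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

img = to_tensor(Image.open("street.jpg").convert("RGB"))  # example path
with torch.no_grad():
    out = model([img])[0]          # dict with 'boxes', 'labels', 'scores'

keep = out["scores"] > 0.5         # confidence threshold, an illustrative choice
print(out["boxes"][keep], out["labels"][keep])
```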
JDSU MTS/T-BERD 8000 Platform Optical Spectrum Analyzer Modules Datasheet
MTS/T-BERD 8000 Platform Optical Spectrum Analyzer Modules
Full-band, high-performance optical spectrum analyzers for testing optical systems and components

Targeted at providing advanced test solutions, the OSA-150, OSA-180, and OSA-500 are the next generation of JDSU's DWDM analyzer modules. A new monochromator design provides ultra-high optical resolution and outstanding wavelength accuracy in a small and rugged OSA module, offering the best field solution for testing DWDM and CWDM networks during installation, maintenance, and troubleshooting. The JDSU OSA modules are differentiated by optical measurement resolution and are graded into three classes:
- The OSA-150 is JDSU's basic OSA, with an optical resolution of 100 pm, for measuring networks with moderate channel spacing, from 100 GHz DWDM up to CWDM.
- The high-resolution OSA-18x DWDM analyzers are suited for testing DWDM networks with tight channel spacing down to 50 GHz. In addition, the OSA-181 provides a unique channel drop function to isolate single DWDM channels from the spectrum.
- The ultra-high-resolution OSA-500 and OSA-500R have an industry-leading resolution bandwidth of 35 pm for measurements in ultra-DWDM networks with channel spacing down to 25 GHz. The OSA-500R is additionally equipped with a new technique to measure the true OSNR in ROADM systems and 40G systems.

Key features
- In-band capability for true OSNR measurements in ROADM and 40G networks
- Ultra-high optical resolution
- Industry-leading wavelength accuracy, guaranteed over the instrument's lifetime
- Future-proof signal analysis for data rates of 40/100G and next-generation modulation formats
- Channel drop function for single-channel isolation and tunable filter applications
- PMD test option based on the fixed analyzer method

Applications
- Commissioning and maintenance of current and next-generation DWDM systems
- Provisioning and maintenance of ROADM networks
- Installation and maintenance of CWDM networks
- Testing of 40G and 100G networks
- Spectral testing of optical components

Advanced optical performance
JDSU's OSA family combines outstanding wavelength accuracy, high dynamic range, and ultra-high resolution. All instruments are equipped with an internal wavelength reference for online calibration without requiring disruption of in-progress measurements. The internal wavelength calibrator is based on a physical-constant reference that guarantees unsurpassed wavelength accuracy over the instrument's lifetime without the need for external recalibration (JDSU patents), saving recalibration cost.

One-step system qualification
One-button auto-testing guarantees that technicians need no special training to carry out a DWDM test, making JDSU's instruments suitable for both novice and expert technicians. An Auto-Test mode automatically identifies WDM channels, selects the appropriate wavelength range, and provides auto-scaling and system qualification according to pre-defined parameters.

Flexible measurement capability
In-depth analysis, featuring statistical evaluation and automatic storage capabilities, is provided. This allows DWDM system performance verification, including the variation of optical system parameters (wavelength, power, and OSNR), as well as series of measurements over a defined period of time. Resulting reports provide average, minimum, maximum, and standard deviation values of the measured parameters over time.

Powerful pass/fail link manager
Graphical and tabular display formats can be selected to assist in the installation, verification, and troubleshooting of multi-channel DWDM systems. Built-in test functions deliver automatic pass/fail evaluations based on pre-defined alarms, saving time and providing technicians with a quick and intuitive overview of the complete set of results. (Figure: 40-channel DWDM system; graphical and tabular display showing pass/fail indicators and out-of-range values.)

Measurement of signals at high data rates and new modulation formats
Data rates of 10 Gbps or higher have a larger optical bandwidth than the resolution bandwidth of an OSA, and with new modulation formats like duobinary (DB), differential phase shift keying (DPSK), or quadrature phase shift keying (QPSK), the spectral shape of a signal changes from one peak to multiple peaks. Regular OSAs will no longer correctly measure the central wavelength and the total signal power of such transmission signals. JDSU OSAs are prepared for these scenarios: a new signal analysis accurately measures the total channel power and center wavelength of modulated signals. All results are presented in the WDM table.

New in-band OSNR measurement technique
In ROADM networks, the noise floor between optical channels is suppressed by the optical filters inside the ROADMs. In systems transmitting ultra-high data rates like 40G/100G at tight channel spacing of 50 GHz, the modulation bandwidth is larger than the channel bandwidth, leading to overlapping spectra. Both effects make conventional OSNR measurement based on the IEC interpolation method unreliable. The OSA-500R is JDSU's second generation of optical spectrum analyzers performing in-band OSNR measurement. A new optical polarization splitting (OPS) method (patent pending) suppresses the transmission signal and gains access to the noise value inside the optical channel, measuring the true in-band OSNR: a viable solution for any test scenario, whatever the ROADM filter types, data rate, or modulation format.

Built-in test applications
Test applications for optical amplifiers (EDFA) and laser sources (DFB) facilitate network component verification.

Drift measurements
For optical performance monitoring it is essential to measure the key parameters over time. The built-in drift test application provides power, wavelength, and OSNR over a user-definable time frame in graphical and numerical format. Drift measurements are important in CWDM networks with uncooled lasers, which have a typical wavelength drift of 0.1 nm/°C.

PMD test option
With the PMD option, the OSA can measure the differential group delay (DGD) for PMD characterization of optical fibers and systems. The measurement is based on the fixed analyzer method (TIA/EIA FOTP-113) together with a broadband source and a variable polarizer.

Channel isolation (drop) and dual-port options
A unique channel isolation option is provided to extract a single DWDM channel from the entire spectrum for further analysis with a SONET/SDH or Ethernet analyzer at data rates up to 12.5 Gbps. The built-in tracking function provides wavelength locking to the peak of the selected channel, avoiding channel-frequency drift problems during long-term measurements.
The dual-port option (JDSU patents) provides simultaneous measurement of two optical signals, for example measuring the input and output of an optical amplifier at the same time.

Advanced analysis solution
JDSU's OFS-100 Optical FiberTrace software is a PC-based application for the Microsoft Windows environment, offering post-analysis capabilities and the generation of detailed, professional OSA reports.

Specifications

OSA-150 (full-band WDM analyzer)
- Operating modes: WDM, Drift
- Wavelength range: 1250 to 1650 nm; measurement samples: 120,000; no. of optical channels: 256
- Wavelength calibration (1): internal, online; wavelength accuracy (2): ±100 pm; readout resolution: 1 pm
- Resolution bandwidth (FWHM) (2): 100 pm
- Dynamic range (3): -60 to +15 dBm; absolute accuracy (2, 4): ±0.6 dB; total safe power: +23 dBm; readout resolution: 0.01 dB
- Scanning time: <5 s (full band), <1 s (C-band)
- Optical rejection ratio (ORR) (2): not specified at ±25 GHz (±0.2 nm); 40 dBc at ±50 GHz (±0.4 nm); >43 dBc at ±100 GHz (±0.8 nm)

OSA-180 / OSA-181 (full-band DWDM analyzer)
- Operating modes: WDM, Drift, DFB, LED, FPL, EDFA
- Wavelength range: 1250 to 1650 nm; measurement samples: 120,000; no. of optical channels: 256
- Wavelength calibration (1): internal, online; wavelength accuracy (2): typ. ±20 pm; readout resolution: 1 pm
- Resolution bandwidth (FWHM) (2): typ. 70 pm
- Dynamic range (3): -65 to +23 dBm; absolute accuracy (2, 4): typ. ±0.5 dB; linearity (5): ±0.1 dB; total safe power: +23 dBm; readout resolution: 0.01 dB
- Scanning time: <5 s (full band), <1 s (C-band)
- ORR (2): typ. 35 dBc at ±25 GHz (±0.2 nm); typ. 45 dBc at ±50 GHz (±0.4 nm)
- Channel drop option (OSA-181 only): wavelength range 1300 to 1650 nm; data rates up to 12.5 Gbps; spectral filter bandwidth >20 GHz; insertion loss (6): typ. <12 dB; tracking mode: auto wavelength control

OSA-500 / OSA-500R (high-performance DWDM analyzer)
- Operating modes: WDM, Drift, DFB, LED, FPL, EDFA; in-band OSNR (OSA-500R only)
- Wavelength range: 1250 to 1650 nm; measurement samples: 120,000; no. of optical channels: 256
- Wavelength calibration (1): internal, online; wavelength accuracy (2): typ. ±10 pm; readout resolution: 1 pm
- Resolution bandwidth (FWHM) (2): typ. 35 pm
- Dynamic range (3): -70 to +20 dBm; absolute accuracy (2, 4): typ. ±0.5 dB; linearity (5): ±0.1 dB; total safe power (12): +23 dBm; readout resolution: 0.01 dB
- Scanning time: <5 s (full band), <1 s (C-band)
- ORR (2, 11): typ. 45 dBc at ±25 GHz (±0.2 nm); typ. 50 dBc at ±50 GHz (±0.4 nm)
- In-band OSNR (10) (OSA-500R only): I-OSNR dynamic range up to >30 dB; PMD tolerance (7): up to 25 ps; measurement accuracy (8): typ. ±0.5 dB; data signals up to 100 Gbps; measurement time (9): <2 min

Notes:
(1) Built-in, physical-constant wavelength calibrator; needs no recalibration. (2) Typical for 1520 to 1565 nm at 18 to 28 °C. (3) Max. power per channel +15 dBm. (4) At -10 dBm, including PDL. (5) -45 dBm to +10 dBm, at 23 °C. (6) 1520 to 1620 nm at 23 °C. (7) For data rates up to 10 Gbps. (8) For OSNR ≤25 dB and PMD <25 ps; for data rates ≥40 Gbps with ≥100 GHz channel spacing, typically ±1 dB. (9) Fast mode, independent of number of channels. (10) Only valid for OSA-500R. (11) For OSA-500R, ORR is reduced by 3 dB. (12) +20 dBm for OSA-500R.

OSA Selection Guide (a comprehensive portfolio to better match your application requirements)
Application: CWDM (20 nm spacing) / DWDM (100 GHz) / DWDM (50 GHz) / UDWDM (25 GHz) / channel drop / ROADM in-band OSNR
- OSA-150 (basic OSA): X / X / - / - / - / -
- OSA-180 (high-resolution OSA): X / X / X / - / - / -
- OSA-181 (high-resolution OSA): X / X / X / - / X / -
- OSA-500 (ultra-high-resolution OSA): X / X / X / X / - / -
- OSA-500R (ultra-high-resolution OSA): X / X / X / X / - / X

General specifications
- Display modes: graph, WDM table, graph and table
- Optical ports (physical-contact interfaces): input port SM; output port (drop port, OSA-181) SM
- Optical return loss: >35 dB; interface: universal connectors/PC; optical adapters: FC, SC, ST, LC, DIN
- Temperature: operating +5 to +50 °C (41 to 122 °F); storage -20 to +60 °C (-4 to 140 °F)
- Weight (module only), OSA-150/18x/500: 2.2 kg / 4.6 lbs
- Size (module only), OSA-150/18x/500: 50 x 250 x 305 mm / 2 x 10 x 12 in

Ordering information for full-band DWDM analyzers
- Basic OSAs: 2281/91.15 OSA-150
- High-resolution OSAs: 2281/91.18 OSA-180; 2281/91.22 OSA-181, with channel drop 12.5G
- Ultra-high-resolution OSAs: 2281/91.51 OSA-500, high-performance DWDM OSA; 2281/91.55 OSA-500R, high-performance DWDM & ROADM OSA
- PMD test option (for OSA-18x/500/500R): 2281/91.11 PMD test kit, includes PMD evaluation SW, plus 2279/31 OBS-55 Optical Broadband Source, plus 2271/01 OVP-15 Optical Variable Polarizer
- Application software: EOFS100 Optical FiberTrace software for post-analysis; EOFS200 Optical FiberTrace software for cable acceptance report generation
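The IEC interpolation method that the OSA-500R's in-band technique replaces can be sketched numerically: the noise at the channel wavelength is interpolated from the noise floor read half a channel spacing away on each side. A simplified sketch, assuming the spectrum arrays are already in the 0.1 nm noise reference bandwidth:

```python
import numpy as np

def interpolated_osnr(wl_nm, p_dbm, ch_wl, spacing_nm):
    """Out-of-band (interpolation-style) OSNR estimate.

    wl_nm, p_dbm : measured spectrum (wavelength grid, power per RBW)
    ch_wl        : channel center wavelength
    spacing_nm   : channel spacing; noise is read half a spacing away
    Simplified: assumes grid resolution equals the noise reference
    bandwidth, so no bandwidth rescaling is performed.
    """
    p_mw = 10 ** (p_dbm / 10)
    def level(w):  # power at the grid point nearest wavelength w
        return p_mw[np.argmin(np.abs(wl_nm - w))]
    p_sig = level(ch_wl)
    p_noise = 0.5 * (level(ch_wl - spacing_nm / 2) +
                     level(ch_wl + spacing_nm / 2))
    return 10 * np.log10((p_sig - p_noise) / p_noise)
```

In ROADM networks the between-channel noise floor this method samples is filtered away, which is why the datasheet's in-band polarization-splitting approach is needed there.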
An Anti-Occlusion Correlation Filter Tracking Algorithm Based on Edge Detection
TANG Yi (North China University of Technology, Beijing 100144, China)
Abstract: Target tracking by unmanned aerial vehicles is attracting more and more attention because of its convenience.
Building on a correlation filter algorithm, edge detection is used to optimize sample quality, and a smoothing constraint term is added to the edge-detection scoring step; this increases the accuracy with which candidate boxes contain the target, reduces computational complexity, and improves tracking robustness.
Adaptive multi-feature fusion is used to strengthen the feature representation and improve target tracking precision.
An occlusion judgment mechanism and an adaptive update of the learning rate are introduced to reduce the influence of occlusion on the filter template and raise the tracking success rate.
Qualitative and quantitative evaluation in experiments on the OTB-2015 and UAV123 datasets demonstrates that the studied algorithm has certain advantages over other tracking algorithms.
Key words: unmanned aerial vehicle; target tracking; correlation filtering; multi-feature fusion; edge detection
CLC number: TN713; TP391.41; TG441.7. Document code: A. Article ID: 1672-3791(2024)05-0057-04
In recent years, UAVs have become a hot topic, and UAVs serving different purposes frequently appear in public view.
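The correlation-filter core that such trackers build on can be sketched in a few lines of NumPy: a MOSSE-style filter is trained in the Fourier domain and correlated with a search patch, and the response peak locates the target. This is a generic baseline, not the paper's full algorithm (no edge-detection scoring, feature fusion, or occlusion handling):

```python
import numpy as np

def train_filter(x, y, lam=1e-2):
    """MOSSE-style filter from one training patch x and a Gaussian
    target response y; lam is an illustrative regularizer."""
    X, Y = np.fft.fft2(x), np.fft.fft2(y)
    return np.conj(X) * Y / (X * np.conj(X) + lam)

def response_map(template_f, patch):
    """Correlate the learned filter with a search patch in the Fourier
    domain; the response peak gives the target location."""
    return np.real(np.fft.ifft2(template_f * np.fft.fft2(patch)))

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 64))                      # stand-in patch
yy, xx = np.mgrid[0:64, 0:64]
y = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * 2.0 ** 2))
H = train_filter(x, y)
r = response_map(H, x)
print(np.unravel_index(r.argmax(), r.shape))           # ~ (32, 32)
```

An occlusion judgment like the paper's is commonly built on top of such a response map, for example by monitoring how sharply the peak stands out from the sidelobes.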
OPTIFLEX CF Quick Operation Manual
OPTIFLEX CF quick operation manual (company standardization code [QQX96QT-XQQB89Q8-NQQJ6Q8-MQM9N]). Two-wire guided-wave radar (TDR) level instrument for measuring the distance, level, and volume of liquids and slurries; also suitable for powders and granular solids. This document was compiled from the documents below and is subject to change without notice.
Source manual: OPTIFLEX 2200 C/F. Contents: this quick guide provides a convenient path for field wiring, installation, and commissioning.
For details, read the operation manual on the accompanying CD! 1. Unpacking and scope of delivery: ① converter and antenna; ② coaxial sleeve (not included in the standard delivery!); ③ quick installation instructions (English); ④ CD containing all documents; ⑤ cover-opening tool; ⑥ protective cap, used to keep water out of the top of the sensor during converter maintenance.
2. Installation essentials: within a 300 mm radius around the antenna there must be no mechanical structures that affect the radar wave, and the antenna must not be washed by the incoming material flow; the nozzle height should be smaller than its diameter (h < d).
3. Opening the cover and wiring. Opening the cover: remove the fastening assembly between the housing and the front (or rear) cover.
Turn the front (or rear) cover counterclockwise with the cover-opening tool, from mark ① to mark ②.
Lift off the front (or rear) cover ③. 3.2 Wiring: two-wire 4-20 mA loop-powered signal wiring; signal ground ③, equipment ground ④. Remote version, wiring between sensor and converter: open the covers of the converter and the sensor and connect them one-to-one with a four-core shielded cable.
(Explosion-proof instruments must use the original KROHNE factory cable.) 4. Local display and keypad operation. Normal display: ① percentage display; ② displayed value (level, distance, etc.); ③ alarm information; ④ tag number; ⑤ data update; ⑥ measured value and unit; ⑦ device status; ⑧ keypad. Setting display: ① menu function description; ② setting contents; ③ menu number. Key definitions: keys 1 to 4; key 1 switches between level, distance, volume, etc. Note: press and hold key 4 for 3 seconds to switch the display language to English.
Press and hold the same key for another 3 seconds to switch the display language from English back to the configured language.
5. Menu operation. 6. Settings required at installation (quick setup). Cable length (menu): increase it; you may first set it somewhat large, read out the empty-tank distance, and then set it again based on that distance. Tank height (menu): empty-tank distance + 110 mm. Dead zone (menu): 400-500 mm. Output range (menu): select 4-20 mA. Fault delay (menu): set to 5 min. Tracking speed (menu): keep it as small as practical, based on the actual filling and emptying speed. Probe-end pulse amplitude (menu): check the actual end-of-probe signal amplitude, then adjust the probe-end threshold (menu), setting the threshold as low as practical.
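The 4-20 mA loop output configured above maps the measuring range linearly onto the loop current. A small sketch of the conversion a receiving system would apply (the range endpoints and fault limits below are example values, not instrument settings):

```python
def current_to_level(i_ma, level_at_4ma=0.0, level_at_20ma=10.0):
    """Linear 4-20 mA scaling, e.g. onto a 0-10 m level range."""
    if not 3.8 <= i_ma <= 20.5:   # outside NAMUR-style limits: treat as fault
        raise ValueError(f"loop current {i_ma} mA indicates a fault")
    frac = (i_ma - 4.0) / 16.0
    return level_at_4ma + frac * (level_at_20ma - level_at_4ma)

print(current_to_level(12.0))     # 12 mA -> 50% of range -> 5.0 m
```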
Academic English for Object Detection
Object detection is a fundamental task in computer vision, which aims to locate and classify objects within an image or video. It has a wide range of applications, including autonomous vehicles, surveillance systems, and augmented reality. In recent years, deep learning-based object detection methods have achieved remarkable performance, outperforming traditional methods in terms of accuracy and efficiency.

There are several popular object detection frameworks, such as YOLO (You Only Look Once), SSD (Single Shot MultiBox Detector), and Faster R-CNN (Region-based Convolutional Neural Network). These frameworks differ in their approach to object detection, with some prioritizing speed and others prioritizing accuracy. YOLO, for example, is known for its real-time performance, while Faster R-CNN is renowned for its accuracy.

One of the key challenges in object detection is handling occlusions, variations in scale, and cluttered backgrounds. This requires the use of sophisticated algorithms and network architectures to effectively detect objects under these conditions. Additionally, object detection models need to be robust to changes in lighting, weather, and other environmental factors.

In recent years, there has been a surge of interest in improving object detection performance through the use of attention mechanisms, which allow the model to focus on relevant parts of the image. This has led to the development of attention-based models such as DETR (DEtection TRansformer) and SETR (SEgmentation TRansformer).

Furthermore, the integration of object detection with other computer vision tasks, such as instance segmentation and pose estimation, has become an active area of research. This integration allows for a more comprehensive understanding of the visual scene and enables more sophisticated applications.

In conclusion, object detection is a critical task in computer vision with a wide range of applications. Deep learning-based methods have significantly advanced the state of the art in object detection, and ongoing research continues to push the boundaries of performance and applicability.
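Two primitives shared by all the frameworks named above are the IoU overlap measure and non-maximum suppression; a NumPy sketch (the suppression threshold is an illustrative choice):

```python
import numpy as np

def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) form."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def nms(boxes, scores, thr=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop boxes overlapping it by more than thr, repeat."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        order = order[1:][[iou(boxes[i], boxes[j]) < thr for j in order[1:]]]
    return keep
```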
A Pedestrian Multi-Object Tracking Method Based on YOLOv5 and Re-Identification
Chinese Journal of Liquid Crystals and Displays, Vol. 37, No. 7, July 2022. doi: 10.37188/CJLCD.2022-0025. Article ID: 1007-2780(2022)07-0880-11.
Pedestrian multi-target tracking method based on YOLOv5 and person re-identification. HE Yu-ting 1,2, CHE Jin 1,2*, WU Jin-man 1,2 (1. School of Physics and Electronic-Electrical Engineering, Ningxia University, Yinchuan 750021, China; 2. Ningxia Key Laboratory of Intelligent Sensing for Desert Information, Yinchuan 750021, China). Supported by the National Natural Science Foundation of China (No. 61861037). *Corresponding author, E-mail: koalache@
Abstract: To address the shortcomings of the prevailing detection-based multi-object tracking paradigm, this work builds on the DeepSort algorithm, aiming to solve the frequent switching of target IDs caused by occlusion during tracking.
First, the appearance model is improved: the original wide residual network is replaced with a ResNeXt network, a convolutional attention mechanism is introduced into the backbone, and a new person re-identification network is constructed, so that the model attends more to key target information and extracts more effective features. Then YOLOv5 is adopted as the detection algorithm; a detection layer is added so that the model adapts to targets of different sizes, and a coordinate attention mechanism is added to the backbone to further raise the accuracy of the detection model.
In multi-object tracking experiments on the MOT16 dataset, the method reaches a multi-object tracking accuracy of 66.2% and a multi-object tracking precision of 80.8%, and it meets the requirements of real-time tracking.
Key words: multi-target tracking; person re-identification; YOLOv5; attention mechanism; deep learning
1 Introduction
The main task of multi-object tracking (MTT) is to localize multiple specific targets simultaneously in a given video while keeping each target's ID stable, and finally to record their trajectories [1].
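The DeepSort-style association step that the paper builds on can be sketched as a blended appearance-plus-IoU cost matrix solved with the Hungarian algorithm. The weights and gating threshold below are illustrative, not the paper's values:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def box_iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    ar = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (ar(a) + ar(b) - inter + 1e-9)

def associate(track_feats, det_feats, track_boxes, det_boxes,
              w_app=0.7, max_cost=0.6):
    """Match detections to tracks with a blended appearance (cosine)
    and IoU cost; feature rows are assumed L2-normalized."""
    app = 1.0 - track_feats @ det_feats.T          # cosine distance
    geo = np.array([[1.0 - box_iou(t, d) for d in det_boxes]
                    for t in track_boxes])
    cost = w_app * app + (1.0 - w_app) * geo
    rows, cols = linear_sum_assignment(cost)       # Hungarian algorithm
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < max_cost]
```

Unmatched tracks and detections are then handled by the tracker's track-management logic (age counters, new-track initialization), which this sketch omits.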
A Dynamic Graph Convolution Method with Feature Update for Surface-Damage Point Cloud Segmentation
Journal of Jilin University (Information Science Edition), Vol. 41, No. 4, July 2023. Article ID: 1671-5896(2023)04-0621-10. Received 2022-09-21. Supported by the National Natural Science Foundation of China (61573185).
ZHANG Wenrui, WANG Congqing (School of Automation, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China). ZHANG Wenrui (b. 1998, Yangzhou, Jiangsu) is a master's student working on point cloud segmentation; WANG Congqing (b. 1960, Nanjing) is a professor and doctoral supervisor working on pattern recognition and intelligent systems.
Abstract: The point cloud data of metal part surface damage place high demands on a segmentation network's ability to analyze local features, and traditional algorithms with weak local feature analysis cannot achieve the ideal segmentation effect on some datasets. Relative damage volume and other features are selected to classify the metal surface damage, and the damage is divided into six categories. This paper proposes a method for extracting 3D graph attention features that contain spatial-scale region information. The obtained spatial-scale region features are used in the design of a feature update network module, and based on this module a feature-updated dynamic graph convolution network (Feature Adaptive Shifting-Dynamic Graph Convolutional Neural Networks) is constructed for point cloud semantic segmentation. Experimental results show that the method helps segment point clouds more effectively and extract local point cloud features. On metal surface damage segmentation, its accuracy is better than PointNet++, DGCNN (Dynamic Graph Convolutional Neural Networks), and other methods, improving the accuracy and effectiveness of the segmentation results. (English title as published: Cloud Segmentation Method of Surface Damage Point Based on Feature Adaptive Shifting-DGCNN.)
Key words: point cloud segmentation; dynamic graph convolution; feature adaptive shifting; damage classification
Network)是依靠图之间的节点进行信息传递,获得图之间的信息关联的深度神经网络㊂图可以视为顶点和边的集合,使每个点都成为顶点,消耗的运算量是无法估量的,需要采用K临近点计算方式[9]产生的边缘卷积层(EdgeConv)㊂利用中心点与其邻域点作为边特征,提取边特征㊂图卷积网络作为一种点云深度学习的新框架弥补了Pointnet等网络的部分缺陷[10]㊂针对非规律的表面损伤这种特征缺失类点云分割,人们已经利用各种二维图像采集数据与卷积神经网络对风扇叶片㊁建筑和交通工具等进行损伤检测[11],损伤主要类别是裂痕㊁表面漆脱落等㊂但二维图像分割涉及的损伤种类不够充分,可能受物体表面污染㊁光线等因素影响,将凹陷㊁凸起等损伤忽视,或因光照不均匀判断为脱漆㊂笔者提出一种基于特征更新的动态图卷积网络,主要针对三维点云分割,设计了一种新型的特征更新模块㊂利用三维点云独特的空间结构特征,对传统K邻域内权重相近的邻域点采用空间尺度进行区分,并应用于对金属部件表面损伤分割的有用与无用信息混杂的问题研究㊂对邻域点进行空间尺度划分,将注意力权重分组,组内进行特征更新㊂在有效鉴别外邻域干扰特征造成的误差前提下,增大特征提取面以提高局部区域特征有用性㊂1 深度卷积网络计算方法1.1 包含空间尺度区域信息的三维图注意力特征提取方法由迭代最远点采集算法将整片点云分割为n个点集:{M1,M2,M3, ,M n},每个点集包含k个点:{P1, P2,P3, ,P k},根据点集内的空间尺度关系,将局部区域划分为不同的空间区域㊂在每个区域内,结合局部特征与空间尺度特征,进一步获得更有区分度的特征信息㊂根据注意力机制,为K邻域内的点分配不同的权重信息,特征信息包括空间区域内点的分布和区域特性㊂将这些特征信息加权计算,得到点集的卷积结果㊂使用空间尺度区域信息的三维图注意力特征提取方式,需要设定合适的K邻域参数K和空间划分层数R㊂如果K太小,则会导致弱分割,因不能完全利用局部特征而影响结果准确性;如果K太大,会增加计算时间与数据量㊂图1为缺损损伤在不同参数K下的分割结果图㊂由图1可知,在K=30或50时,分割结果效果较好,K=30时计算量较小㊂笔者选择K=30作为实验参数㊂在分析确定空间划分层数R之前,简要分析空间层数划分所应对的问题㊂三维点云所具有的稀疏性㊁无序性以及损伤点云自身噪声和边角点多的特性,导致了点云处理中可能出现的共同缺点,即将离群值点云选为邻域内采样点㊂由于损伤表面多为一个面,被分割出的损伤点云应在该面上分布,而噪声点则被分布在整个面的两侧,甚至有部分位于损伤内部㊂由于点云噪声这种立体分布的特征,导致了离群值被选入邻域内作为采样点存在㊂根据采用DGCNN(Dynamic Graph Convolutional Neural Networks)分割网络抽样实验结果,位于切面附近以及损伤内部的离群值点对点云分割结果造成的影响最大,被错误分割为特征点的几率最大,在后续预处理过程中需要对这种噪声点进行优先处理㊂图1 缺损损伤在不同参数K 下的分割结果图Fig.1 Segmentation results of defect damage under different parameters K 基于上述实验结果,在参数K =30情况下,选择空间划分层数R ㊂缺损损伤在不同参数R 下的分割结果如图2所示㊂图2b 的结果与测试集标签分割结果更为相似,更能体现损伤的特征,同时屏蔽了大部分噪声㊂因此,选择R =4作为实验参数㊂图2 缺损损伤在不同参数R 下的分割结果图Fig.2 Segmentation results of defect damage under different parameters R 在一个K 邻域内,邻域点与中心点的空间关系和特征差异最能表现邻域点的权重㊂空间特征系数表示邻域点对中心点所在点集的重要性㊂同时,为更好区分图内邻域点的权重,需要将整个邻域细分㊂以空间尺度进行细分是较为合适的分类方式㊂中心点的K 邻域可视为一个局部空间,将其划分为r 个不同的尺度区域㊂再运算空间注意力机制,为这r 个不同区域的权重系数赋值㊂按照空间尺度多层次划分,不仅没有损失核心的邻域点特征,还能有效抑制无意义的㊁有干扰性的特征㊂从而提高了深度学习网络对点云的局部空间特征的学习能力,降低相邻邻域之间的互相影响㊂空间注意力机制如图3所示,计算步骤如下㊂第1步,计算特征系数e mk ㊂该值表示每个中心点m 的第k 个邻域点对其中心点的权重㊂分别用Δp mk 和Δf mk 表示三维空间关系和局部特征差异,M 表示MLP(Multi⁃Layer Perceptrons)操作,C 表示concat 函数,其中Δp mk =p mk -p m ,Δf mk =M (f mk )-M (f m )㊂将两者合并后输入多层感知机进行计算,得到计算特征系数326第4期张闻锐,等:特征更新的动态图卷积表面损伤点云分割方法图3 空间尺度区域信息注意力特征提取方法示意图Fig.3 Schematic diagram of attention feature extraction method for spatial scale regional information e mk =M [C (Δp mk ‖Δf mk )]㊂(1) 第2步,计算图权重系数a mk ㊂该值表示每个中心点m 的第k 个邻域点对其中心点的权重包含比㊂其中k ∈{1,2,3, ,K },K 表示每个邻域所包含点数㊂需要对特征系数e mk 进行归一化,使用归一化指数函数S (Softmax)得到权重多分类的结果,即计算图权重系数a mk =S (e mk )=exp(e mk )/∑K g =1exp(e mg )㊂(2) 第3步,用空间尺度区域特征s mr 表示中心点m 的第r 个空间尺度区域的特征㊂其中k r ∈{1,2,3, ,K r },K r 表示第r 个空间尺度区域所包含的邻域点数,并在其中加入特征偏置项b r ,避免权重化计算的特征在动态图中累计单面误差指向,空间尺度区域特征s mr =∑K r k r =1[a mk r M (f mk r )]+b r ㊂(3) 在r 个空间尺度区域上进行计算,就可得到点m 在整个局部区域的全部空间尺度区域特征s m ={s m 1,s m 2,s m 3, ,s mr },其中r ∈{1,2,3, ,R }㊂1.2 基于特征更新的动态图卷积网络动态图卷积网络是一种能直接处理原始三维点云数据输入的深度学习网络㊂其特点是将PointNet 网络中的复合特征转换模块(Feature Transform),改进为由K 邻近点计算(K ⁃Near Neighbor)和多层感知机构成的边缘卷积层[12]㊂边缘卷积层功能强大,其提取的特征不仅包含全局特征,还拥有由中心点与邻域点的空间位置关系构成的局部特征㊂在动态图卷积网络中,每个邻域都视为一个点集㊂增强对其中心点的特征学习能力,就会增强网络整体的效果[13]㊂对一个邻域点集,对中心点贡献最小的有效局部特征的边缘点,可以视为异常噪声点或低权重点,可能会给整体分割带来边缘溢出㊂点云相比二维图像是一种信息稀疏并且噪声含量更大的载体㊂处理一个局域内的噪声点,将其直接剔除或简单采纳会降低特征提取效果,笔者对其进行低权重划分,并进行区域内特征更新,增强抗噪性能,也避免点云信息丢失㊂在空间尺度区域中,在区域T 内有s 个点x 被归为低权重系数组,该点集的空间信息集为P ∈R N s ×3㊂点集的局部特征集为F ∈R N s ×D f [14],其中D f 表示特征的维度空间,N s 表示s 个域内点的集合㊂设p i 以及f i 为点x i 的空间信息和特征信息㊂在点集内,对点x i 进行小范围内的N 邻域搜索,搜索其邻域点㊂则点x i 的邻域点{x i ,1,x i ,2, ,x i ,N }∈N (x i ),其特征集合为{f i ,1,f i ,2, ,f i ,N }∈F ㊂在利用空间尺度进行区域划分后,对空间尺度区域特征s mt 较低的区域进行区域内特征更新,通过聚合函数对权重最低的邻域点在图中的局部特征进行改写㊂已知中心点m ,点x i 的特征f mx i 和空间尺度区域特征s mt ,目的是求出f ′mx i ,即中心点m 的低权重邻域点x i 在进行邻域特征更新后得到的新特征㊂对区域T 内的点x i ,∀x i ,j ∈H (x i ),x i 与其邻域H 
内的邻域点的特征相似性域为R (x i ,x i ,j )=S [C (f i ,j )T C (f i ,j )/D o ],(4)其中C 表示由输入至输出维度的一维卷积,D o 表示输出维度值,T 表示转置㊂从而获得更新后的x i 的426吉林大学学报(信息科学版)第41卷特征㊂对R (x i ,x i ,j )进行聚合,并将特征f mx i 维度变换为输出维度f ′mx i =∑[R (x i ,x i ,j )S (s mt f mx i )]㊂(5) 图4为特征更新网络模块示意图,展示了上述特征更新的计算过程㊂图5为特征更新的动态图卷积网络示意图㊂图4 特征更新网络模块示意图Fig.4 Schematic diagram of feature update network module 图5 特征更新的动态图卷积网络示意图Fig.5 Flow chart of dynamic graph convolution network with feature update 动态图卷积网络(DGCNN)利用自创的边缘卷积层模块,逐层进行边卷积[15]㊂其前一层的输出都会动态地产生新的特征空间和局部区域,新一层从前一层学习特征(见图5)㊂在每层的边卷积模块中,笔者在边卷积和池化后加入了空间尺度区域注意力特征,捕捉特定空间区域T 内的邻域点,用于特征更新㊂特征更新会降低局域异常值点对局部特征的污染㊂网络相比传统图卷积神经网络能获得更多的特征信息,并且在面对拥有较多噪声值的点云数据时,具有更好的抗干扰性[16],在对性质不稳定㊁不平滑并含有需采集分割的突出中心的点云数据时,会有更好的抗干扰效果㊂相比于传统预处理方式,其稳定性更强,不会发生将突出部分误分割或漏分割的现象[17]㊂2 实验结果与分析点云分割的精度评估指标主要由两组数据构成[18],即平均交并比和总体准确率㊂平均交并比U (MIoU:Mean Intersection over Union)代表真实值和预测值合集的交并化率的平均值,其计算式为526第4期张闻锐,等:特征更新的动态图卷积表面损伤点云分割方法U =1T +1∑Ta =0p aa ∑Tb =0p ab +∑T b =0p ba -p aa ,(6)其中T 表示类别,a 表示真实值,b 表示预测值,p ab 表示将a 预测为b ㊂总体准确率A (OA:Overall Accuracy)表示所有正确预测点p c 占点云模型总体数量p all 的比,其计算式为A =P c /P all ,(7)其中U 与A 数值越大,表明点云分割网络越精准,且有U ≤A ㊂2.1 实验准备与数据预处理实验使用Kinect V2,采用Depth Basics⁃WPF 模块拍摄金属部件损伤表面获得深度图,将获得的深度图进行SDK(Software Development Kit)转化,得到pcd 格式的点云数据㊂Kinect V2采集的深度图像分辨率固定为512×424像素,为获得更清晰的数据图像,需尽可能近地采集数据㊂选择0.6~1.2m 作为采集距离范围,从0.6m 开始每次增加0.2m,获得多组采量数据㊂点云中分布着噪声,如果不对点云数据进行过滤会对后续处理产生不利影响㊂根据统计原理对点云中每个点的邻域进行分析,再建立一个特别设立的标准差㊂然后将实际点云的分布与假设的高斯分布进行对比,实际点云中误差超出了标准差的点即被认为是噪声点[19]㊂由于点云数据量庞大,为提高效率,选择采用如下改进方法㊂计算点云中每个点与其首个邻域点的空间距离L 1和与其第k 个邻域点的空间距离L k ㊂比较每个点之间L 1与L k 的差,将其中差值最大的1/K 视为可能噪声点[20]㊂计算可能噪声点到其K 个邻域点的平均值,平均值高出标准差的被视为噪声点,将离群噪声点剔除后完成对点云的滤波㊂2.2 金属表面损伤点云关键信息提取分割方法对点云损伤分割,在制作点云数据训练集时,如果只是单一地将所有损伤进行统一标记,不仅不方便进行结果分析和应用,而且也会降低特征分割的效果㊂为方便分析和控制分割效果,需要使用ArcGIS 将点云模型转化为不规则三角网TIN(Triangulated Irregular Network)㊂为精确地分类损伤,利用图6 不规则三角网模型示意图Fig.6 Schematic diagram of triangulated irregular networkTIN 的表面轮廓性质,获得训练数据损伤点云的损伤内(外)体积,损伤表面轮廓面积等㊂如图6所示㊂选择损伤体积指标分为相对损伤体积V (RDV:Relative Damege Volume)和邻域内相对损伤体积比N (NRDVR:Neighborhood Relative Damege Volume Ratio)㊂计算相对平均深度平面与点云深度网格化平面之间的部分,得出相对损伤体积㊂利用TIN 邻域网格可获取某损伤在邻域内的相对深度占比,有效解决制作测试集时,将因弧度或是形状造成的相对深度判断为损伤的问题㊂两种指标如下:V =∑P d k =1h k /P d -∑P k =1h k /()P S d ,(8)N =P n ∑P d k =1h k S d /P d ∑P n k =1h k S ()n -()1×100%,(9)其中P 表示所有点云数,P d 表示所有被标记为损伤的点云数,P n 表示所有被认定为损伤邻域内的点云数;h k 表示点k 的深度值;S d 表示损伤平面面积,S n 表示损伤邻域平面面积㊂在获取TIN 标准包络网视图后,可以更加清晰地描绘损伤情况,同时有助于量化损伤严重程度㊂笔者将损伤分为6种类型,并利用计算得出的TIN 指标进行损伤分类㊂同时,根据损伤部分体积与非损伤部分体积的关系,制定指标损伤体积(SDV:Standard Damege Volume)区分损伤类别㊂随机抽选5个测试组共50张图作为样本㊂统计非穿透损伤的RDV 绝对值,其中最大的30%标记为凹陷或凸起,其余626吉林大学学报(信息科学版)第41卷标记为表面损伤,并将样本分类的标准分界值设为SDV㊂在设立以上标准后,对凹陷㊁凸起㊁穿孔㊁表面损伤㊁破损和缺损6种金属表面损伤进行分类,金属表面损伤示意图如图7所示㊂首先,根据损伤是否产生洞穿,将损伤分为两大类㊂非贯通伤包括凹陷㊁凸起和表面损伤,贯通伤包括穿孔㊁破损和缺损㊂在非贯通伤中,凹陷和凸起分别采用相反数的SDV 作为标准,在这之间的被分类为表面损伤㊂贯通伤中,以损伤部分平面面积作为参照,较小的分类为穿孔,较大的分类为破损,而在边缘处因腐蚀㊁碰撞等原因缺角㊁内损的分类为缺损㊂分类参照如表1所示㊂图7 金属表面损伤示意图Fig.7 Schematic diagram of metal surface damage表1 损伤类别分类Tab.1 Damage classification 损伤类别凹陷凸起穿孔表面损伤破损缺损是否形成洞穿××√×√√RDV 绝对值是否达到SDV √√\×\\S d 是否达到标准\\×\√\2.3 实验结果分析为验证改进的图卷积深度神经网络在点云语义分割上的有效性,笔者采用TensorFlow 神经网络框架进行模型测试㊂为验证深度网络对损伤分割的识别准确率,采集了带有损伤特征的金属部件损伤表面点云,对点云进行预处理㊂对若干金属部件上的多个样本金属面的点云数据进行筛选,删除损伤占比低于5%或高于60%的数据后,划分并装包制作为点云数据集㊂采用CloudCompare 软件对样本金属上的损伤部分进行分类标记,共分为6种如上所述损伤㊂部件损伤的数据集制作参考点云深度学习领域广泛应用的公开数据集ModelNet40part㊂分割数据集包含了多种类型的金属部件损伤数据,这些损伤数据显示在510张总点云图像数据中㊂点云图像种类丰富,由各种包含损伤的金属表面构成,例如金属门,金属蒙皮,机械构件外表面等㊂用ArcGIS 内相关工具将总图进行随机点拆分,根据数据集ModelNet40part 的规格,每个独立的点云数据组含有1024个点,将所有总图拆分为510×128个单元点云㊂将样本分为400个训练集与110个测试集,采用交叉验证方法以保证测试的充分性[20],对多种方法进行评估测试,实验结果由单元点云按原点位置重新组合而成,并带有拆分后对单元点云进行的分割标记㊂分割结果比较如图8所示㊂726第4期张闻锐,等:特征更新的动态图卷积表面损伤点云分割方法图8 分割结果比较图Fig.8 
Comparison of segmentation results在部件损伤分割的实验中,将不同网络与笔者网络(FAS⁃DGCNN:Feature Adaptive Shifting⁃Dynamic Graph Convolutional Neural Networks)进行对比㊂除了采用不同的分割网络外,其余实验均采用与改进的图卷积深度神经网络方法相同的实验设置㊂实验结果由单一损伤交并比(IoU:Intersection over Union),平均损伤交并比(MIoU),单一损伤准确率(Accuracy)和总体损伤准确率(OA)进行评价,结果如表2~表4所示㊂将6种不同损伤类别的Accuracy 与IoU 进行对比分析,可得出结论:相比于基准实验网络Pointet++,笔者在OA 和MioU 方面分别在贯通伤和非贯通伤上有10%和20%左右的提升,在整体分割指标上,OA 能达到90.8%㊂对拥有更多点数支撑,含有较多点云特征的非贯通伤,几种点云分割网络整体性能均能达到90%左右的效果㊂而不具有局部特征识别能力的PointNet 在贯通伤上的表现较差,不具备有效的分辨能力,导致分割效果相对于其他损伤较差㊂表2 损伤部件分割准确率性能对比 Tab.2 Performance comparison of segmentation accuracy of damaged parts %实验方法准确率凹陷⁃1凸起⁃2穿孔⁃3表面损伤⁃4破损⁃5缺损⁃6Ponitnet 82.785.073.880.971.670.1Pointnet++88.786.982.783.486.382.9DGCNN 90.488.891.788.788.687.1FAS⁃DGCNN 92.588.892.191.490.188.6826吉林大学学报(信息科学版)第41卷表3 损伤部件分割交并比性能对比 Tab.3 Performance comparison of segmentation intersection ratio of damaged parts %IoU 准确率凹陷⁃1凸起⁃2穿孔⁃3表面损伤⁃4破损⁃5缺损⁃6PonitNet80.582.770.876.667.366.9PointNet++86.384.580.481.184.280.9DGCNN 88.786.589.986.486.284.7FAS⁃DGCNN89.986.590.388.187.385.7表4 损伤分割的整体性能对比分析 出,动态卷积图特征以及有效的邻域特征更新与多尺度注意力给分割网络带来了更优秀的局部邻域分割能力,更加适应表面损伤分割的任务要求㊂3 结 语笔者利用三维点云独特的空间结构特征,将传统K 邻域内权重相近的邻域点采用空间尺度进行区分,并将空间尺度划分运用于邻域内权重分配上,提出了一种能将邻域内噪声点降权筛除的特征更新模块㊂采用此模块的动态图卷积网络在分割上表现出色㊂利用特征更新的动态图卷积网络(FAS⁃DGCNN)能有效实现金属表面损伤的分割㊂与其他网络相比,笔者方法在点云语义分割方面表现出更高的可靠性,可见在包含空间尺度区域信息的注意力和局域点云特征更新下,笔者提出的基于特征更新的动态图卷积网络能发挥更优秀的作用,而且相比缺乏局部特征提取能力的分割网络,其对于点云稀疏㊁特征不明显的非贯通伤有更优的效果㊂参考文献:[1]LAWIN F J,DANELLJAN M,TOSTEBERG P,et al.Deep Projective 3D Semantic Segmentation [C]∥InternationalConference on Computer Analysis of Images and Patterns.Ystad,Sweden:Springer,2017:95⁃107.[2]MATURANA D,SCHERER S.VoxNet:A 3D Convolutional Neural Network for Real⁃Time Object Recognition [C]∥Proceedings of IEEE /RSJ International Conference on Intelligent Robots and Systems.Hamburg,Germany:IEEE,2015:922⁃928.[3]LE T,DUAN Y.PointGrid:A Deep Network for 3D Shape Understanding [C]∥2018IEEE /CVF Conference on ComputerVision and Pattern Recognition (CVPR).Salt Lake City,USA:IEEE,2018:9204⁃9214.[4]QI C R,SU H,MO K,et al.PointNet:Deep Learning on Point Sets for 3D Classification and Segmentation [C]∥IEEEConference on Computer Vision and Pattern Recognition (CVPR).Hawaii,USA:IEEE,2017:652⁃660.[5]QI C R,SU H,MO K,et al,PointNet ++:Deep Hierarchical Feature Learning on Point Sets in a Metric Space [C]∥Advances in Neural Information Processing Systems.California,USA:SpringerLink,2017:5099⁃5108.[6]HU J,SHEN L,SUN G,Squeeze⁃and⁃Excitation Networks [C ]∥IEEE Conference on Computer Vision and PatternRecognition.Vancouver,Canada:IEEE,2018:7132⁃7141.[7]LI Y,BU R,SUN M,et al.PointCNN:Convolution on X⁃Transformed Points [C]∥Advances in Neural InformationProcessing Systems.Montreal,Canada:NeurIPS,2018:820⁃830.[8]ANH VIET PHAN,MINH LE NGUYEN,YEN LAM HOANG NGUYEN,et al.DGCNN:A Convolutional Neural Networkover Large⁃Scale Labeled Graphs [J].Neural Networks,2018,108(10):533⁃543.[9]任伟建,高梦宇,高铭泽,等.基于混合算法的点云配准方法研究[J].吉林大学学报(信息科学版),2019,37(4):408⁃416.926第4期张闻锐,等:特征更新的动态图卷积表面损伤点云分割方法036吉林大学学报(信息科学版)第41卷REN W J,GAO M Y,GAO M Z,et al.Research on Point Cloud Registration Method Based on Hybrid Algorithm[J]. Journal of Jilin University(Information Science Edition),2019,37(4):408⁃416.[10]ZHANG K,HAO M,WANG J,et al.Linked Dynamic Graph CNN:Learning on Point Cloud via Linking Hierarchical Features[EB/OL].[2022⁃03⁃15].https:∥/stamp/stamp.jsp?tp=&arnumber=9665104. [11]林少丹,冯晨,陈志德,等.一种高效的车体表面损伤检测分割算法[J].数据采集与处理,2021,36(2):260⁃269. 
In the damaged-part segmentation experiments, different networks are compared with ours (FAS-DGCNN: Feature Adaptive Shifting-Dynamic Graph Convolutional Neural Networks). Apart from the segmentation network used, all experiments share the same settings as the improved graph convolution method. Results are evaluated by per-class IoU (intersection over union), mean IoU (MIoU), per-class accuracy, and overall accuracy (OA), as shown in Tables 2-4. Comparing accuracy and IoU across the six damage classes leads to the following conclusion: relative to the PointNet++ baseline, our method improves OA and MIoU by roughly 10% on penetrating damage and about 20% on non-penetrating damage, and the overall OA reaches 90.8%. On non-penetrating damage, which is supported by more points and contains more point cloud features, all of the segmentation networks reach roughly 90%; PointNet, which lacks local feature discrimination, performs relatively poorly on penetrating damage and cannot separate it effectively.
Table 2 Per-class segmentation accuracy on damaged parts (%), ordered dent-1, bump-2, hole-3, surface damage-4, breakage-5, missing-6
- PointNet: 82.7, 85.0, 73.8, 80.9, 71.6, 70.1
- PointNet++: 88.7, 86.9, 82.7, 83.4, 86.3, 82.9
- DGCNN: 90.4, 88.8, 91.7, 88.7, 88.6, 87.1
- FAS-DGCNN: 92.5, 88.8, 92.1, 91.4, 90.1, 88.6
Table 3 Per-class segmentation IoU on damaged parts (%), same class order
- PointNet: 80.5, 82.7, 70.8, 76.6, 67.3, 66.9
- PointNet++: 86.3, 84.5, 80.4, 81.1, 84.2, 80.9
- DGCNN: 88.7, 86.5, 89.9, 86.4, 86.2, 84.7
- FAS-DGCNN: 89.9, 86.5, 90.3, 88.1, 87.3, 85.7
Table 4 compares the overall performance of damage segmentation. As the comparison shows, dynamic graph convolution features together with effective neighborhood feature updating and multi-scale attention give the segmentation network a superior local-neighborhood segmentation capability, better matched to the demands of surface damage segmentation.
3 Conclusion
Exploiting the unique spatial structure of 3D point clouds, neighborhood points with similar weights in the traditional K-neighborhood are distinguished by spatial scale, and spatial-scale division is applied to the weight assignment within the neighborhood, yielding a feature update module that down-weights and filters the noise points in a neighborhood. A dynamic graph convolution network equipped with this module performs well in segmentation. The feature-updated dynamic graph convolution network (FAS-DGCNN) effectively segments metal surface damage. Compared with other networks, the method shows higher reliability in point cloud semantic segmentation; with attention containing spatial-scale region information and local point cloud feature updating, the proposed network performs better, and compared with segmentation networks lacking local feature extraction, it handles sparse, feature-poor non-penetrating damage better.
References:
[1] LAWIN F J, DANELLJAN M, TOSTEBERG P, et al. Deep Projective 3D Semantic Segmentation [C]// International Conference on Computer Analysis of Images and Patterns. Ystad, Sweden: Springer, 2017: 95-107.
[2] MATURANA D, SCHERER S. VoxNet: A 3D Convolutional Neural Network for Real-Time Object Recognition [C]// Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems. Hamburg, Germany: IEEE, 2015: 922-928.
[3] LE T, DUAN Y. PointGrid: A Deep Network for 3D Shape Understanding [C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Salt Lake City, USA: IEEE, 2018: 9204-9214.
[4] QI C R, SU H, MO K, et al. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation [C]// IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Hawaii, USA: IEEE, 2017: 652-660.
[5] QI C R, SU H, MO K, et al. PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space [C]// Advances in Neural Information Processing Systems. California, USA: SpringerLink, 2017: 5099-5108.
[6] HU J, SHEN L, SUN G. Squeeze-and-Excitation Networks [C]// IEEE Conference on Computer Vision and Pattern Recognition. Vancouver, Canada: IEEE, 2018: 7132-7141.
[7] LI Y, BU R, SUN M, et al. PointCNN: Convolution on X-Transformed Points [C]// Advances in Neural Information Processing Systems. Montreal, Canada: NeurIPS, 2018: 820-830.
[8] PHAN A V, NGUYEN M L, NGUYEN Y L H, et al. DGCNN: A Convolutional Neural Network over Large-Scale Labeled Graphs [J]. Neural Networks, 2018, 108(10): 533-543.
[9] REN W J, GAO M Y, GAO M Z, et al. Research on Point Cloud Registration Method Based on Hybrid Algorithm [J]. Journal of Jilin University (Information Science Edition), 2019, 37(4): 408-416.
[10] ZHANG K, HAO M, WANG J, et al. Linked Dynamic Graph CNN: Learning on Point Cloud via Linking Hierarchical Features [EB/OL]. [2022-03-15]. https://.../stamp/stamp.jsp?tp=&arnumber=9665104.
[11] LIN S D, FENG C, CHEN Z D, et al. An Efficient Segmentation Algorithm for Vehicle Body Surface Damage Detection [J]. Journal of Data Acquisition and Processing, 2021, 36(2): 260-269.
[12] ZHANG L P, ZHANG Y, CHEN Z Z, et al. Splitting and Merging Based Multi-Model Fitting for Point Cloud Segmentation [J]. Journal of Geodesy and Geoinformation Science, 2019, 2(2): 78-79.
[13] XING Z Z, ZHAO S F, GUO W, et al. Processing Laser Point Cloud in Fully Mechanized Mining Face Based on DGCNN [J]. ISPRS International Journal of Geo-Information, 2021, 10(7): 482.
[14] YANG J, DANG J S. Semantic Segmentation of 3D Point Cloud Based on Contextual Attention CNN [J]. Journal on Communications, 2020, 41(7): 195-203.
[15] CHEN L, WANG H Y, XIAO H H, et al. Estimation of External Phenotypic Parameters of Bunting Leaves Using FL-DGCNN Model [J]. Transactions of the Chinese Society of Agricultural Engineering, 2021, 37(13): 172-179.
[16] CHAI Y J, MA J, LIU H. Deep Graph Attention Convolution Network for Point Cloud Semantic Segmentation [J]. Laser and Optoelectronics Progress, 2021, 58(12): 35-60.
[17] ZHANG X D, FANG H. BTDGCNN: BallTree Dynamic Graph Convolution Neural Network for 3D Point Cloud Topology [J]. Journal of Chinese Computer Systems, 2021, 42(11): 32-40.
[18] ZHANG J Y, ZHAO X L, CHEN Z. A Survey of Point Cloud Semantic Segmentation Based on Deep Learning [J]. Lasers and Photonics, 2020, 57(4): 28-46.
[19] SUN Y, ZHANG S H, WANG T Q, et al. An Improved Spatial Point Cloud Simplification Algorithm [J]. Neural Computing and Applications, 2021, 34(15): 12345-12359.
[20] GAO F S, ZHANG D L, LIANG X Z. A Region Growing Algorithm for Triangular Network Surface Generation from Point Cloud Data [J]. Journal of Jilin University (Science Edition), 2008, 46(3): 413-417.
Medtronic ICD Product Introduction
Smaller in amplitude than other intracardiac electrograms:
➢ R wave: 1.0 ± 0.6 mV; P wave: 0.36 ± 0.33 mV (n = 18)
➢ Compared with the surface ECG, the P-wave to R-wave amplitude ratio is increased:
▪ P/R ratio: 0.43 ± 0.37 (ECG: 0.19 ± 0.02)
➢ Remember: intracardiac electrograms must use the 2 mV gain
Improved confirmation and synchronization mechanisms
• During high-voltage charging, the Marquis continuously monitors ventricular tachyarrhythmias
– Spontaneous termination of the ventricular arrhythmia can abort the charging process; reconfirmation during charging reduces unnecessary therapy
Proven detection capability
• The proven PR Logic™ detection criteria*
– 100% sensitivity, 95.2% positive predictive value (PPV), 85.2% specificity**
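For reference, the three quoted discrimination statistics are computed from episode counts as follows; a small sketch with made-up numbers (not Medtronic data):

```python
def detection_stats(tp, fp, fn, tn):
    """Sensitivity, positive predictive value, and specificity from
    counts of true/false positive and negative detections."""
    sensitivity = tp / (tp + fn)   # detected VT/VF among true VT/VF episodes
    ppv         = tp / (tp + fp)   # true VT/VF among all detections
    specificity = tn / (tn + fp)   # correctly withheld among non-VT/VF episodes
    return sensitivity, ppv, specificity

print(detection_stats(tp=120, fp=6, fn=0, tn=80))  # illustrative counts
```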
Medtronic ICD Product Introduction
History of Medtronic ICD Development
Tachycardia
Leadless ECG
Pain Free
75 g 36 cc 30 J Wavelet Diagnostics
Marquis
Leadless ECG
Pain Free
75 g 36 cc 35 J Wavelet Diagnostics
Simplified Leadless™ ECG
Conexus™ Wireless Telemetry
MVP (in DR)
Shock Reduction
PainFREE™
Programmable RV / SVC
ATP during Charge
Diagnostics Care Alerts 35J Output
Atrial Therapy (DR)
ATP w/ Charge
MVP
68 g 37cc 35J
New ST R PR Logic
Vision-Based Ground Target Tracking for Rotor UAVs (English)
I. INTRODUCTION The UAV is one of the best platforms to perform dull, dirty or dangerous (3D) tasks [1]. UAVs can be used in various applications where human intervention is impossible, which greatly expands the application space of visual tracking. Research on vision-based ground target tracking for UAVs has been of great concern to cybernetics and robotics experts, and has become one of the most active research directions in UAV applications. Currently, researchers from America, Britain, France and Sweden are on the cutting edge in this field [2]. Typical visual tracking platforms for UAVs include Scan Eagle, GTMax, RQ-11, RQ-16, DragonFly, etc. Because of many advantages, such as small size, light weight, flexibility, easy portability and low cost, rotor UAVs have broad application prospects in the fields of traffic monitoring, resource exploration, power-line patrol, forest fire prevention, aerial photography, atmospheric monitoring, etc [3]. A vision-based ground target tracking system for a rotor UAV is a system that captures images with the camera installed on a low-flying rotor UAV, recognizes the target in the images and estimates the target's motion state, and finally, according to the visual information, automatically regulates the pan-tilt-zoom (PTZ) camera to keep the target at the center of the camera view. In view of the current situation of international research, the study of ground target tracking systems for
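At its simplest, the PTZ regulation described above is a proportional controller mapping the target's pixel offset from the image center to pan/tilt rate commands. A hedged sketch, in which the gains and sign conventions are assumptions:

```python
def ptz_rates(cx, cy, img_w, img_h, k_pan=0.05, k_tilt=0.05):
    """Proportional visual servoing: convert the target's pixel offset
    from the image center into pan/tilt rate commands (deg/s)."""
    ex = cx - img_w / 2.0            # +: target is right of center
    ey = cy - img_h / 2.0            # +: target is below center
    return k_pan * ex, -k_tilt * ey  # pan right / tilt up to re-center

pan, tilt = ptz_rates(400, 200, 640, 480)
print(pan, tilt)
```

Practical systems add rate limits, a dead zone around the center, and often an integral or derivative term to cope with platform motion.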
Research on Object Detection Technology Based on Deep Learning (English-Chinese Bilingual Edition)
Object detection is one of the important research directions in the field of computer vision, with a wide range of applications in automatic driving, intelligent security, medical image analysis, and other fields. In recent years, with the development of deep learning technology, object detection based on deep learning has made remarkable progress. This paper reviews the research progress of deep learning-based object detection in recent years and analyzes its advantages, disadvantages, and future development directions.

1. Research background of object detection technology
Object detection is one of the important research directions in computer vision. Its main task is to detect the location, size, and category of objects in images or videos. Object detection technology is widely used in autonomous driving, intelligent security, medical image analysis, and other fields.
Among traditional object detection techniques, commonly used approaches include methods based on hand-crafted feature extraction and classical machine learning, such as Haar features and HOG features, and methods based on background modeling, such as ViBe and MoG. However, these traditional methods have many problems: the features are not learnable, robustness is poor, and they are limited by the background model.
With the development of deep learning technology, object detection based on deep learning has become a research hotspot. These techniques train deep neural networks on images end to end, without manually designed features, and therefore have better learnability and robustness.

2. Overview of object detection technology based on deep learning
At present, deep learning-based object detection techniques fall into two categories: two-stage detection and one-stage detection. A two-stage method first generates a series of candidate boxes through a region proposal mechanism and then classifies and regresses these candidate boxes to obtain the final detection result. A one-stage method directly classifies and regresses over the entire image to obtain the detection result.

1. Two-stage detection methods
(1) Faster R-CNN
Faster R-CNN is a typical two-stage object detection method. It proposes an RPN to generate candidate boxes and classifies and regresses the candidate boxes through the R-CNN network. The RPN is a sliding-window framework based on a convolutional neural network that extracts multiple regions of the image that may contain targets and classifies and regresses these regions to generate candidate boxes. These candidate boxes are then fed into the R-CNN network, which classifies and regresses them to obtain the final detection result.
Compared with traditional object detection methods, Faster R-CNN greatly improves both accuracy and speed. Two problems remain, however: first, the candidate boxes generated by the RPN require a lot of computation, which slows the method down; second, the network needs two forward passes, which adds computation.
(2) Mask R-CNN
Mask R-CNN is an extension of Faster R-CNN. It adds a segmentation branch to Faster R-CNN and can complete object detection and pixel-level segmentation simultaneously.
Building on Faster R-CNN, Mask R-CNN adds a fully convolutional network to generate target masks for pixel-level segmentation. Mask R-CNN has achieved excellent results on multiple datasets, proving its effectiveness on object detection and segmentation tasks.
(3) Cascade R-CNN
Cascade R-CNN improves on Faster R-CNN; its idea is to perform cascaded classification and regression on the candidate boxes. Cascade R-CNN improves detection accuracy by cascading multiple R-CNN networks, each applying stricter screening to the samples misclassified by the previous network. Cascade R-CNN achieves state-of-the-art performance on multiple datasets, proving its effectiveness in the field of object detection.

2. One-stage detection methods
(1) The YOLO series
YOLO (You Only Look Once) is a typical one-stage object detection method. YOLO obtains detection results by classifying and regressing over the entire image. YOLO is fast and simple and can be used in real-time scenarios. The YOLO series has now been developed to its fourth edition, with greatly improved detection speed and accuracy. Some problems remain in the YOLO series, such as poor detection of small targets.
(2) The SSD series
SSD (Single Shot MultiBox Detector) is another typical one-stage object detection method. Unlike YOLO, SSD uses multi-scale feature maps to detect targets, improving the detection of small targets. The SSD series has also gone through several versions, with large gains in detection speed and accuracy. Compared with YOLO, however, SSD is relatively slow, and it detects objects with extreme aspect ratios relatively poorly.
(3) RetinaNet
RetinaNet is a one-stage object detection method based on the focal loss. By improving the loss function to pay more attention to hard-to-distinguish positive and negative samples, RetinaNet improves the detection of small targets. RetinaNet has achieved excellent results on multiple datasets, proving its effectiveness in the field of object detection.
(4) EfficientDet
EfficientDet is a one-stage object detection method based on EfficientNet. By using different expansion coefficients and depth and width scaling factors, EfficientDet builds a series of efficient network structures, achieving a good balance between detection speed and accuracy. EfficientDet achieves state-of-the-art performance on multiple datasets, proving its effectiveness in the field of object detection.

In general, one-stage object detection methods are faster than two-stage methods, but their detection of small objects and objects with extreme aspect ratios is relatively poor. Different methods have their own advantages and disadvantages, and the appropriate method should be selected according to the specific application scenario.
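The focal loss mentioned for RetinaNet can be written in a few lines; a PyTorch sketch using the commonly quoted defaults (alpha = 0.25, gamma = 2):

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss as used by RetinaNet: down-weight easy examples
    so that hard positives and negatives dominate the gradient."""
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)          # prob of true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()

logits = torch.randn(8, 10)                  # dummy per-anchor class logits
targets = (torch.rand(8, 10) > 0.9).float()  # sparse positives, as in detection
print(focal_loss(logits, targets))
```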
Research on Relevant Features in Face Detection
Shanghai Jiao Tong University master's thesis: Research on Relevant Features in Face Detection. Author: Zhao Weida. Degree: Master. Major: Computer Software and Theory. Supervisor: Zhang Liqing. 2007-01-01.
Abstract: Research on human faces has great application prospects in identity verification, archive management, visual communication, and many other fields.
Research on faces is roughly divided into three parts: face detection, face tracking, and face recognition.
As the first step of the whole face analysis process, face detection aims to detect faces in images accurately and quickly.
How to extract features from face images that distinguish faces from non-faces well, and thereby improve detection accuracy, remains a complex problem.
This thesis uses four different methods, the Fourier transform, the wavelet transform, an adaptive independent component analysis (ICA) model, and sparse coding, to extract effective face-representing features from static grayscale images.
The main idea is first to find function bases that effectively capture the structural characteristics of faces, then to project face and non-face images onto these bases and classify the images using the resulting projection coefficients.
Because frontal faces have very similar structure, their projected feature coefficients are also similar, while natural scene images containing no faces projected onto the same bases yield different coefficients.
Comparing the feature coefficients of the two image classes distinguishes them.
Of the four function bases, the Fourier and wavelet transforms are fixed bases, while the ICA and overcomplete sparse coding methods learn bases that capture facial structure from given face image samples.
A feature selection algorithm based on mutual information maximization is also used to select, from the extracted feature vectors, the feature subset most relevant to the classification target.
The selection removes redundant features and features irrelevant to the target, further improving classification accuracy and reducing training time.
A support vector machine (SVM) is used to classify the selected features.
Finally, the four SVMs trained on the different feature types are cascaded to obtain a classifier capable of face detection.
Experimental results show that all four feature extraction algorithms yield effective face-representing function bases.
In particular, the bases learned adaptively by the ICA and sparse coding algorithms capture the structural characteristics of faces better, so the feature coefficients obtained by projecting images onto them give better classification results.
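A minimal sketch of the fixed-basis variant of this pipeline (Fourier coefficients as features, mutual-information selection, then an SVM), using randomly generated stand-in patches since the thesis's data are not available; the subset size is an arbitrary choice:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_img = rng.random((200, 24, 24))   # stand-in for face / non-face patches
y = rng.integers(0, 2, 200)         # 1 = face, 0 = non-face (dummy labels)

# Project each patch onto the Fourier basis; keep coefficient magnitudes.
feats = np.abs(np.fft.fft2(X_img)).reshape(200, -1)

# Mutual-information-based selection of the most class-relevant coefficients.
mi = mutual_info_classif(feats, y, random_state=0)
top = np.argsort(mi)[-64:]

clf = SVC(kernel="rbf").fit(feats[:, top], y)
print(clf.score(feats[:, top], y))
```

The learned-basis variants (ICA, sparse coding) replace the FFT projection with coefficients on bases trained from face samples; the selection and SVM stages stay the same.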
SMA OptiTrac Global Peak PV System Shade Management Guide
Technical Information
Shade Management: Efficient Operation of Partially Shaded PV Systems with OptiTrac Global Peak

Content
It is not always possible to prevent dormers, chimneys, or trees from casting their shadows on PV systems. However, in order not to jeopardize the economic viability of a PV system, output losses resulting from shade must be minimized already in the planning phase. Influential factors such as the arrangement of the PV modules, their circuitry, and in particular the choice of an appropriate inverter play an important role. By observing some important planning rules, these factors can be adapted to the respective PV system in such a way that its energy supply can be used almost completely.
- Effects of Partial Shade on the PV System
- Shade: A Special Task for the Inverter
- Designing Partially Shaded PV Systems

3 Designing Partially Shaded PV Systems
In order not to jeopardize the economic viability of partially shaded PV systems, the loss of output due to shade must be minimized already in the planning phase. To assist the system planner, the most important planning rules are presented below.

3.1 Selecting the Roof Area
The minimization of energy losses in partially shaded module strings is always based on enabling the inverter to electrically bypass shaded PV cells and thus to optimally use the unshaded PV modules of the same series-connected string. The power of the shaded PV cells, which is diminished anyway, cannot be used at this time. Therefore, when selecting the roof area for a PV system, make sure that no permanent shadows fall on the PV array; especially at times of high irradiation (noon, summer months), no shadows should fall on it at all, if possible. To estimate the properties of the shadows, such as their size and how they change over the course of a year, special simulation programs can be used.

3.2 Selecting the System Connection
The connection of the PV array significantly influences the obtainable energy yield. SMA Solar Technology AG has therefore prepared and published the rules of "Shadow Management" [2]. The analysis of the course of shadows is always carried out at the beginning of a system design. The proportion of the shaded PV modules in relation to the total array, and the course of shadows over time, are important characteristics of a PV system with partial shading. The following recommended actions are important when dealing with partially shaded PV systems:
- When a single PV module or a very low portion of the PV modules (e.g., <10% of the total number) is shaded, the shadow can be distributed evenly over the strings. Since the MPP voltage is always near the nominal MPP voltage in these cases, a special operation control (OptiTrac Global Peak) is not necessary.
- If shading is severe, it makes sense to operate the shaded and unshaded PV modules separately. Observe the following: group together array sections with similar irradiation; do not connect strings with different irradiation in parallel, but provide a separate MPP tracker for each string (many small inverters, or inverters with multistring technology, can be used for this); OptiTrac Global Peak is necessary to maximize the energy yield.
But even with the slight shading described above, concentrating the shaded PV modules on their own MPP tracker is an alternative to evenly distributing the shadow over all strings. Even this system design
Even this system designrequires OptiTrac Global Peak to minimize yield losses.Technical Information OptiTrac Global Peak3.3 Selecting the InvertersAs described in "Shadow Management" [2], the choice of the inverter also influences yield losses due to shade.Three points are to be observed when selecting the inverter:•Inverters with a broad input voltage range can still adjust the optimal operating point despite shade andthe resulting decline in MPP voltage.•Using an inverter with a single-string control a partially shaded PV array can operate optimally and avoidmost losses.•To keep yield losses due to shade to a minimum, it is necessary to use an inverter for partially shaded stringswhose MPP tracking recognizes the presence of several operating points (e.g. OptiTrac Global Peak).4 OptiTrac Global PeakSMA OptiTrac Global Peak is an advancement of SMA OptiTrac and allows the operating point of the inverterto follow the MPP precisely at all times. In addition, with the aid of SMA OptiTrac Global Peak, the invertercan detect the presence of several maximum power points in the available operating range, such as may occurparticularly with partially shaded PV strings. The function is deactivated by default. You will find furtherinformation on activation and settings of OptiTrac Global Peak in the installation manual of the particularinverter.5 Sources[1] J. Iken: "Leistungsgipfel mit Geheimnissen" (Performance peak with secrets); Sonne Wind & Wärme,17/2009, p. 160 (only available in german)[2] G. Bettenwort, J. Laschinski: "Schattenmanagement – Der richtige Umgang mit teilverschattetenPV-Generatoren" (Shadow management – The correct handling of partially shaded PV arrays);23. Symposium Photovoltaische Solarenergie, 2008, Bad Staffelstein, Germany (only available in German)。
Orion Solar StarSeeker Tracking Altazimuth Mount Manual
IN 605 10/17Orion ® Solar StarSeeker ™ TrackingAltazimuth Mount#10382Figure 1. The Orion Solar StarSeeker T racking Altazimuth Mount Congratulations on your purchase of a quality Orion product. Y our new Solar StarSeeker Tracking Altazimuth Mount allows high per-formance locating and tracking of the solar disk during the day-time. The mount utilizes a Vixen style dovetail cradle, which is compatible with any small telescope that uses a Vixen dovetail bar.When booted up, the mount will automatically find, locate and track the sun, without needing any manual star alignment. The mount uses a built in GPS receiver, coupled with a small CCD sensor to locate the position of the sun in the sky. The weight of the attached telescope should not exceed 7 lbs., so small solar telescopes such as the PST, SolarMax 40, 60, or Orion Short Tube 80 with white light filter would be appropriate for this mount.These instructions will help you set up and properly use your Solar StarSeeker Tracking Altazimuth Mount. Please read them over thoroughly before getting started and keep this manual handy until you have mastered your mount’s operation.Warning: Do not attempt to attach any telescope to this mount that does not already have a properly mounted solar filter attached securely over the front optics. Attach the filter to scope BEFORE mounting and BEFORE powering on the mount. Remove any opti-cal finderscope that is not filtered as well! This mount will automati-cally point to the sun, and if any telescope is attached that does not have a proper solar filter in place, damage can occur to the telescope, and injury to the user, as concentrated sunlight will be focused through the telescope. Burns and blindness can occur in a fraction of a second, so please be aware of the proper procedure before using this mount.1. Parts List Qty Description 1 Mount Head 1 Adjustable T ripod 1 Accessory T ray 1Extension pier2. AssemblyThe mount assembly procedure is quick and easy, as there are very few parts, and can be completed in a matter of minutes. The individual components are packed in several boxes inside the shipping box. Please remove all parts from all boxes, and make sure all the component pieces are accounted for. Remember to save all the shipping material so that they can be used to transport the mount in the future. In the unlikely event that you need to return the mount, you must use the original packaging.Refer to the Figure 1 during assembly.Camera Battery housingExtension pier locking knobs Extension pierAccessory trayTripod leg spreaderripodLeg height adjustmentsDovetail locking knobVixen dovetail cradleCorporate Offices: 89 Hangar Way, Watsonville CA 95076 - USA Toll Free USA & Canada: (800) 447-1001 International: +1(831) 763-7000Customer Support:*********************Copyright © 2020 Orion T elescopes & Binoculars. All Rights Reserved. No part of this product instruction or any of its contentsmay be reproduced, copied, modified or adapted, without the prior written consent of Orion T elescopes & Binoculars.A N E M P L O Y E E -O W N E D C O M P A N Y2Figure 2.Accessory tray.Figure 3.Extension pier.Figure 4. Extension pier adapter attached to bottom of mount head.1. Remove the tripod from its box, and note that each leghas a telescoping section. To extend each leg, loosen the leg lock lever by rotating it counterclockwise, then extend the leg. When it has been extended to the desired length, rotate the leg lock lever clockwise until tight. 
Before placing an instrument on the mount, it is a good idea to press down on the tripod to make sure the legs are locked securely and will not give way under the instrument’s weight. 2. Open the legs until the center support tripod spreader is at its widest (Figure 1).3. Attach the accessory tray to the leg spreader by placing the hole in the tray over the center of the leg spreader and rotating the tray into the locking position. The tabs on the end of the triangular tray will lock underneath the channel in each arm of the tripod spreader when properly locked in place (Figure 2).4. Attach the extension pier to the top of the tripod and thread the central bolt up into the extension pier using the thumb knob underneath the top of the tripod (Figure 3).5. Loosen the three locking knobs on the top wall of the extension pier and remove the top adapter from the exten-sion (Figure 4). Attach this top adapter piece into the bot-tom of the Starseeker mount head, using the large thumb knob.6. Place the entire mount with the installed pier adapter back onto the pier, and tighten down the three locking knobs to secure the head in place on the pier (Figure 5).7. Y ou are now ready to install a telescope into the Vixen style dovetail cradle on the side of the mount. WARNING: Before attaching the telescope, make sure a properly secure solar filter is installed on your optical tube. ALSO: Make sure any optical finderscope is removed unless it has a solar filter as well! Make note of where balance of the tube is (it can be difficult once the tube is installed on the mount to gauge where balance is), and insert the dovetail bar into the cradle (making sure the front of the telescope is pointing the same direction as the camera housing on the mount, Figure 6). Tighten down the dovetail locking knob to secure the optical tube.8. Install 8 AA batteries into the battery holder behind the bat-tery cover on the side of the mount head. Please note the direction of each battery in the holder (Figure 7).Y ou are now ready to start an observing session with the Starseeker Mount.A note on optical tube alignment: The accuracy of the system depends on how well aligned the optical tube is in relation to the cradle. If you have a small scope such as a PST that uses a ¼"-20 tripod socket, and a dovetail mounting bar is threaded into that socket, you must make sure the dovetail bar is aligned as close to parallel to the optical tube as you can make it. The further away from parallel, the worse the posi-tioning accuracy of the system will be.3. Powering up and Using the Starseeker Mount Do not attempt to move the mount head in the Azimuth (left and right) direction by hand. Doing so can damage the gears.3Figure 6. T elescope and Camera housingFigure 5.Mount head installed on extension pierFigure 7.Battery compartment.Figure 8. Home positionFigure 9. Control PanelThere is a slip clutch in the Altitude (up and down) direction, so it is ok to move the scope up and down by hand. For best results and the quickest way to find the sun, rotate the scope in Altitude so it is horizontal, and pick up the tripod and rotate the entire unit around until the scope is facing the horizon directly below where the sun is in the sky. It’s ok to start in any other azimuth position, but it will take longer for the mount to acquire the sun, and drain a bit more battery life as it slews around left to right.1. Make sure the solar filter is properly attached to your scope, and any optical finderscope is removed or fit-ted with a solar filter. 
Do not proceed to step 2 until this is verified.
2. Verify the tripod is level: there is a small bubble level in the top of the mount to help. Adjust each leg until level.
3. Verify the telescope is inserted into the dovetail cradle in the proper orientation: it should be pointing the same direction as the lens of the camera in the middle of the mount.
4. Manually move the telescope up or down until it is pointed roughly horizontal. The system will find level once it is turned on, so you want the scope to be close to level for this to occur. This is the "home" position (Figure 8).
5. Press the power button for approximately 1 second to turn the system on. The red light will appear when on (Figure 9).
6. Wait for the scope to find the horizontal position for the tube; the red light will then begin blinking, indicating the GPS signal is being acquired. This may take a few moments, and if the mount does not begin to find the sun after a minute or two, there may be interference with the GPS signal. Move to a new location away from buildings or trees, and try again.
7. When the GPS location is found, the mount will automatically direct the scope up to the proper altitude angle for the sun, and then begin a horizontal sweep using the camera in order to acquire the sun. If you previously positioned the tripod so the scope was facing the horizon directly below the sun, the mount should acquire the sun quickly, without having to sweep in a wide circle.
8. When the mount stops moving, the sun should be somewhere in the field of view; look through the telescope (with the solar filter attached first!) to find the sun. It may not be centered, based on the alignment of your dovetail bar, cradle and optical path, so slide the large button labeled CENTER on the control panel towards one of the four arrows around the button in order to calibrate the unit and center the sun (Figure 9). Once centered, the mount will continue to track the sun!
9. When you are finished with your observing session, press the power button for approximately 3 seconds to power the unit down. Return the scope to the horizontal position so you are ready for the next outing!

4. Care and Cleaning of the StarSeeker Mount
If your mount accumulates dirt/dust while operating, wipe it with a soft cloth after use. Clean the mount with mild household detergent and a soft cloth. Keep the mount in a clean and dry environment when not in use, and do not store the mount outdoors. To prevent damage, we recommend removing your telescope or optical instrument from the mount when transporting.

5. Technical Specifications
Mount: Altazimuth fork arm
Tripod: Aluminum
Tripod height: Range of 37.5" - 57" from ground to center of dovetail cradle
Total weight: 8 lbs.
Motor drives: Dual-axis geared go-to, internally housed
Power requirement: 8 AA batteries
Tracking rate: Solar
Alignment method: GPS data, plus CCD sensor acquisition

One-Year Limited Warranty
This Orion product is warranted against defects in materials or workmanship for a period of one year from the date of purchase. This warranty is for the benefit of the original retail purchaser only.
During this warranty period Orion Telescopes & Binoculars will repair or replace, at Orion's option, any warranted instrument that proves to be defective, provided it is returned postage paid. Proof of purchase (such as a copy of the original receipt) is required. This warranty is only valid in the country of purchase. This warranty does not apply if, in Orion's judgment, the instrument has been abused, mishandled, or modified, nor does it apply to normal wear and tear. This warranty gives you specific legal rights. It is not intended to remove or restrict your other legal rights under applicable local consumer law; your state or national statutory consumer rights governing the sale of consumer goods remain fully applicable. For further warranty information, please visit /warranty.
A New Method of Suppressing Cross-Polarization in Narrow-Wall Slotted Waveguide Traveling-Wave Array Antennas
Journal of Telemetry, Tracking and Command, Vol. 32, No. 3, May 2011

SHI Yongkang, DING Xiaolei, DING Keqian, XU Lei (Beijing Research Institute of Telemetry, Beijing 100076)
Abstract: A new method of suppressing cross-polarization is proposed: by adjusting the spacing between the antenna and the ground plane, the reflected cross-polarized wave is made to cancel the direct wave, thereby suppressing the cross-polarization.
This method overcomes the drawbacks of conventional approaches: it is structurally simple, easy to implement, suppresses cross-polarization effectively, and has little effect on the co-polarized pattern. Theoretical analysis, simulations, and measurements are in agreement. The cross-polarization level was ultimately reduced by 6 dB, demonstrating the correctness and effectiveness of the method.
Keywords: narrow-wall slotted waveguide traveling-wave array; suppression; cross-polarization

Introduction
As radar anti-jamming requirements rise, antennas with low or ultra-low sidelobes are increasingly needed.
Narrow-wall slotted waveguide traveling-wave arrays offer aperture distributions that are easy to control, readily meet low-sidelobe requirements, and exhibit little mutual coupling between elements; they are therefore widely used as the antenna arrays of one-dimensional phase-scanning radars. However, the cross-polarization pattern of this type of antenna exhibits a peak at a certain angle on each side of the main lobe. The positions of these peaks are determined by the frequency, the waveguide dimensions, and the slot spacing, while their relative levels are determined by the aperture distribution and the slot inclination angles. This inherent property degrades the antenna's anti-jamming capability, so measures must be taken to suppress it.
There are three conventional methods of suppressing cross-polarization [1-6]: mounting a parallel-wire grating in front of the antenna aperture, attaching parallel baffles directly onto the slot array, or building choke grooves between the antenna elements. Adding such suppression structures inevitably affects the radiation of the co-polarized field and raises the sidelobes; for radar antennas with strict sidelobe-level requirements, these effects cannot be ignored. Moreover, conventional suppression structures increase the complexity of the antenna and reduce its reliability, while also increasing its volume and weight, all of which are unfavorable for spaceborne applications.
This paper proposes a new method of suppressing cross-polarization: by adjusting the spacing between the antenna and the ground plane, a path-length difference is introduced between the direct and reflected cross-polarized waves such that they are 180° out of phase; the two waves then cancel, achieving the desired suppression. The cancellation condition is sketched below.
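As a rough illustration of this cancellation condition (a simplified two-ray, normal-incidence sketch; the spacing d, wavelength λ, and reflection phase φ_r below belong to this simplification, not to the exact geometry analyzed in the paper): the wave reflected by the ground plane travels an extra path of about 2d and acquires a reflection phase φ_r, so destructive interference with the direct wave requires

$$\Delta\varphi = \frac{2\pi}{\lambda}\,(2d) + \varphi_r = (2k+1)\pi, \qquad k = 0, 1, 2, \ldots$$

For a perfectly conducting plane with $\varphi_r = \pi$ this reduces to $d = k\lambda/2$; with $\varphi_r = 0$ it gives $d = (2k+1)\lambda/4$. In practice the spacing would be tuned around such a nominal value until the measured cross-polarization peaks drop.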
Selecting a Medtronic Pacemaker
[Figure: risk of heart-failure hospitalization (HFH) relative to a DDDR patient with Cum%VP = 0, plotted against the cumulative percentage of ventricular pacing (Cum%VP; x-axis, percentage of ventricular pacing (mean), 20-100). Dashed lines represent 95% confidence boundaries.]
Sweeney MO, Hellkamp AS, Ellenbogen KA, et al. Adverse effect of ventricular pacing on heart failure and atrial fibrillation among patients with normal baseline QRS duration. Circulation 2003;107(23):2932-2937.
Lead design features:
1.8 mm lead body
MED-4719 inner insulation layer for crush resistance
Easy to fixate
Easy-to-maneuver helix tip
Tip-to-ring spacing of 10 mm to avoid far-field sensing
Steroid-eluting electrode to lower pacing thresholds
Thin body + avoidance of far-field R waves (FFRW) + crush resistance + steroid-eluting electrode = a refined lead
The only lead worldwide with more than one million units sold!
Tools for selective-site pacing:
[Figure: fluoroscopic (X-ray) appearance of the CapSureFix Novus 5076 lead.]
Steps of physiological pacing
Insights from the SAVE PACe trial
Only 10% of patients received a dual-chamber pacemaker with the MVP (Managed Ventricular Pacing) feature, and these patients also had the shortest follow-up period. We therefore have good reason to believe that if more pacemakers with the MVP feature had been implanted, and if the follow-up had been longer, the results would have been even more significant.
AXIS M3085-V 2 MP Deep-Learning Mini Dome Camera: Datasheet
AXIS M3085-V Dome Camera
Fixed 2 MP mini dome with deep learning

This cost-efficient mini dome features Wide Dynamic Range (WDR) to ensure clarity even when there are both dark and light areas in the scene. With Lightfinder, it delivers sharp color images even in low light. A deep learning processing unit enables intelligent analytics based on deep learning on the edge. And AXIS Object Analytics offers detection and classification of different types of objects, all tailored to your specific needs. Furthermore, this compact, easy-to-install, vandal-resistant camera comes factory focused, so there is no manual focusing required.

> Great image quality in 2 MP
> Compact, discreet design
> WDR and Lightfinder
> Support for analytics with deep learning
> Built-in cybersecurity features

Camera
Image sensor: 1/2.9" progressive scan RGB CMOS
Lens: 3.1 mm, F2.0; horizontal field of view 102°, vertical field of view 55°; fixed iris, IR-corrected
Day and night: automatically removable infrared-cut filter
Minimum illumination (with Lightfinder): color 0.18 lux at 50 IRE F2.0; B/W 0.03 lux at 50 IRE F2.0
Shutter speed: 1/19000 s to 1/5 s
Camera angle adjustment: pan ±175°, tilt ±80°, rotation ±175°; can be directed in any direction and see the wall/ceiling
System on chip (SoC): CV25; memory: 1024 MB RAM, 512 MB flash; compute capabilities: deep learning processing unit (DLPU)

Video
Video compression: H.264 (MPEG-4 Part 10/AVC) Main and High profiles; H.265 (MPEG-H Part 2/HEVC); Motion JPEG
Resolution: 1920x1080 (1080p) to 320x240
Frame rate: 25/30 fps with power line frequency 50/60 Hz in H.264 and H.265 (reduced frame rate in Motion JPEG)
Video streaming: multiple, individually configurable streams in H.264, H.265 and Motion JPEG; Axis Zipstream technology in H.264 and H.265; controllable frame rate and bandwidth; VBR/MBR H.264/H.265; average bitrate
Multi-view streaming: up to 2 individually cropped-out view areas at full frame rate
Image settings: compression, color, brightness, sharpness, contrast, white balance, exposure control, motion-adaptive exposure, WDR up to 120 dB depending on scene, dynamic overlays, mirroring of images, privacy mask; rotation 0°, 90°, 180°, 270°, including Corridor Format
Pan/Tilt/Zoom: digital PTZ

Audio
Audio streaming: audio output via edge-to-edge technology
Audio input/output: audio features through portcast technology (two-way audio connectivity, voice enhancer); network speaker pairing

Network
Network protocols: IPv4, IPv6 USGv6, ICMPv4/ICMPv6, HTTP, HTTPS, HTTP/2, TLS, QoS Layer 3 DiffServ, FTP, SFTP, CIFS/SMB, SMTP, mDNS (Bonjour), UPnP®, SNMP v1/v2c/v3 (MIB-II), DNS/DNSv6, DDNS, NTP, NTS, RTSP, RTCP, RTP, SRTP/RTSPS, TCP, UDP, IGMPv1/v2/v3, DHCPv4/v6, SSH, LLDP, CDP, MQTT v3.1.1, secure syslog (RFC 3164/5424, UDP/TCP/TLS), link-local address (ZeroConf)

System integration
Application programming interface: open API for software integration, including VAPIX® and AXIS Camera Application Platform; one-click cloud connection; ONVIF® Profile G, ONVIF® Profile M, ONVIF® Profile S, and ONVIF® Profile T
Event conditions: device status (above/below operating temperature, IP address removed, live stream active, network lost, new IP address, system ready, within operating temperature); edge storage (recording ongoing, storage disruption, storage health issues detected); I/O (manual trigger, virtual input, digital input via accessories using portcast technology); MQTT (subscribe); scheduled and recurring (schedule); video (average bitrate degradation, tampering)
Event actions: notification (HTTP, HTTPS, TCP and email); record video (SD card and network share); MQTT (publish); pre- and post-alarm video or image buffering for recording or upload;
SNMP traps (send, send while the rule is active); upload of images or video clips (FTP, SFTP, HTTP, HTTPS, network share and email); external output activation via accessories using portcast technology
Built-in installation aids: pixel counter

Analytics
AXIS Object Analytics: object classes: humans, vehicles (types: cars, buses, trucks, bikes); features: line crossing, object in area, crossline counting (beta), occupancy in area (beta), time in area (beta); up to 10 scenarios; metadata visualized with color-coded bounding boxes; polygon include/exclude areas; perspective configuration; ONVIF Motion Alarm event
Metadata: object data: classes (humans, faces, vehicles (types: cars, buses, trucks, bikes), license plates), attributes (vehicle color, upper/lower clothing color, confidence, position); event data: producer reference, scenarios, trigger conditions
Applications: included: AXIS Object Analytics, AXIS Video Motion Detection; supported: AXIS People Counter, AXIS Queue Monitor; support for AXIS Camera Application Platform enabling installation of third-party applications

Cybersecurity
Edge security: software: signed firmware, brute-force delay protection, digest authentication, password protection, AES-XTS-Plain64 256-bit SD card encryption; hardware: Axis Edge Vault cybersecurity platform, secure element (CC EAL6+), system-on-chip security (TEE), Axis device ID, secure keystore, signed video, secure boot, encrypted filesystem (AES-XTS-Plain64 256-bit)
Network security: IEEE 802.1X (EAP-TLS), IEEE 802.1AR, HTTPS/HSTS, TLS v1.2/v1.3, Network Time Security (NTS), X.509 certificate PKI, IP address filtering
Documentation: AXIS OS Hardening Guide; Axis Vulnerability Management Policy; Axis Security Development Model; AXIS OS Software Bill of Materials (SBOM). To download documents, go to /support/cybersecurity/resources; to read more about Axis cybersecurity support, go to /cybersecurity

General
Casing: IP42 water- and dust-resistant (to comply with IP42, follow the Installation Guide), IK08 impact-resistant, polycarbonate/ABS casing, encapsulated electronics; color: white NCS S1002-B; for repainting instructions, contact your Axis partner.
Sustainability: 57% recycled plastics, PVC-free, BFR/CFR-free
Power: Power over Ethernet (PoE) IEEE 802.3af/802.3at Type 1 Class 2; typical 3.6 W, max 4.2 W
Connectors: RJ45 10BASE-T/100BASE-TX PoE; audio: audio and I/O connectivity via portcast technology
Storage: support for microSD/microSDHC/microSDXC card; support for SD card encryption (AES-XTS-Plain64 256-bit); recording to network-attached storage (NAS)
Operating conditions: 0°C to 45°C (32°F to 113°F); humidity 10-85% RH (non-condensing)
Storage conditions: -40°C to 65°C (-40°F to 149°F); humidity 5-95% RH (non-condensing)
Approvals: EMC: ICES-3(A)/NMB-3(A), EN 55032 Class A, EN 55035, EN 61000-6-1, EN 61000-6-2, FCC Part 15 Subpart B Class A, ICES-003 Class A, VCCI Class A, KS C9835, KS C9832 Class A, RCM AS/NZS CISPR 32 Class A; safety: IEC/EN/UL 62368-1, IS 13252; environment: IEC 60068-2-1, IEC 60068-2-2, IEC 60068-2-6, IEC 60068-2-14, IEC 60068-2-27, IEC/EN 60529 IP42, IEC/EN 62262 Class IK08; network: NIST SP500-267
Dimensions: height 56 mm (2.2 in); ø 101 mm (4.0 in)
Weight: 150 g (0.33 lb)
Included accessories: installation guide, Windows® decoder 1-user license
Optional accessories: AXIS TM3812 Tamper Cover, black casing, smoked dome, AXIS Surveillance microSDXC™ Card
Video management software: AXIS Companion, AXIS Camera Station and video management software from Axis Application Development Partners
Languages: English, German, French, Spanish, Italian, Russian, Japanese, Korean, Portuguese, Simplified Chinese, Traditional Chinese, Dutch, Czech, Swedish, Finnish, Turkish, Thai, Vietnamese
Warranty: 5-year warranty

© 2022-2023 Axis Communications AB. AXIS COMMUNICATIONS, AXIS, ARTPEC and VAPIX are registered trademarks of Axis AB in various jurisdictions. All other trademarks are the property of their respective owners. We reserve the right to introduce modifications without notice.
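As a usage illustration for the RTSP streaming interface listed above, the sketch below opens the camera's live stream with OpenCV. The "axis-media/media.amp" path follows the customary Axis RTSP convention, but the exact path, IP address, and credentials here are assumptions, not values verified against this model; consult the camera's VAPIX documentation.

import cv2  # pip install opencv-python

# Hypothetical address and credentials; the path is the customary Axis
# RTSP endpoint, assumed (not verified) for this specific model.
URL = "rtsp://user:password@192.168.0.90/axis-media/media.amp"

cap = cv2.VideoCapture(URL)
while cap.isOpened():
    ok, frame = cap.read()          # one decoded frame as a numpy array
    if not ok:
        break                       # stream ended or network error
    cv2.imshow("AXIS M3085-V", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()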
Summary Report on Frontier Technologies in Electrical Engineering
[4] Andrea Tagliasacchi, Matthias Schröder, Anastasia Tkach, Sofien Bouaziz, Mario Botsch, Mark Pauly. Robust Articulated-ICP for Real-Time Hand Tracking. Eurographics Symposium on Geometry Processing 2015, 34(5): 1-2.
[2] Toby Sharp, Cem Keskin, Duncan Robertson, Jonathan Taylor, Jamie Shotton, David Kim, Christoph Rhemann, Ido Leichter, Alon Vinnikov, Yichen Wei, Daniel Freedman, Pushmeet Kohli, Eyal Krupka, Andrew Fitzgibbon, Shahram Izadi. Accurate, Robust, and Flexible Real-time Hand Tracking. CHI '15: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems: 3633-3642.
3. Key Technologies and Development Trends
The key enabling technology for virtual interaction systems is hand-gesture capture; accurately capturing the pose of the hand is the central problem. The works cited above require data gloves, dedicated markers, or multiple cameras, precisely because of the complexity of gesture capture: real-time tracking and accurate capture are achievable only under such specific conditions.
Tracking algorithms fall into two main classes: appearance-based methods and model-based methods. Appearance-based methods train classifiers or regression matrices to map image features to hand poses. Although such systems can robustly infer a hand pose from a single frame, appearance-based methods are best suited to settings where only a rough pose estimate is needed or where discriminative features can be extracted clearly. In contrast, model-based techniques treat tracking as an alignment optimization, in which the objective function typically measures the discrepancy between data synthesized from the model and the data observed by the sensor. While model-based methods can suffer from loss of tracking, regularizing priors can be used to infer high-quality tracks even when the sensor data are incomplete or corrupted. A sketch of such an alignment objective is given below.
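The following is a minimal sketch of the model-based alignment idea just described (a simplified stand-in, not the method of [2] or [4]): pose parameters theta are fit by minimizing the distance between observed sensor points and the closest points of a model synthesized at pose theta, plus a prior that penalizes implausible poses. The model_points and pose_prior callables are hypothetical placeholders supplied by the user.

import numpy as np
from scipy.optimize import minimize
from scipy.spatial import cKDTree

def fit_pose(observed, model_points, pose_prior, theta0, lam=0.1):
    """ICP-style alignment: minimize the sum of squared distances between
    observed points and the synthesized model, plus a pose prior.

    observed:     (n, 3) array of sensor points (e.g., from a depth camera)
    model_points: callable theta -> (m, 3) array synthesized from the model
    pose_prior:   callable theta -> scalar penalty for implausible poses
    """
    def energy(theta):
        synthesized = model_points(theta)
        tree = cKDTree(synthesized)
        dists, _ = tree.query(observed)   # closest-point correspondences
        return np.sum(dists ** 2) + lam * pose_prior(theta)

    # Gradient-free optimizer keeps the sketch short; real-time systems use
    # analytic Jacobians (Gauss-Newton / Levenberg-Marquardt) for speed.
    return minimize(energy, theta0, method="Powell").x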
Feature detection and tracking in optical flow on non-flat manifolds

Sheraz Khan (a,c,d,*), Julien Lefevre (b), Habib Ammari (c), Sylvain Baillet (a,e)
(a) Department of Neurology, Medical College of Wisconsin, Milwaukee, USA
(b) Aix-Marseille Univ, Département d'Informatique de Luminy / CNRS, LSIS, UMR 6168, 13397 Marseille, France
(c) Center of Applied Mathematics, Ecole Polytechnique, France
(d) Department of Neurology, MGH/Harvard Medical School, Boston, USA
(e) Montreal Neurological Institute, McGill University, Montreal, Canada

Article history: received 22 September 2010; available online 19 September 2011; communicated by A. Shokoufandeh.
Keywords: optical flow; Helmholtz-Hodge decomposition; feature detection; image processing; Riemannian formalism; vector fields

Abstract: Optical flow is a classical approach to estimating the velocity vector fields associated to illuminated objects traveling onto manifolds. The extraction of rotational (vortices) or curl-free (sources or sinks) features of interest from these vector fields can be obtained from their Helmholtz-Hodge decomposition (HHD). However, the applications of existing HHD techniques are limited to flat, 2D domains. Here we demonstrate the extension of the HHD to vector fields defined over arbitrary surface manifolds. We propose a Riemannian variational formalism, and illustrate the proposed methodology with synthetic and empirical examples of optical-flow vector field decompositions obtained on a variety of surface objects. © 2011 Elsevier B.V. All rights reserved.

1. Introduction

The optical flow in a scene is the apparent motion of objects, induced by the observed temporal variations of local brightness. Under certain conditions, it provides an adequate model for objects' displacement fields (Horn and Schunck, 1981). In most applications, however, the definition of the optical flow is restricted to two-dimensional (2D), i.e. flat, surface manifolds. This hinders some potentially high-impact applications of the optical flow, e.g. the study of turbulent flow over general surfaces. We recently introduced a new variational framework to estimate optical-flow vector fields on non-flat surfaces using a Riemannian formulation (Lefèvre and Baillet, 2008).

In most applications, features of interest such as sources, sinks and traveling waves need to be extracted from time-varying optical-flow vector fields, and interpreted as meaningful components of the kinematics of the observed phenomena. The Helmholtz-Hodge decomposition (HHD) of vector fields has proven to be a useful technique in that respect (Chorin and Marsden, 1993). HHD proceeds by decomposing vector fields defined in 2D flat domains (Guo et al., 2005) or full 3D geometry (Tong et al., 2003) into the sum of: a curl-free component, deriving from the gradient of a scalar potential U; a non-diverging component, deriving from the curl of a scalar (in 2D) or vector (in 3D) potential A; and a harmonic component H, with zero Laplacian.

Singularities or vanishing points in the optical-flow vector field can be detected using these components, as shown in e.g. Guo et al. (2005), and as we shall detail in Section 3.3 of this article. As already mentioned, a large number of new fields of application would benefit from a generic and principled approach to the HHD of vector fields on surface manifolds: e.g., experimental fluid dynamics and turbulence (Corpetti et al., 2003; Palit et al., 2005); physiological modeling (Guo et al., 2006); structural and functional neuroimaging (Lefevre et al., 2009); and the compressed representation of large, distributed vector fields (Scheuermann
and Tricoche, 2005). It has been suggested that approximations to surface-based HHD could be achieved using polyhedral surfaces, in the local Euclidean geometry of the manifold (Polthier and Preuss, 2003). This however proves to be problematic when the vector fields are defined over highly-curved surface supports. Indeed, we have shown previously that the surface curvature plays a crucial role in the robust estimation of vector fields within the tangent bundle of the surface support (Lefèvre and Baillet, 2008). Our results also showed that the non-flatness properties of the surface may cause convergence issues in the estimation process of the vector field. We note however that new results in the Euclidean domain that do not require a meshed tessellation of the surface were published recently and indicate a promising alternative to our approach (Petronetto et al., 2009).

We show in this article how the HHD can be performed on non-flat domains, and we extend the study to the extraction of basic kinetic features of interest from the HHD components. In line with our previous developments (Lefèvre and Baillet, 2008), the present results were worked in the framework of differential geometry on 2-Riemannian manifolds with a numerical implementation using the Finite Element Method (FEM). Important concepts and notations from this framework are revisited in Section 2. Section 3 introduces the proposed framework for HHD. The method is then featured with a variety of results in Section 4. We emphasize that the software implementation of the methods is available for download, as a plugin to the BrainStorm academic software for electromagnetic brain imaging (http:// /brainstorm; requires Matlab and has been optimized for multithreaded computation).

2. Technical background

This section is a summary of the concepts and notations that are appropriate to the 2-Riemannian manifold framework and its FEM implementation. More details can be found in (Do Carmo, 1993; Lefèvre and Baillet, 2008).

2.1. Differential geometry

Let us consider M, a surface (or 2-Riemannian manifold) equipped with a metric or scalar product g(·,·) to measure distances and angles on the surface. M can be parameterized with local charts (x_1, x_2). Thus, it is possible to obtain a normal vector at each surface point:

$$n_p = \frac{\partial}{\partial x_1} \times \frac{\partial}{\partial x_2}.$$

Note that n_p does not depend on the (x_1, x_2) parametrization. Let U : M → R be a function defined on the surface, with differential dU : TM → R acting on TM, the space of vector fields. We can then write:

$$dU(v_1, v_2) = \frac{\partial U}{\partial x_1} v_1 + \frac{\partial U}{\partial x_2} v_2.$$

Let us then define the gradient and divergence operators through duality:

$$dU(V) = g(\nabla_M U, V), \qquad \int_M U \,\mathrm{div}_M H = -\int_M g(H, \nabla_M U).$$

We emphasize that the gradient and divergence measures are also independent of the surface parametrization. Lastly, let us define two additional functional spaces that will be useful for the theorem proposed in Section 3: H^1(M) is the space of differentiable functions on the manifold M, and C^1(M) is the space of vector fields which satisfy good regularity properties (as discussed in (Druet et al., 2004; Lefèvre and Baillet, 2008)).

2.2. Optical flow

Under the seminal hypothesis of the conservation of a scalar field I along streamlines, the optical flow V is a vector field which satisfies:

$$\partial_t I + g(V, \nabla_M I) = 0. \quad (1)$$

Note that the scalar product g(·,·) is modified by the
local curvature of the surface domain M. The solution to Eq. (1) is not unique as long as the components of V(p,t) orthogonal to ∇_M I are left unconstrained. This so-called 'aperture problem' has been addressed by a large number of methods using e.g. regularization approaches. These may be formalized as the minimization problem of an energy functional, which includes both the regularity of the solution and the agreement with the model:

$$E(V) = \int_M \left( \frac{\partial I}{\partial t} + g(V, \nabla_M I) \right)^{2} d\mu + \lambda \int_M C(V)\, d\mu, \quad (2)$$

where dμ is a volume form of the manifold M. In this study, we used a regularity operator that is quadratic in the generalized gradient of the expected vector field:

$$C(V) = \mathrm{Tr}\left( {}^{t}\nabla V \cdot \nabla V \right); \quad (3)$$

we note that the gradient of a vector field must be defined as the covariant derivative associated to the manifold M, so that it can be considered as an intrinsic tensor. We refer to the general concepts of differential geometry for more information (Do Carmo, 1993).

3. Helmholtz-Hodge decomposition on a 2-Riemannian manifold

We now introduce an extended framework to perform HHD on surfaces and show that it can be applied to any vector field defined on a 2-Riemannian manifold M.

3.1. Definitions and theorem

Let us first define the scalar and vectorial curls by:

$$\mathrm{Curl}_M : H^1(M) \to C^1(M), \qquad \mathrm{curl}_M : C^1(M) \to H^1(M),$$
$$\mathrm{Curl}_M A = \nabla_M A \times n, \qquad \mathrm{curl}_M H = \mathrm{div}_M (H \times n).$$

Note that these intrinsic expressions do not depend on the parametrization of the surface. As established in (Polthier and Preuss, 2003), we assume that M is a closed manifold, i.e. it has no boundaries. Given V, a vector field in C^1(M), there exist functions U and A (up to an additive constant) in H^1(M) and a vector field H in C^1(M) such that:

$$V = \nabla_M U + \mathrm{Curl}_M A + H, \quad (4)$$

where

$$\mathrm{curl}_M(\nabla_M U) = 0, \qquad \mathrm{div}_M(\mathrm{Curl}_M A) = 0, \qquad \mathrm{div}_M H = 0, \qquad \mathrm{curl}_M H = 0.$$

To prove this theorem, we show how U and A may be constructed. Following classical constructions, we consider U and A as minimizers of the two following convex functionals, which are not necessarily unique:

$$\int_M \| V - \nabla_M U \|^{2} \, d\mu, \quad (5)$$
$$\int_M \| V - \mathrm{Curl}_M A \|^{2} \, d\mu, \quad (6)$$

where ||·|| is the norm associated to the Riemannian metric g(·,·). These two functionals carry a global minimum on H^1(M), which satisfies:

$$\forall \phi \in H^1(M), \quad \int_M g(V, \nabla_M \phi)\, d\mu = \int_M g(\nabla_M U, \nabla_M \phi)\, d\mu, \quad (7)$$
$$\forall \phi \in H^1(M), \quad \int_M g(V, \mathrm{Curl}_M \phi)\, d\mu = \int_M g(\mathrm{Curl}_M A, \mathrm{Curl}_M \phi)\, d\mu. \quad (8)$$

These two equations will play a key role in the subsequent numerical computations, as detailed below. The rest of the proof is provided in the Appendix.

3.2. Discretization

Eqs. (7) and (8) are crucial since they pave the way to numerical implementation, when H^1(M) is approximated by a subspace of finite elements (e.g., continuous linear piecewise functions). We now detail the numerical implementation of Eqs. (9) and (10), which are defined over a surface tessellation M̂ that approximates the ideal manifold. Let N be the number of nodes in the tessellation (Fig. 1). In the FEM implementation, we define N continuous piecewise affine functions φ_i, decreasing from a value of 1 at node i to 0 at the other triangle nodes. Using these basis functions (φ_1, ..., φ_N), we can write U = (U_1, ..., U_N)^T and A = (A_1, ..., A_N)^T, and Eqs. (7) and (8) become, using array notations:

$$\left[ \int_M g(\nabla_M \phi_i, \nabla_M \phi_j) \right]_{i,j} U = \left[ \int_M g(V, \nabla_M \phi_i) \right]_i, \quad (9)$$
$$\left[ \int_M g(\mathrm{Curl}_M \phi_i, \mathrm{Curl}_M \phi_j) \right]_{i,j} A = \left[ \int_M g(V, \mathrm{Curl}_M \phi_i) \right]_i, \quad (10)$$

where [·]_{i,j} is an N × N square matrix and [·]_i is a column vector of length N. The harmonic component H of the vector field V is obtained as:

$$H = V - \nabla_M U - \mathrm{Curl}_M A. \quad (11)$$

The gradient of each basis function is constant on each triangle and can be computed from simple geometric quantities (see Lefèvre and Baillet (2008)); hence the arrays in Eqs. (9) and (10) can be assembled in closed form, triangle by triangle.
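To make the assembly of Eqs. (9)-(11) concrete, here is a minimal numerical sketch in Python. It is an illustration under simplifying assumptions, not the authors' Brainstorm implementation: the input field V is assumed constant over each triangle, hat-function gradients are computed per triangle, and the singular linear systems are solved in the least-squares sense (which implicitly fixes the additive constants of U and A). Since Curl_M φ = ∇_M φ × n is a 90-degree in-plane rotation of ∇_M φ, the stiffness matrix of Eq. (10) coincides with that of Eq. (9); only the right-hand sides differ.

import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import lsqr

def hhd_on_mesh(verts, tris, V):
    """Discrete HHD of a per-triangle tangent field V (ntris x 3) on a
    closed triangle mesh (verts: nverts x 3, tris: ntris x 3 ints).
    Returns nodal potentials U, A (Eqs. 9-10) and the harmonic part H (Eq. 11)."""
    n = len(verts)
    rows, cols, vals = [], [], []
    bU, bA = np.zeros(n), np.zeros(n)
    grads = np.zeros((len(tris), 3, 3))   # gradients of the 3 hat functions per triangle
    areas = np.zeros(len(tris))
    normals = np.zeros((len(tris), 3))
    for t, (i, j, k) in enumerate(tris):
        p = verts[[i, j, k]]
        nvec = np.cross(p[1] - p[0], p[2] - p[0])
        a2 = np.linalg.norm(nvec)         # twice the triangle area
        areas[t], normals[t] = 0.5 * a2, nvec / a2
        # gradient of each vertex's hat function: in-plane rotation of the opposite edge
        for loc, edge in enumerate((p[2] - p[1], p[0] - p[2], p[1] - p[0])):
            grads[t, loc] = np.cross(normals[t], edge) / a2
        for a, va in enumerate((i, j, k)):
            for b, vb in enumerate((i, j, k)):
                rows.append(va); cols.append(vb)
                vals.append(areas[t] * grads[t, a].dot(grads[t, b]))  # Eq. (9)/(10) matrix
            bU[va] += areas[t] * grads[t, a].dot(V[t])                           # Eq. (9) rhs
            bA[va] += areas[t] * np.cross(grads[t, a], normals[t]).dot(V[t])     # Eq. (10) rhs
    L = coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsr()
    U = lsqr(L, bU)[0]    # L is singular (constant null space): least-squares solve
    A = lsqr(L, bA)[0]
    H = np.array(V, dtype=float)          # Eq. (11), evaluated per triangle
    for t, (i, j, k) in enumerate(tris):
        for loc, v in enumerate((i, j, k)):
            H[t] -= U[v] * grads[t, loc] + A[v] * np.cross(grads[t, loc], normals[t])
    return U, A, H

On a closed mesh, both linear systems are rank-deficient by exactly the constant functions, which mirrors the statement of the theorem that U and A are only determined up to an additive constant.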
3.3. HHD feature detection as critical points of potentials

The critical points of a vector field are often obtained from the eigenvalues of its Jacobian matrix. However, it has been shown that feature detection from critical points in global potential fields is less sensitive to noise in the data (Tong et al., 2003), and therefore less prone to false-positive detections than methods based on the extraction of local Jacobian eigenvalues (Mann and Rockwood, 2002). We therefore define critical points in the flow as local extrema of the divergence-free potential A (representing rotation) and of the curl-free potential U (representing divergence). A sink (respectively, a source) is thus defined as a local maximum (respectively, minimum) of U. Counterclockwise (respectively, clockwise) vortices are featured by a local maximum (respectively, minimum) of A. Further, since H is a curl-free vector field with zero divergence, traveling objects may be detected from vector elements bearing the largest magnitudes in the H vector field.
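A minimal sketch of this critical-point extraction, assuming nodal potentials such as those returned by the FEM solve above (a practical implementation would add a magnitude threshold to reject noise-induced extrema, omitted here):

import numpy as np

def local_extrema(potential, tris):
    """Flag mesh vertices whose potential value dominates the whole one-ring.
    For U: maxima are sinks, minima are sources; for A: maxima are
    counter-clockwise vortices, minima are clockwise vortices."""
    n = len(potential)
    neighbors = [set() for _ in range(n)]
    for i, j, k in tris:
        neighbors[i] |= {j, k}; neighbors[j] |= {i, k}; neighbors[k] |= {i, j}
    maxima, minima = [], []
    for v in range(n):
        ring = [potential[u] for u in neighbors[v]]
        if not ring:
            continue                      # isolated vertex; cannot classify
        if potential[v] > max(ring):
            maxima.append(v)
        elif potential[v] < min(ring):
            minima.append(v)
    return maxima, minima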
of the optical flow of these currents were computed using the method described in (Lefèvre and Baillet,2008),as implemented by us in the Brain-Storm software package (/brainstorm ).HHD was then applied to detect sources and sinks in the vector fields of the cortical currents.The source pattern at about 30ms after stimulus onset obtained from the event-related average of all trials revealed the expected primary somatosensory brain re-sponse.Fig.5shows a strong divergent pattern of the vector field over the central sulcus contralateral to the side of the stimulation.Computations to obtain all HHD components on the 50,000-node brain surface tessellation took about 10s using a conven-tional workstation with 8cores.Additional results are featured as Supplementary material .5.ConclusionIn this contribution,we developed a framework for the HHD of optical flow vector fields over 2-Riemannian manifolds.We sug-gest that features of interest regarding the kinetics supported by vector fields can be extracted and tracked through the potential components of the HHD.The required computations are simple and are made available to the academic community as part as the distribution of BrainStorm.Evaluation of this framework under real and simulated environments may open promising applications in the emerging field of multi-dimensional imaging.Appendix AWe show that the scalar potentials U and A can be obtained up to a constant.If we consider two minimizers U 1and U 2for the functional in (5)then from Eq.(7),we get:8/2H 1ðMÞ;ZMg ð$M ðU 1ÀU 2Þ;$M /Þd l ¼0:Green’s formula allows to transform this equation (given that M has no boundaries)as follows:8/2H 1ðMÞ;ZM/div M $M ðU 1ÀU 2Þd l ¼0;which corresponds to the Laplace equation:D M ðU 1ÀU 2Þ¼0;where D M ¼div M $M is the Laplace–Beltrami operator.The same approach can be followed for two minimizers A 1and A 2of the functional in Eq.(6).Hence,writing R =U 1ÀU 2and R 0=A 1ÀA 2,we obtained the following two conditions:D M R ¼0;D M R 0¼0:Multiplying by R the first equation and integrating on the man-ifold weget:rotating vector fields over a spherical surface object (shown with green arrows).(a)Counter-clockwise rotating vector field and divergence-free potentials A obtained from the HHD of the vector fields were mapped onto the surface as a colored texture.In both cases,extrema in the A map (maximum in (a),minimum in (b)).Values of A are shown in arbitrary,scaled units.(For interpretation of the reader is referred to the web version of this article.)ZMR D M R d l¼0:Applying Green’s formula yields:ZMgð$M R;$M RÞd l¼0:It follows that$M R must equal zero and R must be a constant.The same approach can be applied to R0,which must be constant.We then define the conditions on the vectorfield H.Wefirst write H¼VÀ$M UþCurl M A,and obtain:div M H¼div M VÀdiv M$M U and curl M H¼curl M VÀcurl M Curl M A;since div M Curl M A¼0and curl M$M U¼0.It follows that:8/2H1ðMÞ;ZM /div M H d l¼ZM/ðdiv M VÀdiv M$M UÞd l:Using Green’s formula and Eq.(7),we obtain: 8/2H1ðMÞ;ZM/div M H d l¼ZM gðV;$M/ÞÀZMgð$M U;$M/Þd l¼0:We have then proved that div M H¼0.The same strategy applies to obtain curl M H¼0.Appendix B.Supplementary dataSupplementary data associated with this article can be found,in the online version,at doi:10.1016/j.patrec.2011.09.017.ReferencesBaillet,S.,Mosher,J.,Leahy,R.,2001.Electromagnetic brain mapping.IEEE Signal Process.Mag.18(6),14–30.Chorin,A.,Marsden,J.,1993.A Mathematical Introduction to Fluid Mechanics.Springer.Corpetti,T.,MTmin, E.,PTrez,P.,2003.Extraction of singular 
points from dense motionfields:an analytic approach.J.Math.Imaging Vision19, 175–198.Do Carmo,M.,1993.Riemannian Geometry.Birkhauser.Druet,O.,Hebey, E.,Robert, F.,2004.Blow-Up Theory for Elliptic PDEs in Riemannian Geometry.Princeton University Press,Princeton,N.J(Chapter.Background Material,pp.1–12).Guo,Q.,Mandal,M.,Li,M.,2005.Efficient Hodge–Helmholtz decomposition of motionfields.Pattern Recognition Lett.26(4),493–501.Guo,Q.,Mandal,M.,Liu,G.,Kavanagh,K.,2006.Cardiac video analysis using Hodge–Helmholtzfield put.Biol.Med.36(1),1–20.Horn, B.,Schunck, B.,1981.Determining Optical Flow.Artif.Intell.17(1-3), 185–203.Lefèvre,J.,Baillet,S.,2008.Opticalflow and advection on2-Riemannian manifolds:A common framework.IEEE Trans.Pattern Anal.Machine Intell.30(6),1081–1092rs.fr/publications/2008/LB08.Lefevre,J.,Leroy,F.,Khan,S.,Dubois,J.,Huppi,P.,Baillet,S.,Mangin,J.,2009.Identification of Growth Seeds in the Neonate Brain through Surfacic Helmholtz Decomposition.In:21st Information Processing in Medical Imaging, Williamsburg.Mann,S.,Rockwood,A.,puting singularities of3D vectorfields with geometric algebra.Proc.IEEE Visual.,283–289.Palit,B.,Basu,A.,Mandal,M.,2005.Applications of the discrete Hodge Helmholtz decomposition to image and video processing.Lect.Notes Comput.Sci.3776, 497.Petronetto,F.,Paiva,A.,Lage,M.,Tavares,G.,Lopes,H.,Lewiner,T.,2009.Meshless Helmholtz–Hodge decomposition.IEEE put.Graphics, 338–349.Polthier,K.,Preuss,E.,2003.Identifying vectorfields singularities using a discrete Hodge decomposition.Visual.Math.3,113–134.Scheuermann,G.,Tricoche,X.,2005.Topological Methods for Flow Visualization.The Visualization Handbook,341–356.Tong,Y.,Lombeyda,S.,Hirani,A.,Desbrun,M.,2003.Discrete multiscale vectorfield decomposition.ACM Trans.Graphics22(3),445–452.。