Intensity-based Image Registration using Robust
Color Image Segmentation Based on HSI Color-Coordinate Similarity
Authors: Li Ning, Xu Shucheng, Deng Zhongliang. Source: Modern Electronics Technique, 2017, No. 02. Abstract: This paper proposes an image segmentation method based on the HSI color space.
Euclidean distance, commonly used in image segmentation to measure the color relationship between pixels, does not reflect the relationship between two pixels well in the HSI coordinate system.
A similarity measure is therefore proposed in place of Euclidean distance as a new basis for measuring the color relationship between two pixels.
The algorithm determines the dominant component among the H, S and I components, builds a color image segmentation model, creates a color-similarity level map of the same size as the original image, and clusters pixels using the color information of this map.
Experimental results show that the proposed segmentation algorithm is robust and accurate; under otherwise identical conditions, the similarity-based method outperforms color image segmentation based on Euclidean distance.
Keywords: image segmentation; HSI color space; color similarity; Euclidean distance. CLC number: TN911.73-34. Document code: A. Article ID: 1004-373X(2017)02-0030-04. Abstract: A new method for color image segmentation based on the HSI color space is presented in this paper. Euclidean distance, a common basis for measuring the color relationship between two pixels, cannot reflect the relationship between the two pixels in the HSI coordinate system. Therefore, the traditional Euclidean distance is abandoned, and color similarity is proposed as a new basis for measuring the relationship between two pixels. The algorithm builds the color image segmentation model by determining the dominant component among the HSI components and creates a color-similarity level picture of the same size as the original picture. The color information of the corresponding color level diagram is adopted to cluster the pixel points. The experimental results show that the segmentation algorithm has strong robustness and high accuracy, and under the same conditions the segmentation method based on similarity is better than the segmentation method based on Euclidean distance. Keywords: image segmentation; HSI color space; color similarity; Euclidean distance. 0 Introduction: Image segmentation algorithms based on color information play an important role in computer vision and are widely applied in many fields.
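The excerpt above does not reproduce the paper's exact similarity formula, so the following is only a minimal sketch of the general idea: convert to HSI, then score each pixel against a seed color with a weighted combination of the hue (taken on the circle), saturation and intensity differences, emphasizing the dominant component. The weights, the similarity formula and the clustering threshold are illustrative assumptions, not the authors' definitions.

```python
# Sketch of HSI-similarity-based pixel grouping.  The weighting of the dominant
# component, the similarity formula and the threshold are illustrative
# assumptions, not the exact model of the paper.
import numpy as np

def rgb_to_hsi(rgb):
    """rgb: float array in [0, 1], shape (..., 3) -> H in [0, 2*pi), S and I in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0
    s = 1.0 - np.min(rgb, axis=-1) / np.maximum(i, 1e-12)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b <= g, theta, 2 * np.pi - theta)
    return h, s, i

def hsi_similarity(h, s, i, ref, weights=(0.5, 0.25, 0.25)):
    """Similarity in [0, 1] between every pixel and a reference HSI triple `ref`.
    The hue difference is taken on the circle; the weights emphasise the dominant component."""
    dh = np.abs(h - ref[0])
    dh = np.minimum(dh, 2 * np.pi - dh) / np.pi          # normalised hue distance in [0, 1]
    ds = np.abs(s - ref[1])
    di = np.abs(i - ref[2])
    wh, ws, wi = weights
    return 1.0 - (wh * dh + ws * ds + wi * di)

if __name__ == "__main__":
    # usage: group pixels whose similarity to a seed colour exceeds a threshold
    img = np.random.rand(64, 64, 3)
    h, s, i = rgb_to_hsi(img)
    seed = (h[32, 32], s[32, 32], i[32, 32])
    mask = hsi_similarity(h, s, i, seed) > 0.9
```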
Testing (NDT) Terminology Translation Glossary 3
Hysteresis 磁滞 IACS IACS ID coil ID 线圈 Image definition 图像清晰度 Image contrast 图像对⽐度 Image enhancement 图像增强 Image magnification 图像放⼤ Image quality 图像质量 Image quality indicator sensitivity 像质指⽰器灵敏度 Image quality indicator(IQI)/image quality indication 像质指⽰器 Imaging line scanner 图像线扫描器 Immersion probe 液浸探头 Immersion rinse 浸没清洗 Immersion testing 液浸法 Immersion time 浸没时间 Impedance 阻抗 Impedance plane diagram 阻抗平⾯图 Imperfection 不完整性 Impulse eddy current testing 脉冲涡流检测 Incremental permeability 增量磁导率 Indicated defect area 缺陷指⽰⾯积 Indicated defect length 缺陷指⽰长度 Indication 指⽰ Indirect exposure 间接曝光 Indirect magnetization 间接磁化 Indirect magnetization method 间接磁化法 Indirect scan 间接扫查 Induced field 感应磁场 Induced current method 感应电流法 Infrared imaging system 红外成象系统 Infrared sensing device 红外扫描器 Inherent fluorescence 固有荧光 Inherent filtration 固有滤波 Initial permeability 起始磁导率 Initial pulse 始脉冲 Initial pulse width 始波宽度 Inserted coil 插⼊式线圈 Inside coil 内部线圈 Inside- out testing 外泄检测 Inspection 检查 Inspection medium 检查介质 Inspection frequency/ test frequency 检测频率 Intensifying factor 增感系数 Intensifying screen 增感屏 Interal,arrival time (Δtij)/arrival time interval(Δtij)到达时间差(Δtij) Interface boundary 界⾯ Interface echo 界⾯回波 Interface trigger 界⾯触发 Interference ⼲涉 Interpretation 解释 Ion pump 离⼦泵 Ion source 离⼦源 Ionization chamber 电离室 Ionization potential 电离电位 Ionization vacuum gage 电离真空计 Ionography 电离射线透照术 Irradiance, E 辐射通量密度, E Isolation 隔离检测 Isotope 同位素 K value K 值 Kaiser effect 凯塞(Kaiser)效应 Kilo volt kv 千伏特 Kiloelectron volt keV 千电⼦伏特 Krypton 85 氪85 L/D ratio L/D ⽐ Lamb wave 兰姆波 Latent image 潜象 Lateral scan 左右扫查 Lateral scan with oblique angle 斜平⾏扫查 Latitude (of an emulsion) 胶⽚宽容度 Lead screen 铅屏 Leak 泄漏孔 Leak artifact 泄漏器 Leak detector 检漏仪 Leak testtion 泄漏检测 Leakage field 泄漏磁场 Leakage rate 泄漏率 Leechs 磁吸盘 Lift-off effect 提离效应 Light intensity 光强度 Limiting resolution 极限分辨率 Line scanner 线扫描器 Line focus 线焦点 Line pair pattern 线对检测图 Line pairs per millimetre 每毫⽶线对数 Linear (electron) accelerator(LINAC) 电⼦直线加速器 Linear attenuation coefficient 线衰减系数 Linear scan 线扫查 Linearity (time or distance)线性(时间或距离) Linearity, anplitude 幅度线性 Lines of force 磁⼒线 Lipophilic emulsifier 亲油性乳化剂 Lipophilic remover 亲油性洗净剂 Liquid penetrant examination 液体渗透检验 Liquid film developer 液膜显像剂 Local magnetization 局部磁化 Local magnetization method 局部磁化法 Local scan 局部扫查 Localizing cone 定域喇叭筒 Location 定位 Location accuracy 定位精度 Location computed 定位,计算 Location marker 定位标记 Location upon delta-T 时差定位 Location, clusfer 定位,群集 Location,continuous AE signal 定位,连续AE 信号 Longitudinal field 纵向磁场 Longitudinal magnetization method 纵向磁化法 Longitudinal resolution 纵向分辨率 Longitudinal wave 纵波 Longitudinal wave probe 纵波探头 Longitudinal wave technique 纵波法 Loss of back reflection 背⾯反射损失 Loss of back reflection 底⾯反射损失 Love wave 乐甫波 Low energy gamma radiation 低能γ辐射 Low-enerugy photon radiation 低能光⼦辐射 Luminance 亮度 Luminosity 流明 Lusec 流西克 Maga or million electron volts MeV 兆电⼦伏特 Magnetic history 磁化史 Magnetic hysteresis 磁性滞后 Magnetic particle field indication 磁粉磁场指⽰器 Magnetic particle inspection flaw indications 磁粉检验的伤显⽰ Magnetic circuit 磁路 Magnetic domain 磁畴 Magnetic field distribution 磁场分布 Magnetic field indicator 磁场指⽰器 Magnetic field meter 磁场计 Magnetic field strength 磁场强度(H) Magnetic field/field,magnetic 磁场 Magnetic flux 磁通 Magnetic flux density 磁通密度 Magnetic force 磁化⼒ Magnetic leakage field 漏磁场 Magnetic leakage flux 漏磁通 Magnetic moment 磁矩 Magnetic particle 磁粉 Magnetic particle indication 磁痕 Magnetic particle testing/magnetic particle examination 磁粉检测 Magnetic permeability 磁导率 Magnetic permeability 
磁导率 Magnetic pole 磁极 Magnetic saturataion 磁饱和 Magnetic saturation 磁饱和 Magnetic slorage meclium 磁储介质 Magnetic writing 磁写 Magnetizing 磁化 Magnetizing current 磁化电流 Magnetizing coil 磁化线圈 Magnetostrictive effect 磁致伸缩效应 Magnetostrictive transducer 磁致伸缩换能器 Main beam 主声束 Manual testing ⼿动检测 Markers 时标 MA-scope; MA-scan MA 型显⽰ Masking 遮蔽 Mass attcnuation coefficient 质量吸收系数 Mass number 质量数 Mass spectrometer (M.S.)质谱仪 Mass spectrometer leak detector 质谱检漏仪 Mass spectrum 质谱 Master/slave discrimination 主从鉴别 MDTD 最⼩可测温度差 Mean free path 平均⾃由程 Medium vacuum 中真空 Mega or million volt MV 兆伏特 Micro focus X - ray tube 微焦点X 光管 Microfocus radiography 微焦点射线透照术 Micrometre 微⽶ Micron of mercury 微⽶汞柱 Microtron 电⼦回旋加速器 Milliampere 毫安(mA) Millimetre of mercury 毫⽶汞柱 Minifocus x- ray tube ⼩焦点调射线管 Minimum detectable leakage rate 最⼩可探泄漏率 Minimum resolvable temperature difference (MRTD)最⼩可分辨温度差(MRDT) Mode 波型 Mode conversion 波型转换 Mode transformation 波型转换 Moderator 慢化器 Modulation transfer function (MTF)调制转换功能(MTF) Modulation analysis 调制分析。
Coal mine underground image enhancement method based on dust-removal estimation and multiple-exposure fusion
HAO Bonan 1,2,3 (1. CCTEG China Coal Research Institute, Beijing 100013, China; 2. Engineering Research Center for Technology Equipment of Emergency Refuge in Coal Mine, Beijing 100013, China; 3. Beijing Engineering and Research Center of Mine Safety, Beijing 100013, China). Abstract: Dust, dim light and other factors in underground coal mines lead to low-quality captured images, while existing image enhancement methods suffer from loss of image detail, unclear local features, inability to remove noise, and unsatisfactory dust removal.
To address these problems, a coal mine underground image enhancement method based on dust-removal estimation and multiple-exposure fusion is proposed.
The method estimates the image transmittance from a simplified dusty-image model and the dark channel prior, introducing an adaptive attenuation coefficient; the original scene image is then restored from the transmittance distribution through the simplified dusty-image model, removing the dust from the underground image. A multiple-exposure fusion algorithm then generates a set of images with different exposure ratios from the under-exposed original, and a weight matrix is introduced to fuse these images with the original, effectively improving the quality of the dim-light image.
Experimental results show that, compared with histogram equalization, multi-scale Retinex with color restoration (MSRCR) and an improved Retinex method, the proposed method performs better in dust removal and dim-light enhancement, reproduces colors more faithfully, and suppresses white edges and over-exposure; the average contrast of the enhanced images increases by 169.00%, 42.50% and 10.88% respectively, the average image entropy increases by 51.80%, 16.45% and 8.99%, and the average lightness-order error (LOE) decreases by 31.01%, 16.94% and 7.83%, while the proposed method also has the shortest running time.
Keywords: image enhancement; dust removal; multiple exposure fusion; dim-light enhancement; transmittance; exposure ratio. CLC number: TD67. Document code: A.
Coal mine underground image enhancement method based on dust removal estimation and multiple exposure fusion
HAO Bonan 1,2,3 (1. CCTEG China Coal Research Institute, Beijing 100013, China; 2. Engineering Research Center for Technology Equipment of Emergency Refuge in Coal Mine, Beijing 100013, China; 3. Beijing Engineering and Research Center of Mine Safety, Beijing 100013, China)
Abstract: Factors such as dust and dim light in coal mines lead to low quality of collected images. The existing image enhancement methods have problems such as loss of image details, unclear local features, inability to eliminate noise, and unsatisfactory dust removal effects. In order to solve the above problems, a coal mine underground image enhancement method based on dust removal estimation and multiple exposure fusion is proposed. This method uses a simplified model of the dusty image and the dark channel prior, and introduces an adaptive attenuation coefficient to estimate the image transmittance. Based on the transmittance distribution, the original image of the object is restored using the simplified dusty-image model to remove dust from the coal mine underground image. The method uses a multiple exposure fusion algorithm to generate a set of images with
Received: 2023-08-28; revised: 2023-11-15; executive editor: Sheng Nan.
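As a rough illustration of the two stages described in the abstract, the sketch below chains a dark-channel-style transmission estimate (with an attenuation coefficient) to a simple multi-exposure fusion of synthetically brightened copies. The window size, the coefficient `omega`, the exposure ratios and the well-exposedness weight are assumed values for illustration only, not the parameters or exact formulas of the paper.

```python
# Minimal sketch of (1) dust removal via a dark-channel-style transmission estimate
# and (2) fusion of synthetically exposed copies of the restored image.
# omega, t0, the window size, the exposure ratios and the weight definition are
# illustrative assumptions, not the paper's values.
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, size=15):
    return minimum_filter(img.min(axis=2), size=size)

def remove_dust(img, omega=0.9, t0=0.1, size=15):
    """img: float RGB in [0, 1].  Simplified haze/dust model I = J*t + A*(1 - t)."""
    A = np.percentile(img.reshape(-1, 3), 99.9, axis=0)           # dust/airlight estimate
    t = 1.0 - omega * dark_channel(img / A[None, None, :], size)  # transmission estimate
    t = np.clip(t, t0, 1.0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)

def exposure_fusion(img, ratios=(1.0, 2.0, 4.0), sigma=0.25):
    """Fuse copies of `img` exposed by different ratios, weighted by well-exposedness."""
    stack, weights = [], []
    for r in ratios:
        e = np.clip(img * r, 0.0, 1.0)
        w = np.exp(-((e.mean(axis=2) - 0.5) ** 2) / (2 * sigma ** 2)) + 1e-8
        stack.append(e)
        weights.append(w)
    w = np.stack(weights)                       # shape (k, H, W)
    w = w / w.sum(axis=0, keepdims=True)
    return np.einsum('khw,khwc->hwc', w, np.stack(stack))

if __name__ == "__main__":
    frame = np.random.rand(120, 160, 3) * 0.4   # stand-in for a dark, dusty frame
    enhanced = exposure_fusion(remove_dust(frame))
```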
An Image Segmentation Method with Strong Noise Immunity
Infrared and Laser Engineering, Vol. 31, No. 3, June 2002. An image segmentation method with strong noise immunity. CAO Yongfeng, ZHENG Jiansheng, WAN Xianrong (Department of Communication, College of Electronic Information, Wuhan University, Wuhan 430079, China). Abstract: Making full use of both the global and the local information of an image is the key to segmenting it accurately and extracting features.
The watershed idea exploits global information and, when applied to the gradient image, also takes local detail into account.
By introducing the global connectivity of the object to be segmented and applying this method to object/background segmentation, excellent segmentation results are obtained, with very strong noise suppression; the approach also has the advantages of a simple principle, continuous single-pixel edges as output, accurate edge localization, and the ability to label several regions simultaneously.
The basic principle and mathematical implementation of the morphological watershed algorithm are given, and the method's shortcomings and ways to address them are summarized.
Keywords: mathematical morphology; watershed algorithm; noise immunity; image segmentation; contour extraction. CLC number: TP391. Document code: A. Article ID: 1007-2276(2002)03-0208-04.
Method of image segmentation with high anti-noise performance
CAO Yong-feng, ZHENG Jian-sheng, WAN Xian-rong (Department of Communication, College of Electronic Information, Wuhan University, Wuhan 430079, China)
Abstract: The key problem of accurate image segmentation and character extraction is to utilize information of both the whole image and local sections as much as possible. The idea of drainage area, based on whole-image information, takes account of local information when it is applied on the gradient image. An excellent segmentation result and the ability to restrain image noise are obtained when this approach is used in the segmentation of object and background and the overall connectivity of the object to be segmented is introduced. Advantages such as a simple principle, connected single-pixel edge output, accurate edge orientation and simultaneous output of differently labeled areas are proven. The principle and mathematical description of the watershed algorithm are shown, and the disadvantages and resolving strategy are summarized.
Keywords: mathematical morphology; watershed algorithm; anti-noise; image segmentation; contour extraction
Received: 2001-12-04. Author biography: CAO Yongfeng (1976-), male, from Jizhou, Hebei Province, teaching assistant and part-time graduate student, mainly engaged in research on image processing, machine vision, and SAR image processing and analysis.
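In the spirit of the abstract above, the sketch below runs a marker-controlled watershed on a gradient image, with one connected background seed and one object seed standing in for the "global connectivity of the object". The percentile-based marker selection and the Gaussian pre-smoothing are illustrative assumptions, not the authors' exact procedure.

```python
# Minimal marker-controlled watershed sketch: flood the gradient relief from an
# object seed and a background seed.  Marker selection by percentiles is an
# illustrative assumption.
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def segment_object(image):
    """image: 2-D float array.  Returns a boolean object mask."""
    smoothed = ndi.gaussian_filter(image, sigma=1.0)           # suppress noise first
    gx, gy = ndi.sobel(smoothed, axis=0), ndi.sobel(smoothed, axis=1)
    gradient = np.hypot(gx, gy)

    markers = np.zeros(image.shape, dtype=np.int32)
    markers[smoothed < np.percentile(smoothed, 20)] = 1        # background seed
    markers[smoothed > np.percentile(smoothed, 80)] = 2        # object seed

    labels = watershed(gradient, markers=markers)              # flood the gradient relief
    return labels == 2

if __name__ == "__main__":
    y, x = np.mgrid[:128, :128]
    disk = ((x - 64) ** 2 + (y - 64) ** 2 < 30 ** 2).astype(float)
    noisy = disk + 0.3 * np.random.randn(128, 128)             # noisy object on a background
    mask = segment_object(noisy)
```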
2. Enhanced Morphological Building Index (EMBI)
The EMBI algorithm consists of three parts. First, spectral unmixing [15] is applied to the high-resolution multispectral image to extract the impervious-surface fraction image g(x), which characterizes the physical imperviousness of buildings. Next, multi-scale, multi-directional morphological opening and closing by reconstruction are used to build the EMBI feature image. Finally, a decision-tree analysis applies hierarchical thresholding to the EMBI feature image and to the shape features of connected regions, namely the aspect ratio Ratio(x) and the area Area(x), to complete building extraction. The overall flow is shown in Fig. 1.
Fig. 1 Flowchart of the EMBI algorithm
2.1 Extraction of the impervious-surface fraction image. According to the V-I-S (vegetation-impervious surface-soil/water) urban land-cover model established by Ridd et al., buildings are the main constituent of impervious cover among urban land covers. Therefore, the impervious-surface fraction is first extracted as the base feature image for building extraction, to represent the physical properties of buildings; this step mainly uses convex-geometry endmember extraction and linear spectral unmixing. First, according to the V-I-S model, the high-brightness
To make full use of multi-scale morphological features, Benediktsson et al. [20] developed a new morphological operator on the basis of morphological profiles (MP), called the multi-scale
CFO_g(d, s) = R_g^O( R_g^C(d, s) )    (1)
where R_g^C(d, s) is the image obtained by closing-by-reconstruction of the impervious-surface fraction image g, and CFO_g(d, s) is the image obtained by opening-by-reconstruction of R_g^C(d, s). This operator smooths bright and dark details smaller than the structuring element while keeping the overall features stable, improving the homogeneity inside object regions. In the image produced by the CFO computation, buildings appear as bright gray levels; since the white top-hat transform extracts bright structures and removes dark structures from an image, it is very useful for eliminating non-building objects. The white top-hat transform (W-TH) based on the CFO operator is defined as
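The original W-TH definition is cut off above, so the sketch below only illustrates the building blocks: closing-by-reconstruction followed by opening-by-reconstruction (the CFO of Eq. (1)) and a white top-hat taken as the difference between the component image and its CFO. The square structuring element (the paper uses multi-scale, multi-directional linear elements) and the W-TH formula are assumptions made for illustration.

```python
# Sketch of the CFO operator and a white top-hat built on it.  The square
# structuring element and the definition W-TH = g - CFO_g are assumptions,
# since the original multi-directional elements and W-TH formula are not
# reproduced in the excerpt above.
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion
from skimage.morphology import reconstruction

def closing_by_reconstruction(g, size):
    seed = grey_dilation(g, size=(size, size))
    return reconstruction(seed, g, method='erosion')     # seed >= mask, shrink back down

def opening_by_reconstruction(g, size):
    seed = grey_erosion(g, size=(size, size))
    return reconstruction(seed, g, method='dilation')    # seed <= mask, grow back up

def cfo(g, size):
    """Closing-by-reconstruction followed by opening-by-reconstruction (Eq. (1))."""
    return opening_by_reconstruction(closing_by_reconstruction(g, size), size)

def white_top_hat(g, size):
    return np.clip(g - cfo(g, size), 0.0, None)          # bright structures smaller than the element

if __name__ == "__main__":
    g = np.random.rand(100, 100)   # stand-in for the impervious-surface fraction image
    response = white_top_hat(g, size=7)
```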
Retinex Low-Illumination Image Enhancement Algorithm Based on the HSI Space
2. Retinex Theory and Analysis
In the Retinex model, an image I(x, y) is composed of two parts, the reflectance of the objects in the scene and the illumination, also called the reflectance image and the illumination image, denoted R(x, y) and L(x, y) respectively. The idea of Retinex image enhancement is to remove the influence of the ambient illumination from the original image and recover the intrinsic color properties of the objects, thereby achieving enhancement. In the SSR algorithm, the illumination image is assumed to be smooth.
Retinex Enhancement Algorithm for Low Intensity Images Based on HSI Space
Xin Song, Shuhua Xiong, Xiaohai He, Pengxin Kang
College of Electronic and Information Engineering, Sichuan University, Chengdu, Sichuan. Received: Dec. 11th, 2016; accepted: Dec. 26th, 2016; published: Dec. 30th, 2016.
R_i(x, y) = Σ_{k=1}^{M} ω_k { log I_i(x, y) − log[ F_k(x, y, c_k) * I_i(x, y) ] }    (5)
where M is the number of scales used in the MSR algorithm (usually M = 3), and ω_k are the weights, with Σ_k ω_k = 1.
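A compact sketch of Eq. (5) follows: multi-scale Retinex on a single intensity channel with M = 3 Gaussian surround functions and equal weights ω_k = 1/M. The scale values are common choices from the MSR literature and are not necessarily the ones used by the authors.

```python
# Sketch of Eq. (5): multi-scale Retinex with M = 3 Gaussian surrounds and equal
# weights.  The sigma values are assumed, typical MSR choices.
import numpy as np
from scipy.ndimage import gaussian_filter

def msr(intensity, sigmas=(15, 80, 250), eps=1e-6):
    """intensity: 2-D array in (0, 1].  Returns the multi-scale Retinex output R(x, y)."""
    i = np.maximum(intensity, eps)
    log_i = np.log(i)
    r = np.zeros_like(i)
    for sigma in sigmas:                                       # sum over the M scales
        surround = np.maximum(gaussian_filter(i, sigma), eps)  # F_k * I
        r += (log_i - np.log(surround)) / len(sigmas)          # w_k = 1/M
    return r

if __name__ == "__main__":
    img = np.random.rand(128, 128) * 0.2 + 0.01                # stand-in low-light intensity channel
    enhanced = msr(img)
```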
A texture-first adaptive steganography algorithm based on pixel blocks
一种基于像素块的纹理优先自适应隐写算法师夏阳;马赛兰;胡永健;周琳娜【摘要】与平坦区域相比,图像纹理区域在结构上表现出更多的随机性,因此将密信嵌入到纹理丰富的区域可获得更好的安全性.本文利用相邻像素的LSB与次低LSB 位之间的关系,提出一种基于像素块的纹理优先自适应隐写算法,将密信优先嵌入到复杂纹理区域.所设计的纹理判别准则能结合待嵌入密信长度,自适应选择嵌入区域.针对在嵌密过程中嵌密块可能出现的异常情况,提出一种像素值调整方案,并从理论上证明了其可行性.实验结果表明本文算法比两种经典的LSB算法和一种现有的基于边缘优先的自适应算法具有更高的嵌入效率,且对四种有代表性的隐写分析算法通常具有更强的抵御能力.【期刊名称】《电子学报》【年(卷),期】2015(043)006【总页数】7页(P1094-1100)【关键词】LSB隐写;图像纹理;自适应嵌入;安全性;隐写分析【作者】师夏阳;马赛兰;胡永健;周琳娜【作者单位】华南理工大学电子与信息工程学院,广东广州510641;华南理工大学电子与信息工程学院,广东广州510641;华南理工大学电子与信息工程学院,广东广州510641;北京电子技术应用研究所,北京100091【正文语种】中文【中图分类】TP391LSBR(LSB Replacement,最小意义位替换)是目前最常用的隐写方法之一,它在嵌入密信时对载体图像的偶数像素加1或者保持不变,但绝不会减小偶数像素;对奇数像素反之亦然.这种操作规则会导致隐写图像像素值统计上的异常,即使在很小嵌入率下,隐写分析者也能检测到密信的存在[1],某些隐写分析算法甚至还可估计出嵌入密信的长度[2].LSBM(LSB Matching)是LSBR的改进算法,通过对像素值随机±1操作,消除了LSBR引起的统计不对称.与LSBR隐写算法相比,LSBM更难检测.事实上,仅利用单个像素值进行隐写时,LSBM隐写算法近乎最优[3].但LSBM算法最大的弱点在于假设了载体图像像素的LSB位之间是不相关的,不能真实反映自然图像的特性.隐写分析者可利用这个弱点攻破LSBM算法[4~7].不同于LSBR和LSBM算法,LSBMR(LSB Matching Revisited)隐写算法[8]利用相邻像素值对作为嵌入单元,实现了两个像素嵌入两个密信比特,具体为第一个像素的LSB位嵌入第一个密信比特,而利用两个像素值的奇偶关系嵌入余下的那个密信比特.通过像素对关系嵌入机制,算法的安全性有了明显改善.借助文献[8]的嵌入机制,文献[9]用相邻两个像素的差值判别自然图像的边缘,提出了一种优先利用图像边缘特性进行嵌入的方法,改善了安全性.然而,Tan等人[10]利用B-样条函数从含密图像绝对差值直方图拟合出近似原始图像的绝对差值直方图,通过比较拟合前后直方图的局部差异,成功实现了对文献[9]隐写方法的检测.通常利用三个或更多像素作为嵌入单元可提高隐写算法的安全性.最近Omoomi等人[11]提出一种多像素±1隐写算法 (EPES),选择多个像素作为一个嵌入单元,利用相邻像素的LSB与次低LSB位之间的关系,对像素的修改更少,算法的安全性得到进一步改善.本文在文献[11]的基础上提出一种基于图像块的纹理优先自适应嵌入算法,更充分地利用了自然图像的纹理特性,得到比文献[9]更高的嵌入效率和更好的安全性.为了利用一个像素对之间的关系嵌入两个比特密信,文献[11]引入了一个刻画次低LSB位与最低LSB位关系的二值函数B(u,v):这里bi(u)是单变量二值函数,bi(u)=u/2i」mod 2,i=0,1,2…,bi(u)表示像素值u的第i+1个LSB位,且B(u,v)有如下两个性质:性质1 因为b0(v)=v mod 2,而b0(v)为v的LSB,必有,故当u值不变时,v值的±1操作必然引起B(u,v)值的翻转,即性质2 因为b1(u)=u/2」mod 2,而b1(u)为u的次低LSB,必有,故当v值不变时,u 值的±2操作必然引起B(u,v)值的翻转,则有当u为偶数时,b1(u)=b1(u+1),当u为奇数时,b1(u)=b1(u-1),即,∀u,v∈Z.综上,可得二值函数B的性质2:B(u,v)=, ∀u,v∈Z这里分别表示bi,B值的翻转.文献[11]所提出的EPES算法的嵌入过程是将上述B函数作用于两个相邻像素,利用B的两个性质实现l个像素值嵌入l个比特密信.下面简单介绍其做法.首先引入一个差异量di来描述密信与B值之间的差异,即:在密信嵌入过程中,根据差异量值di的变化并结合二值函数B的性质确定将要修改的像素值.这里以两个像素的情形(即l=2)为例具体描述密信的嵌入和抽取机制.记x1,x2和y1,y2分别为载体和含密像素,m1,m2为待嵌密信比特.EPES算法设计其密信提取公式为了做到这一点,其嵌入过程被精心构造.首先根据式(4)计算差异量d1和d2:然后根据差异量d1,d2的四种组合,并结合二值函数B的性质,按表1修改像素值x1,x2.具体为:若d1=0且d2=0,则由式(6)可知,要嵌入的密信m1和m2与两个载体像素x1和x2的B函数满足异或关系,即有m1=B(x1,x2)和m2=B(x2,x1),故在嵌入时对x1和x2不做修改,即y1=x1,y2=x2.则抽取时仍有m1=B(y1,y2),m2=B(y2,y1).若d1=0且d2=1,则由式(6)可知m1=B(x1,x2)且m2≠B(x2,x1).要使m1=B(y1,y2)成立,根据式(2)和式(3),须满足y2=x2或y1=x1+(-1)x1;另一方面,为了使m2=B(y2,y1),可使,则只要y2=x2-(-1)x2或y1=x1±1就行.显然,y1=x1+(-1)x1且y2=x2可同时使式(5)成立.同理可分析d1=1且d2=0和d1=1且d2=1的情况.归总可得表1所示的嵌入方式.若一次嵌入3个比特的密信m1,m2和m3(l=3),则可利用三个载体像素x1,x2和x3,得到含密像素y1,y2和y3,使其满足m1=B(y1,y2),m2=B(y2,y3),m3=B(y3,y1).广义而言,可利用l个载体像素嵌入l个比特密信,其抽取关系仍为将密信嵌入到纹理区域更不易被人眼所察觉,因此本文提出纹理优先的原则自适应选择嵌入区域.首先引入一个简单判别图像局部复杂度的准则.将尺寸为p×q的载体图像划分为互不相交的大小为m×n的块Di,j,定义该块纹理复杂度block(Di,j)为: 
这里Sum(Di,j)为块内全体像素值之和,Min(Di,j)为块内的最小像素值,i∈1,2,…,p/m」,j∈1,2,…,q/n」.对于一个给定阈值T,若block(Di,j)≥T,则认为Di,j是纹理区域;否则,则认为Di,j是平坦区域.本文将密信嵌入到所选中的纹理块中.可通过调整T值来控制嵌入容量.若密信长度为M,则可用下述方法求门限T:这里,D(t)={Di,j|block(Di,j)≥t},i∈1,2,…,p/m」,j∈1,2,…,q/n」,且D(t)为block(Di,j)的值大于或者等于参数t的集合.下面用一个例子具体描述单块密信的嵌入和提取过程.在嵌入密信前,先对每个待嵌块从左上角开始进行Zigzag扫描,生成像素序列.嵌入密信完成后重新还原成像素块.密信提取时与嵌入的扫描方式相同.设块大小为2×2,阈值T=8,扫描得到的像素序列为(x1,x2,x3,x4)=(22,25,27,24),由式(8)可得block(Di,j)=10>8,则此块为待嵌块.若待嵌密信为(m1,m2,m3,m4)=(1,0,1,1),计算差异量di,可得此时,4个比特分别嵌入4个像素对:(x1,x2),(x2,x3),(x3,x4),(x4,x1).根据表1并综合考虑x1,x2,x3,x4这4个像素之间的关系,只需修改x1和x3就行,具体为y1=22-(-1)22=21和y3=27+(-1)27=26.抽取时,扫描此块后像素序列为(y1,y2,y3,y4)=(21,25,26,24),按式(8)进行判断,有,则为嵌入块.利用式(5)抽取密信,有m1=B(21,25)=1,m2=B(25,26)=0,m3=B(26,24)=1,m4=B(24,21)=1.可见所抽取出的密信与原始密信完全一致.上述自适应算法将对≥T的块抽取密信,但可能出现嵌入密信后像素块不满足提取条件的情况.仍用上例来解释,只将待嵌密信改为(m1,m2,m3,m4)=(0,1,1,1).由于此时(d1,d2,d3,d4)=(0,0,0,1),根据4个像素之间的关系,只需修改一个像素x1,且改动为y1=x1+(-1)x1=23.所得嵌密像素序列为(y1,y2,y3,y4)=(23,25,27,24).异常发生在抽取端.当抽取时,由式(8)可知,不满足抽取条件,这就造成密信丢失.本文称这类块为异常块.为解决上述问题,本文提出一种像素值调整方案使得调整后的块仍满足抽取条件.具体做法是对异常块内的每一个像素值进行±4k操作,即为了保证上述操作不影响密信的抽取,调整后块中的像素序列需满足下列条件:定理1 ∀u,v∈N,存在整数k,k=0,±1,±2,…,使得函数B(u,v)的值与函数B(u±4k,v±4k)的值恒相等.证明根据B函数的定义式(1)可得B(u±4k,v±4k)={(u±4k)/2」mod 2} ⊕{(v±4k)mod 2}首先考察式(10)右边异或操作的后项,因为±4k恒为偶数,那么任一正整数v加上±4k,其奇偶性不变,恒有v mod 2==(v±4k) mod 2,即b0(v)=b0(v±4k).又考察式(10)等号右边异或操作的前项(u±4k)/2」mod 2.因为±4k恒为偶数,则有下面分成两种情形进行讨论:(1)当u为偶数时,则u/2」=u/2,且(u/2」+±2k」)的奇偶性与u/2」相同,则(u/2+±2k」)mod 2=(u/2)mod 2,那么b1(u±4k)=b1(u).(2)当u为奇数时,u/2」=(u-1)/2,且(u/2」+±2k」)的奇偶性与u/2」相同,则(u-1)/2+±2k」mod 2=(u-1)/2mod 2,那么b1(u±4k)=b1(u).综上,必有成立.证毕.参数k的大小直接影响异常块中像素值的调整,且存在不同的k值满足调整方案.为了使得对载体图像的改动尽可能小,并使调整后的像素值仍落在0到255范围内,本文提出一种寻找k值的优化方案这里y表示块内的像素值,y′表示块内的像素值.仍用上例解释本文的调整方案.调整前(y1,y2,y3,y4)=(23,25,27,24),有,根据式(11)对含密像素值y进行调整.解得最优调整参数(k1,k2,k3,k4)=(-1,0,0,0),则=(23,25,27,24)+(-4,0,0,0)=(19,25,27,24)此时,显然,调整前后函数B的值不变,但)=19>8,满足抽取条件,即调整后的块仍能被正确地判别含密块.值得指出的是,LSB隐写算法存在一个普遍的问题:当载体图像的像素值为0或255时,对LSB位的-1或+1操作会导致像素值边界的溢出.为解决上述问题,本文对满足嵌入条件的块进行预处理.若预嵌入块中含有0(或者255),则将0变为1(255变为254).这就避免了嵌入溢出问题.5.1 密信嵌入位置分析图1显示5种隐写算法(即LSBM,LSBMR,EALSBMR[9],EPES[11]和本文算法)对含密图像的LSB位平面的影响.实验在灰度图像上进行,且本文算法在嵌入密信时采用3×3分块.传统基于LSB的隐写算法通常假设LSB位的值是随机的,但图1(b)右上角显示天空位置的LSB值有一定的结构性,这里是整块的白或黑,故传统算法或多或少的破坏了载体图像平坦区域的LSB位平面结构.图1(c)显示LSBM算法将密信随机散布在载体图像中,使得平坦区域也嵌入了密信,白色区域出现了黑点而黑色区域也出现了白点,从而降低了图像的视觉质量,影响了隐写算法的安全性.LSBMR是LSBM的改进,比较图1(c)和图1(d)发现,LSBMR算法对平坦区域的LSB位平面结构分布的破坏有所减弱.EPES算法同样是基于LSB的±1修改,不过它利用了相邻像素LSB与次低LSB之间的关系,进一步降低了像素的修改概率,观察图1(e)发现,与LSBMR和LSBM相比,EPES在保留平坦区域结构分布方面有进一步改善.为了减小对自然图像平坦区域结构分布的破坏,EALSBMR利用像素对的差值为依据,把密信比特嵌入到自然图像的边缘.图1(f)显示,EALSBMR对平坦区域的结构保留完好,仅能观察到一些边缘处的毛刺.图1(g)是本文算法的结果,与图1(f)比,几乎观察不到平坦区结构分布的变化.为了更清楚地说明本文算法对平坦区域的影响,我们将图1(b),图1(f)和图1(g)右上角分别放大为图2(a)、图2(b)和图2(c).对比可发现,EALSBMR边缘毛刺较明显,而且在一些全白的区域还出现了较大黑点.图2(c)与图2(a)最接近,原因是本文算法优先将密信比特隐藏到载体图像的纹理区域,最大限度保留了载体图像平坦区域的结构.5.2 嵌入效率比较嵌入效率EE(Embedding Efficiency)定义为嵌入率ER(Embedding Rate)和嵌入改变率CR(Change Rate)的比值[11].同样与LSBM,LSBMR,EALSBMR,EPES进行比较.由于篇幅所限,这里仅给出Baboon,Lena和Boat的实验结果.图像大小为512×512.图3显示在相同嵌入率下本文算法的嵌入效率明显高于LSBM,LSBMR 和EALSBMR隐写算法,而与EPES相近,这是因为本文算法继承了EPES的优点,同时利用相邻像素LSB与次低LSB之间的关系,从而减少了嵌入对像素的修改.由于本文算法与EPES在嵌入位置上有差异,所以两者的曲线有细微的不同.5.3 抗隐写分析攻击能力评估安全性是评估隐写算法性能的重要指标之一.本文使用两种主流和一种最新的通用隐写分析算法来评估隐写算法的安全性,它们按顺序分别是基于直方图极值的隐写分析算法(ALE)[6],基于灰度图像的高阶马尔科夫链(SPAM)隐写分析算法[7]和基于投影空间富模型(Projection Spatial Rich 
Model,PSRM)的隐写分析算法[12].其中文献[12]不同于传统的基于共生矩阵的隐写分析算法(例如文献[7]),对图像残差引入一种新的统计描述子,从而获得更好的检测性能.该算法既可检测空域隐写,也可检测压缩域隐写.本文实验在UCID[13]和NRCS[14]这两个常用的图像库中进行.UCID和NRCS数据库分别包含1338幅TIFF图像和3148幅TIFF图像,其中UCID图像大小分为512×384和384×512两种,NRCS原始图像较大,为计算方便,以图像中心为基准,剪切成512×512大小.全部实验在灰度图像中进行.图4和图5分别显示ALE(10维特征),SPAM(686维特征)和PSRM(12870维特征)对5种隐写算法检测的ROC曲线(接收机工作特性曲线).图4显示,在UCID图像库中,本文算法的ROC曲线明显最偏向于右下,说明本文算法的隐写最难被发现.图5是NRCS图像库的检测结果.在嵌入率为0.5bpp时,图5(a)显示本文算法的ROC 曲线与LSBM,LSBMR,EPES的曲线很接近,说明此时这4种算法的抗隐写分析能力接近.出现这种现象的原因可能是NRCS库中的图像为扫描图像,因此含有扫描设备噪声,对检测所用的局部极值振幅特征影响较大,故干扰了ALE算法的检测结果.另一方面,图5(c)显示SPAM算法的表现.本文算法的ROC曲线最偏向于右下,其次是EPES,它们都低于LSBM,LSBMR和EALSBMR,说明本文算法抗SPAM分析的能力最强.而图5(e)是PSRM算法的表现.LSBM的ROC曲线最偏于左上方,LSBMR,EALSBMR 和EPES三种算法的性能基本接近,此时本文算法的ROC曲线仍位于最右下,比较说明本文算法抗PSRM分析能力最强.当嵌入率增加到0.8bpp时,本文算法的ROC曲线更偏右下方,如图5(b)、图5(d)和图5(f)所示.对于SPAM而言,为了降低维数灾难,SPAM优先以图像的平坦区域为检测对象,对图像纹理区域的特征描述相对弱一些,而本文算法则是优先利用纹理区域进行嵌入,故对SPAM有较明显的抗分析能力.值得指出的是图5(a)和图5(b)显示EALSBMR的ROC曲线较其它四种算法更大幅度地靠近左上角,反映其抗ALE攻击能力最弱.出现上述现象的原因可能是NRCS是扫描图像库,扫描操作引入了扫描噪声,而EALSBMR算法一次只利用两个相邻像素(水平或垂直)的差值来简单判别图像边缘,故这些因扫描而引入的高频噪声对其检测结果有较明显的影响.除了上述抗通用隐写分析能力,本文算法对文献[10]中专用的隐写分析算法也有较好的抵御能力.Tan等人在文献[10]中指出文献[9]的再调整机制会引起含密图像的绝对差分图像直方图(Histogram of the Absolute Difference of the Pixel Pairs,HADPP)的局部脉冲失真,提出利用B-样条拟合得到与原始图像近似的HADPP,通过比较拟合前后直方图的局部变化,可检测是否存在隐写.图6显示,在两种嵌入率下,不论是在UCID图像库还是NRCS图像库,本文算法的ROC曲线与EALSBMR算法的ROC曲线相比都大幅偏向右下,说明它比EALSBMR算法具有强得多的抗此类攻击的能力.我们注意到,在NRCS库中,无论嵌入率为0.5 bpp还是0.8 bpp,EALSBMR算法对文献[10]的隐写分析攻击都非常敏感.本文提出了一种基于像素块的纹理优先自适应隐写算法,与经典的LSB隐写算法(即LSBM和LSBMR)以及现有的基于边缘优先的自适应嵌入算法(即EALSBMR)相比,提高了嵌入效率,更好保护了图像的平坦区域,间接地提高了安全性.针对四种有代表性的隐写分析算法,本文算法与LSBM,LSBMR,EALSBMR以及EPES这4种算法相比,具有更佳的抗隐写分析能力.本文在理论上的贡献是针对自适应嵌入过程中可能出现的异常块问题,提出了一种像素值调整方案,并严格证明了这种方案能确保密信的正确、完整的抽取.我们还具体给出了减少像素值改动的优化方案.目前本文只对3×3的嵌入块进行了讨论,将来会对更大的块和更多的相邻像素(例如位于“+”和“×”形的近邻)进行研究,期望进一步提高算法的嵌入率和安全性.师夏阳男,1978年4月出生,河南省平顶山市人,现为华南理工大学信号与信息处理专业博士研究生,主要从事数字图像信息隐藏、隐写分析和小波分析方面的研究. E-mail:*****************马赛兰女,1989年8月出生,湖北省仙桃市人,现为华南理工大学信号与信息处理专业硕士研究生,主要从事数字图像信息隐藏和隐写分析方面的研究.E-mail:*****************胡永健(通信作者) 男,1962年12月出生,湖北武汉人.教授、博士生导师、中国电子学会和中国计算机学会高级会员、IEEE高级会员.1990年和2002年分别在西安交通大学和华南理工大学获工学硕士和工学博士学位.主要研究方向是多媒体信息安全、信息隐藏和数字图像处理.E-mail:***************.cn周琳娜女,1972年4月出生于湖南省邵阳市,2007年于北京邮电大学获工学博士学位,清华大学博士后,现为北京电子技术应用研究所研究员.主要研究方向为图像分析与处理、信息隐藏、多媒体信息安全.E-mail:**********************.cn【相关文献】[1]王朔中,张新鹏,张卫明.以数字图像为载体的隐写分析进展[J],计算机学报,2009,32(7):1247-1263. 
WANG Shuo-Zhong,ZHANG Xin-Peng,ZHANG Wei-Ming.Recent advances in image-based steganalysis research[J].Chinese Journal of Computers,2009,32(7):1247-1263.(in Chinese)[2]Westfeld A,Pfitzmann A.Attacks on steganographic systems[A].Proceedings of Third International Workshop,Information Hiding[C].Berlin:Springer-Verlag,2000.61-76.[3]Filler T,Fridrich J.Fisher information determines capacity of ε-secure steganography[A].Proceedings of 11th International Workshop,InformationHiding[C].Germany:Darmstadt,2009.31-47.[4]Goljan M,Fridrich J,Holotyak T.New blind steganalysis and itsimplications[A].Proceedings of SPIE 6072,Security,Steganography,and Watermarking of Multimedia Contents VIII[C].San Jose,CA:SPIE,2006.1-13.[5]Harmsen J J,Pearlman W A.Steganalysis of additive-noise modelable information hiding[A].Proceedings of SPIE 5020,Security and Watermarking of Multimedia Contents V[C].Santa Clara,CA:SPIE,2003.131-142.[6]Cancelli G,Doёrr G,Cox I J,et al.Detection of ±1 LSB steganography based on the amplitude of histogram local extrema[A].Proceedings of 15th IEEE International Conference.Image Processing[C].San Diego,CA:IEEE,2008.1288-1291.[7]Pevny T,Bas P,Fridrich J.Steganalysis by subtractive pixel adjacency matrix[J].IEEE Transactions on Information Forensics and Security,2010,5(2):215-224.[8]Mielikainen J.LSB matching revisited[J].IEEE Signal Processing Letters,2006,13(5):285-287.[9]Luo W,Huang F,Huang J.Edge adaptive image steganography based on LSB matching revisited[J].IEEE Transactions on Information Forensics and Security,2010,5(2):201-214. [10]Tan S,Li B.Targeted steganalysis of edge adaptive image steganography based on LSB matching revisited using B-Spline fitting[J].IEEE Signal Processing Letters,2012,19(6):336-339.[11]Omoomi M,Samavi S,Dumitrescu S.An efficient high payload ±1 data embedding scheme[J].Multimedia Tools and Applications,2011,54(2):201-218.[12]Holub V,Fridrich J.Random projections of residuals for digital imagesteganalysis[J].IEEE Transactions on Information Forensics and Security,2013,8(12):1996-2006.[13]Schaefer G,Stich M.UCID:an uncompressed color image database[A].Proceedings of the Storage and Retrieval Methods and Applications for Multimedia[C].SanJose,CA:SPIE,2004.472-480.[14]United States Department of Agriculture.Natural Resources Conservation Service Photo Gallery.[OL].Available:.2013-05-09.。
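To make the two primitives of the algorithm above concrete, the sketch below implements the binary function B(u, v) of Eq. (1) (second-lowest bit of u XOR lowest bit of v) and the block texture measure block(D) = Sum(D) − m·n·Min(D) of Eq. (8), together with the cyclic bit extraction m_i = B(y_i, y_{i+1}). The embedding table, the threshold search and the ±4k adjustment of abnormal blocks are not reproduced; the values in the demo are the worked example from the paper.

```python
# Sketch of the primitives defined above: B(u, v), the block complexity, and the
# cyclic extraction rule.  The embedding table and the +/-4k adjustment are omitted.
import numpy as np

def bit(u, i):
    return (u >> i) & 1            # b_i(u) = floor(u / 2**i) mod 2

def B(u, v):
    return bit(u, 1) ^ bit(v, 0)   # Eq. (1)

def block_complexity(block):
    """block: 2-D integer array; larger values indicate richer texture (Eq. (8))."""
    return int(block.sum()) - block.size * int(block.min())

def extract_bits(pixels):
    """Recover one message bit per pixel: m_i = B(y_i, y_{i+1}), indices cyclic."""
    n = len(pixels)
    return [B(int(pixels[i]), int(pixels[(i + 1) % n])) for i in range(n)]

if __name__ == "__main__":
    # Worked example from the paper: block (22,25,27,24) has complexity 10,
    # and the stego pixels (21, 25, 26, 24) carry the bits 1, 0, 1, 1.
    print(block_complexity(np.array([[22, 25], [27, 24]])))    # -> 10
    print(extract_bits([21, 25, 26, 24]))                      # -> [1, 0, 1, 1]
```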
Digital Image Processing (Gonzalez), lecture slides (English), Chapter 03: Image Enhancement in the Spatial Domain
Power-Law Transformations: Gamma Correction. Application: aerial images after gamma correction.
(Figure panels: desired image, image displayed at monitor, and image displayed at monitor after gamma correction. Images from Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, 2nd Edition.)
Mapping a smaller input range Dr onto a wider output range Ds increases contrast.
Gray Level Slicing
Histogram of an Image (cont.)
A low-contrast image has a narrow histogram, while a high-contrast image has a wide histogram.
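The sketch below illustrates the power-law transform s = c·rᵞ from the slides and a simple check of histogram width as a proxy for contrast. It is a generic illustration, not code from the textbook; the example data and the spread measure are assumptions.

```python
# Sketch of the power-law (gamma) transform s = c * r**gamma and a quick
# histogram-width check: low-contrast images occupy a narrow histogram,
# high-contrast images a wide one.  Example data are synthetic.
import numpy as np

def gamma_correct(image, gamma, c=1.0):
    """image: float array scaled to [0, 1]."""
    return c * np.power(image, gamma)

def histogram_spread(image, bins=256):
    hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0))
    occupied = np.nonzero(hist)[0]
    return (occupied[-1] - occupied[0]) / (bins - 1)   # fraction of the gray range used

if __name__ == "__main__":
    dark = np.random.rand(64, 64) * 0.25               # dark, low-contrast stand-in image
    corrected = gamma_correct(dark, gamma=0.4)         # gamma < 1 brightens mid-tones
    print(histogram_spread(dark), histogram_spread(corrected))
```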
Intensity gradient based registration and fusion of multi-modal images
Intensity gradient based registration and fusionof multi-modal imagesEldad Haber1and Jan Modersitzki21Mathematics and Computer Science,Emory University,Atlanta,GA,USA,haber@mathcs.emory,edu2Institute of Mathematics,L¨u beck,Germany,modersitzki@math.uni-luebeck.de Abstract.A particular problem in image registration arises for multi-modal images taken from different imaging devices and/or modalities.Starting in1995,mutual information has shown to be a very successfuldistance measure for multi-modal image registration.However,mutualinformation has also a number of well-known drawbacks.Its main dis-advantage is that it is known to be highly non-convex and has typicallymany local maxima.This observation motivate us to seek a different image similarity mea-sure which is better suited for optimization but as well capable to handlemulti-modal images.In this work we investigate an alternative distancemeasure which is based on normalized gradients and compare its perfor-mance to Mutual Information.We call the new distance measure Nor-malized Gradient Fields(NGF).1IntroductionImage registration is one of today’s challenging medical image processing prob-lems.The objective is tofind a geometrical transformation that aligns points in one view of an object with corresponding points in another view of the same object or a similar one.An important area is the need for combining informa-tion from multiple images acquired using different modalities,sometimes called fusion.Typical examples include the fusion of computer tomography(CT)and magnetic resonance(MRI)images or of CT and positron emission tomography (PET).In the past two decades computerized image registration has played an increasingly important role in medical imaging(see,e.g.,[2,10]and references therein).One of the challenges in image registration arises for multi-modal images taken from different imaging devices and/or modalities.In many applications, the relation between the gray values of multi-modal images is complex and a functional dependency is generally missing.However,for the images under con-sideration,the gray value patterns are typically not completely arbitrary or random.This observation motivated the usage of mutual information(MI)as a distance measure between two images cf.[4,18].Starting in1995,mutual in-formation has shown to be a successful distance measure for multi-modal imageregistration.Therefore,it is considered to be the state-of-the-art approach to multi-modal image registration.However,mutual information has a number of well-known drawbacks;cf.e.g.,[14,13,12,17,20].Firstly,mutual information is known to be highly non-convex and has typically many local maxima;see for example the discussion in [10],[20],and Section3.Therefore,the non-convexity and hence non-linearity of the registration problem is enhanced by the usage of mutual information. Secondly,mutual information is defined via the joint density of the gray value distribution,and therefore,approximations of the density are required.These approximations are nontrivial to compute and typically involve some very sen-sitive smoothing parameters(e.g.a binning size or a Parzen window width,see [16]).Finally,mutual information decouples the gray value from the location information.Therefore judging the output of the registration process is difficult. 
These difficulties had stem a vast amount of research into mutual information registration,introducing many nuisance parameters to help and bypass at least some of the difficulties;see,e.g,[14].These observations motivate us to seek a different image similarity measure which is capable to handle multi-modal images but better suited for optimization and interpretation.In this paper we investigate an alternative distance measure which is based on normalized gradients.As we show,the alternative approach is deterministic,simpler,easier to interpret,fast and straightforward to implement, faster to compute,and also much more suitable to optimization.The idea of using derivatives to characterize similarity between images is based on the observation that image structure can be defined by intensity changes. The idea is not new.In inverse problems arising in geophysics,previous work on joint inversion[9,21,7]discussed the use of gradients in order to solve/fuse inverse problems of different modalities.In image registration,a more general framework similar to the one previously suggested for joint inversion was given in[6]and[14].Our approach is similar to[14]however,we have some main differences.Unlike the work in[14]we use only gradient information and avoid using mutual information.Furthermore,our computation of the gradient is less sensitive and allow us to deal with noisy images.This allows us to use simple optimization routines(e.g.Gauss Newton).The rest of the paper is organized as follows:In Section2we shortly lay the mathematical foundation of image registration.Section3presents an illustrative example showing some of the drawbacks of mutual information.In Section4we discuss the proposed alternative image distance measure.Finally,in Section5 we demonstrate the effectiveness of our method.2The mathematical settingGiven a reference image R and a template image T,the goal of image registration is tofind a“reasonable”transformation,ϕ,such that the“distance”between the reference image and a deformed template image is small.In this paper we use both affine linear and nonparametric registration.In the linear registration,we set the transformationϕtoϕ(γ,x)= γ1γ2γ3γ4x1x2+γ5γ6.(1)Given a distance measure D,the registration problem is tofind a minimizerγoff(γ):=DR(·),T(ϕ(γ,·)).(2)For nonparametric registration we setϕ=x+u(x)and use regularizationminimizingf(γ):=DR(x),T(x+u))+αS(u).(3)whereαS(u)is a regularization term such as elastic(see[10]).3An illustrative exampleTo emphasize the difficulty explained above,we present an illustrative example. Figure1shows a T1and a T2weighted magnetic resonance image(MRI)of a brain.Since the image modalities are different,a direct comparison of gray values is not advisable and we hence study a mutual information based approach.Figure1c)displays our approximation to the joint density which is based on a kernel estimator,where the kernel is a compactly supported smooth func-tion.Note that the joint density is completely unrelated to the spatial image content(though there is some interpretation[11]).We now slide the template along the horizontal axis.In the language of equation(1),wefixγ1,...,5and obtain the transformed image by changingγ6.Figure1e)shows the negative mutual information versus the shift ranging from-2to2pixels.Thisfigure clearly demonstrates that mutual information is a highly non-convex function with respect to the shift parameter.We have experimented with different interpolation methods and all have resulted in similar behavior. 
In particular,the curve suggests that there are many pronounced local minima which are closed in value to the global minimum.Our particular example is not by any means pathologic.Similar,and even less convex curves of Mutual Information appear also in[20]pp293who used different interpolation.Figure1d)displays a typical visualization of our alternative NGF distance between R and T(discussed in the next section).Note that for the alternative distance measure,image differences are related to spatial positions.Figure1f) shows the alternative distance measure versus the shift parameter.For this par-ticular example,it is obvious that the alternative measure is capable for multi-modal registration and it is much better suited for optimization.4A simple and robust alternativeThe alternative multi-modal distance measure is based on the following simple though general interpretation of similarity:(a)(b)(c)(d)(e)(f)Fig.1.Distance measures versus shifts;(a)Original BrainWeb [3]T1(b)Original Brain-Web [3]T2(c)the joint density approximation for the images,(d)the normalized gradient field d d (T,R )for the images,(e)negative mutual information versus shift,(f)normalized gradient field versus shift.two image are considered similar,if intensity changes occur at the same locations.An image intensity change can be detected via the image gradient.Since the mag-nitude of the gradient is dependent upon the modality of the image,it would be unwise to base an image similarity measure on gradient magnitude.We therefore consider the normalized gradient field,i.e.,the local orientation,which is purely geometric information.n (I,x ):=∇I (x )∇I (x ) ,∇I (x )=0,0,otherwise.(4)For two related points x in R and ϕ(x )in T or,equivalently,x in T ◦ϕ,we look at the vectors n (R,x )and n (T ◦ϕ,x ).These two vectors form an angle θ(x ).Since the gradient fields are normalized,the inner product (dot-product)of the vectors is related to the cosine of this angle,while the norm of the outer product (cross-product)is related to the sine.In order to align the two images,we can either minimize the square of the sine or,equivalently,maximize thesquare of the cosine.Thus we define the following distance measuresd c(T,R)= n(R,x)×n(T,x) 2;D c(T,R)=12Ωd c(T,R)d x,(5)d d(T,R)= n(R,x),n(T,x) 2;D d(T,R)=−12Ωd d(T,R)d x,(6)Note that from an optimization point of view,the distances D c or D d are equiv-alent.The definition of the normalized gradientfield(4)is not differentiable in areas where the image is constant and highly sensitive to small values of the gra-dientfield.To avoid this problem we define the following regularized normalized gradientfieldsn E(I,x):=∇I(x)∇I(x) E, ∇I(x) E:=∇I(x) ∇I(x)+E2.(7)In regions where E is much larger than the gradients the maps n E(I,x)are almost zero and therefore do not have a significant effect on the measures D c or D d,respectively.Indeed,the choice of the parameter E in(7)answers the question,“what can be interpreted as a jump?”.As suggested in[1],we propose the following automatic choice:E=ηVΩ|∇I(x)|d x,(8)whereηis the estimated noise level in the image and V is the volume of the domainΩ.The measures(5)and(6)are based on local quantities and are easy to com-pute.Another advantage of these measures is that they are directly related to the resolution of the images.This property enables a straightforward multi-resolution approach.In addition,we can also provide plots of the distancefields d c and d d,which enables a further analysis of image similarity;see,e.g.,Fig-ure1d).Note that in particular d 
c=0everywhere if the images match perfectly. Therefore,if in some areas the function d c takes large values,we know that these areas did not register well.5Numerical ExperimentsIn our numerical experience,we have use various examples with results along the same lines.Since it is impossible to show every result we restrict ourselves to three illustrative and representative examples.In thefirst example we use the images in Figure1.We take the T1image and generate transformed versions of the image by using an affine linear trans-formation(1).We then use the transformed images to recoverγand the original image.The advantage of this synthetic experiment is that it is controlled,i.e., we know the exact answer to the registration problem and therefore we are able(a)(b)(c)(d)(e)(f)Fig.2.Experiments with Viola’s example;(a)reference R ,(b)template T ,(c)reg-istered T ,(d)overlay of T and R (202pixels checkerboard presentation),(e)cross product n T ×n R ,(f)joint density at the minimum.to test our algorithm under a perfectly controlled environment.We randomly choose γand then run our code.Repeating the experiment 100times we find that we are able to converge to the correct γwith accuracy of less than 0.25pixels.In the second example we use the images from Viola’s Ph.D thesis [18].We compare the results of both MI and our NGF registration.The difference between the MI registration and the NGF registration was less than 0.25of a pixel,thus we conclude that the methods give virtually identical minima.However,to obtain the minima using MI we needed to use a random search technique to probe the space which is rather slow.The global MI minima has the value of about −9.250×10−2while the guess γ=0has the value of about −9.115×10−2.Thus the “landscape”of the MI function for this example is similar to the one plotted in Figure 1.In comparison,our NGF algorithm used only 5iteration on the finest grid.The registration was achieved in a matter of seconds and no special space probing was needed to obtain the minima.The value of the NGF function at γ=0was −4.63×101while at the minima its value was −2.16×102thus our minima is much deeper compared with the MI minima.The results of our experiments are also presented in Figure 2.(a)(b)(c)(d)Fig.3.Experiments with PET/CT images:(a)reference R,(b)template T,(c)MI registered T and and deformed grid,(d)NGF registered T and deformed grid;Another advantage of our method is the ability to quickly evaluate the reg-istration result by looking at the absolute value of the cross-product|n T×n R|. 
This product has the same dimension as the images and therefore can be viewed in the same scale.If the match is perfect then the cross product should vanish and therefore any deviation from zero implies a imperfectfit.Such deviations are expected to be present due to two different imaging processes,the noise in the images and possible additional nonlinear features.Figure2e)shows the cross product for the image matched above.It is evident that the matching is very good besides a small number of locations where we believe the image to be noisy.The previous two examples demonstrate the ability of our new algorithm to successfully perform parametric registration.Nevertheless,when the number of parameters is very large then stochastic optimization techniques stall and differentiable smooth distance measure functions are much more important.We therefore test our algorithm on a PET/CT registration,see Figure3.We perform an elastic registration of CT and a PET images(transmission)of a human thorax.For this example,the differences between the MI and NGF approach are very small;cf.Figure3.For the NGF no special effort was needed in order to register the images.For MI we needed a good starting guess to converge to a local minima.References1.U.Ascher,E.Haber and H.Haung,On effective methods for implicit piecewisesmooth surface recovery,SISC28(2006),no.1,339–358.2.Lisa Gottesfeld Brown,A survey of image registration techniques,ACM ComputingSurveys24(1992),no.4,325–376.3. C.A.Cocosco,V.Kollokian,R.K.-S.Kwan,and A.C.Evans,BrainWeb MR sim-ulator,Available at http://www.bic.mni.mcgill.ca/brainweb/.4. A.Collignon,A.Vandermeulen,P.Suetens,and G.Marchal,multi-modality med-ical image registration based on information theory,Kluwer Academic Publishers: Computational Imaging and Vision3(1995),263–274.5.J.E.Dennis and R.B.Schnabel,Numerical methods for unconstrained optimizationand nonlinear equations,SIAM,Philadelphia,1996.6.M.Droske and M.Rumpf,A variational approach to non-rigid morphological reg-istration,SIAM Appl.Math.64(2004),no.2,668–687.7.L.A.Gallardo and M.A.Meju,Characterization of heterogeneous near-surface ma-terials by joint2d inversion of dc resistivity and seismic data,Geophys.Res.Lett.30(2003),no.13,1658–1664.8.G.Golub,M.Heath,and G.Wahba,Generalized cross-validation as a method forchoosing a good ridge parameter,Technometrics21(1979),215–223.9. 
E.Haber and D.Oldenburg,Joint inversion a structural approach,Inverse Prob-lems13(1997),63–67.10.J.Modersitzki,Numerical methods for image registration,Oxford,2004.11.H.Park,P.H.Bland,K.K.Brock,and C.R.Meyer,Adaptive registration usinglocal information measures,Medical Image Analysis8(2004),465–473.12.Josien PW Pluim,J.B.Antoine Maintz,and Max A.Viergever,Image registrationby maximization of combined mutual information and gradient information,IEEE TMI19(2000),no.8,809–814.13.J.P.W.Pluim,J.B.A.Maintz,and M.A.Viergever,Interpolation artefacts in mu-tual information based image registration,Proceedings of the SPIE2004,Medical Imaging,1999(K.M.Hanson,ed.),vol.3661,SPIE,1999,pp.56–65.14.,Mutual-information-based registration of medical images:a survey,IEEETransactions on Medical Imaging22,1999,986–1004.15.L.Rudin,S.Osher and E.Fatemi,Nonlinear total variation based noise removalalgorithms,Proceedings of the eleventh annual international conference of the Cen-ter for Nonlinear Studies on Experimental mathematics:computational issues in nonlinear science,1992,259–268.16.R.Silverman,Density estimation for statistics and data analysis,Chapman&Hall,1992.17.M.Unser and P.Th´e venaz,Stochastic sampling for computing the mutual infor-mation of two images,Proceedings of the Fifth International Workshop on Sam-pling Theory and Applications(SampTA’03)(Strobl,Austria),May26-30,2003, pp.102–109.18.Paul A.Viola,Alignment by maximization of mutual information,Ph.D.thesis,Massachusetts Institute of Technology,1995.19.G.Wahba,Spline models for observational data,SIAM,Philadelphia,1990.20.Terry S.Yoo,Insight into images:Principles and practice for segmentation,regis-tration,and image analysis,AK Peters Ltd,2004.21.J.Zhang and F.D.Morgan,Joint seismic and electrical tomography,Proceedingsof the EEGS Symposium on Applications of Geophysics to Engineering and Envi-ronmental Problems,1996,pp.391–396.。
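The following is a minimal sketch of the NGF construction in Eqs. (4)-(8) of the paper above: the regularized normalized gradient fields n_ε(I, x) and the dot-product distance D_d(T, R), with the edge parameter ε set from a user-supplied noise estimate η as in Eq. (8). It is written for 2-D images and drops the constant 1/2 of Eq. (6); neither simplification comes from the paper.

```python
# Sketch of the NGF measure (Eqs. (4)-(8)): regularised normalised gradient fields
# and the dot-product distance D_d.  The noise level eta is a user-supplied
# estimate; the 1/2 factor of Eq. (6) is dropped here.
import numpy as np

def edge_parameter(image, eta):
    """eps = eta * mean(|grad I|), i.e. Eq. (8) with the integral written as a mean."""
    gy, gx = np.gradient(image)
    return eta * np.mean(np.sqrt(gx ** 2 + gy ** 2))

def normalized_gradient_field(image, eps):
    gy, gx = np.gradient(image)
    norm = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)      # Eq. (7): ||grad I||_eps
    return gx / norm, gy / norm

def ngf_distance(reference, template, eta=0.1):
    """D_d(T, R): negative mean squared inner product of the two fields (Eq. (6))."""
    rx, ry = normalized_gradient_field(reference, edge_parameter(reference, eta))
    tx, ty = normalized_gradient_field(template, edge_parameter(template, eta))
    return -np.mean((rx * tx + ry * ty) ** 2)

if __name__ == "__main__":
    r = np.random.rand(64, 64)
    # perfectly aligned images give the most negative value; unrelated images a value near 0
    print(ngf_distance(r, r), ngf_distance(r, np.random.rand(64, 64)))
```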
[Research Progress] An Introduction to Quantitative Susceptibility Mapping and Its Applications
Quantitative susceptibility mapping (QSM) is a relatively new imaging technique.
Compared with conventional contrasts, the distinguishing feature of QSM is that the image contrast comes entirely from the phase of the image rather than from the magnitude of the susceptibility-weighted signal, and it reflects the magnetic susceptibility effects that different materials produce in the magnetic field.
Below we give a brief introduction to QSM.
The origin of susceptibility contrast. There is currently no universally agreed, definitive physical account of magnetic susceptibility.
The most direct effect of susceptibility is a local shift of the magnetic field, which shows up in the phase images of gradient-echo acquisitions.
Because the imaged magnetization is real-valued, in principle no phase should arise; however, the material itself produces an additional susceptibility effect in the magnetic field, so an extra phase can be observed in gradient-echo images.
Therefore, if we can obtain the phase image after filtering out the background phase (which mainly originates from the receive coils), we can invert it to recover the susceptibility information of the object.
Going from the phase image of the material (acquired with a gradient-echo sequence) to its susceptibility map involves three main steps: 1) phase unwrapping; 2) background phase removal; 3) deconvolution of the phase into susceptibility [1].
Because susceptibility data are usually acquired in 3D, all three steps are computationally demanding, and new techniques keep appearing to improve their efficiency and results.
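Purely to show the data flow of the three steps listed above, the schematic below chains deliberately simple stand-ins: axis-wise phase unwrapping, Gaussian high-pass background removal, and a thresholded k-space dipole division (TKD-style). Production QSM uses far more sophisticated algorithms for each step (e.g., Laplacian or region-growing unwrapping, SHARP/PDF background removal, regularized dipole inversion); none of the choices below is drawn from the article.

```python
# Schematic QSM pipeline with simplified stand-ins for each of the three steps:
# unwrap -> background removal -> dipole inversion.  All choices are assumptions
# for illustration only.
import numpy as np
from scipy.ndimage import gaussian_filter

def unwrap_phase(phase):
    for axis in range(phase.ndim):                       # crude axis-wise unwrapping
        phase = np.unwrap(phase, axis=axis)
    return phase

def remove_background(field, sigma=8.0):
    return field - gaussian_filter(field, sigma)         # keep only the local (tissue) field

def dipole_kernel(shape):
    kz, ky, kx = np.meshgrid(*[np.fft.fftfreq(n) for n in shape], indexing="ij")
    k2 = kx ** 2 + ky ** 2 + kz ** 2
    with np.errstate(divide="ignore", invalid="ignore"):
        d = 1.0 / 3.0 - kz ** 2 / k2                      # B0 assumed along the first axis
    d[k2 == 0] = 0.0
    return d

def tkd_inversion(local_field, threshold=0.2):
    d = dipole_kernel(local_field.shape)
    d_inv = np.where(np.abs(d) > threshold, 1.0 / d, 0.0)    # truncated k-space division
    return np.real(np.fft.ifftn(np.fft.fftn(local_field) * d_inv))

if __name__ == "__main__":
    phase = np.random.randn(32, 32, 32) * 0.1             # stand-in gradient-echo phase volume
    chi = tkd_inversion(remove_background(unwrap_phase(phase)))
```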
Applications of susceptibility mapping. Because susceptibility provides a new contrast different from previous images, it can give entirely new information for diagnosis or research.
Since the susceptibility distribution of in-vivo tissue usually varies slowly and follows tissue morphology, susceptibility changes caused by a lesion are relatively easy to distinguish.
Moreover, susceptibility can be measured fully quantitatively: its unit is the fractional deviation from the main field (ppm, parts per million; ppb, parts per billion), and the measured susceptibility values are proportional to the density or total amount of the material.
In addition, because susceptibility acquisition itself uses a 3D gradient-echo sequence, clinical data acquisition is relatively simple and efficient; since 2009, QSM has been tried in many body regions and on many types of lesions.
01 High-resolution brain imaging: the brain is the most structurally complex organ of the human body, and researchers keep pushing the limits of high-resolution brain imaging.
An iterative AQIM watermarking algorithm based on a visual model
2. The AQIM Algorithm Based on the Watson Visual Model (Acta Electronica Sinica, 2010)
QIM first modulates an index (or an index sequence) with the message to be embedded (the watermark), and then quantizes the host work with the corresponding quantizer (or quantizer sequence), thereby embedding the message:
y = q(x, m)    (1)
where x denotes the host vector of the work that carries the message, m denotes the message to be embedded, q(·) denotes the quantization function, and y denotes the quantized host vector. In traditional QIM, however, the quantization step is independent of the embedded message. To balance the imperceptibility and robustness of the embedded message, Cox et al. proposed the AQIM algorithm, which uses the contrast-masking threshold s(i, j, k) of the Watson visual model [13,14] as the quantization step. When t_L(i, j, k) ≥ C_0(i, j, k), the step is step_E = t_L(i, j, k); with q_E = C_0 / step_E, m_E = round(q_E) and δ = q_E − m_E (so that |δ| ≤ 1/2), the watermarked coefficient C_w is chosen according to the following cases (Eqs. (6)-(8)):
Case 1 (mod(m_E, 2) = W):  C_w = m_E · step_E
Case 2 (mod(m_E, 2) ≠ W):  C_w = (m_E + 1) · step_E if δ ≥ 0, and C_w = (m_E − 1) · step_E if δ < 0.
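As a minimal illustration of Eq. (1) and the case rules above, the sketch below embeds one bit per coefficient with a scalar uniform quantizer whose step is supplied per coefficient, standing in for the Watson contrast-masking threshold s(i, j, k). The per-coefficient step values in the demo are invented, and the iterative refinement of the step used by the actual AQIM paper is not reproduced.

```python
# Sketch of scalar QIM embedding (Eq. (1)) with a per-coefficient step standing in
# for the Watson threshold.  The step values below are invented; the iterative
# AQIM refinement is not reproduced.
import numpy as np

def qim_embed(c0, bit, step):
    """Quantise coefficient c0 so that the parity of the quantisation index carries `bit`."""
    q = c0 / step
    m = int(np.round(q))
    if m % 2 == bit:                       # Case 1: index parity already matches the bit
        return m * step
    delta = q - m                          # Case 2: move to the nearer admissible index
    return (m + 1) * step if delta >= 0 else (m - 1) * step

def qim_extract(cw, step):
    return int(np.round(cw / step)) % 2

if __name__ == "__main__":
    steps = np.array([4.0, 6.0, 5.0, 8.0])          # hypothetical per-coefficient perceptual steps
    coeffs = np.array([13.2, -7.9, 22.4, 3.1])
    bits = [1, 0, 1, 1]
    marked = [qim_embed(c, b, s) for c, b, s in zip(coeffs, bits, steps)]
    print([qim_extract(c, s) for c, s in zip(marked, steps)])   # -> [1, 0, 1, 1]
```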
A Survey of Remote Sensing Image Scene Classification
Summary of Remote Sensing Image Scene Classification. QIAN Yuanyuan, LIU Jinfeng* (School of Information Engineering, Ningxia University, Yinchuan 750021, China). Abstract: With the progress of science and technology, the demand for remote sensing scene imagery is growing, with wide applications in urban supervision, resource exploration, natural disaster detection and other fields.
As a basic image-processing task that has attracted much attention, remote sensing scene classification has seen many methods proposed by scholars in recent years.
According to whether labels are involved in the classification, this paper reviews recent research methods from three perspectives: supervised, unsupervised and semi-supervised classification.
Combining the characteristics of remote sensing images, the advantages and disadvantages of these three kinds of methods are analyzed, and their differences and performance on benchmark datasets are compared.
Finally, the problems and challenges faced by remote sensing image scene classification methods are summarized, and an outlook is given.
Keywords: remote sensing image scene classification; supervised classification; unsupervised classification; semi-supervised classification. CLC number: TP391. Document code: A. Article ID: 1009-3044(2021)15-0187-00.
Summary of Remote Sensing Image Scene Classification
QIAN Yuan-yuan, LIU Jin-feng* (School of Information Engineering, Ningxia University, Yinchuan 750021, China)
Abstract: With the progress of science and technology, the application demand of remote sensing image scenes increases gradually; they are widely used in urban supervision, resource exploration, natural disaster detection and other fields. As a basic image processing task, many scholars have proposed various methods to classify the scene of remote sensing images in recent years. This paper introduces the research methods of recent years from the three aspects of supervised classification, unsupervised classification and semi-supervised classification. Then, combined with the features of remote sensing images, the advantages and disadvantages of these three methods are analyzed, and the differences between them and their performance on the datasets are compared. Finally, the problems and challenges of remote sensing image scene classification are summarized and prospected.
Keywords: remote sensing image scene classification; unsupervised classification; supervised classification; semi-supervised classification
0 Introduction: Remote sensing image scene classification uses an algorithm to classify an input remote sensing scene image and decide which category the image belongs to.
TITANImage User Manual
前言人类生活在茫茫宇宙之中的一颗小小星球—地球上,时间和地理位置构成了人类生存的四维空间,离开了时空定位,我们的生活将因失去参照而不知所踪。
人类在漫漫的历史长河中,用各类介质(如笔、纸、文字、图纸等)记录和描述着人类的生存空间。
加拿大人率先提出并建立了地理信息系统(GIS)。
地理信息系统与其它传统意义的信息系统的根本区别在于,它不仅能够存储、分析和表达现实世界中各种对象的属性信息,而且能够处理其空间定位信息,将空间和属性信息有机地结合起来,从空间和属性两个方面对现实对象进行查询、检索和分析,并将结果以各种直观的形式,形象、精确地表达出来。
随着空间技术、计算机技术、通信技术的飞速发展,人类利用遥感(RS)、全球定位系统(GPS)大大提高了改造生存空间的能力。
北京东方泰坦科技有限公司在国际著名的加拿大阿波罗科技集团GIS软件基础上,开发了具有自主知识产权的TITAN 系列软件。
TITAN 系列软件具有技术起点高、稳定性好、二次开发能力强、使用方便灵活、性能价格比高等突出特点,是包括地理信息系统、图像处理软件、三维虚拟景观软件、三维实体建模软件、WEB地理信息系统、矢量化软件、海量数据矢量库和影像库管理的空间信息处理整体解决方案。
TITAN 软件可广泛应用于教育、科研、铁路、交通、民航、电力、水利、航空航天、测绘、公安、军队、国土资源调查、土地规划利用、城市规划、林业资源管理、地下管网管理、社区管理、国防建设、数字油田、数字城市等行业和领域。
总之,哪里有空间信息处理问题,哪里就需要TITAN 软件。
北京东方泰坦科技有限公司奉行“客户至上、信誉第一”的经营理念,遵循“诚信、敬业、务实、创新、发展”的原则,借鉴国际上最先进的软件技术,现代化的管理经验,开发出中国自主知识产权的优秀软件产品,并提供售前、售中、售后的全过程、全方位的优质服务。
用户的需求,就是我们努力的目标。
朋友们,让我们以中华民族的团结奋斗、不屈不饶的民族精神,在世界软件之林共建中国的软件王国,共享 TITAN 软件带来的丰硕成果。
NEC MultiSync PA322UHD 32-inch Professional Display — Product Description
NEC MultiSync® PA322UHDProfessional Reference 10-bit IPS type with IGZO technology and W-LED Display unites high reliability, uncompromising image quality and highly accurate colour reproduction. The PA322UHD allows usage error-free, intensive viewing. The included SpectraView II application software works with the in-built SpectraView Engine 14-bit LUT and an external sensor (optional) to deliver the outstanding hardware calibration with life-like colours.With the Ambient Light Sensor and Carbon Footprint Meter, the Display also promotes professional green productivity and minimises the life-cycle environmental impact.The ideal display for all mission critical applications, creative professionals, designers, photographers, CAD-CAM, video-editing, finance, precision engineering, medical imaging, broadcasting and industrial applications (e.g.NDT) and anyone who cares about their visual work.BENEFITSAmbient Light Sensor - with Auto Brightness function always sets optimised brightness level according to ambient light and content conditions.Enhanced imaging performance - advanced settings of all relevant visual parameters for full control of brightness, colour, gamma and uniformity with Spectraview Engine.Ergonomic Office - full height adjustability (150 mm), swivel, tilt and pivot functionality ensures perfect individual ergonomic set-up.Stunning "pixel-free" ergonomic viewing - delivered by an Ultra High Definition (3,840 x 2,160) Professional LED backlit 10-bit IPS type LCD panel with IGZO technology.Reliable colour reproduction - due to 10-bit colour performance, wide gamut covering 99% of AdobeRGB, and hardware calibratable 14-bit LUT for accurate image presentation.Future Ready with Extension Slot based on OPS Form Factor - upgrade the capabilities of your display at any time without the need for external cables or devices.Uncompromising Image Quality - full colour control thanks to 10-bit IPS, Digital Uniformity Control, 14-bit hardware LUT, SpectraView Engine performance and SpectraView II calibration control.Advanced Picture by Picture - simultaneously show up to 4 different video inputs.。
Spectrum Living Image 4.0 Software Operating Procedure
Spectrum Living Image 4.0软件操作流程本流程主要介绍Spectrum 的Living Image 4.0软件的应用,包括参数介绍、生物发光、荧光二维及三维成像的图像获取及数据分析1、 控制面板参数介绍整个图像的获取过程都通过该控制面板操作,其中曝光时间、bin 值及光圈大小是最为关键的参数,直接影响图像获取的质量。
曝光时间(exposure time )曝光时间决定了信号强度,曝光时间越长,信号强度越高,实验时应根据实际情况设定适当的曝光时间,初次实验若不知道合适曝光时间,可选择Auto 曝光,仪器会自动检测一个合适的曝光时间以成像,当信号较弱时可适当增加曝光时间以获取较佳信号。
注:生物发光成像的曝光时间为分钟(min )级别,而荧光成像的曝光时间为秒(sec )级别,实验时应注意参数设定。
关键参数,决定灵敏度Bin 值(Binning )Bin 值表述的是将多少像素作为感光单元,bin 值越大则表示应用更多的像素作为一个感光单元,因此,bin 值越大则灵敏度越高,而分辨率会有所牺牲,当信号强度较弱时可适当增加bin 值以获取信号。
光圈大小(f/stop )光圈大小决定了光的透过量,光圈越大,穿过光圈到达CCD 的光越多,面板中的数字为F/stop 中的分母数字,因此,数字越小代表光圈孔径越大,透过的光越多,当信号较弱时可通过增大光圈(减小数字)而增强信号探测能力。
f/1Large Binning (16)Medium Binning (8)Small Binning (4)视野范围(Filed of View )视野范围用于调节观测的视野大小,共有A/B/C/D 四档可选,分别代表不同大小的视野范围,实验时根据一次观察小鼠个数选择合适的视野范围进行成像。
成像聚焦(Focus )成像的聚焦包含自动聚焦和手动聚焦功能,在用小鼠进行成像时,一般默认成像对象的高度为1.5cm ,仪器会自动完成聚焦,在用其他对象进行成像时(如大鼠),可调节Subject height 参数,设定合适的聚焦高度,也可在Focus 下拉选项中选择聚焦设定 FOV A: 4.0 cm FOV B: 6.5 cm FOV C: 12.5 cm FOV D: 21.5 cm2、生物发光二维成像的图像获取勾选Luminescent、Photograph及overlay选项,设置好曝光时间(生物发光曝光时间为分钟级别)、bin值、光圈大小及视野范围等参数(软件有默认参数,可按照默认参数成像,若效果不佳,则再调整各参数,调整的次序按照曝光时间>bin值>光圈大小顺序调整),Emission Filter选项设置为open,Photograph选项的参数一般情况无需调整,所有参数设定完毕后点击Acquire按钮即可开始成像。
Deformable Medical Image Registration
Deformable Medical Image Registration:A Survey Aristeidis Sotiras*,Member,IEEE,Christos Davatzikos,Senior Member,IEEE,and Nikos Paragios,Fellow,IEEE(Invited Paper)Abstract—Deformable image registration is a fundamental task in medical image processing.Among its most important applica-tions,one may cite:1)multi-modality fusion,where information acquired by different imaging devices or protocols is fused to fa-cilitate diagnosis and treatment planning;2)longitudinal studies, where temporal structural or anatomical changes are investigated; and3)population modeling and statistical atlases used to study normal anatomical variability.In this paper,we attempt to give an overview of deformable registration methods,putting emphasis on the most recent advances in the domain.Additional emphasis has been given to techniques applied to medical images.In order to study image registration methods in depth,their main compo-nents are identified and studied independently.The most recent techniques are presented in a systematic fashion.The contribution of this paper is to provide an extensive account of registration tech-niques in a systematic manner.Index Terms—Bibliographical review,deformable registration, medical image analysis.I.I NTRODUCTIOND EFORMABLE registration[1]–[10]has been,alongwith organ segmentation,one of the main challenges in modern medical image analysis.The process consists of establishing spatial correspondences between different image acquisitions.The term deformable(as opposed to linear or global)is used to denote the fact that the observed signals are associated through a nonlinear dense transformation,or a spatially varying deformation model.In general,registration can be performed on two or more im-ages.In this paper,we focus on registration methods that involve two images.One is usually referred to as the source or moving image,while the other is referred to as the target orfixed image. In this paper,the source image is denoted by,while the targetManuscript received March02,2013;revised May17,2013;accepted May 21,2013.Date of publication May31,2013;date of current version June26, 2013.Asterisk indicates corresponding author.*A.Sotiras is with the Section of Biomedical Image Analysis,Center for Biomedical Image Computing and Analytics,Department of Radi-ology,University of Pennsylvania,Philadelphia,PA19104USA(e-mail: aristieidis.sotiras@).C.Davatzikos is with the Section of Biomedical Image Analysis,Center for Biomedical Image Computing and Analytics,Department of Radi-ology,University of Pennsylvania,Philadelphia,PA19104USA(e-mail: christos.davatzikos@).N.Paragios is with the Center for Visual Computing,Department of Applied Mathematics,Ecole Centrale de Paris,92295Chatenay-Malabry,France,and with the Equipe Galen,INRIA Saclay-Ile-de-France,91893Orsay,France,and also with the Universite Paris-Est,LIGM(UMR CNRS),Center for Visual Com-puting,Ecole des Ponts ParisTech,77455Champs-sur-Marne,France. 
Digital Object Identifier10.1109/TMI.2013.2265603image is denoted by.The two images are defined in the image domain and are related by a transformation.The goal of registration is to estimate the optimal transforma-tion that optimizes an energy of the form(1) The previous objective function(1)comprises two terms.The first term,,quantifies the level of alignment between a target image and a source image.Throughout this paper,we in-terchangeably refer to this term as matching criterion,(dis)sim-ilarity criterion or distance measure.The optimization problem consists of either maximizing or minimizing the objective func-tion depending on how the matching term is chosen.The images get aligned under the influence of transformation .The transformation is a mapping function of the domain to itself,that maps point locations to other locations.In gen-eral,the transformation is assumed to map homologous loca-tions from the target physiology to the source physiology.The transformation at every position is given as the addition of an identity transformation with the displacementfield,or.The second term,,regularizes the trans-formation aiming to favor any specific properties in the solution that the user requires,and seeks to tackle the difficulty associ-ated with the ill-posedness of the problem.Regularization and deformation models are closely related. Two main aspects of this relation may be distinguished.First, in the case that the transformation is parametrized by a small number of variables and is inherently smooth,regularization may serve to introduce prior knowledge regarding the solution that we seek by imposing task-specific constraints on the trans-formation.Second,in the case that we seek the displacement of every image element(i.e.,nonparametric deformation model), regularization dictates the nature of the transformation. Thus,an image registration algorithm involves three main components:1)a deformation model,2)an objective function, and3)an optimization method.The result of the registration algorithm naturally depends on the deformation model and the objective function.The dependency of the registration result on the optimization strategy follows from the fact that image regis-tration is inherently ill-posed.Devising each component so that the requirements of the registration algorithm are met is a de-manding process.Depending on the deformation model and the input data,the problem may be ill-posed according to Hadamard’s definition of well-posed problems[11].In probably all realistic scenarios, registration is ill-posed.To further elaborate,let us consider some specific cases.In a deformable registration scenario,one0278-0062/$31.00©2013IEEEseeks to estimate a vector for every position given,in general, scalar information conveyed by image intensity.In this case,the number of unknowns is greater than the number of constraints. 
In a rigid setting,let us consider a consider a scenario where two images of a disk(white background,gray foreground)are registered.Despite the fact that the number of parameters is only 6,the problem is ill-posed.The problem has no unique solution since a translation that aligns the centers of the disks followed by any rotation results in a meaningful solution.Given nonlinear and nonconvex objective functions,in gen-eral,no closed-form solutions exist to estimate the registration parameters.In this setting,the search methods reach only a local minimum in the parameter space.Moreover,the problem itself has an enormous number of different facets.The approach that one should take depends on the anatomical properties of the organ(for example,the heart and liver do not adhere to the same degree of deformation),the nature of observations to be regis-tered(same modality versus multi-modal fusion),the clinical setting in which registration is to be used(e.g.,offline interpre-tation versus computer assisted surgery).An enormous amount of research has been dedicated to de-formable registration towards tackling these challenges due to its potential clinical impact.During the past few decades,many innovative ideas regarding the three main algorithmic registra-tion aspects have been proposed.General reviews of thefield may be found in[1]–[7],[9].However due to the rapid progress of thefield such reviews are to a certain extent outdated.The aim of this paper is to provide a thorough overview of the advances of the past decade in deformable registration.Never-theless,some classic papers that have greatly advanced the ideas in thefield are mentioned.Even though our primary interest is deformable registration,for the completeness of the presenta-tion,references to linear methods are included as many prob-lems have been treated in this low-degree-of-freedom setting before being extended to the deformable case.The main scope of this paper is focused on applications that seek to establish spatial correspondences between medical im-ages.Nonetheless,we have extended the scope to cover appli-cations where the interest is to recover the apparent motion of objects between sequences of successive images(opticalflow estimation)[12],[13].Deformable registration and opticalflow estimation are closely related problems.Both problems aim to establish correspondences between images.In the deformable registration case,spatial correspondences are sought,while in the opticalflow case,spatial correspondences,that are associ-ated with different time points,are looked for.Given data with a good temporal resolution,one may assume that the magnitude of the motion is limited and that image intensity is preserved in time,opticalflow estimation can be regarded as a small defor-mation mono-modal deformable registration problem.The remainder of the paper is organized by loosely following the structural separation of registration algorithms to three com-ponents:1)deformation model,2)matching criteria,and3)op-timization method.In Section II,different approaches regarding the deformation model are presented.Moreover,we also chose to cover in this section the second term of the objective function, the regularization term.This choice was motivated by the close relation between the two parts.In Section III,thefirst term of the objective function,the matching term,is discussed.The opti-mization methods are presented in Section IV.In every section, particular emphasis was put on further deepening the taxonomy of registration method by 
II. DEFORMATION MODELS

The choice of deformation model is of great importance for the registration process, as it entails an important compromise between computational efficiency and richness of description. It also reflects the class of transformations that are desirable or acceptable, and therefore limits the solution to a large extent. The parameters that registration estimates through the optimization strategy correspond to the degrees of freedom of the deformation model (variational approaches, in general, attempt to determine a function, not just a set of parameters). Their number varies greatly, from six in the case of global rigid transformations to millions when nonparametric dense transformations are considered. Increasing the dimensionality of the state space results in enriching the descriptive power of the model. This model enrichment may be accompanied by an increase in the model's complexity which, in turn, results in a more challenging and computationally demanding inference. Furthermore, the choice of the deformation model implies an assumption regarding the nature of the deformation to be recovered.

Before continuing, let us clarify an important, from an implementation point of view, aspect related to the transformation mapping and the deformation of the source image. In the introduction, we stated that the transformation is assumed to map homologous locations from the target physiology to the source physiology (backward mapping). While from a theoretical point of view the mapping from the source physiology to the target physiology is possible (forward mapping), from an implementation point of view this mapping is less advantageous.

In order to better understand the previous statement, let us consider how the direction of the mapping influences the estimation of the deformed image. In both cases, the source image is warped to the target domain through interpolation, resulting in a deformed image. When the forward mapping is estimated, every voxel of the source image is pushed forward to its estimated position in the deformed image. On the other hand, when the backward mapping is estimated, the pixel value of a voxel in the deformed image is pulled from the source image. The difference between the two schemes is in the difficulty of the interpolation problem that has to be solved. In the first case, a scattered data interpolation problem needs to be solved, because the voxel locations of the source image are usually mapped to nonvoxel locations, and the intensity values of the voxels of the deformed image have to be calculated. In the second case, when voxel locations of the deformed image are mapped to nonvoxel locations in the source image, their intensities can be easily calculated by interpolating the intensity values of the neighboring voxels.
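The following minimal sketch shows why the backward (pull) formulation is the convenient one: interpolation is carried out on the regular grid of the source image. The helper name and the use of SciPy are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def backward_warp(source, u):
    # Backward mapping: for every voxel x of the deformed image, pull the
    # intensity from the source image at the (generally non-voxel) location
    # x + u(x) using bilinear/trilinear interpolation on the regular grid.
    coords = np.indices(source.shape).astype(float) + u   # u: (ndim, ...) field
    return map_coordinates(source, coords, order=1, mode='nearest')

# Forward mapping would instead push every source voxel to x + u(x); the pushed
# samples land at scattered, non-voxel positions, so filling the deformed image
# requires solving a scattered-data interpolation problem (e.g., splatting),
# which is why the backward formulation is usually preferred in practice.
```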
The rest of the section is organized by coarsely following and extending the classification of deformation models given by Holden [14]. More emphasis is put on aspects that were not covered by that review.

Fig. 1. Classification of deformation models. Models that satisfy task-specific constraints are not shown as a branch of the tree because they are, in general, used in conjunction with physics-based and interpolation-based models.

Geometric transformations can be classified into three main categories (see Fig. 1): 1) those that are inspired by physical models, 2) those inspired by interpolation and approximation theory, 3) knowledge-based deformation models that opt to introduce specific prior information regarding the sought deformation, and 4) models that satisfy a task-specific constraint.

Of great importance for biomedical applications are the constraints that may be applied to the transformation such that it exhibits special properties. Such properties include, but are not limited to, inverse consistency, symmetry, topology preservation, and diffeomorphism. The value of these properties was made apparent to the research community and they were gradually introduced as extra constraints.

Despite common intuition, the majority of the existing registration algorithms are asymmetric. As a consequence, when interchanging the order of input images, the registration algorithm does not estimate the inverse transformation. As a result, the statistical analysis that follows registration is biased by the choice of the target domain.

Inverse Consistency: Inverse consistent methods aim to tackle this shortcoming by simultaneously estimating both the forward and the backward transformation. The data matching term quantifies how well the images are aligned when one image is deformed by the forward transformation and the other image by the backward transformation. Additionally, inverse consistent algorithms constrain the forward and backward transformations to be inverse mappings of one another. This is achieved by introducing terms that penalize the difference between the forward and backward transformations from the respective inverse mappings. Inverse consistent methods can preserve topology but are only asymptotically symmetric. Inverse consistency can be violated if another term of the objective function is weighted more heavily.

Symmetry: Symmetric algorithms also aim to cope with asymmetry. These methods do not explicitly penalize asymmetry, but instead employ one of the following two strategies. In the first case, they employ objective functions that are by construction symmetric to estimate the transformation from one image to another. In the second case, two transformation functions are estimated by optimizing a standard objective function. Each transformation function maps an image to a common domain. The final mapping from one image to another is calculated by inverting one transformation function and composing it with the other.

Topology Preservation: The transformation that is estimated by registration algorithms is not always one-to-one, and crossings may appear in the deformation field. Topology preserving/homeomorphic algorithms produce a mapping that is continuous, onto, locally one-to-one, and has a continuous inverse. The Jacobian determinant contains information regarding the injectivity of the mapping and is greater than zero for topology preserving mappings. The differentiability of the transformation needs to be ensured in order to calculate the Jacobian determinant. Let us note that Jacobian determinant and Jacobian are used interchangeably in this paper and should not be confounded with the Jacobian matrix.

Diffeomorphism: Diffeomorphic transformations also preserve topology. A transformation function is a diffeomorphism if it is invertible and both the function and its inverse are differentiable. A diffeomorphism maps a differentiable manifold to another.

In the following four subsections, the most important methods of the four classes are presented, with emphasis on the approaches that endow the model under consideration with the above desirable properties.
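As a simple numerical companion to the topology-preservation discussion, the sketch below approximates the Jacobian determinant of W(x) = x + u(x) for a dense 2-D displacement field with finite differences and checks that it stays positive; the function names and the finite-difference discretization are illustrative.

```python
import numpy as np

def jacobian_determinant_2d(u):
    # Jacobian determinant of W(x) = x + u(x) for a 2-D displacement field
    # u of shape (2, H, W), approximated with central finite differences.
    duy_dy, duy_dx = np.gradient(u[0])   # row-displacement derivatives
    dux_dy, dux_dx = np.gradient(u[1])   # column-displacement derivatives
    # det(I + grad u)
    return (1.0 + duy_dy) * (1.0 + dux_dx) - duy_dx * dux_dy

def preserves_topology(u):
    # A locally one-to-one, orientation-preserving mapping has det J > 0 everywhere.
    return bool(np.all(jacobian_determinant_2d(u) > 0.0))
```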
A. Geometric Transformations Derived From Physical Models

Following [5], currently employed physical models can be further separated into five categories (see Fig. 1): 1) elastic body models, 2) viscous fluid flow models, 3) diffusion models, 4) curvature registration, and 5) flows of diffeomorphisms.

1) Elastic Body Models:

a) Linear Models: In this case, the image under deformation is modeled as an elastic body. The Navier-Cauchy partial differential equation (PDE) describes the deformation, or

μ∇²u + (μ + λ)∇(∇ · u) + F = 0    (2)

where F is the force field that drives the registration based on an image matching criterion, μ refers to the rigidity that quantifies the stiffness of the material, and λ is Lamé's first coefficient.

Broit [15] first proposed to model an image grid as an elastic membrane that is deformed under the influence of two forces that compete until equilibrium is reached. An external force tries to deform the image such that matching is achieved, while an internal one enforces the elastic properties of the material. Bajcsy and Kovacic [16] extended this approach in a hierarchical fashion where the solution of the coarsest scale is up-sampled and used to initialize the finer one. Linear registration was used at the lowest resolution.

Gee and Bajcsy [17] formulated the elastostatic problem in a variational setting. The problem was solved under the Bayesian paradigm, allowing for the computation of the uncertainty of the solution as well as for confidence intervals. The finite element method (FEM) was used to infer the displacements for the element nodes, while an interpolation strategy was employed to estimate displacements elsewhere. The order of the interpolating or shape functions determines the smoothness of the obtained result.

Linear elastic models have also been used when registering brain images based on sparse correspondences. Davatzikos [18] first used geometric characteristics to establish a mapping between the cortical surfaces. Then, a global transformation was estimated by modeling the images as inhomogeneous elastic objects. Spatially-varying elasticity parameters were used to compensate for the fact that certain structures tend to deform more than others. In addition, a nonzero initial strain was considered so that some structures expand or contract naturally.

In general, an important drawback of registration is that when source and target volumes are interchanged, the obtained transformation is not the inverse of the previous solution. In order to tackle this shortcoming, Christensen and Johnson [19] proposed to simultaneously estimate both forward and backward transformations, while penalizing inconsistent transformations by adding a constraint to the objective function. Linear elasticity was used as the regularization constraint and Fourier series were used to parametrize the transformation.

Leow et al. [20] took a different approach to tackle the inconsistency problem. Instead of adding a constraint that penalizes the inconsistency error, they proposed a unidirectional approach that couples the forward and backward transformation and provides inverse consistent transformations by construction. The coupling was performed by modeling the backward transformation as the inverse of the forward. This fact was also exploited during the optimization of the symmetric energy by only following the gradient direction of the forward mapping.

He and Christensen [21] proposed to tackle large deformations in an inverse consistent framework by considering a sequence of small deformation transformations, each modeled by a linear elastic model. The problem was symmetrized by considering a periodic sequence of images where the first (or last) and middle image are the source and target, respectively. The symmetric objective function thus comprised terms that quantify the difference between any two successive pairs of images. The inferred incremental transformation maps were concatenated to map one input image to another.
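Concretely, linear elastic regularization can be scored directly from a dense displacement field. The sketch below evaluates the small-strain elastic potential associated with (2) for a 2-D field using finite differences; the discretization and function name are illustrative assumptions, not code from any of the cited methods.

```python
import numpy as np

def linear_elastic_energy(u, mu=1.0, lam=0.0):
    # Linearized elastic potential of a 2-D displacement field u (shape (2, H, W)):
    #   P(u) = sum over pixels of  mu/4 * sum_ij (d_j u_i + d_i u_j)^2
    #                            + lam/2 * (div u)^2
    duy_dy, duy_dx = np.gradient(u[0])
    dux_dy, dux_dx = np.gradient(u[1])
    strain_sq = (2 * duy_dy) ** 2 + (2 * dux_dx) ** 2 + 2 * (duy_dx + dux_dy) ** 2
    div_u = duy_dy + dux_dx
    return float(np.sum(mu / 4.0 * strain_sq + lam / 2.0 * div_u ** 2))
```

Large values indicate strongly non-rigid, poorly smoothed fields; adding this term to a matching criterion penalizes such solutions.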
b) Nonlinear Models: An important limitation of linear elastic models lies in their inability to cope with large deformations. In order to account for large deformations, nonlinear elastic models have been proposed. These models also guarantee the preservation of topology.

Rabbitt et al. [22] modeled the deformable image based on hyperelastic material properties. The solution of the nonlinear equations was achieved by local linearization and the use of the finite element method.

Pennec et al. [23] dropped the linearity assumption by modeling the deformation process through the St Venant-Kirchhoff elasticity energy that extends the linear elastic model to the nonlinear regime. Moreover, the use of log-Euclidean metrics instead of Euclidean ones resulted in a Riemannian elasticity energy which is inverse consistent. Yanovsky et al. [24] proposed a symmetric registration framework based on the St Venant-Kirchhoff elasticity. An auxiliary variable was added to decouple the regularization and the matching term. Symmetry was imposed by assuming that the Jacobian determinants of the deformation follow a zero-mean (after log-transformation) log-normal distribution [25].

Droske and Rumpf [26] used a hyperelastic, polyconvex regularization term that takes into account the length, area and volume deformations. Le Guyader and Vese [27] presented an approach that combines segmentation and registration and is based on nonlinear elasticity. The authors used a polyconvex regularization energy based on the modeling of the images under deformation as Ciarlet-Geymonat materials [28]. Burger et al. [29] also used a polyconvex regularization term. The authors focused on the numerical implementation of the registration framework. They employed a discretize-then-optimize approach [9] that involved the partitioning of the voxels into 24 tetrahedra.

2) Viscous Fluid Flow Models: In this case, the image under deformation is modeled as a viscous fluid. The transformation is governed by the Navier-Stokes equation, simplified by assuming a very low Reynolds number flow

μ_f ∇²v + (μ_f + λ_f)∇(∇ · v) + F = 0    (3)

These models do not assume small deformations, and thus are able to recover large deformations [30]. The first term of the Navier-Stokes equation (3) constrains neighboring points to deform similarly by spatially smoothing the velocity field. The velocity field is related to the displacement field as v(x, t) = ∂u(x, t)/∂t + v(x, t) · ∇u(x, t). The velocity field is integrated in order to estimate the displacement field. The second term allows structures to change in mass, while μ_f and λ_f are the viscosity coefficients.

Christensen et al. [30] modeled the image under deformation as a viscous fluid, allowing for large-magnitude nonlinear deformations. The PDE was solved for small time intervals and the complete solution was given by an integration over time. For each time interval a successive over-relaxation (SOR) scheme was used. To guarantee the preservation of topology, the Jacobian was monitored and each time its value fell under 0.5, the deformed image was regridded and a new one was generated to estimate a transformation. The final solution was the concatenation of all successive transformations occurring for each regridding step.
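To illustrate how a velocity field is turned into a displacement field in the fluid formulation, the sketch below performs one explicit Euler step of the relation v = ∂u/∂t + (v · ∇)u for a 2-D field, together with a Jacobian-based regridding check of the kind described above. The step size, threshold handling, and helper names are illustrative simplifications, not the scheme of [30].

```python
import numpy as np

def advect_displacement(u, v, dt):
    # One explicit Euler step of v = du/dt + (v . grad)u, solved for du/dt:
    #   u_{t+dt} = u_t + dt * (v - (v . grad) u_t)
    # u and v have shape (2, H, W): index 0 = row component, 1 = column component.
    updated = np.empty_like(u)
    for c in range(2):
        d_row, d_col = np.gradient(u[c])
        advection = v[0] * d_row + v[1] * d_col
        updated[c] = u[c] + dt * (v[c] - advection)
    return updated

def needs_regridding(u, threshold=0.5):
    # Monitor det J of W(x) = x + u(x); when it drops below the threshold,
    # the deformed image is regridded and the field restarted from identity.
    duy_dy, duy_dx = np.gradient(u[0])
    dux_dy, dux_dx = np.gradient(u[1])
    det_j = (1.0 + duy_dy) * (1.0 + dux_dx) - duy_dx * dux_dy
    return bool(det_j.min() < threshold)
```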
In a subsequent work, Christensen et al. [31] presented a hierarchical way to recover the transformations for brain anatomy. Initially, a global affine transformation was performed, followed by a landmark transformation model. The result was refined by a fluid transformation preceded by an elastic registration step.

An important drawback of the earliest implementations of the viscous fluid models, which employed SOR to solve the equations, was computational inefficiency. To circumvent this shortcoming, Christensen et al. employed a massively parallel computer implementation in [30]. Bro-Nielsen and Gramkow [32] proposed a technique based on a convolution filter in scale-space. The filter was designed as the impulse response of the linear operator defined in its eigen-function basis. Crun et al. [33] proposed a multi-grid approach towards handling anisotropic data along with a multi-resolution scheme, opting for first recovering coarse velocity estimations and refining them in a subsequent step. Cahill et al. [34] showed how to use Fourier methods to efficiently solve the linear PDE system that arises from (3) for any boundary condition. Furthermore, Cahill et al. extended their analysis to show how these methods can be applied in the case of other regularizers (diffusion, curvature and elastic) under Dirichlet, Neumann, or periodic boundary conditions.

Wang and Staib [35] used fluid deformation models in an atlas-enhanced registration setting, while D'Agostino et al. tackled multi-modal registration with the use of such models in [36]. More recently, Chiang et al. [37] proposed an inverse consistent variant of fluid registration to register diffusion tensor images. Symmetrized Kullback-Leibler (KL) divergence was used as the matching criterion. Inverse consistency was achieved by evaluating the matching and regularization criteria towards both directions.

3) Diffusion Models: In this case, the deformation is modeled by the diffusion equation

Δu + F = 0    (4)

Let us note that most of the algorithms based on this transformation model and described in this section do not explicitly state (4) in their objective function. Nonetheless, they exploit the fact that the Gaussian kernel is the Green's function of the diffusion equation (4) (under appropriate initial and boundary conditions) to provide an efficient regularization step. Regularization is efficiently performed through convolutions with a Gaussian kernel.

Thirion, inspired by Maxwell's demons, proposed to perform image matching as a diffusion process [38]. The proposed algorithm iterated between two steps: 1) estimation of the demon forces for every demon (more precisely, the result of the application of a force during one iteration step, that is, a displacement), and 2) update of the transformation based on the calculated forces. Depending on the way the demon positions are selected, the way the space of deformations is defined, the interpolation method that is used, and the way the demon forces are calculated, different variants can be obtained. The most suitable version for medical image analysis involved 1) selecting all image elements as demons, 2) calculating demon forces by considering the optical flow constraint, 3) assuming a nonparametric deformation model that was regularized by applying a Gaussian filter after each iteration, and 4) a trilinear interpolation scheme. The Gaussian filter can be applied either to the displacement field estimated at an iteration or to the updated total displacement field.
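A minimal 2-D version of this iterate-then-smooth scheme is sketched below: the demon force is the classical optical-flow-style force, the update is simply added to the displacement field, and regularization is a Gaussian convolution of the field after every iteration. The parameter values, helper names, and the additive (rather than compositive) update are illustrative simplifications of the variants discussed in the text.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_2d(target, source, iterations=50, sigma=1.0, eps=1e-6):
    # u holds the displacement field (2, H, W); W(x) = x + u(x), backward mapping.
    u = np.zeros((2,) + target.shape)
    grid = np.indices(target.shape).astype(float)
    for _ in range(iterations):
        warped = map_coordinates(source, grid + u, order=1, mode='nearest')
        diff = warped - target
        g_row, g_col = np.gradient(warped)
        # Thirion-style normalization keeps the update bounded where the
        # gradient is small.
        denom = g_row ** 2 + g_col ** 2 + diff ** 2 + eps
        # Demon force derived from the optical flow constraint.
        u[0] -= diff * g_row / denom
        u[1] -= diff * g_col / denom
        # Diffusion-like regularization: Gaussian smoothing of each component.
        u[0] = gaussian_filter(u[0], sigma)
        u[1] = gaussian_filter(u[1], sigma)
    return u
```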
The bijectivity of the transformation was ensured by calculating, for every point, the difference between its initial position and the one that is reached after composing the forward with the backward deformation field, and redistributing the difference to each field. The bijectivity of the transformation can also be enforced by limiting the maximum length of the update displacement to half the voxel size and using composition to update the transformation. Variants for contour-based registration and for the registration between segmented images were also described in [38].

Most of the algorithms described in this section were inspired by the work of Thirion [38] and thus could alternatively be classified as "Demons approaches." These methods share the iterative approach that was presented in [38], that is, iterating between estimating the displacements and regularizing to obtain the transformation. This iterative approach results in increased computational efficiency. As will be discussed later in this section, this feature led researchers to explore such strategies for different PDEs.

The use of Demons, as initially introduced, was an efficient algorithm able to provide dense correspondences but lacked a sound theoretical justification. Due to the success of the algorithm, a number of papers tried to give theoretical insight into its workings. Fischer and Modersitzki [39] provided a fast algorithm for image registration. The result was given as the solution of a linear system that results from the linearization of the diffusion PDE. An efficient scheme for its solution was proposed, while a connection to Thirion's Demons algorithm [38] was drawn. Pennec et al. [40] studied image registration as an energy minimization problem and drew the connection of the Demons algorithm with gradient descent schemes. Thirion's image force based on optical flow was shown to be equivalent to a second-order gradient descent on the sum of squared differences (SSD) matching criterion. As for the regularization, it was shown that the convolution of the global transformation with a Gaussian.
Design and Application of an Image-Processing-Based Automatic Monitoring System for the Tobacco Beetle
Received: 2022-06-09. Author biography: Hu Yichao (1988-), female, from Guilin, Guangxi; Master's degree; mainly engaged in tobacco fermentation technology and cigarette pest control. Tel: 158****1663; e-mail: *****************.
Cite as: Hu Yichao, Su Zan, Chen Yichang, et al. Design and application of an automatic tobacco beetle monitoring system based on image processing [J]. Hubei Agricultural Sciences, 2024, 63(4): 163-167.

The tobacco beetle [Lasioderma serricorne (Fabricius)] belongs to the family Anobiidae of the order Coleoptera and is a major pest in tobacco storage and cigarette production and processing. Monitoring of tobacco beetle infestations relies mainly on manual inspection, supplemented by light traps and sex-pheromone traps [1].
Because tobacco raw-material warehouses and cigarette production and processing workshops involve large monitoring areas, wide coverage and many monitoring points, the workload of the staff who inspect and record insect infestations is heavy, and problems such as inaccurate data and untimely analysis and transmission of information may occur during inspection and recording [2].
Current automatic pest monitoring techniques mainly include the capacitance method [3], acoustic signal recognition [4], image recognition [5-8], soft X-ray detection and near-infrared spectroscopy [9].
Image recognition refers to segmenting the observed image, extracting features, and assigning the results to classes with a classifier [10].
At present, grain …

Hu Yichao 1, Su Zan 1, Chen Yichang 1, Zhang Yan 2, Su Chenyang 2, Liu Yong 2 (1. China Tobacco Guangxi Industrial Co., Ltd., Nanning 530001, China; 2. Wuhan Dongchang Storage Technology Co., Ltd., Wuhan 430074, China)

Abstract: To meet the demand for automatic monitoring of the tobacco beetle in tobacco storage and cigarette production workshops, and taking the actual image characteristics of the tobacco beetle [Lasioderma serricorne (Fabricius)] into account, an image-processing-based automatic monitoring system was designed by superimposing an image segmentation algorithm on a marker-controlled watershed algorithm. Fully automatic camera equipment photographs the corresponding traps at fixed intervals, the images are transmitted to a server over a wired network, and image recognition and counting, real-time display, over-threshold alarms and historical-curve viewing are completed on the server side. The system overcomes many interfering factors encountered in practical tobacco beetle monitoring, such as dust on the sticky boards, overlapping insect bodies and lighting conditions, and achieves accurate automatic image recognition and counting of the tobacco beetle.
Practical application shows that the system operates stably, with an average counting accuracy greater than 94.00%, and it has good application prospects for insect monitoring in cigarette production workshops.
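For illustration, the sketch below shows a marker-controlled watershed pipeline of the general kind referred to above, used to separate and count touching insects on a sticky-trap photograph with OpenCV. The thresholds, kernel sizes, and minimum-area value are illustrative defaults, not the tuned parameters of the deployed system.

```python
import cv2
import numpy as np

def count_insects(image_path, min_area=30):
    # Marker-controlled watershed segmentation to split touching insect bodies
    # on a sticky-trap image, then count the resulting regions.
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Otsu threshold (insects are darker than the trap background).
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Morphological opening removes dust and small noise.
    kernel = np.ones((3, 3), np.uint8)
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel, iterations=2)
    # Sure background by dilation; sure foreground from distance-transform peaks.
    sure_bg = cv2.dilate(opened, kernel, iterations=3)
    dist = cv2.distanceTransform(opened, cv2.DIST_L2, 5)
    _, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
    sure_fg = np.uint8(sure_fg)
    unknown = cv2.subtract(sure_bg, sure_fg)
    # Label markers and run the watershed to split overlapping bodies.
    _, markers = cv2.connectedComponents(sure_fg)
    markers = markers + 1
    markers[unknown == 255] = 0
    markers = cv2.watershed(img, markers)
    # Count segmented regions larger than min_area pixels.
    return sum(1 for label in range(2, markers.max() + 1)
               if np.sum(markers == label) >= min_area)
```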
Image Protection Using New Digital Watermarking Technology (English)
I. INTRODUCTION

In today's Internet world, many people transmit secret information over the network. The transmission of images is a daily routine, and it is necessary to find an efficient way to transmit them over the net. The work presented in this paper shows how encryption and watermarking algorithms provide security for medical imagery; to do so, the images can be encrypted at their source. In this paper we propose a new technique to cipher an image for safe transmission. Our research deals with image encryption and watermarking. There are several methods to encrypt binary or grey-level images [1], [2], [3], [4]. The proposed technique targets safe image transmission and combines image cryptography with watermarking. To embed the patient information in the image, we have used a lossless watermarking technique. The objective of watermarking is to embed a message invisibly inside the image, which makes this technique very useful for secure image transmission over the Internet and applicable in many different areas. In previous methods, the owner encrypts the original uncompressed image using an encryption key to produce an encrypted image, and a data hider then embeds additional data into the encrypted image; a drawback of this approach is that the patient information embedded into the encrypted image is treated like noise. In recent years, a new problem has been to combine compression, encryption and data hiding in a single step. So far, few solutions have been proposed to address this combined problem.
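As a toy illustration of the two ingredients discussed above (image encryption plus embedding a short message in an image), the sketch below XOR-encrypts 8-bit pixel data with a key stream and hides bytes in the least-significant bits of a cover image. This is a didactic LSB example, not the lossless/reversible watermarking scheme or the encryption algorithm referred to in this paper.

```python
import numpy as np

def xor_encrypt(image, key=12345):
    # Stream-cipher-style toy encryption for uint8 images: XOR each pixel with
    # a pseudo-random byte stream derived from `key`. Applying it twice with
    # the same key decrypts the image.
    rng = np.random.default_rng(key)
    keystream = rng.integers(0, 256, size=image.shape, dtype=np.uint8)
    return image ^ keystream

def embed_lsb(cover, message: bytes):
    # Hide `message` in the least-significant bit of the uint8 cover pixels.
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = cover.flatten()                  # copy; the original stays intact
    if bits.size > flat.size:
        raise ValueError("message too long for this cover image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(cover.shape)

def extract_lsb(stego, n_bytes):
    # Recover the first n_bytes hidden by embed_lsb.
    bits = stego.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()
```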
I. INTRODUCTION

Image registration is a useful technique for aiding diagnosis, performing patient set-up estimation for radiation therapy [1], and for image-guided surgery [2], [3], etc. For the set-up estimation problem, a pre-operative image (usually a CT volume) is transformed geometrically to align with measured radiographs. Intensity-based registration methods work by maximizing a similarity measure based on the intensity values of the two images. Therefore, designing an effective similarity measure is very important. This paper proposes a robust similarity measure for intra-modality image registration.

One fundamental design criterion is that a similarity measure should be maximized at the true registered position in the absence of noise. Establishing this characteristic analytically is challenging since the behavior of the objective function depends on the nature of the images being registered. Another important criterion is the statistical efficiency of the registration method, i.e., the variability that would result from repeating the registration with identical images except for noise. In addition, registration methods can differ in their robustness to the presence of unexpected objects in images.

Many intensity-based image registration methods implicitly treat the intensity pairs taken from corresponding spatial
locations in two images as i.i.d. (independent and identically distributed) samples of two random variables. With that assumption, statistical concepts such as correlation, joint entropy and mutual information (MI) are used as similarity measures by estimating those statistical properties from the i.i.d. samples. The correlation coefficient is a particularly popular similarity measure, and is a natural choice when registering two images from the same modality [4], [5]. Although correlation is a poor similarity measure for multi-modality image registration, in terms of statistical efficiency and computational efficiency the correlation coefficient is one of the best similarity measures for intra-modality image registration. Since image registration for set-up estimation in radiation therapy and image-guided surgery often involves images from the same (or similar) modality, the correlation coefficient can be useful for those applications.

The sample correlation coefficient has been used widely to estimate the correlation coefficient due to its simplicity. However, a drawback of the sample correlation coefficient is its sensitivity to outliers [6, p. 199]. Even a few outliers can affect the sample correlation coefficient greatly and thus degrade image registration performance. A significant number of "outliers" may be present in the image-guided surgery application due to the presence of operating instruments, and in the radiation therapy application due to the effect of the radiotherapy table [1]. For X-ray CT images, differences in contrast agents also occur. Although a bias in estimating the correlation coefficient need not directly imply a bias in image registration, we have observed such biases empirically when outliers are present [7].

The MI similarity measure is used widely for multi-modality image registration since it does not assume any functional relationship between the two image values [8]–[10]. In this sense, the MI method has an inherent degree of robustness. However, as illustrated by our empirical results in Section III and the analyses in the Appendices, for intra-modality image registration the robustness of the MI method depends on the particular images being registered. Moreover, the MI method can be statistically inefficient, i.e., the registration variability due to noise can exceed that of the sample correlation coefficient. To overcome the drawbacks of the sample correlation method and the MI method, we have investigated an image registration method that uses robust correlation coefficients [6, p. 204] as a similarity measure, thereby improving the robustness without compromising the statistical efficiency much. Robust estimation of mean and covariance has been studied
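To illustrate the contrast between the sample correlation coefficient and a robust alternative, the sketch below standardizes the intensity samples with median/MAD, winsorizes (clips) the scores, and correlates the clipped values. This is one classical robust variant for illustration only, not necessarily the specific robust correlation coefficient adopted in this paper; the cutoff c and the toy data are illustrative.

```python
import numpy as np

def sample_correlation(x, y):
    # Classical Pearson correlation; sensitive to outliers.
    return np.corrcoef(x, y)[0, 1]

def robust_correlation(x, y, c=1.5):
    # Standardize with median/MAD, clip (winsorize) at +/- c, then correlate.
    def scores(v):
        med = np.median(v)
        mad = 1.4826 * np.median(np.abs(v - med)) + 1e-12
        return np.clip((v - med) / mad, -c, c)
    return np.corrcoef(scores(x), scores(y))[0, 1]

# Toy check: a few outlier intensity pairs barely move the robust estimate.
rng = np.random.default_rng(0)
a = rng.normal(size=5000)
b = a + 0.3 * rng.normal(size=5000)
a_out = a.copy()
a_out[:50] += 20.0                       # simulate bright "outlier" structures
print(sample_correlation(a_out, b))      # noticeably degraded
print(robust_correlation(a_out, b))      # close to the uncontaminated value
```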