Abstract: Enhancing retinal images by the Contourlet transform


Single Image Super-Resolution Reconstruction Based on Edge Prior


Unit code: 10293  Classification level: ___

Master's Thesis
Title: Single Image Super-Resolution Reconstruction Based on Edge Prior
Student ID: 1013010601  Candidate: Dandan Si
Supervisor: Associate Prof. Zongliang Gan
Discipline: Signal and Information Processing
Research direction: Image Processing and Multimedia Communication
Degree sought: Master of Engineering
Submission date: March 2016

Single Image Super Resolution Based on Edge-Prior
Thesis Submitted to Nanjing University of Posts and Telecommunications for the Degree of Master of Engineering
By Dandan Si
Supervisor: Associate Prof. Zongliang Gan
March 2016

Statement of Originality, Nanjing University of Posts and Telecommunications: I declare that the thesis submitted here presents the research work I carried out personally under my supervisor's guidance and the results obtained from that work.

To the best of my knowledge, except where specifically noted and acknowledged in the text, this thesis contains no research results previously published or written by another person, nor any material used to obtain a degree or certificate at Nanjing University of Posts and Telecommunications or any other educational institution.

Any contributions made to this research by colleagues who worked with me have been clearly acknowledged in the thesis, with my thanks.

If this thesis or the related materials prove to be untruthful, I am willing to bear all corresponding legal responsibility.

Graduate signature: _____________  Date: ____________

Authorization Statement, Nanjing University of Posts and Telecommunications: I authorize Nanjing University of Posts and Telecommunications to retain copies and electronic versions of this thesis and to submit them to relevant state departments or institutions; to allow the thesis to be consulted and borrowed; to index all or part of its content in relevant databases for retrieval; and to preserve and compile the thesis by photocopying, reduced-size printing, scanning, or other means of reproduction.

The content of the electronic version of this thesis is identical to that of the printed version.

Publication of the thesis (including in print) is to be handled by the Graduate School of Nanjing University of Posts and Telecommunications.

For classified theses, this authorization applies after declassification.

Graduate signature: ____________  Supervisor signature: ____________  Date: _____________

Abstract: Image super-resolution reconstruction is a technique for obtaining a high-resolution (HR) image from low-resolution (LR) images; its goal is to recover the high-frequency information and detail lost during image degradation.
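The degradation the abstract refers to is usually modeled as blurring, downsampling, and additive noise applied to the HR image; super-resolution then tries to invert this mapping. A minimal NumPy sketch of such an observation model follows. The box blur kernel, scale factor, and noise level are illustrative assumptions, not the thesis's actual degradation settings.

```python
import numpy as np

def degrade(hr, scale=2, blur_size=3, noise_sigma=0.0, seed=0):
    """Toy LR observation model: box blur, decimation, Gaussian noise."""
    k = blur_size // 2
    padded = np.pad(hr, k, mode="edge")
    blurred = np.zeros_like(hr, dtype=float)
    h, w = hr.shape
    # Average over a (blur_size x blur_size) sliding window.
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            blurred += padded[k + dy:k + dy + h, k + dx:k + dx + w]
    blurred /= blur_size ** 2
    # Decimate by `scale`, then add noise.
    lr = blurred[::scale, ::scale]
    rng = np.random.default_rng(seed)
    return lr + rng.normal(0.0, noise_sigma, lr.shape)
```

An edge-prior method such as this thesis's would constrain the reconstruction so that edges in the estimated HR image stay sharp under this forward model.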

Malicious Code Detection Based on Multi-Channel Image Deep Learning


Journal of Computer Applications, 2021, 41(4): 1142-1147. Published online 2021-04-10. ISSN 1001-9081, CODEN JYIIDU.

Malicious Code Detection Based on Multi-Channel Image Deep Learning
JIANG Kaolin, BAI Wei, ZHANG Lei, CHEN Jun, PAN Zhisong*, GUO Shize
(Command and Control Engineering College, Army Engineering University, Nanjing 210007, China)
(*Corresponding author, e-mail: hotpzs@ )

Abstract: Existing deep-learning-based malicious code detection methods have problems such as weak deep-level feature extraction, relatively complex models, and insufficient generalization ability.

At the same time, code reuse is widespread within a given family of malicious samples, and reuse makes the visual features of the code similar; this similarity can be exploited for malicious code detection.

We therefore propose a malicious code detection method based on multi-channel image visual features and the AlexNet neural network.

The method first converts the code under inspection into a multi-channel image, then uses AlexNet to extract its color-texture features and classify them, thereby detecting potentially malicious code. By combining multi-channel image feature extraction with local response normalization (LRN) and other techniques, it improves the model's generalization ability while effectively reducing model complexity.

Tested on the class-balanced Malimg dataset, the method achieves an average classification accuracy of 97.8%, improving accuracy by 1.8% and detection efficiency by 60.2% over the VGGNet method.

The experimental results show that multi-channel color-texture features reflect malicious-code family information well, that AlexNet's relatively simple structure effectively improves detection efficiency, and that local response normalization improves both generalization ability and detection performance.

Keywords: multi-channel image; color texture feature; malicious code; deep learning; local response normalization
CLC number: TP309  Document code: A

Malicious code detection based on multi-channel image deep learning
JIANG Kaolin, BAI Wei, ZHANG Lei, CHEN Jun, PAN Zhisong*, GUO Shize (Command and Control Engineering College, Army Engineering University, Nanjing Jiangsu 210007, China)
Abstract: Existing deep learning-based malicious code detection methods have problems such as weak deep-level feature extraction capability, relatively complex models and insufficient model generalization capability. At the same time, code reuse occurs in large numbers of malicious samples of the same type, resulting in similar visual features of the code. This similarity can be used for malicious code detection. Therefore, a malicious code detection method based on multi-channel image visual features and AlexNet was proposed. In the method, the codes to be detected were converted into multi-channel images at first. After that, AlexNet was used to extract and classify the color texture features of the images, so as to detect the possible malicious codes. Meanwhile, multi-channel image feature extraction, Local Response Normalization (LRN) and other technologies were used comprehensively, which effectively improved the generalization ability of the model while effectively reducing its complexity. The Malimg dataset after equalization was used for testing; the results showed that the average classification accuracy of the proposed method was 97.8%, with accuracy increased by 1.8% and detection efficiency increased by 60.2% compared with the VGGNet method. Experimental results show that the color texture features of multi-channel images can better reflect the type information of malicious codes, the simple network structure of AlexNet can effectively improve the detection efficiency, and local response normalization can improve the generalization ability and detection effect of the model.
Key words: multi-channel image; color texture feature; malicious code; deep learning; Local Response Normalization (LRN)

0 Introduction
Malicious code has become one of the main sources of threats in cyberspace.
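The first step of the method, converting a byte sequence of code into a multi-channel image, can be sketched as follows. Grouping consecutive bytes into (R, G, B) triples at a fixed width is one plausible mapping; the paper's exact channel construction is not given in this excerpt, so this scheme is an assumption.

```python
import numpy as np

def bytes_to_rgb_image(code_bytes: bytes, width: int = 64) -> np.ndarray:
    """Map a byte sequence to an (H, width, 3) uint8 image.

    Consecutive bytes become (R, G, B) triples; the tail is zero-padded
    so the byte count fills whole rows.  The mapping is illustrative.
    """
    arr = np.frombuffer(code_bytes, dtype=np.uint8)
    rows = -(-len(arr) // (width * 3))  # ceil division
    padded = np.zeros(rows * width * 3, dtype=np.uint8)
    padded[:len(arr)] = arr
    return padded.reshape(rows, width, 3)
```

An image built this way could then be resized to AlexNet's input resolution and fed to the classifier.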

Apple Leaf Disease Recognition Based on an Improved Convolutional Neural Network


Journal of Anhui University (Natural Science Edition), January 2021, Vol. 45, No. 1. doi: 10.3969/j.issn.1000-2162.2021.01.008

Apple Leaf Disease Recognition Based on an Improved Convolutional Neural Network
BAO Wenxia, WU Gang, HU Gensheng, ZHANG Dongyan, HUANG Linsheng
(National Engineering Research Center for Agro-Ecological Big Data Analysis and Application, Anhui University, Hefei 230601, China)

Abstract: Traditional convolutional neural networks cannot accurately and quickly recognize apple disease leaf images because the lesion regions are small. To address this, a network model for apple leaf disease recognition based on an improved convolutional neural network is proposed. First, the prior knowledge that the VGG16 network model learned on the ImageNet dataset is transferred to the apple disease leaf dataset; then a selective kernel (SK) convolution module is adopted after the bottleneck layers; finally, global average pooling replaces the fully connected layers. Experimental results show that, compared with other traditional network models, this model captures the tiny lesions on diseased apple leaves more accurately and quickly.
Keywords: apple leaf disease; image recognition; VGG16; SK convolution; transfer learning; global average pooling
CLC number: TP391.41  Document code: A  Article ID: 1000-2162(2021)01-0053-07

Apple leaf disease recognition based on improved convolutional neural network
BAO Wenxia, WU Gang, HU Gensheng, ZHANG Dongyan, HUANG Linsheng (National Engineering Research Center for Agro-Ecological Big Data Analysis & Application, Anhui University, Hefei 230601, China)
Abstract: Aiming at the problem of small disease spots in apple leaf images, which could not be accurately and quickly identified by traditional convolutional neural networks, a network model based on an improved convolutional neural network for apple leaf disease identification was proposed. First, the prior knowledge learned by the VGG16 network model was transferred from the ImageNet dataset to the apple disease leaf dataset. Then, the selective kernel (SK) convolution module was adopted after the bottleneck layer. Finally, the fully connected layer was replaced by global average pooling. The experimental results showed that, compared with other traditional network models, this model could capture the tiny spots on the diseased leaves of apples more accurately and quickly.
Keywords: apple leaf disease; image recognition; VGG16; SK convolution; transfer learning; global average pooling

Received: 2020-05-22. Foundation items: National Natural Science Foundation of China (11771463); Anhui Provincial Major Science and Technology Project (6030701091); Open Project of the National Engineering Research Center for Agro-Ecological Big Data Analysis and Application (AE2018009). Author: BAO Wenxia (1980—), female, from Tongling, Anhui; associate professor, master's supervisor, and Ph.D. at Anhui University. E-mail: bwxia@

Rapid and accurate identification of crop diseases is of great significance for improving crop yield and quality. To control apple diseases efficiently and improve apple yield and quality, researchers have used traditional machine learning methods to recognize various types of apple leaf disease [1-2]. Traditional methods require extensive image preprocessing and segmentation, which is labor-intensive and inflexible. Because the color, shape, and texture of diseased regions are complex, it is difficult to guarantee that the segmented region is the target feature region, which lowers the efficiency of feature extraction and the accuracy of recognition. In recent years, convolutional neural networks (CNNs) have achieved notable results in pattern recognition [3-5]. Unlike traditional methods, a CNN can automatically extract lesion features from raw diseased-leaf image data and then classify the extracted features automatically. Mohanty et al. [6] used GoogleNet [8] and AlexNet [9] on the open PlantVillage dataset [7] to recognize 26 diseases of 14 crops, showing that a fine-tuned GoogleNet pre-trained model achieved the highest accuracy. Sun et al. [10] took the PlantVillage dataset and diseased-leaf images as their study object and obtained a high recognition rate by improving the AlexNet network. Baranwal et al. [11] recognized three apple leaf diseases (scab, grey spot, and cedar rust) with a network of two convolutional and two fully connected layers. Geetharamani et al. [12] defined three convolutional layers with 3x3 kernels, each followed by a max-pooling layer; after three convolution and three max-pooling layers, two fully connected layers fused the features before Softmax classification, reaching an accuracy of 96.46%. Ferentinos et al. [13] studied diseased-leaf images with complex and simple backgrounds using VGG16 [14], GoogLeNet, AlexNet, and other network models, finding that VGG16 performed best. Guan et al. [15] first graded apple grey-spot diseased leaves by disease severity and then performed transfer learning with pre-trained VGG16, VGG19, ResNet50 [16], and InceptionV3 [17] models; VGG16 with transfer learning performed best. These studies show that deep CNN models achieve good results in crop disease leaf recognition. However, the above network models have the following problems: (1) AlexNet, VGG16, and similar models fuse features and optimize parameters through fully connected layers, which lengthens convergence time and easily causes overfitting when training samples are insufficient. (2) Finding a suitable number of network layers requires repeated experiments and comparison and lacks flexibility: too few layers cause underfitting, too many cause overfitting. (3) Each layer fixes the kernel size, so the network cannot adapt the kernel size to the different lesion scales on diseased leaves, lowering the efficiency of feature extraction. To address these problems, this paper initializes the bottleneck-layer parameters with a model pre-trained on the ImageNet dataset [18]; on the basis of the VGG16 network model, it removes the fully connected layers and adopts a selective kernel (SK) convolution module and global average pooling after the bottleneck layers, reducing training time and improving recognition accuracy.

1 Dataset
1.1 Dataset acquisition
Images of healthy apple leaves and of leaves with the five apple diseases of highest incidence (scab, grey spot, cedar rust, Alternaria leaf spot, and mosaic) were selected as the study objects. The healthy, scab, grey-spot, and cedar-rust leaf images come from the PlantVillage dataset; the Alternaria leaf spot and mosaic images come from the Google website. All images are 224x224 pixels. Figure 1 shows each leaf type: (a) a healthy leaf; (b) a mildly diseased scab leaf with a few black spots; (c) a severely diseased scab leaf with many black spots; (d) a grey-spot leaf; (e) a mildly diseased cedar-rust leaf with a few orange-red spots; (f) a severely diseased cedar-rust leaf, nearly withered, with many lesions; (g) an Alternaria leaf-spot leaf with small brown dots; (h) a mosaic leaf with bright-yellow lesions.

1.2 Data augmentation
Data augmentation adds noisy data, improving the generalization ability and robustness of the model. The original dataset was first split 4:1 into training and test sets, and the disease classes with few training images were augmented by random rotation, cropping, scaling, mirroring, translation, and brightness and contrast enhancement [19]. Table 1 gives the image counts.

Table 1  Image counts of the training set, augmented training set, and test set by class
Class | Training | Augmented training | Test
Healthy | 1316 | 1316 | 329
Mild scab | 296 | 592 | 74
Severe scab | 208 | 624 | 52
Grey spot | 497 | 497 | 124
Mild cedar rust | 172 | 516 | 43
Severe cedar rust | 48 | 480 | 12
Alternaria leaf spot | 48 | 480 | 12
Mosaic | 60 | 360 | 15
Total | 2645 | 4865 | 661

2 Improved VGG16 network model
2.1 Network structure
In plant leaf disease recognition, the VGG16 network model with pre-training has achieved good recognition results [13, 15]. In VGG16, however, on the one hand all kernels are 3x3, so the receptive field cannot adjust adaptively; on the other hand, the complex parameter optimization of the fully connected layers causes overfitting and increases training time. Therefore, on the basis of VGG16, this paper uses a transfer-learning strategy: the bottleneck-layer parameters are initialized from a pre-trained model, an SK convolution module follows the bottleneck layers, and the network parameters are retrained so that the network selects kernels adaptively according to the input feature map, improving multi-scale feature extraction and the ability to recognize tiny leaf lesions. Global average pooling replaces the fully connected layers, which accelerates convergence, reduces parameters, and avoids the overfitting caused by the massive parameter optimization of fully connected layers. Figure 2 shows the improved network structure: blue rectangles denote convolution and activation, yellow rectangles denote pooling, the two together form the bottleneck layers, and the length of the rightmost bar indicates the probability of each class (healthy; mild and severe cedar rust; grey spot; mild and severe scab; Alternaria leaf spot; mosaic).

2.2 Global average pooling
Figure 3 compares fully connected layers (left) with global average pooling (right). After feature extraction, fully connected layers produce a large number of parameters whose optimization increases model complexity, lengthens convergence time, and leads to overfitting. Global average pooling not only solves the slow convergence caused by optimizing so many fully-connected-layer parameters but also strengthens the network's resistance to overfitting [22]: the 1-D feature vector is obtained from the mean of each feature map's pixels, a process that involves no parameter optimization and therefore prevents overfitting.

2.3 Adaptive kernel selection
Most traditional CNNs set the same kernel size within each feature layer [9, 14]. Although GoogleNet uses several kernels in the same layer, it cannot adapt the kernel to the input feature map, which limits the efficiency of feature extraction. The SK convolution of SKNet [20] lets the network adaptively select an appropriately sized kernel according to the input feature map, improving feature-extraction efficiency. SK convolution consists of three parts: split, fuse, and select.

(1) Split. The input feature vector X is convolved with several kernels, forming several branches. Convolutions of size 3x3 and 5x5 applied to X give two feature vectors U' in R^(HxWxC) and U'' in R^(HxWxC). Figure 4 shows the SK convolution of SKNet, where a circled plus denotes element-wise addition and a circled times denotes element-wise multiplication.

(2) Fuse. A gating mechanism is designed to distribute the information flowing into the next convolutional layer. The branch outputs are fused pixel-wise:
    U = U' + U''.  (1)
Global average pooling then compresses the global information of the fused U. The c-th element of s in Fig. 4 is obtained by globally average-pooling the c-th channel U_c of U:
    s_c = F_gp(U_c) = (1 / (H x W)) * sum_{i=1..H} sum_{j=1..W} U_c(i, j).  (2)
The pooled feature is then passed through a fully connected layer to compress it further, lowering the feature dimension and improving extraction efficiency. The compressed feature is
    z = delta(B(W_s * s)),  (3)
where delta is the ReLU function, B denotes batch normalization, z is in R^(d x 1), and W_s is in R^(d x C). The size d is determined through a reduction ratio r:
    d = max(C / r, L),  (4)
where L is usually taken as 32.

(3) Select. After the Softmax operation, the soft attention vectors a and b are obtained through the attention mechanism [21]:
    a_c = e^(A_c z) / (e^(A_c z) + e^(B_c z)),  b_c = e^(B_c z) / (e^(A_c z) + e^(B_c z)),  (5)
where A, B are in R^(C x d), A_c is the c-th row of A, a_c is the c-th element of a, and B_c and b_c likewise. Weighting the branches by the corresponding weight matrices and summing gives the output feature
    V = [V_1, V_2, ..., V_C],  V_c = a_c * U'_c + b_c * U''_c,  a_c + b_c = 1,  (6)
where V_c is in R^(H x W).

3 Experiments and analysis
The experiments use the TensorFlow deep-learning framework; Table 2 lists the hardware and software configuration.

Table 2  Hardware and software configuration
Item | Configuration
CPU | Intel E5-2650 V3
GPU | NVIDIA GTX 1080Ti, 11 GB
RAM | 128 GB
Disk | 2 TB
OS | Ubuntu 16.04.02 LTS (64-bit)

3.1 Parameter settings
Training uses stochastic gradient descent (SGD) to minimize the loss function. The batch size of the training images fed to the network is set to 32. A learning rate that is too large makes training unstable, while one that is too small lengthens training; although the model converges quickly at first, its convergence rate gradually falls as training proceeds, so the initial learning rate is set to 0.001, the network is trained for 40 epochs, and the learning rate is decayed by a factor of 0.1 every 8 epochs. The initial learning rate can be set somewhat larger and then gradually reduced, which avoids the non-convergence caused by an overly large learning rate. In the SK convolution, the number of paths n is set to 3 and the base L to 32.

3.2 Feature-map visualization
Feature maps show each layer's features more intuitively. Figure 5 gives an apple grey-spot image and its feature maps; for ease of visualization, only 25 channel feature maps are listed. In Fig. 5, (a) is the original image and (b)-(e) are the feature maps after the Conv1_1, Conv3_1, Conv5_1, and Pool5 operations of Fig. 2.

3.3 Performance comparison of different network models
To verify the performance of the proposed model for apple leaf disease recognition, the recognition accuracy and training time of different network models under the same conditions are compared. The recognition accuracy is
    P = [ sum_{i=0..N-1} TP_i / sum_{i=0..N-1} (TP_i + FP_i) ] x 100%,  (7)
where TP_i is the number of correctly classified samples of the i-th disease class (including healthy leaves), FP_i is the number misclassified, and N is the total number of classes. Table 3 shows the recognition accuracy, model size, and training time of different network models trained on the augmented training set.

Table 3  Performance comparison of different network models
Model | Accuracy/% | Training time/s | Model size/MB
AlexNet | 91.53 | 2768.16 | 217.0
GoogleNet | 93.04 | 2062.73 | 47.1
VGG16 | 93.80 | 4561.24 | 537.2
VGG19 | 93.50 | 4921.42 | 558.4
ResNet-50 | 93.65 | 4859.24 | 94.3
SKNet-50 | 93.95 | 5440.37 | 186.0
Proposed model | 94.70 | 2138.56 | 70.2

Table 3 shows that the CNN models differ considerably in recognition accuracy, training time, and model size. The shallow AlexNet reaches only 91.53%, while the deep VGG16, VGG19, and ResNet50 reach 93.80%, 93.50%, and 93.65%, thanks to transfer learning: loading pre-trained parameters lets them fully learn the features of apple leaf lesions and overcomes the overfitting caused by insufficient training samples. The proposed model reaches 94.70%, because the SK convolution module at the feature-output layer of VGG16 improves the model's multi-scale feature extraction. AlexNet, VGG16, and VGG19 produce large models, whereas the proposed model is only 70.2 MB, saving computation cost. Figure 6 shows the recognition accuracy of the different models over training epochs; the proposed model converges fastest, because the massive parameter optimization of fully connected layers is time-consuming while global average pooling involves no such optimization, accelerating convergence.

4 Conclusion
Lesions on apple leaves are relatively small in the early stage of disease, and traditional CNNs do not recognize them accurately. On the basis of the traditional VGG16 network model, this paper proposes an improved model that initializes the bottleneck-layer parameters with a pre-trained model and adopts an SK convolution block and global average pooling. The model improves the ability to recognize tiny lesions, raises the accuracy of leaf disease recognition, accelerates network convergence, and reduces time overhead.

References:
[1] ZHANG C L, ZHANG S W, YANG J C, et al. Apple leaf disease identification using genetic algorithm and correlation based feature selection method [J]. Int J Agric & Biol Eng, 2017, 10(2): 74-83.
[2] ZHANG Yunlong, YUAN Hao, ZHANG Qingqing, et al. Apple leaf disease recognition method based on color features and difference histogram [J]. Jiangsu Agricultural Sciences, 2017, 45(14): 171-174.
[3] DYRMANN M, KARSTOFT H, MIDTIBY H S. Plant species classification using deep convolutional neural network [J]. Biosystems Engineering, 2016, 151: 72-80.
[4] AL-SAFFAR A M, TAO H, TALAB M A. Review of deep convolution neural network in image classification [C]// 2017 International Conference on Radar, Antenna, Microwave, Electronics, and Telecommunications, 2018: 26-31.
[5] SLADOJEVIC S, ARSENOVIC M, ANDERLA A, et al. Deep neural networks based recognition of plant diseases by leaf image classification [J]. Computational Intelligence and Neuroscience, 2016(6): 1-11.
[6] MOHANTY S P, HUGHES D P, SALATHE M. Using deep learning for image-based plant disease detection [J]. Frontiers in Plant Science, 2016, 7: 1419.
[7] LIU B, ZHANG Y, HE D J, et al. Identification of apple leaf diseases based on deep convolutional neural networks [J]. Symmetry, 2017, 10(1): 553-564.
[8] SZEGEDY C, LIU W, JIA Y, et al. Going deeper with convolutions [C]// IEEE Conference on Computer Vision and Pattern Recognition, 2014: 1-9.
[9] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks [C]// Advances in Neural Information Processing Systems, 2012: 1097-1105.
[10] SUN Jun, TAN Wenjun, MAO Hanping, et al. Recognition of multiple plant leaf diseases based on improved convolutional neural network [J]. Transactions of the Chinese Society of Agricultural Engineering, 2017, 33(19): 209-215.
[11] BARANWAL S, KHANDELWAL S, ARORA A. Deep learning convolutional neural network for apple leaves disease detection [C]// International Conference on Sustainable Computing in Science, Technology & Management, 2019: 1-8.
[12] GEETHARAMANI G, PANDIAN J A. Identification of plant leaf diseases using a nine-layer deep convolutional neural network [J]. Computers & Electrical Engineering, 2019, 76: 323-338.
[13] FERENTINOS K P. Deep learning models for plant disease detection and diagnosis [J]. Computers and Electronics in Agriculture, 2018, 145: 311-318.
[14] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition [EB/OL]. [2020-05-03]. https://arxiv.org/abs/1409.1556.
[15] GUAN W, YU S, JIANXIN W. Automatic image-based plant disease severity estimation using deep learning [EB/OL]. [2020-05-06]. https://doi.org/10.1155/2017/2917536.
[16] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition [C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016: 770-778.
[17] SZEGEDY C, VANHOUCKE V, IOFFE S, et al. Rethinking the inception architecture for computer vision [C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, 2016: 2818-2826.
[18] MEHDIPOUR GHAZI M, YANIKOGLU B, APTOULA E. Plant identification using deep neural networks via optimization of transfer learning parameters [J]. Neurocomputing, 2017, 235: 228-235.
[19] ZHANG D, CHEN P, ZHANG J, et al. Classification of plant leaf diseases based on improved convolutional neural network [J]. Sensors, 2019, 19(19): 4161-4180.
[20] LI X, WANG W, HU X, et al. Selective kernel networks [C]// Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition, 2019: 246-258.
[21] NEWELL A, YANG K, DENG J. Stacked hourglass networks for human pose estimation [C]// European Conference on Computer Vision, 2016: 483-499.
[22] LIN M, CHEN Q, YAN S. Network in network [EB/OL]. [2020-05-07]. https://arxiv.org/abs/1312.4400.
(Responsible editor: ZHENG Xiaohu)
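The fuse-and-select stage of the SK convolution in Section 2.3 (Eqs. (1)-(6)) can be sketched in NumPy for the two-branch case. Batch normalization is omitted and the weight shapes are simplified, so this is an illustration of the attention mechanism rather than the paper's trained module.

```python
import numpy as np

def sk_select(u3, u5, ws, A, B):
    """Selective-kernel fusion for two branches (NumPy sketch).

    u3, u5: (H, W, C) feature maps from the 3x3 and 5x5 branches.
    ws: (d, C) squeeze matrix; A, B: (C, d) per-branch score matrices.
    """
    u = u3 + u5                              # Eq. (1): element-wise fuse
    s = u.mean(axis=(0, 1))                  # Eq. (2): global average pool -> (C,)
    z = np.maximum(ws @ s, 0.0)              # Eq. (3): ReLU(W_s s), BN omitted
    scores = np.stack([A @ z, B @ z])        # (2, C) branch scores
    e = np.exp(scores - scores.max(axis=0))  # Eq. (5): channel-wise softmax
    a, b = e / e.sum(axis=0)                 # a_c + b_c = 1 for every channel
    return a * u3 + b * u5                   # Eq. (6): weighted output
```

Because a_c + b_c = 1, each output channel is a convex combination of the two branches, which is how the network "chooses" a kernel size per channel.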

Medical Image Denoising and Enhancement Based on LabVIEW (IJIGSP-V7-N11-6)

I.J. Image, Graphics and Signal Processing, 2015, 11, 42-47
Published Online October 2015 in MECS. DOI: 10.5815/ijigsp.2015.11.06

Denoising and Enhancement of Medical Images Using Wavelets in LabVIEW
Yogesh Rao (1. VJTI, Mumbai; 2. SAMEER, Mumbai, India)
E-mail: yogeshrao.vjti@ , nishasarvade@ .in, roshanmakkar@

II. DWT AND SVD THEOREM
A. Discrete Wavelet Transform
The discrete wavelet transform is developed from the continuous wavelet transform with a discrete input, but with a simplified mathematical derivation. The relation between input and output can be represented as [6]:

    x_{a,L}[n] = sum_k x_{a-1,L}[2n - k] * g[k]
    x_{a,H}[n] = sum_k x_{a-1,H}[2n - k] * h[k]

where g and h are the low-pass and high-pass analysis filters and a is the decomposition level.
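The filter-bank relation in Section II.A (convolve, then keep every second sample) can be sketched directly. Periodic boundary extension is an assumption here; the paper does not state its border mode.

```python
import numpy as np

def dwt_level(x, g, h):
    """One analysis level of the DWT filter bank.

    x: 1-D signal; g: low-pass taps; h: high-pass taps.
    x_L[n] = sum_k x[2n - k] g[k],  x_H[n] = sum_k x[2n - k] h[k],
    with periodic extension at the borders.
    """
    N = len(x)
    lo = np.zeros(N // 2)
    hi = np.zeros(N // 2)
    for n in range(N // 2):
        for k in range(len(g)):
            lo[n] += x[(2 * n - k) % N] * g[k]
        for k in range(len(h)):
            hi[n] += x[(2 * n - k) % N] * h[k]
    return lo, hi
```

With the Haar pair g = [1/sqrt(2), 1/sqrt(2)], h = [1/sqrt(2), -1/sqrt(2)], a constant signal yields all of its energy in the low-pass band and zero detail coefficients, which is the behavior a denoiser exploits by thresholding the detail band.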

Review of Translated Foreign References on Image Enhancement Technology


Review of translated foreign references on image enhancement technology (the document contains the English original and a Chinese translation).

Original: Hybrid Genetic Algorithm Based Image Enhancement Technology

Abstract—In image enhancement, Tubbs proposed a normalized incomplete Beta function to represent several kinds of commonly used nonlinear transform functions for research on image enhancement. But how to determine the coefficients of the Beta function is still a problem. We propose a hybrid genetic algorithm that combines differential evolution with a genetic algorithm in the image enhancement process and use the algorithm's fast search ability to carry out adaptive mutation and search. Finally, we use simulation experiments to prove the effectiveness of the method.

Keywords—image enhancement; hybrid genetic algorithm; adaptive enhancement

I. INTRODUCTION
During image formation, transfer, or conversion, objective factors such as system noise, inadequate or excessive exposure, and relative motion often leave the acquired image different from the original; such an image is said to be degraded. A degraded image is usually blurred, so the information later extracted from it by a machine is reduced or even wrong, and measures must be taken to improve it. Image enhancement technology is proposed in this sense, and its purpose is to improve image quality. According to the situation of the image, various specialized techniques highlight some of the information in the image and reduce or eliminate irrelevant information, in order to emphasize the whole image or its local features. There is still no unified theory of image enhancement; the techniques can be divided into three categories: point operations, spatial-domain enhancement, and frequency-domain enhancement. This paper presents an adaptive image enhancement method, called the hybrid genetic algorithm, that adjusts automatically according to image characteristics.
It combines the adaptive search capability of the differential evolution algorithm to automatically determine the parameter values of the transformation function, achieving adaptive image enhancement.

II. IMAGE ENHANCEMENT TECHNOLOGY
Image enhancement emphasizes or highlights some features of an image, such as contours, contrast, or edges, to facilitate detection or further analysis and processing. Enhancement does not increase the information in the image data; rather, it expands the dynamic range of the chosen features so that they can be detected or identified more easily, laying a good foundation for follow-up detection and analysis. Image enhancement methods consist of point operations, spatial filtering, and frequency-domain filtering. Point operations include contrast stretching, histogram modeling, noise clipping, and image subtraction. Spatial filtering includes low-pass filtering, median filtering, and high-pass filtering (image sharpening). Frequency-domain filtering includes homomorphic filtering and multi-scale, multi-resolution image enhancement [1].

III. DIFFERENTIAL EVOLUTION ALGORITHM
Differential evolution (DE) was first proposed by Price and Storn. Compared with other evolutionary algorithms, DE has strong spatial search capability and is easy to implement and understand. DE is a novel search algorithm: it first randomly generates an initial population in the search space, then computes the difference vector of any two members and adds it to a third member, forming a new individual. If the fitness of the new individual is better than that of the original member, the new individual replaces the original. The operations of DE are the same as those of a genetic algorithm, namely mutation, crossover, and selection, but the methods are different.
We suppose that the population size is P and the vector dimension is D, so the target vector can be expressed as

    x_i = [x_i1, x_i2, ..., x_iD], i = 1, ..., P.  (1)

The mutation vector can be expressed as

    V_i = X_r1 + F * (X_r2 - X_r3), i = 1, ..., P,  (2)

where X_r1, X_r2, X_r3 are three individuals randomly selected from the population, with r1 != r2 != r3 != i, and F is a real constant scaling factor in the range [0, 2] that controls the influence of the difference vector. Clearly, the smaller the difference vector, the smaller the perturbation, which means that if the population is close to the optimum the perturbation is automatically reduced.

The selection operation of DE is a "greedy" selection mode: the new vector u_i is retained in the next generation if and only if its fitness is better than that of the target vector x_i; otherwise the target individual x_i remains in the population and serves again as a parent vector of the next generation.

IV. HYBRID GA FOR IMAGE ENHANCEMENT
Image enhancement is the foundation of fast object detection, so it is necessary to find a real-time algorithm with good performance. For the practical requirements of different systems, many algorithms need manually determined parameters and thresholds. A normalized incomplete Beta function can completely cover the typical image enhancement transform types, but determining the Beta function parameters still poses many problems. This section presents an adaptive image enhancement method based on the Beta function: the search capability of the adaptive hybrid genetic algorithm automatically determines the parameter values of the transformation function to achieve adaptive image enhancement. The purpose of image enhancement is to improve image quality, making specified features more prominent, restoring the details of the degraded image, and so on.
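The mutation of Eq. (2) and the greedy selection just described can be sketched in NumPy. Crossover, which the paper also lists among DE's operations, is omitted for brevity, so this is a simplified illustration rather than the full algorithm.

```python
import numpy as np

def de_step(pop, fitness, F=0.5, rng=None):
    """One DE generation: mutation V_i = X_r1 + F*(X_r2 - X_r3), then
    greedy selection (crossover omitted).  `fitness` is minimized."""
    rng = rng or np.random.default_rng()
    P, _ = pop.shape
    new_pop = pop.copy()
    for i in range(P):
        # Pick three distinct indices r1 != r2 != r3 != i.
        r1, r2, r3 = rng.choice([j for j in range(P) if j != i], 3, replace=False)
        v = pop[r1] + F * (pop[r2] - pop[r3])
        # Greedy selection: keep the trial vector only if it is better.
        if fitness(v) < fitness(pop[i]):
            new_pop[i] = v
    return new_pop
```

Because selection is greedy, the best fitness in the population can never get worse from one generation to the next.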
A common feature of degraded images is low contrast: the image usually appears bright, dim, or concentrated in the grey mid-range. A low-contrast degraded image can be enhanced by stretching its dynamic histogram, i.e., by a grey-level transformation. We use I_xy to denote the grey level of point (x, y):

    I_xy = f(x, y),  (3)

where f is a linear or nonlinear function. In general, grey images have four kinds of nonlinear transformation [6][7], shown in Figure 1. We use a normalized incomplete Beta function to automatically fit these four categories of image enhancement transformation curves. It is defined as

    f(u) = B^(-1)(alpha, beta) * integral from 0 to u of t^(alpha-1) * (1 - t)^(beta-1) dt,  0 < alpha, beta < 10,  (4)

where

    B(alpha, beta) = integral from 0 to 1 of t^(alpha-1) * (1 - t)^(beta-1) dt.  (5)

For different values of alpha and beta, we get different response curves from (4) and (5). The hybrid GA uses the adaptive differential evolution algorithm of the previous section to search for the best Beta parameter values; each pixel's grey value is then passed through the Beta function, realizing the corresponding transformation of Figure 1 and producing ideal image enhancement. The detailed description follows.

Let i_xy, (x, y) in Omega, denote the grey level of pixel (x, y) of the original image, where Omega is the image domain, and let I_xy denote the enhanced image. First, the image grey values are normalized into [0, 1]:

    g_xy = (i_xy - i_min) / (i_max - i_min),  (6)

where i_max and i_min are the maximum and minimum grey values of the image. The nonlinear transformation function f(u) (0 <= u <= 1) transforms the source image to G_xy = f(g_xy), where 0 <= G_xy <= 1. Finally, the hybrid genetic algorithm determines the optimal parameters alpha and beta of the appropriate Beta function f(u), and the enhanced image G_xy is de-normalized.

V. EXPERIMENT AND ANALYSIS
In the simulation, we used two different types of degraded grey-scale images; the program was run 50 times with a population size of 30, evolved for 600 generations.
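The transform of Eqs. (4)-(6) can be sketched numerically without any special-function library by approximating the incomplete Beta integral with the trapezoid rule. Here alpha and beta are passed in directly; in the paper they would come from the hybrid GA search.

```python
import numpy as np

def incomplete_beta(u, alpha, beta, steps=2000):
    """Normalized incomplete Beta f(u) of Eqs. (4)-(5), trapezoid rule."""
    t = np.linspace(1e-9, 1 - 1e-9, steps)
    integrand = t ** (alpha - 1) * (1 - t) ** (beta - 1)
    # Cumulative trapezoid integral, then divide by B(alpha, beta).
    cdf = np.concatenate(
        [[0.0], np.cumsum((integrand[1:] + integrand[:-1]) / 2 * np.diff(t))])
    cdf /= cdf[-1]
    return np.interp(u, t, cdf)

def enhance(img, alpha, beta):
    """Normalize to [0, 1] (Eq. 6), apply f(u), de-normalize."""
    lo, hi = float(img.min()), float(img.max())
    g = (img - lo) / (hi - lo)
    return lo + (hi - lo) * incomplete_beta(g, alpha, beta)
```

With alpha = beta = 1 the transform reduces to the identity; alpha, beta > 1 stretches the mid-range and compresses both ends, the class (c) behavior described for the low-contrast test image below.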
The results show that the proposed method can very effectively enhance different types of degraded images.

In Figure 2, the original image (a) is 320x320. Its contrast is low and some details are obscure; in particular, the texture of the scarf and other details is not obvious, and the visual effect is poor. The method proposed in this section overcomes these issues and gives a satisfactory result, shown in Figure 5(b); the visual effect is well improved. From the histogram, the distribution of image intensity is more uniform, and the distribution of the light and dark grey regions is more reasonable. The hybrid genetic algorithm automatically identified the nonlinear transformation curve, with parameter values 9.837 and 5.7912. The curve is consistent with class (c) of Figure 1: it stretches the middle region and compresses both ends. This agrees with the histogram: the original image has low overall contrast, and compressing both ends while stretching the middle region is consistent with human vision, so the enhancement effect is significantly improved.

In Figure 3, the original image (a) is 320x256 with low overall intensity. The proposed method yields image (b): the resolution and contrast of the ground, chairs, clothes, and other details are significantly improved over the original. The grey distribution of the original is concentrated in the low region, while the grey levels of the enhanced image are uniform. The grey transformation before and after is basically of the same class as the nonlinear curve of Figure 1(a), i.e., the dim regions of the image are stretched; the parameter values were 5.9409 and 9.5704. The inference of the degradation type for the nonlinear transformation is correct, and the enhancement has a good visual effect and good robustness.

It is difficult to assess the quality of image enhancement, and there is still no common evaluation criterion for images. The peak signal-to-noise ratio (PSNR) is commonly used, but it does not reflect the errors of the human visual system. Therefore, we use the edge protection index and the contrast increase index to evaluate the experimental results.

The Edge Protection Index (EPI) is defined as follows:  (7)

The Contrast Increase Index (CII) is defined as follows:

    CII = C_E / C_O,  C = (G_max - G_min) / (G_max + G_min),  (8)

where C_E and C_O are the contrasts of the enhanced and original images, and G_max and G_min are the maximum and minimum grey values.

In Figure 4 we compare with the wavelet-transform-based algorithm and give the evaluation numbers in TABLE I. Figures 4(a, c) show the original image and the result enhanced by the differential evolution algorithm: the contrast is markedly improved, image details are clearer, and edge features are more prominent. Figures 4(b, c) compare wavelet-based and hybrid-genetic-algorithm-based image enhancement. The wavelet-based method brings out some image details, and its visual effect is an improvement over the original image, but the enhancement is not obvious. The adaptive-transform enhancement based on the hybrid genetic algorithm is very good: image detail, texture, and clarity are greatly improved compared with the wavelet-transform result, which helps subsequent analysis and processing. The wavelet enhancement experiments used the "sym4" wavelet; in the differential evolution experiments, the parameter values were 5.9409 and 9.5704. For a 256x256 image, the adaptive hybrid genetic algorithm implemented in Matlab 7.0 takes about 2 seconds, which is very fast. From the objective evaluation criteria in TABLE I, both the edge protection index and the contrast increase index of the adaptive hybrid genetic algorithm are considerably higher than those of the traditional wavelet-transform method, which shows the objective advantage of the method described in this section.
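The contrast increase index of Eq. (8) is straightforward to compute; this sketch uses the Michelson-style contrast C = (G_max - G_min) / (G_max + G_min) given there, applied to whole images.

```python
import numpy as np

def cii(original, enhanced):
    """Contrast Increase Index, Eq. (8): CII = C_E / C_O."""
    def contrast(img):
        gmax, gmin = float(img.max()), float(img.min())
        return (gmax - gmin) / (gmax + gmin)
    return contrast(enhanced) / contrast(original)
```

A CII greater than 1 indicates that the enhancement widened the image's relative dynamic range.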
From the above analysis, we can see that this method is useful and effective.

VI. CONCLUSION
In this paper, from the perspective of maintaining the integrity of image information, a hybrid genetic algorithm is used for image enhancement. As the experimental results show, the method has an obvious effect. Compared with other evolutionary algorithms, the hybrid genetic algorithm performs outstandingly: it is simple, robust, and converges rapidly; an almost optimal solution can be found in each run; only a few parameters need to be set, and the same parameter settings can be used for many different problems. Using the hybrid genetic algorithm's quick search capability, adaptive mutation and search are performed for a given test image to finalize the best parameter values of the transformation function. Compared with exhaustive search, this significantly reduces the time required and the computational complexity. Therefore, the proposed image enhancement method has practical value.

REFERENCES
[1] HE Bin, et al. Visual C++ Digital Image Processing [M]. Posts & Telecom Press, 2001, 4: 473-477.
[2] Storn R, Price K. Differential Evolution: a Simple and Efficient Adaptive Scheme for Global Optimization over Continuous Space [R]. International Computer Science Institute, Berkeley, 1995.
[3] Tubbs J D. A note on parametric image enhancement [J]. Pattern Recognition, 1997, 30(6): 617-621.
[4] TANG Ming, MA Song De, XIAO Jing. Enhancing Far Infrared Image Sequences with Model Based Adaptive Filtering [J]. Chinese Journal of Computers, 2000, 23(8): 893-896.
[5] ZHOU Ji Liu, LV Hang. Image Enhancement Based on A New Genetic Algorithm [J]. Chinese Journal of Computers, 2001, 24(9): 959-964.
[6] LI Yun, LIU Xuecheng. On Algorithm of Image Contrast Enhancement Based on Wavelet Transformation [J]. Computer Applications and Software, 2008, 8.
[7] XIE Mei-hua, WANG Zheng-ming. The Partial Differential Equation Method for Image Resolution Enhancement [J]. Journal of Remote Sensing, 2005, 9(6): 673-679.

Translation: Image Enhancement Technology Based on Hybrid Genetic Algorithm
Abstract—In image enhancement, Tubbs proposed a normalized incomplete Beta function to represent several kinds of commonly used nonlinear transform functions for research on image enhancement.

Research on Restoration Technology for Motion-Blurred Images


Transducer and Microsystem Technologies, 2021, Vol. 40, No. 4, p. 63. DOI: 10.13873/J.1000-9787(2021)04-0063-03

Research on Restoration Technology for Motion-Blurred Images
(Received 2019-09-27; supported by the National Natural Science Foundation of China (61762067))
CHEN Ying, HONG Chenfeng (School of Software, Nanchang Hangkong University, Nanchang 330063, China)

Abstract: For the restoration of motion-blurred images, a scheme based on a generative adversarial network (GAN) and the FSRCNN network is proposed.

The GoPro blurred-image dataset and the DIV2K image restoration dataset are used for network training. Image-processing preprocessing operations such as resizing, normalization, and color-space conversion are applied to the blurred images; the GAN is used to restore the blurred image, and FSRCNN is combined with it to enhance the blurred image. Analysis and comparison of the GAN and FSRCNN results show that, although the peak signal-to-noise ratio of the restored image after FSRCNN enhancement drops somewhat, the structural similarity is improved.

The experimental results show that the proposed algorithm scheme is feasible.

Keywords: generative adversarial network; convolutional neural network; motion blur; image processing
CLC number: TP391.41  Document code: A  Article ID: 1000-9787(2021)04-0063-03

Research on restoration technology of motion blur image
CHEN Ying, HONG Chenfeng (School of Software, Nanchang Hangkong University, Nanchang 330063, China)
Abstract: An image restoration scheme based on generative adversarial networks (GAN) and the FSRCNN network for motion-blurred images is proposed. The GoPro blurred-image dataset and the DIV2K image restoration dataset are used for network training. Image processing operations are used to normalize the blurred image and convert its color space. The GAN is used to restore the blurred image, and FSRCNN is combined to enhance it. The processing results of GAN and FSRCNN are analyzed and compared: the peak signal-to-noise ratio of the reconstructed image enhanced by the FSRCNN network declines somewhat, but the structural similarity is improved. Experimental results show that the proposed algorithm is feasible.
Keywords: generative adversarial networks (GAN); convolutional neural network (CNN); motion blur; image processing

0 Introduction
With the rapid progress of computing in recent years and the spread of image storage in daily life, factors such as focus, camera shake, and the motion of the target object all cause image blur. These same factors lead to local or large-area spatial degradation of the image information, so that errors occur when the image information is stored and used [1].
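The PSNR metric used in the comparison above can be computed as follows; `peak=255.0` assumes 8-bit images, and SSIM, the other metric, would need a separate windowed implementation.

```python
import numpy as np

def psnr(ref, out, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    restored image: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((ref.astype(float) - out.astype(float)) ** 2)
    return float("inf") if mse == 0.0 else 10.0 * np.log10(peak ** 2 / mse)
```

A drop in PSNR alongside a rise in SSIM, as reported for the FSRCNN-enhanced output, means the per-pixel error grew while the perceived structure improved, which is why the paper reports both metrics.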

Sample Computer Science Paper


Sample computer science paper. Title: A Study on the Application of Deep Learning in Image Recognition.

Abstract: Deep learning has become a popular and powerful tool in the field of image recognition. This paper explores the application of deep learning in image recognition, focusing on the development of convolutional neural networks (CNNs) and their use in various image recognition tasks. The paper also discusses the challenges and future directions of deep learning in image recognition.

1. Introduction

Image recognition is a fundamental task in computer vision, with applications in various fields such as autonomous driving, medical imaging, and security surveillance. Deep learning, particularly CNNs, has shown remarkable performance in image recognition tasks, surpassing traditional methods in accuracy and efficiency. This paper aims to provide an overview of the application of deep learning in image recognition, highlighting the development of CNNs and their use in different image recognition tasks.

2. Development of Convolutional Neural Networks

Endoscope Image Enhancement Algorithm Based on Twice-Improved Weighted Guided Filtering


Endoscope Image Enhancement Algorithm Based on Twice-Improved Weighted Guided Filtering
LIN Jinzhao(1), YANG Guang(1), WANG Huiqian(1), PANG Yu(1), LIU Jinhua(1), CHEN Zhi(2), YAN Chongyuan(2)
(1. Chongqing Key Laboratory of Photoelectronic Information Sensing and Transmission Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China; 2. Chongqing Xishan Technology Co., Ltd., Chongqing 401120, China)
Abstract: This paper proposes an improved weighted guided filtering algorithm and applies it to endoscopic image enhancement.

原始内窥镜图像通过导向滤波 算法处理后再将其作为第二次导向滤波的导向图像,与传统导向滤波算法相比能克服其噪声残余的缺陷。
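上述"第一次滤波的输出作为第二次滤波导向图"的思路,可用如下纯 Python 草图示意(灰度图用二维列表表示;盒式均值、半径 r 与正则项 eps 均为经典导向滤波的常规设置,并非论文中加权导向滤波的完整实现):

```python
def box_mean(img, r):
    # 半径为 r 的盒式均值滤波, 边界处按实际可用像素取均值
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            s, n = 0.0, 0
            for u in range(max(0, i - r), min(h, i + r + 1)):
                for v in range(max(0, j - r), min(w, j + r + 1)):
                    s += img[u][v]
                    n += 1
            out[i][j] = s / n
    return out

def guided_filter(I, p, r=1, eps=1e-3):
    # 经典导向滤波: q = mean(a)*I + mean(b), 其中
    # a = cov(I,p)/(var(I)+eps), b = mean(p) - a*mean(I)
    h, w = len(I), len(I[0])
    mI, mp = box_mean(I, r), box_mean(p, r)
    mIp = box_mean([[I[i][j] * p[i][j] for j in range(w)] for i in range(h)], r)
    mII = box_mean([[I[i][j] * I[i][j] for j in range(w)] for i in range(h)], r)
    a = [[(mIp[i][j] - mI[i][j] * mp[i][j]) / (mII[i][j] - mI[i][j] ** 2 + eps)
          for j in range(w)] for i in range(h)]
    b = [[mp[i][j] - a[i][j] * mI[i][j] for j in range(w)] for i in range(h)]
    ma, mb = box_mean(a, r), box_mean(b, r)
    return [[ma[i][j] * I[i][j] + mb[i][j] for j in range(w)] for i in range(h)]

def twice_guided(p, r=1, eps=1e-3):
    # 第一次自导向滤波的输出 q1 作为第二次滤波的导向图
    q1 = guided_filter(p, p, r, eps)
    return guided_filter(q1, p, r, eps)
```

由于第一次滤波已压低了导向图中的噪声,第二次滤波的线性系数估计受噪声干扰更小,这正是正文所述克服噪声残余的出发点。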

最后通过图像增强算法,对内窥镜图像的血管细节和亮度进行增强。实验表明,本文算法的SSIM指标和PSNR指标均优于传统导向滤波算法和加权导向滤波算法。
关键词:改进加权导向滤波;导向滤波;内窥镜图像;图像增强
中图分类号:TP391.4  文献标识码:A  DOI:10.11967/2020181212

Endoscope Image Enhancement Algorithm Based on Twice Improved Weighted Guided Filtering
Lin Jinzhao1, Yang Guang1, Wang Huiqian1, Pang Yu1, Liu Jinhua1, Chen Zhi2, Yan Chongyuan2
(1. Photoelectronic Information Sensing and Transmission Technology Laboratory, Chongqing University of Posts and Telecommunications, Chongqing 400065, P. R. China; 2. Chongqing Xishan Technology Co., Ltd., Chongqing 401120, P. R. China)
Abstract: This paper proposes an improved weighted guided filtering algorithm and applies it to endoscopic image enhancement. The original endoscopic image is processed by the guided filtering algorithm and then used as the guide image for the second guided filtering. Compared with the traditional guided filtering algorithm, the defect of residual noise can be overcome. Finally, an image enhancement algorithm is used to enhance the blood vessel details and brightness of the endoscopic image. Experiments show that the SSIM index and PSNR index of the algorithm in this paper are better than those of the traditional guided filtering algorithm and the weighted guided filtering algorithm.
Key words: improved weighted guided filtering; guided image filtering; endoscopic image; edge preserving; image enhancement
[CLC Number] TP391.4  [Document Code] A  DOI:10.11967/2020181212

引言
现代医学技术的飞速发展,使得内窥镜逐渐成为辅助医疗诊断中常用的医疗设备。

英语有关眼睛的说明文作文


The human eye is a remarkable organ, designed to capture and process light, allowing us to perceive the world around us. Here is a detailed explanation of the structure and function of the eye, highlighting its various components and their roles in the process of vision.

Structure of the Eye:
1. Cornea: The cornea is the transparent, dome-shaped front surface of the eye. It acts as a protective barrier and helps focus light entering the eye.
2. Sclera: The sclera is the white, tough outer layer of the eye that provides protection and maintains the shape of the eye.
3. Iris: The iris is the colored part of the eye, which controls the amount of light entering the eye by adjusting the size of the pupil.
4. Pupil: The pupil is the black, circular opening in the center of the iris that allows light to enter the eye.
5. Lens: The lens is a transparent, flexible structure that changes shape to focus light onto the retina. It is located behind the iris.
6. Retina: The retina is the light-sensitive tissue at the back of the eye. It contains cells called photoreceptors (rods and cones) that convert light into electrical signals.
7. Optic Nerve: The optic nerve is responsible for transmitting the electrical signals from the retina to the brain, where they are interpreted as visual images.
8. Vitreous Humor: This is a clear, gel-like substance that fills the space between the lens and the retina, helping to maintain the eye's shape and transmit light to the retina.

Function of the Eye:
1. Light Reception: Light enters the eye through the cornea, which bends the light and directs it towards the lens.
2. Focusing: The lens adjusts its shape to focus the light onto the retina, a process known as accommodation.
3. Image Formation: The retina contains millions of photoreceptor cells that are sensitive to light. These cells convert the light into electrical signals.
4. Color Perception: The cones in the retina are responsible for color vision. There are three types of cones, each sensitive to different parts of the color spectrum: red, green, and blue.
5. Night Vision: Rods in the retina are responsible for vision in low light conditions. They are more sensitive to light than cones but do not detect color.
6. Signal Transmission: The electrical signals generated by the photoreceptors are transmitted to the brain via the optic nerve.
7. Visual Processing: The brain processes these signals and constructs the images that we perceive as sight.

Health and Care of the Eye:
Regular eye exams are crucial for maintaining eye health and detecting any issues early. Wearing protective eyewear during sports or activities where the eyes may be at risk is important. A balanced diet rich in vitamins and minerals, particularly those beneficial for eye health like vitamins A, C, and E, and omega-3 fatty acids, can support eye function. Limiting screen time and taking regular breaks can help reduce eye strain. Understanding the eye's structure and function is essential for appreciating the complexity of this vital sensory organ and the importance of taking care of our vision.

基于深度卷积集成网络的视网膜多种疾病筛查和识别方法


智能科学与技术学报,第3卷第3期,2021年9月. Chinese Journal of Intelligent Science and Technology, Vol.3, No.3, September 2021

基于深度卷积集成网络的视网膜多种疾病筛查和识别方法
王禾扬,杨启鸣,朱旗
(南京航空航天大学计算机科学与技术学院,江苏南京 211100)
摘 要:针对视网膜疾病种类繁多、病灶位置不固定等特点,提出一种基于深度卷积集成网络的视网膜多种疾病筛查和识别方法。

首先,根据视网膜眼底图像裁剪掉两侧黑色边框,并去除图像中的噪声,以降低对眼底图像的干扰,提高图像的清晰度;之后,通过对处理完成的视网膜眼底图像使用裁剪、旋转等数据增强方法来扩增数据集;再建立基于深度卷积神经网络的模型进行特征提取,并在网络模型微调后完成视网膜疾病筛查和识别任务,最终将多个模型的结果进行集成。
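文中"将多个模型的结果进行集成"的一种常见做法是对各模型输出的类别概率取平均、再取平均概率最大的类别,下面给出纯 Python 示意(是否恰为原文采用的集成方式属笔者假设):

```python
def ensemble_average(prob_lists):
    # prob_lists: 每个元素是一个模型输出的类别概率列表
    n, k = len(prob_lists), len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n for c in range(k)]
    pred = max(range(k), key=lambda c: avg[c])  # 平均概率最大的类别
    return avg, pred
```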

实验结果表明,该方法针对视网膜疾病的筛查和识别的问题取得了较好的效果,视网膜疾病筛查的准确率达到96.05%,视网膜疾病识别的准确率达到72.55%。

关键词:视网膜眼底图像;疾病筛查;疾病识别;深度卷积网络;集成模型
中图分类号:TP391.4  文献标识码:A  DOI:10.11959/j.issn.2096-6652.202127

Retinal multi-disease screening and recognition method based on deep convolution ensemble network
WANG Heyang, YANG Qiming, ZHU Qi
(College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211100, China)
Abstract: As for the characteristics of various types of retinal diseases and uncertainty of the location of the lesions, a retinal multi-disease screening and recognition method based on deep convolutional ensemble network was proposed. Firstly, the black borders on both sides of the retinal fundus image were cut off, and the noise in the image was removed to reduce the interference to the retinal image and increase the clarity of the image. After that, data augmentation methods such as cropping and rotating were performed to process retinal fundus image to amplify the dataset. Then, a model based on deep convolutional neural network was built for feature extraction, and the network model was fine-tuned to complete the task of screening and identifying retinal diseases. Finally, the results of multiple models were ensembled. The experimental results show that this method has achieved good results for the screening and recognition of retinal diseases, the accuracy of retinal disease screening is 96.05%, and the accuracy of retinal disease recognition is 72.55%.
Key words: retinal fundus image, disease screening, disease recognition, deep convolutional network, ensemble model

1 引言
视网膜疾病发病率高,且相关疾病种类繁多。

基于人眼视觉特性的图像增强算法研究毕业设计


基于人眼视觉特性的图像增强算法研究(毕业设计)

摘要:利用图像增强技术,可以使图像获得更佳的视觉效果,提高人眼对信息的辨别能力;另一方面,图像增强作为一种预处理技术,能使处理后的图像比原图像更适合于参数估计、图像分割和目标识别等后续图像分析工作。

因此,图像增强技术的研究一直是图像处理的一项重要内容。

但传统的基于直方图的图像增强方法存在以下几个问题:
1)传统直方图灰度级统计量与信息量存在不一致问题;
2)传统直方图均衡方法在灰度级调整过程中,没有充分利用视觉敏感区段;
3)没有针对图像内容多变特点,自适应地获取灰度级调整的优化配置参数。
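作为对照,传统直方图均衡的基本流程(统计灰度直方图、求累积分布、按累积分布重新映射灰度级)可概括为如下纯 Python 草图(灰度级数与取整方式为示意性选择):

```python
def hist_equalize(gray, levels):
    # gray: 取值 0..levels-1 的灰度值列表; 返回均衡化后的灰度值列表
    n = len(gray)
    hist = [0] * levels
    for g in gray:
        hist[g] += 1
    cdf, c = [], 0
    for cnt in hist:
        c += cnt
        cdf.append(c)
    # 映射: s = floor(cdf(g)/n * (levels-1))
    return [int(cdf[g] / n * (levels - 1)) for g in gray]
```

该映射只依据各灰度级的像素数量,这正是正文所指"灰度级统计量与信息量不一致"问题的来源。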

针对上述问题本文展开以下几个方面研究:首先,针对传统直方图对图像信息的描述存在不足,本文提出了一种新的基于视觉注意机制的灰度级信息量直方图构造方法。

这种新的直方图在灰度级统计过程中同时考虑各灰度级数量和空间分布情况,采用视觉注意机制计算模型测算出不同位置灰度级的重要性(或显著性),并依据各像素灰度级的重要性进行加权统计,使得统计结果可以客观反映各灰度级对图像信息刻画所起的作用。
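这种按显著性加权的灰度级统计可以用如下纯 Python 草图示意(显著性权重如何由视觉注意模型算出不在本例范围内,函数与参数名为笔者假设):

```python
def weighted_histogram(gray, weight, levels):
    # gray: 各像素灰度级; weight: 对应像素的显著性权重
    # 返回加权并归一化(权重和为 1)的灰度级统计直方图
    hist = [0.0] * levels
    total = sum(weight)
    for g, w in zip(gray, weight):
        hist[g] += w / total
    return hist
```

与普通直方图相比,显著区域的灰度级在统计结果中占比更高,从而更接近其对图像信息刻画的实际贡献。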

其次,在灰度级调整过程中考虑人眼视觉感知的非线性特性,并针对其特点提出了将不同比例的灰度级信息量分配至不同的视觉敏感度区段。

其分配原则遵循敏感度大的区段分配较多的信息量,同时为避免主导灰度级在直方图拉伸处理中占用较大范围的灰度级空间,本文利用人眼感知能力曲线约束主导灰度级动态范围。

最后,由于图像内容存在多变的特点,为了获取更好的图像增强效果,有必要对各视觉敏感度区段的信息量分配比例做一定的调节,为此本文提出了依据图像增强质量客观评估算法的分析结果,自适应地获取最佳调节参数。

图像增强质量客观评估算法是依据视觉感知模型设计的,大量测试结果表明,客观评估算法的分析结果与主观评估结果基本吻合。

关键词:图像增强;视觉注意机制;人眼调制传递函数;临界可见偏差;图像质量评价

1 引言
1.1 研究背景及意义
在一个图像系统中,从图像的获取,到图像的发送、传输、接收、输出(显示)、复制等等,每一个环节都会产生干扰,都会使图像质量降低,不能很好地贴合人眼直接观察到的图像。

一种基于改进U形网络的眼底图像视网膜新生血管检测方法


收稿日期:2020-04-10
基金项目:国家自然科学基金资助项目(61573380,61702559),National Natural Science Foundation of China (61573380, 61702559);国家重大科技专项(2018AAA0102100),National Science and Technology Major Project of the Ministry of Science and Technology of China (2018AAA0102100)
作者简介:邹北骥(1961—),男,江西南昌人,中南大学教授.†通信联系人,E-mail:*****************
湖南大学学报(自然科学版),第48卷第4期,2021年4月. Journal of Hunan University (Natural Sciences), Vol.48, No.4, Apr. 2021
DOI:10.16339/ki.hdxbzkb.2021.04.003  文章编号:1674—2974(2021)04—0019—07

一种基于改进U形网络的眼底图像视网膜新生血管检测方法
邹北骥2,3,易博松1,3,刘晴2,3†
(1.中南大学自动化学院,湖南长沙410083;2.中南大学计算机学院,湖南长沙410083;3.湖南省机器视觉与智慧医疗工程技术研究中心,湖南长沙410083)
摘要:糖尿病性视网膜病变(简称糖网病)是主要的致盲眼疾病之一,视网膜新生血管的出现是糖网病恶化的重要标志.为了更准确地检测出视网膜新生血管,本文提出了一种基于彩色眼底图的视网膜新生血管检测方法.首先通过一种改进的U形卷积神经网络对血管进行分割;然后利用滑动窗口提取特定区域内血管的形态特征,通过支持向量机将窗口内的血管分为普通血管和新生血管.使用来自MESSIDOR数据集和Kaggle数据集的含有视网膜新生血管的彩色眼底图对实验进行训练和测试,结果表明该方法对视网膜新生血管检测的准确率为95.96%;该方法在糖网病计算机辅助诊断方面有潜在的应用前景.
关键词:视网膜新生血管检测;血管分割;U形网络;深度学习
中图分类号:TP391.41  文献标志码:A

A Method of Retinal Neovascularization Detection on Retinal Image Based on Improved U-net
ZOU Beiji2,3, YI Bosong1,3, LIU Qing2,3†
(1. School of Automation, Central South University, Changsha 410083, China; 2. School of Computing, Central South University, Changsha 410083, China; 3. Hunan Machine Vision and Intelligent Medical Research Center, Changsha 410083, China)
Abstract: Diabetic retinopathy (DR) is one of the major causes of blindness, and the appearance of retinal neovascularization (RN) is an important sign of DR deterioration. In order to detect RN more accurately, a method based on color fundus photograph for retinal neovascularization detection is proposed. First, an improved U-shaped convolutional neural network is used to segment the blood vessels. Then, a sliding window is used to extract the morphological characteristics of blood vessels in the specific area. A support vector machine (SVM) is used to classify the blood vessels into normal vessels and retinal neovascularization in the window. The experiments use color fundus photographs with
retinal neovascularization from the MESSIDOR dataset and the Kaggle dataset for training and testing.The result shows that the accuracy of this method for the RN detection is 95.96%;This method has potential applica -糖尿病性视网膜病变是糖尿病的微血管主要并发症之一,也是主要的致盲疾病之一[1].据估计,到2030年全世界将有约3.6亿人罹患糖尿病.我国目前的糖尿病患病率为9.7%,约有9400万人罹患糖尿病[1].临床上视网膜新生血管的出现是非增殖期糖尿病性视网膜病变恶化至增殖期糖尿病性视网膜病变的主要标志,也是医生是否需要对患者立刻进行积极治疗的关键判断依据[1-2].目前已有的对于视网膜新生血管的检测方法主要是通过传统图像处理方法对彩色眼底图中的新生血管进行分割,去除背景和大部分图像噪声,只提取血管图像,再使用机器学习方法训练分类器并对眼底图像全局或者特定区域的血管进行分类以达到新生血管检测的目的.Agurto 等[3]将AM-FM 算法用于检测正常和非正常的血管,以此筛查糖尿病性视网膜病.Goatman 等[4]直接提取血管的形状、位置、方向、密度等特征,并使用支持向量机(Support Vector Machine ,SVM )分类器来区分正常和非正常血管.Hassan 等[5]选择固定大小区域里的血管数目和血管所占面积作为特征来检测新生血管.Maryam [6]证明了在各种描述新生血管的特征中,Gabor 滤波器方法能实现最好的特异性和较好的敏感性.Welikala 等[7]分别从标准线性算子和修正线性算子生成的二值化的血管图中提取两组不同的特征集,并提出了一个基于SVM 的双重分类系统用于正常血管和新生血管的分类.Gupta 等[8]提出将视网膜图像拆分成小块来检测,提取小块的纹理特征和灰度特征等525个特征,使用随机森林分类器来训练并进行血管的分类.Pujitha 等[9]提出了一种半监督方法来解决训练数据量不够的问题.对于含有新生血管的小片段,使用Gabor 滤波器来提取特征,通过在特征空间中使用领域信息,将特征融合在基于共同训练的半监督的框架中来分类.Yu 等[10]提出通过对新生血管候选区域进行再次筛选并使用SVM 分类新生血管和普通血管.近年来,U 形网络[11]在医学图像分割任务上获得了巨大的成功.在彩色眼底图像普通血管分割任务上,相比于传统方法,U 形网络能够实现更好的分割性能[12].但与常规血管不同的是,视网膜新生血管在形态上更加细小[1],分割和检测的难度均比常规血管大.针对新生血管较难检测的问题,为了提升检测的准确率,本文提出一种改进U 形网络的眼底图像血管分割算法,使之能更好地分割出常规血管和新生血管,方便后续新生血管的检测.具体地,本文首先设计了一种改进型的U 形网络用于新生血管和常规血管分割.然后使用一个滑动窗口对分割后的新生血管图像进行完整遍历,使用SVM 对每一个窗口生成的子图像内的血管图像按照特征信息进行分类,分类成普通血管或新生血管以完成检测.使用MESSIDOR 数据集和Kaggle 数据集进行方法的训练和测试.实验证明,改进型U 形网络对新生血管的分割精度高于原始U 形网络,本文提出的新生血管检测方法可以准确地对新生血管进行检测.1方法本文提出的新生血管检测方法流程如图1所示.首先对眼底图像的绿色通道图像进行预处理,预处理操作包括对比度受限的自适应直方图均衡化和伽马变换,接着使用改进的U 形网络进行血管分割.然后使用滑动窗口对分割出来的血管图像进行遍历,使用SVM 对每个窗口中的血管图像提取特征并分类,最后统计分类结果,根据分类结果确定该图是否检测到新生血管.1.1预处理彩色眼底图像在绿色通道中整体对比度最高,细节损失最少,相较原图可以更好地表现整体血管结构[10],因此,本文后续实验操作都将在彩色眼底图的绿色通道上完成.在针对彩色眼底图的视网膜新生血管处理中,新生血管像素所占整张眼底图像素的比例非常低,使用形态学方法进行预处理会导致大量新生血管像素丢失,严重影响实验结果.为了保证眼底图中新生tion prospects in the computer-aided diagnosis of diabetic retinopathy.Key words :retinal neovascularization detection ;segmentation of blood vessels ;U-shaped neural network ;deep 
learning湖南大学学报(自然科学版)2021年20邹北骥等:一种基于改进U形网络的眼底图像视网膜新生血管检测方法血管结构的信息完整,本文采用对比度受限的自适应直方图均衡化方法[10]和伽马变换方法[10]来对眼底图进行增强,改善图像低对比度的情况,同时将对新生血管像素的影响降到最小.彩色眼底图绿色通道直方图均衡化和伽马变换改进型U形网络进行血管分割计算滑动窗口中的特征信息SVM对当前窗口血管分类新生血管或非新生血管Features:Vessels area4250 Number of Junctions12 Mean Vessels Length639.61…图1本文方法流程Fig.1The method flow of this study1.2血管分割为了提高对新生血管分割的准确率,本文设计了一种改进型U形网络并使用该网络分割新生血管.本文设计的改进型U形网络在原始U形网络的基础上,引入了改进的残差模块(Improved ResBlock)[13]和金字塔场景解析(Pyramid Scene Parsing,PSP)池化模块[14],可以实现对新生血管图像的语义分割,即将视觉输入分为不同的语义可解释类别,把同类的像素用同一种颜色进行标记,每一种分类类别都具有对应的现实意义.在本文中,白色像素对应血管,黑色像素对应着除血管外的所有背景信息.同时本文在所有卷积层之后执行批归一化操作,将每个批次的输入进行归一化处理,使模型在训练过程中更易于优化,同时降低模型过拟合的风险.本文设计的改进U形网络结构如图2所示.该网络由下采样路径和上采样路径组成,其下采样路径通过下采样操作逐渐减少图像的空间维度,而上采样路径通过上采样操作逐步修复图像的细节和空间维度.对于下采样路径和上采样路径中的每一个卷积层级,使用融合模块实现对应层级的图像输出特征的拼接,使网络可以利用不同层级的图像特征信息.ImprovedResBlockConv2D(1×1)Conv2D(1×1)1/2Conv2D(1×1)1/2ImprovedResBlockImprovedResBlockImprovedResBlockConv2D(1×1)1/2Conv2D(1×1)1/2ImprovedResBlockPSP PoolingCombineUpSample×2ImprovedResBlockImprovedResBlockImprovedResBlockImprovedResBlockOutputCombineCombineCombineUpSample×2UpSample×2UpSample×2CombineInput图2改进U形网络结构Fig.2Improved U-shaped network structure为了缓解网络层数加深而引发的梯度消失问题[13],在原始U形网络每个卷积层之间,引入一系列的残差模块.为了提高网络的感受野,使网络更好地捕捉上下文信息,在每一个残差模块中,除了已经存在的两个普通卷积的分支,本文设计的残差模块将额外引入一组空洞卷积分支[15].改进的残差模块结构图如图3所示.为了在网络中引入更多的上下文信息,进一步融合网络多尺度特征,本文在上采样路径和下采样路径之间引入了PSP池化模块.该模块将输入从通道维度平均划分为4个层级,针对划分后的4个层级,分别对其使用1×1、2×2、3×3、6×6尺寸的池化核进行池化操作.在每组池化的结果之后,采用原始特征数量1/4的1×1卷积实现特征维度的缩减,最终在4组池化后的特征上采样到与输入特征相同的尺寸,并进行特征的拼接,融合多尺度特征.本文采用的PSP池化模块的结构图如图4所示.第4期21Conv2D Input ReLU ReLU ReLU ReLU BatchNorm BatchNormSum OutputConv2D Atrous Conv2DBatchNorm BatchNorm Atrous Conv2D图3改进的残差模块结构Fig.3Improved residual module structureConv2D (1×1)Conv2D (1×1)Conv2D (1×1)Conv2D (1×1)BatchNormBatchNormBatchNormBatchNormUpsample Concat OutpuInputInput (1/4)Input (1/4)Input (1/4)Input (1/4)MaxPooling (1×1)MaxPooling (2×2)MaxPooling (3×3)MaxPooling (6×6)图4PSP 池化模块结构Fig.4PSP pooling module structure本网络使用ReLU 
函数作为网络激活函数.输入的彩色眼底图经过改进型U 形网络的分割会输出血管的二值图,这些输出图像将用于新生血管的分类.1.3血管分类在Yu 等[10]的研究中证明了,面对两类样本数量不均衡的情况下的新生血管二分类任务时,SVM 比卷积神经网络有着更强的分类能力,可以极大地提高分类的效率和准确性.故本文使用SVM 对经过分割的新生血管图像进行血管的分类.在训练SVM 的过程中,本文设置了一个滑动窗口,每个滑动窗口大小为50×50像素,按照从左至右,从上至下,步长为20像素的顺序对新生血管图像进行遍历.计算每个窗口内的血管特征信息,比对专家的标定结果附上属于新生血管或不属于新生血管的标签,形成一个训练样本.结合本文的应用场景和新生血管的形态特征,采用血管段弯曲度平均值、血管段弯曲度方差、血管宽度平均值、血管宽度方差、血管段长度平均值、血管段长度方差、血管面积、血管分支点个数和血管段个数等9个参数作为样本特征.使用AngioTool 软件可以批量地从窗口中提取以上特征值.AngioTool 是一款专业的血管分析工具,它可以快速并且准确地测量一系列的血管形态学指标和空间参数.本文所选取的9个特征值均可通过使用AngioTool 获取.AngioTool 对血管图像测量参数的效果如图5所示.图5AngioTool 测量血管特征参数Fig.5Measuring vascular characteristicparameters by AngioTool在分类过程中,窗口每滑动一次都会对当前窗口区域内的图像进行血管特征信息的计算,由训练好的SVM 给出分类结果,判断该窗口图像内是否包含新生血管.2实验设置与实验结果2.1实验数据与实验设置本文分别从MESSIDOR 数据集和Kaggle 数据集中各挑选了29张和90张共119张包含新生血管的彩色眼底图作为实验材料,并将图片分辨率统一调整至1024×1024像素.这119张新生血管图均由专家进行了新生血管区域的标定.湖南大学学报(自然科学版)2021年22此外,本文还从Kaggle数据集中随机选取80张正常的彩色眼底图像,统一分辨率1024×1024像素,作为负样本参与检验本方法中新生血管分类器的性能.本文的改进型U形网络和SVM的训练和测试均使用Nvidia Titan Xp GPU,在Keras深度学习框架上进行.对于改进型U形网络的训练,为了避免训练数据过少而导致网络训练过程中出现过拟合现象,本文从每一个训练集图像中随机提取等大小的10000个图像块,38张图像,共380000个图像块参与训练,每一个图像块大小为48×48像素.这38张图像均配有专家标定的金标准图像.本文采用随机梯度下降的优化函数来对网络参数进行优化,采用对数交叉熵代价函数作为损失函数来衡量网络的分类误差.学习率初始化为0.01,每次训练38个图像块,共训练200轮.对于SVM的训练,本文选取46张新生血管图像,共9026个带标签的样本(有效的窗口子图像)参与训练.测试集数据由80张正常眼底图像和另外73张新生血管图像组成.在本文中,SVM的C参数值设为0.55.使用K折交叉验证方法,将训练数据随机分为k个较小的子集,并在每次迭代中对数据的k-1部分训练模型.数据的其余部分用作验证和性能评估.该过程重复k次,平均性能报告为总体性能.在本文中,k=10. 
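文中 50×50 像素、步长 20 像素、从左至右再从上至下的滑动窗口遍历,可用如下纯 Python 草图示意(坐标生成方式为与文字描述等价的笔者实现):

```python
def sliding_windows(h, w, win=50, step=20):
    # 生成所有完整落在 h×w 图像内的窗口左上角坐标 (y, x),
    # 先逐行(从上至下)、行内从左至右
    coords = []
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            coords.append((y, x))
    return coords
```

对 1024×1024 的图像,每个方向有 49 个起点,共 49×49 = 2401 个窗口,每个窗口再交由 SVM 判别。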
2.2 实验指标
对于新生血管分割网络的分割结果评估,其分割结果将会与专家标注的金标准血管图进行比对,计算4种统计指标:真阴性(true negative,TN)、真阳性(true positive,TP)、假阴性(false negative,FN)和假阳性(false positive,FP).TN表示被正确地分为背景像素的像素点;TP表示被正确地分为血管像素的像素点;FP表示被错误地分为血管像素的像素点;FN表示被错误地分为背景像素的像素点.
对于分类器的分类结果评估,其分类结果将会与专家标注的诊断结果进行比对,同样计算4种统计指标:TN、TP、FN和FP.如果存在一个窗口将当前的图像区域正确地分类为新生血管区域,则该图像正确地检测到了新生血管,计入TP;如果所有窗口都将当前的图像区域正确地分类为非新生血管,则该图像正确地被分类为不含新生血管图像,计入TN;如果一张不包含新生血管的图像存在一个窗口将当前区域的图像错误地分类为新生血管,则该图像被错误地分类为新生血管图像,计入FP;如果一张包含新生血管的图像的所有窗口都将当前图像区域错误地分类为非新生血管,则该图像被错误地分类为不含新生血管的图像,计入FN.
对于血管分割和血管分类两个任务,通过这4种统计指标,进一步计算出3个评估指标:特异性(Specificity,Sp)、灵敏度(Sensitivity,Sn)、准确率(Accuracy,Acc),这3个评估指标的计算方法为:
Sp = TN / (TN + FP)  (1)
Sn = TP / (TP + FN)  (2)
Acc = (TP + TN) / (TP + FN + TN + FP)  (3)
2.3 实验结果
改进型U形网络对21张含新生血管的眼底图像进行了分割测试,这21张图像均带有专家标注的血管金标准图像.同样地,使用原始U形网络对这21张图像进行分割测试.两个模型的分割结果对比见表1,两种模型对新生血管分割的结果对比见图6.

表1 分割结果与原始U形网络对比
Tab.1 The segmentation results compared with the original U-shaped network
Method           Acc/%   Sn/%   Sp/%
Original U-net   84.11   75.25  94.79
Proposed Method  87.60   79.12  94.16

图6 改进型U形网络与原始U形网络对局部新生血管分割的结果对比((a)原图;(b)专家标注;(c)原始U-net;(d)本文方法)
Fig.6 Comparison of segmentation results of local neovascularization between improved U-shaped network and original U-shaped network
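2.2 节中式(1)~(3)的三个评估指标可直接由混淆矩阵的四个统计量算出,下面给出一个纯 Python 示意(函数名为笔者假设):

```python
def confusion_metrics(tp, tn, fp, fn):
    # 按式(1)~(3)计算特异性 Sp、灵敏度 Sn 与准确率 Acc
    sp = tn / (tn + fp)
    sn = tp / (tp + fn)
    acc = (tp + tn) / (tp + fn + tn + fp)
    return sp, sn, acc
```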
Method95.9691.1097.16从表2可见,本文提出的方法最终取得了对新生血管分类准确率95.96%、灵敏度91.10%、特异性97.16%的结果,对比其他已有研究结果,本文提出的方法在新生血管分类的准确率和特异性上获得了最优的水平.分类准确率代表着分类器在进行分类任务时,对多个标签的样本分类的总体正确率.考虑到本文的实际情况是非新生血管标签数量远大于新生血管标签数量,说明本文提出的方法对非新生血管区域可以更好地分类.灵敏度在本文中代表着对新生血管区域分类的正确率,灵敏度越高,说明有更多的新生血管区域被正确地分类,有更少的非新生血管区域被错误地分类成新生血管区域.特异性在本文中代表着对非新生血管区域分类的正确率,特异性越高,说明有更多的非新生血管区域被正确地分类,有更少的新生血管区域被错误地分类成非新生血管区域.本文提出的方法对非新生血管区域分类的效果极佳,有很高的特异性.这样可以最大程度地避免在实际操作中将非新生血管区域误诊为新生血管.本文的方法在整体的准确率上也取得了最优的水平.综合来看,本文提出的方法可以作为计算机辅助诊断方法辅助医生进行新生血管的检测.3结论本文提出的改进型U 形网络对新生血管分割的效果优于原始U 形网络,可以较好地分割新生血管.本文提出的新生血管检测方法相比目前文献,在MESSIDOR 数据集和Kaggle 数据集上可以较好地检测新生血管,且准确率最高.少量的新生血管图像被本方法错误分类,这是因为新生血管并不是糖网病在眼底图上会出现的唯一病症,其他病症如出血、渗出等会覆盖新生血管,干扰检测.本文提出的方法可以用于计算机辅助诊断,帮助医生做出决策.今后将构建专门的新生血管图像数据库,进一步优化分割神经网络和新生血管分类器,寻找有效的噪声消除方法,提高本方法对新生血管检测的准确率和鲁棒性.参考文献[1]张承芬.眼底病学[M ].北京:人民卫生出版社,2010:261-281.ZHANG C F.Diseases of ocular fundus [M ].Beijing :People ’s Medical Publishing House ,2010:261—281.(In Chinese )[2]中华医学会眼科学会眼底病学组.我国糖尿病视网膜病变临床湖南大学学报(自然科学版)2021年24诊疗指南[J].中华眼科杂志,2014,50(11):854—861.Ophthalmology Group of Ophthalmology Society of Chinese MedicalAssociation.Guidelines for clinical diagnosis and treatment of dia-betic retinopathy in China[J].Chinese Journal of Ophthalmology,2014,50(11):854—861.(In Chinese)[3]AGURTO C,YU H,MURRAY V.Detection of neovascularization in the optic disc using an AM-FM representation granulometry andvessel segmentation[C]//Annual International Conference of theIEEE Engineering in Medicine and Biology Society.Piscataway,U-nited States:IEEE Press,2012:4946—4949.[4]GOATMAN K A,FLEMING A D,PHILIP S.Detection of new ves-sels on the optic disc using retinal photographs[J].IEEE Transac-tions on Medical Imaging,2011,30(4):972—979.[5]HASSAN S S A,BONG D B L,PREMSENTHIL M.Detection of neo-vascularization in diabetic retinopathy[J].Journal of Digital Imag-ing,2012,25(3):437—444.[6]MARYAM V.A feasibility study on detection of neovascularization in retinal color images using texture[J].IEEE Journal of Biomedi-cal and Health 
Informatics,2014,21(1):184—192.[7]WELIKALA R A,DEHMESHKI J,HOPPE A,et al.Automated de-tection of proliferative diabetic retinopathy using a modified line op-erator and dual classification[J].Compute Methods ProgramsBiomed,2014,114(3):247—261.[8]GUPTA G,KULASEKARAN S,RAM K,et al.Local characteriza-tion of neovascularization and identification of proliferative diabeticretinopathy in retinal fundus images[J].Computerized MedicalImaging and Graphics,2017,55(S C):124—132.[9]PUJITHA A K,JAHNAVI G S,SIVASWAMY J.Detection of neo-vascularization in retinal images using semi-supervised learning[C]//International Symposium on Biomedical Imaging.Beijing,China:ISBI,2017:688—691.[10]YU S,XIAO D,KANAGASINGAM Y.Machine learning based au-tomatic neovascularization detection on optic disc region[J].IEEEJournal of Biomedical and Health Informatics,2018,22(3),886—894.[11]RONNEBERGER O,FISCHER P,BROX T.U-net:Convolutional networks for biomedical image segmentation[C]//International Con-ference on Medical image computing and computer-assisted inter-vention.Berlin,Germany:Springer,Cham,2015:234—241.[12]SONG J,LEE B.Development of automatic retinal vessel segmenta-tion method in fundus images via convolutional neural networks[C]//2017Annual International Conference of the IEEE Engineering inMedicine and Biology Society.Piscataway,United States:IEEEPress,2017:681—684.[13]HE K,ZHANG X,REN S,et al.Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on ComputerVision and Pattern Recognition.Piscataway,United States:IEEEPress,2016:770—778.[14]ZHAO H,SHI J,QI X,et al.Pyramid scene parsing network[C]// Proceedings of the IEEE Conference on Computer Vision and Pat-tern Recognition.Piscataway,United States:IEEE Press,2017:2881—2890.[15]CHEN L C,PAPANDREOU G,KOKKINOS I,et al.Deeplab:Se-mantic image segmentation with deep convolutional nets,atrousconvolution,and fully connected CRFs[J].IEEE Transactions onPattern Analysis and Machine 
Intelligence,2017,40(4):834—848.邹北骥等:一种基于改进U形网络的眼底图像视网膜新生血管检测方法第4期25。

深度学习探讨视网膜成像在阿尔茨海默病管理中的应用


1. 内容综述
深度学习是一种模拟人脑神经网络的机器学习方法,近年来在计算机视觉领域取得了显著的成果。

视网膜成像技术是一种用于捕捉眼底影像的方法,可以为阿尔茨海默病(Alzheimer's disease)的研究和诊断提供重要的数据。

本文将探讨如何利用深度学习技术对视网膜成像数据进行分析,以期为阿尔茨海默病的管理提供新的思路和方法。

本文将介绍视网膜成像技术的基本原理和应用背景,视网膜成像技术通过使用高分辨率摄像头捕捉眼底影像,可以清晰地显示视网膜的结构和功能。

这些影像对于研究阿尔茨海默病的病理变化具有重要意义,因为阿尔茨海默病患者的视网膜会出现一系列异常表现,如色素沉着、血管纤维化等。

通过对这些特征的分析,可以帮助研究人员更准确地诊断阿尔茨海默病,并评估病情的严重程度。

本文将介绍深度学习在视网膜成像数据分析中的应用,已有多种深度学习模型被应用于视网膜成像数据的处理和分析,如卷积神经网络(CNN)、循环神经网络(RNN)等。

这些模型可以在不同层次上对眼底影像进行特征提取和表示,从而实现对视网膜病变的自动识别和分类。

深度学习还可以利用大量的标注数据进行模型训练和优化,提高诊断的准确性和鲁棒性。

本文将讨论深度学习在阿尔茨海默病管理中的应用前景,通过对视网膜成像数据的深度分析,可以帮助医生更早地发现患者的视网膜病变,从而实现对阿尔茨海默病早期干预和治疗。

深度学习还可以辅助医生制定个性化的治疗方案,提高治疗效果。

深度学习在阿尔茨海默病管理中的应用仍面临一些挑战,如数据稀缺、模型解释性不足等。

未来的研究需要进一步探索这些问题,以期为阿尔茨海默病的管理和治疗提供更多有效的手段。

1.1 研究背景
阿尔茨海默病(Alzheimer's disease,AD)是一种常见的神经退行性疾病,严重影响患者的生活质量和家庭的稳定。

随着全球人口老龄化的加剧,AD的发病率和患病率逐年上升,给社会和家庭带来了巨大的负担。

改进Retinex-Net的低光照图像增强算法


改进Retinex⁃Net的低光照图像增强算法
欧嘉敏1 胡晓1 杨佳信1
摘 要 针对Retinex⁃Net存在噪声较大、颜色失真的问题,基于Retinex⁃Net的分解-增强架构,文中提出改进Retinex⁃Net的低光照图像增强算法.首先,设计由浅层上下采样结构组成的分解网络,将输入图像分解为反射分量与光照分量,在此过程加入去噪损失,抑制分解过程产生的噪声.然后,在增强网络中引入注意力机制模块和颜色损失,旨在增强光照分量亮度的同时减少颜色失真.最后,反射分量和增强后的光照分量融合成正常光照图像输出.实验表明,文中算法在有效提升图像亮度的同时降低增强图像噪声.
关键词 低光照图像增强,深度网络,视网膜大脑皮层网络(Retinex⁃Net),浅层上下采样结构,注意机制模块
引用格式 欧嘉敏,胡晓,杨佳信.改进Retinex⁃Net的低光照图像增强算法.模式识别与人工智能,2021,34(1):77-86.
DOI 10.16451/ki.issn1003⁃6059.202101008  中图法分类号 TP391.4

Low⁃Light Image Enhancement Algorithm Based on Improved Retinex⁃Net
OU Jiamin1, HU Xiao1, YANG Jiaxin1
ABSTRACT Aiming at the problems of high noise and color distortion in the Retinex⁃Net algorithm, a low⁃light image enhancement algorithm based on improved Retinex⁃Net is proposed, grounded on the decomposition⁃enhancement framework of Retinex⁃Net. Firstly, a decomposition network composed of a shallow upper and lower sampling structure is designed to decompose the input image into reflection component and illumination component. In this process, the denoising loss is added to suppress the noise generated during the decomposition process. Secondly, the attention mechanism module and color loss are introduced into the enhancement network to enhance the brightness of the illumination component and meanwhile reduce the image color distortion. Finally, the reflection component and the enhanced illumination component are fused into the normal illumination image to output. The experimental results show that the proposed algorithm improves the image brightness effectively with the noise of enhanced image reduced.
Key Words Low⁃Light Image Enhancement, Deep Network, Retinal Cortex Theory⁃Net (Retinex⁃Net), Shallow Upper and Lower Sampling Structure, Attention Mechanism Module
Citation OU J M, HU X, YANG J X. Low⁃Light Image Enhancement Algorithm Based on Improved Retinex⁃Net. Pattern Recognition and Artificial Intelligence, 2021, 34(1): 77-86.

收稿日期:2020-05-12;录用日期:2020-09-17 Manuscript received May 12, 2020; accepted September 17, 2020
国家自然科学基金项目(No.62076075)资助 Supported by
National Natural Science Foundation of China(No. 62076075)本文责任编委黄华Recommended by Associate Editor HUANG Hua1.广州大学电子与通信工程学院 广州5100061.School of Electronics and Communication Engineering,Guang⁃zhou University,Guangzhou510006 由于低光照环境和有限的摄像设备,图像存在亮度较低㊁对比度较低㊁噪声较大㊁颜色失真等问题,不仅会影响图像的美学㊁人类的视觉感受,还会降低运用正常光照图像的高级视觉任务的性能[1-3].为了有效改善低光图像质量,学者们提出许多低光照图像增强算法,经历灰度变换[4-5]㊁视网膜皮层理论[6-11]和深度神经网络[12-19]三个阶段.早期,通过直方图均衡[4-5]㊁伽马校正等灰度变换方法对低亮度区域进行灰度拉抻,可达到提高暗区亮度的目的.然而,因为未考虑像素与其邻域像素的关系,灰度变换常会导致增强图像缺乏真实感.第34卷 第1期模式识别与人工智能Vol.34 No.1 2021年1月Pattern Recognition and Artificial Intelligence Jan. 2021Land [6]提出视网膜皮层理论(Retinal CortexTheory,Retinex).该理论认为物体颜色与光照强度无关,即物体具有颜色恒常性.基于该理论,相继出现经典的单尺度视网膜增强算法(Single Scale Retinex,SSR)[7]和色彩恢复的多尺度视网膜增强算法(Multi⁃scale Retinex with Color Restoration,MSR⁃CR)[8].算法主要思想是利用高斯滤波器获取低光照图像的光照分量,再通过像素间逐点操作求得反射分量作为增强结果.Wang 等[9]利用亮通滤波器和对数变换平衡图像亮度和自然性,使增强后的图像趋于自然.Fu 等[10]设计用于同时估计反射和光照分量(Simultaneous Reflectance and Illumination Esti⁃mation,SRIE)的加权变分模型,可有效处理暗区过度增强的问题.Guo 等[11]提出仅估计光照分量的低光照图像增强算法(Low Light Image Enhancementvia Illumination Map Estimation,LIME),主要利用局部一致性和结构感知约束条件计算图像的反射分量并作为输出结果.然而,这些基于Retinex 理论模型的算法虽然可调整低光图像亮度,但增亮程度有限.研究者发现卷积神经网络(Convolutional NeuralNetwork,CNN)[12]与Retinex 理论结合能进一步提高增强图像的视觉效果,自动学习图像的特征,解决Retinex 依赖手工设置参数的问题.Lore 等[13]提出深度自编码自然低光图像增强算法(A Deep Auto⁃encoder Approach to Natural Low Light Image Enhan⁃cement,LLNet),有效完成低光增强任务.Lü等[14]提出多边低光增强网络(Multi⁃branch Low⁃Light En⁃hancement Network,MBLLEN),学习低光图像到正常光照图像的映射.Zhang 等[15]结合最大信息熵和Retinex 理论,提出自监督的光照增强网络.Wei 等[16]基于图像分解思想,设计视网膜大脑皮层网络(Retinex⁃Net),利用分解-增强架构调整图像亮度.Zhang 等[17]基于Retinex⁃Net 设计低光增强器.然而,由于噪声与光照水平有关,Retinex⁃Net 提取反射分量后,图像暗区噪声高于亮区.因此,Retinex⁃Net 的增强结果存在噪声较大㊁颜色失真的问题,不利于图像质量的提升.为此本文提出改进Retinex⁃Net 的低光照图像增强算法.以Retinex⁃Net 的分解与增强框架为基础,针对噪声问题,在分解网络采用浅层上下采样结构[15],利用反射分量梯度项[15]作为损失.同时为了改善增强图像的色彩偏差,保留丰富的细节信息,在增强网络中嵌入注意力机制模块[18]和颜色损失[19].实验表明,本文算法在LOL 数据集和其它公开数据集上取得较优的视觉效果和客观结果.1 改进Retinex⁃Net 的低光照图像增强算法改进Retinex⁃Net 的低光照图像增强算法框图如图1所示.S lowS normalR highI highI lowR lowI end图1 本文算法框图Fig.1 Flowchart 
of the proposed algorithm87模式识别与人工智能(PR&AI) 第34卷 Retinex理论[6]认为彩色图像可分解为反射分量和光照分量:S=R I.(1)其中: 表示逐像素相乘操作;S表示彩色图像,可以是任何具有不同曝光程度的图像;R表示反射分量,反映物体内在固有属性,与外界光照无关;I表示光照分量,不同曝光度的物体光照分量不同.本文算法主要利用2个相互独立训练的子网络,分别是分解网络与增强网络.具体地说,首先,分解网络以数据驱动方式学习,将低光照图像和与之配对的正常光照图像分解为相应的反射分量(R low, R normal)和光照分量(I low,I normal).然后,增强网络以低光图像的光照分量I low作为输入,在结构感知约束下,提升光照分量的亮度.最后,重新组合增强的光照分量I en与反射分量R low,形成增强图像S en,作为网络输出.1.1 分解网络的浅层上下采样结构由于式(1)是一个不适定问题[20],很难设计适用于多场景的约束函数.本文算法以数据驱动的方式进行学习,不仅能解决该问题,还能进一步提高网络的泛化能力.如图1所示,在训练阶段,分解网络以低光照图像S low和与之对应的正常光照图像S normal 作为输入,在约束条件下学习输出它们一致的反射分量R low和R normal,及不同的光照分量I low和I normal.值得注意的是,S low与S normal共享分解网络的参数.区别于常用的深度U型网络(U⁃Net)结构及Retinex⁃Net简单的堆叠卷积层,本文算法的分解网络是一个浅层的上下采样结构,由卷积层与通道级联操作组成,采样层只有4层,网络训练更简单.实验表明,运用此上下采样结构变换图像尺度时,下采样操作一定程度上舍去含有噪声的像素点,达到降噪效果的目的,但同时会引起图像的模糊.因此为了提高分解图像清晰度,减少语义特征丢失,在图像上采样后应用通道数级联操作,可给图像补偿下采样丢失的细节信息,增强清晰度.在浅层上下采样结构中,首先,使用1个9×9的卷积层提取输入图像S low的特征.然后,采用5层以ReLU作为激活函数的卷积层变换图像尺度,学习反射分量与光照分量的特征.最后,分别利用2层卷积层及Sigmoid函数,将学习到的特征映射成反射图R low和光照图I low后再输出.对于分解网络的约束损失,本文算法沿用Retinex⁃Net的重构损失l rcon㊁不变反射率损失l R及光照平滑损失l I.另外为了在分解网络中更进一步减小噪声,添加去噪损失l d.因此总损失如下:l=l rcon+λ1l R+λ2l I+λ3l d,其中,λ1㊁λ2㊁λ3为权重系数,用于平衡各损失分量.对于L1㊁L2范数和结构相似性(Structural Similarity, SSIM)损失的选择,当涉及图像质量任务时,L2范数与人类视觉对图像质量的感知没有很好的相关性,在训练中容易陷入局部最小值,而SSIM虽然能较好地学习图像结构特征,但对平滑区域的误差敏感度较低,引起颜色偏差[21].因此本文算法使用L1范数约束所有损失.在分解网络中输出的结果R low和R normal都可与光照图重构成新的图像,则重构损失如下:l rcon=∑i=low,normalW1Rlow I i-S i1+∑j=low,normal W2R normal I j-S j1,其中 表示逐像素相乘操作.当i为low或j为normal 时,权重系数W1=W2=1,否则W1=W2=0.001.对于配对的图像,使用较大的权重能够使分解网络更好地学习配对图像的特征.对于配对的图像对,使用较大的权重可使分解网络更好地学习配对图像的特征.不变反射率损失l R是基于Retinex理论的颜色恒常性,在分解网络中主要用于约束学习不同光照图像的一致反射率:l R=Rlow-R normal1.对于光照平滑损失l I,本文采用结构感知平滑损失[16].该损失以反射分量梯度项作为权重,在图像梯度变化较大的区域,光照变得不连续,从而亮度平滑的光照图能保留图像结构信息,则l I=ΔIlow exp(-λgΔR low)1+ΔI normal exp(-λgΔR normal)1,其中,Δ表示图像水平和垂直梯度和,λg表示平衡系数.Rudin等[22]观察到,噪声图像的总变分(Total Variation,TV)大于无噪图像,通过限制TV可降低图像噪声.然而在图像增强中,限制TV相当于最小化梯度项.受TV最小化理论[22-23]启发,本文引入反射分量的梯度项作为损失,用于控制反射图像噪声,故称为去噪损失:l 
d=λΔRlow1.当λ值增加时,噪声减小,同时图像会模糊.因此对于权重参数的选择十分重要,经过实验研究发现,当权重λ=0.001时,图像获得较好的视觉效果.1.2 增强网络的注意力机制如图1所示,增强网络以分解网络的输出I low作97第1期 欧嘉敏 等:改进Retinex⁃Net的低光照图像增强算法为输入,学习增强I low的亮度,将增强结果I en与分解网络另一输出R low重新结合为增强图像S en后输出.在增强网络中,I low经过多个下采样块生成较小尺度图像,使增强网络有较大尺度角度分配光照,从而具有调节亮度的能力.网络采用上采样方式重构局部光照,对亮的区域分配较低亮度,对较暗的区域调整较高亮度.此外,将上采样层的输出进行通道数的级联,在调整不同局部光照的同时,保持全局光照一致性.而且跳过连接是从下采样块引入相应的上采样块,通过元素求和,强制网络学习残差.针对Retinex⁃Net出现的颜色失真问题,在增强网络中嵌入注意力机制模块.值得注意的是,与其它复杂的注意力模块不同,注意力机制模块由简单卷积层和激活操作组成,不要求强大的硬件设备,也不需要训练多个模型和大量额外参数.在光照调整过程中,可减少对无关背景的特征响应,只激活感兴趣的特征,提高算法对图像细节的处理能力和对像素的敏感性,指导网络既调整图像亮度又保留图像结构.由图1可见,注意力模块的输入是图像特征αi㊁βi,输出为图像特征γi,i=1,2,3,表示注意力机制模块的序号.αi为下采样层输出的图像特征,βi为上采样层的输出特征.这2个图像特征分别携带不同的亮度信息,两者经过注意力模块后,降低亮度无关特征(如噪声)的响应,使输出特征γi携带更多亮度信息被输入到下一上采样层,提高网络对亮度特征的学习能力.αi与重建尺度后的βi分别经过一个独立的1×1卷积层,在ReLU激活之前进行加性操作.依次经过1×1卷积层㊁Sigmoid函数,最后与βi通过逐元素相乘后将结果与αi进行通道级联.在此传播过程中,注意机制可融合不同尺度图像信息,同时减少无关特征的响应,增强网络调整亮度能力.独立于分解网络的约束损失,增强网络调整光照程度是基于局部一致性和结构感知[16]的假设.本文算法除了沿用Retinex⁃Net中约束增强网络的损失外,在实验中,针对Retinex⁃Net出现的色彩偏差,增加颜色损失[19],因此增强网络损失:L=L rcon+L I+μL c,其中,L rcon为增强图像的重构损失,L rcon=Snormal-R low I en1,L I表示结构感知平滑损失,L c表示本文的颜色损失,μ表示平衡系数.L rcon定义表示增强后的图像与其对应的正常光照图像的距离项,结构感知平滑损失L I 与分解网络的平滑损失类似,不同的是,在增强网络中,I en以R low的梯度作为权重系数:L I=ΔIen exp(-λgΔR low)1.此外,本文添加颜色损失L c,衡量增强图像与正常光照图像的颜色差异.先对2幅图像采用高斯模糊,滤除图像的纹理㊁结构等高频信息,留下颜色㊁亮度等低频部分.再计算模糊后图像的均方误差.模糊操作可使网络在限制纹理细节干扰情况下,更准确地衡量图像颜色差异,进一步学习颜色补偿.颜色损失为L c=F(Sen)-F(S normal)21.其中:F(x)表示高斯模糊操作,x表示待模糊的图像.该操作可理解为图像每个像素以正态分布权重取邻域像素的平均值,从而达到模糊的效果,S en为增强图像,S normal为对应的正常光照图像,F(x(i,j))=∑k,lx(i+k,j+l)G(k,l),G(k,l)表示服从正态分布的权重系数.在卷积网络中G(k,l)相当于固定大小的卷积核,G(k,l)=0.æèçç053exp k2-l2öø÷÷6.2 实验及结果分析2.1 实验环境本文算法采用LOL训练集[16]和合成数据集[16]训练网络.测试集选取LOL的评估集㊁DICM数据集㊁MEF数据集.在训练过程中,网络采用图像对训练,批量化大小(Batch Size)设为32,块大小(Patch Size)设为48×48.分解网络的损失平衡系数λ1=0.001,λ2=0.1,λ3=0.001.增强网络的平衡系数μ=0.01,λg=10.本文采用自适应矩估计优化器(Adaptive Moment Estima⁃tion,Adam).网络的训练和测试实验均在Nvidia GTX2080GPU设备上完成,实现代码基于TensorFlow框架.为了验证本文算法的性能及效果,采用如下对比算法:Retinex⁃Net,SRIE[10],LIME[11],MBLLEN[14]㊁文献[15]算法㊁全局光照感知和细节保持网络(Global Illumination⁃Aware and 
Detail⁃Preserving Net⁃work,GLADNet)[24]㊁无成对监督深度亮度增强(Deep Light Enhancement without Paired Supervision, EnlightenGAN)[25].在实验过程中,均采用原文献提供的模型或源代码对图像进行测试.采用如下客观评估指标:峰值信噪比(Peak Signal to Noise Ratio,PSNR)㊁结构相似性(Structural08模式识别与人工智能(PR&AI) 第34卷Similarity,SSIM)[26]㊁自然图像质量评估(NaturalQuality Evaluator,NIQE)[27]㊁通用图像质量评估(Universal Quality Index,UQI)[28]㊁基于感知的图像质量评估(Perception⁃Based Image Quality Evaluator,PIQE)[29].SSIM㊁PSNR㊁UQI 值越高,表示增强结果图质量越优.相反,PIQE㊁NIQE 值越高,表示图像质量越差.2.2 消融性实验为了进一步验证本文算法各模块的有效性,以Retinex⁃Net 为基础设计消融性实验,利用PSNR 衡量噪声水平,采用SSIM 从亮度㊁对比度㊁结构评估图像综合质量.实验结果如表1所示,表中S⁃ULS 表示浅层上下采样结构,l d 表示去噪损失.Enhan _I low 表示增强网络输入仅为光照分量,AMM 表示注意力机制模块,L c 表示颜色损失.参数微调1表示增强网络的平滑损失系数由原Retinex⁃Net 的3设为1;参数微调2是增强网络的平滑损失系数为1,批量化大小由16设为32.表1 各改进模块及损失的消融性实验结果Table 1 Ablation experiment results of improved modules and loss序号基础框架改进方法PSNRSSIM1-Retinex⁃Net 16.7740.55923Retinex⁃Net 添加S⁃ULS,不添加l d 添加S⁃ULS,添加l d17.45217.4940.6890.699456Retinex⁃Net+S⁃ULS+l d添加Enhan _I low ,不添加AMM,不添加L c 添加Enhan _I low ,添加AMM,不添加L c 添加Enhan _I low ,添加AMM,添加L c17.89718.00218.0910.7030.7080.70478Retinex⁃Net+S⁃ULS+l d +AMM+L c参数微调1参数微调218.27218.5290.7190.720 表1中序号2给出以Retinex⁃Net 为基础,采用浅层上下采样结构作为分解网络的结果.相比Retinex⁃Net,PSNR 值显著提高,表明此结构可抑制由图像分解带来的噪声.在此基础上添加去噪损失,进一步降低噪声,见序号3.由此验证浅层上下采样结构与去噪损失的有效性.在本文算法中,由于采用两步训练的方式,即先训练分解网络后训练增强网络,因此在验证浅层上下采样结构和去噪损失的有效性后,以此为基础评估增强网络引入的注意力机制模块和颜色损失的有效性.在Retinex⁃Net 中增强网络的输入为反射分量与光照分量通道级联后的结果.该设置一定程度上会导致反射分量丢失图像结构和细节,同时影响光照分量的亮度提升.为此,先设置序号4的实验验证上述分析.由结果可见:PSNR㊁SSIM 值大幅提高,证明此分析的正确性,表明本文算法的增强网络仅以光照分量作为输入的有效性.另外,从序号5结果看出,利用注意力模块后,图像噪声显著降低,这归功于注意力模块可减少对图像无关特征的响应,集中注意力学习亮度特征,从而降低图像噪声水平.在颜色损失的消融性实验中,尽管客观数值上没有直观体现颜色的恢复,但根据图2和图3可知,该损失是有效的.为了使各模块更好地发挥优势,本文算法对参数进行微调.从序号7㊁序号8的实验结果可见,微调参数后本文算法各模块作用进一步体现,取得更优结果. 
(a)输入图像 (b)参考图像 (a)Input image (b)Ground truth18第1期 欧嘉敏 等:改进Retinex⁃Net 的低光照图像增强算法 (c)SRIE (d)LIME (e)GLADNet (f)MBLLEN (g)EnlightenGAN (h)Retinex⁃Net (i)文献[15]算法 (j)本文算法 (i)Algorithm in reference[15] (j)The proposed algorithm图2 各算法在LOL数据集上的视觉效果Fig.2 Visual results of different algorithms on LOLdatasetA B C D(a)输入图像(a)Inputimages(b)LIME(c)GLADNet28模式识别与人工智能(PR&AI) 第34卷(d)MBLLEN(e)EnlightenGAN(f)Retinex⁃Net(g)文献[15]算法(g)Algorithm in reference[15](h)本文算法(h)The proposed algorithm图3 各算法在DICM㊁MEF数据集上的视觉效果Fig.3 Visual results of different algorithms on DICM and MEF datasets38第1期 欧嘉敏 等:改进Retinex⁃Net的低光照图像增强算法2.3 对比实验各算法在3个数据集上的客观评估结果如表2所示,表中黑体数字表示最优结果,斜体数字表示次优结果.在LOL 数据集上,SSIM 可从亮度㊁对比度㊁结构度量2幅图像相似度,与人类视觉系统(Human Vision System,HVS)具有较高相关性[21,30],可较全面体现图像质量.从表2可见,在SSIM㊁UQI 指标上,本文算法取得最高数值,表明低光照图像经本文算法增强后图像质量得到明显提升.从图2和图3发现,本文算法也提升视觉效果的表现力.由表2的LOL 数据集上结果可知,在PSNR 指标上,本文算法总体上优于先进算法.根据文献[21]㊁文献[30]和文献[31]的研究,PSNR 指标因为容易计算,被广泛用于评估图像,但其计算是基于误差敏感度,在评估中常出现与人类感知系统不一致的现象,因此与图像主观效果结合分析能更好地体现图像质量.结合图2和图3的分析,GLADNet 的增强图像饱和度较低,存在颜色失真现象.文献[15]算法使图像过曝光.本文算法在Retinex⁃Net 基础上显著降低图像噪声,保留图像丰富结构信息,相比其它方法,视觉效果更佳,符合人类的视觉感知系统.在LOL 数据集上,对比大部分算法,本文取得与参考图像相近的NIQE 数值,表明本文算法的增强结果更接近参考图像.DICM㊁MEF 数据集没有正常光照图作为参照.本文只采用盲图像质量评估指标(NIQE㊁PIQE)评估各算法.在PIQE 指标上,本文算法取得最优值.对于NIQE,虽然未取得较好优势,但相比Retinex⁃Net,本文算法取得更好的增强结果.综上所述,虽然本文算法未在所有指标上取得最优结果,但仍有较高优势.在与人类视觉感知系统有较好相关性的SSIM 指标及噪声抑制和避免过曝能力上,本文算法最优.表2 各算法在3个数据集上的客观评估结果Table 2 Objective evaluation results of different algorithms on 3datasets 算法LOL 数据集SSIMUQI PSNRNIQEDICM 数据集PIQE NIQEMEF 数据集PIQE NIQESRIE0.4980.48211.8557.28716.95 3.89810.70 3.474LIME 0.6010.78916.8348.37815.60 3.8319.12 3.716GLADNet 0.7030.87919.718 6.47514.85 3.6817.96 3.360MBLLEN0.7040.82517.5633.58412.293.27012.043.322EnlightenGAN 0.6580.80817.483 4.68414.613.5627.863.221文献[15]算法0.7120.86019.150 4.79316.21 4.71811.78 4.361Retinex⁃Net 0.5590.87916.7749.73014.16 4.41511.90 4.480本文算法0.7200.88018.5294.49010.11 3.9607.77 3.820参考图像1--4.253---- 由图2可见,SRIE㊁LIME㊁EnlightenGAN 的增亮程度有限,增强结果偏暗.GLADNet㊁MBLLEN 
改变图像饱和度,降低图像视觉效果.相比Retinex⁃Net,本文算法的增强结果图噪声水平较低,可保持图像原有的色彩.由图3可见,SRIE 增强程度远不足人类视觉需求,在图3的图像A 中,未展示其增强结果.在图3中,根据人眼视觉系统,首先能判断LIME㊁GLADNet㊁MBLLEN㊁EnlightenGAN 对人脸的增亮程度仍不足,GLADNet㊁MBLLEN 分别存在饱和度过低和过高现象.而文献[15]算法㊁Retinex⁃Net㊁本文算法能较好观察到人脸细节,但同时从左下角细节图可见,Retinex⁃Net 人脸的边缘轮廓对比度过强.从颜色上进一步分析可知,文献[15]算法亮度过度增强,导致图像颜色失真,如天空的颜色对比输入图像色调偏白㊁曝光难以观察远处的景物等.同样观察图3中图像B 右下角细节,经过分析可得,文献[15]算法㊁Retinex⁃Net㊁本文算法增强程度满足人类视觉需求,但文献[15]算法过曝光,从Retinex⁃Net 细节图可见人物服饰失真.对比其它算法,本文算法结果的亮度适中,图像具有丰富细节,避免光照伪影与过曝光现象.另外,从图3中图像C㊁D 可见,LIME㊁GLAD⁃Net㊁MBLLEN㊁EnlightenGAN 没能较好处理局部暗48模式识别与人工智能(PR&AI) 第34卷区,如笔记本㊁右下角拱门窗户区域仍未增亮,导致难以识别边缘细节.而文献[15]算法饱和度发生变化.Retinex⁃Net存在伪影和噪声等问题.本文算法不仅可增强图像的亮度,减小噪声,还保留图像的细节,图像效果更有利于高级视觉系统的识别或检测.综合上述分析发现,本文算法对低光照图像增强效果更优.3 结束语针对Retinex⁃Net噪声较大㊁颜色失真问题,本文提出改进Retinex⁃Net的低光照图像增强算法.算法在分解网络采用浅层上下采样结构及去噪损失,在增强网络嵌入注意力机制模块和颜色损失.实验表明,本文算法不仅能增强图像亮度,而且能显著降低噪声,并取得较优结果.本文算法较好地处理亮度增强过程中无法避免的噪声问题,兼顾提升亮度和降低噪声任务,可给未来研究图像多属性增强提供思路,如低光增强㊁去噪㊁颜色恢复㊁去模糊等多任务同步进行.今后研究重心将是实现图像多属性同步增强.同时扩展该研究网络结构到其它高级视觉任务中,作为图像预处理模块,期望实现网络的端到端训练.参考文献[1]HU X,MA P R,MAI Z H,et al.Face Hallucination from Low Quality Images Using Definition⁃Scalable Inference.Pattern Reco⁃gnition,2019,94:110-121.[2]陈琴,朱磊,后云龙,等.基于深度中心邻域金字塔结构的显著目标检测.模式识别与人工智能,2020,33(6):496-506. (CHEN Q,ZHU L,HOU Y L,et al.Salient Object Detection Based on Deep Center⁃Surround Pyramid.Pattern Recognition and Artificial Intelligence,2020,33(6):496-506.)[3]杨兴明,范楼苗.基于区域特征融合网络的群组行为识别.模式识别与人工智能,2019,32(12):1116-1121. (YANG X M,FAN L M.Group Activity Recognition Based on Re⁃gional Feature Fusion Network.Pattern Recognition and Artificial Intelligence,2019,32(12):1116-1121.)[4]CHENG H D,SHI X J.A Simple and Effective Histogram Equaliza⁃tion Approach to Image Enhancement.Digital Signal Processing, 2004,14(2):158-170.[5]ABDULLAH⁃AI⁃WADUD M,KABIR M H,DEWAN M A A,et al.A Dynamic Histogram Equalization for Image Contrast Enhancement. 
IEEE Transactions on Consumer Electronics,2007,53(2):593-600.[6]LAND E H.The Retinex Theory of Color Vision.Scientific Ameri⁃can,1977,237(6):108-128.[7]JOBSON D J,RAHMAN Z,WOODELL G A.Properties and Per⁃formance of a Center/Surround Retinex.IEEE Transactions on Im⁃age Processing,1997,6(3):451-462.[8]JOBSON D J,RAHMAN Z,WOODELL G A.A Multiscale Retinex for Bridging the Gap between Color Images and the Human Observa⁃tion of Scenes.IEEE Transactions on Image Processing,1997, 6(7):965-976.[9]WANG S H,ZHENG J,HU H M,et al.Naturalness Preserved En⁃hancement Algorithm for Non⁃uniform Illumination Images.IEEE Transactions on Image Processing,2013,22(9):3538-3548.[10]FU X Y,ZENG D L,HUANG Y,et al.A Weighted VariationalModel for Simultaneous Reflectance and Illumination Estimation// Proc of the IEEE Conference on Computer Vision and Pattern Re⁃cognition.Washington,USA:IEEE,2016:2782-2790. [11]GUO X J,LI Y,LING H B.LIME:Low⁃Light Image Enhance⁃ment via Illumination Map Estimation.IEEE Transactions on Image Processing,2017,26(2):982-993.[12]FUKUSHIMA K.Neocognitron:A Self⁃organizing Neural NetworkModel for a Mechanism of Pattern Recognition Unaffected by Shift in Position.Biological Cybernetics,1980,36:193-202. [13]LORE K G,AKINTAYO A,SARKAR S.LLNet:A Deep Autoen⁃coder Approach to Natural Low⁃Light Image Enhancement.Pattern Recognition,2017,61:650-662.[14]LÜF F,LU F,WU J H,LIM C S.MBLLEN:Low⁃Light Image/Video Enhancement Using CNNs[C/OL].[2020-05-11].http:// /bmvc/2018/contents/papers/0700.pdf. 
[15]ZHANG Y,DI X G,ZHANG B,et al.Self⁃supervised Image En⁃hancement Network:Training with Low Light Images Only[C/ OL].[2020-05-11].https:///pdf/2002.11300.pdf.[16]WEI C,WANG W J,YANG W H,et al.Deep Retinex Decom⁃position for Low⁃Light Enhancement[C/OL].[2020-05-11].https: ///pdf/1808.04560.pdf.[17]ZHANG Y H,ZHANG J W,GUO X J.Kindling the Darkness:APractical Low⁃Light Image Enhancer//Proc of the27th ACM In⁃ternational Conference on Multimedia.New York,USA:ACM, 2019:1632-1640.[18]AI S,KWON J.Extreme Low⁃Light Image Enhancement for Sur⁃veillance Cameras Using Attention U⁃Net.Sensors,2020,20(2): 495-505.[19]IGNATOV A,KOBYSHEV N,TIMOFTE R,et al.DSLR⁃QualityPhotos on Mobile Devices with Deep Convolutional Networks// Proc of the IEEE International Conference on Computer Vision.Washington,USA:IEEE,2017:3297-3305.[20]TIKHONOV A N,ARSENIN V Y.Solutions of Ill⁃Posed Pro⁃blems.SIAM Review,1979,21(2):266-267. [21]ZHAO H,GALLO O,FROSIO I,et al.Loss Functions for ImageRestoration with Neural Networks.IEEE Transactions on Computa⁃58第1期 欧嘉敏 等:改进Retinex⁃Net的低光照图像增强算法。

基于Andriod移动设备嵌入式机器视觉的人脸识别毕业设计


二○ 一三届毕业设计基于Andriod移动设备嵌入式机器视觉的人脸识别系统设计学院:专业:姓名:学号:指导教师:完成时间:2013年6月16日二〇一三年七月毕业设计报告纸摘要人脸识别是在图像或视频流中进行人脸的检测和定位,其中包括人脸在图像或视频流中的所在位置、大小、形态、个数等信息,近年来由于计算机运算速度的飞速发展使得图像处理技术在许多领域得到了广泛应用,其中包含智能监控、安全交易、更安全更友好的人机交互等。

如今在许多公司或研究所已经作为一门独立的课题来研究探索。

近年来,随着移动互联网的发展,智能手机平台获得了长足的发展。

然而,手机钱包、手机远程支付等新应用的出现使得手机平台的安全性亟待加强。

传统的密码认证存在易丢失、易被篡改等缺点,人脸识别不容易模仿、篡改和丢失,因而适用于手机安全领域中的应用。

本论文在分析国内外人脸识别研究成果的基础上,由摄像头采集得到人脸图像,在高性能嵌入式系统平台上,采用JA VA高级语言进行编程,对检测得到的图像进行人脸检测、特征定位、人脸归一化、特征提取和特征识别。

在Android 平台上实现了基于图像的人脸识别功能。

关键词:Android,OpenCV,人脸识别,EclipseI毕业设计报告纸AbstractThe face recognition is to face detection and location in the image or video stream,including the location of the face in the image or video stream, the size,shape, and then number of information in recent years due to the rapid computing speedmakes thedevelopment of image processing technology has been widely applied in many fields, whichincludes intelligent monitoring, secure transactions, safer and more friendly andhuman-computer interaction. Today, as a separate subject many companies or research are tostudy and explore.In recent years,smart phone platforms achieve rapid development according toprosperous of 3G wireless technology.The applications,like mobile payment,remotetransaction,make our life easier but bring more safety issues too.Traditional safetycertification uses password as authentication method.which is 1iable to falsificationand forgetfulness.Facial feature Call overcome the disadvantages brought bytraditional methods,So it is fit for safety applications on smart phone platform.Basedon the research results of the analysis of face recognition at home and abroad in this paper, We obtained the facial imagesobtained by the camera and then used Senior JA VA language toprogram for face detection, feature localization , face normalization, feature extraction and pattern recognition in in high-performance embedded systemplatform. 
It implemented the face recognition function based on images on the Android platform.The research contents in this paper are as follows: first introduced the current status of the face recognition technology andthe common face detection and face recognition methods briefly, and then focused on the Adaboost face detection algorithm and face recognition algorithm of matching people through LBP histogram.At last, it enabled the face recognition function of mobile devices by transplanting OpenCV and programing on the Android platformbased on these two face detection and face recognitionalgorithm.KEYWORDS: Android,OpenCV,face recognition,EclipseII毕业设计报告纸目录第一章绪论 (1)1.1 研究背景及意义 (1)1.2国内外研究现状 (2)1.31.4 论文结构安排 (5)1.5 本章小结 (5)第二章人脸检测和识别的算法选择 (6)2.1人脸识别的研究内容 (6)2.2 人脸检测 (6)2.2.1 基于知识的方法 (8)2.2.2 特征不变量方法 (9)2.2.3 模板匹配的方法 (9)2.2.4 基于表象的方法 (10)2.3 人脸识别 (11)2.4.1 基于几何特征的识别方法 (11)2.4.2 基于特征脸的识别方法 (11)2.4.3 基于神经网络的方法 (12)2.4.4 基于支持向量机的方法 (12)2.4 本章小结 (12)第三章AdaBoost算法和直方图匹配原理 (13)3.1 特征与特征值计算 (13)3.1.1 矩形特征 (13)3.1.2 积分图 (14)3.2 AdaBoost 分类器 (17)3.2.1 PAC 学习模型 (17)3.2.2 弱学习与强学习 (17)3.2.3 AdaBoost 算法 (18)3.2.4 弱分类器 (20)3.2.5 弱分类器的训练及选取 (22)3.2.6 强分类器 (23)III毕业设计报告纸3.2.7 级联分类器 (23)3.3人脸匹配原理(直方图匹配) (26)3.3.1直方图的均衡化 (26)3.3.2灰度变换 (27)3.4 本章小结 (28)第四章基于Andriod平台的人脸识别系统实现 (30)4.1 Android 系统平台 (30)4.2 开发环境搭建 (32)4.2.2 OpenCV 介绍 (32)4.2.3 OpenCV 编译移植 (33)4.3 整体设计 (34)4.4 应用软件设计 (34)第五章软件实现和测试 (36)5.1 软件实现 (36)5.1.1 软件实现过程 (36)5.1.2 建立UI界面 (36)5.1.3 JAV A平台程序开发 (37)5.1.4 JNI层函数接口 (38)5.1.5 编写脚5.2 软件测试 (39)5.2.1 实验环境 (39)5.2.2 实验结果 (39)5.3 人脸识别 (42)5.3.1 图片抓取 (42)5.3.2 实验结果 (44)第六章小结与展望 (46)6.1 总结 (46)6.2 展望 (46)致谢 (48)参考文献 (49)附录 (51)IV毕业设计报告纸第一章绪论1.1 研究背景及意义人脸识别是一种生物特征识别技术,也是模式识别、计算机视觉和图像处理领域的研究热点。

医学影像技术毕业设计范文


医学影像技术毕业设计范文英文回答:Title: Development and Evaluation of a Novel Deep Learning-Based Model for Automated Detection of Diabetic Retinopathy.Abstract:Diabetic retinopathy (DR) is a leading cause of blindness worldwide. Early detection and intervention can significantly improve patient outcomes. However, manual screening of retinal images for DR is time-consuming and subjective. In this study, we propose a novel deep learning-based model for automated detection of DR. Our model utilizes a convolutional neural network (CNN) architecture to extract high-level features from retinal images and classify them into normal or DR. The model was trained and evaluated using a large dataset of retinal images. The results show that the proposed model achieveshigh accuracy, sensitivity, and specificity for DR detection, outperforming traditional machine learning methods. The model has the potential to improve the efficiency and accuracy of DR screening, facilitating early detection and timely intervention.Introduction:Diabetic retinopathy (DR) is a common complication of diabetes that can lead to severe vision loss and blindness. It is caused by damage to the blood vessels in the retina, the light-sensitive tissue at the back of the eye. Early detection and treatment of DR are crucial for preventing vision loss. However, manual screening of retinal imagesfor DR is time-consuming and subjective, which can lead to missed diagnoses or delayed treatment.Methods:In this study, we propose a novel deep learning-based model for automated detection of DR. The model utilizes a convolutional neural network (CNN) architecture, which hasbeen successfully applied to image classification tasks. The CNN is trained on a large dataset of retinal imagesthat have been labeled as normal or DR. The model istrained to extract high-level features from the retinal images and classify them into the corresponding categories.Results:The proposed model was evaluated on a separate dataset of retinal images. 
The results show that the model achieves an accuracy of 95%, a sensitivity of 90%, and a specificity of 98% for DR detection. These results are significantly better than those of traditional machine learning methods, such as support vector machines and random forests.Discussion:The proposed model has the potential to improve the efficiency and accuracy of DR screening. The model can be used to automatically screen retinal images for DR, reducing the time and effort required for manual screening. The model can also help to improve the accuracy of DRdetection, reducing the number of missed diagnoses or delayed treatment.Conclusion:We have developed a novel deep learning-based model for automated detection of DR. The model achieves high accuracy, sensitivity, and specificity for DR detection,outperforming traditional machine learning methods. The model has the potential to improve the efficiency and accuracy of DR screening, facilitating early detection and timely intervention.中文回答:题目,基于深度学习的糖尿病视网膜病变自动检测模型开发与评估。
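The accuracy, sensitivity and specificity figures quoted above are standard confusion-matrix metrics. As an illustrative sketch only (not the thesis code, and the counts below are hypothetical), they can be computed as:

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate on DR cases
    specificity = tn / (tn + fp)   # true-negative rate on normal cases
    return accuracy, sensitivity, specificity
```

For DR screening, sensitivity measures how many true DR cases are caught, while specificity measures how many healthy eyes are correctly passed.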

基于人眼视觉特性的医学图像增强技术


目录第一章绪论 (6)1.1 研究背景及意义 (6)1.2 医学图像增强的国内外研究现状 (8)1.3医学图像增强的特点 (10)1.4增强算法的创新点 (11)1.5基于人眼视觉特性的医学图像增强技术主要内容 (11)第二章常用的医学图像增强方法 (13)2.1医学图像对比度增强 (13)2.1.1灰度映射 (14)2.1.2直方图均衡化 (18)2.1.3直方图规定化 (18)2.2医学图像去噪 (20)2.2.1 图像平滑概述 (20)2.2.2 几种图像平滑方法的原理 (21)2.4医学图像边缘增强 (22)2.4.1 图像锐化概述 (23)2.4.2 几种常见的图像锐化方法原理 (23)本章小结: (25)第三章人眼视觉综述 (26)3.1人眼视觉系统概述 (26)3.2 人眼视觉感知概述 (28)3.2.1 人眼视觉信息传递过程 (28)3.2.2人眼视觉的感受野 (28)3.2.3 人眼视觉系统的注意机制 (29)3.3人眼视觉特性总括 (30)3.4医学图像增强算法用到的人眼视觉特性 (32)3.4.1人眼的微动理论 (32)3.4.2人眼感兴趣区域规律 (32)3.4.3人眼的边缘敏感性 (32)3.4.4人眼对比灵敏度特性 (33)本章小结: (33)第四章基于人眼视觉特性的医学图像增强算法 (34)4.1基于人眼视觉特性的医学图像增强算法的框架 (34)4.2医学图像的预处理过程 (36)4.2.1 基于大津法的阈值二值化 (36)4.2.2计算最大面积闭合轮廓 (37)4.3基于人眼的对比灵敏度特性的灰度映射 (38)4.3.1灰度映射函数的构造 (38)4.3.2构造的灰度映射函数分析 (39)4.4分解滤波 (41)4.4.1 拉普拉斯算子 (41)4.4.2 分解滤波的原理 (41)4.4.3基于拉普拉斯算子基础上的分解滤波 (42)本章小结: (43)第五章增强算法的实现 (43)5.1基于人眼视觉特性的医学图像增强的算法流程 (43)5.2本文算法应用于医学图像实验 (45)本章小结: (51)第六章总结与展望 (52)参考文献: (55)致谢: (59)摘要随着科学技术,特别是电子技术和计算机技术的发展,图像的采集和处理技术有了长足的发展。
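第二章所列的常用增强方法中包含直方图均衡化。作为示意（并非论文作者的实现），8位灰度图像的全局直方图均衡化可以写成：

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization of an 8-bit greyscale image."""
    img = np.asarray(img, dtype=np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)   # grey-level histogram
    cdf = hist.cumsum()                              # cumulative distribution
    cdf_min = cdf[cdf > 0][0]
    span = cdf[-1] - cdf_min
    if span == 0:                                    # flat image: nothing to do
        return img
    lut = np.round(np.clip((cdf - cdf_min) / span, 0.0, 1.0) * 255.0)
    return lut.astype(np.uint8)[img]
```

该方法对普通图像有效，但正如正文所讨论的，对医学图像会放大噪声并丢失部分灰度级。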

图像复原与超分辨率重构基本适用条件及提高空间分辨率上限的研究


第29卷第5期2010年10月红外与毫米波学报J .I nfrared M illi m .W avesV o.l 29,N o .5O ctobe r ,2010文章编号:1001-9014(2010)05-0351-06收稿日期:2009 04 23,修回日期:2010 06 10 R eceived da te :2009 04 23,revised da te :2010 06 10作者简介:吴 艳(1980 ),女,湖北天门人,电子科学与技术工学博士,从事图像信息处理,超分辨率技术应用的研究.E m ai :l w uyan206@163.co m.图像复原与超分辨率重构基本适用条件及提高空间分辨率上限的研究吴 艳, 陈凡胜, 陈桂林(中国科学院上海技术物理研究所,上海 200083)摘要:从离散 离散的成像系统出发,在理论上分析了图像复原与超分辨率重构的基本适用条件及提高图像分辨率的上限,从图像处理的角度给出了基本适用条件的量化指标,最后对理论分析结论进行了实验验证,并对实际应用中不可避免的噪声影响进了研究,实验结果与理论研究结论一致.关 键 词:图像复原;超分辨率重构;分辨率上限中图分类号:TP751 文献标识码:AAPPLICABLE CONDITIONS AND I MPROVING SPATIAL RES OLUTION UPPER LI M ITS OF I MAGE RESTORATIONAND S UPER RES OLUTION RECONSTRUCTIONWU Y an , CHEN Fan Sheng , C HEN Gui Lin(Shangha i Instit u te o f T echnical Phy si cs ,Ch i nese A cade m ic of Sciences ,Shangha i 200083,Ch i na)Abstrac t :Basic appli cab le cond iti ons and upper li m its for i m prov i ng spa tia l reso l u ti on o f i m age rest o ra ti on and super resol u tion reconstructi on w ere researched theoreticall y by usi ng discrete to discrete i m ag i ng mode.l T he quantitati v e i nd ices o f app licab le cond iti ons based on i m age processing w ere g i ven .Experi m ents on restorati on and S RR were carried out by si m u lati ng different sa m pli ng frequenc i es and S NR l ow resoluti on i m ages ,and t he resu lts were analyzed i n deta i.l The perfect consistence w as obta i ned be t w een t he t heory and experi m ent .K ey word s :i m age resto ra tion ;super reso l ution reconstruc tion (SRR );reso l uti on upper li m it引言在获取图像的过程中,大气扰动、运动模糊、光学系统像差和系统噪声等都会导致图像质量下降.引起图像质量下降的模糊程度常用图像获取过程中各环节的PSF (Point Spread Function )来描述.对近距离成像,图像模糊主要由光学系统、传感器的PSF 造成,但在远距离成像如航天遥感领域中由于成像距离远,大气扰动的影响造成系统的PSF 增大,从而使投影在焦平面上影像的极限分辨率严重下降(远小于光学系统衍射极限分辨率).图像复原是提高单幅图像分辨率的经典方法,可通过获得的图像测试出系统PSF 或MTF(M odu lation Transfer Func ti o n),然后对获得的图像去噪、解卷积或进行MTF补偿,从而得到分辨率提高的图像[1~4].超分辨率重构(SRR)是从多幅具有欠采样的互有子像素位移的图像,重构超过系统采样的Nyquist 极限频率的图像本文中系统的Nyqu ist 采样频率均是针对系统PSF 
决定的图像的最高极限频率与探测器采样频率之间的关系,其实质是通过信息融合技术来提高图像的采样频率,因此要保证投影在焦平面上的图像的极限频率高于系统采样的Nyqu ist 频率.本文从离散 离散的成像系统出发,介绍了图像复原与SRR 的理论,从理论上分析了能有效应用图像复原与SRR 技术的条件及它们提高空间分辨率的上限,并对分析的结论进得了仿真实验,实验结果与理论研究结论一致.对以上问题的深入研究,对推进图像复原与SRR 结合应用来提高图像分辨率有重要意义.红外与毫米波学报29卷图1 适用于图像复原的成像模型F i g .1 T he i m ag i ng m ode l appli cable to i m age rest o ra ti on1 图像复原与SRR 适用条件及提高分辨率的上限1.1 图像复原的适用条件及分辨率上限的提高图1与图2给出了适用图像复原与SRR 的离散 离散成像模型.这里讨论的成像系统是模糊不变的情况,即模糊矩阵B 恒定.图1用方程表示为:L =D (B *H )+N ,(1)方程(1)中H,L 分别代表高分辨率与低分辨率的数字图像,D 为下采样矩阵,N 为白噪声.将式(1)改写为:L =PSF *R +N ,(2)其中,PSF 为系统点扩散函数,是大气扰动、运动模糊、光学系统像差、像元下采样共同作用的结果,即:PSF=PSF at m osphere PSF m otion PSF optical PSF sensor ,PSF 可实验测出,R 表示待复原的图像.图像复原是根据测得的系统PSF(在频域复原中用MTF),然后对获得的图像进行去噪、解卷积或MTF 补偿.从采样理论的角度分析式(1),其中模糊因子B =PSF atmosphere PSF motion PSF optical ,B *H 相当于对高分辨率图像进行了低通滤波,可通过的最高分辨极限频率为PSF 扩散半径倒数.在图像处理中,只有当PSF 的扩散瓣大于2个像元时,才可应用复原技术,根据采样定理,在成像系统满足Nyquist 采样定理时PSF 的扩展半径等于2p i x e,l 因此,理论上只有高于0.5倍Nyquist 采样频率的系统复原才有效.图像复原是消除模糊的影响,理想情况下相当于消除由系统PSF 造成的模糊后的采样,所以复原后的图像相对于原高分辨率图像中高于Nyqu ist 采样频率的成分出现混叠现象,等于或小于Nyquist 采样频率的部分清晰度提高分辨率增强.因此,复原后图像的最高空间分辨率不能超过系统采样的Nyqu ist 频率,实际复原由于待复原图像噪声、复原算法误差等的影响,复原效果远达不到理想情况.图像复原可在空域或频域中进行.在空间域中通过测得的系统PSF 求出反卷积函数[1],利用求得的反卷函数进行复原.设反卷积函数为C,C 可补偿PSF 引起的模糊作用,即C 和PSF 的卷积可以生成除中心位置外其它位置的值均为0的图像,数学表达为:C *PSF = (x,y ) ,(3)其中对式(3)进行FT(Fourier T ransfor m ):(x ,y )=1 x =m,y =n 0 x !m,y !nfft (C ) fft (PSF )=1 ,(4)fft (C )=1fft (PSF ),(5)若考虑噪声对图像复原的影响,式(5)可变为:fft (C )=fft *(PSF )fft (PSF )2+p n p f,(6)式中,fft *(PSF)为fft(PSF)的共轭复数,p n ,p f 分别为信号与噪声的功率谱,通常可以用SNR (S i g na l No ise R ation)代替,式(6)反FT 变化得式(7).C =ifftfft *(PSF )fft (PSF )2+SNR,(7)由式(2)得:L -N =PSF *R (8)式(8)两边同时卷积C 得:C *(L -N )=C *PSF *R =R ,(9)由式(9)可知复原图像可通过低分辨率图像去噪之后与反解卷积因子的卷积获得.采用MTF 补偿的方法[3]先将式(2)转化到频域:fft (L -N )=fft (PSF ) fft (R ) ,(10)式(10)中fft(PSF)=MTF k e i,其中,k 为fft (PSF)零频率幅值, 为相位.若PSF 是中心对称的,则MTF 以频谱中心为圆心的等半径圆上MTF 值相同,则式(10)可表示为:fft (L -N )=MTF k fft (R ) ,(11)则:R =i fft fft (L -N )MTF k.(12)1.2 SRR 的适用条件及提高分辨率的上限SRR 是通过多幅有亚像元位移的有欠采样的图像重构超过采样频率极限的图像,其适用模型见图2,用方程表示为:L 
^i =D ^i (B *H )+N ^i (i =1∀n) ,(13)式(13)中L i ,D i ,N i 分别为矢量,表示为矩阵的形式为:L 1L 2L n=D 1D 2 D n(B *H )+N 1N 2 N n,(14)式(14)可近似表示为:3525期吴 艳等:图像复原与超分辨率重构基本适用条件及提高空间分辨率上限的研究图2 适用于图像SRR 的成像模型F i g.2 T he i m ag i ng model app licab l e to i m age SRRL 1L 2 L n=(D 1 B )*(D 1 H ~)(D 2 B )*(D 2 H ~)(D n B )*(D n H ~)+N 1N 2 N n,(15)式(15)中H 为待重构的高分辨率图像.对于投影在焦平面上的图像,当以低于其N yqu ist 采样0.5倍的频率采样时,在数字图像中系统PSF 小于2个像元,在数字图像中D i B(i =1∀N )小于2个像元,所以下采样之后模糊体现不出来,即式(15)可以简化为:L 1L 2 L n=(D 1 H ~)(D 2 H ~)(D n H ~)+N 1N 2 N n,(16)表示为矢量方程为L ^i =D ^i H ~+N ^i ,(17)对于投影在焦平面上的图像,当以高于其N yqu ist 采样0.5倍低于其N yqu ist 采样频率采样时,在数字图像中系统PSF 为大于2个像元小于4个像元的矩阵,式(14)可变为:L 1L 2 L n=PSF 1*(D 1 H ~)PSF 2*(D 2 H ~)PSF n *(D n H ~)+N1N 2N n,(18)PSF i 是通过获得的低分辨率图像测得的系统点扩散函数,如怱略其差异,式(18)可变为式(19):L 1L 2 L n=PSF*(D 1 H ~)(D 2 H ~)(D n H ~)+N 1N 2 N n,(19)对于满足式(19)条件的情况,理论上是对获得的低分辨率图像先复原,然后再SRR 获得更高分辨率的图像,相当于高分辨率图像未被模糊直接下采样得到低分辨率图像,然后对其重构,实际可先重构图像然后通过估计其扩大瓣的PSF 进行复原来实现,原理如式(20).L 1L 2L n=D 1 (PSF ~*H ~)D 2 (PSF ~*H ~)D n (PSF ~*H ~)+N 1N 2 N n,(20)PSF ~为估计的扩大瓣的点扩散函数,先对低分辨率图像进行重构得到高于单幅图像采样频率的有模糊的高分辨率图像,然后采用PSF ~对其进行复原.根据采样理论,重构图像可打破系统采样的Nyquist 频率极限,但不可超出系统衍射极限频率.在式(19)、(20)情况下,重构图像通过再复原可以得到超出系统衍射极限频率的图像(对于这种可结合应用图像复原与SRR 的情况将在后续文章中详细讨论).理论上,有子像素位移的有欠采样的低分辨率图像越多,重构图像分辨率越接近系统衍射极限分辨率(投影在焦平面上图像分辨率).SRR 是一个病态求逆问题,可以采用I BP(Iter ati v e Backw ard Pro jecti o n),POCS(Projection on Con vex Set)等方法[5,6]估计H ~,以式(17)中n 个矢量方程采用I BP 重构为例,步骤如下:Step1:对获得的低分辨率图像进行滤波,即F i =L ^i -N ^i =D ^i H ~(获得图像SNR 较高时,这一步不需要);Step2:选取一幅滤波后的图像2倍插值作为初始高分辨率图像,即H ~=i m resize((L 1-N 1),2,#b icubic #);Step3:H ~1=H ~0;(赋初值)For i=1:n(n 为低分辨率图像数量)H ~i+1=H ~i +H BP (D H ~i -F i );(式中D 为下采因子,H BP 为反向投影算子)H ~i =H ~i+1end2 实验研究对第2节中分析的图像复原与SRR 重构适用353红外与毫米波学报29卷表1 相对原始高分辨率图像1/2下采样图像相对评价数据指标T ab le 1 Re l at i ve valuati on i nd i ce based on half do wn sa mp li ng i m age of th e in itial i mage采样频率(Nyqu ist 条件为基准)0.50.7511.251.51.752采样图像加均值为0、方差为0.001的噪声2b i cub ic 插值插值复原图M SE 0.00440.00570.00750.00840.00920.01000.0108SNR 
36.489333.894531.273130.123729.152228.320327.6025M SE 0.00430.00430.00530.00600.00670.00740.0081SNR 36.762236.846234.764833.442232.298731.301130.4191采样图像加均值为0、方差为0.003的噪声2b i cub ic 插值插值复原图M SE 0.00450.00580.00760.00850.00930.01010.0109SNR 36.262133.711031.123730.001429.065328.219627.5134M SE 0.00450.00480.00580.00650.00720.00790.0085SNR 36.353735.779433.760732.598931.648930.730629.9529采样图像加均值为0、方差为0.01的噪声2b i cub ic 插值插值复原图M SE 0.00550.00680.00850.00940.01020.01100.0118SNR 34.332932.159529.964128.966028.114927.391326.7283M SE0.00610.00930.01160.01200.01220.01240.0125基本条件及提高分辨率的上限作实验验证.在实验中,用1/4下采样,相当于采样率为1/4P ,分别用fspecia l(∃gaussian %,4,0.5)、fspecia l(∃gaussian %,8,1)、fspec ial(∃gaussi a n %,12,2)、fspecia l(∃gaussi an %,16,3)、fspecia l(∃gaussian %,20,3.5)、fspec i a l (∃gaussian %,24,4)、fspecia l(∃gaussian %,28,4.5)、fspecial(∃gaussian %,32,5)仿真系统PSF 获得欠采样、Nyqu ist 采样与过采样图像.实验中原始高分辨率图像采用I KONOS 卫星台北市中心地段遥感图像.2.1 复原适用条件实验实验内容:&研究>=0.5Nyquist 采样频率时图像复原的有效性;∋比较复原图像与原始高分辨率图像直接下采样得到的图像的分辨率;(研究高斯噪声对图像复原的影响.图3、图4,图6、图7,图8、图9分别为0.5Nyquist 、N yqu is,t 2Nyquist 采样图像加均值为0、方差0.001的高斯噪声时的2b icubic 插值放大图与其复原图像(2b icubic 插值不改变原采样图像的分辨率,通过插值放大了图像,更有利于主观比较复原图像、重构图像相对于原采样图像分辨率改善效果,所以文中采用2bicub ic 插值图与其复原图像、重构图像进行比较研究.).从以上三组图可以明显看到,复原图相对于插值图像分辨率有明显的提高,图4相对于图3分辨率的提高效果不如图7(图9)相对于图6(图8)显著.本文对不同SNR 不同采样频率的低分辨率图像进行了实验,其复原图像的分辨率都不超过图5(原始高分辨率图像直接下采样2bicub ic 插值得到图像)的分辨率.采用相对于原始高分辨率图像1/2下采样图像的M SE 与SNR 作相对评价的数据指标.表1给出了不同采样频率、不同SNR 采样图像的插值图像与其复原图相对评价指标,其中有底色的为图3、图4,图6、图7,图8、图9三组图像的评价数据.3545期吴 艳等:图像复原与超分辨率重构基本适用条件及提高空间分辨率上限的研究图10 不同采样频率、S NR 采样图像(a )2bicubic 插值图与(b)其复原图像相对评价指标曲线图F i g .10 Curve p l o ts of re lati ve valua ti on i ndices of (a)2bicub i c i n terpolation i m age and (b )its resto rted i m ag e for d ifferent frequences and S NR i m age图10给出了不同采样频率、不同SNR 采样图像的插值图像与其复原图的M SE 与SNR 曲线.实线为复原图像SNR (MSE ),虚线为插值图像SNR (M SE ),线组1、线组2、线组3分别代表三组采样图像SNR 依次减小的情况.实验结论:当采样图像SNR 较高时,采样频率在0.5Nyqu ist 处时,复原效果不明显,随着采样频率的增大,复原有效性增强,当采样频率达到N yqu ist 频率处之后复原有效性不再有明显的增强;当采样图像的SNR 
的降低时,复原效果明显减少,最终导致复原效果的恶化.2.2 S RR 适用条件实验实验内容:&研究<Nyqu ist 采样频率条件下图像重构效果;∋比较重构图像与投影到焦平面上的图像分辨率;(研究噪声对重构效果的影响.4幅行、列均错位0.5pixel 的低分辨率图像通过原始高分辨率图像1/4下采样(加高斯噪声)得到.图11、图12,图14、图15,图17、图18分别为<0.5Nyqu ist 采样频率(PSF=fspecial(∃ga ussi a n %,4,0.5))、=0.5Nyqu ist采样频率(PSF =fspecial(∃gaussian %,8,1))、>0.5Nyqu ist 与<Nyquist 采样频率(PSF =fspec ial(∃gaussian %,12,2))条件下采样加均值为0、方差0.001的高斯噪声得到的低分辨率图像中的一幅的355红外与毫米波学报29卷图20 不同采样频率、S NR采样图像,(a)2bicub ic插值图与(b)其复原图像相对评价指标曲线图F ig.20 Curve plots of relati ve va l uati on i nd i ces o f(a) 2bicub ic i nterpo lati on i m age and(b)its restortated i m age for different frequc ies and S NR i m age2b icubic插值图与4幅低分辨率图像重构图像.实验中采用的1/4下采样的四幅图像重构,在不同实验条件下,重构图像的分辨率都明显比系统衍射极限分辨率低,可主观评价图12(图15,图18)与图13(图16,图19),文中没有对其进行客观评价.图20给出了图14、图15、图16,图17、图18、图19三组图像MSE与SNR.曲线图,实线表示重构图像,虚线表示插值图像,线组1、线组2、线组3分别代表三组采样图像的SNR依次减小的情况.从图20可以看出,低分辨率图像SNR越高,重构图像分辨率提高越明显,欠采样程度越大提高效 果越明显.当达到采样的Nyqu i s t频率时,重构图像相对于低分辨率图像分辨率已没有提高.3 结论理论分析与实验表明,图像复原可应用于大于0.5N yqu ist采样的成像系统,在获得图像SNR较高的情况下,过采样程度越大,复原效果越明显,随着SNR降低,复原效果逐渐恶化,复原所获得图像的空间分辨率极限为系统N yqu ist采样频率;SRR应用于欠采样系统,低分辨率图像SNR较高时,欠采样程度越大,重构效果越明显,当SNR降低时,重构图像分辨率仍有改善,但程度减小,重构图像的最高空间分辨率能打破成像系统Nyqu i s t极限频率,但不能超过系统衍射极限分辨率.对于>0.5Nyquist采样频率、<Nyqu ist采样频率条件,可结合复原与SRR的问题在后续论文中详细研究.REFERENCES[1]L I U Zheng jun,WANG Chang yao,LUO Cheng feng.Estima ti on o f CBER S 1P o i nt Spread Functi on and I m age R esto ration[J].Journal of R e m o te S ensi ng(刘正军,王长耀,骆成凤.CBERS 1PSF估计与图像复原.遥感学报),2004, 8(3):234 238.[2]R uiz C P,Lopez F J A.R estor i ng SPOT I m ag es us i ng PSFderived D econvo l u ti on filters[J].In ternati onal J ournal of R e m ote Sensing,2002,23(12):2379 2391.[3]C HENG Q i ang,DA Q I yan,X I A D e shen.R esto ration ofre m o te sesensi ng i m ages based on M TF t heory[J].Journal of Image and Graphics(陈强,戴奇燕,夏德深.基于M TF 理论的遥感图像复原.中国图象图形学报),2006,11(9):1299 1305.[4]P atra S K,M is h ra N,Chandrakanth R.I mage quality i mprovement t hrough M TF compensati on.a treat m ent t o h i gh reso l u tion data[J].Indian Cartog 
rapher,2002:86 93. [5]M er i no M ariaT eresa,N unez Jorge.Super resoluti on o f remo tely sensed i m ages w ith va riab l e p i xe l li near reconstruc tion[J].IEEE T ransactions on G eoscience and Re m ote Sens ing,2007,45:1446 1457.[6]P ark SungCheo,l P ark M i n K yu,K ang M oonG,i Super resol ution i m age reconstruc tion:A technical overv ie w[J].IEEE S i gnal P rocessing M agaz i ne,2003,20:21 36.356。
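上文1.2节给出的IBP重构步骤（Step 1–3）可以用如下玩具代码示意。这是我们的简化示例，并非原文实现：下采样算子D取块平均、反向投影算子H_BP取块复制，忽略亚像元位移，并按惯用的残差方向 F_i − D(H~) 进行更新（原文Step 3将差值写成相反的顺序）：

```python
import numpy as np

def downsample(img, f=2):
    """Block-average down-sampling operator D (factor f)."""
    h, w = img.shape
    img = img[:h - h % f, :w - w % f]
    return img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def back_project(err, f=2):
    """Back-projection operator H_BP: replicate each LR pixel over its block."""
    return np.kron(err, np.ones((f, f)))

def ibp(lr_images, f=2, n_iter=5):
    """Iterative back-projection: refine the HR estimate with LR residuals."""
    hr = back_project(lr_images[0], f)          # crude initial HR estimate
    for _ in range(n_iter):
        for lr in lr_images:
            residual = lr - downsample(hr, f)   # F_i - D(H~)
            hr = hr + back_project(residual, f)
    return hr
```

实际SRR中初始估计通常用双三次插值，且各低分辨率帧带有不同的亚像元位移，这里均已省略。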


Enhancing retinal image by the Contourlet transform

Peng Feng a,*, Yingjun Pan a, Biao Wei a, Wei Jin b, Deling Mi a

a Key Laboratory of Opto-electronics Technology and System, Ministry of Education, ChongQing University, ChongQing 400044, PR China
b Faculty of Information Science and Technology, NingBo University, NingBo 315211, PR China

Received 22 June 2005; received in revised form 14 May 2006; available online 28 November 2006
Communicated by C.-F. Westin

Abstract

The evaluation of retinal images is widely used to help doctors diagnose many diseases, such as diabetes or hypertension. Due to the acquisition process, retinal images often have low grey level contrast and dynamic range. This problem may seriously affect the diagnostic procedure and its results. Here we present a new multi-scale method for retinal image contrast enhancement based on the Contourlet transform. The Contourlet transform has better performance in representing edges than wavelets for its anisotropy and directionality, and is therefore well-suited for multi-scale edge enhancement. We modify the Contourlet coefficients in corresponding subbands via a nonlinear function and take the noise into account for more precise reconstruction and better visualization. We compare this approach with enhancement based on the Wavelet transform, Histogram Equalization, Local Normalization and Linear Unsharp Masking. The application of this method on images from the DRIVE database showed that the proposed approach outperforms other enhancement methods on low contrast and dynamic range images, with an encouraging improvement, and might be helpful for vessel segmentation.
© 2006 Elsevier B.V. All rights reserved.

Keywords: The Contourlet transform; Anisotropy and directionality; Contrast enhancement; Retinal imaging

1. Introduction

The evaluation of retinal images is a diagnostic tool widely used to gather important information about patient retinopathy. Retinal lesions, related both to vascular aspects, such as increased vessel tortuosity (Williams et
al., 1999; Heneghan et al., 2002) or focal narrowing (Hubbard et al., 1999), and to nonvascular features, such as haemorrhages, exudates, microaneurysms and others (Ege et al., 2000), are crucial indicators of serious systemic diseases, such as diabetes or hypertension. It is thus very important for the doctors to be able to clearly detect, appreciate and recognize the lesions among the numerous capillary vessels and the optic nerve present in the image. But the retinal images acquired with a fundus camera often have low grey level contrast and dynamic range. Fig. 1 is one example of such a retinal image. This problem may seriously affect the diagnostic procedure and its results, because lesions and vessels in some areas of the FOV are hardly visible to the eye specialist. Obviously, contrast enhancement is a necessary pre-processing step if the original retinal image is not a good candidate for subsequent accurate segmentation.

0167-8655/$ - see front matter © 2006 Elsevier B.V. All rights reserved. doi:10.1016/j.patrec.2006.09.007
* Corresponding author. Tel.: +86 23 65106793; fax: +86 23 65102515. E-mail addresses: andy_feng_peng@ , coe-fp@ (P. Feng).
/locate/patrec
Pattern Recognition Letters 28 (2007) 516–522

Several techniques have been used to improve the image quality. The classic one is Histogram Equalization (Zimmerman and Pizer, 1988), which has good performance for ordinary images, such as human portraits or natural images, but is not a good choice for ophthalmic images due to its amplification of noise and the absence of some grey levels after enhancement. More complex methods, such as unsharp masking (Polesel et al., 1997; Yang et al., 2003) and a local normalization (Joes et al., 2004), have been proposed to enhance the contrast. The former, based on a spatial filter, tries to add high frequency components of the image to
that both methods act as high-pass filters and,inevitably,augment the noise as well as improving the contrast.Others techniques based on matched filters have also been introduced (Chaudhuri et al.,1989;Hoover et al.,2000;Lin et al.,2003).These techniques are good at enhancing local contrast,especially for blood vessel,in a small area,but for the whole image,the computation becomes difficult due to needing many various matched filters.Recently,the wavelet transform has been widely used in the medical image processing.Mallat (1989)introduced a fast discrete wavelet transform algorithm that is the method of choice in many ine and Song (1992,)and Laine et al.(1994)use this algorithm to enhance the microcalcifications in mammograms.Fu et al.(2000a,b)used a wavelet-based histogram equaliza-tion to enhance sonogram images.The wavelet transform is a type of multi-scale analysis that decomposes input sig-nal into high frequency detail and low frequency approxi-mation components at various resolutions.To enhance features,the selected detail wavelet coefficients are multi-plied by an adaptive gain value.The image is then enhanced by reconstructing the processed wavelet coeffi-cients.In our opinion,the wavelet transform may be not the best choice for the contrast enhancement of a retinal image.This observation is based on the fact that wavelets are blind to the smoothness along the edges commonlyfound in images.In other words,wavelet can not provide a ‘sparse’representation for such an image because of the intrinsic limitation of wavelet.Some new transforms have been introduced to take advantage of this property.TheCurvelet (Cande`s and Donoho,1999)and Contourlet (Do and Vetterli,2005)transforms are examples of two new transforms with a similar structure,which are deve-loped to sparse represent natural images.Both of these geo-metrical transforms offer the two important features of an anisotropy scaling law and directionality and therefore are good choice for edge 
enhancement. Do and Vetterli (2005) utilized a double filter bank structure to develop the Contourlet transform, used it for some nonlinear approximation and de-noising experiments, and obtained some encouraging results. In this work, a new approach for retinal image contrast enhancement based on the Contourlet transform is proposed. The main reason for the choice of the Contourlet is its better performance in representing edges and textures of natural images, i.e. better representation of lesions and blood vessels of a retinal image. We compare this approach with other contrast enhancement methods: Histogram Equalization (HE), local normalization (LN) (Joes et al., 2004), linear unsharp masking (LUM) and wavelet-based contrast enhancement, in addition to the proposed Contourlet transform method. Our experimental results show an encouraging improvement, achieve better visual results and outperform the previous methods.

2. System architecture

Fig. 2 shows a flow chart for the proposed scheme. First, the retinal images captured from the camera need to be transformed from RGB to greyscale. Histogram stretching is then applied to the grey image for preliminary enhancement, and finally the Contourlet transform is applied. Here we do not use histogram equalization as the first step although it is more effective. Ideally, histogram equalization should enhance the image contrast by adjusting the pixel distribution so that it conforms to a uniform distribution. However, this method will lose a lot of information which may be very important for lesion or vessel detection, because some grey levels are absent after processing.

Fig. 1. Example of observed retinal image before and after enhancement.
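The pre-processing just described (green-channel or greyscale extraction followed by histogram stretching) might look like the following sketch. This is our illustration, not the authors' code, and the linear stretch to the full 8-bit range is an assumption since the paper does not specify the stretch bounds:

```python
import numpy as np

def preprocess(rgb):
    """Take the green channel of an RGB retinal image and linearly stretch
    its grey levels to the full 8-bit range [0, 255]."""
    g = np.asarray(rgb, dtype=float)[..., 1]   # green: best vessel contrast
    lo, hi = g.min(), g.max()
    if hi == lo:                               # flat image: nothing to stretch
        return np.zeros(g.shape, dtype=np.uint8)
    return np.round((g - lo) / (hi - lo) * 255.0).astype(np.uint8)
```

The stretched green channel would then be fed to the Contourlet decomposition stage of Fig. 2.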
All the retinal images are captured in digital form from a Canon CR5nonmydriatic3CCD camera at45°field of view.The images are of size768·584pixels,8bits per col-our channel and have afield of view(FOV)of approxi-mately540pixels in diameter.Because the green channel of colour retinal images formatted as an RGB image gives the highest contrast between vessels and background,this channel is a good choice for contrast enhancement.Wefirst extract the green channel of retinal images.Fig.3shows the each channel of an RGB retinal image and their histo-grams,respectively.It is easily to show that red and blue channels are either too bright or too dark.2.2.The Contourlet transformFig.4(a)shows aflow graph of the Contourlet trans-form.It consists of two steps:the subbands decomposition and the directional transform.A Laplacian pyramid(LP) isfirst used to capture point discontinuities,then followed by a directionalfilter bank(DFB)to link point disconti-nuity into linear structure.The overall result is an image expansion using basic elements like contour segments, and is thus named the Contourlet.Fig.4(b)shows an example of the frequency decomposition achieved by the DFB.It depicts the Contourlet coefficients of one retinal image using three LP levels and eight directions at thefin-est level.Quincunxfilter banks are the building blocks of the DFB.2.3.Image contrast enhancementIf background has similar grey level with object,the details of this object is hard to detect.So if the wide blood vessels in retinal image which represent as strong edges are easy tofind,the thin lesions and nerves are hard to detect due to their similar grey levels to the background. 
The proposed strategy softens the strongest edges and amplifies the faint ones. We try to reduce the ratio of strong features to faint features so that the slim vessels become visible.

Since the Contourlet transform is well adapted to representing images containing edges, it is a good candidate for microstructure enhancement in retinal images as well as for edge enhancement in natural images. Contourlet coefficients can be modified via a nonlinear function y_a. Taking noise into consideration, we explicitly introduce a noise standard deviation \sigma in the equation (Velde, 1999; Starck et al., 2003):

y_a(x, \sigma) =
\begin{cases}
1, & x < a\sigma \\
\dfrac{x - a\sigma}{a\sigma}\left(\dfrac{t}{a\sigma}\right)^{q} + \dfrac{2a\sigma - x}{a\sigma}, & a\sigma \le x < 2a\sigma \\
(t/x)^{q}, & 2a\sigma \le x < t \\
(t/x)^{s}, & x \ge t
\end{cases}
\qquad (1)

Here, q determines the degree of nonlinearity and s introduces dynamic range compression: using a nonzero s enhances the faintest edges and softens the strongest ones. a is a normalization parameter, and t is the value under which coefficients are amplified. The value of t obviously depends on the pixel values, and we can derive it from the data. Two options are possible:

(1) t = F_t \sigma, where \sigma is the noise standard deviation and F_t is an additional parameter which is independent of the Contourlet coefficient values and is therefore much easier for a user to set. For instance, using a = 3 and F_t = 10 amplifies all coefficients between 3\sigma and 30\sigma.

(2) t = l M_a, with l < 1, where M_a is the maximum Contourlet coefficient of the relevant band. In this case, choosing for instance a = 3 and l = 0.5, we amplify all coefficients with an absolute value between 3\sigma and half the maximum absolute value of the band.

The first choice allows the user to define the coefficients to be amplified as a function of their signal-to-noise ratio, while the second gives an easy and general way to fix t independently of the range of the pixel values. Fig. 5 shows a plot of the enhanced coefficients versus the original coefficients.

The Contourlet enhancement method for green-channel images consists of the following steps:

Step 1. Input the colour retinal image
and extract its green channel.
Step 2. Apply histogram stretching to the grey retinal image.
Step 3. Estimate the noise standard deviation \sigma of the input image I (Starck et al., 2002).
Step 4. Calculate the Contourlet transform of the input image (Do and Vetterli, 2005). This yields a set of subbands V_j; each band V_j contains N_j coefficients C_{j,k} (k \in [1, N_j]) and corresponds to a given resolution level.
Step 5. Calculate the noise standard deviation \sigma_j for each band j of the Contourlet transform.
Step 6. For each band V_j:
  (1) Calculate the maximum value M_j of the band.
  (2) Multiply each Contourlet coefficient C_{j,k} by y_a(|C_{j,k}|, \sigma_j).
Step 7. Reconstruct the enhanced image from the modified Contourlet coefficients.

¹ The images are available at: http://www.isi.uu.nl/Research/Databases/DRIVE/.

3. Experiments and evaluation

In order to better appreciate the results obtained with our proposed algorithm, we compared four approaches in our contrast enhancement experiments, histogram equalization (HE), local normalization (LN) (Joes et al., 2004), linear unsharp masking (LUM) and wavelet-based contrast enhancement, in addition to the proposed Contourlet transform. Most of our test images are from the standard retinal image source, the DRIVE database. Using the green channel and the central part of each test image, a 540 × 540 square that covers the FOV area, we carried out all the experiments.

Fig. 3. Top: the original colour retinal image and each of its channels; bottom: the corresponding histograms of the four images. (a) Original image, (b) green channel, (c) red channel, (d) blue channel.

Fig. 5. Enhanced coefficients versus original coefficients; parameters are (a) t = 30, a = 3, q = 0.5 and s = 0; (b) t = 30, a = 3, q = 0.5 and s = 0.6.

For the Contourlet transform, we use five LP levels
and 32 directions at the finest level. In the LP stage we chose the "9–7" filters, partly because these biorthogonal filters have linear phase, which is crucial for image processing. In the DFB stage we use the "23–45" biorthogonal quincunx filters designed by Phoong et al. (1995) and modulate them to obtain the biorthogonal fan filters. In particular, we use the wavelet transform with the same contrast enhancement function as a point of comparison for the Contourlet transform. The wavelet transform uses the Daubechies biorthogonal 9–7 wavelet and a four-level decomposition.

Fig. 6 shows one example of the results of histogram equalization, local normalization, linear unsharp masking, and wavelet- and Contourlet-based enhancement. It is clear that the image contrast has been improved by the Contourlet transform approach: several previously unrecognizable capillary vessels can now be easily identified. Specifically, owing to the introduction of the noise standard deviation in the enhancement equation, it is superior to HE and LN in the sense that it does not strongly amplify the high-frequency part of the whole image, i.e. the noise. The uniform background distribution also helps the results, because we do not amplify the low-amplitude coefficients.

Fig. 6. Results on an image from the DRIVE database with different enhancement approaches: (a) original image, (b) histogram equalization (HE), (c) local normalization (LN), (d) linear unsharp masking (LUM), (e) wavelet, (f) Contourlet.

Histogram equalization appears to perform well in these experiments. Unfortunately, some intrinsic disadvantages exist: the absence of some grey levels and a nonuniform background/luminosity distribution. The drawback is easy to find: parts of the vessels become invisible against the too-bright background. Another disadvantage is well known: histogram equalization strongly amplifies the noise. This will make the next step, vessel segmentation, hard to complete and may even destroy the vessel or
lesion. Local normalization, which aims to normalize each pixel of the image to zero mean and unit variance, also amplifies the noise strongly. Although the edges, blood vessels and nerves of the retinal image become easy to identify, the noise is too great. With carefully chosen parameters, linear unsharp masking gives a better result than histogram equalization and local normalization, but the background grey distribution is not uniform and unsharp masking leaves more noise in the background.

Fig. 6(e) and (f) compare wavelet and Contourlet transform enhancement with the same enhancement function. Both transform-based approaches improve the contrast of the retinal image, and both show artifacts for the same reason: shift-variance. The wavelet appears inferior to the Contourlet in the sense that (f) shows higher contrast between vessels and background than (e) does, and some faint, thin vessels that are almost invisible in (e) can still be seen in (f). As mentioned in the introduction, the anisotropic scaling law and directionality are critical: they allow the Contourlet transform to keep the thin vessels and nerves undistorted during decomposition and reconstruction and to represent the image more sparsely than the wavelet transform.

In summary, the results in these figures indicate that the Contourlet-based enhancement approach works well and brings some advantages. One concern with the proposed scheme is that the widths of some vessels change slightly. This phenomenon arises partly because the Contourlet transform is not shift-invariant: it uses 2 × 2 down-sampling after the Laplacian pyramid (LP) transform, which inevitably affects the reconstructed image once the coefficients have been manipulated. That is why there are some artifacts outside the FOV, especially in areas where the grey-level distribution changes abruptly. Another open problem is how to choose appropriate parameters for our algorithm; the relationship between the statistical properties of
the retinal image and the parameters of the enhancement function needs further investigation. Visual evaluation and quantitative assessment will be the aim of future work.

4. Conclusion

In this paper, we studied image contrast enhancement by modifying Contourlet coefficients. Experimental results show that this method provides an effective and promising approach for retinal image enhancement. When reconstructing the enhanced image from the modified Contourlet coefficients, two properties are important for contrast stretching:

• Noise must be taken into consideration and must not be amplified while enhancing edges.
• It is very advantageous that there is no block effect.

Our conclusions are as follows:

1. The Contourlet enhancement function needs to take good account of image noise.
2. As evidenced by the experiments, the Contourlet transform preserves contours better than the other methods.
3. The effect of the shift-variance of the Contourlet transform needs further investigation in order to obtain more precise enhancement of retinal images.
4. The relationship between image statistical properties and the parameters of the enhancement function should be studied carefully in the future.

For low-dynamic-range and low-contrast images, Contourlet enhancement brings a large improvement over the other approaches considered, since the Contourlet can detect contours and edges quite adequately. Because the enhancement function tends to change the width of blood vessels, our next step is to concentrate on the effect of the shift-variance of the Contourlet transform and to find the most appropriate parameters for this promising method.

Acknowledgement

The authors would like to thank Dr. Yaxun Zhou for helpful suggestions on, and revision of, the original version of the manuscript and the experiments.

References

Candès, E.J., Donoho, D.L., 1999. Curvelets - a surprisingly effective nonadaptive representation for objects
with edges. Download from /~emmanuel/publications.html.
Chaudhuri, S., Chatterjee, S., Katz, N., et al., 1989. Detection of blood vessels in retinal images using two-dimensional matched filters. IEEE Trans. Med. Imaging 8 (3), 263–269.
Do, M.N., Vetterli, M., 2005. The Contourlet transform: an efficient directional multiresolution image representation. IEEE Trans. Image Process. 14 (12), 2091–2106.
Ege, B.M., Hejlesen, O.K., Larsen, O.V., et al., 2000. Screening for diabetic retinopathy using computer based image analysis and statistical classification. Comput. Methods Programs Biomed. 62 (3), 165–175.
Fu, J.C., Chai, J.W., Wong, S.T.C., 2000a. Wavelet-based enhancement for detection of left ventricular myocardial boundaries in magnetic resonance images. Magn. Reson. Imaging 18 (9), 1135–1141.
Fu, J.C., Lien, H.C., Wong, S.T.C., 2000b. Wavelet-based histogram equalization enhancement of gastric sonogram images. Comput. Med. Imaging Graph. 24 (2), 59–68.
Heneghan, C., Flynn, J., O'Keefe, M.L., et al., 2002. Characterization of changes in blood vessel width and tortuosity in retinopathy of prematurity using image analysis. Med. Image Anal. 6 (4), 407–429.
Hoover, A., Kouznetsova, V., Goldbaum, M., 2000. Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response. IEEE Trans. Med. Imaging 19 (3), 203–210.
Hubbard, L.D., Brothers, R.J., King, W.N., et al., 1999. Methods for evaluation of retinal microvascular abnormalities associated with hypertension/sclerosis in the Atherosclerosis Risk in Communities study. Ophthalmology 106 (12), 2269–2280.
Joes, S., Michael, D.A., Meindert, N., et al., 2004. Ridge-based vessel segmentation in colour images of the retina. IEEE Trans. Med. Imaging 23 (4), 501–509.
Laine, A., Song, S., 1992a. Multiscale wavelet representations for mammographic feature analysis. In: Proc. SPIE Conf. on Mathematical Methods in Medical Imaging, vol. 1768, pp. 306–316.
Laine, A., Song, S., 1992b. Wavelet processing techniques for digital mammography. In: Proc. SPIE Conf. on Visualization in Biomedical Computing, vol. 1808, pp. 610–624.
Laine, A., Schuler, S., Fan, J., et al., 1994. Mammographic feature enhancement by multiscale analysis. IEEE Trans. Med. Imaging 13 (4), 725–740.
Lin, T.S., Du, M.H., Xu, J.T., 2003. The preprocessing of subtraction and the enhancement for biomedical image of retinal blood vessels. J. Biomed. Eng. 20 (1), 56–59.
Mallat, S.G., 1989. A theory for multiresolution signal decomposition: the wavelet representation. IEEE Trans. Pattern Anal. Machine Intell. 11 (7), 674–689.
Phoong, S.M., Kim, C.W., Vaidyanathan, P.P., et al., 1995. A new class of two-channel biorthogonal filter banks and wavelet bases. IEEE Trans. Signal Process. 43 (3), 649–665.
Polesel, A., Ramponi, G., Mathews, V.J., 1997. Adaptive unsharp masking for contrast enhancement. In: Proc. IEEE Internat. Conf. Image Process., vol. 1, pp. 267–270.
Starck, J.L., Candès, E.J., Donoho, D.L., 2002. The curvelet transform for image denoising. IEEE Trans. Image Process. 11 (6), 670–684.
Starck, J.L., Murtagh, M., Candès, E.J., Donoho, D.L., 2003. Gray and color image contrast enhancement by the curvelet transform. IEEE Trans. Image Process. 12 (6), 706–717.
Velde, K.V., 1999. Multi-scale colour image enhancement. In: Proc. IEEE Internat. Conf. Image Process., vol. 3, pp. 584–587.
Williams, E.H., Michael, G.M., Brad, C., et al., 1999. Measurement and classification of retinal vascular tortuosity. Int. J. Med. Informatics 53 (2–3), 239–252.
Yang, C.Y., Shang, H.B., Jia, C.G., et al., 2003. Adaptive unsharp masking method based on region segmentation. Opt. Precision Eng. 11 (2), 188–192.
Zimmerman, J.B., Pizer, S.M., 1988. An evaluation of the effectiveness of adaptive histogram equalization for contrast enhancement. IEEE Trans. Med. Imaging 7 (4), 304–312.
