Multi-Focus Image Fusion Based on NSCT and NSST
A Multi-Scale Feature Fusion Method for Spine X-ray Image Segmentation
Scoliosis is a three-dimensional structural deformity of the spine that affects 1%-4% of adolescents worldwide [1].
Diagnosis is based mainly on the patient's scoliosis angle, and X-ray imaging is currently the modality of choice for diagnosing scoliosis; segmenting the spine in X-ray images is the basis for subsequent measurement, registration, and 3D reconstruction.
A number of spine X-ray image segmentation methods have appeared recently.
Anitha et al. [2-3] proposed custom filters for automatically extracting vertebral endplates and morphological operators for automatically obtaining contours, but these methods suffer from a degree of inter-observer error.
Sardjono et al. [4] proposed a physics-based method using a charged-particle model to extract the spinal contour, which is complex to implement and of limited practicality.
Ye et al. [5] proposed a segmentation algorithm based on fuzzy C-means clustering, but its procedure is cumbersome and its practicality limited.
All of the above methods segment only the vertebral bodies and cannot segment the overall contour of the spine.
Deep learning has found many applications in image segmentation.
Long et al. proposed the Fully Convolutional Network (FCN) [6], which replaces the last fully connected layer of a convolutional neural network with a convolutional layer and applies deconvolution to the resulting feature maps to obtain pixel-level classification.
Building on the FCN architecture, Ronneberger et al. proposed U-Net [7], an encoder-decoder network for image segmentation.
Wu et al. proposed BoostNet [8] for object detection in spine X-ray images, and a multi-view correlation network [9] for localizing the spinal framework.
These methods do not segment the spine image directly; they extract only keypoint features and derive the overall spinal contour from the localized features.
Fang et al. [10] used an FCN to segment spinal CT slices and perform 3D reconstruction, but segmentation accuracy was relatively low.
Horng et al. [11] cropped spine X-ray images, segmented individual vertebrae with a residual U-Net, and then reassembled the complete spine image, which makes the segmentation pipeline overly cumbersome.
Tan et al. [12] and Grigorieva et al. [13] used U-Net to segment spine X-ray images for Cobb angle measurement or 3D reconstruction, but segmentation accuracy remained limited.
Although these studies accomplish spine segmentation to some degree, two problems remain: (1) they only locate vertebral bodies and compute the scoliosis angle, without segmenting the complete spine.
Multimodal Medical Image Fusion
Yosuke Uchimura, Atsushi Kobayashi
Graduate School of Engineering, Kyushu Institute of Technology (KIT), Kitakyushu, Japan. serikawa@elcs.kyutech.ac.jp
Miroslaw Trzupek‡
Department of Automatics, Laboratory of Biocybernetics, AGH University of Science and Technology, Krakow, Poland. ‡ mtrzupek@.pl
...and MR images are better at presenting normal and pathological soft tissue. Therefore, a single image type from a single sensor cannot provide a complete view of the scene in many applications. Fused images, if suitably obtained from a set of multimodal sensor images, can provide a better view than any of the individual source images. In recent decades, growing interest has focused on multimodal medical image fusion, the process of extracting significant information from multiple images and synthesizing it into one image. It is well established in the literature that multi-resolution analysis (MRA) is the approach best suited to image fusion. MRA-based multimodal medical fusion methods [5], such as wavelets [6], Laplacian pyramids [7], wedgelets [8], bandelets [9], curvelets [10], shearlets [36], and contourlets [11], are recognized as among the most effective ways to obtain fine fused images at different resolutions [29]. The pyramid method first constructs a pyramid for each input image, then applies a feature-selection approach to form a pyramid of fusion values; inverting the pyramid reconstructs the image and produces the fused result. This method is relatively simple, but it has some drawbacks [6]. The discrete wavelet transform (DWT) was therefore proposed to improve multi-resolution fusion [7]: an image can be decomposed into a series of subband images with different resolution, frequency, and direction characteristics, completely separating the spectral and spatial characteristics of the image, after which fusion is performed at each resolution.
However, because wavelets have limited directionality, they cannot represent line or curve singularities in two- or higher-dimensional signals; hence the other, more directional MRA tools listed above (curvelets, contourlets, shearlets) were developed.
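The pyramid/DWT fusion pipeline described above can be illustrated with a minimal one-level 2-D Haar decomposition (a hand-rolled stand-in for a general DWT; the function names are illustrative, not from any cited paper): average the approximation band, keep the larger-magnitude detail coefficients, and invert.

```python
import numpy as np

def haar2(img):
    # one-level 2-D Haar analysis: approximation LL and details LH, HL, HH
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    return ((a + b + c + d) / 4.0, (a - b + c - d) / 4.0,
            (a + b - c - d) / 4.0, (a - b - c + d) / 4.0)

def ihaar2(LL, LH, HL, HH):
    # exact inverse of haar2
    out = np.empty((2 * LL.shape[0], 2 * LL.shape[1]))
    out[0::2, 0::2] = LL + LH + HL + HH
    out[0::2, 1::2] = LL - LH + HL - HH
    out[1::2, 0::2] = LL + LH - HL - HH
    out[1::2, 1::2] = LL - LH - HL + HH
    return out

def fuse_dwt(imgA, imgB):
    # low band: average; detail bands: keep the larger-magnitude coefficient
    A, B = haar2(imgA.astype(float)), haar2(imgB.astype(float))
    LL = (A[0] + B[0]) / 2.0
    details = [np.where(np.abs(x) >= np.abs(y), x, y)
               for x, y in zip(A[1:], B[1:])]
    return ihaar2(LL, *details)
```

A real fusion system would iterate this over several decomposition levels; the one-level sketch only shows the select/average logic the surveyed methods share.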
A Microscopic 3D Reconstruction Method Based on a Multi-Depth-of-Field Fusion Model
Abstract: A new micro 3D reconstruction method based on a multi-focus image fusion model is proposed, which overcomes the low efficiency of laser scanning confocal microscopy in processing metal samples. In the method, the micro 3D reconstruction problem is transformed into a two-dimensional image fusion problem. The whole process divides into three stages: image fusion, optimal matching, and iterative recovery. In the first stage, a novel microscopy image fusion algorithm based on m-PCNN in the nonsubsampled contourlet transform (NSCT) domain is proposed, in which the details of each original image are retained in the first fusion result. In the next stage, the second fusion result and the height map are obtained by an optimal matching algorithm based on the correlation coefficient of regional images. In the last stage, an en…
Received 2016-09-19; revised 2017-02-14. Supported by the National "863" High-Tech R&D Program of China (2015AA015408) and the CAS "Western Light" Foundation (2011180). Authors: Yan Tao (1987-), Ph.D. candidate (image processing and machine vision, intelligent computing); Chen Bin (1970-), Ph.D., professor, doctoral supervisor, corresponding author (image understanding, 3D visual analysis); Liu Fengxian (1987-), Ph.D. candidate (atmospheric environmental chemistry); Wang Zuocai (1987-), Ph.D. candidate (image processing); Guo Yufeng (1985-), Ph.D. candidate (machine vision); Guo Siwen (1965-), Ph.D., professor, doctoral supervisor (image processing, computer-aided automated reasoning).
Multi-Focal-Plane Image Fusion of Nonwoven Fabric Images Based on the GHM Multiwavelet Transform
Chen Yang, Xin Binjie, Deng Na
Abstract: To address the problem that fibers in parts of a fabric image captured by an optical microscope at a single focal plane are blurred, a multi-focal-plane image fusion algorithm for nonwoven fabrics based on the GHM multiwavelet transform is proposed. A sequence of fabric images at different focal planes is acquired with a self-built microscopic imaging system for nonwovens. The initial image sequence is preprocessed with critically sampled prefiltering, the high- and low-frequency components are processed with two fusion methods in turn, and the initial fused fabric image is obtained after multiwavelet fusion and inverse transform. The initial fused image is then fused with each subsequent single-focal-plane image, iterating until all fiber regions are displayed clearly. Experimental results show that the method digitally fuses image sequences captured at different focal planes so that the fiber web across the full field of view is in sharp focus in a single image, facilitating subsequent computer image processing and measurement.
Journal: Journal of Textile Research, 2019, 40(6): 125-132. Keywords: critically sampled prefiltering; GHM multiwavelet; multi-focal-plane fusion; nonwoven fabric image; microscopic imaging. Affiliations: School of Electronic and Electrical Engineering / School of Fashion Technology, Shanghai University of Engineering Science, Shanghai 201620, China. CLC: TP311.1
Nonwoven fabric consists of randomly or directionally arranged fibers and is produced mainly by spunbonding.
The performance of a nonwoven is closely related to the pore structure of its fiber web, and fiber thickness, width, orientation, and the way the web is formed all relate to that structure; obtaining these structural parameters and linking them to performance is therefore of great practical significance for production and application.
At present, nonwoven pore structure is analyzed mainly by indirect methods, which are time-consuming, labor-intensive, and unable to account for the complexity of the pore structure.
Advances in computer digital image processing provide an effective tool for studying nonwoven structure and performance.
Image quality is critical for fiber morphology measurement and structure analysis.
Because nonwovens are relatively thick, the depth of field of an ordinary optical microscope is insufficient to render all fibers clearly in one image.
Measurements based on such incompletely focused images yield inaccurate fiber structure and may even mislead subsequent processing [1].
Multi-Focus Image Fusion Based on the Contourlet Transform
Ding Lan (Source: Computer Knowledge and Technology, 2008, No. 34)
Abstract: Because the focal range of a visible-light imaging system is limited, it is difficult to obtain an image in which all objects in a scene are sharp; multi-focus image fusion solves this problem effectively.
The Contourlet transform is multi-scale and multi-directional; introducing it into image fusion extracts the features of the source images more effectively and provides more information for the fused image.
This paper proposes a Contourlet-based multi-focus image fusion method with a fusion rule based on region statistics.
The differently focused images are first decomposed with the Contourlet transform; low-frequency coefficients are fused by averaging, while high-frequency coefficients are fused by a rule determined by region statistics; the inverse transform then yields the fused result.
Experimental results are given and analyzed, showing that the method achieves better fusion than wavelet-based methods.
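The region-statistics rule just described (low bands averaged, high bands chosen by local region energy) can be sketched on generic subband arrays. No contourlet library is assumed: the functions below operate on whatever subbands a decomposition produces, and the names are illustrative.

```python
import numpy as np

def region_energy(band, r=1):
    # sum of squared coefficients over a (2r+1) x (2r+1) neighborhood
    pad = np.pad(band.astype(float) ** 2, r, mode='reflect')
    out = np.zeros(band.shape)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += pad[dy:dy + band.shape[0], dx:dx + band.shape[1]]
    return out

def fuse_lowband(lA, lB):
    # low-frequency coefficients: simple average
    return (lA + lB) / 2.0

def fuse_highband(hA, hB, r=1):
    # high frequencies: per coefficient, take the source with more region energy
    return np.where(region_energy(hA, r) >= region_energy(hB, r), hA, hB)
```

In a full pipeline these rules would be applied to every directional subband before the inverse transform reconstructs the fused image.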
Keywords: image fusion; Contourlet transform; wavelet transform. CLC: TP391; Document code: A; Article ID: 1009-3044(2008)34-1700-03
Multifocus Image Fusion Based on Contourlet Transform
DING Lan (College of Information Science & Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China)
Abstract: Due to the limited depth of focus of optical lenses, it is often difficult to get an image that contains all relevant objects in focus. Multifocus image fusion can solve this problem effectively. The contourlet transform has varying directions and multiple scales; when it is introduced into image fusion, the characteristics of the original images are captured better and more information for fusion is obtained. A new multifocus image fusion method based on the contourlet transform with a region-statistics fusion rule is proposed in this paper. The differently focused images are first decomposed using the contourlet transform; low bands are integrated using a weighted average, and high bands are integrated using a region-statistics rule. The fused image is then obtained by the inverse contourlet transform. Experimental results are shown and compared with a wavelet-based method; experiments show that this approach achieves better results.
Key words: image fusion; contourlet transform; wavelet transform
1 Introduction
For a visible-light imaging system, the focal range is limited, so it is difficult for all targets in a scene to be imaged sharply at the same time. Multi-focus image fusion addresses this: the same lens images two (or more) targets in the scene in two (or more) exposures, and the sharp parts of these exposures are fused into a new image for human observation or subsequent computer processing.
J. Wang, X. Liao, and Z. Yi (Eds.): ISNN 2005, LNCS 3497, pp. 753-758, 2005. © Springer-Verlag Berlin Heidelberg 2005
Multifocus Image Fusion Using Spatial Features and Support Vector Machine
Shutao Li (1,2) and Yaonan Wang (1)
1 College of Electrical and Information Engineering, Hunan University, Changsha, Hunan 410082, China
2 National Laboratory on Machine Perception, Peking University, Beijing 100871, China
Abstract. This paper describes an application of support vector machines to the pixel-level multifocus image fusion problem based on spatial features of image blocks. The algorithm first decomposes the source images into blocks. Given two of these blocks (one from each source image), an SVM is trained to determine which one is clearer. Fusion then proceeds by selecting the clearer block when constructing the final image. Experimental results show that the proposed method outperforms the discrete wavelet transform based approach, particularly when there is movement of objects or misregistration of the source images.
1 Introduction
Optical lenses often suffer from the problem of limited depth of field. Consequently, the image obtained will not be in focus everywhere. A possible way to alleviate this problem is image fusion [1], in which several pictures with different parts in focus are combined to form a single image. This fused image will then hopefully contain all relevant objects in focus. In recent years, various methods based on multiscale transforms have been proposed, including the Laplacian pyramid [2], the gradient pyramid [1], the ratio-of-low-pass pyramid [3] and the morphological pyramid [4]. More recently, the discrete wavelet transform (DWT) [5], [6] has also been used. In general, DWT is superior to the earlier pyramid-based methods [6].
While these methods often perform satisfactorily, their multiresolution decompositions, and consequently the fusion results, are not shift invariant because of an underlying downsampling process. When there is slight camera/object movement or misregistration of the source images, their performance quickly deteriorates. In this paper, we propose a pixel-level multifocus image fusion method based on spatial features of image blocks and support vector machines (SVM). The implementation is computationally simple and is robust to the shift problem. Experimental results show that it outperforms the DWT-based method. The rest of this paper is organized as follows: the proposed fusion scheme is described in Section 2, experiments are presented in Section 3, and the last section gives some concluding remarks.
2 SVM Based Multifocus Image Fusion
2.1 Feature Extraction
We extract two measures from each image block to represent its clarity, described in detail as follows.
2.1.1 Spatial Frequency (SF). Spatial frequency measures the overall activity level of an image [7]. For an M × N image F, with gray value F(m, n) at pixel position (m, n), the spatial frequency is defined as

SF = \sqrt{RF^2 + CF^2},  (1)

where RF and CF are the row and column frequencies

RF = \sqrt{\frac{1}{MN}\sum_{m=1}^{M}\sum_{n=2}^{N}\bigl(F(m,n)-F(m,n-1)\bigr)^2},\qquad
CF = \sqrt{\frac{1}{MN}\sum_{n=1}^{N}\sum_{m=2}^{M}\bigl(F(m,n)-F(m-1,n)\bigr)^2}.

2.1.2 Absolute Central Moment (ACM) [8].

ACM = \sum_{i=0}^{I-1}\lvert i-\mu\rvert\,p(i),  (2)

where \mu is the mean intensity value of the image, i is the gray level, and p(i) its probability.
2.1.3 Demonstration of the Effectiveness of the Measures. We experimentally demonstrate the effectiveness of the two focus features. An image block of size 64×64 (Fig. 1(a)) is extracted from the "Lena" image. Fig. 1(b) to Fig. 1(e) show versions degraded by blurring with a Gaussian filter of radius 0.5, 0.8, 1.0 and 1.5, respectively.
As can be seen from Table 1, as the image becomes more blurred, both features decrease monotonically. These results suggest that both features reflect image clarity.

Fig. 1. Original and blurred regions of an image block extracted from "Lena": (a) original region; (b)-(e) blurred with radius 0.5, 0.8, 1.0, 1.5.

Table 1. Feature values for the image regions in Fig. 1.

        Fig. 1(a)  Fig. 1(b)  Fig. 1(c)  Fig. 1(d)  Fig. 1(e)
SF      40.88      20.61      16.65      14.54      11.70
ACM     51.86      48.17      47.10      46.35      44.96

2.2 The Fusion Algorithm
Fig. 2 shows a schematic diagram of the proposed multifocus image fusion method. Here we consider the processing of just two source images, though the algorithm extends straightforwardly to more than two. In detail, the algorithm consists of the following steps:
1. Decompose the two source images A and B into blocks of size M×N. Denote the i-th image block pair by A_i and B_i respectively.
2. From each image block, extract the two features described above that reflect its clarity. Denote the feature vectors for A_i and B_i by (SF_{A_i}, ACM_{A_i}) and (SF_{B_i}, ACM_{B_i}) respectively.
3. Train an SVM to determine whether A_i or B_i is clearer. The difference vector (SF_{A_i} - SF_{B_i}, ACM_{A_i} - ACM_{B_i}) is used as input, and the output is labeled according to

target_i = 1 if A_i is clearer than B_i, and 0 otherwise.  (3)

4. Perform testing of the trained SVM on all image block pairs obtained in Step 1. The i-th block Z_i of the fused image is then constructed as

Z_i = A_i if out_i > 0.5, and B_i otherwise,  (4)

where out_i is the SVM output using the i-th image block pair as input.
5. Verify the fusion result obtained in Step 4. Specifically, if the SVM decides that a particular block is to come from A but the majority of its surrounding blocks come from B, this block will be switched to come from B.
In the implementation, a majority filter with a 3×3 window is used.
3 Experiments
The experiment is performed on the 256-level source images shown in Fig. 3(a) and (b), of size 512×512. The true gray values of a reference image are not available, so only a subjective visual comparison is intended here. Image blocks of size 32×32 are used. Two pairs of regions in Fig. 3(a),(b), each containing 18 image block pairs, are selected as the training set. In 9 of these block pairs the first image is clearer than the second, and the reverse is true for the remaining 9 pairs. The two spatial features are extracted and normalized to the range [0, 1] before being fed into the SVM. In the experiment, a linear kernel is used. For comparison purposes, we also perform fusion using the DWT. The wavelet basis "db8" with a decomposition level of 5 is used. Similar to [6], we employ a region-based activity measure for the decomposed wavelet coefficients, a maximum selection rule for coefficient combination, and a window-based consistency verification scheme.
Fig. 3. The "Pepsi" source images and fusion results: (a) focus on the Pepsi can; (b) focus on the testing card; (c) fused image using DWT (db8, level 5); (d) fused image using SVM. The training set is selected from the regions marked by rectangles in Fig. 3(a) and (b).
Fig. 4. Differences between the fused images in Fig. 3(c),(d) and the source images in Fig. 3(a),(b).
Fusion results using DWT and SVM are shown in Fig. 3(c),(d). Take the "Pepsi" images as an example.
Recall that the focus in Fig. 3(a) is on the Pepsi can while that in Fig. 3(b) is on the testing card. It can be seen from Fig. 3(d) that the fused image produced by the SVM is essentially a combination of the well-focused can and the well-focused board. In comparison, the DWT result shown in Fig. 3(c) is much inferior. Clearer comparisons of their performance can be made by examining the differences between the fused images and each source image (Fig. 4).
4 Conclusions
In this paper, we proposed a method for pixel-level multifocus image fusion using spatial features of image blocks and an SVM. Features indicating the clarity of an image block, namely spatial frequency and absolute central moment, are extracted and fed into the support vector machine, which then learns to determine which source image is clearer at each particular physical location. Experimental results show that this method outperforms the DWT-based approach, particularly when there is object movement or registration error in the source images.
Acknowledgements. This work is supported by the National Natural Science Foundation of China (No. 60402024).
References
1. Burt, P.J., Kolczynski, R.J.: Enhanced Image Capture through Fusion. In: Proc. of the 4th Inter. Conf. on Computer Vision, Berlin (1993) 173-182
2. Burt, P.J., Adelson, E.H.: The Laplacian Pyramid as a Compact Image Code. IEEE Trans. Comm., 31 (1983) 532-540
3. Toet, A., Ruyven, L.J., Valeton, J.M.: Merging Thermal and Visual Images by a Contrast Pyramid. Optic. Eng., 28 (1989) 789-792
4. Matsopoulos, G.K., Marshall, S., Brunt, J.N.H.: Multiresolution Morphological Fusion of MR and CT Images of the Human Brain. Proc. of IEE: Vision, Image and Signal, 141 (1994) 137-142
5. Li, H., Manjunath, B.S., Mitra, S.K.: Multisensor Image Fusion using the Wavelet Transform. Graph. Models Image Proc., 57 (1995) 235-245
6. Zhang, Z., Blum, R.S.: A Categorization of Multiscale-Decomposition-Based Image Fusion Schemes with a Performance Study for a Digital Camera Application. Proc. of the IEEE, 87 (1999) 1315-1325
7. Eskicioglu, A.M., Fisher, P.S.: Image Quality Measures and Their Performance. IEEE Trans. Comm., 43 (1995) 2959-2965
8. Shirvaikar, M.V.: An Optimal Measure for Camera Focus and Exposure. 36th IEEE Southeastern Symp. on Sys. Theory, Atlanta (2004) 472-475
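The two focus measures defined in Section 2.1, spatial frequency (Eq. 1) and absolute central moment (Eq. 2), can be computed directly in NumPy. This is a sketch, not the authors' code; note that `np.mean` normalizes by the number of differences rather than exactly MN, a small boundary deviation from the paper's formulas.

```python
import numpy as np

def spatial_frequency(F):
    # SF = sqrt(RF^2 + CF^2): RMS of horizontal and vertical gray-level differences
    F = F.astype(float)
    rf2 = np.mean((F[:, 1:] - F[:, :-1]) ** 2)  # row frequency squared
    cf2 = np.mean((F[1:, :] - F[:-1, :]) ** 2)  # column frequency squared
    return np.sqrt(rf2 + cf2)

def absolute_central_moment(F, levels=256):
    # ACM = sum_i |i - mu| p(i) over the gray-level histogram
    hist = np.bincount(F.astype(np.uint8).ravel(), minlength=levels)
    p = hist / hist.sum()
    mu = float(F.mean())
    i = np.arange(levels)
    return float(np.sum(np.abs(i - mu) * p))
```

Per Table 1, a blurred block should score lower on both measures than its sharp original; the SVM then operates on differences of these two values between block pairs.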
An NSCT Image Fusion Method Combining VAM and Fuzzy Logic
Fig. 1. NSCT decomposition framework.
Fig. 2. Saliency-map computation model: feature-channel selection, center-surround differencing and averaging into feature maps, cross-scale combination and averaging into channel saliency maps, linear combination into the total saliency map, and salient segmented regions (SSR).
2.2 Fuzzy logic
In practical decision problems, relationships are rarely strictly either/or; there is usually an overlapping region between the two, that is, a region of uncertainty lying between them, and fuzzy logic is the theory for studying such uncertain problems [10]. Before using fuzzy logic to solve a problem, one must define the universe of discourse U, the set of all objects under discussion; for an element of the universe, its mapping is defined as:
…transforms such as the discrete wavelet transform (DWT) [2] and the Contourlet transform (CT). However, these multi-scale transforms cannot guarantee shift invariance or anisotropy during image fusion, and inevitably introduce ringing and jitter in uniform local regions near sharp edges of the fused image [3]. The nonsubsampled Contourlet transform (NSCT) satisfies the anisotropic scaling relation, has good directionality, and can accurately capture edge contours and texture details in images, making it suitable for representing images with rich detail and directional…
Computer Engineering and Applications, 2011, 47(12): 173
ZHENG Yijun, REN Xianyi, LIU Xiujian, HU Tao, ZHANG Jihong
Image Fusion Combining the Shearlet Transform and Regional Characteristics
Zheng Wei, Sun Xueqing, Li Zhe
Abstract: In order to improve the performance of multi-modality medical image fusion and multi-focus image fusion, and since the shearlet transform can capture the detail information of images, an image fusion algorithm based on the shearlet transform is proposed. First, the shearlet transform decomposes the two precisely registered source images into low-frequency subband coefficients and high-frequency subband coefficients at different scales and directions. The low-frequency coefficients are fused by an improved weighted rule that uses the average gradient to compute the weights, reducing contour blur in the fused image; the high-frequency coefficients are fused by a rule combining region variance with region energy, preserving rich detail information. Finally, the inverse shearlet transform yields the fused image. The results show that the algorithm is superior to other fusion algorithms in both subjective visual effect and objective evaluation.
Journal: Laser Technology, 2015, (1): 50-56. Keywords: image processing; image fusion; shearlet transform; weighted fusion; region variance; region energy. Affiliations: College of Electronic and Information Engineering, Hebei University, Baoding 071002, China; Hebei Key Laboratory of Digital Medical Engineering, Hebei University, Baoding 071002, China.
Image fusion is the process of combining two or more images into a new image. It does not mean simply adding multiple images together; rather, specific rules are used to integrate the useful information of each image to obtain a new image that is more accurate and more complete.
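The low-frequency rule described above, weights computed from the average gradient of each low-frequency subband, can be sketched as follows. This is illustrative, not the authors' exact code, and the function names are assumptions.

```python
import numpy as np

def average_gradient(img):
    # mean gradient magnitude over the common interior region
    img = img.astype(float)
    gx = img[:, 1:] - img[:, :-1]
    gy = img[1:, :] - img[:-1, :]
    g = np.sqrt((gx[:-1, :] ** 2 + gy[:, :-1] ** 2) / 2.0)
    return float(g.mean())

def fuse_low_weighted(lA, lB):
    # weight each low-frequency subband by its relative average gradient
    gA, gB = average_gradient(lA), average_gradient(lB)
    if gA + gB == 0.0:
        return (lA + lB) / 2.0
    wA = gA / (gA + gB)
    return wA * lA + (1.0 - wA) * lB
```

The sharper subband (larger average gradient) thus dominates the fused low band, which is what reduces contour blur relative to a plain average.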
A Multi-Focus Image Fusion Algorithm Based on Local Energy in the DCT Domain
Liang Cong, Tang Zhenhua
Abstract: In order to apply image information fusion in resource-constrained applications such as wireless visual sensor networks, a multi-focus image fusion algorithm based on local energy in the discrete cosine transform (DCT) domain is presented. Instead of the commonly used fusion rules based on contrast and variance, the algorithm computes the local energy of the image in the DCT domain and selects the blocks with larger local energy for fusion. Experimental results show that the algorithm achieves a good subjective visual fusion effect and outperforms common wavelet-based fusion algorithms on objective indexes such as mutual information, peak signal-to-noise ratio, and spatial frequency. Moreover, its computation is about 41%-49% lower than that of the DCT-domain variance-based fusion algorithm, making it suitable for resource-constrained or real-time applications such as wireless visual sensor networks.
Journal: Computer Applications and Software, 2016, 33(5): 235-238. Keywords: image fusion; discrete cosine transform; local energy; visual sensor networks. Affiliation: School of Computer and Electronic Information, Guangxi University, Nanning, Guangxi 530004, China.
Image fusion exploits the redundancy and complementarity of the information in the images to be fused, combining two or more images into a new image that contains more information than any single source image.
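The block-selection rule of this algorithm (pick, per block, the source with the larger AC energy in the DCT domain) can be sketched with an orthonormal 8×8 DCT-II matrix built by hand. This is illustrative; a real deployment would use a fast DCT, and the function names are assumptions.

```python
import numpy as np

def dct_matrix(n=8):
    # orthonormal DCT-II matrix: row j, column m -> sqrt(2/n) cos(pi(2m+1)j/2n)
    j = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * j / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def ac_energy(block, C):
    d = C @ block @ C.T                           # 2-D DCT-II of the block
    return float(np.sum(d * d) - d[0, 0] ** 2)    # drop the DC term

def fuse_dct_blocks(A, B, bs=8):
    # per block, keep the source whose DCT-domain AC energy is larger
    C = dct_matrix(bs)
    F = np.empty(A.shape, dtype=float)
    for y in range(0, A.shape[0], bs):
        for x in range(0, A.shape[1], bs):
            a = A[y:y+bs, x:x+bs].astype(float)
            b = B[y:y+bs, x:x+bs].astype(float)
            F[y:y+bs, x:x+bs] = a if ac_energy(a, C) >= ac_energy(b, C) else b
    return F
```

Because the DCT is orthonormal, AC energy equals the block's total squared intensity minus its DC contribution, so a flat (defocused) block scores near zero while a textured (in-focus) block scores high.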
Multimodal Medical Image Fusion Based on a Structural-Functional Cross Neural Network
Optics and Precision Engineering, Vol. 32, No. 2, Jan. 2024
Multimodal Medical Image Fusion Based on a Structural-Functional Cross Neural Network
DI Jing, GUO Wenqing*, REN Li, YANG Yan, LIAN Jing (School of Electronic and Information Engineering, Lanzhou Jiaotong University, Lanzhou 730070, China; * Corresponding author, E-mail: 344385945@)
Abstract: To solve the problems of texture-detail blurring and low contrast in multimodal medical image fusion, a multimodal medical image fusion method with a structural-functional cross neural network is proposed. First, a structural-functional cross neural network model is designed based on the structural and functional information of medical images; within each structural-functional cross module, a residual network model is also incorporated. This not only effectively extracts the structural and functional information of anatomical and functional medical images, but also facilitates interaction between the two kinds of information, thereby capturing texture details from multi-source medical images and producing fused images that align closely with human visual characteristics. Second, a new attention mechanism module is constructed using an effective channel attention and spatial attention model (ECA-S); by continuously adjusting the weights of structural and functional information during fusion, it improves the contrast and contour information of the fused image and makes its color more natural and realistic. Finally, a decomposition process from the fused image back to the source images is designed; since the quality of the decomposed images depends directly on the fusion result, this process makes the fused image retain more texture detail and contour information of the source images. Compared with seven high-level medical image fusion methods proposed in recent years, the proposed method improves the objective metrics AG, EN, SF, MI, Q^{AB/F}, and CC by 22.87%, 19.64%, 23.02%, 12.70%, 6.79%, and 30.35% on average, respectively, showing that it obtains fusion results with clearer texture details and better contrast, outperforming the other compared methods in both subjective vision and objective metrics.
Keywords: multimodal medical image fusion; structural and functional information cross-interacting network; attention mechanism; decomposition network. CLC: TP391; Document code: A; doi: 10.37188/OPE.20243202.0252; Article ID: 1004-924X(2024)02-0252-16
Received 2023-05-05; revised 2023-07-13. Supported by the Science and Technology Program of Gansu Province (No. 22JR5RA360), the National Natural Science Foundation of China (No. 62061023), the Outstanding Youth Fund of Gansu Province (No. 21JR7RA345), and the Education Science and Technology Innovation Industry Support Project of Gansu Province (No. 2021CYZC-04).
1 Introduction
Modern medical imaging devices provide images of lesions in different parts of the human body and assist doctors in rapid diagnosis and treatment of disease.
Multi-Focus Image Fusion Based on a Multi-Scale Feature Fusion Network
Lü Jingjing, Zhang Rongfu (School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China; Source: Optical Instruments, 2021, No. 5)
Abstract: Multi-focus image fusion breaks through the depth-of-field limitation of traditional cameras by combining multiple images with different focal points into one all-in-focus image, obtaining more comprehensive information. Previous spatial-domain and transform-domain methods require manual activity-level measurement and hand-designed fusion rules, which is complicated. Compared with a conventional neural network, the proposed method adds a part that extracts shallow feature information, improving classification accuracy. The source images are input into the trained multi-scale feature network to obtain an initial focus map; the focus map is then post-processed, and finally a pixel-by-pixel weighted-average rule produces the all-in-focus fused image. Experimental results show that the all-in-focus images fused by this method have high definition, rich detail, and low distortion, and both the subjective and objective evaluation results are better than those of the compared methods.
Keywords: multi-focus image fusion; convolutional neural network; multi-scale features. CLC: TP 183
Introduction
Image fusion combines the important information of multiple images into one image with richer detail than any single source image [1].
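The final step described above, pixel-by-pixel weighted averaging driven by a post-processed focus map, reduces to a few lines. The network that produces the map is omitted here; `w` stands for any focus map in [0, 1] (assumption: 1 means source A is in focus at that pixel), and the box-filter smoothing is a simple stand-in for the paper's post-processing.

```python
import numpy as np

def smooth_map(w, r=1):
    # box-filter smoothing of the focus map over a (2r+1) x (2r+1) window
    pad = np.pad(w, r, mode='edge')
    out = np.zeros_like(w, dtype=float)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += pad[dy:dy + w.shape[0], dx:dx + w.shape[1]]
    return out / (2 * r + 1) ** 2

def fuse_with_focus_map(A, B, w):
    # w[i, j] ~ degree to which source A is in focus at pixel (i, j)
    return w * A.astype(float) + (1.0 - w) * B.astype(float)
```

Smoothing before weighting suppresses isolated misclassified pixels in the focus map, analogous to the consistency-verification steps in the block-based methods above.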
A New Image Fusion Method Combining NSCT and PCNN
Yang Dan, He Jiannong (College of Mathematics and Computer Science, Fuzhou University, Fuzhou 350116, China)
Abstract: Considering that the wavelet transform has some limitations, a novel image fusion method combining the nonsubsampled contourlet transform (NSCT) and pulse-coupled neural networks (PCNN) is proposed. First, the registered source images are decomposed at multiple scales and directions by the NSCT. The low-frequency part is fused by a method combining improved edge energy with spatial frequency. The high-frequency part uses a simplified PCNN mathematical model whose linking strength is given by an improved sum of Laplacian energy, and the high-frequency fusion coefficients are selected by combining the firing counts of the firing maps with their standard deviation. Finally, the fused image is obtained by the inverse NSCT. Experimental analysis shows that, compared with several other image fusion methods, the new method yields higher-quality fused images.
Journal: Microcomputer & Its Applications, 2016, 35(23): 46-48, 55. Keywords: image fusion; nonsubsampled contourlet transform; pulse-coupled neural network; edge energy; sum of Laplacian energy.
Image fusion combines the important, mutually complementary information of images acquired by different sensors into a new image that contains more comprehensive information and describes the real scene more accurately [1].
Multi-Focus Image Fusion Based on NSCT (Undergraduate Thesis)
Undergraduate Thesis. Title: Multi-Focus Image Fusion Based on NSCT. School of Software, major: Software Engineering. External advisor: Shen Guiming.
Abstract: Multi-focus image fusion is the combination of two or more differently focused images into a new image by using a certain algorithm. The ability to extract feature information is key to multi-focus image fusion. The Contourlet transform was recently pioneered by Minh N. Do and Martin Vetterli; compared with the wavelet transform, it is a "true" two-dimensional transform. Contourlet provides a different, flexible number of directions at each scale and can capture the intrinsic geometric structure of an image, which makes it possible to better extract feature information such as edges and texture. However, due to the downsampling and upsampling present in both the Laplacian pyramid and the directional filter banks (DFB), the original Contourlet transform is not shift invariant, which causes pseudo-Gibbs phenomena around singularities. The nonsubsampled Contourlet transform (NSCT) is fully shift invariant and performs better than Contourlet in multi-focus image fusion. In this thesis, the principles and properties of multi-scale analysis methods such as the wavelet transform (WT) and the Contourlet transform (CT) are analyzed, and the NSCT and its application to multi-focus image fusion are studied. In addition, several classical transform-domain feature-extraction methods for multi-focus fusion are compared, and on this basis a fusion method based on neighborhood spatial frequency (SF) in the NSCT domain, NSCT-NSF for short, is proposed. In contrast with the traditional spatial-frequency method, the source image is first decomposed at multiple scales and directions, features are then extracted in the NSCT transform domain rather than the spatial domain, and finally the transform coefficients with larger feature values are selected to reconstruct the fused image. Considering the correlation between neighboring coefficients in the NSCT high-frequency subbands, the spatial-frequency algorithm in the NSCT domain is also improved. Experimental results demonstrate that the proposed algorithms outperform typical wavelet-based and pixel-maximum NSCT-based fusion algorithms in terms of objective criteria and visual appearance.
Keywords: image fusion; Contourlet transform; NSCT; spatial frequency
Contents: Abstract (1); Chapter 1 Introduction (5); Chapter 2 Wavelet Transform (6): development, principle, 2-D implementation framework, limitations (9); Chapter 3 Contourlet Transform (9): motivation, principle (Laplacian pyramid decomposition, directional filter banks (DFB), multi-scale multi-direction pyramidal DFB), nonsubsampled Contourlet transform (NSCT) (17): principle, neighborhood and sibling information in the NSCT domain; Chapter 4 NSCT-Based Multi-Focus Image Fusion (19): wavelet-based multi-focus fusion, NSCT fusion rules (framework, rules, feature extraction in the NSCT domain, choice of feature-extraction method), the NSCT-NSF fusion algorithm (25): NSCT-NSF fusion rules [22], simulation and evaluation; Chapter 5 Summary and Outlook (32); Acknowledgements (34); References (35); Appendix 1 discrete-wavelet fusion code (37); Appendix 2 "à trous" wavelet fusion code (37); Appendix 3 NSCT fusion code (39).
Chapter 1 Introduction
Multi-focus image fusion refers to processing differently focused images of the same target obtained through different sensors to form an image that meets specific needs, thereby improving the ability to analyze and extract image information.
Adaptive Multi-Focus Image Fusion Algorithm Based on the NSCT
Adaptive Multi-Focus Image Fusion Algorithm Based on the NSCT
Shi Wenjuan

Abstract: Because the nonsubsampled contourlet transform (NSCT) offers flexible multi-scale, multi-directional, and shift-invariant decomposition, an adaptive multi-focus image fusion algorithm based on the NSCT and regional features is proposed to reduce the blurring of fused images. The algorithm exploits the characteristics of the NSCT: each source image is decomposed by the NSCT into sub-band information along different directions; the low-frequency sub-band coefficients are then selected based on the local mean and local variance, while the bandpass directional sub-band coefficients are selected using the local directional contrast as a measuring operator; finally, the fused image is obtained by the inverse transform. Experimental results indicate that this algorithm outperforms the traditional weighted-average, wavelet-transform, and NSCT methods.

Journal: Microcomputer & Its Applications, 2012, 31(19), pp. 45-47
Keywords: multi-focus image fusion; nonsubsampled contourlet transform; regional features
Author: Shi Wenjuan (School of Physical Science and Electronic Technology, Yancheng Teachers University, Yancheng, Jiangsu 224002, China)
Language: Chinese; CLC number: TN941

Image fusion merges two or more images containing different information into a single image that maximizes the information content, so as to obtain more information.
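As an illustrative aside, a region-feature selection rule of the kind described above can be sketched in a few lines. This is not the paper's exact rule: the window size (3x3) and the pure max-variance decision are assumptions for illustration, whereas the paper combines local mean and local variance for the low-frequency sub-band.

```python
# Illustrative sketch only: selecting fused low-frequency coefficients by a
# local-variance measure. The 3x3 window and the max-variance rule are
# simplifying assumptions, not the exact rule of the paper.

def local_variance(img, i, j, r=1):
    """Variance of the (2r+1)x(2r+1) window around (i, j), clipped at borders."""
    vals = []
    for m in range(max(0, i - r), min(len(img), i + r + 1)):
        for n in range(max(0, j - r), min(len(img[0]), j + r + 1)):
            vals.append(img[m][n])
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def fuse_lowpass(a, b, r=1):
    """Pick, per pixel, the coefficient whose neighborhood has larger variance."""
    rows, cols = len(a), len(a[0])
    return [[a[i][j] if local_variance(a, i, j, r) >= local_variance(b, i, j, r)
             else b[i][j]
             for j in range(cols)] for i in range(rows)]

flat  = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]   # defocused region: no local detail
edged = [[0, 9, 0], [0, 9, 0], [0, 9, 0]]   # focused region: strong local detail
print(fuse_lowpass(flat, edged))            # picks the detailed image everywhere
```

The intuition is the same as in the abstract above: a focused region has larger local activity, so its coefficients survive into the fused sub-band.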
Comparison of Intuitionistic Fuzzification Methods for Multi-Band Image Fusion
Comparison of Intuitionistic Fuzzification Methods for Multi-Band Image Fusion
Zhao Jingchao, Lin Suzhen, Li Dawei, Wang Lifang, Yang Xiaoli

Abstract: To select a suitable intuitionistic fuzzification method for handling uncertainty in multi-band image fusion, five commonly used intuitionistic fuzzy set methods are compared in this setting. First, the multi-band images are intuitionistically fuzzified, and the membership images are defuzzified to obtain intuitionistic fuzzy images. The intuitionistic fuzzy images are then decomposed with the nonsubsampled contourlet transform (NSCT); the low-frequency images are intuitionistically fuzzified and the resulting membership degrees are used as weights for weighted fusion, while the high-frequency sub-bands are fused with a maximum-selection rule. Finally, the fused image is obtained by the inverse transform. The fusion results are compared by subjective visual inspection and an objective evaluation-index system to identify the advantages of the better intuitionistic fuzzy set methods. Experimental results show that, compared with the Sugeno, Yogita, Yager, and Chaira fuzzification methods, the Bala intuitionistic fuzzification method effectively improves the brightness and contrast of the fusion result, yields clear edges and distinct texture features, and achieves better visual quality and objective evaluation scores.

Journal: Infrared Technology, 2018, 40(9), pp. 881-886
Keywords: image fusion; intuitionistic fuzzy sets; multi-band images; method comparison; nonsubsampled contourlet transform
Authors: Zhao Jingchao, Lin Suzhen, Li Dawei, Wang Lifang, Yang Xiaoli (School of Data Science, North University of China, Taiyuan, Shanxi 030051, China)
Language: Chinese; CLC number: TP391

Multi-band image fusion can reduce data redundancy while exploiting the information of different spectral bands to obtain a more complete and accurate understanding of a scene; its premise is that the detected images themselves have good quality and complementarity [1]. In practice, however, unpredictable factors such as strong noise, uneven illumination, and inherent device defects are unavoidable in complex imaging scenarios, so the detected images often suffer from blurred edges, overly dark or bright targets, and unclear texture, which in turn makes the detection results fuzzy, imprecise, and uncertain [2-3].
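To make the notion of intuitionistic fuzzification concrete, here is a minimal sketch of one standard generator family (a Sugeno-type negation), which is representative of the kind of method compared in such studies. The parameter lambda = 0.5 and the min-max normalization are assumptions for illustration, not the settings of this particular paper.

```python
# Illustrative sketch: a Sugeno-type intuitionistic fuzzification of pixel
# intensities into (membership, non-membership, hesitation) triples.
# lambda = 0.5 and min-max normalization are assumed, not taken from the paper.

def intuitionistic_fuzzify(pixels, lam=0.5):
    """Map intensities to (mu, nu, pi) with mu + nu + pi = 1."""
    lo, hi = min(pixels), max(pixels)
    triples = []
    for p in pixels:
        mu = (p - lo) / (hi - lo) if hi > lo else 0.0   # membership degree
        nu = (1.0 - mu) / (1.0 + lam * mu)              # Sugeno negation
        pi = 1.0 - mu - nu                              # hesitation (uncertainty)
        triples.append((mu, nu, pi))
    return triples

for mu, nu, pi in intuitionistic_fuzzify([0, 128, 255]):
    assert 0.0 <= pi < 1.0 and abs(mu + nu + pi - 1.0) < 1e-12
```

The hesitation degree pi is exactly the extra "uncertainty budget" that distinguishes intuitionistic fuzzy sets from ordinary fuzzy sets, and it is what the membership-weighted low-frequency fusion described above exploits.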
Super-Resolution Image Reconstruction Based on the NSCT and Taylor Series
Super-Resolution Image Reconstruction Based on the NSCT and Taylor Series
Liu Tao, Zhang Dengfu

Abstract: By combining the multi-scale, multi-directional, anisotropic, and shift-invariant properties of the nonsubsampled contourlet transform (NSCT) with the efficient approximation capability of the Taylor series, this paper proposes a super-resolution image reconstruction algorithm that replaces bilinear interpolation with Taylor-series interpolation. Experimental results show that the algorithm recovers the detailed information and texture features of the image well and effectively resists Gaussian noise.

Journal: Computer Engineering, 2011, 37(10), pp. 189-191
Keywords: Taylor series; nonsubsampled contourlet transform; multi-scale; shift invariance; super-resolution reconstruction
Authors: Liu Tao, Zhang Dengfu (Engineering College, Air Force Engineering University, Xi'an 710038, China)
Language: Chinese; CLC number: TP391

1  Overview
As an important information carrier, digital images have become an indispensable part of daily life.
Image Fusion Report
2. Spatial frequency
• Spatial frequency originates from the human visual system and indicates the overall activity level of an image. For an M×N image block F, with row frequency RF and column frequency CF, the spatial frequency is defined as

  RF = \sqrt{ \frac{1}{MN} \sum_{m=1}^{M} \sum_{n=2}^{N} [F(m,n) - F(m,n-1)]^2 }

  CF = \sqrt{ \frac{1}{MN} \sum_{m=2}^{M} \sum_{n=1}^{N} [F(m,n) - F(m-1,n)]^2 }

  SF = \sqrt{ RF^2 + CF^2 }

where F(m,n) is the gray value of image F at position (m,n).
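The spatial-frequency measure above can be sketched directly in pure Python (the tiny 2x2 test image is only for illustration):

```python
# A sketch of the spatial-frequency (SF) measure: RF over horizontal
# differences, CF over vertical differences, SF = sqrt(RF^2 + CF^2).
from math import sqrt

def spatial_frequency(F):
    """Return (RF, CF, SF) for a 2-D gray image given as a list of rows."""
    M, N = len(F), len(F[0])
    rf2 = sum((F[m][n] - F[m][n - 1]) ** 2
              for m in range(M) for n in range(1, N)) / (M * N)
    cf2 = sum((F[m][n] - F[m - 1][n]) ** 2
              for m in range(1, M) for n in range(N)) / (M * N)
    return sqrt(rf2), sqrt(cf2), sqrt(rf2 + cf2)

rf, cf, sf = spatial_frequency([[1, 2], [3, 4]])
print(round(sf, 4))  # sqrt(0.5 + 2.0) ≈ 1.5811
```

A sharper (better-focused) block has larger gray-level differences between neighboring pixels and therefore a larger SF, which is why SF works as a focus measure in fusion rules.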
• For example, given multiple registered images of the same scene with different focus settings, where some objects are sharp in one image but blurred in the others, image fusion can produce a new image that contains more information than any single source image.
Figure 1: left-focused image
Figure 2: right-focused image
Figure 3: fused image
Research Status
• Image fusion has found more and more applications in recent years. In medicine, the registration and fusion of medical images provide physicians with richer and more reliable image evidence, so that they can use this information intuitively and, combined with clinical experience, make accurate diagnoses. With the development of remote sensing, image data with high spatial, spectral, and temporal resolution have become available; because different types of multispectral data overlap and complement each other, fusing remote-sensing images has recently been applied to land-use monitoring, flood prevention and disaster relief, and military reconnaissance.
Multifocus image fusion using region segmentation and spatial frequency
xoulongxia  Major: Computer Science and Technology
1. Research Background  2. Research Status
3. Research Content
4. Understanding of the Paper  5. Ideas
Research Background

• Definition
Image fusion processes the redundant and complementary information among source images so that the resulting fused image is more reliable and describes a scene more objectively, accurately, and completely. It is better suited to human and machine visual perception and to deeper image-processing tasks such as image segmentation, feature extraction, and target recognition.
Image Fusion Method Combining the NSCT with Guided Filtering
Image Fusion Method Combining the NSCT with Guided Filtering
Gan Ling, Zhang Qianwen

Abstract: To mitigate the serious loss of detail in infrared images, blurred edges, and the insufficient contrast of visible images in visible-infrared image fusion, an image fusion method combining the nonsubsampled contourlet transform (NSCT) with guided filtering is proposed. First, a fuzzy-logic algorithm enhances the contrast of the visible image to highlight its effective information. The enhanced visible image and the infrared image are then decomposed by the NSCT into low-frequency and high-frequency sub-bands, and an improved guided filter is applied to the high-frequency sub-bands of the infrared image to enhance edges and other details. Next, an average-gradient strategy and a fuzzy-logic strategy are used to fuse the high- and low-frequency sub-bands, respectively. Finally, the fused image is obtained by the inverse NSCT. Experimental results on different data sets show that the method outperforms several other methods on evaluation indices such as entropy, standard deviation, and mutual information, verifying its effectiveness and superiority.

Journal: Infrared Technology, 2018, 40(5), pp. 444-448, 454
Keywords: nonsubsampled contourlet transform; guided filtering; fuzzy-logic algorithm; average gradient
Authors: Gan Ling, Zhang Qianwen (College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China; Chongqing Key Laboratory of Computational Intelligence, Chongqing 400065, China)
Language: Chinese; CLC number: TP391

0  Introduction
Image fusion technology can effectively recombine the characteristic information of images and merge them into a high-quality image.
A Visible-Light Imaging System That Sees Through Ground Glass
A Visible-Light Imaging System That Sees Through Ground Glass
Li Chengyong, Ying Chunxia, Hu Jingjing

Abstract: Obtaining fine image information of a target through a complex medium is a major difficulty in optoelectronic image acquisition and processing. Using a CMOS image sensor, a CMOS imaging system with back-end readout and processing circuits was designed to image a target object through ground glass and transfer the acquired image data to a computer for processing. The system is built on the principle of a camera's optical imaging system, uses a general-purpose CMOS image-sensor chip for the circuit design, and employs infrared laser illumination to assist image acquisition. The same target was imaged at different distances, from far to near, and the captured images were optimized with an iterative image-enhancement algorithm, which compensates for the inhomogeneity of the ground glass and greatly improves the accuracy of light-source reconstruction. The resulting visible-light images have clear contours; compared with a typical CCD imaging system, the recognition rate exceeds 95%, far higher than that of ordinary imaging systems, and the imaging performance is good.

Journal: Journal of Applied Optics, 2019, 40(3), pp. 416-421
Keywords: visible light; CMOS image sensor; optoelectronic imaging; image processing
Authors: Li Chengyong, Ying Chunxia, Hu Jingjing (School of Electronic Information, Chongqing Institute of Engineering, Chongqing 400056, China)
Language: Chinese; CLC number: TN29

Introduction
Ground glass, also called frosted or anti-glare glass, is a semi-transparent glass whose surface has been roughened by grinding with emery or by chemical treatment. Because the surface is not a smooth plane, light passing through it is scattered in all directions; the light reaching the retina no longer forms a complete image, so the scene behind the glass cannot be seen.
Ground glass affects imaging differently in different optical wavebands; the common imaging bands are visible, ultraviolet, and infrared light.
Ultraviolet light has a short wavelength, is strongly absorbed, and has a short penetration depth, so it is often used for transmission imaging of materials such as skin.
Because ground glass contains small amounts of impurities such as water, ultraviolet light is absorbed more strongly than visible light when passing through it. Visible light, with its high transmittance, is therefore chosen for imaging through ground glass: for visible light the glass surface is comparatively smooth, absorption is weak, and the refractive index is low.
Imaging through ground glass captures video images with an optoelectronic image sensor and acquires image information, recovering information invisible to the human eye so that more latent information can be captured.
ORIGINAL PAPER

Multi-Focus Image Fusion Based on NSCT and NSST

Altan-Ulzii Moonon, Jianwen Hu

Received: 2 February 2015 / Published online: 19 February 2015
© Springer Science+Business Media New York 2015
Sens Imaging (2015) 16:4, DOI 10.1007/s11220-015-0106-3

Abstract  In this paper, a multi-focus image fusion algorithm based on the nonsubsampled contourlet transform (NSCT) and the nonsubsampled shearlet transform (NSST) is proposed. The source images are first decomposed by the NSCT and NSST into low frequency coefficients and high frequency coefficients. Then, the average method is used to fuse the low frequency coefficients of the NSCT. To obtain a more accurate salience measurement, the high frequency coefficients of the NSST and NSCT are combined to measure salience. The high frequency coefficients of the NSCT with larger salience are selected as the fused high frequency coefficients. Finally, the fused image is reconstructed by the inverse NSCT. We adopt three metrics (Q^{AB/F}, Q_e and Q_w) to evaluate the quality of the fused images. The experimental results demonstrate that the proposed method outperforms other methods and retains highly detailed edges and contours.

Keywords  Multi-focus image fusion · Nonsubsampled contourlet transform · Nonsubsampled shearlet transform · Low frequency coefficient · High frequency coefficient

This article is part of the Topical Collection on Hybrid Imaging and Image Fusion.

A.-U. Moonon, College of Electrical and Information Engineering, Hunan University, Changsha 410082, People's Republic of China (e-mail: altanolzii_m@)
J. Hu (corresponding author), College of Electrical and Information Engineering, Changsha University of Science and Technology, Changsha 410114, People's Republic of China (e-mail: hujianwen1@)

1 Introduction

In optic systems, a lens can exactly focus on only one given distance at a time. Figure 1 illustrates an optic lens model. The optic lens receives the light and projects it onto the sensor, and the sensor measures the light. Because the light from object "B" converges at the same
location on the sensor, object "B" appears sharp and "in focus" in the image, while objects "A" and "C" are out of focus. This depends on the depth of field (DOF), the region within which objects appear sharp in the image. The DOF is determined by technical parameters such as aperture, focal length and circle of confusion. A larger aperture and a longer focal length (object close to the lens) produce a larger circle of confusion; a larger circle of confusion, i.e. a shallower DOF, means that only one object is in focus in the image. Therefore, it is impossible for all objects to be in focus when an image is captured at one distance. One way to solve this problem is multi-focus image fusion: the process of combining two or more images of a scene with different focus points into a single, highly informative and everywhere-focused image that is more suitable for computer processing and visual perception. Multi-focus image fusion is widely used in medical diagnostics, remote sensing and military applications.

In recent years the commonly used image fusion algorithms have been based on multi-scale geometric analysis (MGA), such as the contourlet and shearlet [1-3]. MGA methods can capture image edge information more accurately and provide higher directional sensitivity than wavelets [4-7]. Do and Vetterli proposed the contourlet transform (CT) in 2002, which represents image salient features more effectively [1]. In 2006, the NSCT introduced by Cunha et al. [2] discarded the downsampling steps in the decomposition stage of the contourlet transform; the NSCT is a shift-invariant version of the contourlet transform. Shift invariance is very important for image fusion, so the NSCT is more suitable for image fusion.

Fig. 1  Optic lens model

The shearlet transform designed by Guo and Labate is another new MGA method [3]. It builds a two-dimensional affine system that exhibits the geometric and mathematical properties of
images, e.g. directionality, elongated shapes, scales and oscillation [3]. Miao et al. suggested that the shearlet transform achieves good fusion performance, but its lack of shift invariance cannot be ignored [8]. Easley et al. proposed the nonsubsampled shearlet transform, which combines the nonsubsampled Laplacian pyramid transform with several different shearing filters [9]; in other words, the NSST is a shift-invariant version of the shearlet transform. Considering the advantages of the NSST and NSCT, we propose a new image fusion method that combines the NSCT and NSST in this paper.

The remainder of the paper is structured as follows. In Sect. 2, we briefly review the NSCT and NSST. The proposed algorithm is presented in Sect. 3. The experimental results and analysis are given in Sect. 4, and the paper is concluded in Sect. 5.

2 NSCT and NSST

The NSCT and NSST are two new multi-scale geometric analysis methods. In this section, we briefly review both; each decomposes a source image into one low frequency sub-band and several high frequency sub-bands with different directions.

2.1 Nonsubsampled Contourlet Transform

The NSCT is a shift-invariant version of the CT, built from the nonsubsampled pyramid (NSP) and the nonsubsampled directional filter bank (NSDFB). The NSP provides the multi-scale property of the NSCT and is obtained from a shift-invariant filtering structure. The sub-band decomposition of the NSP is similar to the Laplacian pyramid, achieved using two-channel nonsubsampled 2-D filter banks; the NSP is developed from the à trous algorithm. The tree-structured, shift-invariant directional expansion of the NSCT is obtained from the nonsubsampled directional filter bank (NSDFB), which is constructed by eliminating the downsamplers and upsamplers in the directional filter bank (DFB). The DFB is constructed by combining critically sampled fan filter banks and resampling operations [2]. All filter banks in the NSDFB tree structure are obtained from a single
nonsubsampled filter bank (NSFB) with a fan filter. The NSDFB decomposes each high frequency sub-band into several directional sub-bands. The structure of the NSCT is illustrated in Fig. 2a, and Fig. 2b illustrates the corresponding frequency division, where the number of directions increases with frequency [2]. The NSCT is flexible in that it allows any number 2^l of directions at each scale, where l is a positive integer. If the number of directions in the NSDFB expansion is doubled at every other scale, the NSCT satisfies the anisotropic scaling law, a key property in establishing the nonlinear approximation behaviour of the expansion. The NSCT has a redundancy given by one low frequency sub-band and \sum_{j=1}^{J} 2^{l_j} high frequency sub-bands, where l_j denotes the number of NSDFB levels at the j-th scale.

2.2 Nonsubsampled Shearlet Transform

For a continuous wavelet \psi \in L^2(\mathbb{R}^2), consider the two-dimensional affine system

  \{ \psi_{ast}(x) = |\det M_{as}|^{-1/2} \, \psi(M_{as}^{-1} x - t) : t \in \mathbb{R}^2, \; M_{as} \in \Gamma \}   (1)

where \Gamma is the two-parameter dilation group

  \Gamma = \left\{ M_{as} = \begin{pmatrix} a & \sqrt{a}\, s \\ 0 & \sqrt{a} \end{pmatrix} : (a, s) \in \mathbb{R}^{+} \times \mathbb{R} \right\}.   (2)

For any \xi = (\xi_1, \xi_2) \in \hat{\mathbb{R}}^2 with \xi_1 \neq 0, \psi satisfies

  \hat{\psi}(\xi) = \hat{\psi}(\xi_1, \xi_2) = \hat{\psi}_1(\xi_1) \, \hat{\psi}_2(\xi_2 / \xi_1)   (3)

where \hat{\psi}_1, \hat{\psi}_2 \in C^{\infty}(\mathbb{R}) are continuous wavelets. With \psi as in Eq. (3), for a \in \mathbb{R}^{+}, s \in \mathbb{R} and t \in \mathbb{R}^2,

  Sf(a, s, t) = \langle f, \psi_{ast} \rangle   (4)

is called the continuous shearlet transform of f \in L^2(\mathbb{R}^2) [9, 10]. The discrete shearlet transform \hat{\psi}_{jlk} (j \geq 0, \; -2^j \leq l \leq 2^j - 1, \; k \in \mathbb{Z}^2), which can deal with distributed discontinuities, is obtained by sampling the continuous shearlet transform Sf(a, s, t) [3]. Each element \hat{\psi}_{jlk} is supported on a pair of trapezoids of approximate size 2^{2j} \times 2^j, as illustrated in Fig. 3a; Fig. 3b illustrates the tiling of the frequency plane induced by the shearlet transform. One important advantage of the shearlet transform is that there are no restrictions on the number of directions for the shearing [11]. Also, in the shearlet, there are no
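As an illustrative aside (not part of the paper), the dilation matrix of Eq. (2) factors into a shear times an anisotropic scaling, M_{as} = B_s A_a with B_s = [[1, s], [0, 1]] and A_a = diag(a, sqrt(a)); this factorization is what makes the "scale plus shear" reading of the transform concrete. A quick numerical check:

```python
# Hedged sketch: verify numerically that M_as = B_s * A_a, i.e. that the
# shearlet dilation matrix factors into a shear and an anisotropic scaling.
# Pure Python 2x2 arithmetic; the values of a and s are arbitrary examples.
from math import sqrt

def matmul2(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a, s = 4.0, 3.0
M_as = [[a, sqrt(a) * s], [0.0, sqrt(a)]]   # dilation group element, Eq. (2)
B_s  = [[1.0, s], [0.0, 1.0]]               # shear matrix
A_a  = [[a, 0.0], [0.0, sqrt(a)]]           # anisotropic (parabolic) scaling
print(matmul2(B_s, A_a) == M_as)            # True
```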
constraints on the size of the supports for the shearing, unlike the construction of the directional filter banks in [9]. The NSST consists of two phases: the nonsubsampled Laplacian pyramid and several different combinations of shearing filters [3]. The NSST is not only shift-invariant, but also multi-scale and exhibits a multi-directional expansion.

Fig. 2  The nonsubsampled contourlet transform. a Structure of the NSCT. b Corresponding frequency division of the NSCT

3 Combined Algorithm of NSCT and NSST

The block diagram of the proposed method is shown in Fig. 4. The detailed steps are as follows:

Step 1: Decompose the source images via the NSCT and NSST, respectively, into pairs of low frequency coefficients {L^A_nsct, L^B_nsct}, {L^A_nsst, L^B_nsst} and high frequency coefficients {H^{(s,d)A}_nsct, H^{(s,d)B}_nsct}, {H^{(s,d)A}_nsst, H^{(s,d)B}_nsst}, where s is the multi-scale decomposition level and d is the decomposition direction at the s-th level.

Step 2: Take the average of the low frequency coefficients of the NSCT to obtain the fused low frequency coefficient:

  L_F = \frac{1}{2}(L^A_nsct + L^B_nsct)   (5)

where L^A_nsct and L^B_nsct are the low frequency coefficients of the NSCT and L_F is the fused low frequency coefficient.

Step 3: The high frequency coefficients of the NSCT and NSST are combined as follows to obtain the fused high frequency coefficients of the NSCT.

Fig. 3  a The frequency support of a shearlet \hat{\psi}_{jlk} satisfies parabolic scaling. b The tiling of the frequency plane induced by the shearlet

- Firstly, compute the energy of a patch (region energy) around each pixel for every high frequency coefficient; using the region energy reduces the influence of noise on the fused image:

  RE^{(s,d)}(i, j) = \sum_{m=-M}^{M} \sum_{n=-M}^{M} [H^{(s,d)}(i+m, j+n)]^2   (6)

where H^{(s,d)} is the d-th direction, s-th level high frequency coefficient of the NSCT or NSST of each image, and M is set to 3.

- Secondly, use
the region energy coefficients to calculate the salience of the high frequency coefficients by combining the NSCT and NSST in the d-th direction of the s-th level for each image:

  X^{(s,d)}(i, j) = \sqrt{ [RE^{(s,d)}_{nsct}(i, j)]^2 + [RE^{(s,d)}_{nsst}(i, j)]^2 }   (7)

where RE^{(s,d)}_{nsct} and RE^{(s,d)}_{nsst} are the region energy coefficients of the NSCT and NSST respectively, and i and j are the pixel position in the source image. X^{(s,d)} is the salience of the high frequency coefficients of the NSCT, which reflects the definition of the source image: a larger X^{(s,d)} indicates a clearer image.

- Then, choose the high frequency coefficients with the larger X^{(s,d)} as the fused high frequency coefficients:

  H^{(s,d)}_F(i, j) = H^{(s,d)A}_{nsct}(i, j)  if  X^{(s,d)}_A(i, j) \geq X^{(s,d)}_B(i, j),  and  H^{(s,d)B}_{nsct}(i, j)  otherwise   (8)

where H^{(s,d)}_F is the fused high frequency coefficient.

Fig. 4  Block diagram of the proposed method
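A hedged sketch of the high-frequency rule of Eqs. (6)-(8) on plain 2-D arrays: the NSCT/NSST decompositions themselves are not reproduced, and the four small input arrays stand in for one directional sub-band (s, d) of each transform of two source images A and B. The window half-width M = 1 here is an assumption to keep the example tiny (the paper uses M = 3).

```python
# Sketch of Eqs. (6)-(8): region energy, combined salience, and max-salience
# selection of the NSCT high frequency coefficients. M = 1 is assumed here.

def region_energy(H, i, j, M=1):
    """Eq. (6): sum of squared coefficients in a (2M+1)^2 patch, border-clipped."""
    rows, cols = len(H), len(H[0])
    return sum(H[m][n] ** 2
               for m in range(max(0, i - M), min(rows, i + M + 1))
               for n in range(max(0, j - M), min(cols, j + M + 1)))

def fuse_highpass(Ha_nsct, Ha_nsst, Hb_nsct, Hb_nsst, M=1):
    """Eqs. (7)-(8): keep the NSCT coefficient of the source with larger salience."""
    rows, cols = len(Ha_nsct), len(Ha_nsct[0])
    fused = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            xa = (region_energy(Ha_nsct, i, j, M) ** 2
                  + region_energy(Ha_nsst, i, j, M) ** 2) ** 0.5
            xb = (region_energy(Hb_nsct, i, j, M) ** 2
                  + region_energy(Hb_nsst, i, j, M) ** 2) ** 0.5
            fused[i][j] = Ha_nsct[i][j] if xa >= xb else Hb_nsct[i][j]
    return fused

# Source A carries the detail in this sub-band, so its coefficients win.
Ha = [[3.0, -3.0], [3.0, -3.0]]
Hb = [[0.1, 0.1], [0.1, 0.1]]
print(fuse_highpass(Ha, Ha, Hb, Hb))  # equals Ha everywhere
```

Step 2 (Eq. (5)) is simply the element-wise average of the two NSCT low frequency sub-bands, so it is omitted from the sketch.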
evaluates the quality of visual information obtained from source images.Both the metric Q w and Q e evaluate the structural similarity between fused image and source images.The metric Q w considers the local salience,and the metric Q e integrates the importance of edge.The values of all metrics are between 0and 1.If their values are near to 1,fusion results indicate better quality.The decomposition of the NSCT and NSST is based on Laplacian pyramid filtering and directional filtering.Firstly,we analysed the influence of different pyramid and directional filters to the performance of the proposed method.For this analysis we considered various types of pyramid filters for the NSCT and NSST,namely pyr (derived from 1-D filters using maximally flat mapping function with 2vanishing moments),pyrexc (same with pyr ,but with two highpassfiltersFig.5Five pairs of multi-focus test imagesSens Imaging (2015)16:4Page 7of 1644Page8of16Sens Imaging(2015)16:4 exchanged),maxflat(derived from1-Dfilters using maximallyflat mapping function with4vanishing moments)and9–7(derived from9to71-D prototypes) while a type of directionalfilter for NSCT,namely cd(derived from9to7 biorthogonalfilters using McClellan transformed by Cohen and Daubechies),pkva (ladderfilters by Phong et al.)and vk(McClellan transformed of thefilter from the VK book).The decomposition level of the NSCT is set as{4,8,8,16}.In the NSST,there is no directionalfilter bank,unlike the NSCT.There are shearing window works for directional potential.If we use only the NSST for multifocus image fusion,higher number of shearing directions provides better fusion result.But in our case,when shearing directions number is lower,the proposed method provides better results. 
Due to a number of experiments,we chose shearing directions number{4668},it provides better results for Table1and all other experiments.The best results are labeled by bold in all tables.Table1gives the results of four different combinations of thefilters,and it shows that the combination of thefilters pyrexc&cd for the NSCT and the pyramidfilter maxflat for the NSST is better than the other three combinationfilters.In Table2,we show test result of the proposed method using the inverse the NSST to obtain fused image.The list presents that the results of combinationfilters pyr&cd and pyr are better than others.On the whole,the inverse the NSCT obtain better results than the inverse the NSST.Accordingly,with the comparisons between Tables1and2,the combination offilters pyrexc&cd for the NSCT and pyramidfilter maxflat for the NSST is selected.The next experiment is to analyse the influence of different decomposition levels for the proposed method.For this testing,we set as thefilters pyrexc&cd for the NSCT,filter maxflat for the NSST and number shearing directions set as{4,6,6,8}.For the decomposition level setting,we test ten different decomposition level options.To save space,we choose only three different decomposition levels,and the results are shown in Table3.The bolded values are the best results.The result shows that the decomposition level{4,8,8,16}and{8,8,8,16}are better than other decomposition level option.Few decomposition levels can reduce time consumption.Therefore we set decomposition level as{4,8,8,16}for the proposed method.We performed the number of experiments for influence of directionalfilters on the proposed method.The experimental results are listed in Table4.As a consequence cdfilter provides better results for each testing images of Fig.5. 
Therefore we chose cdfilter for the proposed method.The proposed method is compared with individual NSCT,NSST and the NSCT-SF-PCNN.For the individual NSCT and NSST,according to the result of parameters setting test,we choose the best parameter set.For the NSCT of the proposed method,the number of directions from coarser tofiner scale is set to{4,8, 8,16},and the pyramidfilter and the directionalfilter are set to pyrexc and cd respectively.In the NSST,it sets as{4,8,8,16}directions with pyrexc for pyramid filter when number of shearing directions set as{4,6,6,8}.The NSCT-SF-PCNN parameter option is that the NSCT settings same with individual NSCT and PCNN settings are default option by[12].The comparison results of different fusionT a b l e 1C o m p a r i s o n r e s u l t o f d i f f e r e n t fil t e r c o m b i n a t i o nM e t r i c sP y r a m i d a n d d i r e c t i o n a l fil t e r s o f N S C TP y r a m i d fil t e r s o f N S S TF i g u r e 5a ,b (5129512)F i g u r e 5c ,d (1929192)F i g u r e 5e ,f (2569256)F i g u r e 5g ,h (6409480)F i g u r e 5i ,j (2709205)Q A B /Fp y r e x c &c d m a x fla t0.69120.73430.79790.72790.7219p y r &c d m a x fla t0.69210.73420.79620.72790.7225p y r e x c &c d 9–70.69140.73410.79760.72720.7222p y r &c d 9–70.69190.73420.79630.72790.7227Q w p y r e x c &c d m a x fla t0.89280.89770.93890.88840.9013p y r &c d m a x fla t0.89300.89750.93860.88840.9013p y r e x c &c d 9–70.89270.89760.93890.88840.9013p y r &c d 9–70.89290.89750.93870.88850.9013Q e p y r e x c &c d m a x fla t0.74830.79190.87690.80950.8129p y r &c d m a x fla t0.74850.79180.87470.80960.8129p y r e x c &c d 9–70.74810.79180.87680.80960.8130p y r &c d 9–70.74840.79170.87490.80970.8130Sens Imaging (2015)16:4Page 9of 164T a b l e 2C o m p a r i s o n r e s u l t o f d i f f e r e n t fil t e r c o m b i n a t i o n w h e n f u s e d i m a g e r e c o n s t r u c t e d b y i n v e r s e t h e N S S TM e t r i c sP y r a m i d a n d d i r e c t i o n a l fil t e r s o 
f N S C TP y r a m i d fil t e r s o f N S S TF i g u r e 5a ,b (5129512)F i g u r e 5c ,d (1929192)F i g u r e 5e ,f (2569256)F i g u r e 5g ,h (6409480)F i g u r e 5i ,j (2709205)Q A B /Fp y r e x c &c d p y r e x c0.69130.73380.79010.72630.7223p y r &c d p y r0.69220.73420.79140.72710.7230p y r e x c &c d 9–70.69210.73400.78970.72830.7240p y r &c d 9–70.69190.73410.78920.72850.7240Q w p y r e x c &c d p y r e x c0.89180.89750.92950.88830.9014p y r &c d p y r0.89180.89730.92950.88840.9014p y r e x c &c d 9–70.89100.89650.93670.88820.9010p y r &c d 9–70.89080.89640.93660.88820.9010Q e p y r e x c &c d p y r e x c0.74860.79140.85080.80930.8131p y r &c d p y r0.74870.79120.85040.80960.8131p y r e x c &c d 9–70.74740.78990.86920.80930.8126p y r &c d 9–70.74690.78990.86910.80930.81254Page 10of 16Sens Imaging (2015)16:4Table3Comparison result of decomposition level for the proposed methodMetrics Decompositionlevel of NSCTand NSST Figure5a,b(5129512)Figure5c,d(1929192)Figure5e,f(2569256)Figure5g,h(6409480)Figure5i,j(2709205)Q AB/F448160.68960.73250.78830.72700.7216 488160.69000.73260.78790.72690.7215888160.69000.73260.78900.72680.7219 Q w448160.89140.89690.93090.88820.9010 488160.89120.89700.93110.88820.9012888160.89130.89700.93110.88820.9012 Q e448160.74830.79020.85440.80890.8124 488160.74770.79030.85510.80900.8125888160.74770.79030.85510.80900.8126 Table4Comparison result of different directionalfilters of the NSCT for the proposed methodMetrics Directionalfiltersof NSCT Figure5a,b(5129512)Figure5c,d(1929192)Figure5e,f(2569256)Figure5g,h(6409480)Figure5i,j(2709205)Q AB/F cd0.69120.73430.79790.72740.7219 vk0.69080.73320.79570.72590.7186pkva0.68110.73030.79270.71990.7149 Q w cd0.89280.89780.93890.88880.9013 vk0.89240.89780.93850.88870.9012pkva0.89150.89640.93880.88810.9014 Q e cd0.74830.79190.87690.80990.8129 vk0.74770.79200.87570.80980.8124pkva0.74780.78960.87800.80890.8125 Table5Comparison result of different fusion methodsMetrics Methods 
Figure5a,b(5129512)Figure5c,d(1929192)Figure5e,f(2569256)Figure5g,h(6409480)Figure5i,j(2709205)Q AB/F NSCT0.68760.73140.79480.72140.7146 NSST0.68300.72980.79170.72040.7132NSST-SF-PCNN0.69020.72290.79010.71710.7056Proposed0.69120.73430.79790.72740.7219 Q w NSCT0.89190.89770.93930.88840.9010 NSST0.89210.89730.92980.88840.9010NSST-SF-PCNN0.87390.88630.92810.86550.8894Proposed0.89280.89770.93890.88840.9013 Q e NSCT0.74830.79170.87840.80900.8112 NSST0.74820.79160.85140.80920.8114NSST-SF-PCNN0.69160.76540.85020.76920.7846Proposed0.74830.79190.87690.80950.8129methods are shown in Table 5and Fig.6.The column 1,2,3and 4of Fig.6are fusion results of the NSCT,NSCT,NSCT-SF-PCNN and the proposed method,respectively.Visual performance of Fig.6is good for all methods.Table 5illustrates that the proposed method achieves higher values than those compared method.To further test the performance of the proposed method,we performed experiments over multi-focus colour images.Each band of colour image is separately fused by the proposed algorithm.All parameter settings are the same as in the previous experimental options.Figure 7and Table 6show the experimental results.All the fused images Fig.7e–h and i–l are good by the visual inspection.InFig.6The image fusion results.The first column is the result of the NSCT,the second column is the result of the NSST,the third column is the result of the NSCT-SF-PCNN and the last column shows the results of the proposed methodTable 6Comparison results of multi-focus colour image fusion methods MetricsMethodsFigure 7a,b (1789134)Figure 7d,e (2679175)RedGreen Blue Average Red Green Blue Average Q AB/FNSCT 0.72020.73490.73590.73030.69440.69400.69530.6946NSST 0.72660.73510.73840.73340.69790.69940.70060.6993NSCT-SF-PCNN 0.72030.74040.73750.73270.69870.69860.70130.6995Proposed method0.72970.75340.74310.74210.70180.70350.70360.7030Q wNSCT 0.94680.95060.95280.95010.92040.92140.91700.9196NSST 0.94740.95110.95310.95050.92000.92100.91660.9192NSCT-SF-PCNN 
0.94220.94720.94890.94610.91930.92050.91490.9182Proposed method0.94840.95260.95380.95160.92120.92220.91760.9203Q eNSCT 0.91260.92130.92360.91920.83240.83400.82690.8311NSST 0.91360.92190.92390.91980.83160.83330.82600.8303NSCT-SF-PCNN 0.90770.91860.92010.91550.82080.82160.81800.8201Proposed method0.91380.92330.92460.92060.83330.83490.82760.8319Fig.7Multi-focus colour image fusion results.a –d are source images.e and i are results of the NSCT,f and j are results of the NSST,g and k are results of the NSCT-SF-PCNN,h and l are results of the proposed methodT a b l e 7T i m e c o n s u m p t i o n r e s u l t o f d i f f e r e n t f u s i o n m e t h o d sM e t h o d sF i g u r e 5a ,b (5129512)F i g u r e 5c ,d (1929192)F i g u r e 5e ,f (2569256)F i g u r e 5g ,h (6409480)F i g u r e 5i ,j (2709205)F i g u r e 7a ,b (1789134)F i g u r e 7d ,e (2679175)N S C T 47.7235s 6.0247s10.7334s54.5095s 9.4083s3.20526.1286N S S T 19.3069s 2.6701s4.1698s27.1523s 5.7165s2.18193.7338N S S T -S F -P C N N 174.5254s 22.6902s40.9332s204.1247s 34.9542s46.1752100.8514P r o p o s e d 68.1781s 9.2564s15.7029s 93.8497s 16.3334s5.29759.1802the results of Table6,all metrics of the proposed method are better than other three methods.Therefore,the proposed method presents better performance in terms of both visual inspection and objective evaluation.In Table7we show the time consumption of the four fusion methods.Table7 indicates that the proposed method is slower than individual NSCT,NSST methods and faster than NSCT-SF-PCNN method.However,it should be not an obstacle with the rapid development of hardware and parallel computing technology.5ConclusionsIn this paper,we proposed a new fusion method based on the NSCT and NSST for multi-focus images.The experimental results demonstrate that our method outperforms individual NSCT,NSST and the NSCT-SF-PCNN,and preserves more edge and contour information.We will focus on using the proposed method to study on other areas of image processing such as 
image denoising and image enhancement.Acknowledgments The authors would like to thank the editor and anonymous reviewers for their detailed review and valuable comments.This paper is supported by scientific Research Fund of Hunan Provincial Education Department(No.14B006).References1.Do,M.N.,&Vetterli,M.(2005).The contourlet transform:an efficient directional multiresolutionimage representation.IEEE Transactions on Image Processing,14(12),2091–2106.2.Cunha,A.L.,Zhou,J.,&Do,M.N.(2006).The nonsubsampled contourlet transform:theory,design,and applications.IEEE Transactions on Image Processing,15(10),3089–3101.3.Guo,K.,&Labate,D.(2007).Optimally sparse multidimensional representation using shearlets.SIAM Journal on Mathematical Analysis,39(1),298–318.4.Yang,B.,Li,S.T.,&Sun,F.M.(2007).Image fusion using nonsubsampled contourlet transform.Proceedings of the Fourth International Conference on Image and Graphics,pp.719–724.5.Krishnamoorthy,S.,&Soman,K.P.(2010).Implementation and comparative study of image fusion.International Journal of Computer Applications,9(2),25–35.6.Manu,V.T.,&Simon,P.(2012).A novel statistical fusion rule for image fusion and its comparisonin non subsampled contourlet transform domain and wavelet domain.The International Journal of Multimedia and Its Applications,4(2),69–87.7.Miao,Q.G.,Lou,J.J.,&Xu,P.F.(2012).Image fusion based on NSCT and bandelet transform.Proceedings of the Eighth International Conference on Computational Intelligence and Security, pp.314–317.8.Miao,Q.G.,Shi,C.,Xu,P.F.,Yang,M.,&Shi,Y.B.(2011).Multi-focus image fusion algorithmbased on shearlets.Chinese Optics Letters,9(4),041001–04005.9.Easley,G.,Labate,D.,&Lim,W.Q.(2008).Sparse directional image representations using thediscrete shearlet transform.Applied and Computational Harmonic Analysis,25(1),25–46.10.Guo,K.,Labate,D.,&Lim,W.Q.(2009).Edge analysis and identification using the continuousshearlet transform.Applied and Computational Harmonic 
Analysis,27(1),24–46.11.Cao,Y.,Li,S.T.,&Hu,J.W.(2011).Multi-focus image fusion by nonsubsampled shearlettransform.Sixth International Conference on Image and Graphics,pp.17–21.12.Qu,X.B.,Yan,J.W.,Xiao,H.Z.,&Zhu,Z.Q.(2009).Image fusion algorithm based on spatialfrequency-motivated pulse coupled neural networks in nonsubsampled contourlet transform domain.Acta Automatica Sinica,34(12),1508–1514.13.Xydeas,C.S.,&Petrovic,V.(2000).Objective image fusion performance measure.ElectronicsLetters,36(4),308–309.14.Piella,G.,&Heijmans,H.(2003).A new quality metric for image fusion.International Conferenceon Image Processing,3,173–176.。