Multimodal Medical Image Fusion Using Optimal Feature Selection Methods Based on Second Generation Contourlet Transform

Yujie Li, Huimin Lu*, Seiichi Serikawa**
Department of Electrical Engineering and Electronics, Kyushu Institute of Technology (KIT), Kitakyushu, Japan
* luhuimin@boss.ecs.kyutech.ac.jp

Shun Inoue, Lifeng Zhang†
Graduate School of Engineering, Kyushu Institute of Technology (KIT), Kitakyushu, Japan
† zhang@elcs.kyutech.ac.jp

Yosuke Uchimura, Atsushi Kobayashi
Graduate School of Engineering, Kyushu Institute of Technology (KIT), Kitakyushu, Japan
** serikawa@elcs.kyutech.ac.jp

Miroslaw Trzupek‡
Department of Automatics, Laboratory of Biocybernetics, AGH University of Science and Technology, Krakow, Poland
‡ mtrzupek@.pl

Abstract—As a novel multi-resolution analysis tool, the second generation contourlet transform (SGCT) provides flexible multiresolution, anisotropy, and directional expansion for medical imaging systems. In this paper, a novel fusion method for multimodal medical images based on the SGCT is proposed. First, we use the SGCT to decompose the multimodal medical images into highpass and lowpass subbands. Then, for the highpass subbands, the weighted sum-modified Laplacian (WSML) method is used to generate the high frequency coefficients and recover image details. For the lowpass subbands, the maximum local energy (MLE) method is combined with the "local patch" idea to select the low frequency coefficients. Finally, the fused image is obtained by applying the inverse SGCT to the combined lowpass and highpass subbands. In extensive experiments, we evaluate the proposed method by both human visual inspection and quantitative analysis. Compared with state-of-the-art methods, the new strategy attains image fusion with satisfactory performance.

Keywords—medical image; image fusion; sum modified laplacian; maximum local energy; second generation contourlet transform

I. INTRODUCTION

The importance of image processing and fusion for diagnosis and healthcare has been widely investigated [1], [30]. Registration and fusion of radiological images is by no means a new post-processing technique. Technological advances in medical imaging over the past three decades have enabled radiologists to create images of the human body with unprecedented resolution [32-35]. Medical equipment companies such as GE [2], Siemens [3], and Hitachi [4] build imaging devices (CT, PET, and MRI scanners) that quickly acquire 3D images of the body. Such images provide different and often complementary contents; e.g., CT images supply anatomical information, PET images deliver functional information, and MR images better depict normal and pathological soft tissue. Therefore, a single type of image from a single sensor cannot provide a complete view of the scene in many applications. Fused images, if suitably obtained from a set of multimodal sensor images, can provide a better view than any of the individual source images.

In recent decades, growing interest has focused on multimodal medical image fusion, the process of extracting significant information from multiple images and synthesizing it into a single image. It is well established in the literature that multi-resolution analysis (MRA) is an approach well suited to image fusion. MRA-based multimodal medical fusion methods [5], such as wavelets [6], Laplacian pyramids [7], wedgelets [8], bandelets [9], curvelets [10], shearlets [36], and contourlets [11], are recognized as some of the most effective ways to obtain fine fusion images at different resolutions [29]. The pyramid method for multimodal image fusion first constructs a pyramid for each input image, then applies a feature selection rule to form a fused coefficient pyramid; inverting the pyramid reconstruction produces the fused image. This method is relatively simple, but it has some drawbacks [6].
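To make the pyramid scheme concrete, the following is a minimal sketch of Laplacian-pyramid fusion for two registered 8-bit grayscale images; the number of levels, the max-absolute-value rule for the detail levels, and the averaging of the low-pass residual are illustrative assumptions rather than the exact settings of the cited methods.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    """Build a Laplacian pyramid: band-pass detail levels plus a final low-pass residual."""
    pyr, cur = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyr.append(cur - up)          # detail (band-pass) level
        cur = down
    pyr.append(cur)                   # coarsest low-pass residual
    return pyr

def reconstruct(pyr):
    """Collapse a Laplacian pyramid back into an image."""
    cur = pyr[-1]
    for detail in reversed(pyr[:-1]):
        cur = cv2.pyrUp(cur, dstsize=(detail.shape[1], detail.shape[0])) + detail
    return cur

def fuse_pyramid(img_a, img_b, levels=4):
    """Fuse two registered 8-bit grayscale images: max-|coefficient| on detail levels,
    averaging on the low-pass residual."""
    pa, pb = laplacian_pyramid(img_a, levels), laplacian_pyramid(img_b, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))
    return np.clip(reconstruct(fused), 0, 255).astype(np.uint8)
```

The feature selection rule applied at each level is precisely where pyramid-based fusion methods differ; the rules discussed later in this paper replace the simple max-absolute-value choice with more discriminative activity measures.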
To address these drawbacks, the discrete wavelet transform (DWT) was introduced to improve multi-resolution fusion [7]. With the DWT, an image can be decomposed into a series of subband images with different resolution, frequency, and direction characteristics, so that the spectral and spatial characteristics of the image are completely separated; fusion is then carried out at each resolution. However, because of the limited directional selectivity of wavelets, the DWT cannot efficiently represent line or curve singularities in two- or higher-dimensional signals.
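As a sketch of the wavelet-domain counterpart, the snippet below fuses two registered images with the PyWavelets package, averaging the approximation band and keeping the larger-magnitude detail coefficients; the wavelet family, decomposition depth, and selection rules are assumptions chosen only for illustration.

```python
import numpy as np
import pywt

def dwt_fuse(img_a, img_b, wavelet="db2", levels=3):
    """Fuse two registered grayscale images in the wavelet domain:
    average the approximation band, keep the larger-magnitude detail coefficient."""
    ca = pywt.wavedec2(img_a.astype(np.float32), wavelet, level=levels)
    cb = pywt.wavedec2(img_b.astype(np.float32), wavelet, level=levels)

    fused = [0.5 * (ca[0] + cb[0])]                      # approximation (low-pass) band
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in ((ha, hb), (va, vb), (da, db))))
    return pywt.waverec2(fused, wavelet)
```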
Other MRA methods have therefore been proposed in recent years to overcome the drawbacks of wavelets. Minh N. Do and Martin Vetterli proposed the contourlet transform [11] in 2002; they first developed the transform in the continuous domain and then discretized it for sampled data. Later, Y. Lu and Minh N. Do [12] introduced a modified multiscale decomposition in the frequency domain, called the sharp frequency localized contourlet transform (SFLCT). However, because of the upsamplers and downsamplers in its directional filter banks (DFB), the downsampling of the SFLCT leads to a lack of translation invariance. In this paper, we propose an image fusion method for multimodal medical images that operates in a modified sharp frequency localized contourlet transform (MSFLCT): we modify the SFLCT with cycle spinning (CS) to overcome this problem, and refer to the result as the second generation contourlet transform (SGCT).
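Cycle spinning itself is simple to state: circularly shift the input, run the shift-variant pipeline, undo the shift, and average the results over a set of shifts. Below is a minimal sketch assuming a user-supplied process(img) callable (for example, forward transform, coefficient fusion with a second image, and inverse transform) and a small shift set; both the callable and the shift set are illustrative assumptions, not the exact configuration used with the SGCT.

```python
import numpy as np

def cycle_spin(img, process, shifts=((0, 0), (0, 1), (1, 0), (1, 1))):
    """Approximate translation invariance for a shift-variant transform:
    shift the input, apply the processing pipeline, undo the shift, and average."""
    acc = np.zeros_like(img, dtype=np.float64)
    for dy, dx in shifts:
        shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        out = process(shifted)                  # e.g. decompose -> fuse -> reconstruct
        acc += np.roll(np.roll(out, -dy, axis=0), -dx, axis=1)
    return acc / len(shifts)
```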
We apply the maximum local energy (MLE) method and the weighted sum-modified Laplacian (WSML) in this work. Specifically, for multimodal image fusion we select the low frequency coefficients with the proposed maximum local energy (MLE) method and introduce the weighted sum-modified Laplacian (WSML) to calculate the high frequency coefficients.
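To illustrate the spirit of these two selection rules, the sketch below computes a sum-modified-Laplacian activity measure and a local-energy measure over a small window and uses each in a per-coefficient choose-max rule. The 3x3 window, uniform weighting, unit step, and wrap-around boundary handling are illustrative assumptions and not the exact WSML/MLE definitions proposed in this paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sum_modified_laplacian(band, win=3):
    """Modified Laplacian |2I - left - right| + |2I - up - down|,
    averaged over a local window (proportional to the windowed sum)."""
    ml = (np.abs(2 * band - np.roll(band, 1, axis=1) - np.roll(band, -1, axis=1)) +
          np.abs(2 * band - np.roll(band, 1, axis=0) - np.roll(band, -1, axis=0)))
    return uniform_filter(ml, size=win)

def local_energy(band, win=3):
    """Local energy: squared coefficients averaged over a local window."""
    return uniform_filter(band ** 2, size=win)

def choose_max(band_a, band_b, measure):
    """Select, coefficient by coefficient, whichever source has the larger activity measure."""
    return np.where(measure(band_a) >= measure(band_b), band_a, band_b)

# Illustrative use: highpass subbands fused by the Laplacian measure,
# lowpass subbands fused by local energy.
# fused_high = choose_max(high_a, high_b, sum_modified_laplacian)
# fused_low  = choose_max(low_a, low_b, local_energy)
```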
The rest of the paper is organized as follows. Section 2 briefly reviews the contourlet transform and the modified sharp frequency localized contourlet transform used in this work. Section 3 presents the proposed fusion method, built on the maximum local energy and weighted sum-modified Laplacian rules. Section 4 reports numerical experiments that confirm the method, and Section 5 concludes the paper.
II. BACKGROUNDS