A Multi-Sensor Image Fusion Algorithm Based on NSCT and GoogLeNet
Abstract
In recent years, there has been a growing interest in the development of multi-sensor image fusion technology. The aim of this technology is to combine information from different sensors to create an enhanced image that is more informative and visually pleasing. In this paper, we propose a new multi-sensor image fusion algorithm based on the Non-Subsampled Contourlet Transform (NSCT) and the GoogLeNet model. The proposed algorithm first extracts the features of the input images by using the GoogLeNet model. Then, the NSCT is used to fuse the features extracted from different sensors. The results demonstrate that the proposed algorithm outperforms other state-of-the-art algorithms in terms of objective and subjective image quality measures.
Introduction
Multi-sensor image fusion is becoming increasingly popular in various fields such as remote sensing, surveillance, and medical imaging. Its aim is to combine information from different sensors into a single enhanced image that is more informative and visually pleasing, providing a better representation of the scene than any of the individual sensor images.
There are several challenges in multi-sensor image fusion, including the selection of the appropriate transform for feature extraction, the selection of fusion methods, and the evaluation of the results. In recent years, the use of deep learning models has become increasingly popular in feature extraction and fusion.
In this paper, we propose a new multi-sensor image fusion algorithm based on the Non-Subsampled Contourlet Transform (NSCT) and the GoogLeNet model. The rest of this paper is organized as follows. Section 2 discusses related work in multi-sensor image fusion. Section 3 presents the proposed algorithm. Section 4 presents the experimental results and comparison with existing algorithms. Finally, Section 5 concludes this paper.
Related Work
The past few years have seen many developments in multi-sensor image fusion. Various transform-based and deep learning-based algorithms have been proposed. Some of the popular transform-based algorithms include the Wavelet Transform (WT), Stationary Wavelet Transform (SWT), Contourlet Transform (CT), and Non-Subsampled Contourlet Transform (NSCT). These transforms have been used in conjunction with traditional fusion rules such as coefficient maximum, coefficient minimum, and averaging.
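As an illustration (these are the traditional rules mentioned above, not the method proposed in this paper), such fusion rules reduce to simple element-wise operations on the transform coefficients of the source images:

```python
import numpy as np

def fuse_max(c1, c2):
    """Maximum rule: keep the coefficient with the larger magnitude."""
    return np.where(np.abs(c1) >= np.abs(c2), c1, c2)

def fuse_average(c1, c2):
    """Average rule: element-wise mean of the coefficients."""
    return (c1 + c2) / 2.0

# Toy subband coefficients from two hypothetical sensors
c1 = np.array([[3.0, -1.0], [0.5, 2.0]])
c2 = np.array([[1.0, -4.0], [0.5, 1.0]])
print(fuse_max(c1, c2))       # keeps 3.0, -4.0, 0.5, 2.0
print(fuse_average(c1, c2))
```

In practice these rules are applied per subband of the chosen transform, after which the inverse transform reconstructs the fused image.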
Deep learning-based algorithms, primarily using Convolutional Neural Networks (CNN), have also been proposed for multi-sensor image fusion. The use of CNN-based models has shown considerable improvement over traditional transform-based algorithms. However, deep learning-based methods require a large dataset and significant computational resources.
Proposed Algorithm
The proposed algorithm uses the GoogLeNet model for feature extraction and the NSCT for fusion. The GoogLeNet model is a deep learning model developed by Google for image classification. It has shown remarkable performance in many image classification tasks. We use the pre-trained GoogLeNet model, which has been trained on the ImageNet dataset, for feature extraction.
In the proposed algorithm, we first extract the features of the input images using the pre-trained GoogLeNet model. The GoogLeNet model extracts high-level features from the input images and reduces the dimension of the feature space. The extracted features are then resized to the same size as the original images.
Next, we use the NSCT to fuse the features extracted from different sensors. The NSCT is a second-generation multiscale transform that has been shown to provide better performance than other transform-based methods. The NSCT has several advantages, including multiscale and multidirectional analysis and shift invariance.
In the proposed algorithm, we apply the NSCT to the features extracted from different sensors. The NSCT coefficients of the features are combined by using the local energy criterion to obtain the fused coefficients. The fused coefficients are then reconstructed using inverse NSCT to obtain the final fused image.
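The local energy criterion can be sketched as follows for a single pair of subband coefficient arrays. The 3x3 window size is an illustrative choice, and the NSCT decomposition and reconstruction themselves are not shown, since no standard Python NSCT implementation is assumed here:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_energy_fuse(c1, c2, win=3):
    """Fuse two subband coefficient arrays by local energy:
    at each position, keep the coefficient whose win x win
    neighbourhood has the larger mean squared coefficient."""
    e1 = uniform_filter(c1 ** 2, size=win)
    e2 = uniform_filter(c2 ** 2, size=win)
    return np.where(e1 >= e2, c1, c2)

# Toy example: c2 carries stronger detail, so it dominates the fusion.
c1 = np.zeros((8, 8))
c2 = 2.0 * np.ones((8, 8))
fused = local_energy_fuse(c1, c2)
print(np.array_equal(fused, c2))  # True
```

In the full algorithm this rule is applied to every NSCT subband pair, and the inverse NSCT of the fused coefficients yields the final fused image.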
Experimental Results
To evaluate the proposed algorithm, we conducted experiments on several datasets, including the infrared and visible light sensor dataset.
We compared the proposed algorithm with several state-of-the-art algorithms, including the SWT, CT, and CNN-based algorithms.
To evaluate the objective quality of the fused images, we used several evaluation metrics, including the Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Fusion Quality Index (FQI). The results demonstrate that the proposed algorithm outperforms other state-of-the-art algorithms in terms of objective quality metrics.
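For instance, PSNR, one of the objective metrics used here, can be computed directly from the mean squared error between a fused image and a reference; the data range of 255 assumes 8-bit images:

```python
import numpy as np

def psnr(reference, fused, data_range=255.0):
    """Peak Signal-to-Noise Ratio in dB between two images."""
    diff = reference.astype(np.float64) - fused.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")            # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

a = np.zeros((4, 4), dtype=np.uint8)
b = a.copy()
b[0, 0] = 255                          # single-pixel difference
print(round(psnr(a, b), 2))            # 12.04 dB
```

SSIM is more involved and is typically taken from a reference implementation such as `skimage.metrics.structural_similarity`.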
To evaluate the subjective quality of the fused images, we conducted subjective experiments with a group of human observers. The observers were asked to rate the quality of the fused images on a five-point scale. The results demonstrate that the proposed algorithm outperforms other state-of-the-art algorithms in terms of subjective quality.
Conclusion
In this paper, we proposed a new multi-sensor image fusion algorithm based on the Non-Subsampled Contourlet Transform (NSCT) and the GoogLeNet model. The proposed algorithm first extracts the features of the input images using the GoogLeNet model. Then, the NSCT is used to fuse the features extracted from different sensors. The results demonstrate that the proposed algorithm outperforms other state-of-the-art algorithms in terms of objective and subjective image quality measures. The proposed algorithm can be used in various fields such as remote sensing, surveillance, and medical imaging.