Research progress in remote sensing image information processing techniques (English translation)


English abstract essay on remote sensing technology applications


Remote sensing technology is widely used in various fields, bringing numerous benefits to our lives. For instance, in agriculture, remote sensing enables farmers to monitor crop growth and detect diseases. By analyzing satellite imagery, farmers can make informed decisions about irrigation, pest control, and fertilizer application. This technology helps improve crop yield and reduce the use of chemicals, making agriculture more sustainable.

In urban planning, remote sensing plays a crucial role in mapping and monitoring urban areas. It provides valuable information about land use, infrastructure, and population distribution. This data helps urban planners make informed decisions about zoning, transportation, and resource allocation. By using remote sensing technology, cities can develop more efficiently and sustainably, ensuring a better quality of life for their residents.

In environmental monitoring, remote sensing provides a unique perspective on the Earth's ecosystems. It allows scientists to track changes in vegetation, monitor deforestation, and assess the impact of climate change. By analyzing satellite imagery, researchers can identify areas at risk of natural disasters, such as wildfires or floods, and take preventive measures. Remote sensing technology helps us understand and protect our environment, contributing to the conservation of biodiversity and the preservation of natural resources.

In disaster management, remote sensing is a valuable tool for assessing and responding to emergencies. It allows authorities to quickly gather information about the extent of damage and the needs of affected areas. By analyzing satellite imagery, emergency responders can identify areas that require immediate assistance, prioritize rescue efforts, and allocate resources effectively. Remote sensing technology helps save lives and minimize the impact of disasters on communities.

In conclusion, remote sensing technology has a wide range of applications in various fields. It helps farmers optimize agricultural practices, enables urban planners to develop cities more efficiently, contributes to environmental monitoring and conservation, and plays a crucial role in disaster management. By harnessing the power of remote sensing, we can make more informed decisions, protect our environment, and improve the quality of life for people around the world.

Advanced processing techniques for remotely sensed imagery (遥感影像信息处理技术的研究进展)


1007-4619 (2009) 04-0559-11    Journal of Remote Sensing 遥感学报

Received: 2008-11-07; Accepted: 2009-04-01
Foundation: Supported by the Major State Basic Research Development Program (973 Program) under Grant 2009CB723905, the 863 High Technology Program of China (No. 2007AA12Z148), and the National Science Foundation of China (No. 40771139).
First author biography: ZHANG Liang-pei (1962— ), male, professor. Prof. Zhang received the Ph.D. degree in Photogrammetry and Remote Sensing from Wuhan University in 1998. Since 2007 he has held the title of "ChangJiang Scholar" (Chair Professor), conferred by the Ministry of Education of China. He has published more than 150 research papers. E-mail: zlp62@

Advanced processing techniques for remotely sensed imagery

ZHANG Liang-pei, HUANG Xin
The State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, Hubei, China

Abstract: This paper reviews recently developed processing techniques for remotely sensed imagery, including very high resolution (VHR) information extraction, super resolution techniques, hyperspectral image processing and object detection, as well as some artificial intelligence approaches.
Key words: high resolution, super resolution, hyperspectral, artificial intelligence
CLC number: TP751    Document code: A

Recently, remotely sensed images with very high resolution (such as QuickBird, IKONOS, SPOT5, etc.) and hyperspectral channels (Hyperion, MODIS, MERIS, etc.) have been able to provide a large amount of information, thus opening up avenues for new remote sensing applications (e.g., urban mapping, environmental monitoring, precision agriculture, human-environment-earth interaction, etc.). However, their availability poses challenges to image processing and classification techniques due to the rich spatial and spectral information in the high resolution data and the hyper-dimensional features.
It is generally agreed that conventional approaches are grossly inadequate for these new sensors. In this context, this paper aims to review the recent developments in image processing techniques for these new types of data. The paper is organized as follows. The first section discusses the information extraction and classification of high spatial resolution images, including spatial feature extraction and object-based analysis. The second part concerns super resolution reconstruction techniques. Section 3 describes hyperspectral data analysis, and Section 4 discusses artificial intelligence approaches for remote sensing applications.

1 ADVANCED PROCESSING OF HIGH SPATIAL RESOLUTION IMAGERY

Commercially available very high resolution (VHR) satellite imagery (e.g., IKONOS, QuickBird, and SPOT-5) provides a large amount of spatial information. However, its availability poses challenges to image processing and classification techniques. The resulting high intra-class and low inter-class variabilities lead to a reduction in the statistical separability of the different land cover classes in the spectral domain, and conventional spectral classification methods have proven inadequate to interpret the VHR data (Myint et al., 2004; Zhang et al., 2006). It is well known that the combination of spatial and spectral information can effectively address this problem. The recent developments in spectral-spatial classification can be divided into 1) spatial feature extraction, and 2) object-based analysis.

1.1 Spatial feature extraction

1.1.1 Wavelet transform features

Some researchers have used the wavelet transform to extract spatial information at different orientations and frequencies. Myint et al. (2004) compared wavelet features with the fractal, spatial autocorrelation, and spatial co-occurrence approaches, and the results suggested that a multi-band and multi-level wavelet approach can drastically increase the classification accuracy.
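As a rough illustration of this idea, the sketch below summarizes the energy of each sub-band from a two-level 2-D wavelet decomposition of a single band as a texture feature vector. A Haar transform is used here purely for self-containment (the studies above used richer filters such as bior3.3), and the function names and the energy summary are illustrative choices, not the authors' implementation.

```python
import numpy as np

def haar_dwt2(band):
    """One level of a 2-D Haar wavelet transform (a minimal stand-in for
    richer filters such as bior3.3; the sub-band structure is the same).
    Requires even image dimensions."""
    a = band[0::2, 0::2]; b = band[0::2, 1::2]
    c = band[1::2, 0::2]; d = band[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # low-frequency approximation
    lh = (a - b + c - d) / 4.0   # horizontal detail
    hl = (a + b - c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, (lh, hl, hh)

def wavelet_texture_features(band, levels=2):
    """Mean energy of each sub-band at each level -> texture feature vector."""
    feats, approx = [], band.astype(float)
    for _ in range(levels):
        approx, details = haar_dwt2(approx)
        feats.extend(np.mean(d ** 2) for d in details)
    feats.append(np.mean(approx ** 2))  # smooth, spectral-like component
    return np.array(feats)

# 2 levels x 3 detail bands + 1 approximation = feature vector of length 7
feats = wavelet_texture_features(np.random.rand(64, 64))
```

In a classification setting, such a vector would be computed per band and per local window and stacked with the spectral bands as additional features.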
The fractal techniques did not provide satisfactory classification accuracy. Spatial autocorrelation and spatial co-occurrence techniques were found to be relatively effective when compared to the fractal approach. The experiments concluded that the wavelet transform approach was the most accurate of the four. Zhang et al. (2006) extracted spectral-textural information by decomposing the image into four sub-bands at different frequencies and resolutions, and then integrating the low and high frequency information as spectral-textural features. Their experiments on QuickBird datasets verified that wavelet features can extract the spatial information effectively and help improve pure spectral classification of high resolution images. Meher et al. (2007) utilized features extracted by the wavelet transform (WT), rather than the original multispectral features of remote sensing images, for land cover classification. The WT provides the spatial and spectral characteristics of a pixel along with its neighbors, and hence can be utilized for improved classification. The performance of the original and wavelet-feature (WF)-based methods was compared in experiments. The WF-based methods consistently yielded better results, and the biorthogonal 3.3 (Bior3.3) wavelet was found to be superior to other wavelets.

1.1.2 Gray level co-occurrence matrix (GLCM)

Puissant et al. (2005) examined the potential of the spectral/textural approach to improve the classification accuracy of intra-urban land cover types. The second-order statistics of the gray level co-occurrence matrix were used as additional bands. In the experiments, four texture indices with six window sizes, created from panchromatic images, were tested on images at high to very high resolutions.
The results showed that the optimal index for improving the global classification accuracy was the homogeneity measure, with a 7 by 7 window size. Ouma et al. (2008) presented the results of GLCM texture analysis for the differentiation of forest and non-forest vegetation types in QuickBird imagery. The optimal GLCM windows for land cover classes within the scene were determined using semi-variogram fitting. These optimal window sizes were then applied to eight GLCM texture measures (mean, variance, homogeneity, dissimilarity, contrast, entropy, angular second moment, and correlation) for the scene classification. The experimental results were as follows: (1) the spectral-only bands classification gave an overall accuracy of 58.69%; (2) the statistically derived 21 by 21 optimal mean texture combined with spectral information gave the best results among the GLCM optimal windows, with an accuracy of 73.70%.

A key issue for window-based image processing techniques (e.g., GLCM texture) is the adaptive window selection approach. Although it is well known that combining spectral and spatial information can improve land use classification of very high resolution data, many spatial measures suffer from the window size problem, and the success of a classification procedure using spatial features depends largely on the window size that is selected. Huang et al. (2007a) proposed an optimal window selection method based on the spectral and edge information in a local region for choosing a suitable window size adaptively; the multiscale information was then fused, based on the selected optimal window size. The spatial features extracted by the gray-level co-occurrence matrix (GLCM) were utilized for multispectral IKONOS data, in order to validate the window selection algorithm.
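A minimal sketch of the GLCM texture measure discussed above, for a single offset and a single index (homogeneity, the measure Puissant et al. found optimal). The function names and the fixed quantization are illustrative assumptions; in practice the matrix is computed per moving window and often averaged over several directions.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one pixel offset, normalized to
    joint probabilities. `img` must already be quantized to `levels` levels."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            m[img[i, j], img[i + dy, j + dx]] += 1
    return m / m.sum()

def homogeneity(p):
    """GLCM homogeneity: mass near the diagonal means locally smooth texture."""
    i, j = np.indices(p.shape)
    return np.sum(p / (1.0 + np.abs(i - j)))
```

For a perfectly uniform window the co-occurrence mass sits on the diagonal and homogeneity is 1; a fine checkerboard pushes all mass one step off the diagonal, halving it. Window size would be chosen per the adaptive schemes described above.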
The results showed that the proposed algorithm could select and fuse the multiscale features effectively and, at the same time, increase the classification accuracy.

1.1.3 Structural and shape features

Benediktsson et al. (2003) used a composition of geodesic opening and closing operations of different sizes to build a morphological profile, with a neural network approach for classifying the features. The experiments were conducted on panchromatic IKONOS images with 1 m spatial resolution, and the results showed that the morphological features can provide adequate spatial information and significantly improve the accuracies of the traditional classifiers. The morphological features were extended for use with airborne images in Benediktsson et al. (2005), and a joint spectral/spatial classifier was presented. The morphological method was based on making use of both the spectral and spatial information for classification, and principal component analysis (PCA) was employed for pre-processing and dimensionality reduction. Epifanio and Soille (2007) exploited morphological texture features to segment high resolution images of natural landscapes into several cover types. The texture features were of interest because the number of available spectral bands in high resolution images is sometimes limited; in addition, the traditional pixel-based classification techniques perform poorly. Huang et al. (2007b) investigated the classification and extraction of spatial features in urban areas from high spatial resolution multispectral imagery. A structural feature set (SFS) was proposed to extract the statistical features of the direction-lines histogram. Some methods of dimension reduction, including decision boundary feature extraction and similarity index feature selection, were then implemented for the proposed SFS to reduce information redundancy.
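A simplified sketch of the morphological profile idea described above: stack openings and closings with structuring elements of increasing size as extra feature bands. Plain (non-geodesic) operations with a flat square element are used here for brevity, whereas Benediktsson et al. used opening/closing by reconstruction; all names are illustrative.

```python
import numpy as np

def erode(img, size):
    """Grey-scale erosion with a flat square structuring element (edge-padded)."""
    pad = size // 2
    p = np.pad(img, pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + size, j:j + size].min()
    return out

def dilate(img, size):
    """Grey-scale dilation with a flat square structuring element."""
    pad = size // 2
    p = np.pad(img, pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + size, j:j + size].max()
    return out

def morphological_profile(band, sizes=(3, 5, 7)):
    """Stack openings and closings of increasing SE size plus the original band.
    Openings suppress bright structures smaller than the SE; closings suppress
    dark ones, so the profile encodes structure size and contrast."""
    openings = [dilate(erode(band, s), s) for s in sizes]
    closings = [erode(dilate(band, s), s) for s in sizes]
    return np.stack(openings + closings + [band])
```

Each layer of the profile then serves as one input band to the classifier, exactly as the extra texture bands do in the GLCM approaches above.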
The approach was evaluated on two QuickBird datasets and the results showed that the new set of reduced spatial features performed better than traditional methods.

1.2 Object-based analysis

1.2.1 Development of the object-based algorithms

Object-based classification is a good alternative to the traditional pixel-based methods. This analysis approach reduces the local spectral variance within a homogeneous region. The basic idea of this method is to group the spatially adjacent pixels into spectrally homogeneous segments, and then perform classification on the objects as the minimum processing units. Kettig and Landgrebe (1976) first proposed the object-based analysis approach and developed the spectral-spatial classifier called the extraction and classification of homogeneous objects (ECHO) (Landgrebe, 1980). Jimenez et al. (2005) developed an unsupervised version of the ECHO algorithm, namely UnECHO, a method of unsupervised enhancement of pixel homogeneity in a local neighborhood. It enables a contextual classification of multispectral or hyperspectral data, producing results that are more meaningful to the human analyst. Their experiments on HYDICE and AVIRIS data showed that the UnECHO classifier is especially relevant for the new generation of airborne and spaceborne sensors with high spatial resolution.

In recent years, the fractal net evolution approach (FNEA) (Hay et al., 2003), which is embedded in the eCognition commercial software, has been widely used for object-based analysis and experiments. It utilizes fuzzy set theory to extract the objects of interest, at the scale of interest, segmenting images simultaneously at both fine and coarse scales. The FNEA is a bottom-up region merging technique starting from single pixels. In an iterative way, at each subsequent step, image objects are merged into bigger ones.
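The pairwise merge test at the heart of this bottom-up procedure can be sketched as follows, assuming the standard weighted standard-deviation heterogeneity criterion used by FNEA/eCognition; objects are given as arrays of pixel spectra, and all names are illustrative.

```python
import numpy as np

def fusion_heterogeneity(obj1, obj2, weights):
    """Heterogeneity increase H of merging two adjacent objects: the weighted
    change in (pixel-count-scaled) standard deviation, summed over bands.
    Objects are arrays of shape (n_pixels, n_bands)."""
    merged = np.vstack([obj1, obj2])
    n1, n2, nm = len(obj1), len(obj2), len(merged)
    h = 0.0
    for b, w in enumerate(weights):
        h += w * (nm * merged[:, b].std()
                  - (n1 * obj1[:, b].std() + n2 * obj2[:, b].std()))
    return h

def should_merge(obj1, obj2, weights, scale_T):
    """Merge only while the heterogeneity increase stays below the scale
    parameter T; larger T therefore yields larger, coarser objects."""
    return fusion_heterogeneity(obj1, obj2, weights) < scale_T
```

Two spectrally identical objects give H = 0 and always merge, while a merge across a strong boundary inflates the merged standard deviation and is rejected once H reaches T.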
The region merging decision is made with local homogeneity criteria, defined as:

H = Σ_{b=1}^{B} W_b [ N_Merge · σ_Merge − ( N_Obj1 · σ_Obj1 + N_Obj2 · σ_Obj2 ) ]    (1)

where W_b is the weight for band b; N_Merge, N_Obj1, and N_Obj2 represent the number of pixels within the merged object, object 1, and object 2, respectively; and σ_Merge, σ_Obj1, and σ_Obj2 are the respective standard deviations in band b. When a possible merge of a pair of image objects is examined, the fusion heterogeneity value H between those two objects is calculated and compared to the scale parameter T. The two objects are merged when H < T. The scale parameter is a measure of the maximum change in heterogeneity that may occur when merging two image objects.

Watershed transformation in mathematical morphology is a powerful tool for image segmentation and can also be employed for object-based analysis. Li and Xiao (2007) presented an extension of the watershed algorithm for image segmentation. A vector-based morphological approach was proposed to calculate the gradient magnitude, which was then input to the watershed transformation for image segmentation. The method showed encouraging results and can be used for the segmentation of high resolution multispectral imagery and object-based classification.

Akcay and Aksoy (2008) presented novel methods for automatic object detection in high resolution images by combining spectral information with structural information extracted using image segmentation. The segmentation algorithm was carried out using morphological operations applied to individual spectral bands with structuring elements of increasing sizes. The experimental results showed that the proposed methods were able to automatically detect, group, and label segments belonging to the same object classes.

1.2.2 The improved algorithms for object-based analysis

With the increasing development of object-based classification, some optimized algorithms have been proposed. Wang et al.
(2004a) proposed a hybrid classification that integrates the pixel- and object-based classifications. The pixel- and object-level maps were obtained using the maximum likelihood classifier and the nearest neighbor classification, respectively (namely, MLCNN). Gamba et al. (2007) proposed a boundary-optimized approach for high resolution image classification. The boundary and non-boundary pixels were discriminated and then classified separately. Bruzzone and Carlin (2006) developed a multilevel approach for high spatial resolution image processing. Its basic idea is the simultaneous use of multiscale representations as a feature extraction module that adaptively models the spatial context of each pixel. In experiments, its effectiveness was verified using urban and rural QuickBird images.

1.2.3 Applications of object-based analysis

Yu et al. (2006) exploited multiple features from the object-based analysis approach to create a comprehensive vegetation inventory at Point Reyes National Seashore in Northern California. In their studies, 52 features were calculated for each object, including spectral features, textures, topographic features, and geometric features. Wang et al. (2007a, 2007b) investigated object-based analysis for mangrove mapping at Punta Galeta on the Caribbean coast of Panama. Waske and van der Linden (2008) exploited the joint classification of multiple segmentation levels to fuse synthetic aperture radar (SAR) and optical remotely sensed data.

2 SUPER RESOLUTION (SR) RECONSTRUCTION TECHNIQUES

High resolution (HR) images are useful in many applications, such as remote sensing, video frame freezing, medical diagnostics, and military information gathering. However, because of the high cost and physical limitations of high precision optics and image sensors, it is not easy to obtain the desired HR images in many cases.
Therefore, super resolution (SR) image reconstruction techniques, which can reconstruct one or a set of HR images from a sequence of low resolution (LR) images of the same scene, have been widely researched in the last two decades.

The multi-frame SR problem was first formulated by Tsai and Huang (1984) in the frequency domain. They proposed a formulation for the reconstruction of an HR image from a set of under-sampled, aliased, but noise-free LR images. Kim et al. (1990) extended the formulation to consider observation noise as well as the effects of spatial blurring, and solved the extended formulation by a weighted recursive least squares method to improve computational efficiency. Kim and Su (1993) then extended this work by considering different blurs for each LR image. Rhee and Kang (1999) proposed a DCT-based algorithm in order to reduce computational costs. Furthermore, several papers have concentrated on wavelet SR methods (Chan et al., 2003; Lertrattanapanich & Bose, 2002; Ng et al., 2004). The advantage of the frequency domain methods is their low computational complexity; however, these methods are applicable only to global motion, and a priori information about the high resolution image cannot be exploited.

Most other super resolution techniques that have appeared in the literature operate in the spatial domain. Ur and Gross (1992) suggested a non-uniform interpolation method based on the generalized multi-channel sampling theorem of Papoulis (1977) and Yen (1956). The interpolation is followed by a deblurring process, and the relative shifts are assumed to be known precisely. Komatsu et al. (1993) presented a scheme to acquire an improved resolution image by applying the Landweber algorithm (Landweber, 1951) to multiple images taken simultaneously with multiple cameras. They employed the block matching technique to measure relative shifts. Alam et al.
(2000) developed a technique for real-time infrared image registration and SR reconstruction. They utilized a gradient-based registration algorithm for estimating the shifts between the acquired frames and presented a weighted nearest neighbor interpolation approach. The advantage of the non-uniform interpolation approach is that it requires a relatively low computational load, making real-time applications possible. However, in this approach the degradation models are limited (they are only applicable when the blur and the noise characteristics are the same for all LR images). Additionally, the optimality of the whole reconstruction algorithm is not guaranteed, since the restoration step ignores the errors that occur in the interpolation stage.

Irani and Peleg (1991) proposed an iterative back-projection (IBP) method adapted from computer-aided tomography (CAT). In this method, the estimate of the high resolution image is updated by back-projecting the error between motion-compensated, blurred, and sub-sampled versions of the current estimate of the high resolution image and the observed low resolution images, using an appropriate back-projection operator. The advantage of IBP is that it is intuitive and easy to understand. However, this method has no unique solution due to the ill-posed nature of the inverse problem, and it has some difficulty in choosing the back-projection operator. In addition, it is difficult to apply a priori constraints.

Stark and Oskoui (1989) proposed a noteworthy POCS-based formulation of the super resolution image reconstruction problem. In this method, the space of a high resolution image is intersected with a set of convex constraint sets representing desirable image characteristics, such as positivity, bounded energy, fidelity to data, smoothness, etc. Their approach was extended by Tekalp et al. (1992) to include observation noise and motion blur (Patti, 1994). Patti et al.
(1997) extended the POCS approach to account for arbitrary sampling lattices and non-zero aperture time. The advantage of POCS is that it utilizes the powerful spatial domain observation model and allows a convenient inclusion of a priori information. These methods have the disadvantages of non-uniqueness of solution, slow convergence, and a high computational cost.

Another class of super resolution algorithms is based on stochastic techniques, including Maximum Likelihood (ML) (Tom & Katsaggelos, 1994) and Maximum A Posteriori (MAP) approaches (Guan & Ward, 1992; Schultz & Stevenson, 1994, 1995, 1996; Hardie et al., 1997). MAP estimation with an edge-preserving Huber-Markov random field image prior is studied in Schultz and Stevenson (1994, 1995, 1996). MAP-based super resolution with simultaneous estimation of the registration parameters (motion between frames) was proposed in Hardie et al. (1997). Robustness and flexibility in modeling noise characteristics and a priori knowledge about the solution are the major advantages of the stochastic SR approach. Assuming that the noise process is white Gaussian, MAP estimation with convex energy functions in the priors ensures the uniqueness of the solution. Therefore, efficient gradient descent methods can be used to estimate the HR image. It is also possible to estimate the motion information and the HR image simultaneously.

The precise registration of the sub-pixel motion is very important for the reconstruction of the HR image. However, precise knowledge of these parameters is not always assured in real applications. Lee and Kang (2003) proposed a regularized adaptive HR reconstruction considering inaccurate sub-pixel registration.
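To make the IBP idea described above concrete, here is a deliberately minimal single-frame, motion-free sketch: plain decimation stands in for the blur-and-subsample observation model and nearest-neighbour up-sampling for the back-projection operator. Real implementations handle multiple frames, motion compensation, and blur; all names here are illustrative.

```python
import numpy as np

def downsample(img, f):
    """Simple decimation standing in for the LR observation model."""
    return img[::f, ::f]

def upsample(img, f):
    """Nearest-neighbour up-sampling as one simple back-projection operator."""
    return np.kron(img, np.ones((f, f)))

def ibp_super_resolve(lr, factor=2, iters=10, step=1.0):
    """Iterative back-projection in the spirit of Irani & Peleg (1991):
    simulate the LR image from the current HR estimate, then back-project
    the residual error into HR space until the simulation matches the data."""
    hr = upsample(lr, factor)                    # initial HR guess
    for _ in range(iters):
        simulated_lr = downsample(hr, factor)    # forward model on the guess
        residual = lr - simulated_lr             # error in LR space
        hr += step * upsample(residual, factor)  # back-project the error
    return hr
```

By construction the result is consistent with the observation: decimating the returned HR image reproduces the LR input, which illustrates both the strength (data fidelity) and the weakness (many HR images satisfy the same constraint) noted in the text.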
Two methods for the estimation of the regularization parameter for each LR frame were advanced, based on the approximation that the registration error noise is modeled as Gaussian with a standard deviation proportional to the degree of the registration error. However, the convergence of these methods was not rigorously proved.

Robust super resolution techniques have appeared in Zomet et al. (2001) and Farsiu et al. (2003, 2004), and take into account the existence of outliers (data that do not fit the model very well). In Zomet et al. (2001), a median filter was used in the iterative procedure to obtain the HR image. The robustness of this method is good when the errors from outliers are symmetrically distributed after a biased detection procedure. However, a threshold is needed to decide whether the bias is due to outliers or to aliasing information, and the mathematical justification of this method was not analyzed. In Farsiu et al. (2003, 2004), a robust SR method was proposed based on the use of the L1 norm in both the regularization and the measurement terms of the penalty function. Robust regularization based on a bilateral prior was proposed to deal with different data and noise models. However, the purely translational assumption for the entire low resolution image sequence may not be suitable for some real data sequences.

There are several examples of the SR technique being successfully applied in the remote sensing area. The first multi-frame SR idea was motivated by the requirement to improve the resolution of Landsat remote sensing images. In 2002, CNES (the French National Centre for Space Studies) successfully launched the SPOT5 satellite. Using the SR technique, SPOT5 can deliver a 2.5 meter panchromatic image through the processing of two 5 meter images, which are shifted by half a sampling interval by a double CCD linear array. Shen et al.
(2007) proposed a SR reconstruction algorithm applied to real MODIS (moderate resolution imaging spectro-radiometer) remote sensing images in the same spectral band. They employed a truncated quadratic cost function to exclude the outliers in the sub-pixel registration part, obtaining accurate photometric and geometric parameters among the observed images, and then used MAP estimation with a robust L1 norm data fidelity term and an edge-preserving Huber prior to produce the desired HR image in the reconstruction part.

3 HYPERSPECTRAL DATA PROCESSING

Recently, hyperspectral imagery has been attracting increased attention due to the wealth of information that can be extracted from these images for a variety of applications. Many military and civilian applications involve such areas as global environment monitoring, mapping, geology, forestry, agriculture, water quality management, and so on. These images are capable of producing a large amount of data very quickly due to the high resolution sampling of both the spatial and spectral dimensions. The processing cost for this large quantity of data may be very high. A great deal of research has aimed to find more efficient ways to process this data type, and recent research methodologies can be classified into two kinds: pure and mixed pixel methods.

3.1 Pure pixel based methods

These methods are based on the hypothesis that each pixel in the image is composed of a single kind of land object. Pure pixel based methods can be separated into two sub-groups: vegetation index and statistical methods.

3.1.1 Vegetation index methods

A vegetation index is a parameter extracted from the spectral features of objects, including spectral matching recognition and land object reconstruction (Goetz, 1997). In these methods, the spectral signatures measured by fieldwork are compared with those extracted by the imaging spectrometers to distinguish different classes.
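This comparison is usually made with a spectral similarity measure. Below is a minimal sketch using the spectral angle, the quantity behind the widely used spectral angle mapper; the library spectra and function names are illustrative assumptions.

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Angle (radians) between a pixel spectrum and a reference spectrum.
    Smaller angles mean a better match; scaling a spectrum by a constant
    (e.g. illumination change) leaves the angle unchanged."""
    cos = np.dot(pixel, reference) / (
        np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def classify_by_matching(pixel, library):
    """Assign the library class whose spectrum makes the smallest angle."""
    return min(library, key=lambda name: spectral_angle(pixel, library[name]))
```

Because the angle ignores overall brightness, a shadowed vegetation pixel still matches the vegetation reference, which is one reason this measure is popular for field-to-image comparisons.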
In order to improve the efficiency and speed of the analysis of hyperspectral data from imaging spectrometers, these spectral signatures are usually coded to extract their spectral features. Kruse et al. (1993) proposed spectral matching, which is one of the most widely used methods for analyzing hyperspectral data. The main problem with the vegetation index approach is that it is difficult to construct a universal vegetation index suitable for most hyperspectral data. Zhang et al. (2006, 2007) developed a universal pattern decomposition method (UPDM) to construct such an index, the VIUPD.

3.1.2 Statistical methods

Statistical methods are very important in hyperspectral data processing and are especially widely used in target detection and classification of hyperspectral data. In these methods, each band is regarded as a random variable, and probability and statistics methods are then applied to extract the statistical characteristics of the image. Due to the hyper-dimensionality, dimension reduction has to be performed first to reduce the computational time. The most important application of statistical methods is anomaly detection. The hyperspectral data are considered to follow a certain distribution, and the anomalies are the pixels that do not fit the distribution. RX and its extended algorithms (Yu & Reed, 1993; Kwon & Nasrabadi, 2005; Hruschka & Ebecken, 1999) are efficient methods for anomaly detection.

3.2 Mixed pixel based methods

Due to the complex conditions in the field and the limited spatial resolution of hyperspectral data, mixed pixels occur widely in hyperspectral data (Zhang & Li, 1998). Mixed pixel models were proposed to solve this problem. The models can be summarized as two types: linear mixture models and non-linear mixture models. The linear mixture model is the most widely used, as it is very simple and usually has a clear physical meaning.
However, when a linear mixture model is used, the number of classes to be extracted should be smaller than the number of bands of the hyperspectral data. In order to avoid this limitation of linear mixture models, non-linear models were proposed, in which the mixed pixels are expressed as the sum of the residual error and the high order moments of the endmembers.

One of the most important applications of mixed pixel based methods is the detection of sub-pixel targets in hyperspectral data. Recently, sub-pixel target detection methods have mainly focused on the linear mixture model and matched filters to find the possible occurrence of a target spectrum in the pixels of the hyperspectral data. OSP (orthogonal subspace projection) (Chang, 2005), PP (projection pursuit) (Chiang et al., 2001), and CEM (constrained energy minimization) (Settle, 2002) are the most widely used methods. Recently, several new algorithms have been introduced for sub-pixel target detection, such as kernel-based methods (Nasrabadi & Kwon, 2005; Kwon & Nasrabadi, 2004; Gu et al., 2008; Kwon & Nasrabadi, 2006) and morphology-based methods (Roberts et al., 1998). In addition, another factor affecting the detection results is spectral variability. Several models have been proposed to address this problem, such as the sub-space approach (Kwon & Nasrabadi, 2006).

The other important applications of mixed pixel based methods are endmember extraction and mixed pixel decomposition. Endmember extraction can be performed in two ways: (1) by deriving the endmembers directly from the image (image endmembers), or (2) from field or laboratory spectra of known target materials (library endmembers); see Roberts et al. (1998) for a comparison of the two. The risk in using library endmembers is that these spectra are rarely acquired under the same conditions as the airborne data. Image endmembers have the advantage of being collected at the same scale as the data and can thus be more easily associated with features in the scene.
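As a concrete example of the matched-filter style detectors named above, the sketch below implements the basic CEM filter, which is constrained to output 1 for the target spectrum while minimizing the average output energy over the scene. The regularization term and all names are illustrative choices, not a reference implementation.

```python
import numpy as np

def cem_detector(cube, target):
    """Constrained energy minimization over an (h, w, bands) hyperspectral
    cube: w = R^{-1} d / (d^T R^{-1} d), where R is the sample correlation
    matrix of the scene and d is the target spectrum."""
    h, w, b = cube.shape
    x = cube.reshape(-1, b).astype(float)
    R = x.T @ x / len(x)                         # sample correlation matrix
    Rinv = np.linalg.inv(R + 1e-6 * np.eye(b))   # small ridge for stability
    filt = Rinv @ target / (target @ Rinv @ target)
    return (x @ filt).reshape(h, w)              # detection score per pixel
```

By construction the filter responds with exactly 1 wherever a pixel equals the target spectrum, while typical background pixels score near 0, so thresholding the score map flags sub-pixel target candidates.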
An image endmember (IE) is obtained by locating a pixel in the scene with the maximum abundance of the physical endmember it represents, but there may be cases where it is not possible for a certain algorithm to find such pure pixels in a scene. During the last decade, several algorithms have been proposed for autonomous/supervised endmember selection from hyperspectral scenes, including the manual endmember selection tool (MEST) (Bateson & Curtiss, 1996), the pixel purity index (PPI) (Bowles et al., 1995), N-FINDR, vertex component analysis (VCA) (Nascimento & Dias, 2005), iterative error analysis (IEA) (Neville et al., 1999), the iterated constrained endmember (ICE) algorithm (Berman et al., 2004), the optical real-time adaptive spectral identification system (ORASIS) (Boardman et al., 1995), and automated morphological endmember extraction (AMEE) (Plaza et al., 2004).

PPI, N-FINDR, CCA, and VCA might be characterized as instances of the classic approach to endmember selection, based on the search for spectral convexities. While PPI is partially automated, N-FINDR, CCA, and VCA are fully automated.
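Once endmembers have been selected, per-pixel abundances follow from the linear mixture model discussed in Section 3.2. A minimal unconstrained least-squares sketch is given below; real unmixing systems add non-negativity and sum-to-one constraints, and, as noted above, require no more endmembers than bands. All names are illustrative.

```python
import numpy as np

def unmix(pixel, endmembers):
    """Unconstrained least-squares abundances for the linear mixture model
    pixel ~= E @ a, with `endmembers` given as rows of shape (n_end, bands).
    Needs n_end <= bands, echoing the constraint stated in the text."""
    E = np.asarray(endmembers, dtype=float).T   # bands x endmembers
    a, *_ = np.linalg.lstsq(E, np.asarray(pixel, dtype=float), rcond=None)
    return a
```

For a pixel that is an exact mixture of the library spectra, the recovered abundance vector reproduces the mixing fractions; for noisy pixels it is the least-squares best fit, and the residual norm itself is a useful model-fit diagnostic.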

Remote Sensing: English Notes

1. Definition of remote sensing: Remote sensing is the science and art of obtaining information about a phenomenon without being in contact with it. Remote sensing deals with the detection and measurement of phenomena with devices sensitive to electromagnetic energy, such as:
- Light (cameras and scanners)
- Heat (thermal scanners)
- Radio waves (radar)
2. Passive remote sensing: makes use of sensors that detect the reflected or emitted electromagnetic radiation from natural sources.
3. Active remote sensing: makes use of sensors that detect reflected responses from objects that are irradiated (辐射) by artificially generated energy sources, such as radar.
4. Platforms: vehicles that carry the sensor (truck, aircraft, space shuttle, satellite, etc.).
5. Sensors: devices that detect electromagnetic radiation (camera, scanner, etc.).

1. Electromagnetic spectrum (EMS) (电磁波谱): the wavelength regions of electromagnetic radiation have different names, ranging from gamma ray, X-ray, ultraviolet (UV), visible light, and infrared (IR) to radio wave, in order from the shorter wavelengths.
2. Atmospheric scattering (散射) is the unpredictable diffusion of radiation by particles in the atmosphere.
3. Rayleigh scatter (瑞利散射) is common when radiation interacts with atmospheric molecules (gas molecules) and other tiny particles (aerosols) that are much smaller in diameter than the wavelength of the interacting radiation.
4. Mie scatter (米氏散射) exists when the atmospheric particle diameters are essentially equal to the wavelengths of the energy being sensed.
5. Extinction (衰减): the transmission of sunlight through the atmosphere is affected by absorption and scattering by atmospheric molecules and aerosols. This reduction of the sunlight's intensity is called extinction.
6. Spectral reflectance curve (反射波谱曲线): a graph of the spectral reflectance of an object as a function of wavelength is called a spectral reflectance curve.

1. A sun-synchronous orbit is a near-polar orbit whose altitude is such that the satellite will always pass over a location at a given latitude at the same local solar time. In this way, the same solar illumination conditions (except for seasonal variation) can be achieved for the images of a given location taken by the satellite.

1. Electromechanical (光机) scanner: the sensor oscillates (振荡) from side to side to form the image.
2. Linear array (线性列阵): an array of detectors is used to simultaneously (同时地) sense the pixel (像素) values along a line.
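Rayleigh scattering strength varies roughly as wavelength to the minus fourth power, which is why shorter (blue) wavelengths scatter far more than red. A small sketch of that ratio:

```python
# Rayleigh scattering intensity is proportional to wavelength**-4.
def rayleigh_ratio(lam_short_nm, lam_long_nm):
    """How much more strongly the shorter wavelength scatters."""
    return (lam_long_nm / lam_short_nm) ** 4

ratio = rayleigh_ratio(450.0, 650.0)  # blue light vs red light
print(round(ratio, 2))  # → 4.35
```

So blue light at 450 nm is scattered roughly four times more strongly than red light at 650 nm, the classic explanation for blue skies.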

Research Progress in Remote Sensing Image Information Processing Techniques: English Translation

Research Progress in Remote Sensing Image Information Processing Techniques
Zhang Liangpei, Huang Xin
State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, Hubei, China

Abstract: This paper reviews research progress in remote sensing image information processing, covering information extraction from high-resolution imagery, image super-resolution, hyperspectral image processing and target detection, and artificial-intelligence methods for image processing and classification. For high-resolution image processing, we discuss the significance of spatial information (texture, shape, structure, and objects) for high-resolution image interpretation, and analyze the progress and open problems of wavelet texture, spatial co-occurrence texture, shape feature extraction, and object-oriented classification. For super-resolution, the paper introduces the latest techniques and their application to remote sensing imagery (SPOT5 and MODIS). For hyperspectral data processing, recent advances are reviewed from the pure-pixel and mixed-pixel perspectives: vegetation indices and statistical methods for pure pixels, and spectral unmixing and endmember extraction for mixed pixels. For intelligent information processing, we first review the application of neural networks and genetic algorithms to remote sensing image processing, and then introduce the latest progress of artificial immune systems for multispectral and hyperspectral image classification.

Keywords: high resolution, super-resolution, hyperspectral, artificial intelligence

Recently, remote sensing images with high spatial resolution (e.g., QuickBird, IKONOS, and SPOT5) and rich spectral channels (Hyperion, MODIS, MERIS, etc.) have been able to provide a large amount of information, opening up new avenues for remote sensing applications (e.g., urban mapping, environmental monitoring, precision agriculture, human-environment-earth interactions, etc.).

However, they also challenge image processing and classification techniques, owing to the rich spatial information of high-resolution data and the hyper-dimensional features of hyperspectral data.

This suggests that conventional methods are seriously inadequate for these new sensors.

In this context, this paper aims to review the development of image processing techniques for these new data types.

The paper is organized as follows. Section 1 discusses information extraction and classification for high-spatial-resolution images, including spatial feature extraction and object-based analysis. Section 2 focuses on super-resolution reconstruction techniques. Section 3 introduces hyperspectral data analysis, and Section 4 discusses artificial-intelligence methods for remote sensing applications.

Photogrammetry and Remote Sensing: Status Quo and Developing Trends (Chinese-English)

The Status Quo and Developing Trend of Photogrammetry and Remote Sensing

KEYWORDS: photogrammetry; remote sensing; developing trend

ABSTRACT: With the rapid development of new science and technology, especially computer technology, space technology, and information technology, human society has entered the information age. The construction of the information highway, the emergence and formation of geo-spatial information science (geomatics), and the concept of the "digital earth" provide a solid foundation, rare opportunities, and a clear direction for the development of photogrammetry and remote sensing, while also posing a series of challenges to the further development of the discipline. US Vice President Al Gore put forward the concept of the "digital earth" on Jan. 31, 1998; realizing this huge system will require at least six key supporting technologies, namely computer science, mass storage, high-resolution satellite imagery, broadband networks, interoperability, and metadata.

English Verbs for Planetary Remote Sensing

Based on the characteristics and needs of planetary remote sensing, the following are some common verbs. They mainly concern the processes of acquiring, processing, analyzing, and interpreting remote sensing data.

1. Acquire: obtain or collect remote sensing data
2. Process: process remote sensing data
3. Analyze: analyze remote sensing data
4. Interpret: interpret remote sensing data
5. Classify: divide remote sensing data into categories
6. Extract: extract features or information
7. Monitor: monitor changes in remote sensing data
8. Validate: verify the accuracy of remote sensing data
9. Fuse: fuse multiple remote sensing data sources
10. Detect: detect or discover specific targets or phenomena
11. Map: produce maps or charts from remote sensing imagery
12. Model: build mathematical or statistical models from remote sensing data
13. Simulate: run simulations using remote sensing data
14. Visualize: visualize remote sensing data
15. Compare: compare different remote sensing data
16. Track: track changes or movement in remote sensing data

All of these verbs can be used to describe the various operations and analysis procedures in planetary remote sensing.

Research and Applications of Remote Sensing Image Processing Techniques

With the continuous development of technology, remote sensing image processing has been widely applied in many fields. Remote sensing is the science and technology of using distant sensors carried by satellites, aircraft, and other platforms, together with image processing techniques, to acquire physical, chemical, and ecological information about the Earth's surface and atmosphere, yielding information on physical geography, human geography, and socio-economic conditions. This article discusses the research and applications of remote sensing image processing techniques.

1. Research on remote sensing image processing techniques

Remote sensing image processing is the technology of digitizing, processing, analyzing, and applying remote sensing images, and is an important component of remote sensing. It currently covers the following aspects.

(1) Acquisition and processing of remote sensing data. Remote sensing data are acquired by satellites, aircraft, and other platforms, and are then processed on computers. Processing includes geometric correction, atmospheric correction, quality checking, mosaicking, and normalization, to obtain data of higher quality and accuracy.

(2) Remote sensing image classification. Classification, i.e., assigning the pixels of different regions to different categories, is an important step in remote sensing image processing. Many classification methods exist, such as maximum likelihood classification, support vector machine classification, neural network classification, and regression-based classification.
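As a minimal sketch of the maximum likelihood family of classifiers mentioned above, the code below models each class as a Gaussian with per-band mean and variance (diagonal covariance, a simplifying assumption) and assigns a pixel to the class with the highest log-likelihood. The training spectra are invented 3-band values.

```python
import math

# Maximum likelihood classification sketch: per-class Gaussian with
# per-band mean/variance estimated from (invented) training pixels.
train = {
    "water":      [[0.05, 0.04, 0.02], [0.06, 0.05, 0.03], [0.04, 0.04, 0.02]],
    "vegetation": [[0.08, 0.12, 0.45], [0.07, 0.11, 0.50], [0.09, 0.13, 0.48]],
}

def fit(samples):
    n, bands = len(samples), len(samples[0])
    mean = [sum(s[b] for s in samples) / n for b in range(bands)]
    var = [sum((s[b] - mean[b]) ** 2 for s in samples) / n + 1e-6
           for b in range(bands)]  # small floor keeps variances positive
    return mean, var

models = {c: fit(s) for c, s in train.items()}

def log_likelihood(x, mean, var):
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mean, var))

def classify(x):
    return max(models, key=lambda c: log_likelihood(x, *models[c]))

print(classify([0.05, 0.05, 0.02]))   # a water-like pixel
print(classify([0.08, 0.12, 0.47]))   # a vegetation-like pixel
```

An operational classifier would use full covariance matrices and many more training samples per class; the structure, however, is the same.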

(3) Remote sensing change detection. Change detection compares and analyzes remote sensing images acquired at different times to determine the changes in terrain, land use, and land cover between the time points. The technique is widely used in urban planning, resource management, environmental protection, and natural-disaster monitoring.
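The simplest form of the change detection described above is band differencing: subtract co-registered images from two dates and flag pixels whose absolute difference exceeds a threshold. The two 3x3 single-band grids below are invented.

```python
# Change detection by image differencing with a fixed threshold.
date1 = [[10, 12, 11], [40, 42, 41], [10, 11, 12]]
date2 = [[11, 12, 10], [41, 90, 42], [10, 55, 11]]

def change_mask(img_a, img_b, threshold):
    """1 where |a - b| > threshold (changed), else 0."""
    return [[1 if abs(a - b) > threshold else 0
             for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]

mask = change_mask(date1, date2, threshold=10)
print(mask)  # → [[0, 0, 0], [0, 1, 0], [0, 1, 0]]
```

In practice the threshold is chosen from the difference-image statistics, and radiometric normalization between dates is applied first so that illumination differences are not flagged as change.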

(4) Remote sensing photogrammetry. Photogrammetry is an important application of remote sensing. It measures and locates feature points in remote sensing images to obtain geometric information about the various objects in them. The technique is also widely used in surveying and mapping, urban planning, and transportation.

2. Applications of remote sensing image processing techniques

Remote sensing image processing has been widely applied in many fields; several application areas are introduced below.

(1) Land-use and land-cover monitoring. Through image classification and change detection, changes in land use and land cover can be understood, which is useful for urban planning, ecological and environmental protection, and related fields.

(2) Intelligent agricultural production. Rapid surveys, field checking, and farmland classification with remote sensing imagery enable precision management and intelligent agriculture. For example, during the growing season, monitoring and change detection on farmland imagery can reveal changes in crop growth in time, enabling real-time monitoring of agricultural production.
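Crop monitoring of the kind described above commonly rests on vegetation indices; the standard one is NDVI = (NIR - Red) / (NIR + Red). A minimal sketch with invented reflectance values:

```python
# Normalized Difference Vegetation Index from NIR and red reflectance.
def ndvi(nir, red):
    return (nir - red) / (nir + red)

healthy = ndvi(nir=0.50, red=0.08)  # vigorous vegetation reflects NIR strongly
bare    = ndvi(nir=0.30, red=0.25)  # bare soil: NIR and red are similar
print(round(healthy, 2), round(bare, 2))  # → 0.72 0.09
```

Healthy vegetation yields NDVI values well above those of soil or water, so thresholding or tracking NDVI over the season indicates crop vigor.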

An Introduction to Remote Sensing (High-School English Essay)

Remote sensing is a fascinating technology that allows us to gather information about the Earth's surface and atmosphere from a distance. It involves the use of sensors, such as cameras and radar, to capture data about the environment. This data can then be processed and analyzed to provide valuable insights into a wide range of applications, from agriculture and forestry to weather forecasting and disaster management.

One of the key benefits of remote sensing is its ability to provide information about large areas of the Earth's surface quickly and efficiently. For example, satellite imagery can be used to monitor the health of crops across an entire region, or to track the movement of a hurricane across the ocean. This information can then be used to make informed decisions about how to manage resources, respond to emergencies, or plan for future development.

Another advantage of remote sensing is its ability to provide information about areas that are difficult or dangerous to access. For example, remote sensing can be used to monitor the health of forests in remote or mountainous areas, or to track the movement of glaciers in polar regions. This information can then be used to inform conservation efforts, or to better understand the impacts of climate change on these sensitive ecosystems.

Overall, remote sensing is a powerful tool for understanding our planet and making informed decisions about how to manage it. Whether you're a scientist, a farmer, or a policy maker, remote sensing can provide valuable insights into the complex systems that make up our world.

Research Progress on Deep-Learning-Based Remote Sensing Image Interpretation

Introduction. Remote sensing image interpretation is the process of identifying and analyzing ground objects, landforms, and other information from the large volumes of imagery acquired by remote sensing satellites or unmanned aerial vehicles. It has wide applications in agriculture, environmental monitoring, urban planning, and other fields. Traditional interpretation relies on expert experience and rules to classify images, but with the development of deep learning, deep-learning-based interpretation has gradually become a research hotspot. This article introduces the research progress of deep-learning-based remote sensing image interpretation.

Background. Deep learning is a machine learning technique that simulates human neural networks, building multilayer networks to learn from and analyze complex data. Because deep networks have strong nonlinear fitting ability and good feature-extraction ability, they have achieved great success in image recognition, speech recognition, and related fields. In recent years, researchers have begun applying deep learning to remote sensing image interpretation to improve its accuracy and efficiency.

Methods. Deep-learning-based interpretation mainly comprises the following steps: data preprocessing, feature extraction, classifier construction, and model evaluation. Data preprocessing normalizes and denoises the imagery to remove interference that would affect the interpretation results; common preprocessing methods include histogram equalization and principal component analysis.
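Histogram equalization, one of the preprocessing steps named above, remaps each gray level through the image's cumulative distribution so that the levels spread across the available range. A sketch on a tiny 8-level image:

```python
# Histogram equalization on a 4x4 image with 8 gray levels.
img = [[0, 1, 1, 2],
       [1, 2, 2, 3],
       [2, 3, 3, 7],
       [3, 3, 7, 7]]
levels = 8

flat = [p for row in img for p in row]
n = len(flat)
hist = [flat.count(g) for g in range(levels)]
cdf, running = [], 0
for h in hist:
    running += h
    cdf.append(running)

# Standard remap: round((cdf - cdf_min) / (n - cdf_min) * (levels - 1))
cdf_min = min(c for c in cdf if c > 0)
lut = [round((c - cdf_min) / (n - cdf_min) * (levels - 1)) for c in cdf]
eq = [[lut[p] for p in row] for row in img]
print(eq)
```

The mid-tones that were crowded into levels 1..3 are stretched toward the full 0..7 range, which raises contrast without altering the relative ordering of intensities.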

Feature extraction is the core of deep-learning-based interpretation. In traditional interpretation, feature extraction relies mainly on hand-designed filters and image texture features; in deep-learning-based interpretation, convolutional neural networks (CNNs) are generally used to extract image features automatically. A CNN is a deep learning network specialized for image processing that extracts image features through stacked convolution and pooling layers.
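The two layer types just described can be sketched in a few lines of pure Python: a "valid" 2D convolution followed by 2x2 max pooling. The 5x5 patch and the edge-detecting kernel are invented; a real CNN learns its kernels during training.

```python
# Minimal CNN building blocks: 2D convolution (valid mode) + 2x2 max pool.
def conv2d(img, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h, out_w = len(img) - kh + 1, len(img[0]) - kw + 1
    return [[sum(img[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

def maxpool2x2(img):
    return [[max(img[i][j], img[i][j + 1], img[i + 1][j], img[i + 1][j + 1])
             for j in range(0, len(img[0]) - 1, 2)]
            for i in range(0, len(img) - 1, 2)]

patch = [[0, 0, 9, 9, 9]] * 5          # bright region on the right
kernel = [[-1, 1], [-1, 1]]            # responds to left-to-right jumps

feature_map = conv2d(patch, kernel)    # 4x4 response map
pooled = maxpool2x2(feature_map)       # 2x2 pooled summary
print(pooled)  # → [[18, 0], [18, 0]]
```

The convolution responds strongly along the vertical edge, and pooling keeps that strong response while shrinking the map, exactly the feature-then-downsample pattern of CNN layers.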

Classifier construction feeds the extracted image features into a classifier for land-cover classification. Common classifiers include support vector machines (SVM) and random forests, which assign object classes based on the extracted features.

Model evaluation is an important step in deep-learning-based interpretation. It assesses the accuracy and robustness of the interpretation model by measuring the differences between predicted and actual results. Common evaluation metrics include precision, recall, and the F1 score.
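The three evaluation metrics named above follow directly from the binary confusion-matrix counts; the counts below are invented for illustration.

```python
# Precision, recall, and F1 from true/false positive and false negative counts.
def prf1(tp, fp, fn):
    precision = tp / (tp + fp)          # of the predicted positives, how many are right
    recall = tp / (tp + fn)             # of the actual positives, how many were found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

p, r, f1 = prf1(tp=80, fp=20, fn=20)
print(p, r, round(f1, 2))  # → 0.8 0.8 0.8
```

For multi-class interpretation these are computed per class and then averaged (macro or weighted), and overall accuracy and the kappa coefficient are often reported alongside them.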

Research progress. With the continuous development of deep learning, deep-learning-based interpretation has achieved many breakthroughs across fields. In agriculture, for example, it enables the identification and analysis of crops.

New Progress in Remote Sensing Image Processing Techniques

1. Introduction. Remote sensing image technology acquires ground imagery from satellites, aircraft, and other platforms for the observation, measurement, and analysis of surface objects. In recent years, with the continuous development of man-made satellites, remote sensors, and computer technology, remote sensing image processing has kept advancing. This article focuses on new progress and applications in data preprocessing, classification and recognition, target detection, and image fusion.

2. Remote sensing data preprocessing. Preprocessing is a very important step in remote sensing image processing. Because remote sensing data suffer from data loss, noise interference, and large-area surface occlusion, preprocessing is needed to improve the quality and accuracy of the image data. New preprocessing methods include the following.

(1) Deep-learning-based preprocessing. This new class of data-processing methods extracts quality-rich features from the imagery and processes the data accordingly. It uses convolutional neural networks (CNNs) to learn features from the imagery, optimizing the features through back-propagation. For example, CNN techniques can mitigate interference from cloud, fog, smoke, and rain in remote sensing images, safeguarding data quality.

(2) Wavelet-based denoising. The wavelet transform is a very common mathematical transform that maps data from the temporal or spatial domain into the frequency domain for denoising and noise reduction. In remote sensing image processing, it is often used to reduce noise interference and improve image quality and accuracy.
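The core of wavelet denoising can be shown with one level of a 1-D Haar transform: split the signal into averages (approximation) and differences (detail), soft-threshold the small detail coefficients, and invert. The signal below is an invented step edge with small noise; real pipelines use 2-D multilevel transforms.

```python
# One-level 1-D Haar wavelet denoising with soft thresholding.
def haar_forward(x):
    approx = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def soft(v, t):
    """Shrink a coefficient toward zero by t; kill it if |v| <= t."""
    if v > t:  return v - t
    if v < -t: return v + t
    return 0.0

def haar_inverse(approx, detail):
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out

signal = [10.0, 10.4, 10.2, 9.8, 30.0, 30.2, 10.1, 9.9]  # step + small noise
approx, detail = haar_forward(signal)
detail = [soft(d, 0.3) for d in detail]   # small details are treated as noise
denoised = haar_inverse(approx, detail)
print([round(v, 2) for v in denoised])
```

The small fluctuations vanish while the large step (which lives in the approximation coefficients) is preserved, which is why wavelet shrinkage removes noise without blurring edges the way a plain moving average would.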

3. Remote sensing data classification and recognition. Classification and recognition assign remote sensing imagery, according to its features and attributes, to categories of objects, land cover, land type, or vegetation type. New classification and recognition techniques include the following.

(1) Deep-learning-based image classification. Deep learning is being applied ever more widely to remote sensing image classification. Deep methods can rapidly classify the different regions of an image and quickly extract its features. Moreover, they can effectively overcome interference factors such as terrain, vegetation occlusion, and cloud and fog that hinder image classification.

(2) Ensemble-learning-based image classification. Ensemble learning combines multiple base classifiers into a single high-performance classifier. In remote sensing image classification, ensemble learning likewise combines several different classifiers to reduce the error rate and strengthen classification performance.

Satellite Remote Sensing: English Vocabulary

Satellite remote sensing refers to the use of satellites to collect information about the Earth's surface. This technology uses sensors and cameras on board the satellite to capture images and data, which can then be used to monitor and study various aspects of the Earth, such as vegetation, natural disasters, climate change, and urban development. Satellite remote sensing has become an important tool for environmental monitoring, disaster management, resource management, and urban planning, as it allows for the collection of large-scale, real-time data that can be analyzed to understand and address various global and local challenges.

Research Progress in Remote Sensing Data Processing Techniques

Remote sensing applies knowledge from physics and the earth sciences, using meteorological and satellite remote sensing to acquire information and data about the land surface and atmosphere, describing the Earth's surface and its changes in an objective, quantitative, high-precision way. Remote sensing data processing converts the raw acquired data into visual images or quantitative data, and is one of the skills that remote sensing researchers must master. This article discusses research progress in remote sensing data processing, in the hope of assisting workers in related fields.

1. Optical remote sensing data processing. Optical data processing collects images of ground targets with photoelectric sensors operating in specific spectral bands (such as digital cameras and optical satellites) and then processes, analyzes, and interprets them. Typical steps include image preprocessing, feature extraction, and classification. Preprocessing is a series of data-enhancement operations including denoising, magnification, sharpening, and band combination. Feature extraction identifies surface features (such as water bodies, bare soil, vegetation, and residential areas) by observing and analyzing the image data, in preparation for subsequent classification. Classification then assigns each extracted target to a category according to predefined criteria.

In recent years, with the rapid development of big-data and machine-learning technology, optical remote sensing data processing has matured steadily. For example, deep learning can automatically recognize and classify satellite imagery by training complex neural-network models, greatly improving processing efficiency and accuracy. Many other innovations, such as multiscale image processing, feature selection, and adaptive parameter optimization, have also been widely applied in optical remote sensing data processing.

2. Microwave remote sensing data processing. Microwave remote sensing, based on radar, acquires surface characteristics by receiving radar signals returned from the ground. It mainly covers the X, C, L, and S bands and is used to study the scattering and absorption behavior of surface materials, surface height, terrain, and related factors. Microwave data processing comprises preprocessing, data analysis, and data interpretation. Preprocessing includes filtering, calibration, denoising, and segmentation; these operations are indispensable before microwave data processing and allow the signal variations of surface materials to be represented faithfully.

Remote Sensing Image Processing Application Project Plan

Remote sensing image processing is an essential component of various applications in agriculture, urban planning, disaster management, and environmental monitoring. The ability to extract valuable information from satellite imagery enables decision-makers to make informed choices and take necessary actions.

One of the primary goals of the remote sensing image processing project is to enhance the accuracy and efficiency of image analysis through advanced algorithms and techniques. By utilizing machine learning and deep learning algorithms, we can automate the process of feature extraction, classification, and change detection.

Furthermore, the integration of geographic information systems (GIS) with remote sensing image processing plays a crucial role in spatial analysis and decision-making. By overlaying satellite imagery with geospatial data, we can create comprehensive maps and models that facilitate resource management and planning.

Research Progress in Remote Sensing Image Information Processing Techniques

1. Overview. Remote sensing image information processing is one of the core technologies of remote sensing science. With the rapid development of remote sensing, it is increasingly widely applied in geographic information systems, environmental monitoring, urban planning, disaster early warning, and many other fields. Its main task is to preprocess, enhance, interpret, and classify the acquired imagery in order to extract useful information. In recent years, with the rise of deep learning and artificial intelligence, notable progress has been made. This article comprehensively reviews progress in preprocessing, feature extraction, classification, and target-detection techniques, as well as the latest developments in application fields, to provide a useful reference for researchers and practitioners.

2. Overview of remote sensing image information processing techniques. This is a comprehensive technology integrating knowledge from multiple disciplines, mainly computer science, mathematics, physics, geography, and environmental science. Its core is to extract useful information from imagery acquired by various remote sensing platforms (satellites, unmanned aerial vehicles, high-altitude balloons, etc.) to meet the needs of monitoring, assessing, and managing the land surface, atmosphere, oceans, and other natural environments.

The main steps are preprocessing, image enhancement, feature extraction and recognition, and information extraction and application. The preprocessing stage applies radiometric correction, geometric correction, and similar operations to the raw imagery to improve its quality. Image enhancement improves the visual quality of the image and the accuracy of subsequent processing through algorithms such as histogram equalization and filtering. Feature extraction and recognition use specific algorithms, such as edge detection and texture analysis, to extract key information from the image, such as object type, shape, and size. The information extraction and application stage integrates and analyzes the information obtained in the preceding steps to meet various practical needs.

With the rapid development of remote sensing technology, image information processing keeps advancing. On one hand, as remote sensing platforms diversify and data become richer, the types and complexity of data to be processed keep increasing. On the other hand, with the rapid development of computer science and artificial intelligence, new theories and methods, such as deep learning and neural networks, are continually being introduced to improve processing accuracy and efficiency. Remote sensing image information processing is an important component of remote sensing technology and has broad application prospects in environmental monitoring, urban planning, disaster early warning, resource surveys, and other fields.

English Glossary Notes on Remote Sensing Technology

Remote sensing is a technique for acquiring information about targets with spaceborne or other distant sensors. Through remote sensing, all kinds of information about the Earth's surface can be obtained, including topography and landforms, climate change, land use, and vegetation cover. Driven by modern science and technology, remote sensing has become an important tool in geological exploration, environmental protection, resource monitoring, and agricultural development.

Remote sensing has developed across several disciplines, including sensor technology, signal processing, and data analysis. The sensor is the most important component of remote sensing: it converts electromagnetic radiation into usable signals. Common remote sensors include photoelectric sensors, microwave sensors, and lidar sensors. Through their different wavelength ranges, directionality, and resolution, these sensors acquire different types of target information.

Photoelectric sensors, typically cameras or imaging devices on satellites, receive light reflected from the Earth's surface to obtain the spectral, shape, and texture characteristics of targets. These characteristics can be used to study the physical and chemical properties of the surface, such as vegetation growth and sea-temperature variation. Microwave sensors use microwave radiation that penetrates cloud layers and gases to obtain surface microwave signals. Because they can see through cloud, smoke, and other interference, microwave sensors have unique advantages in atmospheric monitoring and weather warning; in addition, they can detect groundwater and geotechnical structures and are widely used in geological exploration and urban planning. Lidar sensors scan the surface with a laser beam and, by measuring the return time and intensity of the laser, obtain the elevation, shape, and structure of the surface. Lidar has high precision and high resolution and is widely applied in digital terrain models (DTM) and 3D city modeling.
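The lidar ranging principle just described reduces to a one-line formula: distance is half the round-trip travel time multiplied by the speed of light. A sketch with an invented echo delay:

```python
# Lidar ranging: range = c * t_round_trip / 2.
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_echo(round_trip_seconds):
    """Distance to the target that produced an echo after the given delay."""
    return C * round_trip_seconds / 2.0

d = range_from_echo(5e-6)  # a return received 5 microseconds after emission
print(round(d, 1))         # distance in metres
```

Combining many such ranges with the scanner's angles and the platform's position yields the point clouds from which DTMs and 3D city models are built.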

Besides sensors, signal processing and data analysis are indispensable parts of remote sensing. Signal processing aims to extract the information of interest, remove noise and interference, and improve image quality. Data analysis further processes and interprets the image data to generate quantitative information about the Earth's surface; its methods include image classification, feature extraction, change detection, and time-series analysis.

Remote sensing plays an important role in modern society. In resource monitoring and environmental protection, it provides data and information over large areas and at full spatio-temporal scales for studying and assessing the use and protection of natural resources.

Research Progress in Remote Sensing Image Processing Techniques

Remote sensing image technology uses remote sensing satellites to acquire information about the Earth's surface. As science and technology advance, its fields of application keep broadening, with important uses in agriculture, meteorology, environmental monitoring, urban planning, and many other areas. Alongside these applications, research on remote sensing image processing techniques has also become a popular field. The following discusses the development history of remote sensing image processing, its existing problems, and its future directions.

1. Development history. As demand for information about the Earth's surface has grown, remote sensing has developed continuously. Image processing, an important component of remote sensing, has evolved from early manual processing to computer-assisted processing, gradually forming a complete processing workflow. Early image processing relied mainly on manual interpretation and classification, which consumed substantial labor and material resources and, because of the subjectivity of human interpretation, affected the reliability of the results. To overcome the drawbacks of manual processing, computer-assisted processing gradually emerged, with statistical classification methods for image interpretation becoming widely used. With the continuous development of computer science and technology, image processing also began to adopt artificial intelligence and computer vision techniques, followed by the application of machine learning, neural networks, and deep learning. The introduction of these technologies further improved the efficiency and accuracy of remote sensing image processing and promoted its rapid development.

2. Existing problems. Although remote sensing image processing has made much progress, problems remain. First, monitoring capability in complex areas is insufficient: the recognition and monitoring of complex terrain, complex environments, and ground objects still need improvement, requiring continuous refinement of the algorithms to further improve the accuracy of data processing and analysis. Second, theoretical research on machine learning and deep learning algorithms is insufficient: both require theoretical support, so research into the theory of the algorithms themselves, not merely their application, must be strengthened. Third, labeled data are lacking: remote sensing data come from many different sources, which makes building labeled data sets difficult, yet establishing labeled data sets is a necessary consideration for training and further improving the algorithms.

3. Future directions. The prospects for remote sensing image processing technology are broad.

Aerospace Remote Sensing Professional English (Chinese-English Glossary)

航天遥感专业英语(中英文对照)遥感remote sensing资源与环境遥感remote sensing of natural resources and environment 主动式遥感active remote sensing被动式遥感passive remote sensing多谱段遥感multispectral remote sensing多时相遥感multitemporal remote sensing红外遥感infrared remote sensing微波遥感microwave remote sensing太阳辐射波谱solar radiation spectrum大气窗atmospheric window大气透过率atmospheric transmissivity大气噪声atmospheric noise大气传输特性characteristic of atmospheric transmission波谱特征曲线spectrum character curve波谱响应曲线spectrum response curve波谱特征空间spectrum feature space波谱集群spectrum cluster红外波谱infrared spectrum反射波谱reflectance spectrum电磁波谱electro-magnetic spectrum功率谱power spectrum地物波谱特性object spectrum characteristic热辐射thermal radiation微波辐射microwave radiation数据获取data acquisition数据传输data transmission数据处理data processing地面接收站ground receiving station数字磁带digital tape模拟磁带analog tape计算机兼容磁带computer compatible tape,CCT高密度数字磁带high density digital tape,HDDT图象复原image restoration模糊影象fuzzy image卫星像片图satellite photo map红外图象infrared imagery热红外图象thermal infrared imagery,thermal IR imagery微波图象microwave imagery成象雷达imaging radar熵编码entropy coding冗余码redundant code冗余信息redundant information信息量contents of information信息提取information extraction月球轨道飞行器lunar orbiter空间实验室Spacelab航天飞机space shuttle陆地卫星Landsat海洋卫星Seasat测图卫星Mapsat立体卫星Stereosat礼炮号航天站Salyut space station联盟号宇宙飞船Soyuz spacecraftSPOT卫星SPOT satellite,systeme pro batoire d’observation de la terse(法)地球资源卫星earth resources technology satellite,ERTS环境探测卫星environmental survey satellite地球同步卫星geo-synchronous satellite太阳同步卫星sun-synchronous satellite卫星姿态satellite attitude遥感平台remote sensing platform主动式传感器active sensor被动式传感器passive sensor推扫式传感器push-broom sensor静态传感器static sensor动态传感器dynamic sensor光学传感器optical sensor微波传感器microwave remote sensor光电传感器photoelectric sensor辐射传感器radiation sensor星载传感器satellite-borne sensor机载传感器airborne sensor姿态测量传感器attitude-measuring sensor 探测器detector摄谱仪spectrograph航空摄谱仪aerial spectrograph波谱测定仪spectrometer地面摄谱仪terrestrial spectrograPh测距雷达range-only radar微波辐射计microwave radiometer红外辐射计infrared 
radiometer侧视雷达side-looking radar, SLR真实孔径雷达real-aperture radar合成孔径雷达synthetic aperture radar,SAR 专题测图传感器thematic mapper,TM 红外扫描仪infrared scanner多谱段扫描仪multispectral scanner.MSS 数字图象处理digital image processing光学图象处理optical image processing实时处理real-time processing地面实况ground truth几何校正geometric correction辐射校正radiometric correction数字滤波digital filtering图象几何配准geometric registration of imagery图象几何纠正geometric rectification of imagery 图象镶嵌image mosaic图象数字化image digitisation彩色合成仪additive colir viewer假彩色合成false color composite直接法纠正direct scheme of digital rectification间接法纠正indirect scheme of digital rectification 图象识别image recognition图象编码image coding彩色编码color coding多时相分析multitemporal analysis彩色坐标系color coordinate system图象分割image segmentation图象复合image overlaying图象描述image description二值图象binary image直方图均衡histogram equalization直方图规格化histogram specification图象变换image transformation彩色变换color transformation伪彩色pseudo-color假彩色false color主分量变换principal component transformation 阿达马变换Hadamard transformation沃尔什变换Walsh transformation比值变换ratio transformation生物量指标变换biomass index transformation 穗帽变换tesseled cap transformation参照数据reference data图象增强image enhancement边缘增强edge enhancement边缘检测edge detection反差增强contrast enhancement纹理增强texture enhancement比例增强ratio enhancement纹理分析texture analysis彩色增强color enhancement模式识别pattern recognition特征feature特征提取feature extraction特征选择feature selection特征编码feature coding距离判决函数distance decision function概率判决函数probability decision function模式分析pattern analysis分类器classifier监督分类supervised classification非监督分类unsupervised classification盒式分类法box classifier method模糊分类法fuzzy classifier method最大似然分类maximum likelihood classification 最小距离分类minimum distance classification 贝叶斯分类Bayesian classification机助分类computer-assisted classification 图象分析image analysis。

Remote Sensing Science and Technology: English

## Remote Sensing Science and Technology

### Definition

Remote sensing is the science of acquiring information about an object without making physical contact with it. It is typically done using sensors mounted on aircraft, satellites, or other platforms. Remote sensing can be used to study a wide variety of phenomena, including the Earth's surface, atmosphere, and oceans.

### Applications

Remote sensing has a wide range of applications, including:

- Land use planning and management: Remote sensing can be used to map land cover and land use, which can help planners to make informed decisions about how to use land resources.
- Agriculture: Remote sensing can be used to monitor crop growth and identify areas of stress, which can help farmers to improve their yields.
- Forestry: Remote sensing can be used to map forest cover and monitor forest health, which can help foresters to manage forests sustainably.
- Water resources management: Remote sensing can be used to monitor water quality and quantity, which can help water managers to ensure that water resources are used wisely.
- Disaster management: Remote sensing can be used to monitor natural disasters, such as floods, droughts, and earthquakes, which can help emergency responders to prepare for and respond to disasters.

### Sensors

Remote sensing sensors can be divided into two main types:

- Passive sensors: Passive sensors measure the energy that is emitted or reflected by an object.
- Active sensors: Active sensors emit their own energy and measure the energy that is reflected back by an object.

The type of sensor used for a particular application depends on the specific information that is needed.

### Data Analysis

Remote sensing data is typically analyzed using computer software. The software can be used to extract information from the data, such as the location of objects, the size of objects, and the temperature of objects.

### Challenges

Remote sensing is a complex and challenging field. Some of the challenges associated with remote sensing include:

- Data quality: Remote sensing data can be affected by a variety of factors, such as the weather, the time of day, and the sensor platform.
- Data volume: Remote sensing data can be very large, which can make it difficult to store and process.
- Data interpretation: Remote sensing data can be complex and difficult to interpret.

Despite these challenges, remote sensing is a valuable tool that can be used to study a wide variety of phenomena.

Glossary: Remote Sensing Information Preprocessing

Remote sensing information processing is the technology of processing the information acquired by remote sensors. Since remote sensing information usually takes the form of images, this processing is also called remote sensing image information processing. Its main purposes are: (1) to remove radiometric and geometric distortions so that the processed image represents the original scene more faithfully; (2) to use enhancement techniques to highlight certain spectral and spatial characteristics of the scene, making objects easier to distinguish and interpret; (3) to further understand, analyze, and discriminate the processed image and extract the required thematic information. Remote sensing information processing is divided into analog processing and digital processing (see data acquisition and processing).

Remote sensing information preprocessing is divided into radiometric preprocessing and geometric preprocessing. Radiometric preprocessing basically includes radiometric calibration and radiometric correction; calibration is performed against test sites, while correction can be completed automatically with software such as ENVI. Geometric preprocessing is the so-called geometric correction: because the satellite moves in orbit while the Earth rotates during imaging, geometric distortion arises. For example, an image that should be rectangular may be distorted into a trapezoid; such an image cannot be interpreted because its objects are severely deformed over large areas, and visual interpretation is impossible. Geometric correction is therefore applied; generally speaking, every image needs it.
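A common building block of the geometric correction just described is an affine transform fitted to ground control points (GCPs): X = a*col + b*row + c and Y = d*col + e*row + f. The sketch below solves for the six coefficients from three invented GCPs and maps a pixel into map coordinates; operational correction uses many GCPs with least squares and then resamples the image.

```python
# Fit an affine image-to-map transform from three ground control points.
def solve3(m, v):
    """Solve a 3x3 linear system m*x = v by Cramer's rule."""
    def det3(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
              - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
              + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))
    d = det3(m)
    out = []
    for i in range(3):
        mi = [row[:] for row in m]
        for r in range(3):
            mi[r][i] = v[r]
        out.append(det3(mi) / d)
    return out

# (col, row) in the raw image -> (X, Y) in map coordinates (invented GCPs):
gcps = [((0, 0), (1000.0, 2000.0)),
        ((100, 0), (1200.0, 2010.0)),
        ((0, 100), (990.0, 2250.0))]

m = [[c, r, 1.0] for (c, r), _ in gcps]
a, b, c0 = solve3(m, [xy[0] for _, xy in gcps])
d, e, f0 = solve3(m, [xy[1] for _, xy in gcps])

def to_map(col, row):
    return a * col + b * row + c0, d * col + e * row + f0

print(to_map(50, 50))
```

The fitted coefficients capture the scale, rotation, and shear implied by the GCPs, which is exactly how a trapezoid-shaped raw image is pulled back onto a map grid.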

In addition, because the satellite flies at high altitude, the electromagnetic waves passing through the atmosphere are attenuated, so atmospheric correction is needed at image acquisition to remove atmospheric effects and restore image clarity. In practice, after removing atmospheric effects with tools such as FLAASH, the information in the image is rendered better and visual interpretation becomes clearer.



1. Processing of high-spatial-resolution images

Nowadays, commercially available very-high-resolution (VHR) satellite images (e.g., IKONOS, QuickBird, and SPOT-5) provide a large amount of spatial information. However, they pose challenges to image processing and classification techniques. The resulting high intra-class and low inter-class variability reduces the statistical separability of different land-cover types in the spectral domain, and conventional spectral classification methods have proven insufficient for interpreting VHR data (Myint et al., 2004; Zhang et al., 2006). It is well known that combining spatial and spectral information can effectively address this problem. Recent developments in spectral-spatial classification can be divided into (1) spatial feature extraction and (2) object-based analysis.

1.1 Spatial feature extraction

1.1.1 Wavelet features

Several researchers have used the wavelet transform to extract spatial information at different orientations and frequencies. Myint (2004) compared wavelet features with fractal, spatial autocorrelation, and spatial co-occurrence approaches, and the results showed that multi-band and multi-level wavelet methods can substantially increase classification accuracy. The fractal technique did not provide satisfactory classification accuracy, while the spatial autocorrelation and spatial co-occurrence techniques were relatively effective compared with the fractal approach. The experiment concluded that the wavelet method was the most accurate of the four. Zhang et al. (2006) extracted spectral-texture information by decomposing the image into four sub-bands at different frequencies and resolutions, and then integrating the low- and high-frequency information into spectral-texture features. Their experiments on QuickBird data sets verified the utility of wavelet features: they can effectively extract spatial information and help improve the purely spectral classification of high-resolution images. Meher et al. (2007) used features obtained by the wavelet transform (WT), instead of the original multispectral features, for land-cover classification of remote sensing images. The WT provides the spatial and spectral characteristics of a pixel together with its neighbors, which can be exploited for improved classification. The performance of the original and wavelet-feature (WF) based methods was compared experimentally; the WF-based method consistently achieved better results, and the biorthogonal 3.3 (Bior3.3) wavelet was found to outperform the other wavelets.
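A simplified stand-in for the wavelet texture features discussed above: a one-level 2-D Haar decomposition of a small patch into LL/LH/HL/HH sub-bands, with sub-band energies used as texture features. The 4x4 patch values are invented; real systems use multiple levels and smoother wavelets such as Bior3.3.

```python
# One-level 2-D Haar decomposition and sub-band energies as texture features.
def haar2d(img):
    h, w = len(img), len(img[0])
    ll = [[0.0] * (w // 2) for _ in range(h // 2)]
    lh = [[0.0] * (w // 2) for _ in range(h // 2)]
    hl = [[0.0] * (w // 2) for _ in range(h // 2)]
    hh = [[0.0] * (w // 2) for _ in range(h // 2)]
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            ll[i // 2][j // 2] = (a + b + c + d) / 4  # local average
            lh[i // 2][j // 2] = (a - b + c - d) / 4  # horizontal detail
            hl[i // 2][j // 2] = (a + b - c - d) / 4  # vertical detail
            hh[i // 2][j // 2] = (a - b - c + d) / 4  # diagonal detail
    return ll, lh, hl, hh

def energy(band):
    return sum(v * v for row in band for v in row)

patch = [[8, 0, 8, 0]] * 4  # strong vertical stripes (made-up values)
ll, lh, hl, hh = haar2d(patch)
print(energy(lh), energy(hl), energy(hh))
```

The striped patch concentrates all its detail energy in one sub-band, illustrating how sub-band energies encode the orientation and frequency of texture.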

1.1.2 Gray-level co-occurrence matrix (GLCM)

Puissant et al. (2005) studied the potential of a spectral/textural approach to improve classification accuracy for intra-urban land-cover types. Second-order statistics of the gray-level co-occurrence matrix were used as additional bands. In the experiments, four texture indices with six window sizes were created and tested on panchromatic images ranging from high to very high resolution. The results indicated that the best index for improving overall classification accuracy was the homogeneity measure with a 7x7 window size. Ouma et al. (2008) presented the results of GLCM texture analysis for differentiating forest and non-forest vegetation types in QuickBird imagery. The optimal GLCM windows for the land-cover classes within the scene were determined by semivariogram fitting. These optimal window sizes were then applied to eight GLCM texture measures (mean, variance, homogeneity, dissimilarity, contrast, entropy, angular second moment, and correlation) for scene classification. The experimental results were as follows: (1) spectral-only classification gave an overall accuracy of 58.69%; (2) the statistically derived 21x21 optimal mean-texture GLCM window combined with spectral information gave the best result, with an accuracy of 73.70%.

A key problem of window-based image processing techniques (e.g., GLCM texture) is adaptive window selection. Although it is well known that combining spectral and spatial information can improve land-use classification of very-high-resolution data, many spatial measures are subject to the window-size problem, and the success of classification procedures using spatial features depends to a great extent on the window size chosen. Huang et al. (2007a) proposed an optimal edge-based spectral window selection method that adaptively selects a window size suitable for the information in a local region, and fuses multiscale information to determine the optimal window size. Spatial features extracted by the gray-level co-occurrence matrix (GLCM) were used with multispectral IKONOS data to validate the window-selection algorithm. The results showed that the algorithm can effectively select and fuse multiscale features and, at the same time, increase classification accuracy.
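A minimal sketch of the GLCM and the homogeneity measure reported above as the best-performing index: count co-occurring gray-level pairs for the (0, 1) offset (each pixel and its right-hand neighbor), normalize, and weight by closeness to the matrix diagonal. The 3x3 images are invented.

```python
# GLCM for the right-neighbor offset, plus the homogeneity texture measure.
def glcm_right(img, levels):
    m = [[0] * levels for _ in range(levels)]
    for row in img:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
    total = sum(sum(r) for r in m)
    return [[v / total for v in r] for r in m]  # normalize to probabilities

def homogeneity(p):
    """High when co-occurring pairs sit near the diagonal (similar levels)."""
    return sum(p[i][j] / (1 + abs(i - j))
               for i in range(len(p)) for j in range(len(p)))

smooth = [[1, 1, 1], [1, 1, 1], [2, 2, 2]]  # uniform rows
rough  = [[0, 3, 0], [3, 0, 3], [0, 3, 0]]  # alternating values
print(round(homogeneity(glcm_right(smooth, 4)), 2),
      round(homogeneity(glcm_right(rough, 4)), 2))
```

The smooth image scores the maximum homogeneity of 1.0 while the alternating image scores far lower, which is why the measure separates uniform surfaces from busy ones; full GLCM analysis repeats this over several offsets, directions, and window sizes.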

1.1.3 Structural and shape features

Benediktsson et al. (2003) used combinations of geodesic opening and closing operations of different sizes to build a morphological profile of panchromatic images, with a neural network approach for the classification of the resulting features. The experiments, conducted on IKONOS imagery with 1 m resolution, showed that the morphological features can capture sufficient spatial information and significantly improve conventional classification accuracy. In Benediktsson et al. (2005), a joint spectral/spatial classifier based on morphological features was proposed and extended to airborne images. In this morphology-based method combining spectral and spatial information, principal component analysis (PCA) was adopted for preprocessing and dimensionality reduction. Epifanio and Soille (2007) used morphological texture features of natural landscapes to segment high-resolution images into several cover types. Texture features were the focus of attention because the number of spectral bands of high-resolution images is sometimes limited and, moreover, conventional pixel-based classification techniques perform poorly. Huang et al. (2007b) investigated the classification and extraction of spatial features in high-spatial-resolution multispectral images of urban areas. A structural feature set (SFS) was proposed to extract statistical features from the histograms of direction lines. Several dimensionality-reduction methods, including decision-boundary feature extraction and similarity-index feature selection, were then implemented to reduce the information redundancy of the SFS. The approach was evaluated on two QuickBird data sets, and the results showed that the new reduced spatial feature set performs better than conventional methods.

1.2 Object-based analysis

1.2.1 Development of object-based algorithms

Object-based classification is a good alternative to conventional pixel-based methods. This kind of analysis reduces the local spectral variance within homogeneous regions. The basic idea is to group spatially adjacent pixels into spectrally homogeneous segments, and then to perform classification with the objects as the minimum processing units. Kettig and Landgrebe (1976) first proposed the object-based analysis approach and developed a spatial-spectral classifier called extraction and classification of homogeneous objects (ECHO) (Landgrebe, 1980). Jimenez et al. (2005) developed the UnECHO algorithm, an unsupervised version of ECHO that enhances the homogeneity of pixels in a local neighborhood. It uses the context of multispectral or hyperspectral data for classification, producing results whose content can be analyzed more clearly. Their experiments on HYDICE and AVIRIS data showed that the UnECHO classifier is especially suitable for the new generation of airborne and spaceborne sensors with high spatial resolution.

In recent years, the fractal net evolution approach (FNEA), embedded in the eCognition commercial software (Hay et al., 2003), has been widely used for object-based analysis and experiments. It uses fuzzy set theory to extract the object characteristics of interest, segmenting the image at fine and coarse scales simultaneously. FNEA is a bottom-up region-merging technique that starts with individual pixels. In an iterative way, at each subsequent step, image objects are merged into larger ones. The region-merging decision is made with local homogeneity criteria, defined as

H = sum_b w_b * [ N_Merge * sigma_Merge,b - ( N_Obj1 * sigma_Obj1,b + N_Obj2 * sigma_Obj2,b ) ]

where w_b is the weight for band b; N_Merge, N_Obj1, and N_Obj2 denote the numbers of pixels within the merged object, object 1, and object 2, respectively; and sigma_Merge,b, sigma_Obj1,b, and sigma_Obj2,b are the corresponding standard deviations in band b. When a candidate pair of image objects is examined for merging, the fusion heterogeneity value H between the two objects is computed and compared with the scale parameter T: when H < T, the two objects are merged. The scale parameter is thus the maximum allowed change of heterogeneity that may occur when two image objects are merged.

The watershed transform of mathematical morphology is a powerful tool for image segmentation and can also be adopted for object-based analysis. Li and Xiao (2007) proposed an extension of the watershed algorithm for image segmentation. A vector-based mathematical morphology method was proposed to compute the gradient magnitude, which was then fed into the watershed transform for image segmentation. The method showed promising results and can be used to segment high-resolution multispectral images for object-based classification. Akcay and Aksoy (2008) proposed a new method for automatic object detection in high-resolution images, combining spectral information with structural information extracted by image segmentation. The segmentation algorithm uses morphological operations applied to individual spectral bands with increasing structuring-element sizes. Experimental results showed that the approach can automatically detect, group, and label pixels belonging to the same object class.
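The FNEA merging criterion given above can be sketched directly: compute the fusion heterogeneity H for a candidate pair of objects and merge only when H stays below the scale parameter T. The single-band object pixel values and the value of T are invented for illustration.

```python
# FNEA-style region-merging decision:
# H = sum_b w_b * (N_merge*sigma_merge - (N_1*sigma_1 + N_2*sigma_2)).
def stddev(vals):
    m = sum(vals) / len(vals)
    return (sum((v - m) ** 2 for v in vals) / len(vals)) ** 0.5

def fusion_heterogeneity(obj1, obj2, weights):
    """obj1/obj2: per-band lists of pixel values; weights: one per band."""
    h = 0.0
    for band1, band2, w in zip(obj1, obj2, weights):
        merged = band1 + band2
        h += w * (len(merged) * stddev(merged)
                  - (len(band1) * stddev(band1) + len(band2) * stddev(band2)))
    return h

# Two spectrally similar objects vs two dissimilar ones (one band, w = 1):
similar = fusion_heterogeneity([[10, 11, 10]], [[11, 10, 11]], [1.0])
different = fusion_heterogeneity([[10, 11, 10]], [[60, 61, 60]], [1.0])

T = 5.0  # scale parameter: maximum allowed growth in heterogeneity
print(similar < T, different < T)  # merge the first pair, reject the second
```

A larger T tolerates more heterogeneity growth per merge and therefore produces larger, coarser objects, which is why the scale parameter controls segmentation scale in eCognition. (The full criterion also weighs shape compactness and smoothness, omitted here.)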

1.2.2 Improved algorithms for object-based analysis

With the development of object-oriented classification, several optimized algorithms have been proposed. Wang et al. (2004a) proposed a hybrid classifier integrating pixel-based and object-based classification: the pixel- and object-level maps were obtained with a maximum likelihood classifier and a nearest-neighbor classifier, respectively (i.e., MLCNN). Gamba et al. (2007) proposed a boundary-optimized classification method for high-resolution images, in which boundary and non-boundary pixels are distinguished and then classified separately. Bruzzone and Carlin (2006) developed a multilevel approach for processing high-spatial-resolution images. The basic idea is to use multiscale representations simultaneously as a feature-extraction module that adaptively models the spatial context of each pixel. In the experiments, its effectiveness was validated on urban and rural QuickBird images.

1.2.3 Applications of object-based analysis

Yu et al. (2006) used object-based analysis with multiple features to build a comprehensive vegetation inventory for Point Reyes National Seashore in northern California. In their study, 52 features were computed for each object, including spectral features, texture, topographic features, and geometric features. Wang et al. (2007a, 2007b) studied object-based analysis for mangrove mapping at Punta Galeta on the Caribbean coast of Panama. Waske and van der Linden (2008) fused synthetic aperture radar (SAR) and optical remote sensing data, jointly classifying multiple segmentation levels.
