Surveying English: Unit 35
Translation: GIS provides a technology and methodology for analyzing the Earth's spatial data or information.
Unit 35 Geographic information and spatial data types (2)
Notes to the Text
8. Geographic Information Sciences include Geographic Information Systems as well as the disciplines of geography, cartography, geodesy, and remote sensing.
2. Current GISs differ according to the way in which they organize reality through the data model.
Translation: Current GISs differ mainly in the way they organize reality through the data model.
New words and Expressions
plumb [plʌm] n. 铅垂线
geoid [ˈdʒiːɔɪd] n. 大地水准面
refraction [rɪˈfrækʃən] n. 折射
thermal [ˈθɜːməl] a. 热的，热量的
stadia [ˈsteɪdjə] n. 视距，视距仪
terrain [teˈrein] n. 地形
electromagnetic [ɪˌlektrəʊmæɡˈnetɪk] a. 电磁的
ellipsoid [ɪˈlɪpsɔɪd] n. 椭圆体
prism [ˈprɪzəm] n. 棱镜
Glossary from the textbook English for Geographic Information Science
Spatial interpolation 空间插值 structured query language (SQL) 结构化查询语言
Polygon 多边形 proximity analysis 邻近域分析
Data structures 数据结构 information retrieval 信息检索
Topological modeling 拓扑建模 network analysis 网络分析
Overlay 叠置 data output 数据输出
7. remote sensing 遥感 sensor 传感器
Electromagnetic radiation 电磁辐射 radiometer 辐射计
Electro-optical scanner 光学扫描仪 radar system 雷达系统
high resolution visible (HRV) sensors 高分辨可视成像传感器
charge-coupled devices (CCDs) 电荷耦合器件
panchromatic (PLA) 全色 multispectral (MLA) 多波段
WFI (Wide Field Imager) 广角成像仪 earth observing system (EOS) 地球观测系统
CBERS (China-Brazil Earth Resources Satellite) 中巴地球资源卫星
IRMSS (Infrared Multispectral Scanner) 红外多光谱扫描仪
Disaster management 灾害管理 public health 公共卫生
2024 Fun English Class: PPT Courseware
By imitating the pronunciation of new words, or associating them with similar-sounding words, interesting associations are formed that help students remember vocabulary.
Introduce the definition and function of weak reading, and provide example words and sentences for weak-reading exercises.
Reduced reading
Explain the formation and common forms of reduced reading.
Contents
• Sentence structure and expression methods
• Listening training and improvement approaches
• Oral practice and scene simulation
• Reading comprehension and
Phonetic phenomena such as linking and weak reading
Linking
Explain the concept of linking, and list common linking phenomena with example sentences.
Weak reading
Consonant phonetic symbols
including pronunciation methods and example words for voiceless and voiced consonants.
Introduce the basic rules of English pronunciation, such as stress and syllable division.
International Journal of Computer Vision 65(1/2), 43–72, 2005. © 2005 Springer Science + Business Media, Inc. Manufactured in The Netherlands. DOI: 10.1007/s11263-005-3848-x

A Comparison of Affine Region Detectors

K. MIKOLAJCZYK, University of Oxford, OX1 3PJ, Oxford, United Kingdom (km@)
T. TUYTELAARS, University of Leuven, Kasteelpark Arenberg 10, 3001 Leuven, Belgium (tuytelaa@esat.kuleuven.be)
C. SCHMID, INRIA, GRAVIR-CNRS, 655 av. de l'Europe, 38330 Montbonnot, France (schmid@inrialpes.fr)
A. ZISSERMAN, University of Oxford, OX1 3PJ, Oxford, United Kingdom (az@)
J. MATAS, Czech Technical University, Karlovo Namesti 13, 121 35 Prague, Czech Republic (matas@cmp.felk.cvut.cz)
F. SCHAFFALITZKY AND T. KADIR, University of Oxford, OX1 3PJ, Oxford, United Kingdom (fsm@, tk@)
L. VAN GOOL, University of Leuven, Kasteelpark Arenberg 10, 3001 Leuven, Belgium (vangool@esat.kuleuven.be)

Received August 20, 2004; Revised May 3, 2005; Accepted May 11, 2005. First online version published in January, 2006.

Abstract. The paper gives a snapshot of the state of the art in affine covariant region detectors, and compares their performance on a set of test images under varying imaging conditions. Six types of detectors are included: detectors based on affine normalization around Harris (Mikolajczyk and Schmid, 2002; Schaffalitzky and Zisserman, 2002) and Hessian points (Mikolajczyk and Schmid, 2002); a detector of 'maximally stable extremal regions', proposed by Matas et al. (2002); an edge-based region detector (Tuytelaars and Van Gool, 1999) and a detector based on intensity extrema (Tuytelaars and Van Gool, 2000); and a detector of 'salient regions', proposed by Kadir, Zisserman and Brady (2004). The performance is measured against changes in viewpoint, scale, illumination, defocus and image compression. The objective of this paper is also to establish a reference test set of images and performance software, so that future detectors can be evaluated in the same framework.

Keywords: affine region detectors, invariant image description, local features, performance
evaluation

1. Introduction

Detecting regions covariant with a class of transformations has now reached some maturity in the computer vision literature. These regions have been used in quite varied applications including: wide baseline matching for stereo pairs (Baumberg, 2000; Matas et al., 2002; Pritchett and Zisserman, 1998; Tuytelaars and Van Gool, 2000), reconstructing cameras for sets of disparate views (Schaffalitzky and Zisserman, 2002), image retrieval from large databases (Schmid and Mohr, 1997; Tuytelaars and Van Gool, 1999), model based recognition (Ferrari et al., 2004; Lowe, 1999; Obdržálek and Matas, 2002; Rothganger et al., 2003), object retrieval in video (Sivic and Zisserman, 2003; Sivic et al., 2004), visual data mining (Sivic and Zisserman, 2004), texture recognition (Lazebnik et al., 2003a, b), shot location (Schaffalitzky and Zisserman, 2003), robot localization (Se et al., 2002) and servoing (Tuytelaars et al., 1999), building panoramas (Brown and Lowe, 2003), symmetry detection (Turina et al., 2001), and object categorization (Csurka et al., 2004; Dorko and Schmid, 2003; Fergus et al., 2003; Opelt et al., 2004).

The requirement for these regions is that they should correspond to the same pre-image for different viewpoints, i.e., their shape is not fixed but automatically adapts, based on the underlying image intensities, so that they are the projection of the same 3D surface patch. In particular, consider images from two viewpoints and the geometric transformation between the images induced by the viewpoint change. Regions detected after the viewpoint change should be the same, modulo noise, as the transformed versions of the regions detected in the original image: image transformation and region detection commute. As such, even though they have often been called invariant regions in the literature (e.g., Dorko and Schmid, 2003; Lazebnik et al., 2003a; Sivic and Zisserman, 2004; Tuytelaars and Van Gool, 1999), in principle they should be termed covariant regions since they change covariantly with the transformation.
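The covariance requirement ("detection and transformation commute") can be made concrete for elliptical regions: an ellipse with center c and shape matrix S maps under an affinity u → Au + b to an ellipse with center Ac + b and shape ASAᵀ. The following is a minimal sketch of this transformation rule; the function name and the specific affinity are illustrative, not from the paper.

```python
import numpy as np

def transform_region(center, shape, A, b):
    """Map an elliptical region {u : (u-c)^T S^-1 (u-c) = 1}
    through the affine transformation u -> A u + b.
    Covariance means a detector run on the warped image should
    return (up to noise) exactly this transformed region."""
    return A @ center + b, A @ shape @ A.T

# Hypothetical region: unit circle at the origin.
c = np.array([0.0, 0.0])
S = np.eye(2)

# An affinity with anisotropic scaling: a circle cannot stay a circle.
A = np.array([[2.0, 0.5], [0.0, 1.0]])
b = np.array([3.0, -1.0])

c2, S2 = transform_region(c, S, A, b)
print(c2)                          # center moves to A c + b
print(np.allclose(S2, A @ A.T))    # True: shape is no longer isotropic
```

The transformed shape matrix AAᵀ is anisotropic, which is exactly why fixed-shape (circular) regions cannot be covariant under general viewpoint changes.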
The confusion probably arises from the fact that, even though the regions themselves are covariant, the normalized image pattern they cover and the feature descriptors derived from them are typically invariant. Note, our use of the term 'region' simply refers to a set of pixels, i.e. any subset of the image. This differs from classical segmentation since the region boundaries do not have to correspond to changes in image appearance such as colour or texture. All the detectors presented here produce simply connected regions, but in general this need not be the case.

For viewpoint changes, the transformation of most interest is an affinity. This is illustrated in Fig. 1. Clearly, a region with fixed shape (a circular example is shown in Fig. 1(a) and (b)) cannot cope with the geometric deformations caused by the change in viewpoint. We can observe that the circle does not cover the same image content, i.e., the same physical surface. Instead, the shape of the region has to be adaptive, or covariant with respect to affinities (Fig. 1(c); close-ups shown in Fig. 1(d)–(f)). Indeed, an affinity is sufficient to locally model image distortions arising from viewpoint changes, provided that (1) the scene surface can be locally approximated by a plane, or in case of a rotating camera, and (2) perspective effects are ignored, which are typically small on a local scale anyway. Aside from the geometric deformations, also photometric deformations need to be taken into account.
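On the photometric side, such intensity deformations are treated as linear, I → aI + b with a > 0, and a descriptor can cancel them by normalizing the patch to zero mean and unit variance. A small illustrative sketch (the function name and patch sizes are assumptions, not from the paper):

```python
import numpy as np

def photometric_normalize(patch):
    """Normalize intensities to zero mean, unit variance. Any linear
    intensity change I -> a*I + b (a > 0) is cancelled, so descriptors
    computed on the normalized patch are photometrically invariant."""
    p = patch.astype(float)
    return (p - p.mean()) / p.std()

rng = np.random.default_rng(0)
patch = rng.uniform(0, 255, size=(21, 21))
warped = 1.7 * patch + 12.0   # a linear photometric deformation
print(np.allclose(photometric_normalize(patch),
                  photometric_normalize(warped)))  # True
```

The gain a cancels in the division and the offset b in the mean subtraction, which is the algebra behind the invariance claim.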
These can be modeled by a linear transformation of the intensities. To further illustrate these issues, and how affine covariant regions can be exploited to cope with the geometric and photometric deformation between wide baseline images, consider the example shown in Fig. 2. Unlike the example of Fig. 1 (where a circular region was chosen for one viewpoint), the elliptical image regions here are detected independently in each viewpoint. As is evident, the pre-images of these affine covariant regions correspond to the same surface region.

Figure 1. Class of transformations needed to cope with viewpoint changes. (a) First viewpoint; (b, c) second viewpoint. Fixed size circular patches (a, b) clearly do not suffice to deal with general viewpoint changes. What is needed is an anisotropic rescaling, i.e., an affinity (c). Bottom row shows close-ups of the images of the top row.

Figure 2. Affine covariant regions offer a solution to viewpoint and illumination changes. First row: one viewpoint; second row: other viewpoint. (a) Original images; (b) detected affine covariant regions; (c) close-up of the detected regions; (d) geometric normalization to circles — the regions are the same up to rotation; (e) photometric and geometric normalization. The slight residual difference in rotation is due to an estimation error.

Given such an affine covariant region, it is then possible to normalize against the geometric and photometric deformations (shown in Fig. 2(d), (e)) and to obtain a viewpoint and illumination invariant description of the intensity pattern within the region.

In a typical matching application, the regions are used as follows. First, a set of covariant regions is detected in an image. Often a large number, perhaps hundreds or thousands, of possibly overlapping regions are obtained. A vector descriptor is then associated with each region, computed from the intensity pattern within the region. This descriptor is chosen to be invariant to viewpoint changes and, to some extent, illumination changes, and to discriminate between the regions. Correspondences may then be established with another image of the same scene, by first detecting and representing regions (independently) in the new image, and then matching the regions based on their descriptors. By design the regions commute with viewpoint change, so by design, corresponding regions in the two images will have similar (ideally identical) vector descriptors. The benefits are that correspondences can then be easily established and, since there are multiple regions, the method is robust to partial occlusions.

This paper gives a snapshot of the state of the art in affine covariant region detection. We will describe and compare six methods of detecting these regions on images. These detectors have been designed and implemented by a number of researchers and the comparison is carried out using binaries supplied by the authors. The detectors are: (i) the 'Harris-Affine' detector (Mikolajczyk and Schmid, 2002, 2004; Schaffalitzky and Zisserman, 2002); (ii) the 'Hessian-Affine' detector (Mikolajczyk and Schmid, 2002, 2004); (iii) the 'maximally stable extremal region' detector (or MSER, for short) (Matas et al., 2002, 2004); (iv) an edge-based region detector (Tuytelaars and Van Gool, 1999, 2004) (referred to as EBR); (v) an intensity extrema-based region detector (Tuytelaars and Van Gool, 2000, 2004) (referred to as IBR); and (vi) an entropy-based region detector (Kadir et al., 2004) (referred to as salient regions).

To limit the scope of the paper we have not included methods for detecting regions which are covariant only to similarity transformations (i.e., in particular scale), such as (Lowe, 1999, 2004; Mikolajczyk and Schmid, 2001; Mikolajczyk et al., 2003), or other methods of computing affine invariant descriptors, such as image lines connecting interest points (Matas et al., 2000; Tell and Carlson, 2000, 2002), or invariant vertical line segments (Goedeme et al., 2004). Also the detectors proposed by Lindeberg and Gårding (1997) and Baumberg (2000) have not been included, as they come very close to the Harris-Affine and Hessian-Affine detectors.

The six detectors are described in Section 2. They are compared on the data set shown in Fig. 9. This data set includes structured and textured scenes as well as different types of transformations: viewpoint changes, scale changes, illumination changes, blur and JPEG compression. It is described in more detail in Section 3. Two types of comparisons are carried out. First, the repeatability of the detector is measured: how well does the detector determine corresponding scene regions? This is measured by comparing the overlap between the ground truth and detected regions, in a manner similar to the evaluation test used in Mikolajczyk and Schmid (2002), but with special attention paid to the effect of the different scales (region sizes) of the various detectors' output. Here, we also measure the accuracy of the regions' shape, scale and localization. Second, the distinctiveness of the detected regions is assessed: how distinguishable are the regions detected? Following Mikolajczyk and Schmid (2003, 2005), we use the SIFT descriptor developed by Lowe (1999), which is a 128-dimensional vector, to describe the intensity pattern within the image regions. This descriptor has been demonstrated to be superior to others used in literature on a number of measures (Mikolajczyk and Schmid, 2003).

Our intention is that the images and tests described here will be a benchmark against which future affine covariant region detectors can be assessed. The images, Matlab code to carry out the performance tests, and binaries of the detectors are available from /∼vgg/research/affine.
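The repeatability measure just described (overlap between ground-truth and detected regions) can be sketched numerically. The following is a simplified stand-in for the paper's test: it approximates the ellipse overlap error by dense sampling rather than the analytic construction used in the evaluation software, and the function names and sampling grid are illustrative assumptions.

```python
import numpy as np

def ellipse_mask(center, shape, grid):
    """Boolean mask of grid points inside {u : (u-c)^T S^-1 (u-c) <= 1}."""
    d = grid - center
    Sinv = np.linalg.inv(shape)
    return np.einsum('...i,ij,...j->...', d, Sinv, d) <= 1.0

def overlap_error(c1, S1, c2, S2, step=0.05):
    """Approximate 1 - intersection/union of two ellipses by sampling."""
    xs = np.arange(-5.0, 5.0, step)
    grid = np.stack(np.meshgrid(xs, xs, indexing='ij'), axis=-1)
    m1 = ellipse_mask(np.asarray(c1, float), np.asarray(S1, float), grid)
    m2 = ellipse_mask(np.asarray(c2, float), np.asarray(S2, float), grid)
    return 1.0 - (m1 & m2).sum() / (m1 | m2).sum()

# Two identical regions overlap perfectly; disjoint regions do not overlap.
e = overlap_error([0, 0], np.eye(2), [0, 0], np.eye(2))
print(e)  # 0.0
```

Repeatability is then the fraction of regions whose overlap error against the projected ground-truth region falls below a chosen threshold, divided by the smaller of the two region counts.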
2. Affine Covariant Detectors

In this section we give a brief description of the six region detectors used in the comparison. Section 2.1 describes the related methods Harris-Affine and Hessian-Affine. Sections 2.2 and 2.3 describe methods for detecting edge-based regions and intensity extrema-based regions. Finally, Sections 2.4 and 2.5 describe MSER and salient regions.

For the purpose of the comparisons the output regions of all detector types are represented by a common shape, which is an ellipse. Figures 3 and 4 show the ellipses for all detectors on one pair of images. In order not to overload the images, only some of the corresponding regions that were actually detected in both images have been shown. This selection is obtained by increasing the threshold.

Figure 3. Regions generated by different detectors on corresponding sub-parts of the first and third graffiti images of Fig. 9(a). The ellipses show the original detection size.

In fact, for most of the detectors the output shape is an ellipse. However, for two of the detectors (edge-based regions and MSER) it is not, and information is lost by this representation, as ellipses can only be matched up to a rotational degree of freedom. Examples of the original regions detected by these two methods are given in Fig. 5. These are parallelogram-shaped regions for the edge-based region detector, and arbitrarily shaped regions for the MSER detector. In the following the representing ellipse is chosen to have the same first and second moments as the originally detected region, which is an affine covariant construction method.

Figure 4. Regions generated by different detectors, continued.

2.1. Detectors Based on Affine Normalization: Harris-Affine & Hessian-Affine

We describe here two related methods which detect interest points in scale-space, and then determine an elliptical region for each point. Interest points are either detected with the Harris detector or with a detector based on the Hessian
matrix. In both cases scale-selection is based on the Laplacian, and the shape of the elliptical region is determined with the second moment matrix of the intensity gradient (Baumberg, 2000; Lindeberg and Gårding, 1997).

The second moment matrix, also called the auto-correlation matrix, is often used for feature detection or for describing local image structures. Here it is used both in the Harris detector and the elliptical shape estimation. This matrix describes the gradient distribution in a local neighbourhood of a point:

$$
M = \mu(\mathbf{x}, \sigma_I, \sigma_D) = \begin{bmatrix} \mu_{11} & \mu_{12} \\ \mu_{21} & \mu_{22} \end{bmatrix}
= \sigma_D^2 \, g(\sigma_I) * \begin{bmatrix} I_x^2(\mathbf{x}, \sigma_D) & I_x I_y(\mathbf{x}, \sigma_D) \\ I_x I_y(\mathbf{x}, \sigma_D) & I_y^2(\mathbf{x}, \sigma_D) \end{bmatrix} \quad (1)
$$

The local image derivatives are computed with Gaussian kernels of scale $\sigma_D$ (differentiation scale). The derivatives are then averaged in the neighbourhood of the point by smoothing with a Gaussian window of scale $\sigma_I$ (integration scale). The eigenvalues of this matrix represent two principal signal changes in a neighbourhood of the point. This property enables the extraction of points for which both curvatures are significant, that is, the signal change is significant in orthogonal directions. Such points are stable in arbitrary lighting conditions and are representative of an image. One of the most reliable interest point detectors, the Harris detector (Harris and Stephens, 1988), is based on this principle. A similar idea is explored in the detector based on the Hessian matrix:

$$
H = H(\mathbf{x}, \sigma_D) = \begin{bmatrix} h_{11} & h_{12} \\ h_{21} & h_{22} \end{bmatrix}
= \begin{bmatrix} I_{xx}(\mathbf{x}, \sigma_D) & I_{xy}(\mathbf{x}, \sigma_D) \\ I_{xy}(\mathbf{x}, \sigma_D) & I_{yy}(\mathbf{x}, \sigma_D) \end{bmatrix} \quad (2)
$$

Figure 5. Originally detected region shapes for the regions shown in Figs. 3(c) and 4(b).

The second derivatives, which are used in this matrix, give strong responses on blobs and ridges. The regions are similar to those detected by a Laplacian operator (trace) (Lindeberg, 1998; Lowe, 1999), but a function based on the determinant of the Hessian matrix penalizes very long structures for which the second derivative in one particular orientation is very small. A local maximum of the determinant indicates the presence of a
blob structure. To deal with scale changes a scale selection method (Lindeberg, 1998) is applied. The idea is to select the characteristic scale of a local structure, for which a given function attains an extremum over scales (see Fig. 6). The selected scale is characteristic in the quantitative sense, since it measures the scale at which there is maximum similarity between the feature detection operator and the local image structures. The size of the region is therefore selected independently of image resolution for each point. The Laplacian operator is used for scale selection in both detectors since it gave the best results in the experimental comparison in Mikolajczyk and Schmid (2001).

Figure 6. Example of characteristic scales. Top row shows images taken with different zoom. Bottom row shows the responses of the Laplacian over scales. The characteristic scales are 10.1 and 3.9 for the left and right image, respectively. The ratio of scales corresponds to the scale factor (2.5) between the two images. The radius of displayed regions in the top row is equal to 3 times the selected scales.

Given the set of initial points extracted at their characteristic scales we can apply the iterative estimation of the elliptical affine region (Lindeberg and Gårding, 1997). The eigenvalues of the second moment matrix are used to measure the affine shape of the point neighbourhood. To determine the affine shape, we find the transformation that projects the affine pattern to the one with equal eigenvalues. This transformation is given by the square root of the second moment matrix, $M^{1/2}$. If the neighbourhoods of points $\mathbf{x}_R$ and $\mathbf{x}_L$ are normalized by transformations $\mathbf{x}'_R = M_R^{1/2}\mathbf{x}_R$ and $\mathbf{x}'_L = M_L^{1/2}\mathbf{x}_L$, respectively, the normalized regions are related by a simple rotation $\mathbf{x}'_L = R\,\mathbf{x}'_R$ (Baumberg, 2000; Lindeberg and Gårding, 1997). The matrices $M'_L$ and $M'_R$ computed in the normalized frames are equal to a rotation matrix (see Fig. 7). Note that rotation preserves the eigenvalue ratio for an image
patch; therefore, the affine deformation can be determined up to a rotation factor. This factor can be recovered by other methods, for example normalization based on the dominant gradient orientation (Lowe, 1999; Mikolajczyk and Schmid, 2002). The estimation of affine shape can be applied to any initial point given that the determinant of the second moment matrix is larger than zero and the signal to noise ratio is insignificant for this point. We can therefore use this technique to estimate the shape of initial regions provided by the Harris and Hessian based detectors. The outline of the iterative region estimation:

1. Detect the initial region with the Harris or Hessian detector and select the scale.
2. Estimate the shape with the second moment matrix.
3. Normalize the affine region to the circular one.
4. Go to step 2 if the eigenvalues of the second moment matrix for the new point are not equal.

Figure 7. Diagram illustrating the affine normalization using the second moment matrices. Image coordinates are transformed with matrices $M_L^{-1/2}$ and $M_R^{-1/2}$.

Examples of Harris-Affine and Hessian-Affine regions are displayed in Fig. 3(a) and (b).

2.2. An Edge-Based Region Detector

We describe here a method to detect affine covariant regions in an image by exploiting the edges present in the image. The rationale behind this is that edges are typically rather stable features, that can be detected over a range of viewpoints, scales and/or illumination changes. Moreover, by exploiting the edge geometry, the dimensionality of the problem can be significantly reduced. Indeed, as will be shown next, the 6D search problem over all possible affinities (or 4D, once the
features are extracted at multiple scales.Two points p 1and p 2move away from the corner in both directions along the edge,as shown in Fig.8(a).Their relative speed is coupled through the equality of relative affine invariant parameters l 1and l 2:l i =absp i (1)(s i )p −p i (s i ) ds i (3)with s i an arbitrary curve parameter (in both direc-tions),p i (1)(s i )the first derivative of p i (s i )with respectto s i ,abs()the absolute value and |...|the determi-nant.This condition prescribes that the areas between the joint p ,p 1 and the edge and between the joint p ,p 2 and the edge remain identical.This is an affine invariant criterion indeed.From now on,we simply use l when referring to l 1=l 2.For each value l ,the two points p 1(1)and p 2(1)together with the corner p define a parallelogram (l ):the parallelogram spanned by the vectors p 1(l )−p and p 2(l )−p .This yields a one dimensional family of parallelogram-shaped regions as a function of l .From this 1D family we select one (or a few)parallelogram for which the following photometric quantities of the texture go through an extremum.Inv 1=abs|p 1−p g p 2−p g ||p −p 1p −p 2| M 100 M 200M 000−(M 100)2Inv 2=abs|p −p g q −p g |1p −p 2 M 100 M 200M 000−(M 100)2Figure 8.Construction methods for EBR and IBR.(a)The edge-based region detector starts from a corner point p and exploits nearby edgeinformation;(b)The intensity extrema-based region detector starts from an intensity extremum and studies the intensity pattern along rays emanating from this point.52Mikolajczyk et al.with M n pq =I n (x ,y )x p y q dxdy(4)p g = M 110M 100,M 101M 100with M n pq the n th order ,(p +q )th degree momentcomputed over the region (l ),p g the center of gravity of the region,weighted with intensity I (x ,y ),and q the corner of the parallelogram opposite to the corner point p (see Fig.8(a)).The second factor in these formula has been added to ensure invariance under an intensity offset.In the case of straight edges,the method 
described above cannot be applied, since $l = 0$ along the entire edge. Since intersections of two straight edges occur quite often, we cannot simply neglect this case. To circumvent this problem, the two photometric quantities given in Eq. (4) are combined, and locations where both functions reach a minimum value are taken to fix the parameters $s_1$ and $s_2$ along the straight edges. Moreover, instead of relying on the correct detection of the Harris corner point, we can simply use the straight lines' intersection point instead. A more detailed explanation of this method can be found in Tuytelaars and Van Gool (1999, 2004). Examples of detected regions are displayed in Fig. 5(b). For easy comparison in the context of this paper, the parallelograms representing the invariant regions are replaced by the enclosed ellipses, as shown in Fig. 4(b). However, in this way the orientation information is lost, so this should be avoided in a practical application, as discussed in the beginning of Section 2.

2.3. Intensity Extrema-Based Region Detector

Here we describe a method to detect affine covariant regions that starts from intensity extrema (detected at multiple scales), and explores the image around them in a radial way, delineating regions of arbitrary shape, which are then replaced by ellipses. More precisely, given a local extremum in intensity, the intensity function along rays emanating from the extremum is studied, as shown in Fig. 8(b). The following function is evaluated along each ray:

$$
f_I(t) = \frac{\mathrm{abs}(I(t) - I_0)}{\max\left( \dfrac{\int_0^t \mathrm{abs}(I(t) - I_0)\, dt}{t},\; d \right)}
$$

with $t$ an arbitrary parameter along the ray, $I(t)$ the intensity at position $t$, $I_0$ the intensity value at the extremum, and $d$ a small number which has been added to prevent a division by zero. The point for which this function reaches an extremum is invariant under affine geometric and linear photometric transformations (given the ray). Typically, a maximum is reached at positions where the intensity suddenly increases or decreases. The function $f_I(t)$ is in itself
already invariant. Nevertheless, we select the points where this function reaches an extremum to make a robust selection. Next, all points corresponding to maxima of $f_I(t)$ along rays originating from the same local extremum are linked to enclose an affine covariant region (see Fig. 8(b)). This often irregularly-shaped region is replaced by an ellipse having the same shape moments up to the second order. This ellipse-fitting is again an affine covariant construction. Examples of detected regions are displayed in Fig. 4(a). More details about this method can be found in Tuytelaars and Van Gool (2000, 2004).

2.4. Maximally Stable Extremal Region Detector

A Maximally Stable Extremal Region (MSER) is a connected component of an appropriately thresholded image. The word 'extremal' refers to the property that all pixels inside the MSER have either higher (bright extremal regions) or lower (dark extremal regions) intensity than all the pixels on its outer boundary. The 'maximally stable' in MSER describes the property optimized in the threshold selection process.

The set of extremal regions E, i.e., the set of all connected components obtained by thresholding, has a number of desirable properties. Firstly, a monotonic change of image intensities leaves E unchanged, since it depends only on the ordering of pixel intensities, which is preserved under monotonic transformation. This ensures that common photometric changes modelled locally as linear or affine leave E unaffected, even if the camera is non-linear (gamma-corrected). Secondly, continuous geometric transformations preserve topology: pixels from a single connected component are transformed to a single connected component. Thus after a geometric change locally approximated by an affine transform, homography or even continuous non-linear warping, a matching extremal region will be in the transformed set E. Finally, there are no more extremal regions than there are pixels in the image. So a set of regions
was defined that is preserved under a broad class of geometric and photometric changes and yet has the same cardinality as, e.g., the set of fixed-sized square windows commonly used in narrow-baseline matching.

Implementation Details. The enumeration of the set of extremal regions E is very efficient, almost linear in the number of image pixels. The enumeration proceeds as follows. First, pixels are sorted by intensity. After sorting, pixels are marked in the image (either in decreasing or increasing order) and the list of growing and merging connected components and their areas is maintained using the union-find algorithm (Sedgewick, 1988). During the enumeration process, the area of each connected component as a function of intensity is stored. Among the extremal regions, the 'maximally stable' ones are those corresponding to thresholds where the relative area change as a function of relative change of threshold is at a local minimum. In other words, the MSER are the parts of the image where local binarization is stable over a large range of thresholds.

The definition of MSER stability based on relative area change is only affine invariant (both photometrically and geometrically). Consequently, the process of MSER detection is affine covariant. Detection of MSER is related to thresholding, since every extremal region is a connected component of a thresholded image. However, no global or 'optimal' threshold is sought; all thresholds are tested and the stability of the connected components evaluated. The output of the MSER detector is not a binarized image.
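The enumeration just described can be sketched with a minimal union-find pass. This is a deliberately simplified illustration on a 1D "image" (2D needs only a different neighbour step), it omits the stability test over nested thresholds, and it tracks component areas naively rather than in the near-linear fashion of the real implementation; all names are illustrative.

```python
import numpy as np

def extremal_component_areas(img):
    """Sweep intensity levels in increasing order, merging pixels into
    connected components with union-find, and record the component
    areas present at each threshold (dark extremal regions)."""
    n = len(img)
    parent = list(range(n))
    active = [False] * n
    area = [1] * n

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            area[rb] += area[ra]

    history = {}  # threshold value -> sorted list of component areas
    for idx in np.argsort(img, kind='stable'):
        active[idx] = True
        for nb in (idx - 1, idx + 1):      # 1D neighbours
            if 0 <= nb < n and active[nb]:
                union(idx, nb)
        roots = {find(i) for i in range(n) if active[i]}
        history[img[idx]] = sorted(area[r] for r in roots)
    return history

img = np.array([3, 1, 4, 1, 5])
h = extremal_component_areas(img)
print(h[1])  # the two darkest pixels are separate components: [1, 1]
print(h[5])  # at the top threshold everything has merged: [5]
```

In the full algorithm, the stored area-versus-threshold curves are then scanned for local minima of the relative area change, which yields the maximally stable components.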
For some parts of the image, multiple stable thresholds exist, and a system of nested subsets is output in this case. Finally we remark that different sets of extremal regions can be defined just by changing the ordering function. The MSER described in this section and used in the experiments should be more precisely called intensity induced MSERs.

2.5. Salient Region Detector

This detector is based on the pdf of intensity values computed over an elliptical region. Detection proceeds in two steps: first, at each pixel the entropy of the pdf is evaluated over the three-parameter family of ellipses centred on that pixel. The set of entropy extrema over scale and the corresponding ellipse parameters are recorded. These are candidate salient regions. Second, the candidate salient regions over the entire image are ranked using the magnitude of the derivative of the pdf with respect to scale. The top P ranked regions are retained.

In more detail, the elliptical region E centred on a pixel x is parameterized by its scale s (which specifies the major axis), its orientation θ (of the major axis), and the ratio of major to minor axes λ. The pdf of intensities p(I) is computed over E. The entropy H is then given by

$$
H = -\sum_{I} p(I) \log p(I)
$$

The set of extrema over scale in H is computed for the parameters s, θ, λ for each pixel of the image. For each extremum the derivative of the pdf $p(I; s, \theta, \lambda)$ with respect to s is computed as

$$
W = \frac{s^2}{2s - 1} \sum_{I} \left| \frac{\partial p(I; s, \theta, \lambda)}{\partial s} \right|,
$$

and the saliency Y of the elliptical region is computed as Y = H W. The regions are ranked by their saliency Y. Examples of detected regions are displayed in Fig. 4(c). More details about this method can be found in Kadir et al. (2004).

3. The Image Data Set

Figure 9 shows examples from the image sets used to evaluate the detectors. Five different changes in imaging conditions are evaluated: viewpoint changes (a) & (b); scale changes (c) & (d); image blur (e) & (f); JPEG compression (g); and illumination (h). In the cases of viewpoint change, scale change and blur, the same change in imaging conditions is
applied to two different scene types. This means that the effect of changing the image conditions can be separated from the effect of changing the scene type. One scene type contains homogeneous regions with distinctive edge boundaries (e.g. graffiti, buildings), and the other contains repeated textures of different forms. These will be referred to as structured versus textured scenes respectively. In the viewpoint change test the camera varies from a fronto-parallel view to one with significant foreshortening at approximately 60 degrees to the camera. The scale change and blur sequences are acquired by varying the camera zoom and focus respectively.
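As a rough illustration of the saliency measure used by the salient region detector (Section 2.5), the entropy of an intensity pdf over a region can be computed as follows. This is a simplified sketch: the search over the ellipse parameters (s, θ, λ) and the scale-derivative weighting W are omitted, and the histogram binning is an assumption.

```python
import numpy as np

def intensity_entropy(patch, bins=16):
    """Entropy H = -sum_I p(I) log p(I) of the intensity pdf over a
    region, estimated with a histogram. The real detector evaluates
    this over a family of ellipses per pixel and keeps extrema in s."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # convention: 0 log 0 = 0
    return float(-(p * np.log(p)).sum())

flat = np.full((9, 9), 128)              # uniform patch: one intensity
noisy = np.arange(81).reshape(9, 9) * 3  # intensities spread over bins
print(intensity_entropy(flat))
print(intensity_entropy(noisy) > intensity_entropy(flat))  # True
```

A uniform patch has zero entropy, while a patch with spread-out intensities scores higher, which is why entropy extrema mark visually "busy", potentially salient regions.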
Global land cover mapping at 30 m resolution: A POK-based operational approach

Jun Chen, Jin Chen, Anping Liao, Xin Cao, Lijun Chen, Xuehong Chen, Chaoying He, Gang Han, Shu Peng, Miao Lu, Weiwei Zhang, Xiaohua Tong, Jon Mills

National Geomatics Center of China, Beijing 100830, China
State Key Laboratory of Earth Surface Processes and Resource Ecology, Beijing Normal University, Beijing 100875, China
College of Surveying and Geo-Informatics, Tongji University, Shanghai 200092, China
School of Civil Engineering and Geosciences, Newcastle University, Newcastle upon Tyne NE1 7RU, United Kingdom

Article history: Received 6 August 2014; Received in revised form 30 August 2014; Accepted 3 September 2014; Available online xxxx

Keywords: Global land cover; Operational approach; Pixel-object-knowledge-based classification; GlobeLand30

Abstract: Global Land Cover (GLC) information is fundamental for environmental change studies, land resource management, sustainable development, and many other societal benefits. Although GLC data exists at spatial resolutions of 300 m and 1000 m, a 30 m resolution mapping approach is now a feasible option for the next generation of GLC products. Since most significant human impacts on the land system can be captured at this scale, a number of researchers are focusing on such products. This paper reports the operational approach used in such a project, which aims to deliver reliable data products. Over 10,000 Landsat-like satellite images are required to cover the entire Earth at 30 m resolution. To derive a GLC map from such a large volume of data necessitates the development of effective, efficient, economic and operational approaches. Automated approaches usually provide higher efficiency and thus more economic solutions, yet existing automated classification has been deemed ineffective because of the low classification accuracy achievable (typically below 65%) at global scale at 30 m resolution. As a result, an approach based on the
integration of pixel- and object-based methods with knowledge (POK-based) has been developed. To handle the classification process of 10 land cover types, a split-and-merge strategy was employed, i.e. firstly each class was identified in a prioritized sequence and then the results were merged together. For the identification of each class, a robust integration of pixel- and object-based classification was developed. To improve the quality of the classification results, a knowledge-based interactive verification procedure was developed with the support of web service technology. The performance of the POK-based approach was tested using eight selected areas with differing landscapes from five different continents. An overall classification accuracy of over 80% was achieved. This indicates that the developed POK-based approach is effective and feasible for operational GLC mapping at 30 m resolution.

© 2014 The Authors. Published by Elsevier B.V. on behalf of International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS). This is an open access article under the CC BY-NC-ND license (/licenses/by-nc-nd/3.0/).

1. Introduction

Information regarding land cover and its change over time is essential for a variety of societal needs, ranging from natural resources management, environmental studies and urban planning to sustainable development (Foley et al., 2005; Roger and Pielke, 2005; Running, 2008; Grimm et al., 2008; Zell et al., 2012; Sterling et al., 2013). Remote sensing has long been recognized as an effective tool for broad-scale land cover mapping (e.g., Carneggie and Lauer, 1966; Kushwaha, 1990; Townshend et al., 1991; Cihlar, 2000; Rogana and Chen, 2004; Hansen et al., 2013). As a result, a number of land cover datasets at a global scale have been developed with resolution ranging from 300 m to 1 km, using coarse resolution satellite imagery such as AVHRR, MODIS and MERIS (Hansen et al., 2000; Loveland et al., 2000; Friedl et al., 2002, 2010; Bartholomé and Belward, 2005; Bontemps et al., 2010). Although these GLC data products have
been widely used, their quality is far from satisfactory for many applications (Coppin et al., 2004; Croke et al., 2004; He et al., 2006; Herold et al., 2008; Gong, 2009; Goward et al., 2011; Verburg et al., 2011). Various researchers (e.g., Iwao et al., 2006; Gong, 2009; Fritz et al., 2010) have highlighted the shortfalls of these datasets, e.g. the considerably low accuracies and low level of agreement amongst themselves. Consequently, the demand for new GLC products with improved spatial resolution and accuracy has been increasingly recognized by the remote sensing community, e.g. the Group on Earth Observations (GEO) and the International Society for Photogrammetry and Remote Sensing (ISPRS) (Zell et al., 2012; Giri et al., 2013).

With the long-term archive and free availability of Landsat and similar image data, the development of GLC data products at 30 m resolution has become feasible. Such products have been considered a superior option for the next generation of GLC maps, since most significant human activities on the land system can be captured at this scale (Wulder et al., 2008; Giri et al., 2013). During the past two decades, the extraction of land cover information from Landsat-like imagery has been intensively studied, and a variety of automated and semi-automated methods/algorithms have been developed (e.g., Gong and Howarth, 1992; Ban, 2003; Coppin et al., 2004; Lu et al., 2004; Lu and Weng, 2007; Aitkenhead and Aalders, 2011; Chen et al., 2012; Huang and Jia, 2012; Ban and Jacob, 2013; Chen et al., 2013a,b). These have been applied to a number of national and regional land cover mapping projects using Landsat imagery (Liu et al., 1999; Xian et al., 2009; Johnson and Mueller, 2010; Hansen and Loveland, 2012). For instance, a set of 30 m land cover data
with 13 different classes was produced by MacDonald Dettwiler and Associates Information Systems LLC (MDA, 2014), which covers the USA and a large proportion of Africa and Asia.

Land cover mapping with 30 m resolution at a global scale is much more complex than at national or regional scale due to a number of factors, including the availability of good-quality imagery covering the land surface of the entire Earth (about 150 million km2) and the complex spectral and textural characterization of global landscapes. This makes the development of reliable 30 m GLC data products a very difficult task, as it requires a substantive level of technical innovation, as well as human and financial resources. This could be the reason why, so far, only global datasets with limited classes at this resolution have been reported (e.g. a global forest dataset at 30 m resolution was produced by Townshend et al. (2012) and Hansen et al. (2013)). Of course, forest data is very important for various applications; however, a 30 m GLC product with a more comprehensive set of land cover types is desirable for wider studies.

In 2010, China launched a GLC mapping project, the aim of which was to produce a 30 m GLC data product (GlobeLand30) with 10 classes for the years 2000 and 2010 within a four year period (Chen et al., 2011a). It was defined as an operational mapping project, with the production of reliable datasets as its clear objective. To achieve such an objective, it was necessary to utilize automated classification routines as much as possible. Therefore, the first attempt was to conduct an experimental evaluation of the usability of existing automated classification techniques (Gong et al., 2013). Four classifiers, i.e., Maximum Likelihood Classifier (MLC), J4.8 Decision Tree Classifier (DT), Random Forest Classifier (RF), and Support Vector Machine (SVM), were tested with more than 8000 images captured during the year 2000. It was found that the highest overall classification accuracy (OCA) was produced by SVM, at 64.9% (Gong et
al., 2013). Such low accuracy is possibly attributable to significant spectral confusion among different land cover types. This essentially meant that it was not feasible to make use of fully automated classification techniques for such an operational project. A variety of mapping strategies and classification approaches were therefore investigated for deriving reliable 30 m GLC datasets (Chen et al., 2012, 2014a; Hu et al., 2014; Liao et al., 2014; Tang et al., 2014).

This paper presents a pixel-object-knowledge-based (POK-based) classification approach, the primary methodology used to produce China's 30 m GLC data product (GlobeLand30). Section 2 examines the two most critical issues for operational 30 m GLC mapping. Section 3 introduces the key concepts of the POK approach proposed for an operational mapping strategy. Section 4 describes how pixel-based classifiers and object-based identification were integrated for per-class classification. Section 5 reports the use of knowledge for verifying and interactively improving mapping results. Section 6 presents the experimental results for selected areas and the accuracy assessment of the final GlobeLand30 product, as well as a comparison with other similar 30 m data products. Section 7 provides discussion and conclusions.

2. Critical issues for operational 30 m GLC mapping

Experimental and operational classifications are two different approaches for large area land cover mapping and monitoring (Defries and Townshend, 1999; Hansen and Loveland, 2012). The former concentrates on the development and performance testing of novel algorithms and models; the latter focuses on the development and delivery of reliable data products within a pre-defined time schedule. From the point of view of operational GLC mapping at 30 m resolution, the selection and/or development of appropriate classifiers suitable for spectral and textural characterization of complex landscapes, as well as the quality of the resulting data products, are two critical issues that need to be
addressed.

2.1. Appropriate classifiers suitable for characterization of complex landscapes

Taking into consideration the massive task of operational GLC mapping and the existing land cover classification systems, China's GLC mapping project adopted a classification scheme consisting of 10 first-level classes, namely water bodies, wetland, artificial surfaces, cultivated land, permanent snow/ice, forest, shrubland, grassland, bareland and tundra. Table 1 presents these first-level classes and examples of their typical appearance on Landsat imagery collected from different sites around the world. On one hand, 30 m remote sensing imagery allows the observation of human impact on the Earth surface through scrutiny of shapes, sizes and patterns. Braided rivers and lakes, small villages and airports, irrigated round cultivated land, and large-scale machinery agricultural patches can all be observed, among others. On the other hand, it can easily be observed that there is high spectral heterogeneity within a single land cover class and significant spectral confusion among different classes. For instance, clear water from a reservoir, a river with a high sediment content, and a eutrophic lake may have very diverse spectral reflectance. Conversely, some sub-classes of artificial surfaces, bareland and cropland have very similar spectral responses. The cultivated land class contains irrigated farmlands, paddy fields, greenhouse cultivated land, artificial tame pastures, economic cultivated land (such as grape, coffee, and palm), and abandoned arable lands. These may not have a unique spectral signature but have similarities with other land cover classes. This high spectral variation makes per-pixel classification much more difficult than might otherwise be anticipated (Lu and Weng, 2007). For reliable operational mapping, it is impossible to select an appropriate single per-pixel classifier and a suitable set of variables for the entire globe. Therefore, new methods and approaches need to be adopted or developed to
deal with the complex classification issues.

Image segmentation and object-based classification techniques have been developed to derive these structural elements by grouping pixels that have a relatively uniform spectral response into objects and identifying the real world land cover features (Ban et al., 2010; Blaschke, 2010; Malinverni et al., 2011). Errors associated with individual pixel misidentification may be avoided through this object-based classification approach (Aplin and Smith, 2011; Myint et al., 2011). It is natural to combine spectral and structural information for solving the problems related to the characterization of complex landscapes (Ryherd and Woodcock, 1996; Trias-Sanz et al., 2008; Hussain et al., 2013). However, the determination of suitable segmentation parameters and the incorporation of domain knowledge into the object identification process is extremely difficult, especially when the mapping area has a complex landscape with multi-scale patterns (Smith, 2013).

[Table 1. The 10 land cover classes and examples of their appearance on 30 m imagery.]

2.2. Assurance of data product quality

Reality consistency and logical consistency are used to evaluate the quality of geo-spatial data products (Heipke et al., 2008; Brisaboa et al., 2014; Chen et al., 2014b). For a land cover data product, the former refers to the thematic accuracy and up-to-dateness in comparison to the real world. The latter is related to the congruency of land cover features (or objects) within the same dataset, and between sequential datasets (Verburg et al., 2011; Robertson and King, 2011; Hansen and Loveland, 2012). Due to the errors
introduced by image data manipulation and classification, it is common to find inconsistency or imperfections in land cover datasets. Therefore, the verification of preliminary classification results is an important task to ensure a high level of data quality.

A number of factors impact on reality and logical consistency, such as the semantic interpretation of the adopted classification scheme, handling of the minimum mapping units, as well as time differences between the imagery used and the mapping time (Lu and Weng, 2007; Verburg et al., 2011; Costa et al., 2014). For instance, different understandings of the definition of wetland may lead to dissimilar results of interpretation from the same 30 m imagery and with the same mapping methodology (Comber et al., 2004), because the wetland class consists of several sub-types (lake swamp, river flooding wetlands, sea marsh, shrub/forest wetlands, mangrove forest, tidal flats/salt marshes). In addition, different individual interpretations of multi-epoch imagery might cause false changes. The reduction of omission and commission errors in land cover data products requires the formulation of data quality knowledge rules and the development of standardized verification procedures (Lu and Weng, 2007; Costa et al., 2014). However, the logical and temporal consistency of land cover category data is not as evident or explicit as that of topographic features (Gerke et al., 2004; Brisaboa et al., 2014; Chen et al., 2007). One possible solution is to develop advanced verification tools and interactive mapping procedures by integrating land cover knowledge and all available ancillary data.

3. A POK approach for operational mapping

To address the critical issues outlined in Section 2, a POK-based approach was proposed. Its key elements are the integration of pixel- and object-based classification approaches, optimal coverage of 30 m imagery for two baseline years, web-service oriented
integration of reference data, and knowledge-based verification for assuring the consistency and accuracy of 30 m GLC data products. An overview of the operational mapping strategy adopted is presented in Fig. 1.

3.1. Integrating pixel- and object-based classification techniques

As it is difficult to select or invent a classifier that is suitable for complex land cover mapping over large areas, the combination of different classification techniques has been investigated (e.g., Aitkenhead and Aalders, 2011; Hansen and Loveland, 2012). Specifically, the integration of pixel-based and object-based classifiers for large area land cover mapping has been explored by several authors (e.g. Myint et al., 2011; Malinverni et al., 2011; Robertson and King, 2011; Smith, 2013; Costa et al., 2014). In this project, the object-based technique was used to determine the spatial extent of land features with their structural/contextual information to form land objects. For any given land object, pixel-based classifiers were used to derive variables (such as vegetation indices) and to identify the attribute value with the help of available reference data and expert knowledge.

Hierarchical classification strategies have also been tested by several researchers with a series of per-class classifiers to minimize the effect of spectral confusion among different land cover classes (Frazier and Page, 2000; Sulla-Menashe et al., 2011; Smith, 2013). Here it was proposed to decompose the complex GLC mapping into a series of simpler per-class classifications. Different pixel-based variables and segmentation approaches were selected and utilized for characterizing different land cover classes. Water bodies were extracted first and the corresponding pixels were masked out before the next processing stage. Wetland was characterized next, followed by permanent snow/ice, artificial surfaces, cultivated land, forest, shrubland, grassland, bareland and tundra.

3.2. Optimal coverage of 30 m imagery for two baseline years

Landsat
TM/ETM+ images were selected as the primary data source for the two baseline years of 2000 and 2010. Imagery from the Chinese Environmental and Disaster satellite (HJ-1) was also used as supplementary data for the year 2010, as it has similar characteristics to the Landsat TM/ETM+ sensor in terms of spectral band set and spatial resolution (Hu et al., 2014).

The 30 m remotely sensed images were carefully selected to ensure optimal global coverage and minimal cloud contamination, and such that they were captured within the local vegetation growing season. In addition, various image data useful for classification and validation were collected, including MODIS-NDVI time series data for extraction of seasonal information of vegetation (Verbesselt et al., 2010). Landsat TM/ETM+ imagery provided by USGS was satisfactorily geo-referenced, meaning no further geometric registration was necessary. However, the HJ-1 imagery suffers from nonlinear distortion because of its wide-scene imaging capability (Zhang et al., 2012; Tang et al., 2014; Hu et al., 2014). Therefore, a specific geo-registration model that integrates the collinearity equations and Lagrange interpolation was developed for geometric processing of HJ-1 imagery. In total, 2640 HJ-1 scenes were orthorectified with a registration accuracy of less than two pixels, covering approximately 60% of the global land surface.

Radiometric correction was performed for both Landsat and HJ-1 imagery, including both atmospheric and topographic corrections. The atmospheric correction model of MODerate resolution atmospheric TRANsmission (MODTRAN) was used for Landsat imagery (Berk et al., 1999). A relative radiometric correction was adopted for HJ-1 imagery by selecting, as reference, corresponding atmospherically corrected Landsat imagery that covered the same area but was acquired in different years. For imagery covering mountainous areas with rough terrain, a new approach was developed to restore the radiometric information in areas of shadow cast by mountains by
using a "continuum removal" (CR) spectral processing technique without the aid of a DEM (Zhou et al., 2014). The CR-based approach makes full use of the spectral information derived from both the shaded pixels and their neighboring non-shaded pixels of the same land cover type.

Thick cloud cover and Landsat ETM+ Scan Line Corrector (SLC)-off failure occasionally resulted in poor quality or missing data in Landsat TM/ETM+ images. The neighborhood similar pixel interpolator (NSPI) approach was employed to remove thick clouds (Zhu et al., 2012) and fill gaps in SLC-off images (Chen et al., 2011b). Based on the assumption that the same-class neighboring pixels around the missing pixels caused by thick clouds or SLC-off failure have similar spectral characteristics, and that these neighboring and missing pixels exhibit similar patterns of spectral differences at different dates, NSPI interpolates the missing pixels by using information from ancillary imagery of good quality but acquired at a different date. The results of case studies indicate that NSPI can restore the value of missing pixels very accurately, and that it works well in heterogeneous regions. In addition, it can work well even if there is a relatively long time interval or significant spectral changes between the input and target images. The filled images appear spatially continuous without any obvious striping patterns (Chen et al., 2011b; Zhu et al., 2012; Huang, 2012).

3.3. Web-based reference data integration

Ancillary data have proven effective in improving classification accuracy, such as in the identification of land cover with higher spectral variation and distinctions between different land cover classes with similar spectral responses (Chen, 1984; Ehlers et al., 1989; Lu and Weng, 2007). At the global scale, there are a variety of reference data sets that can be used to support 30 m GLC mapping, such as existing GLC maps at coarser resolution, 30 m or higher resolution regional land
cover data, global DEM data (SRTM and ASTER DEM), global 1:1 million topographic data (Hayakawa et al., 2008), and ecological zones (Olson et al., 2001). Online-distributed geospatial data assets and services (such as Google Maps, Map World, and OpenStreetMap), as well as land cover related services (such as Geo-Wiki), also provide valuable external and interoperable ancillary sources of information (Fritz et al., 2012; Yu and Gong, 2012).

Ancillary data are less uniform than remotely sensed data, varying in format, accuracy and spatial resolution. In order to facilitate the use of such data and their incorporation into classification and verification processes, a bespoke web-based information service system was designed, developed and implemented by the Chinese GLC team. The system was used to integrate all the primary image data, reference data, ancillary data, and preliminary and final classification results (Han et al., 2014). Ancillary data were processed and published as web services according to OGC standards. The web-based system was connected to the image analysis systems, with data exchange facilitated by a shared user interface. Online tools were developed to support data browsing, information retrieval, comparison, geo-tagging, and verification. A special tool for change markup and reporting was developed following a publish/subscribe model for multi-tier applications.

3.4. Knowledge-based interactive verification

The verification of geo-spatial data is an important process to identify inconsistency and to remove, or minimize, errors in the datasets (Comber et al., 2004; Brisaboa et al., 2014). Knowledge-based verification has been explored for topographic mapping (Mills and Newton, 1996; Gerke et al., 2004; Chen et al., 2007) and land cover mapping (Comber et al., 2004; Verburg et al., 2011). A knowledge framework was proposed by Gao et al.
(2004) for the generalization of land cover maps with nature-based, culture-based and application-specific knowledge. However, a priori knowledge of land cover and its change at a global scale is not available (Lu and Weng, 2007; Smith, 2013). In order to facilitate logical consistency checking, knowledge about the geographical distribution of land cover was summarized and used to guide data product verification. With the support of a web-based system, classification errors (commissions and omissions) in land cover and its change were checked with a variety of ancillary data and prior knowledge. An allowable number of errors was defined for each land cover class on the basis of the minimum mapping units for standardized quality control. This was followed by an interactive improvement to remove identified errors. A more detailed explanation of the knowledge-based verification approach adopted is provided in Section 5.

3.5. The proposed POK approach

With the above considerations, a pixel-object-knowledge (POK) based classification approach was developed, as shown in Fig. 2. With the POK approach, the classification process of all 10 land cover types is handled with a split-and-merge strategy, i.e. firstly each class is determined in a prioritized sequence and then the results are merged together. The first step is called "per-class classification". In per-class classification, pixel-based classifiers and object-based identification approaches are integrated to determine the spatial extent and category of land cover features in each class.
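The split-and-merge, prioritized per-class strategy described above can be sketched in a few lines. This is a minimal illustration, not the project's production code: the class priority order is taken from Section 3.1 of the text, while the classifier interface, array layout and function names are our own assumptions.

```python
import numpy as np

# Prioritized sequence from Section 3.1: water first, tundra last.
PRIORITY = ["water", "wetland", "snow_ice", "artificial", "cultivated",
            "forest", "shrubland", "grassland", "bareland", "tundra"]

def classify_scene(image, per_class_classifiers):
    """Split-and-merge per-class classification (hypothetical interface).

    image: (bands, rows, cols) reflectance array.
    per_class_classifiers: dict mapping class name to a function that
        returns a boolean mask of pixels belonging to that class.
    """
    rows, cols = image.shape[1:]
    label = np.zeros((rows, cols), dtype=np.uint8)   # 0 = unclassified
    unassigned = np.ones((rows, cols), dtype=bool)

    for code, name in enumerate(PRIORITY, start=1):
        # "Split": identify one class at a time, restricted to pixels
        # not yet claimed by a higher-priority class.
        mask = per_class_classifiers[name](image) & unassigned
        label[mask] = code
        unassigned &= ~mask          # mask out assigned pixels
    return label                     # "merge": one combined label map
```

The masking step is what gives the sequence its meaning: a pixel that both the water and forest classifiers would claim is labelled water, because water is handled first.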
Both the spectral and multi-temporal signatures of the 30 m imagery are used for deriving variables. A knowledge-based interactive verification is then carried out to check and improve the reality and logical consistency of the data products with the support of web service technology.

For each class, the classification results of individual scenes were integrated into map sheets of dimensions 6° in longitude and 5° in latitude. In total, 847 map sheets were produced. Inconsistency and conflicts within the individual classification results and at strata boundaries were checked and corrected by operators. All these per-class classifications were aggregated afterwards to produce comprehensive datasets for the years 2000 and 2010.

4. Hybrid pixel- and object-based classification

Pixel-based classification may generate a large number of misclassified pixels (the "salt-and-pepper effect") due to spectral confusion between land cover types and spectral diversity within the same land cover type. Therefore, object-based identification was adopted to integrate the pixel-based classification results with segmented objects generated using eCognition (v8) software. In this process, scale parameters ranging from 10 to 50 (with an interval of 5) were used to obtain multiple scale segmentation results, because landscapes of different land cover types vary significantly in multiple scenes. Based on pixel-based per-class extraction results and multiple segmentation layers, the majority criterion (Lu and Weng, 2007; Ok and Akyurek, 2012; Costa et al., 2014) and auxiliary information (such as DEM, slope, and reference GLC products) were applied as a decision rule to automatically or manually label the objects. Table 2 provides an overview of the pixel-based classifiers and object-based identification methods for each land cover type.

4.1. Water bodies

In general, water bodies exhibit a distinct spectral feature in Landsat TM/ETM+ imagery compared to other land cover types. A supervised Maximum Likelihood Classification (MLC) was
employed based on the Normalized Difference Water Index (NDWI) and the Wetness component of the Tasseled Cap transformation (TC-W). For the majority of mapped areas, the method worked well to ensure the accuracy and efficiency of water mapping (Liao et al., 2014). In some areas, water bodies, which included clear water, green water and turbid water, displayed spectral diversity. Clear water displayed lower reflectance in all bands, while turbid water had higher reflectance due to the blend of mud and sand, and green water caused by eutrophication displayed spectral features similar to green vegetation to some extent. A prior-knowledge-based decision tree classifier was adopted to deal with the complexity of land surface waters (Sun et al., 2012).

With the results of image segmentation, first the percentage of water pixels (from the pixel-based classification results) within each image object was calculated, and then a percentage threshold was defined, such as 20%, 15% or 10%, which was then used to compare with the percentage of water pixels within each image object. In this project, 10% was selected as the percentage threshold to
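The water-body step combines a per-pixel index with an object-level percentage rule. A minimal sketch of that combination, assuming green and NIR reflectance arrays and a pre-computed segmentation label image; the 10% threshold comes from the text, while the simple NDWI thresholding here stands in for the paper's supervised MLC and decision tree classifiers, and all names are our own:

```python
import numpy as np

def ndwi(green, nir):
    """Normalized Difference Water Index: (green - NIR) / (green + NIR)."""
    return (green - nir) / np.maximum(green + nir, 1e-9)

def label_water_objects(green, nir, objects, ndwi_thresh=0.0, pct_thresh=0.10):
    """Mark a whole segmentation object as water when at least
    pct_thresh (10% in the project) of its pixels look like water."""
    water_pixels = ndwi(green, nir) > ndwi_thresh
    water_objects = []
    for obj_id in np.unique(objects):
        member = objects == obj_id
        if water_pixels[member].mean() >= pct_thresh:
            water_objects.append(int(obj_id))
    # Final per-pixel mask: every pixel of a water object is water,
    # which removes the per-pixel "salt-and-pepper" noise.
    return np.isin(objects, water_objects)
```

Promoting the decision from pixels to objects is what suppresses isolated misclassified pixels: a few stray water pixels inside a land object fall below the percentage threshold and are discarded.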
Global Land Cover Validation
Geodesy 2002 (geodetic parameters)
GEODESY
Your position on the ellipsoid is described by longitude, latitude, and ellipsoid height.
Geodetic Datums (geodetic coordinate systems)
A “Datum” is where you move an ellipsoid to fit the geoid in your area.
That is, a datum specifies how the ellipsoid is positioned to fit the local geoid.
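Since a position on a datum's ellipsoid is given as longitude, latitude and ellipsoid height, it can be converted to Earth-centered Cartesian (ECEF) coordinates. A minimal sketch using the WGS-84 ellipsoid constants; the function name is our own, and survey software performs this conversion internally:

```python
import math

# WGS-84 ellipsoid constants (a datum pairs an ellipsoid like this
# with a positioning of it relative to the geoid).
A = 6378137.0              # semi-major axis, metres
F = 1 / 298.257223563      # flattening
E2 = F * (2 - F)           # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Convert geodetic latitude/longitude (degrees) and ellipsoid
    height (metres) to ECEF X, Y, Z in metres."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)  # prime vertical radius
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + h) * math.sin(lat)
    return x, y, z
```

At the equator the result reduces to the semi-major axis; at the pole, to the semi-minor axis, which is the sanity check worth remembering.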
NAD-1983
1. Select "State Plane NAD-83" in Grids.
2. Select the appropriate zone in "Zone".
3. Check the distance unit.
4. You're good to go.
The geodetic parameters program is used to set the ellipsoid, projection, and coordinate transformation parameters for each survey project.
A "New Project" inherits the geodetic parameter settings from the previous project.
Projections – Types

Mercator (cylindrical)
Transverse Mercator (cylindrical)
Lambert 1 (conical)
Lambert 2 (conical)

Cylindrical projections are great for east-to-west areas around the equator.
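As an illustration of why a cylindrical projection suits east-to-west extents near the equator, the forward equations of the spherical Mercator projection are only a few lines. This is a sketch under a spherical-Earth assumption with our own radius constant; the ellipsoidal form used in practice adds eccentricity terms:

```python
import math

R = 6371000.0  # mean Earth radius in metres (spherical assumption)

def mercator_forward(lat_deg, lon_deg, lon0_deg=0.0):
    """Spherical Mercator forward projection.

    x grows linearly with longitude, so scale is true along the
    equator and stretches increasingly toward the poles.
    """
    lon = math.radians(lon_deg - lon0_deg)
    lat = math.radians(lat_deg)
    x = R * lon
    y = R * math.log(math.tan(math.pi / 4 + lat / 2))
    return x, y
```

Because x is exactly proportional to longitude, distortion along the equator is zero, while the y stretching at high latitudes is why a transverse or conical projection is preferred away from the equator.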
Synthetic aperture radar interferometry
Synthetic Aperture Radar InterferometryPAUL A.ROSEN,SCOTT HENSLEY,IAN R.JOUGHIN,MEMBER,IEEE,FUK K.LI,FELLOW,IEEE, SØREN N.MADSEN,SENIOR MEMBER,IEEE,ERNESTO RODRÍGUEZ,ANDRICHARD M.GOLDSTEINInvited PaperSynthetic aperture radar interferometry is an imaging technique for measuring the topography of a surface,its changes over time, and other changes in the detailed characteristics of the surface. By exploiting the phase of the coherent radar signal,interferom-etry has transformed radar remote sensing from a largely inter-pretive science to a quantitative tool,with applications in cartog-raphy,geodesy,land cover characterization,and natural hazards. This paper reviews the techniques of interferometry,systems and limitations,and applications in a rapidly growing area of science and engineering.Keywords—Geophysical applications,interferometry,synthetic aperture radar(SAR).I.I NTRODUCTIONThis paper describes a remote sensing technique gener-ally referred to as interferometric synthetic aperture radar (InSAR,sometimes termed IFSAR or ISAR).InSAR is the synthesis of conventional SAR techniques and interfer-ometry techniques that have been developed over several decades in radio astronomy[1].InSAR developments in recent years have addressed some of the limitations in conventional SAR systems and subsequently have opened entirely new application areas in earth system science studies.SAR systems have been used extensively in the past two decades for fine resolution mapping and other remote sensing applications[2]–[4].Operating at microwave frequencies, Manuscript received December4,1998;revised October24,1999.This work was supported by the National Imagery and Mapping Agency,the De-fense Advanced Research Projects Agency,and the Solid Earth and Natural Hazards Program Office,National Aeronautics and Space Administration (NASA).P.A.Rosen,S.Hensley,I.R.Joughin,F.K.Li,E.Rodríguez,and R. 
M.Goldstein are with the Jet Propulsion Laboratory,California Institute of Technology,Pasadena,CA91109USA.S.N.Madsen is with Jet Propulsion Laboratory,California Institute of Technology,Pasadena,CA91109USA and with the Technical University of Denmark,DK2800Lyngby,Denmark.Publisher Item Identifier S0018-9219(00)01613-3.SAR systems provide unique images representing the elec-trical and geometrical properties of a surface in nearly all weather conditions.Since they provide their own illumina-tion,SAR’s can image in daylight or at night.SAR data are increasingly applied to geophysical problems,either by themselves or in conjunction with data from other remote sensing instruments.Examples of such applications include polar ice research,land use mapping,vegetation,biomass measurements,and soil moisture mapping[3].At present,a number of spaceborne SAR systems from several countries and space agencies are routinely generating data for such re-search[5].A conventional SAR only measures the location of a target in a two-dimensional coordinate system,with one axis along the flight track(“along-track direction”)and the other axis defined as the range from the SAR to the target(“cross-track direction”),as illustrated in Fig.1.The target locations in a SAR image are then distorted relative to a planimetric view, as illustrated in Fig.2[4].For many applications,this alti-tude-dependent distortion adversely affects the interpretation of the imagery.The development of InSAR techniques has enabled measurement of the third dimension.Rogers and Ingalls[7]reported the first application of in-terferometry to radar,removing the“north–south”ambiguity in range–range rate radar maps of the planet Venus made from Earth-based antennas.They assumed that there were no topographic variations of the surface in resolving the ter,Zisk[8]could apply the same method to mea-sure the topography of the moon,where the radar antenna directivity was high so there was no ambiguity.The first report of an 
InSAR system applied to Earth observation was by Graham[9].He augmented a conven-tional airborne SAR system with an additional physical antenna displaced in the cross-track plane from the conven-tional SAR antenna,forming an imaging interferometer. By mixing the signals from the two antennas,the Graham interferometer recorded amplitude variations that repre-sented the beat pattern of the relative phase of the signals.0018–9219/00$10.00©2000IEEEPROCEEDINGS OF THE IEEE,VOL.88,NO.3,MARCH2000333Fig.1.Typical imaging scenario for an SAR system,depicted here as a shuttle-borne radar.The platform carrying the SAR instrument follows a curvilinear track known as the “along-track,”or “azimuth,”direction.The radar antenna points to the side,imaging the terrain below.The distance from the aperture to a target on the surface in the look direction is known as the “range.”The “cross-track,”or range,direction is defined along the range and is terraindependent.Fig. 2.The three-dimensional world is collapsed to two dimensions in conventional SAR imaging.After image formation,the radar return is resolved into an image in range-azimuth coordinates.This figure shows a profile of the terrain at constant azimuth,with the radar flight track into the page.The profile is cut by curves of constant range,spaced by the range resolution ofradar,defined as 1 =c=21f,where c is the speed of light and 1f is the range bandwidth of the radar.The backscattered energy from all surface scatterers within a range resolution element contribute to the radar return for that element.The relative phase changes with the topography of the surface as described below,so the fringe variations track the topographic contours.To overcome the inherent difficulties of inverting ampli-tude fringes to obtain topography,subsequent InSAR sys-tems were developed to record the complex amplitude and phase information digitally for each antenna.In this way,the relative phase of each image point could be reconstructed 
directly. The first demonstrations of such systems with an airborne platform were reported by Zebker and Goldstein [10], and with a spaceborne platform using SeaSAT data by Goldstein and colleagues [11], [12].

Today, over a dozen airborne interferometers exist throughout the world, spurred by commercialization of InSAR-derived digital elevation products and dedicated operational needs of governments, as well as by research. Interferometry using data from spaceborne SAR instruments is also enjoying widespread application, in large part because of the availability of suitable globally acquired SAR data from the ERS-1 and ERS-2 satellites operated by the European Space Agency, JERS-1 operated by the National Space Development Agency of Japan, RadarSAT-1 operated by the Canadian Space Agency, and SIR-C/X-SAR operated by the United States, German, and Italian space agencies. This review is written in recognition of this explosion in popularity and utility of this method.

The paper is organized to first provide an overview of the concepts of InSAR (Section II), followed by more detailed discussions on InSAR theory, system issues, and examples of applications. Section III provides a consistent mathematical representation of InSAR principles, including issues that impact processing algorithms and phenomenology associated with InSAR data. Section IV describes the implementation approach for various types of InSAR systems with descriptions of some of the specific systems that are either operational or planned in the next few years. Section V provides a broad overview of the applications of InSAR, including topographic mapping, ocean current measurement, glacier motion detection, earthquake and hazard mapping, and vegetation estimation and classification. Finally, Section VI provides our outlook on the development and impact of InSAR in remote sensing. Appendix A defines some of the common concepts and vocabulary used in the field of synthetic aperture radar that appear in this paper. The tables in
Appendix B list the symbols used in the equations in this paper and their definitions.

We note that four recently published review papers are complementary resources available to the reader. Gens and Vangenderen [13] and Madsen and Zebker [14] cover general theory and applications. Bamler and Hartl [15] review SAR interferometry with an emphasis on signal theoretical aspects, including mathematical imaging models, statistical properties of InSAR signals, and two-dimensional phase unwrapping. Massonnet and Feigl [16] give a comprehensive review of applications of interferometry to measuring changes of Earth's surface.

II. OVERVIEW OF INTERFEROMETRIC SAR

A. Interferometry for Topography

Fig. 3. Interferometric SAR for topographic mapping uses two apertures separated by a "baseline" to image the surface. The phase difference between the apertures for each image point, along with the range and knowledge of the baseline, can be used to infer the precise shape of the imaging triangle to derive the topographic height of the image point.

Fig. 3 illustrates the InSAR system concept. While radar pulses are transmitted from the conventional SAR antenna, radar echoes are received by both the conventional and an additional SAR antenna. By coherently combining the signals from the two antennas, the interferometric phase difference between the received signals can be formed for each imaged point. In this scenario, the phase difference is essentially related to the geometric path length difference to the image point, which depends on the topography. With knowledge of the interferometer geometry, the phase difference can be converted into an altitude for each image point. In essence, the phase difference provides a third measurement, in addition to the along- and cross-track location of the image point, or "target," to allow a reconstruction of the three-dimensional location of the targets. The InSAR approach for topographic mapping is similar in principle to the
conventional stereoscopic approach. In stereoscopy, a pair of images of the terrain is obtained from two displaced imaging positions. The "parallax" obtained from the displacement allows the retrieval of topography because targets at different heights are displaced relative to each other in the two images by an amount related to their altitudes [17]. The major difference between the InSAR technique and stereoscopy is that, for InSAR, the "parallax" measurements between the SAR images are obtained by measuring the phase difference between the signals received by two InSAR antennas. These phase differences can be used to determine the angle of the target relative to the baseline of the interferometric SAR directly. The accuracy of the InSAR parallax measurement is typically several millimeters to centimeters, being a fraction of the SAR wavelength, whereas the parallax measurement accuracy of the stereoscopic approach is usually on the order of the resolution of the imagery (several meters or more).

Typically, the post spacing of the InSAR topographic data is comparable to the fine spatial resolution of SAR imagery, while the altitude measurement accuracy generally exceeds stereoscopic accuracy at comparable resolutions. The registration of the two SAR images for the interferometric measurement, the retrieval of the interferometric phase difference, and subsequent conversion of the results into digital elevation models of the terrain can be highly automated, representing an intrinsic advantage of the InSAR approach. As discussed in the sections below, the performance of InSAR systems is largely understood both theoretically and experimentally. These developments have led to airborne and spaceborne InSAR systems for routine topographic mapping. The InSAR technique just described, using two apertures on a single platform, is often called "cross-track interferometry" (XTI) in the literature. Other terms are "single-track" and "single-pass" interferometry.

B. Interferometry for Surface Change

Another interferometric SAR technique was advanced by Goldstein and Zebker [18] for measurement of surface motion by imaging the surface at multiple times (Fig. 4). The time separation between the imaging can be a fraction of a second to years. The multiple images can be thought of as "time-lapse" imagery. A target movement will be detected by comparing the images. Unlike conventional schemes in which motion is detected only when the targets move more than a significant fraction of the resolution of the imagery, this technique measures the phase differences of the pixels in each pair of the multiple SAR images. If the flight path and imaging geometries of all the SAR observations are identical, any interferometric phase difference is due to changes over time of the SAR system clock, variable propagation delay, or surface motion in the direction of the radar line of sight.

In the first application of this technique described in the open literature, Goldstein and Zebker [18] augmented a conventional airborne SAR system with an additional aperture, separated along the length of the aircraft fuselage from the conventional SAR antenna. Given an antenna separation of roughly 20 m and an aircraft speed of about 200 m/s, the time between target observations made by the two antennas was about 100 ms. Over this time interval, clock drift and propagation delay variations are negligible. Goldstein and Zebker showed that this system was capable of measuring tidal motions in the San Francisco bay area with an accuracy of several cm/s. This technique has been dubbed "along-track interferometry" (ATI) because of the arrangement of two antennas along the flight track on a single platform. In the ideal case, there is no cross-track separation of the apertures, and therefore no sensitivity to topography.

C. General Interferometry: Topography and Change

ATI is merely a special case of "repeat-track interferometry" (RTI), which can be used to generate topography and motion. The orbits of several spaceborne SAR
satellites have been controlled in such a way that they nearly retrace themselves after several days. Aircraft can also be controlled to repeat flight paths accurately. If the repeat flight paths result in a cross-track separation and the surface has not changed between observations, then the repeat-track observation pair can act as an interferometer for topography measurement. For spaceborne systems, RTI is usually termed "repeat-pass interferometry" in the literature.

If the flight track is repeated perfectly such that there is no cross-track separation, then there is no sensitivity to topography, and radial motions can be measured directly as with an ATI system. Since the temporal separation between the observations is typically hours to days, however, the ability to detect small radial velocities is substantially better than the ATI system described above. The first demonstration of repeat-track interferometry for velocity mapping was a study of the Rutford ice stream in Antarctica, again by Goldstein and colleagues [19].

Fig. 4. An along-track interferometer maintains a baseline separated along the flight track such that surface points are imaged by each aperture within 1 s. Motion of the surface over the elapsed time is recorded in the phase difference of the pixels.

The radar aboard the ERS-1 satellite obtained several SAR images of the ice stream with near-perfect retracing so that there was no topographic signature in the interferometric phase. Goldstein et al. showed that measurements of the ice stream flow velocity of the order of 1 year (or 3

The section begins with a geometric interpretation of the interferometric phase, from which we develop the equations of height mapping and sensitivity and extend to motion mapping. We then move toward a signal theoretic interpretation of the phase to characterize the interferogram, which is the basic interferometric observable. From this we formulate the phase unwrapping and absolute phase determination problems. We finally move to a basic scattering theory formulation to discuss statistical properties of interferometric data and resulting phenomenology.

1) Basic Measurement Principles: A conventional SAR system resolves targets in the range direction by measuring the time it takes a radar pulse to propagate to the target and return to the radar. The along-track location is determined from the Doppler frequency shift that results whenever the relative velocity between the radar and target is not zero. Geometrically, this is the intersection of a sphere centered at the antenna with radius equal to the radar range and a cone with generating axis along the velocity vector and cone angle proportional to the Doppler frequency as shown in Fig. 6. A target in the radar image could be located anywhere on the intersection locus, which is a circle in the plane formed by the radar line of sight to the target and the vector pointing from the aircraft to nadir. To obtain three-dimensional position
information, an additional measurement of elevation angle is needed. Interferometry using two or more SAR images provides a means of determining this angle.

Interferometry can be understood conceptually by considering the signal return of elemental scatterers comprising each resolution element in an SAR image. A resolution element can be represented as a complex phasor of the coherent backscatter from the scattering elements on the ground and the propagation phase delay, as illustrated in Fig. 7. The backscatter phase delay is the net phase of the coherent sum of the contributions from all elemental scatterers in the resolution element, each with their individual backscatter phases and their differential path delays relative to a reference surface normal to the radar look direction. Radar images observed from two nearby antenna locations have resolution elements with nearly the same complex phasor return, but with a different propagation phase delay. In interferometry, the complex phasor information of one image is multiplied by the complex conjugate phasor information of the second image to form an "interferogram," effectively canceling the common backscatter phase in each resolution element, but leaving a phase term proportional to the differential path delay. This is a geometric quantity directly related to the elevation angle of the resolution element. Ignoring the slight difference in backscatter phase in the two images treats each resolution element as a point scatterer. For the next few sections we will assume point scatterers to consider only geometry.

The sign of the propagation phase delay is set by the desire for consistency with the Doppler frequency: the radar wavelength λ is defined in the reference frame of the transmitter, and the phase is determined by integration of the Doppler frequency. The sign of the interferometric phase is then set by the order of multiplication and conjugation in forming the interferogram. In this paper, we have elected the most common convention. Given two antennas, the interferometric phase is

φ = (2πp/λ)(ρ₂ − ρ₁)    (4)

where ρ₁ and ρ₂ are the ranges from the two antennas to the target and the factor p distinguishes one-way from two-way path differences, as defined below.

Fig. 7. The interferometric phase
difference is mostly due to the propagation delay difference. The (nearly) identical coherent phase from the different scatterers inside a resolution cell (mostly) cancels during interferogram formation.

collection. In standard mode, the radar transmits a signal out of one of the interferometric antennas only and receives the echoes through both antennas simultaneously. In "ping-pong" mode, the radar transmits alternately out of the top and bottom antennas and receives the radar echo only through the same antenna. Repeat-track interferometers are inherently in "ping-pong" mode.

It is important to appreciate that only the principal values of the phase, modulo 2π, are measured; this places constraints on the relative positioning of the antennas for making useful measurements. These issues are quantified below.

The interferometric phase as previously defined is proportional to the range difference from two antenna locations to a point on the surface. This range difference can be expressed in terms of the vector separating the two antenna locations, called the interferometric baseline. The range and azimuth position of the sensor associated with imaging a given scatterer depend on the portion of the synthetic aperture used to process the image (see Appendix A). Therefore the interferometric baseline depends on the processing parameters and is defined as the difference between the locations of the two antenna phase center vectors at the time when a given scatterer is imaged. The equation relating the scatterer position vector T, a reference position for the platform P, and the look vector is

T = P + ρ l̂

where ρ is the range to the scatterer and l̂ is the unit vector in the direction of the line of sight. The position P can be chosen arbitrarily but is usually taken as the position of one of the interferometer antennas. Interferometric height reconstruction is the determination of a target's position
vector from known platform ephemeris information, baseline information, and the interferometric phase. Assuming the phase is given by

φ = (2πp/λ)(ρ₂ − ρ₁)    (8)

with p = 2 for "ping-pong" mode systems and p = 1 for standard mode systems, where the subscripts refer to the antenna number, this expression can be simplified, when the range is much larger than the baseline, by Taylor-expanding (9) to first order to give, up to the sign fixed by the chosen convention,

φ ≈ (2πp/λ) B sin(θ − α)    (10)

where B is the baseline length and α is the angle the baseline makes with respect to a reference horizontal plane. Then (10) can be rewritten in terms of the look angle θ, the angle the line-of-sight vector makes with respect to nadir, shown in Fig. 9.

Fig. 9. SAR interferometry imaging geometry in the plane normal to the flight direction.

Fig. 10. Phase in the interferogram depicted as cycles of an electromagnetic wave propagating a differential distance, for the case p = 1. Phase in the interferogram is initially known only modulo 2π. Absolute phase determination is the process of determining the overall multiple of 2π, i.e., finding the integer k such that the total phase equals the measured principal value plus 2πk.

Fig. 12. (a) Radar brightness image of the Mojave desert near Fort Irwin, CA, derived from SIR-C C-band (5.6-cm wavelength) repeat-track data. The image extends about 20 km in range and 50 km in azimuth. (b) Phase of the interferogram of the area showing intrinsic fringe variability. The spatial baseline of the observations is about 70 m perpendicular to the line-of-sight direction. (c) Flattened interferometric phase assuming a reference surface at zero elevation above a spherical earth.

Fig. 12(b) shows an interferogram of the Fort Irwin, CA, area generated using data collected on two consecutive days of the SIR-C mission. In this figure, the image brightness represents the radar backscatter and the color represents the interferometric phase, with one cycle of color equal to a phase
change of 2π. Taking a unit vector pointing to a surface of constant elevation, the flattened phase is given by (12), with auxiliary quantities defined in (13) and (14), assuming a spherical Earth; the local incidence angle is defined relative to the spherical surface. The fringe frequency is greater on slopes facing toward the radar and is less on slopes facing away from the radar. Also from (16), the fringe frequency is inversely proportional to the factor given in (20).

These equations appear to be nonlinear, but by choosing an appropriate local coordinate basis, they can be readily solved [48]. To illustrate, we let P be the platform position vector and specialize to the two-dimensional case, where the measured range relates the platform position to the target (range sphere) through extension of the unit look vector; once the look angle is estimated, the look vector can be constructed.

It is immediate from the above expressions that reconstruction of the scatterer position vector depends on knowledge of the platform location, the interferometric baseline length, orientation angle, and the interferometric phase. To generate accurate topographic maps, radar interferometry places stringent requirements on knowledge of the platform and baseline vectors. In the above discussion, atmospheric effects are neglected. Appendix B develops the correction for atmospheric refraction in an exponential, horizontally stratified atmosphere, showing that the bulk of the correction can be made by altering the effective speed of light through the refractive atmosphere. Refractivity fluctuation due to turbulence in the atmosphere is a minor effect for two-aperture single-track interferometers [24].

To illustrate these theoretical concepts in a more concrete way, we show in Fig. 13 a block diagram of the major steps in processing data for topographic mapping applications, from raw data collection to generation of a digital topographic model. The description assumes simultaneous collection of the two interferometric channels; however, with minor modification, the procedure outlined applies to processing of repeat-track data as well.

B. Sensitivity Equations and Accuracy

1) Sensitivity Equations
and Error Model: In design tradeoff studies of InSAR systems, it is often convenient to know how interferometric performance varies with system parameters and noise characteristics. Sensitivity equations are derived by differentiating the basic interferometric height reconstruction equation (6) with respect to the parameters needed to determine the target position, including the baseline length, orientation, and platform position; the resulting expressions are given in [24]. Platform position and velocity errors result in position errors along a parallel vector. Since the look vector in an interferometric mapping system has components both parallel and perpendicular to nadir, baseline and phase errors contribute simultaneously to planimetric and height errors.

Fig. 13. Block diagram showing the major steps in interferometric processing to generate topographic maps. Data for each interferometric channel are processed to full resolution images using the platform motion information to compensate the data for perturbations from a straight-line path. One of the complex images is resampled to overlay the other, and an interferogram is formed by cross-multiplying images, one of which is conjugated. The resulting interferogram is averaged to reduce noise. Then, the principal value of the phase for each complex sample is computed. To generate a continuous height map, the two-dimensional phase field must be unwrapped. After the unwrapping process, an absolute phase constant is determined. Subsequently, the three-dimensional target location is performed with corrections applied to account for tropospheric effects. A relief map is generated in a natural coordinate system aligned with the flight path. Gridded products may include the target heights, the SAR image, a correlation map, and a height error map.

Fig. 14. Sensitivity tree showing the sensitivity of target location to various parameters used in interferometric height reconstruction. See Fig. 15 for definitions of angles.

For broadside mapping geometries, where the look vector is
vector,the velocity errors do not contribute to target position errors.Fig.14graphically de-picts the sensitivity dependencies,according to the geometry defined in Fig.15.To highlight the essential features of the interferometric sensitivity,we simplify the geometry to a flat earth,withtheFig.15.Baseline and look angle geometry as used in sensitivity formulas.baseline in a plane perpendicular to the velocity vector.With this geometry the baseline and velocity vectors are givenby。
Professional English for GIS
Topics which are usually included in the contents of geo-referencing are: geodesy, map projections, and coordinate systems.
3.1 Geodesy

Geodesy is the science of measuring and monitoring the size and shape of the Earth and the location of points on its surface. The elements of geodesy are the figure of the Earth, datums, and gravity.
The Earth's shape is nearly spherical, with a radius of about 3,963 miles (6,378 km), but its surface is very irregular. Mountains and valleys make actually measuring this surface impossible, because an infinite amount of data would be needed.
To measure the Earth and avoid the problems that places like the Grand Canyon present, geodesists use a theoretical mathematical surface called the ellipsoid. Because the ellipsoid exists only in theory and not in real life, it can be completely smooth and does not take any irregularities -- such as mountains or valleys -- into account.
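As a concrete illustration of how a reference ellipsoid is used in practice, the sketch below converts geodetic latitude, longitude, and height on the WGS84 ellipsoid (a standard choice of reference ellipsoid; its published constants are used here) to Earth-centered, Earth-fixed Cartesian coordinates. The function name is our own; this is a minimal sketch, not a complete geodesy library.

```python
import math

# WGS84 reference ellipsoid parameters (standard published values).
WGS84_A = 6378137.0               # semi-major axis (m)
WGS84_F = 1.0 / 298.257223563     # flattening

def geodetic_to_ecef(lat_deg, lon_deg, h=0.0):
    """Convert geodetic coordinates on the WGS84 ellipsoid to
    Earth-centered, Earth-fixed (ECEF) Cartesian coordinates in meters."""
    a, f = WGS84_A, WGS84_F
    e2 = f * (2.0 - f)                                 # first eccentricity squared
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    n = a / math.sqrt(1.0 - e2 * math.sin(lat) ** 2)   # prime vertical radius
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - e2) + h) * math.sin(lat)
    return x, y, z
```

On the equator at the prime meridian this returns a point one semi-major axis from the Earth's center; at the pole the distance shrinks to the semi-minor axis, reflecting the flattening of the ellipsoid.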
English Essay: Research Methods for Land Subsidence
Title: Research Methods for Land Subsidence
Land subsidence, the gradual sinking of the Earth's surface, is a complex phenomenon with multifaceted causes and impacts. Understanding and mitigating land subsidence require comprehensive research methodologies. In this essay, we delve into various research methods employed to study land subsidence.

1. Geodetic Surveys: Geodetic surveys involve precise measurements of the Earth's surface using techniques such as GPS (Global Positioning System), GNSS (Global Navigation Satellite System), and leveling. These surveys provide valuable data on ground deformation over time, aiding in the identification and monitoring of subsidence-prone areas.

2. Interferometric Synthetic Aperture Radar (InSAR): InSAR is a remote sensing technique that utilizes satellite-based radar to detect ground movements with millimeter-scale accuracy. By analyzing interferograms generated from multiple radar images, researchers can map land subsidence over large areas and monitor its spatial and temporal evolution.

3. Groundwater Monitoring: Land subsidence often results from excessive groundwater extraction, causing aquifer compaction and subsequent surface sinking. Groundwater monitoring involves measuring water levels in wells and aquifers to assess groundwater depletion rates and their correlation with subsidence.

4. Geotechnical Investigations: Geotechnical investigations focus on understanding the subsurface soil properties and geological conditions that contribute to land subsidence. Techniques such as borehole drilling, soil sampling, and geophysical surveys help characterize the subsurface and identify potential triggers of subsidence, such as clay layer compression or underground mining activities.

5. Numerical Modeling: Numerical modeling plays a crucial role in simulating and predicting land subsidence phenomena. The finite element method (FEM) and finite difference method (FDM) are commonly used numerical techniques to simulate groundwater flow, soil deformation, and surface subsidence processes.
These models integrate geospatial data, hydrogeological parameters, and mechanical properties to simulate the complex interactions driving land subsidence.

6. Remote Sensing and Geographic Information Systems (GIS): Remote sensing data, including optical imagery, thermal infrared, and LiDAR (Light Detection and Ranging), provide valuable information for land subsidence analysis. GIS platforms facilitate spatial data management, visualization, and spatial analysis, enabling researchers to integrate multi-source data for comprehensive subsidence assessment.

7. Satellite Altimetry: Satellite altimetry measures variations in sea surface height, which indirectly reflect changes in land elevation due to subsidence. By analyzing long-term satellite altimetry data, researchers can estimate regional land subsidence rates in coastal areas and low-lying regions affected by sea-level rise and groundwater extraction.

8. Collaborative Monitoring Networks: Collaborative monitoring networks involve partnerships between government agencies, research institutions, and local communities to establish comprehensive subsidence monitoring programs. These networks leverage resources, expertise, and data-sharing mechanisms to enhance subsidence research, raise public awareness, and support informed decision-making for sustainable land use planning.

In conclusion, the study of land subsidence requires a multidisciplinary approach encompassing geodesy, remote sensing, hydrogeology, geotechnical engineering, and numerical modeling. By employing an array of research methods, scientists can gain insights into the causes, mechanisms, and impacts of land subsidence, ultimately contributing to effective mitigation strategies and sustainable land management practices.
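As a minimal illustration of how methods 1, 2, and 5 above combine in practice, the sketch below converts an unwrapped repeat-pass InSAR phase to line-of-sight displacement and fits a linear subsidence rate to a displacement time series by ordinary least squares. Function names are illustrative; the sign convention for displacement varies between processing systems.

```python
import math

def los_displacement(phi, wavelength):
    """Convert an unwrapped repeat-pass interferometric phase (rad) to
    line-of-sight displacement (m). The two-way radar path gives the
    factor of 4*pi in the denominator."""
    return wavelength * phi / (4.0 * math.pi)

def subsidence_rate(times, displacements):
    """Ordinary least-squares slope of displacement versus time,
    i.e., the average subsidence rate (m per time unit)."""
    n = len(times)
    mean_t = sum(times) / n
    mean_d = sum(displacements) / n
    num = sum((t - mean_t) * (d - mean_d) for t, d in zip(times, displacements))
    den = sum((t - mean_t) ** 2 for t in times)
    return num / den
```

For example, a displacement series that drops by 10 mm per epoch yields a fitted rate of -0.01 m per epoch; a full phase cycle (2π) at C-band (about 5.6 cm wavelength) corresponds to 2.8 cm of line-of-sight change.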
Common English Vocabulary for Geophysics
raypath: 射线路径
ray tracing: 射线追踪
real time: 实时
record section: 记录剖面
reflection coefficient: 反射系数
reflection survey: 反射法勘探
refraction: 折射
refraction survey: 折射法勘探
refraction wave: 折射波
residual normal moveout: 剩余正常时差
reverberation: 交混回响
reverse migration: 逆偏移
Ricker wavelet: 雷克子波
rms velocity: 均方根速度
R-wave: 瑞雷波
scattering: 散射
secondary wave: 次波
SEG A, SEG B, or SEG C: 标准磁带格式
seismic: 地震的
seismic discontinuity: 地震不连续面
seismic-electric effect: 震电效应
seismic survey: 地震勘探
seismogram: 震相图
seismograph: 地震仪，地震检波器
seismologist: 地震学家，地震工作者
seismology: 地震学
shear modulus: 剪切模量
shear wave: 剪切波
shotpoint: 炮点
shotpoint gap: 炮点间隙
S/N: 信噪比
Snell's law: 斯奈尔定律
sparker: 电火花震源
spherical divergence: 球面扩散
stack: 叠加
ground mixing: 地面混波
instrument mixing: 仪器混波
vertical stacking: 垂直叠加
uphole stacking: 井口时间叠加
common-depth-point (CDP) stacking: 共深度点叠加
common-offset stacking: 同炮检距叠加
velocity filtering: 速度滤波
coherency filtering/auto-picking: 相干滤波/自动拾取
diversity stacking: 相异叠加
automigrating: 自动偏移
stacking velocity: 叠加速度
Stoneley wave: 斯通利波
streamer: 海上拖缆
S-wave: S-波
synthetic seismogram: 合成地震记录
tangential wave: 切向波
trace equalization: 道均衡
trace gather: 道选排，道集
trace integration: 道积分，道综合
transformed wave: 变换波
trough: 波谷
velocity analysis: 速度分析
velocity filter: 速度滤波
velocity inversion: 速度反转
velocity spectrum: 速度谱
velocity survey: 速度测量
wave equation: 波动方程
wave form: 波形
wavefront: 波阵面
wave impedance: 波阻抗
wide-angle reflection: 广角反射
Young's modulus: 杨氏模量
zero-phase: 零相位
Zoeppritz's equations: 佐普里兹方程
acoustic log: 声波测井
amplitude log: 声幅测井
borehole gravimeter: 井下重力仪
borehole log: 测井曲线
calibrate: 校正
densilog: 密度测井
density log: 密度测井
dip log: 地层倾角测井
dipmeter: 地层倾角测井
electrical log: 电测井
gamma-gamma log: 伽马-伽马测井
gamma-ray log: γ射线测井
induction log: 感应测井
microlog: 微电阻率测井
neutron log: 中子测井
radioactivity log: 放射性测井
sonic log: 声波测井
spectral log: 能谱测井
velocity log: 声速测井
well log: 钻井记录，测井
Ampere's law: 安培定律
angular frequency: 角频率
anomaly: 异常
anticline: 背斜
array: 排列
autocorrelation: 自相关
background: 背景值
band-limited function: 频截函数
band-pass: 带通
band-reject filter: 带阻滤波器
bandwidth: 带宽
base of weathering: 低速带底面
boundary condition: 边界条件
boundary-value problem: 边值问题
byte: 字节
characteristic value (eigenvalue): 特性值/特征值
complex number: 复数
contour: 等值线
contour interval: 等值线间隔
convolution: 褶积
crust: 地壳
curl: 旋度
dB (decibel): 分贝
Latitude and Longitude: Original English Text with Translation
Latitude and Longitude

Any location on Earth is described by two numbers--its latitude and its longitude. If a pilot or a ship's captain wants to specify position on a map, these are the "coordinates" they would use.

Actually, these are two angles, measured in degrees, "minutes of arc" and "seconds of arc." These are denoted by the symbols (°, ', "); e.g. 35° 43' 9" means an angle of 35 degrees, 43 minutes and 9 seconds (do not confuse this with the notation (', ") for feet and inches!). A degree contains 60 minutes of arc and a minute contains 60 seconds of arc--and you may omit the words "of arc" where the context makes it absolutely clear that these are not units of time.

Calculations often represent angles by small letters of the Greek alphabet, and that way latitude will be represented by λ (lambda, Greek L), and longitude by φ (phi, Greek F). Here is how they are defined.

PLEASE NOTE: Charts used in ocean navigation often use the OPPOSITE notation--λ for LONGITUDE and φ for LATITUDE. The convention followed here resembles the one used by mathematicians in 3 dimensions for spherical polar coordinates.

Imagine the Earth was a transparent sphere (actually the shape is slightly oval; because of the Earth's rotation, its equator bulges out a little). Through the transparent Earth (drawing) we can see its equatorial plane, and at its middle the point O, the center of the Earth.

To specify the latitude of some point P on the surface, draw the radius OP to that point. Then the elevation angle of that point above the equator is its latitude λ--northern latitude if north of the equator, southern (or negative) latitude if south of it.

Latitude

In geography, latitude (φ) is a geographic coordinate that specifies the north–south position of a point on the Earth's surface. Latitude is an angle (defined below) which ranges from 0° at the Equator to 90° (North or South) at the poles. Lines of constant latitude, or parallels, run east–west as circles parallel to the equator.
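The degree-minute-second notation introduced above converts to decimal degrees by dividing minutes by 60 and seconds by 3600. A short sketch (function names are our own, not from the text):

```python
def dms_to_decimal(deg, minutes, seconds):
    """Convert degrees, minutes of arc, and seconds of arc to decimal degrees.
    A degree contains 60 minutes of arc; a minute contains 60 seconds of arc."""
    sign = -1.0 if deg < 0 else 1.0
    return sign * (abs(deg) + minutes / 60.0 + seconds / 3600.0)

def decimal_to_dms(value):
    """Convert decimal degrees back to (degrees, minutes, seconds)."""
    sign = -1 if value < 0 else 1
    value = abs(value)
    d = int(value)
    m = int((value - d) * 60)
    s = (value - d - m / 60.0) * 3600.0
    return sign * d, m, s
```

For instance, the angle 35° 43' 9" used as an example above corresponds to roughly 35.7192 decimal degrees.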
Latitude is used together with longitude to specify the precise location of features on the surface of the Earth. Two levels of abstraction are employed in the definition of these coordinates. In the first step the physical surface is modelled by the geoid, a surface which approximates the mean sea level over the oceans and its continuation under the land masses. The second step is to approximate the geoid by a mathematically simpler reference surface. The simplest choice for the reference surface is a sphere, but the geoid is more accurately modelled by an ellipsoid.

The definitions of latitude and longitude on such reference surfaces are detailed in the following sections. Lines of constant latitude and longitude together constitute a graticule on the reference surface. The latitude of a point on the actual surface is that of the corresponding point on the reference surface, the correspondence being along the normal to the reference surface which passes through the point on the physical surface. Latitude and longitude together with some specification of height constitute a geographic coordinate system as defined in the specification of the ISO 19111 standard.

Since there are many different reference ellipsoids the latitude of a feature on the surface is not unique: this is stressed in the ISO standard which states that "without the full specification of the coordinate reference system, coordinates (that is latitude and longitude) are ambiguous at best and meaningless at worst". This is of great importance in accurate applications, such as GPS, but in common usage, where high accuracy is not required, the reference ellipsoid is not usually stated.

In English texts the latitude angle, defined below, is usually denoted by the Greek lower-case letter phi (φ or ɸ).
It is measured in degrees, minutes and seconds or decimal degrees, north or south of the equator. Measurement of latitude requires an understanding of the gravitational field of the Earth, either for setting up theodolites or for determination of GPS satellite orbits. The study of the figure of the Earth together with its gravitational field is the science of geodesy. These topics are not discussed in this article. (See for example the textbooks by Torge and by Hofmann-Wellenhof and Moritz.) This article relates to coordinate systems for the Earth: it may be extended to cover the Moon, planets and other celestial objects by a simple change of nomenclature.

[How can one define the angle between a line and a plane, you may well ask? After all, angles are usually measured between two lines! Good question. We must use the angle which completes it to 90 degrees, the one between the given line and one perpendicular to the plane. Here that would be the angle (90° − λ) between OP and the Earth's axis, known as the co-latitude of P.]

On a globe of the Earth, lines of latitude are circles of different size. The longest is the equator, whose latitude is zero, while at the poles--at latitudes 90° north and 90° south (or −90°)--the circles shrink to a point.

Longitude

Longitude is a geographic coordinate that specifies the east-west position of a point on the Earth's surface.
It is an angular measurement, usually expressed in degrees and denoted by the Greek letter lambda (λ). Meridians (lines running from the North Pole to the South Pole) connect points with the same longitude. By convention, one of these, the Prime Meridian, which passes through the Royal Observatory, Greenwich, England, was allocated the position of zero degrees longitude. The longitude of other places is measured as the angle east or west from the Prime Meridian, ranging from 0° at the Prime Meridian to +180° eastward and −180° westward. Specifically, it is the angle between a plane containing the Prime Meridian and a plane containing the North Pole, South Pole and the location in question. (This forms a right-handed coordinate system with the z axis (right hand thumb) pointing from the Earth's center toward the North Pole and the x axis (right hand index finger) extending from the Earth's center through the equator at the Prime Meridian.)

A location's north–south position along a meridian is given by its latitude, which is approximately the angle between the local vertical and the plane of the Equator.

If the Earth were perfectly spherical and homogeneous, then the longitude at a point would be equal to the angle between a vertical north–south plane through that point and the plane of the Greenwich meridian. Everywhere on Earth the vertical north–south plane would contain the Earth's axis. But the Earth is not homogeneous, and has mountains--which have gravity and so can shift the vertical plane away from the Earth's axis. The vertical north–south plane still intersects the plane of the Greenwich meridian at some angle; that angle is the astronomical longitude, calculated from star observations.
The longitude shown on maps and GPS devices is the angle between the Greenwich plane and a not-quite-vertical plane through the point; the not-quite-vertical plane is perpendicular to the surface of the spheroid chosen to approximate the Earth's sea-level surface, rather than perpendicular to the sea-level surface itself.

On the globe, lines of constant longitude ("meridians") extend from pole to pole, like the segment boundaries on a peeled orange. Every meridian must cross the equator. Since the equator is a circle, we can divide it--like any circle--into 360 degrees, and the longitude φ of a point is then the marked value of that division where its meridian meets the equator.

What that value is depends, of course, on where we begin to count--on where zero longitude is. For historical reasons, the meridian passing the old Royal Astronomical Observatory in Greenwich, England, is the one chosen as zero longitude. Located at the eastern edge of London, the British capital, the observatory is now a public museum, and a brass strip stretching across its yard marks the "prime meridian." Tourists often get photographed as they straddle it--one foot in the eastern hemisphere of the Earth, the other in the western hemisphere.

A line of longitude is also called a meridian, derived from the Latin, from meri, a variation of "medius" which denotes "middle", and diem, meaning "day." The word once meant "noon", and times of the day before noon were known as "ante meridian", while times after it were "post meridian." Today's abbreviations a.m. and p.m. come from these terms, and the Sun at noon was said to be "passing meridian".
All points on the same line of longitude experienced noon (and any other hour) at the same time and were therefore said to be on the same "meridian line", which became "meridian" for short.

About Time--Local and Universal

Two important concepts related to latitude and (especially) longitude are local time (LT) and universal time (UT).

Local time is actually a measure of the position of the Sun relative to a locality. At 12 noon local time the Sun passes to the south and is furthest from the horizon (northern hemisphere). Somewhere around 6 am it rises, and around 6 pm it sets. Local time is what you and I use to regulate our lives locally, our work times, meals and sleep-times.

But suppose we wanted to time an astronomical event--e.g. the time when the 1987 supernova was first detected. For that we need a single agreed-on clock, marking time world-wide, not tied to our locality. That is universal time (UT), which can be defined (with some slight imprecision, no concern here) as the local time in Greenwich, England, at the zero meridian.

Local Time (LT) and Time Zones

Longitudes are measured from zero to 180° east and 180° west (or −180°), and both 180-degree longitudes share the same line, in the middle of the Pacific Ocean.

As the Earth rotates around its axis, at any moment one line of longitude--"the noon meridian"--faces the Sun, and at that moment it will be noon everywhere on it. After 24 hours the Earth has undergone a full rotation with respect to the Sun, and the same meridian again faces noon. Thus each hour the Earth rotates by 360/24 = 15 degrees.

When at your location the time is 12 noon, 15° to the east the time is 1 p.m., for that is the meridian which faced the Sun an hour ago.
On the other hand, 15° to the west the time is 11 a.m., for in an hour's time that meridian will face the Sun and experience noon.

In the middle of the 19th century, each community across the US defined in this manner its own local time, by which the Sun, on the average, reached the farthest point from the horizon (for that day) at 12 o'clock. However, travelers crossing the US by train had to re-adjust their watches at every city, and long-distance telegraph operators had to coordinate their times. This confusion led railroad companies to adopt time zones, broad strips (about 15° wide) which observed the same local time, differing by 1 hour from neighboring zones, and the system was adopted by the nation as a whole.

The continental US has 4 main time zones--eastern, central, mountain and Pacific--plus several more for Alaska, the Aleutian Islands and Hawaii. Canadian provinces east of Maine observe Atlantic time; you may find those zones outlined in your telephone book, on the map giving area codes. Other countries of the world have their own time zones; only Saudi Arabia uses local times, because of religious considerations.

In addition, the clock is generally shifted one hour forward between April and October. This "daylight saving time" allows people to take advantage of earlier sunrises without shifting their working hours. By rising earlier and retiring sooner, you make better use of the sunlight of the early morning, and you can enjoy sunlight one hour longer in late afternoon.

The Date Line and Universal Time (UT)

Suppose it is noon where you are and you proceed west--and suppose you could travel instantly to wherever you wanted. Fifteen degrees to the west the time is 11 a.m., 30 degrees to the west, 10 a.m., 45 degrees--9 a.m., and so on. Keeping this up, 180 degrees away one should reach midnight, and still further west it is the previous day.
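The 15-degrees-per-hour arithmetic used in these examples can be sketched in a few lines. A toy Python helper (our own, ignoring the political boundaries of real time zones and daylight saving time):

```python
def solar_time_offset_hours(longitude_deg):
    """Offset of mean solar time from Greenwich, in hours.

    Positive longitudes (east) run ahead of Greenwich; the Earth
    rotates 360/24 = 15 degrees per hour.
    """
    return longitude_deg / 15.0

print(solar_time_offset_hours(15))    # 15° east: 1.0 hour ahead
print(solar_time_offset_hours(-75))   # 75° west: -5.0 hours (behind)
```

Real civil time zones follow national borders rather than exact meridians, so this is the idealized "local solar time" of the text, not clock time.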
This way, by the time we have covered 360 degrees and have come back to where we are, the time should be noon again--yesterday noon. Hey--wait a minute! You cannot travel from today to the same time yesterday!

We got into trouble because longitude determines only the hour of the day--not the date, which is determined separately. To avoid the sort of problem encountered above, the international date line has been established--most of it following the 180th meridian--where by common agreement, whenever we cross it the date advances one day (going west) or goes back one day (going east). That line passes the Bering Strait between Alaska and Siberia, which thus have different dates, but for most of its course it runs in mid-ocean and does not inconvenience any local time keeping.

Astronomers, astronauts and people dealing with satellite data may need a time schedule which is the same everywhere, not tied to a locality or time zone. The Greenwich mean time, the astronomical time at Greenwich (averaged over the year), is generally used here. It is sometimes called Universal Time (UT).

Right Ascension and Declination

The globe of the heavens resembles the globe of the Earth, and positions on it are marked in a similar way, by a network of meridians stretching from pole to pole and of lines of latitude perpendicular to them, circling the sky. To study some particular galaxy, an astronomer directs the telescope to its coordinates.

On Earth, the equator is divided into 360 degrees, with the zero meridian passing Greenwich and with the longitude angle φ measured east or west of Greenwich, depending on where the corresponding meridian meets the equator. In the sky, the equator is also divided into 360 degrees, but the count begins at one of the two points where the equator cuts the ecliptic--the one which the Sun reaches around March 21.
It is called the vernal equinox ("vernal" means related to spring) or sometimes the First Point of Aries, because in ancient times, when first observed by the Greeks, it was in the zodiac constellation of Aries, the ram. It has since then moved, as is discussed in the later section on precession.

The celestial globe, however, uses terms and notations which differ somewhat from those of the globe of the Earth. Meridians are marked by the angle α (alpha, Greek A), called right ascension, not longitude. It is measured from the vernal equinox, but only eastward, and instead of going from 0 to 360 degrees, it is specified in hours and other divisions of time, each hour equal to 15 degrees.

Similarly, where on Earth latitude goes from 90° north to 90° south (or −90°), the corresponding celestial angle, called declination and denoted by the letter δ (delta, Greek small D), runs from +90° at the north celestial pole through 0° on the celestial equator to −90° at the south celestial pole. (The related co-latitude, the angle measured from the polar axis, is 0° at the north pole, 90° on the equator, and 180° at the south pole.) The two angles (α, δ) used in specifying (for instance) the position of a star are jointly called its celestial coordinates.
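Because right ascension is quoted in hours (1 h = 15°) and declination mirrors latitude's ±90° range, converting celestial coordinates to plain degrees takes only a line or two. An illustrative Python sketch (the helper names are ours):

```python
def ra_hours_to_degrees(hours, minutes=0, seconds=0):
    """Right ascension in (h, m, s) to degrees east of the vernal equinox."""
    return (hours + minutes / 60 + seconds / 3600) * 15

def colatitude_from_declination(declination_deg):
    """Angle from the north celestial pole: 0° at the pole,
    90° on the celestial equator, 180° at the south celestial pole."""
    return 90 - declination_deg

print(ra_hours_to_degrees(6))             # 90.0 degrees
print(colatitude_from_declination(-90))   # 180 (south celestial pole)
```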
357 Common English Terms in Geodesy
001 大地测量 geodetic surveying
002 几何大地测量学 geometric geodesy
003 椭球面大地测量学 ellipsoidal geodesy
004 大地天文学 geodetic astronomy
005 物理大地测量学 physical geodesy (also called 大地重力学)
006 空间大地测量学 space geodesy
007 卫星大地测量学 satellite geodesy
008 动力大地测量学 dynamic geodesy
009 海洋大地测量学 marine geodesy
010 月面测量学 lunar geodesy, selenodesy
011 行星测量学 planetary geodesy
012 天文大地网 astro-geodetic network (also called 国家大地网)
013 参考椭球 reference ellipsoid
014 贝塞尔椭球 Bessel ellipsoid
015 海福德椭球 Hayford ellipsoid
016 克拉索夫斯基椭球 Krasovsky ellipsoid
017 参考椭球定位 orientation of reference ellipsoid
018 大地基准 geodetic datum
019 大地坐标系 geodetic coordinate system
020 弧度测量 arc measurement
021 拉普拉斯方位角 Laplace azimuth
022 拉普拉斯点 Laplace point
023 三角测量 triangulation
024 三角点 triangulation point
025 三角锁 triangulation chain
026 三角网 triangulation network
027 图形权倒数 weight reciprocal of figure
028 菲列罗公式 Ferrero's formula
029 施赖伯全组合测角法 Schreiber method in all combinations
030 方向观测法 method of direction observation, method by series
031 测回 observation set
032 归心元素 elements of centring
033 归心改正 correction for centring
034 水平折光差 horizontal refraction error (also called 旁折光差)
035 基线测量 base measurement
036 基线 baseline
037 基线网 base network
038 精密导线测量 precise traversing
039 三角高程测量 trigonometric leveling
040 三角高程网 trigonometric leveling network
041 铅垂线 plumb line
042 天顶距 zenith distance
043 高度角 elevation angle, altitude angle
044 垂直折光差 vertical refraction error
045 垂直折光系数 vertical refraction coefficient
046 国家水准网 national leveling network
047 精密水准测量 precise leveling
048 水准面 level surface
049 高程 height
050 正高 orthometric height
051 正常高 normal height
052 力高 dynamic height
053 地球位数 geopotential number
054 水准点 benchmark
055 水准路线 leveling line
056 跨河水准测量 river-crossing leveling
057 椭球长半径 major radius of ellipsoid
058 椭球扁率 flattening of ellipsoid
059 椭球偏心率 eccentricity of ellipsoid
060 子午面 meridian plane
061 子午圈 meridian
062 卯酉圈 prime vertical
063 平行圈 parallel circle
064 法截面 normal section
065 子午圈曲率半径 radius of curvature in meridian
066 卯酉圈曲率半径 radius of curvature in prime vertical
067 平均曲率半径 mean radius of curvature
068 大地线 geodesic
069 大地线微分方程 differential equation of geodesic
070 大地坐标 geodetic coordinate
071 大地经度 geodetic longitude
072 大地纬度 geodetic latitude
073 大地高 geodetic height, ellipsoidal height
074 大地方位角 geodetic azimuth
075 天文大地垂线偏差 astro-geodetic deflection of the vertical
076 垂线偏差改正 correction for deflection of the vertical
077 标高差改正 correction for skew normals
078 截面差改正 correction from normal section to geodesic
079 大地主题正解 direct solution of geodetic problem
080 大地主题反解 inverse solution of geodetic problem
081 高斯中纬度公式 Gauss mid-latitude formula
082 贝塞尔大地主题解算公式 Bessel formula for solution of geodetic problem
083 高斯-克吕格投影 Gauss-Krüger projection (also called 高斯投影, "Gauss projection")
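Several of the terms above--flattening, eccentricity, and the radii of curvature in the meridian (子午圈曲率半径) and prime vertical (卯酉圈曲率半径)--are linked by standard ellipsoid formulas. A small Python sketch using the Krasovsky ellipsoid (a = 6 378 245 m, 1/f = 298.3) as a worked example; the function name is ours:

```python
import math

# Krasovsky ellipsoid (克拉索夫斯基椭球)
a = 6378245.0          # major radius of ellipsoid, metres
f = 1 / 298.3          # flattening of ellipsoid

e2 = 2 * f - f * f     # first eccentricity squared

def radii_of_curvature(lat_deg):
    """Radii of curvature in the meridian (M) and prime vertical (N)."""
    s2 = math.sin(math.radians(lat_deg)) ** 2
    w = math.sqrt(1 - e2 * s2)
    N = a / w                      # prime vertical (卯酉圈)
    M = a * (1 - e2) / w ** 3      # meridian (子午圈)
    return M, N

M, N = radii_of_curvature(45.0)
print(M, N)  # in metres: M ≈ 6.3675e6, N ≈ 6.3889e6
```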
Dermatology
Urticaria: a common allergic skin disease characterized by symptoms such as wheals, redness, and itching of the skin. Its causes are diverse, including food, medication, and infection.
Presenter: XX, 2024-01-22
Contents
• Skin Overview and Function
• Common skin diseases and diagnosis
• Diagnostic methods and techniques
• Treatment principles and drug
Subcutaneous tissue: located below the dermis, composed of adipocytes and connective tissue, providing support and insulation for the skin.
Skin physiological function
• Protective function: As the largest organ of the human body, the skin has protective effects such as preventing harmful substances from entering, preventing water loss, and resisting ultraviolet radiation.
Dermis: located below the epidermal layer, composed of collagen, elastin, and matrix, providing elasticity and firmness to the skin.
An English Essay Introducing Wuhan University
Introduction
Wuhan University is a prestigious institution of higher learning located in Wuhan, the capital city of Hubei Province in China. Founded in 1893, it is one of the oldest and most respected universities in the country. With its beautiful campus, excellent faculty, and rigorous academic programs, Wuhan University has become a top destination for students seeking a world-class education.

History
Wuhan University has a long and storied history that dates back over a century. It was founded in 1893 as the Ziqiang Institute by Zhang Zhidong, the governor of Hubei and Hunan provinces. The school was later renamed Wuhan University in 1928, and it has since grown into one of the largest and most prestigious universities in China.

Campus
The Wuhan University campus is located in the Wuchang district of Wuhan, and it covers an area of over 3.5 square kilometers. The campus is divided into four main areas: the Main Campus, the South Lake Campus, the Donghu Campus, and the Tongji Medical College. Each area has its own unique features and facilities, but they are all connected by a network of walkways and bike paths.

Academics
English Essay: How to Read
Reading is an essential skill that not only enhances knowledge but also improves cognitive abilities. Here are some effective strategies for reading:

1. Set a Purpose for Reading: Before you start reading, determine why you are reading the material. Are you reading for pleasure, to learn something new, or to complete an assignment?
2. Choose the Right Reading Material: Select books or articles that align with your interests and reading level. This will make the process more enjoyable and less of a chore.
3. Create a Comfortable Reading Environment: Find a quiet and comfortable place where you can focus without distractions. Good lighting is also important to avoid straining your eyes.
4. Preview the Material: Before diving into the text, skim through the headings, subheadings, and any summaries or introductions. This will give you an overview of the content and help you to understand the structure of the material.
5. Use Guided Reading Questions: If you are reading for a specific purpose, such as an exam or a project, use guided questions to focus your reading and ensure you cover all the necessary points.
6. Take Notes: Jot down important points, new vocabulary, or interesting ideas as you read. This will not only help you to remember what you've read but also provide material for further reflection or discussion.
7. Vary Your Reading Speed: Adjust your reading speed according to the material. Skim through less important parts and slow down when you encounter complex ideas or information that requires deeper understanding.
8. Practice Active Reading: Engage with the text by asking questions, making predictions, and summarizing what you've read in your own words. This will help to deepen your understanding and retention of the material.
9. Utilize Visual Aids: Use diagrams, charts, and illustrations to better understand complex concepts. They can provide a visual representation of the information and make it easier to remember.
10. Read Aloud: Sometimes reading aloud can help with comprehension, especially for language learners. It can also improve pronunciation and fluency.
11. Revisit Difficult Sections: If you come across a section that is difficult to understand, don't be afraid to reread it. Take your time to digest the information.
12. Discuss with Others: Talking about what you've read with friends, classmates, or a reading group can provide new insights and deepen your understanding.
13. Build Regular Reading Habits: Make reading a daily habit. The more you read, the better you become at it. Consistency is key to improving your reading skills.
14. Expand Your Vocabulary: Look up unfamiliar words and try to use them in your own sentences. This will not only improve your reading comprehension but also your writing and speaking abilities.
15. Reflect on What You've Read: After finishing a book or an article, take some time to reflect on what you've learned, how it made you feel, and how it might change your perspective on a certain topic.

By incorporating these strategies into your reading routine, you can become a more effective and efficient reader, gaining more from every book or article you read.
English Essay: Attaching Balloons to a Cover
When it comes to attaching balloons to a cover, such as a party invitation or a birthday card, there are several steps and techniques you can follow to ensure a secure and visually appealing result. Here's a detailed guide on how to do it:

1. Choose the Right Balloons: Select balloons that match the theme and color scheme of the cover. Consider the size and material of the balloons, as these will affect how they look and how they adhere to the cover.
2. Prepare the Cover: Ensure the cover is clean and smooth. If it's a card, make sure it's closed and the surface is free of any dust or debris that might prevent the balloons from sticking properly.
3. Select an Adhesive: Use a strong adhesive that is suitable for both the material of the balloons and the cover. For latex or foil balloons, a glue stick or double-sided tape can work well. For heavier or larger balloons, you might need a stronger adhesive like glue dots or even a hot glue gun.
4. Plan the Layout: Before attaching the balloons, decide where you want them on the cover. You might want them in a cluster, in a line, or scattered randomly. Sketch a light outline with a pencil if necessary.
5. Apply the Adhesive: Apply the adhesive to the back of the balloons. If using double-sided tape, cut it into strips and place them on the back of the balloon where you want it to stick to the cover. If using a glue stick or glue dots, apply them in the desired spots.
6. Attach the Balloons: Press the balloons firmly onto the cover, starting from one end and smoothing out any wrinkles or bubbles as you go. Make sure the balloons are aligned according to your planned layout.
7. Secure the Balloons: After attaching the balloons, hold them in place for a few seconds to ensure the adhesive has time to bond. If using a hot glue gun, be careful not to burn your fingers.
8. Add Extra Support: For heavier balloons, or if you want to make sure they stay in place, you can add extra support with clear fishing line or thin ribbon. Attach one end of the line or ribbon to the balloon and the other to the back of the cover.
9. Let It Dry: Allow the adhesive to dry completely. This might take a few minutes to a few hours, depending on the type of adhesive used.
10. Decorate Further: Once the balloons are securely attached, you can add additional decorations like ribbons, confetti, or stickers to enhance the overall look of the cover.
11. Handle with Care: After the cover is decorated, handle it with care to avoid the balloons slipping or the adhesive losing its bond.

By following these steps, you can create a festive and eye-catching cover with balloons that will surely add a touch of joy and celebration to any event.
English-Language Literature on Ground Penetrating Radar and Electrical Methods (1)
Geophysical imaging of shallow subsurface topography and its implication for shallow landslide susceptibility in the Urseren Valley, Switzerland

Stefan Carpentier a,⁎, Markus Konz b,1, Ria Fischer a, Grigorios Anagnostopoulos b, Katrin Meusburger c, Konrad Schoeck b

a Institute of Geophysics, ETH Zürich, Zürich, Switzerland
b Institute of Environmental Engineering, ETH Zürich, Zürich, Switzerland
c Institute for Environmental Geosciences, University of Basel, Basel, Switzerland

Article history: received 15 December 2011; accepted 5 May 2012; available online 11 May 2012.

Keywords: GPR; resistivity; GPS; landslides; hydrology

Abstract

Landslides and soil erosion are an ever-present threat to water management, building construction, vegetation formation and biodiversity in the Swiss Alps. Improved understanding of the mechanics and causative factors of soil erosion is a key factor in mitigation of damage to Alpine natural resources. Recently, much progress has been achieved in the forecasting of landslides on Alpine slopes with a new generation of shallow landslide models. These models perform well in spatial predictions, but temporal control on the occurrence of shallow landslides is less successful. Realistic soil composition and geometry of interfaces are necessary input for better predictions. Geophysical methods have so far not been widely considered to obtain these parameters, in spite of their ability to cover much ground with high resolution. In this study we successfully use such methods to derive adequate subsurface topography as input to dynamic spatially distributed hydrological and soil mechanical models. Trench, GPS, electrical resistivity tomography and ground penetrating radar data were collected, resulting in revealing images of the composition and geometry of past and future landslides. A conceptual model for the occurrence of local shallow landslides is derived, spanning from block-wise steady creep of detaching soil units to rapid sliding and downslope deposition of
soil units via varying sliding planes. Significant topography was observed in the soil interfaces acting as sliding planes, leading to a more complex role of groundwater flow in the initiation of shallow landslides. Hydrogeologic models should be revised accordingly.

© 2012 Elsevier B.V. All rights reserved.

1. Introduction

Landslides and rockslides frequently occur throughout most mountain belts worldwide; the Alps in Switzerland are no exception. These slide events vary in intensity from shallow landslides and gradual surface degradation to mass movements in soil and bedrock. The rockslides at the village of Randa in the Matter Valley in 1991 are perhaps the most well-known events in the Swiss Alps in recent times (Green et al., 2006; Heincke et al., 2006; Spillmann et al., 2007; Willenberg et al., 2008). Whereas the devastating effect of rockslides is evident, the effects of shallow landslides are often more subtle, yet far more abundant. Whole slope ranges in populated Swiss valleys are threatened by swarms of landslides. Soil stability plays an important role in ecological, agricultural, hydrological and economic aspects of life. Realistic forecasting models for shallow landslides are an integral part of their prediction and mitigation. Key static parameters of landslide susceptibility are slope angle, soil composition, soil saturation level and bedrock orientation (Meusburger and Alewell, 2009; Schoeck, 2010). Each of these parameters can be constrained with different methodologies, but until now high-resolution geophysical methods have played a minor role in investigating shallow landslides. In this paper, we advocate the use of such methods by presenting a successful geophysical case study.

First we describe some more conventional methods to obtain static landslide parameters. Slope angle is a property of mountain valleys that can be determined relatively well using GPS, LIDAR, laser altimetry and other remote sensing techniques, resulting in high-resolution digital elevation models (DEMs). Models
with horizontal resolution of up to 2 × 2 m are suitable for modeling soil creep and shallow landslides (Schoeck, 2010). Soil saturation level can be modeled with advanced algorithms for saturated and unsaturated interflow using river catchment data and realistic precipitation input. Many water flow algorithms exist both in the unsaturated and saturated zone: recent ones have been developed by Fahs et al. (2009) and Anagnostopoulos and Burlando (2011). Realistic simulation of the water flow is crucial since the major triggering mechanism for slope failures is the build-up of soil pore water pressure resulting in a decrease in effective stress and strength (Terzaghi, 1943). Furthermore, on steep slopes (as the ones that dominate alpine landscapes) shallow landslides can be triggered by a rapid drop in the apparent cohesion following a decrease in matric suction when a wetting front descends into the soil without generating positive pore pressures (Fredlund et al., 1995; Godt et al., 2009; Lu and Godt, 2008; Rahardjo et al., 1995).

Journal of Applied Geophysics 83 (2012) 46–56. doi:10.1016/j.jappgeo.2012.05.001
⁎ Corresponding author at: T&A Survey, Dynamostraat 48, 1014 BK, Amsterdam, The Netherlands. Tel.: +31 20 665 1368. E-mail address: carpentier@ta-survey.nl (S. Carpentier).
1 Present address: RMS, Risk Management Solutions, London, UK.

Constraints on detailed soil- and bedrock composition are harder to come by. The weak point in predicting shallow landslides is usually the lack of accurate local geological models required for representative water flow modeling and regionalization of soil strength parameters. Most modelers do not have representative 3-D or even 2-D soil models with spatially varying soil-mechanical and hydrological parameters at their disposal, and they assume
parallel-to-slope soil interface topography. In situ and laboratory methods can be used for the determination of soil hydraulic properties and soil shear strength, but these measurements usually represent 1-D samples which provide only limited 3-D support. Geophysical methods do cover large amounts of ground while measuring with high resolution and have the ability to obtain the desired 2-D and 3-D soil interface models. Shallow seismic methods (Büker et al., 1998), ground penetrating radar (GPR; Annan, 2009) and electrical resistivity tomography (ERT; Loke and Barker, 1996) are proven methods that deliver the desired images.

This paper presents results of 2D ERT- and GPR-imaging, trenching, soil sampling and GPS-profiling on the northern slopes of the Urseren Valley near Hospental, Canton Uri, Switzerland (Fig. 1). The target of investigation is twofold: 1) to image and extrapolate the soil- and bedrock composition at past and future shallow landslides above Hospental and 2) to assess how representative current simplified soil-parameterization models for water flow modeling are. After a brief introduction of the site and the regional geology, we discuss the present state of shallow landslide modeling and landslide hazard assessment.

Fig. 1. a) Inset map of Switzerland with location of Urseren Valley indicated. b) Shaded elevation map of Urseren Valley with major towns indicated. The occurrence of shallow landslides in the period 1959–2008 is also shown (red spots), together with the survey site (blue lines in white box). c) Major geological units and source rocks in Urseren Valley (Pfiffner, 2009). The Valley coincides with the Axen thrust fault, along which the gneisses of the Gotthard Massif are thrust on the granites of the Aare Massif. Shallow landslides (black spots) occur mostly on the steep slopes of the Jurassic and Permian formations.

The acquisition and processing of the geophysical datasets (GPS, trenches, GPR and ERT) are
then described, and following this, representative images of all methods are presented in which we interpret mutual subsurface structures. Based on these results, we analyze the local soil composition of shallow landslides and assess the complexity of subsurface topography in soil interfaces along the slopes. Finally, we propose a conceptual model for the occurrence of local shallow landslides and conclude that current simplified hydrogeological models need to take subsurface topography into account.

2. The Urseren Valley

2.1. Regional geology

The Urseren Valley (Fig. 1b) is the earth surface expression of two reinforcing geological processes: a tectonic process and a geomorphological process. In terms of tectonics, two crystalline basements of different genesis are juxtaposed on opposite sides of the major Axen thrust fault which runs through the Urseren Valley (Fig. 1c). The Aare System north of the Axen thrust fault, consisting of the granitoid Aare Massif and pre-existing basement, is overthrust by the southerly Gotthard Massif, a younger gneissic intrusive body (Pfiffner, 2009). Between the crystalline massifs a wedge of younger sediments exists, the Urseren Zone, consisting of Permian schists, Triassic marbles, Jurassic calcareous shales and Quaternary alluvium. Since the Urseren Zone is mechanically softer and additionally suffers from fault zone weakening, it is more susceptible to geomorphological processes like glacial and fluvial erosion. The many glaciations (for example the Riss and Würm periods; Cadisch et al., 1948) and downcutting by the current Furkareuss river have shaped the present Urseren Valley.
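As a brief aside on the GPR method used in this study: a GPR section records two-way traveltimes, which are converted to reflector depth through the radar velocity of the ground. A minimal illustrative sketch (our own, not code from this paper; the permittivity values are assumed typical figures, not site measurements):

```python
C_VACUUM = 0.2998  # speed of light, metres per nanosecond

def gpr_depth(twt_ns, rel_permittivity):
    """Depth of a reflector from GPR two-way traveltime.

    twt_ns: two-way traveltime in nanoseconds.
    rel_permittivity: relative dielectric permittivity of the ground
    (assumed values; e.g. ~25 for wet soil, ~5 for dry sand).
    """
    v = C_VACUUM / rel_permittivity ** 0.5  # radar velocity in the ground
    return v * twt_ns / 2.0                 # halve: down-and-back travel path

# A reflector at 50 ns two-way time in wet soil (eps_r = 25):
print(round(gpr_depth(50, 25), 2))  # 1.5 m
```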
On the slopes of the Urseren Valley, the Urseren Zone is extensively exposed. Shallow landslides on the slopes occur mainly within the bands of Permian schist and Jurassic calcareous shales (Fig. 1b and c; Meusburger and Alewell, 2008). One main causal factor is very likely the soil formation on the weathered products of the schist and shale bedrock: under the influence of water, these bedrocks weather to clay (Blume, 2002). The presence of a clay layer between bedrock and soil, added to significant groundwater flow, is a catalyst for shallow landslides. Geophysical measurements should constrain the extent and geometry of the clay layer for improved landslide modeling.

2.2. Geometry of the Urseren Valley and catchment distribution

Besides the influence of bedrock geology on the formation of shallow landslides, the valley slope angles are another major factor (Fig. 2a). Angles at the bottom of the valley are generally low, but already in the slightly higher part of the valley they quickly increase to 45°. A correlation between slope angle and the occurrence of shallow landslides is evident, and we further imply a causal relation (Meusburger and Alewell, 2009). An extra factor that contributes to this causality is that water streams, tributary to the Furkareuss river, generally gather in the concentrated gully system on steep slopes. The catchment of the Furkareuss consists of the major source of the river, downstream southwest of Realp (Figs. 1b and 2a), and of the many tributaries northeast of Realp. Especially during snow melt and heavy precipitation, the groundwater flow in the valley slopes and resulting soil saturation are considerable.

3. Assessment of shallow landslide susceptibility with current hydrological models

With information on geology, slope angle and water stream density available, two classes of models for shallow landslide occurrence can be made: statistical and physical models. The majority of statistical models are based on either multivariate correlation between mapped landslides and
landscape attributes, or on general associations of landslide occurrences derived from rankings based on slope, lithology, landform or geological structure. A recent logistic regression model by Meusburger and Alewell (2009), based on such environmental predictors, performed spatial predictions of shallow landslides, correctly predicting 81.4% of the shallow landslides in the Urseren Valley in the year 2000.

The model of Meusburger and Alewell (2009) does not include predictions on the timing of shallow landslide events. For accurate timing, a physical model is required that can respond to dynamic rainfall and meltwater inputs. A modification of the TOPKAPI model (Todini and Ciarapica, 2001; Todini and Mazzetti, 2008), developed by the Institute of Environmental Engineering of ETH (Konz et al., 2011), provides the modeling framework that is needed for this type of prediction. The TOPKAPI model originated as a fully distributed rainfall-flow algorithm. Given a certain DEM, soil information and precipitation input, the algorithm kinematically tracks the transport of water flow between neighboring gridcells. The tracking of the water volume balance is done in space and time, from the input water inflow at the highest boundary cells to the outflow at the lowest boundary cells in the grid. An additional shallow landslide routine was implemented in the TOPKAPI model as well (Schoeck, 2010). This routine is based on the infinite slope analysis, with the assumption that the slopes are long and continuous and that the thickness of the unstable material is small compared to the dimensions of the slope (Lambe and Whitman, 1979).
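A susceptibility score from environmental predictors of this kind can be sketched in a few lines. The predictors, coefficients and cell values below are purely illustrative placeholders and do not reproduce the fitted model of Meusburger and Alewell (2009):

```python
import math

# Hypothetical fitted coefficients, for illustration only.
INTERCEPT = -6.0
COEFFS = {"slope_deg": 0.12, "is_schist_or_shale": 1.8, "stream_density": 0.9}

def landslide_probability(cell):
    """Logistic-regression score: P(landslide) = 1 / (1 + exp(-z)),
    where z is a linear combination of the environmental predictors."""
    z = INTERCEPT + sum(COEFFS[k] * cell[k] for k in COEFFS)
    return 1.0 / (1.0 + math.exp(-z))

# Two hypothetical gridcells: a steep schist slope with dense gullies,
# and a gentle slope on crystalline bedrock.
steep_schist = {"slope_deg": 45, "is_schist_or_shale": 1, "stream_density": 2.0}
flat_granite = {"slope_deg": 8, "is_schist_or_shale": 0, "stream_density": 0.5}

print(landslide_probability(steep_schist))  # high susceptibility
print(landslide_probability(flat_granite))  # low susceptibility
```

A map such as the one evaluated against the year-2000 landslide inventory is obtained by scoring every gridcell of the DEM this way and thresholding the probabilities.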
Per gridcell, this routine evaluates the shear stress of the soil and compares it to the shear strength. If the shear stress exceeds the shear strength, that gridcell reports failed soil stability, implying a shallow landslide. The shear conditions are a function of, among others, slope angle, soil composition, soil depth and groundwater pressure head. To estimate the probability of failed soil stability and therefore of shallow landslides, a measure for the susceptibility of the soil in a cell to fail was established. It is defined per gridcell as the number of soil failures that occur over time, divided by the total number of timesteps. This ratio varies between zero and one; in the latter case, an extremely susceptible and thus permanently unstable soil is present. Such a shallow landslide susceptibility map was computed for the Urseren Valley according to the algorithm of Schoeck (2010), and it is displayed in Fig. 2b. This map captured 85.6% of all observed shallow landslides in the year 2000: a marginal improvement with respect to the model of Meusburger and Alewell (2009), which correctly predicts 81.4% of the shallow landslides.

Several assumptions are made for the implementation of the shallow landslide extension to TOPKAPI. It is assumed that a type of soil has an associated set of constant soil properties (porosity, density) per gridcell.
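The per-gridcell stability check can be illustrated with the standard infinite-slope formulation (Lambe and Whitman, 1979); all soil parameter values below are illustrative defaults, not calibrated Urseren Valley properties:

```python
import math

def factor_of_safety(slope_deg, soil_depth_m, pressure_head_m,
                     cohesion_pa=2000.0, phi_deg=30.0,
                     soil_unit_weight=18000.0, water_unit_weight=9810.0):
    """Infinite-slope factor of safety FS = shear strength / shear stress.

    FS = [c' + (gamma*z*cos^2(beta) - u) * tan(phi)]
         / [gamma*z*sin(beta)*cos(beta)]
    A cell fails (shallow landslide) when FS < 1.
    """
    beta = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    normal_stress = soil_unit_weight * soil_depth_m * math.cos(beta) ** 2
    pore_pressure = water_unit_weight * pressure_head_m  # groundwater head
    shear_strength = cohesion_pa + (normal_stress - pore_pressure) * math.tan(phi)
    shear_stress = soil_unit_weight * soil_depth_m * math.sin(beta) * math.cos(beta)
    return shear_strength / shear_stress

def susceptibility(fs_series):
    """Fraction of timesteps with failed stability (FS < 1):
    0 = always stable, 1 = permanently unstable."""
    failures = sum(1 for fs in fs_series if fs < 1.0)
    return failures / len(fs_series)
```

With these placeholder parameters, a dry 45° slope with 1.5 m of soil already fails, a dry 20° slope does not, and a rising groundwater pressure head pushes any slope toward failure, which is why dynamic rainfall and meltwater inputs matter for timing.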
Another assumption is that the soil-bedrock interface is always parallel to the surface, being essentially 1D, and that soil depth is constant per gridcell. Especially the latter assumption is a strong one, and it is probably the reason that the timing of the shallow landslides is inaccurate in the current shallow landslide algorithm. The geophysical images resulting from our geophysical survey can confirm or disprove the assumption of a 1D, constant, parallel-to-surface topography of the soil layer.

4. Geophysical measurements

4.1. Data acquisition

During geophysical surveys on the valley slopes above Hospental in the summers of 2006, 2009 and 2010, independent measurements of ERT, GPS, GPR and trench data were made. In Figs. 1 and 2 the survey location in the Urseren Valley is indicated. One ERT profile, 36 GPR profiles, 6 trenches and associated GPS coordinates spanned a survey area of about 250 m by 200 m. Several past shallow landslides are visibly present in this area, and the majority of profiles also crossed a number of suspected soon-to-fail shallow landslides. Fig. 3 shows an example of a GPR measurement in progress at the top of a past shallow landslide on a ±40° slope. The GPR antennas were mounted on a plastic sled in a parallel broadside configuration.

S. Carpentier et al. / Journal of Applied Geophysics 83 (2012) 46–56

ERT and GPR profiles of 20–140 m could be collected throughout the survey area, except at places with dense bush coverage (see Figs. 3 and 4). The ERT data were collected with a Syscal Kid system, the GPR data were collected with a PulseEKKO Pro system, and the GPS data used Novatel equipment. Trenches were dug on a past landslide, next to a series of closely spaced parallel GPR profiles upslope. Detailed acquisition parameters for the ERT and GPR data are listed in Table 1. Because of the challenging terrain and the high-resolution nature of the data, precise and absolute coordinates are required for every measurement to perform proper data processing and to achieve a combined
interpretation. The absolute locations and labels of all the profiles are displayed in Fig. 4: a photo of the slopes, past shallow landslides and vegetation gives context to the profiles in Fig. 4a, whereas elevation contours, precise locations and labels are drawn in Fig. 4b.

4.2. Data processing

The multitude of geophysical methods required custom data processing steps. GPS data were post-processed by combining differential solutions of a remote measuring unit and a base station. This delivered absolute coordinates with ±3 cm precision (Fig. 4). ERT data from profile e1 (Fig. 4b) underwent initial quality control (removing noisy and poorly coupled electrode measurements), after which the data and geometry were input to a tomographic inversion algorithm, part of the RES2Dinv software package (Loke and Barker, 1996). After sufficient inversion iterations, the root-mean-square (RMS) data misfit dropped to a minimum of 5.7%.

The GPR reflection data were acquired at two dominant frequencies, 100 MHz (profiles a1–a20, Fig. 4b) and 250 MHz (profiles b1–b16, Fig. 4b). The GPR data were subjected to processing algorithms very similar to those in seismic reflection imaging.

[Fig. 2. a) Slope-angle map of Urseren Valley. Note that shallow landslides (black spots) occur exclusively on steep (>40°) slopes, but not all steep slopes cause shallow landslides. The underlying hydrology is of paramount influence on shallow landslide activity. b) Shallow landslide susceptibility map as derived by Schoeck (2010). Observed shallow landslides are spatially well predicted by the susceptibility model, which combines slope information with river catchment analysis and detailed hydrological modeling. Temporal prediction is so far less successful.]

A dewow (high-pass, median) filter was applied to the recorded GPR traces to compensate for the signal saturation on the antennas, after which a time-zero correction put the onset of the signals at the theoretical start
time. The signal attenuation by geometrical spreading and by conductive ground was compensated by amplitude scaling, in this case an automatic gain control (AGC) with 20 and 10 ns windows for the 100 and 250 MHz data respectively. An Ormsby bandpass filter with corner frequencies of 30–50–110–130 MHz (100–120–300–350 MHz for the 250 MHz data) removed noise outside of the source frequency spectrum, and a spatial median filter along the traces removed reverberation artifacts. Through trace equalization, subtle trace-by-trace variations in recording gain, caused by varying antenna coupling along the profile, could be corrected. This enhanced coherency benefited the performance of the subsequent phase-shift migration (Gazdag, 1978), which repositioned reflected energy in the GPR section to its true reflection point. The last step in the processing scheme was time-to-depth conversion of the migrated section. Using an average radar-wave velocity of 0.1 m/ns, representative for loamy sand at our survey site, the final GPR depth images were generated.

Lastly, six trenches (T1–T6, Fig. 4b) were dug next to two representative GPR profiles, a9 and a8. These trenches had a width and depth of about 1 m and 2.5 m respectively, and they were photographed and categorized into soil layers and associated soil composition.

5. Results and interpretation

5.1. Trench sections

From the six trenches, detailed observations concerning depth, interfaces and other parameters of the soil were gathered and used as constraints for further interpretations. Examples of trench results, in the form of photos, can be seen in Fig. 5. There, the complete soil column up to the top of the bedrock is sampled. Five main units could be consistently classified in the trenches, from top to bottom: 1) a ±0.7 m loamy sand layer, 2) a ±0.05 m xenolithic schist layer, 3) 1.5 m of clayey sand, 4) a ±0.1 m pure clay layer and 5) fractured bedrock. Yellow annotation denotes the five units in Fig. 5, and black arrows mark the continuous clay layer in
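Minimal sketches of three of these steps (dewow, AGC scaling and time-to-depth conversion at v = 0.1 m/ns) are shown below. The running-mean dewow and the window lengths are simplifications of the processing actually used, which is median-based:

```python
import math

V_RADAR = 0.1  # m/ns, average radar-wave velocity assumed for loamy sand

def dewow(trace, window=11):
    """Remove the low-frequency 'wow' from antenna saturation by
    subtracting a running mean (a simple high-pass filter)."""
    half = window // 2
    return [s - sum(trace[max(0, i - half):i + half + 1])
                / len(trace[max(0, i - half):i + half + 1])
            for i, s in enumerate(trace)]

def agc(trace, window=21):
    """Automatic gain control: divide each sample by the local RMS
    amplitude so late, attenuated arrivals become visible."""
    half = window // 2
    out = []
    for i, s in enumerate(trace):
        seg = trace[max(0, i - half):i + half + 1]
        rms = math.sqrt(sum(x * x for x in seg) / len(seg))
        out.append(s / rms if rms > 0 else 0.0)
    return out

def twt_to_depth(twt_ns, v=V_RADAR):
    """Two-way traveltime to depth: the wave travels down and back,
    so depth = v * t / 2."""
    return v * twt_ns / 2.0

# The clay-layer reflection at ~2 m depth corresponds to ~40 ns:
print(twt_to_depth(40.0))  # -> 2.0 m
```

The same two-way-time relation explains the reduced depth penetration where the conductive clay layer shallows: less of the 100–250 MHz energy survives to return from below it.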
the neighboring trenches (insets). The xenolithic schist layer, clay layer and fractured bedrock are all major soil interfaces and are known to act as hydrological boundaries. Depths to these interfaces were seen to vary on the order of 0.5 m from trench to trench, already indicating significant subsurface topography. These subsurface undulations are independent of the strong surface topography, including bumps and cracks close to the trenches. At the time of excavation, no water or high levels of moisture were observed in any of the trenches, but conditions were dry at the time. No signs of organic material were present either, probably due to the nature of the eroded source rock and because of the active recycling of soil on the slopes.

Together, these observations point to a highly dynamic layer of soil on top of the bedrock. Soil appears to be in steady motion, leading to compressed and dilated zones in the soil layer, expressed as bumps and cracks at the surface. The soil appears devoid of organic material, allowing relatively fast drainage of water through the sandy layers in case of flooding by rain or melt water. The subsurface topography, especially that of the clay layer, is expected to have a large impact on this drainage, as the clay acts as an impermeable layer which defines the maximum penetration depth of percolating water. Thus, it is critical for over-pressure distributions in the soil layer above.

5.2. GPR profiles

5.2.1. GPR profiles at top of slope

Soil interface observations in our trenches can at best be considered 1D depth sections. Both the GPR and ERT measurements image contrasts in the electric properties of the soil over the whole spatial extent of the suspected shallow landslides. The schist layer, clay layer and fractured bedrock as observed in the trenches should show up as major dielectric contrasts. Fig. 6 confirms this with 250 MHz GPR section b3 (Fig. 4b), which underwent the data processing sequence described in Section 4. Several strong and
continuous GPR reflections can be traced through the section (Fig. 6a) at depths below the surface of ~1 m, ~2 m and ~2.5 m. The shallowest continuous reflection (~1 m) is also the weakest of the three. A very strong and continuous reflection appears at ~2 m depth, and below that a relatively strong but less continuous reflection occurs (~2.5 m). We presume that the upper reflection represents the xenolithic schist layer (green line, Fig. 6b), being a thin and low-contrast layer. The pure clay layer is thick and well-defined in all trenches, making for a large and consistent contrast, as the second reflection indicates (yellow line). Fractured bedrock, as seen in the trenches, is a large dielectric contrast as well, but the rough surface of this interface causes a less continuous reflection, like the one at ±2.5 m depth (black line). In Fig. 6b the significant topography of these interfaces, independent of the surface topography, becomes evident. A general trend of shallowing interfaces downslope is present, resulting in decreased GPR depth penetration due to the shallowing pure clay layer.

Another prominent feature is the interruption of the direct wave, the very first arrival in the GPR section. This wave, traveling along the soil surface directly between source and receiver, is discontinuous at X = 15 m. An outtake from the GPR section at this location, with its elevation correction removed (black box in Fig. 6a), shows the break in the direct wave more clearly (within the black ellipse). During acquisition along the b3 profile, a hint of a surface crack was already present at this location. Such a surface crack constitutes an air-filled cavity which acts as a total reflector for the direct wave. The break in the direct wave was consistently crossed by all profiles b1–b7 (Fig. 4b), suggesting that the crack is the upper boundary of a potentially unstable patch of soil.

Below bedrock depth, high variability in reflection strength is visible. Truncations in reflectivity between X = 1–4 m and X = 10–11 m
(Fig. 6b) indicate strong heterogeneity in the bedrock, which is not a usual phenomenon. The unmigrated GPR sections, both 100 and 250 MHz (not shown here), feature an abundance of diffraction hyperbolas at these locations. Given that the structural dip of the local Permian schistose bedrock is near-vertical (Wyss, 1986), we are most likely looking at interbedding within the bedrock.

GPR section b5 (Fig. 7b), further to the northeast (Fig. 4b), features the same interpreted horizons and interrupted direct wave as section b3. All three soil interfaces run deeper in profile b5 than in b3, but have an identical shallowing trend downslope, including the reduced GPR depth penetration. Indications for truncated reflectivity below the bedrock are weaker, but are enough to interpret as vertical internal beds.

[Fig. 3. Photo of ground penetrating radar acquisition setup. The steep slopes posed a significant acquisition challenge, here for a 100 MHz GPR profile. With the help of ~3 cm precise differential GPS positioning, ultra-high-resolution GPR profiles could be produced.]

5.2.2. GPR profiles at base of slope

An additional set of GPR profiles was collected at the base of the shallow landslide area, where the slopes are much less steep. The subsurface reflectivity is very different at this location, as demonstrated by 250 MHz GPR section b16 (Fig. 4b) in Fig. 8. Reflection strength, continuity and depth penetration have markedly increased, allowing for the imaging of strongly curved horizons close to the surface at the far base of the slopes. We observe subhorizontal, banded reflectivity which points to either well-developed strata in the bedrock or to stacked colluvial/shallow landslide units. Due to the dominant vertical dip of the local Permian schistose bedrock (Wyss, 1986) we rule out bedrock strata, which leaves soil deposits as the most likely candidate. Whether these soil deposits are past shallow landslides or long-term colluvial
deposits cannot be determined from these data alone. Their curvature and an offset truncation at X = 17 m suggest strong compressional deformation of the layers, which we have interpreted with an overthrust fault (Fig. 8b). This process can be part of overall valley bulging at the base of the valley.

5.3. ERT profile and overlapping GPR profile

Our study area is bounded upslope to the west by ERT profile e1 (Fig. 4b). We will discuss the area between e1 and GPR profile a2 through a combined interpretation of these two profiles. When comparing these

[Fig. 4. a) Photo of survey site, taken from the opposite (SE) side of the Urseren Valley. Profiles were acquired at representative positions in the survey area, with focus on the past shallow landslide 'S' at the center right and on the base of the slopes at the center bottom. White contour lines denote elevation. b) Detailed plan view of GPR profiles, ERT profile and trenching locations. Labels denote the profiles as they are mentioned in the later figures and text. Note the dense GPR sampling of the past shallow landslide 'S' by lines b1–b11 and a8–a14. White contour lines denote elevation.]
package geometry — translation

"Geometry" translates into Chinese as 几何学.
Geometry is a branch of mathematics that studies space, shape, size, relative position and their properties.
It deals with geometric elements such as points, lines, surfaces and solids, and studies the relationships and properties among them.
Geometry has many practical applications in daily life, such as architectural design, map making and computer graphics.
Geometry contains many basic concepts and theorems.
For example, a point is the most basic element in geometry; it has no size or shape, only position.
A line is made up of infinitely many points; it is one-dimensional and has no width.
A surface is made up of infinitely many lines; it is two-dimensional, with width and length.
A solid is made up of infinitely many surfaces; it is three-dimensional, with width, length and height.
Common concepts in geometry include the straight line, ray, line segment, angle, circle, triangle, quadrilateral and polygon.
Among these, a straight line consists of infinitely many points and has no start or end point; a ray has a start point and extends to infinity; a line segment has a start point and an end point and a finite length.
An angle is formed by two rays sharing a common endpoint, and is measured in degrees or radians.
A circle is the set of points equidistant from its center; the radius is the line segment connecting the center to any point on the circle.
Geometry also has a number of important theorems and formulas.
For example, the Pythagorean theorem states that in a right triangle, the sum of the squares of the two legs equals the square of the hypotenuse; and if two lines are cut by a transversal such that the co-interior angles sum to 180 degrees, the two lines are parallel.
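The Pythagorean theorem can be verified with a short computation, for example in Python:

```python
import math

def hypotenuse(a, b):
    """Pythagorean theorem: c^2 = a^2 + b^2, so c = sqrt(a^2 + b^2)."""
    return math.sqrt(a * a + b * b)

# The classic 3-4-5 right triangle:
print(hypotenuse(3, 4))  # -> 5.0
```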
Geometry has many uses in practical applications.
For example, in architectural design, geometry helps designers determine the shape and size of a building and ensure the stability of its structure.
In map making, geometry can be used to determine the coordinates of geographic locations and to measure the distance between two places.
In computer graphics, geometry is used to describe and render three-dimensional models and shapes.
Here are some terms with English–Chinese example sentences:

1. 点 — point
- The coordinates of a point in a two-dimensional plane are represented by an ordered pair. (二维平面上一个点的坐标由一个有序对表示。)

2. 线 — line
- A line is made up of infinitely many points and extends in both directions. (一条线由无数个点组成，在两个方向上延伸。)