Translated Foreign Literature on Image Segmentation and Image Preprocessing (Chinese-English Parallel Texts)

Foreign Literature Translation References for Digital Image Processing

(The document contains the English original and its Chinese translation.)

Original text:

Application of Digital Image Processing in the Measurement of Casting Surface Roughness

Abstract: This paper presents a surface image acquisition system based on digital image processing technology. The image acquired by a CCD camera is pre-processed through image editing, image equalization, image binarization and feature parameter extraction to measure the casting surface roughness. A three-dimensional evaluation method based on the extracted feature parameters is used to obtain the evaluation parameters of the casting surface roughness. An automatic detection interface for casting surface roughness is compiled in MATLAB, which provides a solid foundation for the online, fast detection of casting surface roughness based on image processing technology.

Keywords: casting surface; roughness measurement; image processing; feature parameters

I. INTRODUCTION

Nowadays the demands on the quality and surface roughness of machined parts have increased greatly, and machine vision inspection based on image processing has become one of the hotspots of measurement technology in the mechanical industry, owing to advantages such as non-contact operation, high speed, suitable precision and strong resistance to interference [1,2]. Because the casting surface is irregular and its roughness spans a wide range, detection parameters related only to the height direction cannot meet the current requirements of photoelectric technology; the horizontal spacing of the roughness also requires a quantitative representation. Therefore, with a three-dimensional evaluation system for casting surface roughness as the goal [3,4], a surface roughness measurement method based on image processing technology is presented. Image preprocessing is carried out through image enhancement and image binarization, and a three-dimensional roughness evaluation based on the feature parameters is performed. An automatic detection interface for casting surface roughness is compiled in MATLAB, which provides a solid foundation for the online, fast detection of casting surface roughness.

II. CASTING SURFACE IMAGE ACQUISITION SYSTEM

The acquisition system is composed of a sample carrier, a microscope, a CCD camera, an image acquisition card and a computer. The sample carrier holds the castings under test. According to the experimental requirements, either a fixed carrier can be selected and the sample position adjusted manually, or the specimen can be fixed and the position of the sampling stage changed. Figure 1 shows the whole processing procedure. First, the casting under test should be placed against an illuminated background as far as possible; then, after adjusting the optical lens and setting the CCD camera resolution and exposure time, the pictures collected by the CCD are saved to computer memory through the acquisition card. Image preprocessing and feature value extraction on the casting surface with the corresponding software follow. Finally, the detection result is output.

III. CASTING SURFACE IMAGE PROCESSING

Casting surface image processing includes image editing, equalization, image enhancement and image binarization. The original and clipped images of the measured casting are given in Figure 2, in which (a) presents the original image and (b) shows the clipped image.

A. Image Enhancement

Image enhancement is a processing method that highlights certain image information according to specific needs while weakening or removing unwanted information [5]. In order to obtain a clearer contour of the casting surface, equalization of the image, namely correction of the image histogram, should be performed before image segmentation. Figure 3 shows the original grayscale image, the equalized image and their histograms. As shown in the figure, after gray-level equalization each gray level of the histogram has substantially the same number of pixels and the histogram becomes flatter. The image appears clearer after the correction and the contrast of the image is enhanced.

Fig. 2 Casting surface image
Fig. 3 Equalization processing image

B. Image Segmentation

Image segmentation is, in essence, a process of pixel classification, and thresholding is a very important segmentation technique. The optimal threshold is obtained through the instruction thresh = graythresh(II). Figure 4 shows the binarized image. The black areas of the image display the portion of the contour with gray values less than the threshold (0.43137), while the white areas show gray values greater than the threshold. The shadows and shading that emerge in the bright region may be caused by noise or surface depressions.

Fig. 4 Binary conversion
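The preprocessing chain described in this section (equalization followed by automatic thresholding) can be sketched in a few lines. The paper's own interface is written in MATLAB (graythresh implements Otsu's method); the Python/OpenCV version below is only an illustration of the same steps, and the file name is a placeholder.

```python
import cv2

img = cv2.imread("casting_surface.png", cv2.IMREAD_GRAYSCALE)  # placeholder path

# Histogram equalization: flattens the gray-level histogram so the
# surface contour stands out (the correction shown in Fig. 3).
eq = cv2.equalizeHist(img)

# Automatic threshold via Otsu's method, playing the role of MATLAB's
# graythresh; black pixels lie below the threshold, white above (Fig. 4).
t, binary = cv2.threshold(eq, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print("threshold:", t / 255.0)  # graythresh reports on a 0..1 scale
```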
IV. ROUGHNESS PARAMETER EXTRACTION

In order to detect the surface roughness, it is necessary to extract the feature parameters of roughness. The histogram mean and variance are parameters used to characterize the texture size of the surface contour, while the peak area per unit surface is a parameter that reflects the roughness of the workpiece in the horizontal direction, and the kurtosis parameter characterizes the roughness in both the vertical and the horizontal direction. Therefore, this paper establishes the histogram mean and variance, the peak area per unit surface and the kurtosis as the roughness evaluation parameters for the three-dimensional assessment of castings. An image preprocessing and feature extraction interface is compiled in MATLAB. Figure 5 shows the detection interface for surface roughness. Image preprocessing of the clipped casting image, including image filtering, image enhancement, image segmentation and histogram equalization, can be carried out with this software, and the extracted evaluation parameters of surface roughness can also be displayed.

Fig. 5 Automatic roughness measurement interface
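A minimal sketch of the statistical evaluation parameters named above (histogram mean, variance and kurtosis), assuming a preprocessed 8-bit grayscale image; the paper gives no formula for the "peak area per unit surface", so it is omitted here.

```python
import numpy as np
from scipy.stats import kurtosis

def roughness_features(gray):
    """gray: preprocessed 8-bit grayscale casting image as a 2-D array."""
    v = gray.astype(np.float64).ravel()
    hist_mean = v.mean()                   # histogram mean: overall gray level
    hist_var = v.var()                     # variance: texture size of the contour
    steepness = kurtosis(v, fisher=False)  # kurtosis: the "steepness" parameter
    return hist_mean, hist_var, steepness
```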
V. CONCLUSIONS

This paper investigates a casting surface roughness measurement method based on digital image processing technology. The method is composed of image acquisition, image enhancement, image binarization and the extraction of the characteristic roughness parameters of the casting surface. The interface for image preprocessing and the extraction of roughness evaluation parameters is compiled in MATLAB, which provides a solid foundation for the online, fast detection of casting surface roughness.

REFERENCES
[1] Xu Deyan, Lin Zunqi. The optical surface roughness research progress and direction [J]. Optical Instruments, 1996, 18(1): 32-37.
[2] Wang Yujing. Turning surface roughness based on image measurement [D]. Harbin: Harbin University of Science and Technology.
[3] Bradley C. Automated surface roughness measurement [J]. The International Journal of Advanced Manufacturing Technology, 2000, 16(9): 668-674.
[4] Li Chenggui, Li Xingshan, Qiang Xifu. 3D surface topography measurement method [J]. Aerospace Measurement Technology, 2000, 20(4): 2-10.
[5] Liu He. Digital image processing and application [M]. China Electric Power Press, 2005.

Translation: Application of Digital Image Processing in the Measurement of Casting Surface Roughness. Abstract: This paper presents a surface image acquisition system based on digital image processing technology.

Foreign Literature Translation and Originals for a Computer Science and Technology Graduation Thesis, Including Image Segmentation Using Threshold Techniques

Graduation Design (Thesis) Foreign Literature Translation

Chinese titles of the translated materials: 1. Image Segmentation Using Threshold Techniques; 2. A Review of Image Segmentation with the Maximum Between-Class Variance (Otsu) Algorithm
Major: Computer Science and Technology
Date of translation: 2017.02.14
Graduation design (thesis) topic: Development of Automatic Image Segmentation Software Based on a Genetic Algorithm
Translated paper (1): Image Segmentation by Using Threshold Techniques
Translated paper (2): A Review on Otsu Image Segmentation Algorithm

Image Segmentation Using Threshold Techniques

Abstract

This paper studies image segmentation through five thresholding techniques, namely the mean method, the P-tile algorithm, the histogram dependent technique (HDT), the edge maximization technique (EMT) and a visual technique, comparing them with one another in order to select the best technique for threshold-based image segmentation.

These techniques are applied to three satellite images, chosen to provide initial estimates of the segmentation threshold.
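Two of the five techniques named in the abstract admit very short implementations; a sketch follows, where the function names and the prior object fraction p are illustrative assumptions, not part of the translated paper.

```python
import numpy as np

def mean_threshold(gray):
    """Mean method: the threshold is the average gray level."""
    return gray.mean()

def p_tile_threshold(gray, p):
    """P-tile method: pick the threshold so a known fraction p of the
    pixels (the assumed object share, e.g. 0.3) falls below it."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    cdf = np.cumsum(hist) / gray.size
    # smallest gray level whose cumulative share reaches p
    return int(np.searchsorted(cdf, p))
```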

Keywords: image segmentation, threshold, automatic thresholding

1 Introduction

Segmentation algorithms are based on one of two basic properties of intensity values: discontinuity and similarity.

The first category partitions an image based on abrupt changes in intensity, such as the edges in an image.

The second category partitions an image into regions that are similar according to a set of predefined criteria.

Histogram thresholding methods fall into this category.

This paper studies the second category (threshold techniques), and in this context gives a brief introduction to the related research.

Threshold segmentation techniques can be divided into three different classes. First, local techniques are based on the local properties of pixels and their neighborhoods.

Second, global techniques segment the image using global information about the image (obtained, for example, through the image histogram or global texture properties).

Third, split, merge and region-growing techniques use the notions of homogeneity and geometrical proximity together in order to obtain good segmentation results.

Finally, image segmentation, in the field of image analysis, is commonly used to partition pixels into regions in order to determine the composition of an image [1][2].

They proposed an adaptive thresholding method based on a two-dimensional (2-D) histogram and multiresolution analysis (MRA), which reduces the complexity of computing the 2-D histogram while improving the search accuracy of the multiresolution thresholding method.

Such methods derive from the efficiency and flexibility with which multiresolution thresholding methods search for the threshold through gray levels and spatial correlation, and from the remarkable segmentation results achieved by 2-D histogram thresholding methods.
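The 2-D histogram methods reviewed here extend the classical 1-D Otsu criterion, which the second translated paper surveys. For reference, a minimal sketch of that 1-D criterion (choose the threshold maximizing the between-class variance); variable names are assumptions:

```python
import numpy as np

def otsu_threshold(gray):
    """Classical 1-D Otsu: maximize between-class variance over thresholds."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()                 # gray-level probabilities
    omega = np.cumsum(p)                  # class-0 probability up to each level
    mu = np.cumsum(p * np.arange(256))    # cumulative mean
    mu_t = mu[-1]                         # global mean
    # between-class variance for every candidate threshold; 0/0 gives NaN
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b))
```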

Translated Foreign Literature on Image Recognition (Chinese-English Parallel Text)

(The document contains the English original and its Chinese translation.)

Elastic Image Matching

Abstract

One fundamental problem in image recognition is to establish the resemblance of two images. This can be done by searching for the best pixel-to-pixel mapping subject to monotonicity and continuity constraints. We show that this problem is NP-complete by reduction from 3-SAT, thus giving evidence that the known exponential-time algorithms are justified, but that approximation algorithms or simplifications are necessary.

Keywords: Elastic image matching; Two-dimensional warping; NP-completeness

1. Introduction

In image recognition, a common problem is to match two given images, e.g. when comparing an observed image to given references. In that process, elastic image matching, two-dimensional (2D) warping (Uchida and Sakoe, 1998) or similar types of invariant methods (Keysers et al., 2000) can be used. For this purpose, we can define cost functions depending on the distortion introduced in the matching and search for the best matching with respect to a given cost function. In this paper, we show that it is an algorithmically hard problem to decide whether a matching between two images exists with costs below a given threshold. We show that the image matching problem is NP-complete by means of a reduction from 3-SAT, which is a common method of demonstrating a problem to be intrinsically hard (Garey and Johnson, 1979). This result shows the inherent computational difficulty of this type of image comparison, while interestingly the same problem is solvable for 1D sequences in polynomial time, e.g. the dynamic time warping problem in speech recognition (see e.g. Ney et al., 1992). This has the following implications: researchers who are interested in an exact solution to this problem cannot hope to find a polynomial-time algorithm, unless P = NP. Furthermore, one can conclude that the exponential-time algorithms presented and extended by Uchida and Sakoe (1998, 1999a,b, 2000a,b) may be justified for some image matching applications. On the other hand, this shows that those interested in faster algorithms, e.g. for pattern recognition purposes, are right to search for sub-optimal solutions. One method of doing this is the restriction to local optimizations or linear approximations of global transformations as presented in (Keysers et al., 2000). Another possibility is to use heuristic approaches like simulated annealing or genetic algorithms to find an approximate solution. Furthermore, methods like beam search are promising candidates, as these are used successfully in speech recognition, although linguistic decoding is also an NP-complete problem (Casacuberta and de la Higuera, 1999).

2. Image matching

Among the variety of matching algorithms, we choose the one presented by Uchida and Sakoe (1998) as a starting point to formalize the image matching problem. Let the images be given as (without loss of generality) square grids of size M×M with gray values (respectively node labels) from a finite alphabet Γ = {1,…,G}. To define the problem, two distance functions are needed: one acting on gray values, d_g : Γ×Γ → ℕ, measuring the match in gray values, and one acting on displacement differences, d_d : ℤ×ℤ → ℕ, measuring the distortion introduced by the matching. For these distance functions we assume that they are monotonic functions (computable in polynomial time) of the commonly used squared Euclidean distance, i.e. d_g(g_1, g_2) = f_1(‖g_1 − g_2‖²) and d_d(z) = f_2(‖z‖²) with f_1, f_2 monotonically increasing.
Now we call the following optimization problem the image matching problem (let μ = {1,…,M}).

Instance: A pair (A, B) of two images A and B of size M×M.
Solution: A mapping function f : μ×μ → μ×μ.
Measure:

  c(A,B,f) = Σ_{(i,j) ∈ μ×μ} d_g(A_{ij}, B_{f(i,j)})
           + Σ_{(i,j) ∈ {1,…,M−1}×μ} d_d( f((i,j)+(1,0)) − (f(i,j)+(1,0)) )
           + Σ_{(i,j) ∈ μ×{1,…,M−1}} d_d( f((i,j)+(0,1)) − (f(i,j)+(0,1)) )

Goal: min_f c(A,B,f).

In other words, the problem is to find the mapping from A onto B that minimizes the distance between the mapped gray values together with a measure for the distortion introduced by the mapping. Here, the distortion is measured by the deviation from the identity mapping in the two dimensions. The identity mapping fulfills f(i,j) = (i,j), and therefore f((i,j)+(x,y)) = f(i,j)+(x,y).

The corresponding decision problem is fixed by the following question: Given an instance of image matching and a cost c′, does there exist a mapping f such that c(A,B,f) ≤ c′?

In the definition of the problem some care must be taken concerning the distance functions. For example, if either one of the distance functions is a constant function, the problem is clearly in P (for d_g constant, the minimum is given by the identity mapping, and for d_d constant, the minimum can be determined by sorting all possible matchings for each pixel by gray-value cost and mapping to one of the pixels with minimum cost). But these special cases are not those we are concerned with in image matching in general.

We choose the matching problem of Uchida and Sakoe (1998) to complete the definition of the problem. Here, the mapping functions are restricted by continuity and monotonicity constraints: the deviations from the identity mapping may locally be at most one pixel (i.e. limited to the eight-neighborhood with squared Euclidean distance less than or equal to 2). This can be formalized in this approach by choosing the functions f_1, f_2 as, e.g.,

  f_1 = id,   f_2(x) = step(x) := 0 for x ≤ 2, and 10·G·M² for x > 2,

so that any displacement outside the eight-neighborhood is prohibitively expensive.

3. Reduction from 3-SAT

3-SAT is a very well-known NP-complete problem (Garey and Johnson, 1979), defined as follows:

Instance: A collection of clauses C = {c_1,…,c_K} on a set of variables X = {x_1,…,x_L} such that each c_k consists of 3 literals, for k = 1,…,K. Each literal is a variable or the negation of a variable.
Question: Is there a truth assignment for X which satisfies each clause c_k, k = 1,…,K?

The dependency graph D(Φ) corresponding to an instance Φ of 3-SAT is defined to be the bipartite graph whose independent sets are formed by the set of clauses C and the set of variables X. Two vertices c_k and x_l are adjacent iff c_k involves x_l or ¬x_l. Given any 3-SAT formula Φ, we show how to construct in polynomial time an equivalent image matching problem I(Φ) = (A(Φ), B(Φ)). The two images of I(Φ) are similar according to the cost function (i.e. there exists f with c(A(Φ), B(Φ), f) ≤ 0) iff the formula Φ is satisfiable. We perform the reduction from 3-SAT using the following steps:

• From the formula Φ we construct the dependency graph D(Φ).
• The dependency graph D(Φ) is drawn in the plane.
• The drawing of D(Φ) is refined to depict the logical behaviour of Φ, yielding two images (A(Φ), B(Φ)).

For this, we use three types of components: one component to represent variables of Φ, one component to represent clauses of Φ, and components which act as interfaces between the former two types. Before we give the formal reduction, we introduce these components.
3.1. Basic components

For the reduction from 3-SAT we need five components from which we will construct the instances for image matching, given a Boolean formula in 3-DNF, respectively its graph. The five components are the building blocks needed for the graph drawing and will be introduced in the following, namely the representations of connectors, crossings, variables, and clauses. The connectors represent the edges and have two varieties, straight connectors and corner connectors. Each of the components consists of two parts, one for image A and one for image B, where blank pixels are considered to be of the 'background' color.

We will depict possible mappings in the following using arrows indicating the direction of displacement (where displacements within the eight-neighborhood of a pixel are the only cases considered). Blank squares represent mapping to the respective counterpart in the second image. Some displacements of neighboring pixels can be used with zero cost, while others result in costs greater than zero.

Fig. 1 shows the first component, the straight connector component, which consists of a line of two different interchanging colors, here denoted by the two symbols ◇ and □. Given that the outside pixels are mapped to their respective counterparts and the connector is continued infinitely, there are two possible ways in which the colored pixels can be mapped, namely to the left (i.e. f(2,j) = (2,j−1)) or to the right (i.e. f(2,j) = (2,j+1)), where the background pixels have different possibilities for the mapping, not influencing the main property of the connector. This property, which justifies the name 'connector', is the following: it is not possible to find a mapping which yields zero cost where the relative displacements of the connector pixels are not equal, i.e. one always has f(2,j) − (2,j) = f(2,j′) − (2,j′), which can easily be observed by induction over j′. That is, given an initial displacement of one pixel (which will be ±1 in this context), the remaining end of the connector has the same displacement if the overall costs of the mapping are zero. Given this property and the direction of a connector, which we define to be directed from variable to clause, we can define the state of the connector as carrying the 'true' truth value if the displacement is 1 pixel in the direction of the connector, and as carrying the 'false' truth value if the displacement is −1 pixel in the direction of the connector. This property then ensures that the truth value transmitted by the connector cannot change at mappings of zero cost.

Fig. 1. The straight connector component with two possible zero-cost mappings.

For the drawing of arbitrary graphs, clearly one also needs corners, which are represented in Fig. 2. By considering all possible displacements which guarantee overall cost zero, one can observe that the corner component also ensures the basic connector property. For example, consider the first depicted mapping, which has zero cost. On the other hand, the second mapping shows that it is not possible to construct a zero-cost mapping with both connectors 'leaving' the component. In that case, the pixel at the position marked '?' either has a conflict (that is, introduces a cost greater than zero in the criterion function because of a mapping mismatch) with the pixel above or to the right of it, if the same color is to be met; otherwise, a cost in the gray-value mismatch term is introduced.
Fig. 2. The corner connector component and two example mappings.

Fig. 3 shows the variable component, in this case with two positive outputs (to the left) and one negated output (to the right) leaving the component as connectors. Here, a fourth color is used, denoted by ·. This component has two possible zero-cost mappings of the colored pixels, which map the vertical component of the source image to the left or the right vertical component in the target image, respectively. (In both cases the second vertical element in the target image is not a target of the mapping.) This ensures ±1 pixel relative displacements at the entry to the connectors. This property again can be deduced by regarding all possible mappings of the two images. The property that follows (which is necessary for the use as a variable) is that all zero-cost mappings ensure that all positive connectors carry the same truth value, which is the opposite of the truth value of all the negated connectors. It is easy to see from this example how variable components for arbitrary numbers of positive and negated outputs can be constructed.

Fig. 3. The variable component with two positive and one negated output and two possible mappings (for true and false truth values).

Fig. 4 shows the most complex of the components, the clause component, which consists of two parts. The first part is the horizontal connector with a 'bend' in it to the right. This part has the property that zero-cost mappings are possible for all truth values of x and y with the exception of two 'false' values. This two-input disjunction can be extended to a three-input disjunction using the part in the lower left. If the z connector carries a 'false' truth value, this part can only be mapped one pixel downwards at zero cost; in that case the junction pixel (the fourth pixel in the third row) cannot be mapped upwards at zero cost and the 'two-input clause' behaves as described above. On the other hand, if the z connector carries a 'true' truth value, this part can only be mapped one pixel upwards at zero cost, and the junction pixel can be mapped upwards, thus allowing both x and y to carry a 'false' truth value in a zero-cost mapping. Thus there exists a zero-cost mapping of the clause component iff at least one of the input connectors carries a 'true' truth value.

Fig. 4. The clause component with three incoming connectors x, y, z and zero-cost mappings for the two cases (true, true, false) and (false, false, true).

The described components are already sufficient to prove NP-completeness by reduction from planar 3-SAT (which is an NP-complete sub-problem of 3-SAT where the additional constraint on the instances is that the dependency graph is planar), but in order to derive a reduction from 3-SAT, we also include the possibility of crossing connectors. Fig. 5 shows the connector crossing, whose basic property is to allow zero-cost mappings if the truth values are consistently propagated. This is assured by a color change of the vertical connector and a 'flexible' middle part, which can be mapped to four different positions depending on the truth value distribution.

Fig. 5. The connector crossing component and one zero-cost mapping.
3.2. Reduction

Using the previously introduced components, we can now perform the reduction from 3-SAT to image matching.

Proof of the claim that the image matching problem is NP-complete: Clearly, the image matching problem is in NP since, given a mapping f and two images A and B, the computation of c(A,B,f) can be done in polynomial time. To prove NP-hardness, we construct a reduction from the 3-SAT problem. Given an instance of 3-SAT, we construct two images A and B for which a mapping of cost zero exists iff all the clauses can be satisfied.

Given the dependency graph D, we construct an embedding of the graph into a 2D pixel grid, placing the vertices at a large enough distance from each other (say 100(K+L)²). This can be done using well-known methods from graph drawing (see e.g. di Battista et al., 1999). From this image of the graph D we construct the two images A and B, using the components described above. Each vertex belonging to a variable is replaced with the respective parts of the variable component, having a number of leaving connectors equal to the number of incident edges, under consideration of the positive or negative use in the respective clause. Each vertex belonging to a clause is replaced by the respective clause component, and each crossing of edges is replaced by the respective crossing component. Finally, all the edges are replaced with connectors and corner connectors, and the remaining pixels inside the rectangular hull of the construction are set to the background gray value. Clearly, the placement of the components can be done in such a way that all the components are at a large enough distance from each other, where the background pixels act as an 'insulation' against the mapping of pixels which do not belong to the same component. It can easily be seen that the size of the constructed images is polynomial with respect to the number of vertices and edges of D, and thus polynomial in the size of the instance of 3-SAT, at most of the order (K+L)². Furthermore, it can obviously be constructed in polynomial time, as the corresponding graph drawing algorithms are polynomial.

Let there exist a truth assignment to the variables x_1,…,x_L which satisfies all the clauses c_1,…,c_K. We construct a mapping f that satisfies c(f,A,B) = 0 as follows. For all pixels (i,j) belonging to variable component l with A(i,j) not of the background color, set f(i,j) = (i,j−1) if x_l is assigned the truth value 'true', and set f(i,j) = (i,j+1) otherwise. For the remaining pixels of the variable component set A(i,j) = B(i,j) if f(i,j) = (i,j); otherwise choose f(i,j) from {(i,j+1),(i+1,j+1),(i−1,j+1)} for x_l 'false', respectively from {(i,j−1),(i+1,j−1),(i−1,j−1)} for x_l 'true', such that A(i,j) = B(f(i,j)). This assignment is always possible and has zero cost, as can easily be verified.

For the pixels (i,j) belonging to (corner) connector components, the mapping function can only be extended in one way without the introduction of nonzero cost, starting from the connection with the variable component. This is ensured by the basic connector property. By choosing f(i,j) = (i,j) for all pixels of background color, we obtain a valid extension for the connectors.
For the connector crossing components the extension is straightforward, although here, as in the variable mapping, some care must be taken with the assignment of the background-value pixels; a zero-cost assignment is always possible using the same scheme as presented for the variable mapping.

It remains to be shown that the clause components can be mapped at zero cost if at least one of the input connectors x, y, z carries a 'true' truth value. For a proof we regard all seven possibilities and construct a mapping for each case. In the description of the clause component it was already argued that this is possible, and due to space limitations we omit the formalization of the argument here. Finally, for all the pixels (i,j) not belonging to any of the components, we set f(i,j) = (i,j), thus arriving at a mapping function which has c(f,A,B) = 0.
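As the proof uses, c(A,B,f) is computable in polynomial time for a given mapping f, which is what places image matching in NP. A direct sketch with the squared-Euclidean choices f_1 = id and d_d(z) = ‖z‖²; the array conventions are assumptions made for the illustration:

```python
import numpy as np

def matching_cost(A, B, f):
    """A, B: MxM integer gray images; f: MxMx2 integer array where
    f[i, j] is the pixel of B that pixel (i, j) of A maps to."""
    # gray-value term: d_g(A_ij, B_f(i,j)) with squared differences
    mapped = B[f[..., 0], f[..., 1]].astype(np.int64)
    cost = np.sum((A.astype(np.int64) - mapped) ** 2)
    # distortion terms: deviation of neighboring displacements from the
    # identity mapping, in the horizontal and vertical directions
    dh = f[1:, :] - f[:-1, :] - np.array([1, 0])
    dv = f[:, 1:] - f[:, :-1] - np.array([0, 1])
    return cost + np.sum(dh ** 2) + np.sum(dv ** 2)
```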

Image Processing Foreign Literature Translation (2)

Appendix I: English original

The Difference Between the Illustrator and Photoshop Software

Photoshop and Illustrator are both products of Adobe. Photoshop, the more familiar of the two, integrates image scanning, editing and modification, image production, advertising creativity, and image input and output into one image processing package, and is favored by graphic designers and computer art enthusiasts alike.

Photoshop's expertise is image processing, not graphics creation. Its field of application is very extensive, covering images, graphics, text, video and publishing. In terms of function, Photoshop can be divided into image editing, image compositing, color correction and special-effects production. Image editing is the basis of image processing: images can be transformed in various ways, such as enlarging, reducing, rotating, skewing, mirroring and changing perspective, and they can be copied, despotted, repaired and retouched. This is very useful in wedding photography and portrait production, where unsatisfactory parts of a portrait can be removed or beautified to obtain very satisfying results.

Image compositing combines several images, through layer operations and tool applications, into a complete image that conveys a definite meaning, which is a standard technique in graphic design. The drawing tools provided by Photoshop let foreign images blend well with creative material, making near-perfect composites possible.

Color correction is one of Photoshop's most powerful functions: the colors of an image can be quickly restored, and color casts adjusted and corrected; images can also be switched between different color modes to meet the needs of areas such as web image design, printing and multimedia.

Special effects in Photoshop are produced mainly through the comprehensive application of filters, channels and tools. Effects such as oil painting, relief, plaster painting and sketching, which traditionally require artistic skill, can all be completed with Photoshop, and the production of all kinds of effects, including special-effect lettering, is one reason many graphic designers are keen to study Photoshop.

When using Photoshop's color functions, users will meet several different color modes: RGB, CMYK, HSB and Lab. The RGB and CMYK color modes remind users that natural color and the colors on a monitor or on a printed page are created in totally different ways. A monitor creates color by emitting red, green and blue light: it uses the RGB (red/green/blue) color mode. In order to render the continuous tones of a complex color photograph, printing technology uses combinations of cyan, magenta, yellow and black inks, which reflect or absorb light of various wavelengths; printing these four colors on top of one another (overprinting) creates color in the CMYK (cyan/magenta/yellow/black) mode. The HSB (hue/saturation/brightness) color model is based on the way humans perceive color, so it provides an intuitive method of translating natural color into the colors the computer creates. The Lab color mode provides a device-independent way of creating color, regardless of the monitor used.
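The relationship between the RGB and CMYK modes described above can be illustrated with the common naive conversion formula. This ignores ICC profiles, dot gain and ink limits, so it is not Photoshop's actual color engine, merely the textbook approximation:

```python
def rgb_to_cmyk(r, g, b):
    """r, g, b in [0, 1]; returns (c, m, y, k) in [0, 1]."""
    k = 1.0 - max(r, g, b)          # black: darkness no ink combination adds
    if k == 1.0:                    # pure black
        return 0.0, 0.0, 0.0, 1.0
    c = (1.0 - r - k) / (1.0 - k)   # cyan absorbs red
    m = (1.0 - g - k) / (1.0 - k)   # magenta absorbs green
    y = (1.0 - b - k) / (1.0 - k)   # yellow absorbs blue
    return c, m, y, k
```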
Photoshop's expertise, then, is image processing, not graphics creation, and it is necessary to distinguish between the two concepts. Image processing means processing an existing bitmap image and applying special effects to it; the key lies in processing the image. Graphics creation software designs graphics from one's own creative ideas using vector graphics; the main software of this kind is Adobe Illustrator and Macromedia's Freehand.

Adobe Illustrator, the world's most famous graphics software, excels at graphics creation, not image processing. Adobe Illustrator is the industry-standard vector illustration software for publishing, multimedia and online images. Whether they are designers or professional illustrators producing printed line art, artists producing multimedia images, or producers of Internet pages or online content, users will find that Illustrator is more than an art tool: the software provides unprecedented precision and control over line art and is suitable for producing anything from small designs to large, complex projects.

Adobe Illustrator, with its powerful functions and considerate user interface, has taken most of the global market share for vector editing software. According to incomplete statistics, 37% of designers worldwide use Adobe Illustrator for art and design. In particular, based on Adobe's patented PostScript technology, Illustrator has fully occupied the professional printed-illustration field. Users who have tried Illustrator will find that only Freehand can compare with its powerful functions and concise interface design. (Freehand was the vector graphics software launched by Macromedia; after Macromedia was merged into Adobe, the decision was made to discontinue its development and it has been withdrawn from the market.)

Adobe launched Illustrator 1.1 in 1987 and version 2.0 the following year. Illustrator really took off with the Illustrator 88 version introduced on the Mac in 1988. It was upgraded to version 3.0 on the Mac in 1991 and spread to Unix platforms. Version 4.0 of 1992 was the first on the PC platform and also the earliest version ported to Japanese. On the Mac the most used versions were 5.0/5.5, because these versions used Dan Clark's anti-aliasing display engine, which gave the previously jagged on-screen display of vector graphics a qualitative leap. At the same time the interface was significantly reworked in a style very similar to Photoshop, so it was fairly easy for old Adobe users to pick up; unsurprisingly, a Japanese edition was soon launched and became popular in the publishing industry, though no PC version was offered. Adobe then launched version 6.0 on the Mac and Unix platforms. Illustrator became really known to PC users with version 7.0, launched for Mac and Windows in 1997. Because version 7.0 used the complete PostScript page description language, the quality of page text and graphics leapt again, and its good interoperability with Photoshop won it a good reputation. The only pity was that 7.0's support for Chinese was abysmal.
In 1998 Adobe launched the landmark Illustrator 8.0, a version that made Illustrator a very complete drawing package. Relying on its strength, Adobe completely solved double-byte support for languages such as Chinese and Japanese and added powerful features such as the 'gradient mesh' tool (CorelDraw 9.0 has a corresponding function, but with poorer results) and text editing tools, which let Illustrator fully occupy the top position among professional vector graphics software.

The biggest characteristic of Adobe Illustrator is its use of Bézier curves, which make powerful vector drawing possible through simple operations. It now also integrates functions such as word processing and coloring, and is widely used not only in illustration production but also in the design and production of printed products (such as advertising leaflets and booklets); in fact it has become the default standard of the desktop publishing (DTP) industry. Its main competitor was Macromedia Freehand, but in 2005 Macromedia was merged into Adobe.

The so-called Bézier curve method is realized in this software through the pen tool by setting anchor points and direction lines. The average user feels unaccustomed to it at the beginning, and some practice is required, but once it is mastered one can draw all sorts of lines at will, intuitively and reliably.

Illustrator is also an important component of the Creative Suite software family. It has an interface similar to its sibling, the bitmap graphics software Photoshop, can share some plug-ins and functions with it, and connects with it seamlessly. It can also output files in Flash format, so Illustrator can link Adobe products with Flash.

Adobe Illustrator CS5 was released on May 17, 2010. The new Illustrator CS5 can draw accurately in perspective, create variable-width strokes, use lifelike paint brushes, and integrates with the new Adobe CS Live online services. Illustrator CS5 gives full control over variable-width strokes scaled along a path, arrowheads, dashes and artistic brushes. Shapes can be merged, edited and filled directly on the artboard without switching between multiple tools and panels. Illustrator CS5 can handle up to 100 artboards of different sizes in one file and organize and view them as you wish.

Taking Adobe Illustrator CS5 as an example, some basic working tips are briefly introduced here.

Quick background layer. After a design made in Illustrator is stored and opened in Photoshop, the pattern is often on a transparent layer with no background layer at the bottom. To produce a background layer, one generally adds a layer and then executes Merge Down or Flatten. A quick method: press the button at the upper right of the Layers palette, choose New Layer, and the background can be produced quickly. In Photoshop 5 and later this movement is merged into one instruction: select 'New Background' from the menu to finish.

Removing excess swatches. When you open a file, Illustrator 5 will bring in disused swatches created by earlier Illustrator versions that you do not need. In order to remove these unneeded swatches, click the Swatches icon on the Swatches palette, choose Select All Unused in the popup menu, and click the Trash icon to remove the irrelevant swatches. Sometimes you must repeat the select-and-delete process to ensure that the palette is clean.
Note that complex documents will take a relatively long time to clean up.

Defining swatches as spot colors. In Illustrator 5, spot colors have two distinct advantages over process colors: they make tints easy to build, and when you edit a spot color's recipe, objects filled with that color are automatically updated to the new color. Because process colors neither let you build tints nor provide automatic updates, you may want to define all swatches as spot colors; just confirm that when you move to QuarkXPress or PageMaker for four-color printing they are converted back to process colors.

Preferring CMYK. Because Illustrator 7 lets you use the CMYK, RGB and HSB (hue, saturation, brightness) color modes, you must establish colors carefully: a document can now contain objects created with a combination of these modes, and when it does, all sorts of unexpected things may happen at output. Files for print output should use CMYK; use RGB only for work displayed on screen. If your work will be used both for printing and for screen display, first create the print output file with CMYK, then use Save As to make a copy and modify the copy to the appropriate color mode.

Information source: Baidu Encyclopedia

Appendix II: Chinese translation

The Difference Between the Illustrator and Photoshop Software

Photoshop and Illustrator are both produced by Adobe. Photoshop, the software more familiar to everyone, integrates image scanning, editing and modification, image production, advertising creativity, and image input and output into one graphics and image processing package, and is deeply loved by graphic designers and computer art enthusiasts.

Image Processing: Graduation Project Foreign Literature Translation (Translation + Original)

English source material

Image processing is not a one-step process. We are able to distinguish between several steps which must be performed one after the other until we can extract the data of interest from the observed scene. In this way a hierarchical processing scheme is built up, as sketched in the figure, which gives an overview of the different phases of image processing. Image processing begins with the capture of an image with a suitable, not necessarily optical, acquisition system. In a technical or scientific application, we may choose to select an appropriate imaging system. Furthermore, we can set up the illumination system, choose the best wavelength range, and select other options to capture the object feature of interest in the best way in an image. Once the image is sensed, it must be brought into a form that can be treated with digital computers. This process is called digitization.

As the problems of traffic become more and more serious, Intelligent Transport Systems (ITS) have emerged. Automatic license plate recognition is one of the most significant subjects to have grown out of the connection between computer vision and pattern recognition. The image input to the computer is processed and analyzed in order to localize the position of the license plate and recognize the characters on it, expressing these characters in text-string form. The license plate recognition system (LPRS) has important applications in ITS. In an LPRS, the first step is to locate the license plate in the captured image, which is very important for character recognition: the recognition rate is governed by the accuracy of the license plate location. In this paper, several image manipulation methods are compared and analyzed, and solutions for the localization of the car plate are derived; experience shows that good results have been obtained with these methods. Methods based on the edge map and frequency analysis are used in the process of localizing the license plate; that is, the characteristics of the license plate are extracted from the car image after edge detection, and then analyzed and processed until the probable license plate area is extracted.

Automated license plate location is a part of image processing and an important part of intelligent traffic systems; it is the key step in vehicle license plate recognition (LPR). A method for recognizing images with different backgrounds and different illuminations is proposed in the paper: the upper and lower borders are determined through the gray-variation pattern of the character distribution, and the left and right borders are determined through the black-white variation of the pixels in every row.
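The border-projection idea just described can be sketched as follows: character strokes make vertical edges dense inside the plate, so row and column sums of an edge map peak at the plate region. The 0.5 thresholds and the function shape are illustrative assumptions, not the paper's exact procedure.

```python
import cv2
import numpy as np

def locate_plate(gray):
    """gray: 2-D uint8 road-scene image; returns a candidate plate box."""
    # vertical-edge map: plate characters produce dense vertical strokes
    edges = np.abs(cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3))
    # upper/lower borders from the row-wise edge energy
    rows = edges.sum(axis=1)
    band = rows > 0.5 * rows.max()
    top, bottom = np.flatnonzero(band)[[0, -1]]
    # left/right borders from the column-wise energy inside that band
    cols = edges[top:bottom + 1].sum(axis=0)
    run = cols > 0.5 * cols.max()
    left, right = np.flatnonzero(run)[[0, -1]]
    return top, bottom, left, right
```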
The first steps of digital processing may include a number of different operations and are known as image preprocessing. If the sensor has nonlinear characteristics, these need to be corrected. Likewise, the brightness and contrast of the image may require improvement. Commonly, too, coordinate transformations are needed to restore geometrical distortions introduced during image formation. Radiometric and geometric corrections are elementary pixel processing operations. It may be necessary to correct known disturbances in the image, for instance those caused by defocused optics, motion blur, errors in the sensor, or errors in the transmission of image signals. We also deal with reconstruction techniques, which are required by many indirect imaging techniques, such as tomography, that deliver no direct image.

A whole chain of processing steps is necessary to analyze and identify objects. First, adequate filtering procedures must be applied in order to distinguish the objects of interest from other objects and the background. Essentially, from an image (or several images), one or more feature images are extracted. The basic tools for this task are averaging, edge detection, and the analysis of simple neighborhoods and of the complex patterns known in image processing as texture. An important feature of an object is also its motion, so techniques to detect and determine motion are necessary. Then the object has to be separated from the background, which means that regions of constant features and discontinuities must be identified. This process leads to a label image. Now that we know the exact geometrical shape of the object, we can extract further information such as the mean gray value, the area, the perimeter, and other parameters for the form of the object [3]. These parameters can be used to classify objects. This is an important step in many applications of image processing, as the following examples show. In a satellite image showing an agricultural area, we would like to distinguish fields with different crops and obtain parameters to estimate their ripeness or to detect damage by parasites. There are many medical applications where the essential problem is to detect pathological changes; a classic example is the analysis of aberrations in chromosomes. Character recognition in printed and handwritten text is another example, which has been studied since image processing began and still poses significant difficulties.

You hopefully do more, namely try to understand the meaning of what you are reading. This is also the final step of image processing, where one aims to understand the observed scene. We perform this task more or less unconsciously whenever we use our visual system. We recognize people, we can easily distinguish between the image of a scientific lab and that of a living room, and we watch the traffic to cross a street safely. We all do this without knowing how the visual system works.
For some time now, image processing and computer graphics have been treated as two different areas. Knowledge in both areas has increased considerably and more complex problems can now be treated. Computer graphics strives to achieve photorealistic computer-generated images of three-dimensional scenes, while image processing tries to reconstruct the scene from an image actually taken with a camera. In this sense, image processing performs the inverse procedure to that of computer graphics: we start with knowledge of the shape and features of an object, at the bottom of the figure, and work upwards until we get a two-dimensional image. To handle image processing or computer graphics, we basically have to work from the same knowledge. We need to know the interaction between illumination and objects, how a three-dimensional scene is projected onto an image plane, etc. There are still quite a few differences between an image processing and a graphics workstation. But we can envisage that, when the similarities and interrelations between computer graphics and image processing are better understood and the proper hardware is developed, we will see some kind of general-purpose workstation in the future which can handle computer graphics as well as image processing tasks [5]. The advent of multimedia, i.e., the integration of text, images, sound, and movies, will further accelerate the unification of computer graphics and image processing.

In January 1980 Scientific American published a remarkable image called Plume 2, the second of eight volcanic eruptions detected on the Jovian moon Io by the spacecraft Voyager 1 on 5 March 1979. The picture was a landmark image in interplanetary exploration, the first time an erupting volcano had been seen in space. It was also a triumph for image processing. Satellite imagery and images from interplanetary explorers have until fairly recently been the major users of image processing techniques, where a computer image is numerically manipulated to produce some desired effect, such as making a particular aspect or feature of the image more visible. Image processing has its roots in photo reconnaissance in the Second World War, where processing operations were optical and interpretation operations were performed by humans who undertook such tasks as quantifying the effect of bombing raids. With the advent of satellite imagery in the late 1960s, much computer-based work began, and color composite satellite images, sometimes startlingly beautiful, have become part of our visual culture and of the perception of our planet. Like computer graphics, image processing was until recently confined to research laboratories which could afford the expensive computers needed to cope with the substantial processing overheads required to process large numbers of high-resolution images. With the advent of cheap powerful computers and image collection devices like digital cameras and scanners, we have seen a migration of image processing techniques into the public domain. Classical image processing techniques are routinely employed by graphic designers to manipulate photographic and generated imagery, either to correct defects, change color and so on, or creatively to transform the entire look of an image by subjecting it to some operation such as edge enhancement. A recent mainstream application of image processing is the compression of images, either for transmission across the Internet or the compression of moving video images in video telephony and video conferencing. Video telephony is one of the current crossover areas that employ both computer graphics and classical image processing techniques to try to achieve very high compression rates. All this is part of an inexorable trend towards the digital representation of images. Indeed, the most powerful image form of the twentieth century, the TV image, is also about to be taken into the digital domain.

Image processing is characterized by a large number of algorithms that are specific solutions to specific problems. Some are mathematical or context-independent operations that are applied to each and every pixel. For example, we can use Fourier transforms to perform image filtering operations.
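A minimal sketch of such a Fourier filtering operation, here an ideal low-pass filter that zeroes high frequencies in the centered spectrum; the cutoff radius is an arbitrary assumption:

```python
import numpy as np

def fft_lowpass(gray, radius=30):
    """Keep only frequencies within `radius` of the spectrum center."""
    F = np.fft.fftshift(np.fft.fft2(gray))           # centered spectrum
    h, w = gray.shape
    y, x = np.ogrid[:h, :w]
    mask = (y - h / 2) ** 2 + (x - w / 2) ** 2 <= radius ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
```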
Others are 'algorithmic': we may use a complicated recursive strategy to find those pixels that constitute the edges in an image. Image processing operations often form part of a computer vision system. The input image may be filtered to highlight or reveal edges prior to a shape detection stage; such operations are usually known as low-level operations. In computer graphics, filtering operations are used extensively to avoid aliasing or sampling artifacts.

Chinese translation: Image processing is not a process that can be completed in a single step.

Image Segmentation and Registration: Chinese-English Translation

Foreign literature translation. Translator: Li Ruiqin; Supervisor: Liu Wenjun

Medical image registration with partial data
Senthil Periaswamy, Hany Farid

The goal of image registration is to find a transformation that aligns one image to another. Medical image registration has emerged from this broad area of research as a particularly active field. This activity is due in part to the many clinical applications, including diagnosis, longitudinal studies, and surgical planning, and to the need for registration across different imaging modalities (e.g., MRI, CT, PET, X-ray, etc.). Medical image registration, however, still presents many challenges. Several notable difficulties are: (1) the transformation between images can vary widely and be highly non-rigid in nature; (2) images acquired from different modalities may differ significantly in overall appearance and resolution; (3) there may not be a one-to-one correspondence between the images (missing/partial data); and (4) each imaging modality introduces its own unique challenges, making it difficult to develop a single generic registration algorithm.

In estimating the transformation that aligns two images we must choose: (1) whether to estimate the transformation between a small number of extracted features or between the complete unprocessed intensity images; (2) a model that describes the geometric transformation; (3) whether and how to explicitly model intensity changes; (4) an error metric that incorporates the previous three choices; and (5) a minimization technique for minimizing the error metric, yielding the desired transformation.

Feature-based approaches extract a (typically small) number of corresponding landmarks or features between the pair of images to be registered. The overall transformation is estimated from these features. Common features include corresponding points, edges, contours or surfaces. These features may be specified manually or extracted automatically. Fiducial markers may also be used as features; these markers are usually selected to be visible in different modalities. Feature-based approaches have the advantage of greatly reducing computational complexity. Depending on the feature extraction process, these approaches may also be more robust to intensity variations that arise during, for example, cross-modality registration. Also, features may be chosen to help reduce sensor noise. These approaches can, however, be highly sensitive to the accuracy of the feature extraction. Intensity-based approaches, on the other hand, estimate the transformation between the entire intensity images. Such an approach is typically more computationally demanding, but avoids the difficulties of a feature extraction stage.

Independent of the choice of a feature- or intensity-based technique, a model describing the geometric transform is required. A common and straightforward choice is a model that embodies a single global transformation. The problem of estimating global translation and rotation parameters has been studied in detail, and a closed-form solution was proposed by Schonemann. Other closed-form solutions include methods based on singular value decomposition (SVD), eigenvalue-eigenvector decomposition, and unit quaternions. One idea for a global transformation model is to use polynomials. For example, a zeroth-order polynomial limits the transformation to simple translations, a first-order polynomial allows for an affine transformation, and, of course, higher-order polynomials can be employed, yielding progressively more flexible transformations.
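For the affine (first-order polynomial) case just mentioned, the six parameters can be estimated from corresponding feature points by linear least squares. A sketch under the assumption of a feature-based approach with at least three point pairs:

```python
import numpy as np

def fit_affine(src, dst):
    """src, dst: Nx2 arrays of corresponding points, N >= 3.
    Returns a 3x2 parameter matrix P such that dst ~= [src 1] @ P."""
    n = src.shape[0]
    # design matrix [x y 1]: the same linear model acts on each output coordinate
    X = np.hstack([src, np.ones((n, 1))])
    P, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return P  # rows: coefficients of x, of y, and the translation
```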
For example, the registration package Automated Image Registration (AIR) can employ (as an option) a fifth-order polynomial consisting of 168 parameters (for 3-D registration). The global approach has the advantage that the model consists of a relatively small number of parameters to be estimated, and the global nature of the model ensures a consistent transformation across the entire image. The disadvantage of this approach is that the estimation of higher-order polynomials can lead to an unstable transformation, especially near the image boundaries. In addition, a relatively small and local perturbation can cause disproportionate and unpredictable changes in the overall transformation. An alternative to these global approaches are techniques that model the global transformation as a piecewise collection of local transformations. For example, the transformation between each local region may be modeled with a low-order polynomial, and global consistency is enforced via some form of smoothness constraint. The advantage of such an approach is that it is capable of modeling highly nonlinear transformations without the numerical instability of high-order global models. The disadvantage is one of computational inefficiency, due to the significantly larger number of model parameters that need to be estimated and the need to guarantee global consistency. Low-order polynomials are, of course, only one of many possible local models that may be employed. Other local models include B-splines, thin-plate splines, and a multitude of related techniques. The package Statistical Parametric Mapping (SPM) uses low-frequency discrete cosine basis functions, where a bending-energy function is used to ensure global consistency. Physics-based techniques that compute a local geometric transform include those based on the Navier-Stokes equilibrium equations for linear elasticity and those based on viscous fluid approaches.

Under certain conditions a purely geometric transformation is sufficient to model the transformation between a pair of images. Under many real-world conditions, however, the images undergo changes in both geometry and intensity (e.g., brightness and contrast). Many registration techniques attempt to remove these intensity differences with a pre-processing stage, such as histogram matching or homomorphic filtering. The issues involved with modeling intensity differences are similar to those involved in choosing a geometric model. Because the simultaneous estimation of geometric and intensity changes can be difficult, few techniques build explicit models of intensity differences. A few notable exceptions include AIR, in which global intensity differences are modeled with a single multiplicative contrast term, and SPM, in which local intensity differences are modeled with a basis function approach.

Having decided upon a transformation model, the task of estimating the model parameters begins. As a first step, an error function in the model parameters must be chosen. This error function should embody some notion of what it means for a pair of images to be registered. Perhaps the most common choice is the mean square error (MSE), defined as the mean of the square of the differences (in either feature distance or intensity) between the pair of images. This metric is easy to compute and often affords simple minimization techniques. A variation of this metric is the unnormalized correlation coefficient, applicable to intensity-based techniques. This error metric is defined as the sum of the point-wise products of the image intensities, and can be efficiently computed using Fourier techniques.
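Sketches of the two intensity-based metrics just defined, mean square error and the unnormalized correlation coefficient; the direct summation shown here is the definition, not the Fourier-based fast computation the text mentions:

```python
import numpy as np

def mse(a, b):
    """Mean of the squared intensity differences between two images."""
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def unnormalized_correlation(a, b):
    """Sum of the point-wise products of the image intensities."""
    return np.sum(a.astype(np.float64) * b.astype(np.float64))
```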
A disadvantage of these error metrics is that images that would qualitatively be considered to be in good registration may still have large errors due to, for example, intensity variations or slight misalignments. Another error metric (included in AIR) is the ratio of image uniformity (RIU), defined as the normalized standard deviation of the ratio of image intensities. Such a metric is invariant to overall intensity scale differences, but typically leads to nonlinear minimization schemes. Mutual information, entropy, and the Pearson product-moment cross-correlation are just a few examples of other possible error functions. Such error metrics are often adopted to deal with the lack of an explicit model of intensity transformations.

In the final step of registration, the chosen error function is minimized, yielding the desired model parameters. In the most straightforward case, least-squares estimation is used when the error function is linear in the unknown model parameters. This closed-form solution is attractive, as it avoids the pitfalls of iterative minimization schemes such as gradient descent or simulated annealing. Such nonlinear minimization schemes are, however, necessary when the error function is nonlinear, as it often is. A reasonable compromise between these approaches is to begin with a linear error function, solve using least squares, and use this solution as a starting point for a nonlinear minimization.

Translation: Medical Image Registration with Partial Data, by Senthil Periaswamy and Hany Farid. The goal of image registration is to find a transformation that aligns one image to another.

Digital Image Processing: English Literature Translation Reference

Hybrid Genetic Algorithm Based Image Enhancement Technology

Mu Dongzhou, Department of Information Engineering, XuZhou College of Industrial Technology, XuZhou, China, ****************.cn
Xu Chao and Ge Hongmei, Department of Information Engineering, XuZhou College of Industrial Technology, XuZhou, China, ***************.cn, ***************.cn

Abstract: In image enhancement, Tubbs proposed a normalized incomplete Beta function to represent several kinds of commonly used nonlinear transform functions for research on image enhancement, but how to determine the coefficients of the Beta function remains a problem. We propose a hybrid genetic algorithm which combines differential evolution with the genetic algorithm in the image enhancement process and utilizes the fast searching ability of the algorithm to carry out adaptive mutation and search. Finally, simulation experiments prove the effectiveness of the method.

Keywords: image enhancement; hybrid genetic algorithm; adaptive enhancement

I. INTRODUCTION

In the formation, transfer or conversion of an image, objective factors such as system noise, insufficient or excessive exposure, and relative motion often cause a difference between the obtained image and the original; the obtained image is said to be degraded. A degraded image is usually blurred, and the information extracted from it by a machine is reduced or even wrong, so measures must be taken to improve it.

Image enhancement technology is proposed in this sense, and its purpose is to improve image quality. According to the condition of the image, various special techniques are used to highlight some of the information in the image and to reduce or eliminate irrelevant information, in order to emphasize the overall or local features of the image. There is still no unified theory of image enhancement; image enhancement techniques can be divided into three categories: point operations, spatial-domain methods and frequency-domain methods. This paper presents an adaptive image enhancement method, called a hybrid genetic algorithm, that adjusts automatically according to the characteristics of the image. It combines the adaptive search capability of the differential evolution algorithm and automatically determines the parameter values of the transformation function in order to achieve adaptive image enhancement.

II. IMAGE ENHANCEMENT TECHNOLOGY

Image enhancement emphasizes or highlights certain features of an image, such as contour, contrast and edges, in order to facilitate detection or further analysis and processing. Enhancement will not increase the information in the image data, but will expand the dynamic range of the chosen features, making them more easily detected or identified and laying a good foundation for the follow-up detection, analysis and processing.

Image enhancement methods consist of point operations, spatial filtering and frequency-domain filtering. Point operations include contrast stretching, histogram modeling, noise clipping and image subtraction techniques. Spatial filtering includes low-pass filtering, median filtering and high-pass filtering (image sharpening). Frequency-domain filtering includes homomorphic filtering and multi-scale, multi-resolution image enhancement [1].
III. DIFFERENTIAL EVOLUTION ALGORITHM

Differential Evolution (DE) was first proposed by Price and Storn. Compared with other evolutionary algorithms, DE has a strong spatial search capability and is easy to implement and easy to understand. DE is a novel search algorithm: it first generates the initial population randomly in the search space, then calculates the difference vector between two members of the population and adds this difference to a third member, thereby forming a new individual. If the fitness of the new individual is better than that of the original, the original is replaced by the new individual.

DE has the same operations as the genetic algorithm (mutation, crossover and selection), but the methods are different. Suppose the population size is P and the vector dimension is D; the target vector can be expressed as

x_i = [x_{i1}, x_{i2}, …, x_{iD}], i = 1, …, P  (1)

and the mutation vector can be expressed as

V_i = X_{r1} + F × (X_{r2} − X_{r3}), i = 1, …, P  (2)

where X_{r1}, X_{r2}, X_{r3} are three randomly selected individuals from the population, with r1 ≠ r2 ≠ r3 ≠ i. F is a real constant factor in the range [0, 2] used to control the influence of the difference vector, commonly referred to as the scaling factor. Clearly, the smaller the difference vector, the smaller the disturbance, which means that if the population approaches the optimum, the disturbance is automatically reduced.

The selection operation of DE is a "greedy" selection mode: if and only if the fitness of the new vector u_i is better than that of the target vector individual x_i will u_i be retained in the next generation. Otherwise, the target vector x_i remains in the population and once again serves as a parent vector of the next generation.

IV. HYBRID GA FOR IMAGE ENHANCEMENT

Image enhancement is the foundation of fast object detection, so a real-time algorithm with good performance is necessary. To meet the practical requirements of different systems, many algorithms need manually determined parameters and thresholds. A normalized incomplete Beta function can completely cover the typical image enhancement transform types, but determining the Beta function parameters still poses many problems. This section presents an adaptive image enhancement method based on the Beta function: the search capability of the adaptive hybrid genetic algorithm automatically determines the parameter values of the transformation function in order to achieve adaptive image enhancement.

The purpose of image enhancement is to improve image quality: to make specified features more prominent, to restore details of the degraded image, and so on. A common feature of degraded images is low contrast, usually presented as overly bright, dim or gray-concentrated intensities. A low-contrast degraded image can be enhanced by stretching its dynamic histogram, i.e., by a gray-level transformation. We use I_xy to denote the gray level of point (x, y), which can be expressed by

I_xy = f(x, y)  (3)

where f is a linear or nonlinear function. In general, gray images have four typical nonlinear transformations [6][7], as shown in Figure 1. We use a normalized incomplete Beta function to automatically fit these 4 categories of image enhancement transformation curves.
It is defined in (4):

f(u) = B^{−1}(α, β) ∫₀ᵘ t^{α−1}(1 − t)^{β−1} dt, 0 < u < 1  (4)

where

B(α, β) = ∫₀¹ t^{α−1}(1 − t)^{β−1} dt  (5)

For different values of α and β, we get the corresponding response curves from (4) and (5). The hybrid GA makes use of the adaptive differential evolution algorithm of the previous section to search for the best values of α and β of the Beta function, and then substitutes the gray value of each pixel into the Beta function, producing the corresponding transformation of Figure 1 and resulting in ideal image enhancement. The detailed description follows.

Assume the gray level of the original image at pixel (x, y) is denoted by i_xy, (x, y) ∈ Ω, where Ω is the image domain, and the enhanced image is denoted by I_xy. Firstly, the image gray values are normalized into [0, 1] by (6):

g_xy = (i_xy − i_min) / (i_max − i_min)  (6)

where i_max and i_min express the maximum and minimum image gray values, respectively. The nonlinear transformation function f(u) (0 ≤ u ≤ 1) is then defined to transform the source image. Finally, we use the hybrid genetic algorithm to determine the optimal parameters α and β of the Beta function f(u), and the enhanced image G_xy is obtained by anti-normalizing the transformed values.
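A minimal sketch of the two building blocks just described, ours rather than the paper's code: Eqs.(4)-(5) are exactly the regularized incomplete Beta function, available as scipy.special.betainc, and one DE mutation step follows Eq.(2). Function names and the uint8 output range are our assumptions.

```python
import numpy as np
from scipy.special import betainc  # regularized incomplete Beta, Eqs.(4)-(5)

def beta_enhance(img, alpha, beta):
    # Normalize gray levels per Eq.(6), map through f(u), anti-normalize.
    g = img.astype(float)
    u = (g - g.min()) / (g.max() - g.min() + 1e-12)
    f = betainc(alpha, beta, u)        # alpha, beta > 0
    return (f * 255).astype(np.uint8)

def de_mutate(pop, F=0.5, rng=np.random.default_rng()):
    # One DE mutation per Eq.(2): V_i = X_r1 + F * (X_r2 - X_r3).
    # pop: (P, D) float array of candidate (alpha, beta) vectors.
    P = len(pop)
    V = np.empty_like(pop)
    for i in range(P):
        r1, r2, r3 = rng.choice([j for j in range(P) if j != i], 3, replace=False)
        V[i] = pop[r1] + F * (pop[r2] - pop[r3])
    return V
```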
V. EXPERIMENT AND ANALYSIS

In the simulation we used two different types of degraded gray-scale images; the program was run 50 times, with a population size of 30, evolved for 600 generations. The results show that the proposed method can very effectively enhance different types of degraded images.

In Figure 2 the size of the original image is 320 × 320; its contrast is low and some details are obscure. In particular, the texture of the scarf and other details is not obvious and the visual effect is poor. Using the method proposed in this section overcomes these problems and gives a satisfactory result, as shown in Figure 5(b); the visual effect is well improved. From the histogram view, the distribution of image intensity is more uniform, and the distribution of light and dark gray areas is more reasonable. The hybrid genetic algorithm automatically identifies the nonlinear transformation function curve, and the parameter values obtained are 9.837 and 5.7912. From the curve it can be seen to be consistent with class (c) of Figure 3: the middle region is stretched while the two end regions are compressed, which is consistent with the histogram. The original image has low overall contrast; compressing both ends and stretching the middle region agrees with human vision, and the enhancement effect is significantly improved.

In Figure 3 the size of the original image is 320 × 256 and the overall intensity is low. Using the method proposed in this section we obtain image b; we can see that the resolution and contrast of details such as the ground, chairs and clothes are significantly improved over the original image. The gray distribution of the original image is concentrated in the lower region, while the gray of the enhanced image is uniform. The gray transformation before and after is basically the same as the nonlinear transformation of class (a) in Figure 3, i.e., the dim region of the image is stretched; the parameter values are 5.9409 and 9.5704. The inferred type of nonlinear transformation for the degraded image is correct, the enhanced visual effect is good, and the enhancement is robust.

It is difficult to assess the quality of image enhancement, and there are still no common evaluation criteria for images. The peak signal-to-noise ratio (PSNR) is commonly used, but it does not reflect the errors perceived by the human visual system. Therefore, we use the edge protection index and the contrast increase index to evaluate the experimental results.

The Edge Protection Index (EPI) is defined in (7). The Contrast Increase Index (CII) is defined as follows:

CII = C_E / C_O, C = (G_max − G_min) / (G_max + G_min)  (8)

where C_E and C_O are the contrasts of the enhanced and original images, and G_max and G_min are the maximum and minimum gray values.

In Figure 4 we compare with the wavelet-transform-based algorithm and list the evaluation numbers in TABLE I. Figures 4(a, c) show the original image and the result enhanced by the differential evolution algorithm; it can be seen that the contrast is markedly improved, image details are clearer and edge features are more prominent. Figures 4(b, c) compare wavelet-based and hybrid-genetic-algorithm-based image enhancement: the wavelet-based method brings out some image details and its visual effect is an improvement over the original image, but the enhancement is not obvious; the adaptive-transform enhancement based on the hybrid genetic algorithm works very well. Image details, texture and clarity are greatly improved compared with the wavelet-transform results, which is helpful for post-processing and analysis of the image. The wavelet enhancement experiment uses the "sym4" wavelet; in the differential evolution enhancement experiment the parameter values are 5.9409 and 9.5704. For a 256 × 256 image, the adaptive-hybrid-genetic-algorithm-based image enhancement software in Matlab 7.0 takes about 2 seconds, which is very fast. From the objective evaluation criteria in TABLE I it can be seen that, in both the edge protection index and the contrast increase index, the adaptive hybrid genetic algorithm shows a larger increase than the traditional wavelet-transform-based method, which demonstrates the objective advantages of the method described in this section.
From the above analysis, we can see that this method is useful and effective.

VI. CONCLUSION

In this paper, from the perspective of maintaining the integrity of image information, a hybrid genetic algorithm is used for image enhancement. As can be seen from the experimental results, the image enhancement method based on the hybrid genetic algorithm has an obvious effect. Compared with other evolutionary algorithms, the hybrid genetic algorithm performs outstandingly: it is simple, robust and converges rapidly, and a nearly optimal solution can be found in each run, while only a few parameters need to be set and the same set of parameters can be used for many different problems. The quick search capability of the hybrid genetic algorithm is used to perform adaptive mutation and search for a given test image and to finalize the best parameter values of the transformation function. Compared with the exhaustive method, the computing time and complexity are significantly reduced. Therefore, the proposed image enhancement method has practical value.

REFERENCES
[1] HE Bin et al. Visual C++ Digital Image Processing [M]. Posts & Telecom Press, 2001, 4: 473-477.
[2] Storn R, Price K. Differential Evolution—a Simple and Efficient Adaptive Scheme for Global Optimization over Continuous Space [R]. International Computer Science Institute, Berkeley, 1995.
[3] Tubbs J D. A note on parametric image enhancement [J]. Pattern Recognition, 1997, 30(6): 617-621.
[4] TANG Ming, MA Song De, XIAO Jing. Enhancing Far Infrared Image Sequences with Model Based Adaptive Filtering [J]. Chinese Journal of Computers, 2000, 23(8): 893-896.
[5] ZHOU Ji Liu, LV Hang. Image Enhancement Based on A New Genetic Algorithm [J]. Chinese Journal of Computers, 2001, 24(9): 959-964.
[6] LI Yun, LIU Xuecheng. On Algorithm of Image Contrast Enhancement Based on Wavelet Transformation [J]. Computer Applications and Software, 2008, 8.
[7] XIE Mei-hua, WANG Zheng-ming. The Partial Differential Equation Method for Image Resolution Enhancement [J]. Journal of Remote Sensing, 2005, 9(6): 673-679.

基于混合遗传算法的图像增强技术
Mu Dongzhou 徐州工业职业技术学院信息工程系 XuZhou, China ****************.cn
Xu Chao and Ge Hongmei 徐州工业职业技术学院信息工程系 XuZhou ********************.cn, ***************.cn

摘要—在图像增强中,塔布斯提出了用归一化不完全β函数表示几种常用的非线性变换函数来研究图像增强。

A Threshold Selection Method from Gray-Level Histograms图像分割经典论文翻译(部分)

A Threshold Selection Method from Gray-Level Histograms [1]

[1] Otsu N. A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man, and Cybernetics, SMC-9, 1979: 62-66.

一种由灰度直方图选取阈值的方法

摘要:介绍了一种用于画面分割的自动阈值选择的非参数、无监督方法。

最佳阈值由判别准则选取,即最大化按灰度级划分所得各类的类间方差。

该过程非常简单,只利用了灰度直方图的零阶和一阶累积矩。

这一方法还可以直接推广到多阈值的问题。

文中给出的几种实验结果也验证了方法的有效性。

一、简介

在图像处理中,选取适当的灰度阈值、从图像背景中提取目标是非常重要的。

在这方面已经提出了多种技术。

在理想情况下,直方图中目标和背景各对应一个峰,两峰之间有一个又深又陡的谷,此时可将阈值选在谷底。

然而,对于大多数实际图像,往往难以精确地检测谷底,尤其当谷底平坦宽阔且充满噪声,或两峰高度极其悬殊、根本不形成可追踪的谷时更是如此。

为克服这些困难,已经提出了一些技术。

例如谷底锐化技术[2],它只统计导数(拉普拉斯或梯度)绝对值较大的像素来修正直方图;又如差分直方图方法[3],选取差分值最大处的灰度级作为阈值。

这些方法都利用了原始图像中相邻像素(或边缘)的信息来修正直方图,使其更便于确定阈值。

另一类方法则直接对灰度直方图采用参数化方法。

例如,在最小二乘意义上用若干高斯分布之和逼近直方图,再应用统计决策程序确定阈值[4]。

然而,这种方法的计算相当繁琐,有时还不稳定。

此外,在许多情况下,高斯分布对真实分布的近似效果也很差。

无论如何,迄今所提出的大多数方法都没有一个能够评价阈值"好坏"的统一标准。

这意味着,正确的途径应当是先建立一个合适的准则,从更全面的角度评价阈值的"好与坏",再由该准则导出最佳阈值的选取方法。
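The criterion the translation describes, maximizing the between-class variance computed from the zeroth- and first-order cumulative moments of the gray-level histogram, reduces to a few lines. Below is a minimal sketch of ours (not Otsu's original code) for an 8-bit image:

```python
import numpy as np

def otsu_threshold(img):
    # Normalized gray-level histogram p_i.
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                   # zeroth-order cumulative moments
    mu = np.cumsum(p * np.arange(256))     # first-order cumulative moments
    mu_t = mu[-1]                          # total mean gray level
    denom = omega * (1.0 - omega)
    # Between-class variance sigma_B^2(k); empty classes are guarded out.
    sigma_b2 = np.where(denom > 0, (mu_t * omega - mu) ** 2 / denom, 0.0)
    return int(np.argmax(sigma_b2))        # optimal threshold k*
```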

matlab图像处理 外文翻译 外文文献 英文文献 基于视觉的矿井救援机器人场景识别

附录A 英文原文

Scene recognition for mine rescue robot localization based on vision
CUI Yi-an(崔益安), CAI Zi-xing(蔡自兴), WANG Lu(王璐)

Abstract: A new scene recognition system was presented based on fuzzy logic and hidden Markov model (HMM) that can be applied in mine rescue robot localization during emergencies. The system uses a monocular camera to acquire omni-directional images of the mine environment where the robot locates. By adopting the center-surround difference method, salient local image regions are extracted from the images as natural landmarks. These landmarks are organized by using HMM to represent the scene where the robot is, and a fuzzy logic strategy is used to match the scene and landmark. In this way, the localization problem, which is the scene recognition problem in the system, can be converted into the evaluation problem of HMM. The contributions of these skills make the system able to deal with changes in scale, 2D rotation and viewpoint. The results of experiments also prove that the system has a higher ratio of recognition and localization in both static and dynamic mine environments.

Key words: robot localization; scene recognition; salient image; matching strategy; fuzzy logic; hidden Markov model

1 Introduction

Search and rescue in disaster areas in the domain of robotics is a burgeoning and challenging subject [1]. The mine rescue robot was developed to enter mines during emergencies to locate possible escape routes for those trapped inside and determine whether it is safe for humans to enter or not. Localization is a fundamental problem in this field. Localization methods based on cameras can be mainly classified into geometric, topological or hybrid ones [2]. With its feasibility and effectiveness, scene recognition has become one of the important technologies of topological localization.

Currently most scene recognition methods are based on global image features and have two distinct stages: training offline and matching online.

During the training stage, the robot collects images of the environment where it works and processes them to extract global features that represent the scene. Some approaches analyze the image data set directly and find some primary features, such as the PCA method [3]. However, the PCA method is not effective in distinguishing the classes of features. Another type of approach uses appearance features including color, texture and edge density to represent the image. For example, ZHOU et al [4] used multidimensional histograms to describe global appearance features. This method is simple but sensitive to scale and illumination changes. In fact, all kinds of global image features suffer from changes of environment.

LOWE [5] presented the SIFT method, which uses similarity-invariant descriptors formed by characteristic scale and orientation at interest points to obtain the features. The features are invariant to image scaling, translation and rotation, and partially invariant to illumination changes. But SIFT may generate 1 000 or more interest points, which may slow down the processor dramatically.

During the matching stage, the nearest neighbor strategy (NN) is widely adopted for its facility and intelligibility [6]. But it cannot capture the contribution of individual features to scene recognition. In experiments, NN is not good enough to express the similarity between two patterns.
Furthermore, the selected features cannot represent the scene thoroughly according to state-of-the-art pattern recognition, which makes recognition unreliable [7].

So in this work a new recognition system is presented, which is more reliable and effective if it is used in a complex mine environment. In this system, we improve the invariance by extracting salient local image regions as landmarks, in place of the whole image, to deal with large changes in scale, 2D rotation and viewpoint. The number of interest points is also reduced effectively, which makes the processing easier. A fuzzy recognition strategy is designed to recognize the landmarks in place of NN, which can strengthen the contribution of individual features to scene recognition. Because of its partial-information-resuming ability, the hidden Markov model is adopted to organize those landmarks, which can capture the structure or relationship among them. So scene recognition can be transformed into the evaluation problem of HMM, which makes recognition robust.

2 Salient local image region detection

Research on biological vision systems indicates that an organism (like a drosophila) often pays attention to certain special regions in the scene for their behavioral relevance or local image cues while observing its surroundings [8]. These regions can be taken as natural landmarks to effectively represent and distinguish different environments. Inspired by this, we use the center-surround difference method to detect salient regions in multi-scale image spaces. The opponencies of color and texture are computed to create the saliency map. Afterwards, the sub-image centered at the salient position in S is taken as the landmark region. The size of the landmark region can be decided adaptively according to the changes of gradient orientation of the local image [11].

Mobile robot navigation requires that natural landmarks be detected stably when environments change to some extent. To validate the repeatability of landmark detection of our approach, we have done some experiments on the cases of scale, 2D rotation and viewpoint changes etc. Fig.1 shows that the door is detected for its saliency when the viewpoint changes. More detailed analysis and results about scale and rotation can be found in our previous works [12].

Fig.1 Experiment on viewpoint changes

3 Scene recognition and localization

Different from other scene recognition systems, our system doesn't need offline training. In other words, our scenes are not classified in advance. When the robot wanders, scenes captured at intervals of fixed time are used to build the vertices of a topological map, which represent the places where the robot locates. Although the map's geometric layout is ignored by the localization system, it is useful for visualization and debugging [13] and beneficial to path planning. So localization means searching for the best match of the current scene on the map. In this paper the hidden Markov model is used to organize the landmarks extracted from the current scene and create the vertex of the topological map, for its partial-information-resuming ability.

Resembling a panoramic vision system, the robot looks around to get omni-images. From each image, salient local regions are detected and formed into a sequence, named the landmark sequence, whose order is the same as the image sequence. Then a hidden Markov model is created based on the landmark sequence involving k salient local image regions, which is taken as the description of the place where the robot locates. In our system the EVI-D70 camera has a view field of ±170°.
Considering the overlap effect, we sample the environment every 45° to get 8 images. Taking the 8 images as hidden states Si (1 ≤ i ≤ 8), the created HMM can be illustrated by Fig.2. The parameters of the HMM, aij and bjk, are achieved by learning, using the Baum-Welch algorithm [14]. The threshold of convergence is set as 0.001.

Fig.2 HMM of environment

As for the edges of the topological map, we assign them distance information between two vertices. The distances can be computed according to odometry readings.

To locate itself on the topological map, the robot must run its 'eye' over the environment and extract a landmark sequence L1′ − Lk′, then search the map for the best matched vertex (scene). Different from traditional probabilistic localization [15], in our system the localization problem can be converted into the evaluation problem of HMM. The vertex with the greatest evaluation value, which must also be greater than a threshold, is taken as the best matched vertex, which indicates the most possible place where the robot is.

4 Match strategy based on fuzzy logic

One of the key issues in the image match problem is to choose the most effective features or descriptors to represent the original image. Due to robot movement, the extracted landmark regions will change at the pixel level. So the descriptors or features chosen should be invariant to some extent to changes of scale, rotation and viewpoint etc. In this paper, we use 4 features commonly adopted in the community, briefly described as follows.

GO: Gradient orientation. It has been proved that illumination and rotation changes are likely to have less influence on it [5].

ASM and ENT: Angular second moment and entropy, which are two texture descriptors.

H: Hue, which is used to describe the fundamental information of the image.

Another key issue in the match problem is to choose a good match strategy or algorithm. Usually the nearest neighbor strategy (NN) is used to measure the similarity between two patterns. But we have found in experiments that NN can't adequately exhibit the individual descriptor's or feature's contribution to similarity measurement. As indicated in Fig.4, the input image Fig.4(a) comes from a different view of Fig.4(b). But the distance between Figs.4(a) and (b) computed by Jeffrey divergence is larger than that to Fig.4(c).

To solve the problem, we design a new match algorithm based on fuzzy logic to exhibit the subtle changes of each feature. The algorithm is described below. The landmark in the database whose fused similarity degree is higher than any other is taken as the best match. The match results of Figs.4(b) and (c) are demonstrated by Fig.3. As indicated, this method can measure the similarity between two patterns effectively.

Fig.3 Similarity computed using fuzzy strategy

5 Experiments and analysis

The localization system has been implemented on a mobile robot built by our laboratory. The vision system is composed of a CCD camera and a frame-grabber IVC-4200. The resolution of the image is set to 400×320 and the sample frequency is set to 10 frames/s. The computer system is composed of a 1 GHz processor and 512 M memory, carried by the robot. Presently the robot works in indoor environments.

Because HMM is adopted to represent and recognize the scene, our system has the ability to capture the discrimination of the distribution of salient local image regions and to distinguish similar scenes effectively. Table 1 shows the recognition results for static environments including 5 laneways and a silo.
10 scenes are selected from each environment and HMMs are created for each scene. Then 20 scenes are collected when the robot subsequently enters each environment, to match against the 60 HMMs above. In the table, "truth" means that the scene to be localized matches the right scene (the evaluation value of the HMM is 30% greater than the second highest evaluation). "Uncertainty" means that the evaluation value of the HMM is greater than the second highest evaluation by under 10%. "Error match" means that the scene to be localized matches a wrong scene. In the table, the ratio of error match is 0. But it is possible that the scene to be localized can't match any scene, and new vertices are created. Furthermore, the "ratio of truth" for the silo is lower because salient cues are fewer in this kind of environment.

In the period of automatic exploring, similar scenes can be combined. The process can be summarized as: when localization succeeds, the current landmark sequence is added, without repetition, to the accompanying observation sequence of the matched vertex according to their orientation (including the angle of the image from which the salient local region comes and the heading of the robot). The parameters of the HMM are then learned again.

Compared with approaches using appearance features of the whole image (Method 2, M2), our system (M1) uses local salient regions to localize and map, which gives it more tolerance of scale and viewpoint changes caused by the robot's movement, a higher ratio of recognition and a smaller number of vertices on the topological map. So our system has better performance in dynamic environments. These can be seen in Table 2. Laneways 1, 2, 4, 5 are in operation, where some miners are working, which puzzles the robot.

6 Conclusions

1) Salient local image features are extracted to replace the whole image in recognition, which improves tolerance of changes in scale, 2D rotation and viewpoint of the environment image.

2) Fuzzy logic is used to recognize the local image and emphasize the individual feature's contribution to recognition, which improves the reliability of landmarks.

3) HMM is used to capture the structure or relationship of those local images, which converts the scene recognition problem into the evaluation problem of HMM.

4) The results from the above experiments demonstrate that the mine rescue robot scene recognition system has a higher ratio of recognition and localization.

Future work will be focused on using HMM to deal with the uncertainty of localization.

附录B 中文翻译

基于视觉的矿井救援机器人场景识别
CUI Yi-an(崔益安), CAI Zi-xing(蔡自兴), WANG Lu(王璐)

摘要:基于模糊逻辑和隐马尔可夫模型(HMM),论文提出了一个新的场景识别系统,可应用于紧急情况下矿山救援机器人的定位。
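The paper above casts localization as the HMM evaluation problem: score each stored scene model against the observed landmark sequence and keep the best vertex above a threshold. A minimal sketch of that idea follows, ours rather than the authors' code, using the standard forward algorithm and assuming the landmark sequence has been quantized to discrete observation symbols:

```python
import numpy as np

def hmm_evaluate(A, B, pi, obs):
    # A[i, j]: transition a_ij; B[j, k]: emission b_jk; pi: initial dist.
    # Returns P(obs | model) via the forward algorithm.
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return float(alpha.sum())

def localize(models, obs, threshold=1e-12):
    # models: dict vertex -> (A, B, pi). Pick the best-scoring vertex,
    # if its evaluation exceeds the acceptance threshold.
    scores = {v: hmm_evaluate(*m, obs) for v, m in models.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > threshold else None
```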

图像处理中英文对照外文翻译文献

中英文对照外文翻译文献(文档含英文原文和中文翻译)

译文:基于局部二值模式的多分辨率灰度和旋转不变性纹理分类

摘要:本文描述了一种理论上非常简单但非常有效的、基于局部二值模式的、采用样图非参数识别和原型分类的、多分辨率灰度和旋转不变性纹理分类方法。

此方法基于所谓的"均匀"局部二值模式;这种模式是局部图像纹理的基本属性,并且已经证明其出现直方图是非常有效的纹理特征。

我们导出了一个推广的灰度和旋转不变性算子,它可以检测角度空间任意量化、任意空间分辨率下的"均匀"模式,并提出了结合多种算子的多分辨率分析方法。

根据定义,该算子在图像灰度发生单调变化时具有不变性,所以所提出的方法在灰度变化时是非常强健的。

另一个优点是计算简单,算子只需很少的操作即可在小邻域内或用查找表实现。

在真实的旋转不变性问题中得到了良好的实验结果:分类器以某一特定旋转角度的样本训练,再用来自其他旋转角度的样本测试并分类,证明了利用简单旋转不变局部二值模式的出现统计即可获得良好的区分能力。

这些算子刻画局部图像纹理的空间结构;将它们与刻画局部图像纹理反差的旋转不变方差测度相结合,其性能可得到进一步的改良。

这些正交测度共同证明了它们是旋转不变性纹理分析的非常有力的工具。

关键词:非参数的,纹理分析,Outex,Brodatz,分类,直方图,对比度

2 灰度和旋转不变性的局部二值模式

我们通过把单色纹理图像某个局部邻域的纹理 T 定义为 P(P>1)个像素点灰度级的联合分布,来描述灰度和旋转不变性算子:

T = t(g_c, g_0, …, g_{P−1})  (1)

其中,g_c 为局部邻域中心像素点的灰度值,g_p(p=0,1,…,P−1)为半径 R(R>0)的圆形对称邻域集上 P 个像素点的灰度值。

如果 g_c 的坐标是(0,0),那么 g_p 的坐标为(−R sin(2πp/P), R cos(2πp/P))。

图1举例说明了圆形对称邻域集内各种不同的(P,R)组合。
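As a rough illustration of the operator defined by Eq.(1), the sketch below computes one P-bit binary pattern at a single pixel. It is our simplification: neighbors are sampled at the nearest grid point, whereas the paper interpolates off-grid coordinates, and the center must lie at least R+1 pixels from the image border.

```python
import numpy as np

def lbp_pattern(img, x, y, P=8, R=1.0):
    # Threshold the P circular neighbors g_p against the center g_c.
    gc = img[y, x]
    code = 0
    for p in range(P):
        # Neighbor coordinates (-R sin(2*pi*p/P), R cos(2*pi*p/P)) per the text.
        dx = -R * np.sin(2 * np.pi * p / P)
        dy = R * np.cos(2 * np.pi * p / P)
        gp = img[int(round(y + dy)), int(round(x + dx))]  # nearest-neighbor sample
        code |= int(gp >= gc) << p
    return code
```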

数字图像处理论文中英文对照资料外文翻译文献

中英文对照资料外文翻译文献

原文:
Research on Image Edge Detection Algorithms

Abstract: Digital image processing, a relatively young discipline, is obtaining increasingly widespread application with the rapid development of computer technology. The edge, as one of the basic features of an image, is widely used in pattern recognition, image segmentation, image enhancement and image compression. Image edge detection methods are many and varied; among them, the brightness-based algorithms have been studied the longest and are theoretically the most mature. They mainly compute the gradient of the change in image brightness through difference operators and thereby detect edges; the main operators include Robert, Laplacian, Sobel, Canny, LOG and so on.
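A minimal sketch of the gradient-based detection the abstract describes, using the Sobel difference operators (our illustration; the threshold value is an arbitrary assumption):

```python
import numpy as np
from scipy.ndimage import convolve

def sobel_edges(img, thresh=100.0):
    # Sobel difference operators for the two gradient components.
    sx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # vertical edges
    sy = sx.T                                                    # horizontal edges
    gx = convolve(img.astype(float), sx)
    gy = convolve(img.astype(float), sy)
    mag = np.hypot(gx, gy)     # gradient magnitude of the brightness change
    return mag > thresh        # binary edge map
```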


计算机图像图形外文翻译外文文献英文文献图像分割

原文出处:Digital Image Processing 2/E

图像分割

前一章的资料使我们所研究的图像处理方法开始发生了转变。

从输入输出均为图像的处理方法转变为输入为图像而输出为从这些图像中提取出来的属性的处理方法(这方面在1.1节中定义过)。

图像分割是这一方向的另一主要步骤。

分割将图像细分为构成它的子区域或对象。

分割的程度取决于要解决的问题。

就是说当感兴趣的对象已经被分离出来时就停止分割。

例如,在电子元件的自动检测方面,我们关注的是分析产品的图像,检测是否存在特定的异常状态,比如,缺失的元件或断裂的连接线路。

超过识别这些元件所需的分割是没有意义的。

非平凡图像的分割是图像处理中最困难的任务之一。

精确的分割决定着计算分析过程的成败。

因此,应该特别的关注分割的稳定性。

在某些情况下,比如工业检测应用,至少有可能对检测环境进行适度的控制。

有经验的图像处理系统设计师总是将相当大的注意力放在这类可能性上。

在其他应用方面,比如自动目标采集,系统设计者无法对环境进行控制。

所以,通常的方法是将注意力集中于传感器类型的选择上,这样可以增强获取所关注对象的能力,从而减少图像无关细节的影响。

一个很好的例子就是,军方利用红外线图像发现有很强热信号的目标,比如移动中的装备和部队。

图像分割算法一般是基于亮度值的不连续性和相似性两个基本特性之一。

第一类性质的应用途径是基于亮度的不连续变化分割图像,比如图像的边缘。

第二类的主要应用途径是依据事先制定的准则将图像分割为相似的区域,门限处理、区域生长、区域分离和聚合都是这类方法的实例。

本章中,我们将对刚刚提到的两类特性各讨论一些方法。

我们先从适合于检测灰度级的不连续性的方法展开,如点、线和边缘。

特别是边缘检测近年来已经成为分割算法的主题。

除了边缘检测本身,我们还会讨论一些连接边缘线段和把边缘“组装”为边界的方法。

关于边缘检测的讨论将在介绍了各种门限处理技术之后进行。

门限处理也是一种人们普遍关注的用于分割处理的基础性方法,特别是在速度因素占重要地位的应用中。

数字图像处理 外文翻译 外文文献 英文文献 数字图像处理

数字图像处理 外文翻译 外文文献 英文文献

Digital Image Processing

1 Introduction

Many operators have been proposed for presenting a connected component in a digital image by a reduced amount of data or simplified shape. In general we have to state that the development, choice and modification of such algorithms in practical applications are domain and task dependent, and there is no "best method". However, it is interesting to note that there are several equivalences between published methods and notions, and characterizing such equivalences or differences should be useful to categorize the broad diversity of published methods for skeletonization. Discussing equivalences is a main intention of this report.

1.1 Categories of Methods

One class of shape reduction operators is based on distance transforms. A distance skeleton is a subset of points of a given component such that every point of this subset represents the center of a maximal disc (labeled with the radius of this disc) contained in the given component. As an example in this first class of operators, this report discusses one method for calculating a distance skeleton using the d4 distance function which is appropriate to digitized pictures. A second class of operators produces median or center lines of the digital object in a non-iterative way. Normally such operators locate critical points first, and calculate a specified path through the object by connecting these points.

The third class of operators is characterized by iterative thinning. Historically, Listing [10] used already in 1862 the term linear skeleton for the result of a continuous deformation of the frontier of a connected subset of a Euclidean space without changing the connectivity of the original set, until only a set of lines and points remains. Many algorithms in image analysis are based on this general concept of thinning. The goal is a calculation of characteristic properties of digital objects which are not related to size or quantity. Methods should be independent from the position of a set in the plane or space, grid resolution (for digitizing this set) or the shape complexity of the given set. In the literature the term "thinning" is not used in a unique interpretation besides that it always denotes a connectivity preserving reduction operation applied to digital images, involving iterations of transformations of specified contour points into background points. A subset Q ⊆ I of object points is reduced by a defined set D in one iteration, and the result Q′ = Q \ D becomes Q for the next iteration. Topology-preserving skeletonization is a special case of thinning resulting in a connected set of digital arcs or curves.

A digital curve is a path p = p0, p1, p2, …, pn = q such that pi is a neighbor of pi−1, 1 ≤ i ≤ n, and p = q. A digital curve is called simple if each point pi has exactly two neighbors in this curve. A digital arc is a subset of a digital curve such that p ≠ q. A point of a digital arc which has exactly one neighbor is called an end point of this arc. Within this third class of operators (thinning algorithms) we may classify with respect to algorithmic strategies: individual pixels are either removed in a sequential order or in parallel. For example, the often cited algorithm by Hilditch [5] is an iterative process of testing and deleting contour pixels sequentially in standard raster scan order. Another sequential algorithm by Pavlidis [12] uses the definition of multiple points and proceeds by contour following.
Examples of parallel algorithms in this third class are reduction operators which transform contour points into background points. Differences between these parallel algorithms are typically defined by tests implemented to ensure connectedness in a local neighborhood. The notion of a simple point is of basic importance for thinning, and it will be shown in this report that different definitions of simple points are actually equivalent. Several publications characterize properties of a set D of points (to be turned from object points to background points) to ensure that connectivity of object and background remain unchanged. The report discusses some of these properties in order to justify parallel thinning algorithms.

1.2 Basics

The used notation follows [17]. A digital image I is a function defined on a discrete set C, which is called the carrier of the image. The elements of C are grid points or grid cells, and the elements (p, I(p)) of an image are pixels (2D case) or voxels (3D case). The range of a (scalar) image is {0, …, Gmax} with Gmax ≥ 1. The range of a binary image is {0, 1}. We only use binary images I in this report. Let ⟨I⟩ be the set of all pixel locations with value 1, i.e. ⟨I⟩ = I^{−1}(1). The image carrier is defined on an orthogonal grid in 2D or 3D space. There are two options: using the grid cell model, a 2D pixel location p is a closed square (2-cell) in the Euclidean plane and a 3D pixel location is a closed cube (3-cell) in the Euclidean space, where edges are of length 1 and parallel to the coordinate axes, and centers have integer coordinates. As a second option, using the grid point model, a 2D or 3D pixel location is a grid point.

Two pixel locations p and q in the grid cell model are called 0-adjacent iff p ≠ q and they share at least one vertex (which is a 0-cell). Note that this specifies 8-adjacency in 2D or 26-adjacency in 3D if the grid point model is used. Two pixel locations p and q in the grid cell model are called 1-adjacent iff p ≠ q and they share at least one edge (which is a 1-cell). Note that this specifies 4-adjacency in 2D or 18-adjacency in 3D if the grid point model is used. Finally, two 3D pixel locations p and q in the grid cell model are called 2-adjacent iff p ≠ q and they share at least one face (which is a 2-cell). Note that this specifies 6-adjacency if the grid point model is used. Any of these adjacency relations Aα, α ∈ {0, 1, 2, 4, 6, 18, 26}, is irreflexive and symmetric on an image carrier C. The α-neighborhood Nα(p) of a pixel location p includes p and its α-adjacent pixel locations. Coordinates of 2D grid points are denoted by (i, j), with 1 ≤ i ≤ n and 1 ≤ j ≤ m; i, j are integers and n, m are the numbers of rows and columns of C. In 3D we use integer coordinates (i, j, k). Based on neighborhood relations we define connectedness as usual: two points p, q ∈ C are α-connected with respect to M ⊆ C and neighborhood relation Nα iff there is a sequence of points p = p0, p1, p2, …, pn = q such that pi is an α-neighbor of pi−1, for 1 ≤ i ≤ n, and all points on this sequence are either in M or all in the complement of M. A subset M ⊆ C of an image carrier is called α-connected iff M is not empty and all points in M are pairwise α-connected with respect to set M. An α-component of a subset S of C is a maximal α-connected subset of S. The study of connectivity in digital images has been introduced in [15]. It follows that any set ⟨I⟩ consists of a number of α-components.
In case of the grid cell model, a component is the union of closed squares (2D case) or closed cubes (3D case). The boundary of a 2-cell is the union of its four edges and the boundary of a 3-cell is the union of its six faces. For practical purposes it is easy to use neighborhood operations (called local operations) on a digital image I which define a value at p ∈ C in the transformed image based on pixel values in I at p ∈ C and its immediate neighbors in Nα(p).

2 Non-iterative Algorithms

Non-iterative algorithms deliver subsets of components in specified scan orders without testing connectivity preservation in a number of iterations. In this section we only use the grid point model.

2.1 "Distance Skeleton" Algorithms

Blum [3] suggested a skeleton representation by a set of symmetric points. In a closed subset of the Euclidean plane a point p is called symmetric iff at least 2 points exist on the boundary with equal distances to p. For every symmetric point, the associated maximal disc is the largest disc in this set. The set of symmetric points, each labeled with the radius of the associated maximal disc, constitutes the skeleton of the set. This idea of presenting a component of a digital image as a "distance skeleton" is based on the calculation of a specified distance from each point in a connected subset M ⊆ C to the complement of the subset. The local maxima of the subset represent a "distance skeleton". In [15] the d4-distance is specified as follows.

Definition 1. The distance d4(p, q) from point p to point q, p ≠ q, is the smallest positive integer n such that there exists a sequence of distinct grid points p = p0, p1, p2, …, pn = q with pi a 4-neighbor of pi−1, 1 ≤ i ≤ n.

If p = q the distance between them is defined to be zero. The distance d4(p, q) has all properties of a metric. Given a binary digital image, we transform this image into a new one which represents at each point p ∈ ⟨I⟩ the d4-distance to pixels having value zero. The transformation includes two steps. We apply function f1 to the image I in standard scan order, producing I*(i, j) = f1(i, j, I(i, j)), and f2 in reverse standard scan order, producing T(i, j) = f2(i, j, I*(i, j)), as follows:

f1(i, j, I(i, j)) =
  0, if I(i, j) = 0;
  min{I*(i − 1, j) + 1, I*(i, j − 1) + 1}, if I(i, j) = 1 and (i ≠ 1 or j ≠ 1);
  m + n, otherwise.

f2(i, j, I*(i, j)) = min{I*(i, j), T(i + 1, j) + 1, T(i, j + 1) + 1}

The resulting image T is the distance transform image of I. Note that T is a set {[(i, j), T(i, j)] : 1 ≤ i ≤ n ∧ 1 ≤ j ≤ m}, and let T* ⊆ T such that [(i, j), T(i, j)] ∈ T* iff none of the four points in A4((i, j)) has a value in T equal to T(i, j) + 1. For all remaining points (i, j) let T*(i, j) = 0. This image T* is called the distance skeleton. Now we apply function g1 to the distance skeleton T* in standard scan order, producing T**(i, j) = g1(i, j, T*(i, j)), and g2 to the result of g1 in reverse standard scan order, producing T***(i, j) = g2(i, j, T**(i, j)), as follows:

g1(i, j, T*(i, j)) = max{T*(i, j), T**(i − 1, j) − 1, T**(i, j − 1) − 1}

g2(i, j, T**(i, j)) = max{T**(i, j), T***(i + 1, j) − 1, T***(i, j + 1) − 1}

The result T*** is equal to the distance transform image T. Both functions g1 and g2 define an operator G, with G(T*) = g2(g1(T*)) = T***, and we have [15]:

Theorem 1. G(T*) = T, and if T0 is any subset of image T (extended to an image by having value 0 in all remaining positions) such that G(T0) = T, then T0(i, j) = T*(i, j) at all positions of T* with non-zero values.
Informally, the theorem says that the distance transform image is reconstructible from the distance skeleton, and it is the smallest data set needed for such a reconstruction. The used distance d4 differs from the Euclidean metric. For instance, this d4-distance skeleton is not invariant under rotation. For an approximation of the Euclidean distance, some authors suggested the use of different weights for grid point neighborhoods [4]. Montanari [11] introduced a quasi-Euclidean distance. In general, the d4-distance skeleton is a subset of pixels (p, T(p)) of the transformed image, and it is not necessarily connected.

2.2 "Critical Points" Algorithms

The simplest category of these algorithms determines the midpoints of subsets of connected components in standard scan order for each row. Let l be an index for the number of connected components in one row of the original image. We define the following functions for 1 ≤ i ≤ n:

e_i(l) = j, if this is the l-th case I(i, j) = 1 ∧ I(i, j − 1) = 0 in row i, counting from the left, with I(i, 0) = 0;

o_i(l) = j, if this is the l-th case I(i, j) = 1 ∧ I(i, j + 1) = 0 in row i, counting from the left, with I(i, m + 1) = 0;

m_i(l) = int((o_i(l) − e_i(l))/2) + e_i(l).

The result of scanning row i is a set of coordinates (i, m_i(l)) of midpoints of the connected components in row i. The set of midpoints of all rows constitutes a critical point skeleton of an image I. This method is computationally efficient. The results are subsets of pixels of the original objects, and these subsets are not necessarily connected. They can form "noisy branches" when object components are nearly parallel to image rows. They may be useful for special applications where the scanning direction is approximately perpendicular to main orientations of object components.

References

[1] C. Arcelli, L. Cordella, S. Levialdi: Parallel thinning of binary pictures. Electron. Lett. 11: 148-149, 1975.
[2] C. Arcelli, G. Sanniti di Baja: Skeletons of planar patterns. In: Topological Algorithms for Digital Image Processing (T. Y. Kong, A. Rosenfeld, eds.), North-Holland, 99-143, 1996.
[3] H. Blum: A transformation for extracting new descriptors of shape. In: Models for the Perception of Speech and Visual Form (W. Wathen-Dunn, ed.), MIT Press, Cambridge, Mass., 362-380, 1967.

数字图像处理

1 引言

许多研究者已经提出了用减少的数据量或简化的形状来表示数字图像中连接组件的算子。
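A compact sketch of the two-pass d4 distance transform defined by f1 and f2 above (ours, written for a 0/1 numpy array; the distance skeleton then consists of the local maxima of T, labeled with their values):

```python
import numpy as np

def d4_distance_transform(I):
    # I: binary image; returns T with the d4 distance to the nearest 0.
    n, m = I.shape
    INF = n + m                      # the "m + n otherwise" case of f1
    T = np.where(I == 0, 0, INF).astype(int)
    for i in range(n):               # forward pass (f1), standard scan order
        for j in range(m):
            if T[i, j]:
                if i > 0: T[i, j] = min(T[i, j], T[i - 1, j] + 1)
                if j > 0: T[i, j] = min(T[i, j], T[i, j - 1] + 1)
    for i in range(n - 1, -1, -1):   # backward pass (f2), reverse scan order
        for j in range(m - 1, -1, -1):
            if i < n - 1: T[i, j] = min(T[i, j], T[i + 1, j] + 1)
            if j < m - 1: T[i, j] = min(T[i, j], T[i, j + 1] + 1)
    return T
```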


(文档含英文原文和中文翻译)

中英文对照外文翻译

一种在线图像编码识别系统的设计

摘要:本文介绍了在线图像编码字符识别系统的设计与实现过程,对其中重点环节进行了分析与研究,给出了主要环节问题的解决方法。在识别算法上,结合模板匹配与特征识别,提出了基于特征加权的模板匹配算法,该算法对提高字符识别率起到了较好的作用。

关键词:图像处理;模式识别;特征加权;软件设计

0 引言

图像编码字符识别的研究目前仍是国内外一个重点研究课题,它具有广泛的应用背景,比如车牌号码自动识别、邮政编码的自动识别、试卷自动阅读、报表自动处理等。由于这种在线图像编码字符的识别都具有一些共性,本文结合在线轮胎编码字符识别系统的设计,对一般图像编码字符识别系统进行了阐述,对关键环节进行了研究与分析,该方法对其它在线图像编码字符系统的开发具有一定指导意义。

1 在线图像编码识别系统流程

在线图像编码字符识别系统主要包括数字图像的采集、存储、图像预处理、编码图像提取、编码特征提取、编码识别和后续处理等一些环节,其流程图如图1所示。

图1 在线图像编码字符识别系统流程图

在线轮胎图像编码字符识别系统要求对通过生产流水线上的每一个轮胎采集含有轮胎编码的图像,然后通过对图像的处理,提取出轮胎编码特征,采用合适的识别算法将每一位编码字符进行识别。

由于轮胎编码字符在轮胎上有一定变形,且摄像角度不同,得到的编码图像差异也很大,规律性差,所以编码图像的预处理和识别算法的选取显得尤为重要。

2 图像采集与存储

在线编码图像通常使用数码摄像机、数码照相机、数码摄像头等设备采集并输入计算机进行处理,本系统采用QuickCamPro4000数码摄像头采集轮胎编码图像,直接按JPG格式存储。

编码图像一般都要先转成BMP图像格式,因为BMP格式已经成为PC领域事实上的标准——几乎所有为Windows操作系统设计的图像处理软件都支持这种格式的图像。

BMP是Windows的原始位图格式,它可以用于保存任意类型的位图数据,可以支持所有的屏幕分辨率和Windows所支持的颜色组合。

一般情况下,为了保证显示的高效率,它对图像数据没有任何的压缩,所以一幅很小的位图就可能占据相当大的空间。

BMP位图文件包括位图文件头、位图信息头、调色板、位图数据区四个部分:位图文件头由14个字节构成,位图信息头由40个字节构成;调色板的大小取决于色彩数,单色图像调色板占8个字节,16色图像调色板占64个字节,256色图像调色板占1024个字节,2^24色(真彩色)图像没有调色板;位图数据区内数据按行顺序自下而上、自左而右排列。

3 图像预处理

图像预处理主要包括:图像灰度化、图像降噪与增强、编码区边缘检测、图像几何校正、编码区图像提取、编码图像二值化、字符分割、字符归一化等。

下面介绍几个关键环节的处理过程。

3.1 图像灰度化处理

编码图像通常是彩色的,实际识别用的图像是灰度图,所以需要先将彩色编码图像转换为灰度图像。

在RGB颜色模型中,如果R=G=B,则颜色(R,G,B)表示一种黑白颜色,其中R=G=B的值叫灰度值,灰度化处理就是使彩色的R、G、B分量值相等的过程。

常用的灰度化处理方法是加权平均值法,即 R=G=B=W_R·R+W_G·G+W_B·B,其中,W_R、W_G、W_B分别是R、G、B的权值。实验和理论证明,当W_R=0.3,W_G=0.59,W_B=0.11,即R=G=B=0.30R+0.59G+0.11B时,能得到最合理的灰度图像。
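A one-function sketch of this weighted-average graying (ours; assumes an (H, W, 3) RGB array):

```python
import numpy as np

def to_gray(rgb):
    # Weighted average: gray = 0.30 R + 0.59 G + 0.11 B.
    w = np.array([0.30, 0.59, 0.11])
    return (rgb.astype(float) @ w).astype(np.uint8)
```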

3.2 图像增强处理

3.2.1 直接灰度变换

①线性灰度变换:假设图像灰度是线性变化的,如原图像f(x,y)的灰度范围为[a,b],要求变换后图像的灰度范围达到[c,d],根据线性规律,则变换后图像g(x,y)为:

g(x,y) = (d−c)/(b−a)·[f(x,y)−a] + c  (1)

A logarithmic transformation is used when the low-gray region must be expanded and the high-gray region compressed; an exponential transformation is used when the high-gray region must be expanded.
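A sketch of the linear transformation of equation (1); the names are illustrative, and the log variant is noted in a comment.

import numpy as np

def linear_stretch(f, a, b, c, d):
    # Map gray range [a, b] of image f linearly onto [c, d], per equation (1).
    g = (d - c) / (b - a) * (f.astype(np.float64) - a) + c
    return np.clip(g, c, d).astype(np.uint8)
    # Log variant (expands dark regions): g = c * np.log1p(f) for a scale c.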

3.2.2 Smoothing Filtering for Noise Reduction

Noise, like region edges, corresponds to the parts of an image where the gray value changes sharply and rapidly, i.e. to the high-frequency components, so a low-pass filter (a smoothing filter) is used for noise reduction.

Smoothing also blurs the image, which helps to remove small details before extracting larger targets or to bridge small gaps inside a target.

Smoothing is performed by convolving the image with a template. The most common template for a linear smoothing filter is the 3x3 template shown in Figure 2; convolving it with the image pixels as follows yields the smoothed, noise-reduced image.

(1) Slide the template over the image so that its center coincides with each pixel position in turn; (2) multiply each template coefficient by the pixel beneath it; (3) sum all the products; (4) assign the sum to the pixel under the template center. A sketch of this procedure follows.
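A direct rendering of these four steps in Python. The equal-weight 1/9 template is an assumption, since Figure 2 is not reproduced here.

import numpy as np

def smooth3x3(img):
    # Steps 1-4: slide the template, multiply coefficients with the pixels
    # underneath, sum the products, write the sum at the template centre.
    kernel = np.full((3, 3), 1.0 / 9.0)        # assumed averaging template
    h, w = img.shape
    padded = np.pad(img.astype(np.float64), 1, mode="edge")
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out.astype(np.uint8)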

The most common nonlinear smoothing filter is the median filter, which sorts all pixel values in the neighborhood by magnitude and assigns the middle value of the sorted sequence to the central pixel.

Median filtering removes random noise effectively and gives good visual results.
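A corresponding median-filter sketch under the same assumptions (3x3 neighborhood, edge padding):

import numpy as np

def median3x3(img):
    # Replace each pixel by the median of its 3x3 neighbourhood.
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + 3, j:j + 3])
    return out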

3.3 Edge Detection in the Code Region

Edges result from discontinuities in the gray value and can be detected with first- and second-order derivatives,

because the derivative is large in edge regions and small elsewhere.

Since a digital image is discrete, derivatives cannot be computed directly; differences, evaluated by convolution, are used to approximate them.

An edge detection algorithm that works well is the Sobel operator.

The Sobel operator computes the gradient magnitude M = sqrt(s_x^2 + s_y^2), using the vertical operator S_x and the horizontal operator S_y to obtain the vertical and horizontal edges of the code region, i.e. convolving the image with the two templates shown in Figure 3; the detected edges are shown in Figure 4.

(Figure 2: smoothing filter template. Figure 3: Sobel edge detection templates. Figure 4: Sobel edge detection result.)
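A sketch of the Sobel computation. The 3x3 kernels are the standard Sobel templates, assumed here because Figure 3 is not reproduced.

import numpy as np

SX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=np.float64)   # responds to vertical edges
SY = SX.T                                       # responds to horizontal edges

def sobel_magnitude(img):
    padded = np.pad(img.astype(np.float64), 1, mode="edge")
    h, w = img.shape
    gx = np.empty((h, w))
    gy = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(win * SX)
            gy[i, j] = np.sum(win * SY)
    return np.sqrt(gx ** 2 + gy ** 2)           # M = sqrt(s_x^2 + s_y^2)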

3.4 Geometric Correction of the Image

The Hough transform can detect the skew angle of the code region image, and a rotation by this angle corrects the image. The Hough transform converts the problem of detecting a straight line y = px + q in the image space XY into detecting a point in the parameter space PQ. In the parameter space an accumulator array Sum(p, q) is built; for every given edge point in image space, p runs over all its possible values, the corresponding q is computed from the line equation q = -xp + y, and Sum(p, q) is incremented. The final value of Sum(p, q) is the number of collinear points at (p, q), and (p, q) gives the slope and intercept of the line in image space; the slope yields the angle of the horizontal edge of the code region.
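A minimal sketch of the accumulator just described, using the slope-intercept form y = px + q exactly as in the text; the slope grid p_values is an assumption, and a production system would normally use the rho-theta parameterization to avoid unbounded slopes.

import numpy as np

def hough_skew_angle(edge_points, p_values):
    # edge_points: iterable of (x, y) pixel coordinates of edge points.
    # Accumulate Sum(p, q) over q = -x*p + y; the fullest cell gives the
    # dominant line, whose slope p yields the skew angle of the code region.
    votes = {}
    for (x, y) in edge_points:
        for p in p_values:
            q = round(-x * p + y)               # intercept, quantised to a cell
            key = (round(p, 3), q)
            votes[key] = votes.get(key, 0) + 1
    (p_best, q_best), _ = max(votes.items(), key=lambda kv: kv[1])
    return np.degrees(np.arctan(p_best))        # skew angle from the slope

For example, p_values = np.linspace(-1.0, 1.0, 201) covers skews of up to about 45 degrees either way.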

3.5 Character Segmentation

By scanning the code character region horizontally, the character regions can usually be separated using the spacing between characters.

Alternatively, a vertical projection of the code character region can be computed, and the characters cut according to the approximate character width and the total number of characters; a sketch of this approach follows.
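An illustrative segmentation by vertical projection (a zero threshold on column sums is assumed; real images may need a small positive threshold):

import numpy as np

def split_characters(binary):
    # binary: 0/1 image of the code region, character pixels equal to 1.
    # Column sums give the vertical projection; runs of non-empty columns
    # are taken as individual characters.
    proj = binary.sum(axis=0)
    chars, start = [], None
    for j, on in enumerate(proj > 0):
        if on and start is None:
            start = j
        elif not on and start is not None:
            chars.append(binary[:, start:j])
            start = None
    if start is not None:
        chars.append(binary[:, start:])
    return chars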

Figure 5 shows a code character region and the corresponding vertical projection.

(Figure 5: code characters and the corresponding vertical projection. Figure 6: linear interpolation.)

3.6 Character Normalization

Each segmented character is scanned from all four directions to determine its boundary, and then normalized to a 32x16 dot matrix by linear interpolation.
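A sketch of this normalization. Since Figure 6 is not reproduced, bilinear resampling and the 0.5 re-binarization threshold are assumptions.

import numpy as np

def normalize_char(char_img, out_h=32, out_w=16):
    # Resample a segmented character onto a 32x16 grid by linear interpolation.
    h, w = char_img.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    out = np.empty((out_h, out_w))
    for a, y in enumerate(ys):
        y0 = int(y); y1 = min(y0 + 1, h - 1); ty = y - y0
        row = (1 - ty) * char_img[y0].astype(np.float64) \
            + ty * char_img[y1].astype(np.float64)
        for b, x in enumerate(xs):
            x0 = int(x); x1 = min(x0 + 1, w - 1); tx = x - x0
            out[a, b] = (1 - tx) * row[x0] + tx * row[x1]
    return (out > 0.5).astype(np.uint8)          # re-binarise after interpolation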

Figure 6 illustrates the linear interpolation; by linearity, f(x1) can be computed from equation (2):

f(x1) = f(x0) + ((x1 - x0)/(x2 - x0)) * (f(x2) - f(x0)),    (2)

taking x0 and x2 as the neighboring known sample points in Figure 6.

4 Recognition Algorithm Design

Character recognition generally uses feature discrimination or template matching. Feature discrimination recognizes characters in stages, according to the degree of feature extraction, by structural analysis.

Template matching recognizes a character by matching shapes based on knowledge of the character. It generally falls into two classes: one matches the input two-dimensional image directly against the patterns stored in a dictionary; the other extracts some features and matches them against the dictionary.

The characters in tire code images involve only some English letters and the ten Arabic numerals; the characters are few and their structure relatively simple, so either shape matching or structural analysis can be used for recognition.

However, the code characters on a tire are somewhat deformed and sometimes broken, so neither direct template matching nor direct feature extraction achieves a satisfactory recognition rate. This system therefore uses a feature-weighted template matching algorithm that combines template matching with feature recognition; its character recognition rate improves, to varying degrees, on both the simple template matching algorithm and the feature recognition algorithm.

The basic idea of the feature-weighted template matching algorithm is as follows: the points of the template that belong to character strokes are assigned different weights, highest at the stroke center and lowest at the stroke edge; the sample template is then fuzzily matched point by point against each standard template and recognized according to fuzzy recognition rules. One possible realization is sketched below.
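This sketch is one plausible realization, not the paper's exact scheme: a Euclidean distance transform (SciPy) gives stroke-center pixels the largest weights and stroke-edge pixels the smallest, and matching pixels contribute their weight to the score. All names are illustrative.

import numpy as np
from scipy.ndimage import distance_transform_edt

def weight_map(template):
    # Stroke pixels get weights that grow with distance from the stroke edge:
    # centre pixels weigh most, edge pixels least, background weighs zero.
    dist = distance_transform_edt(template)
    m = dist.max()
    return dist / m if m > 0 else dist

def weighted_match_score(sample, template):
    # Fuzzy point-by-point match of a 32x16 sample against one template:
    # agreement at stroke centres counts more than agreement at stroke edges.
    w = weight_map(template)
    return float((w * (sample == template)).sum() / (w.sum() + 1e-9))

def recognise(sample, templates):
    # templates: dict mapping a character label to its 32x16 binary template.
    return max(templates, key=lambda c: weighted_match_score(sample, templates[c]))

Because stroke-edge disagreements are cheap under this weighting, moderate deformation or breakage at character outlines degrades the score far less than it would under plain template matching.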

5 Conclusion

Based on the implementation of a tire code recognition system, this paper has described the design of an online image code character recognition system and proposed a recognition algorithm that combines template matching with feature matching. The method improves the traditional template matching algorithm and raises the recognition rate for deformed and broken characters.

The method has been verified in experiments and achieved satisfactory results.

The Development of an Online Image Code Recognition System

Abstract: This paper describes the design and implementation of an online image code character recognition system. It analyzes and studies the important parts of the system and provides solutions to the main problems. For the recognition algorithm, combining template matching with feature recognition, it puts forward an improved template matching algorithm based on feature weights, which clearly improves the character recognition rate.

Keywords: image processing; pattern recognition; feature weights; software design

0 Introduction

Character recognition of image codes is still the subject of intense study at home and abroad. It has broad applications, such as automatic number plate recognition, automatic postal code recognition, automatic paper reading, and automatic report processing. Since online image code character recognition systems share common features, this paper describes a general image code character recognition system through the design of an online tire code character recognition system, researching and analyzing the key stages; the method has guiding significance for the development of other online image code character systems.

1 The Workflow of an Online Image Code Recognition System

An online image code character recognition system includes digital image capture, storage, image preprocessing, code image extraction, code feature extraction, code recognition, and follow-up treatment; its flow chart is shown in Figure 1. (Figure 1: flow chart of the online image code character recognition system.)

The online tire image code recognition system must capture an image containing the tire code for each tire passing through the production pipeline and then, through image processing, extract the tire code features and use an appropriate recognition algorithm to identify each coded character. The tire code characters are somewhat deformed on the tires, and with different camera angles the code images also differ greatly and show poor regularity, so the preprocessing of the coded image and the selection of the recognition algorithm are very important.

2 Image Acquisition and Storage

Online code images are commonly captured by digital video cameras, digital cameras, or webcams and processed in a computer; this system uses a QuickCam Pro 4000 webcam to capture tire code images, stored directly in JPG format.

Coded images are generally first converted to the BMP image format, because BMP has become the de facto standard in the PC field: almost all image processing software designed for the Windows operating system supports this image format. BMP is the native Windows bitmap format, which can store bitmap data of any type and supports all screen resolutions and color combinations supported by Windows.
Under normal circumstances, to keep display efficient, BMP applies no compression to the image data, so even a small bitmap may occupy considerable space. A BMP bitmap file includes four parts: the bitmap file header, the bitmap information header, the palette, and the bitmap data area. The bitmap file header consists of 14 bytes and the bitmap information header of 40 bytes; the size of the palette depends on the number of colors: a monochrome image palette occupies 8 bytes, a 16-color image palette 64 bytes, and a 256-color image palette 1024 bytes, while 2^24-color (24-bit true-color) images have no palette. In the bitmap data area the rows are stored bottom-up, and within each row the pixels are arranged from left to right.

3 Preprocessing

Image preprocessing includes gray-scale conversion, image noise reduction and enhancement, edge detection of the code region, image geometric correction, extraction of the code region image, binarization of the code image, character segmentation, character normalization, and so on. Here are some key aspects of the process.

3.1 Gray-Scale Processing

Code images are usually in color, while the image actually used for recognition is gray-scale, so the color code image must first be converted to gray-scale. In the RGB color model, if R = G = B, the color (R, G, B) indicates a shade of gray, in which the common value of R = G = B is called the gray value; gray-scale processing is the process of making the R, G, B component values of each color equal. The commonly used method is the weighted average, that is,

R = G = B = W_R * R + W_G * G + W_B * B

where W_R, W_G, W_B are the weights of R, G, B. Experiment and theory prove that when W_R = 0.30, W_G = 0.59, W_B = 0.11, that is, when R = G = B = 0.30R + 0.59G + 0.11B, the most reasonable gray-scale image is obtained.

3.2 Image Enhancement

3.2.1 Direct gray-scale transformation

(1) Linear gray-level transformation: if the image gray scale varies linearly, the original image f(x, y) has gray range [a, b], and the transformed image is required to cover the range [c, d], then according to the linear law the transformed image g(x, y) is

g(x, y) = (d - c)/(b - a) * (f(x, y) - a) + c.    (1)

(2) Nonlinear transformation, logarithmic and exponential: a logarithmic transformation is used when the low-gray region must be expanded and the high-gray region compressed, and an exponential transformation when the high-gray region must be expanded.

3.2.2 Smoothing filter for noise reduction

Noise, like region edges, corresponds to the parts of an image whose gray value changes rapidly and by large amounts; it belongs to the high-frequency components, so a low-pass filter (i.e. a smoothing filter) is used to reduce it.
At the same time, smoothing blurs the image, which is beneficial for removing smaller details before extracting a larger target or for bridging small interruptions inside the target. The smoothing noise-reduction method convolves the image with a template; the most commonly used template for a linear smoothing filter is the 3x3 template shown in Figure 2. Convolving this template with the image pixels by the following method gives the smoothed, noise-reduced image:

(1) roam the template over the image, making the template center coincide with each pixel location in turn;
(2) multiply the coefficients on the template with the corresponding pixels under the template;
(3) add all the products;
(4) assign the sum to the pixel under the template center.

The most commonly used nonlinear smoothing filter is the median filter: all the values in the region are sorted according to size, and the value in the middle of the sorted sequence is given to the center pixel. The median filter can effectively remove random noise and gives a better visual effect.

3.3 Edge Detection of the Code Region

Edges are the result of discontinuities in the gray value and can be detected with first- and second-derivative methods, because the derivative is large in edge areas and small away from edges. Since a digital image is discrete, derivatives cannot be taken directly; the convolution method replaces the differential with a difference approximation. An edge detection algorithm with good results is the Sobel operator. The Sobel operator computes a gradient magnitude M = sqrt(s_x^2 + s_y^2), using the vertical operator S_x and the horizontal operator S_y to obtain the vertical and horizontal edges of the code region, i.e. convolving in the horizontal and vertical directions with the two different templates shown in Figure 3; the resulting edges are shown in Figure 4. (Figure 2: smoothing filter template. Figure 3: Sobel edge detection templates.)

3.4 Image Rectification

The Hough transform can detect the skew angle of the code region image, and a rotation transformation by this angle corrects the code region image. The Hough transform turns the detection of a line y = px + q in the image space XY into the detection of a point in the parameter space PQ. In the parameter space PQ a cumulative array Sum(p, q) is established; for each given edge point in image space, p is taken over all possible values, the corresponding q is calculated from the line equation q = -xp + y, and Sum(p, q) is accumulated. The value of Sum(p, q) is then the number of points collinear at (p, q), and the value of (p, q) gives the slope and intercept of the line in image space; the slope yields the angle of the horizontal edge of the image code region.

3.5 Character Cutting

By scanning the coded character area horizontally, the characters can generally be segmented using the character spacing. Segmentation can also be done by a vertical projection of the coded character area, cutting the characters according to the approximate character width and the total number of characters. Figure 5 shows a coded character area and the corresponding vertical projection.

3.6 Character Normalization

Each segmented character is scanned from four directions to determine the character boundary, and then linear interpolation is used to normalize each character to a 32x16 dot matrix.
Figure 6 is a schematic diagram of the linear interpolation; by linearity, f(x1) is computed by formula (2):

f(x1) = f(x0) + ((x1 - x0)/(x2 - x0)) * (f(x2) - f(x0)),    (2)

taking x0 and x2 as the neighboring known sample points in Figure 6.

4 Identification Algorithm

Character recognition generally takes the approach of feature discrimination or template matching. Feature discrimination completes character recognition in stages, based on the degree of feature extraction, with a structural analysis approach. Template matching takes a shape matching method based on knowledge of the characters; it is generally divided into two categories: one directly matches the input two-dimensional plane image with the patterns stored in a dictionary, the other extracts some features and matches them with the dictionary.

The tire code image involves only some English characters and the 10 Arabic numerals; the characters are few and the structure relatively simple, so for recognition either the shape matching method or structural analysis can be used. However, the tire code characters have a certain deformation, and there is breakage, so the recognition rates of direct template matching and direct feature extraction are both unsatisfactory. The system uses a feature-weighted template matching recognition algorithm combining template matching and feature recognition; its character recognition rate improves, to varying degrees, on the simple template matching algorithm and the feature recognition algorithm.

The basic idea of the feature-weighted template matching recognition algorithm is: the points in the template belonging to character strokes are assigned different weights, with the highest weight at the stroke center and the lowest weight at the stroke edge; then the sample template is fuzzily matched point by point against the standard templates and recognized by fuzzy recognition rules.

5 Conclusion

Based on the implementation of a tire code identification system, this paper has described the design of an online image code character recognition system and proposed a recognition algorithm that combines template matching with feature matching; the method improves the traditional template matching algorithm and raises the recognition rate for deformed and broken characters. The method was validated in tests and achieved satisfactory results.
