Foreign Literature Translation: Research on Digital Image Processing Methods


Face recognition / facial digital image processing: Chinese-English foreign literature translation for a graduation thesis, high-quality human translation, with original source


Face recognition literature translation, fully human-translated, with the original source (original and translation). The original text is from: Thomas David Heseltine BSc. Hons., The University of York, Department of Computer Science, for the qualification of PhD, September 2005, "Face Recognition: Two-Dimensional and Three-Dimensional Techniques".

4 Two-dimensional Face Recognition

4.1 Feature Localisation

Before discussing the methods of comparing two facial images we now take a brief look at some of the preliminary processes of facial feature alignment. This process typically consists of two stages: face detection and eye localisation. Depending on the application, if the position of the face within the image is known beforehand (for a cooperative subject in a door access system, for example) then the face detection stage can often be skipped, as the region of interest is already known. Therefore, we discuss eye localisation here, with a brief discussion of face detection in the literature review (section 3.1.1).

The eye localisation method is used to align the 2D face images of the various test sets used throughout this section. However, to ensure that all results presented are representative of the face recognition accuracy and not a product of the performance of the eye localisation routine, all image alignments are manually checked and any errors corrected, prior to testing and evaluation.

We detect the position of the eyes within an image using a simple template-based method. A training set of manually pre-aligned images of faces is taken, and each image cropped to an area around both eyes. The average image is calculated and used as a template.

Figure 4-1 - The average eyes. Used as a template for eye detection.

Both eyes are included in a single template, rather than individually searching for each eye in turn, as the characteristic symmetry of the eyes either side of the nose provides a useful feature that helps distinguish between the eyes and other false positives that may be picked up in the background, although this method is highly susceptible to scale (i.e. subject distance from the camera) and introduces the assumption that eyes in the image appear near horizontal. Some preliminary experimentation also reveals that it is advantageous to include the area of skin just beneath the eyes. The reason is that in some cases the eyebrows can closely match the template, particularly if there are shadows in the eye-sockets, but the area of skin below the eyes helps to distinguish the eyes from eyebrows (the area just below the eyebrows contains eyes, whereas the area below the eyes contains only plain skin).

A window is passed over the test images and the absolute difference taken to that of the average eye image shown above. The area of the image with the lowest difference is taken as the region of interest containing the eyes. Applying the same procedure using a smaller template of the individual left and right eyes then refines each eye position.

This basic template-based method of eye localisation, although providing fairly precise localisations, often fails to locate the eyes completely. However, we are able to improve performance by including a weighting scheme.

Eye localisation is performed on the set of training images, which is then separated into two sets: those in which eye detection was successful, and those in which eye detection failed. Taking the set of successful localisations we compute the average distance from the eye template (Figure 4-2, top). Note that the image is quite dark, indicating that the detected eyes correlate closely to the eye template, as we would expect. However, bright points do occur near the whites of the eye, suggesting that this area is often inconsistent, varying greatly from the average eye template.

Figure 4-2 - Distance to the eye template for successful detections (top), indicating variance due to noise, and failed detections (bottom), showing credible variance due to mis-detected features.

In the lower image (Figure 4-2, bottom), we have taken the set of failed localisations (images of the forehead, nose, cheeks, background etc. falsely detected by the localisation routine) and once again computed the average distance from the eye template. The bright pupils surrounded by darker areas indicate that a failed match is often due to the high correlation of the nose and cheekbone regions overwhelming the poorly correlated pupils. Wanting to emphasise the difference of the pupil regions for these failed matches and minimise the variance of the whites of the eyes for successful matches, we divide the lower image values by the upper image to produce a weights vector as shown in Figure 4-3. When applied to the difference image before summing a total error, this weighting scheme provides a much improved detection rate.

Figure 4-3 - Eye template weights used to give higher priority to those pixels that best represent the eyes.
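The thesis gives no code for this search; the following is a minimal Python sketch of the weighted template matching described above, assuming the average-eyes template of Figure 4-1 and the weights of Figure 4-3 are available as arrays (all names are illustrative):

    import numpy as np

    def locate_eyes(image, template, weights):
        """Slide the eye template over a greyscale image and return the
        top-left corner of the window with the lowest weighted difference."""
        th, tw = template.shape
        best_pos, best_err = None, np.inf
        for y in range(image.shape[0] - th + 1):
            for x in range(image.shape[1] - tw + 1):
                window = image[y:y + th, x:x + tw].astype(float)
                # Absolute difference to the average-eyes template, weighted
                # to emphasise the pixels that best represent the eyes.
                err = np.sum(weights * np.abs(window - template))
                if err < best_err:
                    best_err, best_pos = err, (x, y)
        return best_pos

The unweighted variant of the method is recovered by passing an all-ones weights array; the same routine with a smaller single-eye template refines each eye position.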
4.2 The Direct Correlation Approach

We begin our investigation into face recognition with perhaps the simplest approach, known as the direct correlation method (also referred to as template matching by Brunelli and Poggio [29]), involving the direct comparison of pixel intensity values taken from facial images. We use the term "direct correlation" to encompass all techniques in which face images are compared directly, without any form of image space analysis, weighting schemes or feature extraction, regardless of the distance metric used. Therefore, we do not infer that Pearson's correlation is applied as the similarity function (although such an approach would obviously come under our definition of direct correlation). We typically use the Euclidean distance as our metric in these investigations (inversely related to Pearson's correlation, it can be considered a scale- and translation-sensitive form of image correlation), as this persists with the contrast made between image space and subspace approaches in later sections.

Firstly, all facial images must be aligned such that the eye centres are located at two specified pixel coordinates and the image cropped to remove any background information. These images are stored as greyscale bitmaps of 65 by 82 pixels and, prior to recognition, converted into a vector of 5330 elements (each element containing the corresponding pixel intensity value). Each such vector can be thought of as describing a point within a 5330-dimensional image space. This simple principle can easily be extended to much larger images: a 256 by 256 pixel image occupies a single point in 65,536-dimensional image space and again, similar images occupy close points within that space. Likewise, similar faces are located close together within the image space, while dissimilar faces are spaced far apart. Calculating the Euclidean distance d between two facial image vectors (often referred to as the query image q and the gallery image g), we get an indication of similarity. A threshold is then applied to make the final verification decision:

    d = ||q - g||;  d <= threshold => accept;  d > threshold => reject.    (Equ. 4-1)
4.2.1 Verification Tests

The primary concern in any face recognition system is its ability to correctly verify a claimed identity or determine a person's most likely identity from a set of potential matches in a database. In order to assess a given system's ability to perform these tasks, a variety of evaluation methodologies have arisen. Some of these analysis methods simulate a specific mode of operation (i.e. secure site access or surveillance), while others provide a more mathematical description of data distribution in some classification space. In addition, the results generated from each analysis method may be presented in a variety of formats. Throughout the experimentation in this thesis, we primarily use the verification test as our method of analysis and comparison, although we also use Fisher's Linear Discriminant to analyse individual subspace components in section 7 and the identification test for the final evaluations described in section 8.

The verification test measures a system's ability to correctly accept or reject the proposed identity of an individual. At a functional level, this reduces to two images being presented for comparison, for which the system must return either an acceptance (the two images are of the same person) or a rejection (the two images are of different people). The test is designed to simulate the application area of secure site access. In this scenario, a subject will present some form of identification at a point of entry, perhaps a swipe card, proximity chip or PIN number. This number is then used to retrieve a stored image from a database of known subjects (often referred to as the target or gallery image) and compared with a live image captured at the point of entry (the query image). Access is then granted depending on the acceptance/rejection decision.

The results of the test are calculated according to how many times the accept/reject decision is made correctly. In order to execute this test we must first define our test set of face images. Although the number of images in the test set does not affect the results produced (as the error rates are specified as percentages of image comparisons), it is important to ensure that the test set is sufficiently large such that statistical anomalies become insignificant (for example, a couple of badly aligned images matching well). Also, the type of images (high variation in lighting, partial occlusions etc.) will significantly alter the results of the test. Therefore, in order to compare multiple face recognition systems, they must be applied to the same test set.

However, it should also be noted that if the results are to be representative of system performance in a real-world situation, then the test data should be captured under precisely the same circumstances as in the application environment. On the other hand, if the purpose of the experimentation is to evaluate and improve a method of face recognition, which may be applied to a range of application environments, then the test data should present the range of difficulties that are to be overcome. This may mean including a greater percentage of 'difficult' images than would be expected in the perceived operating conditions, and hence higher error rates in the results produced.

Below we provide the algorithm for executing the verification test. The algorithm is applied to a single test set of face images, using a single function call to the face recognition algorithm, CompareFaces(FaceA, FaceB). This call is used to compare two facial images, returning a distance score indicating how dissimilar the two face images are: the lower the score, the more similar the two face images. Ideally, images of the same face should produce low scores, while images of different faces should produce high scores.
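The thesis does not list an implementation of CompareFaces; under the direct correlation approach it reduces to Equ. 4-1. A minimal Python sketch, assuming pre-aligned 65 by 82 greyscale arrays (function names are illustrative):

    import numpy as np

    def compare_faces(face_a, face_b):
        # Flatten each aligned 65x82 greyscale image into a 5330-element
        # vector and return the Euclidean distance between the vectors:
        # low scores mean similar faces, high scores dissimilar ones.
        q = face_a.astype(float).ravel()
        g = face_b.astype(float).ravel()
        return np.linalg.norm(q - g)

    def verify(query, gallery, threshold):
        # Equ. 4-1: accept the claimed identity if the distance is within
        # the threshold, otherwise reject it.
        return compare_faces(query, gallery) <= threshold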
Every image is compared with every other image, no image is compared with itself, and no pair is compared more than once (we assume that the relationship is symmetrical). Once two images have been compared, producing a similarity score, the ground truth is used to determine if the images are of the same person or of different people. In practical tests this information is often encapsulated as part of the image filename (by means of a unique person identifier). Scores are then stored in one of two lists: a list containing scores produced by comparing images of different people, and a list containing scores produced by comparing images of the same person. The final acceptance/rejection decision is made by application of a threshold. Any incorrect decision is recorded as either a false acceptance or a false rejection. The false rejection rate (FRR) is calculated as the percentage of scores from the same people that were classified as rejections. The false acceptance rate (FAR) is calculated as the percentage of scores from different people that were classified as acceptances.

    For IndexA = 0 to length(TestSet)
        For IndexB = IndexA + 1 to length(TestSet)
            Score = CompareFaces(TestSet[IndexA], TestSet[IndexB])
            If IndexA and IndexB are the same person
                Append Score to AcceptScoresList
            Else
                Append Score to RejectScoresList

    For Threshold = Minimum Score to Maximum Score
        FalseAcceptCount, FalseRejectCount = 0
        For each Score in RejectScoresList
            If Score <= Threshold
                Increase FalseAcceptCount
        For each Score in AcceptScoresList
            If Score > Threshold
                Increase FalseRejectCount
        FalseAcceptRate = FalseAcceptCount / length(RejectScoresList)
        FalseRejectRate = FalseRejectCount / length(AcceptScoresList)
        Add plot to error curve at (FalseRejectRate, FalseAcceptRate)

These two error rates express the inadequacies of the system when operating at a specific threshold value. Ideally, both these figures should be zero, but in reality reducing either the FAR or the FRR (by altering the threshold value) will inevitably result in increasing the other. Therefore, in order to describe the full operating range of a particular system, we vary the threshold value through the entire range of scores produced. The application of each threshold value produces an additional (FAR, FRR) pair, which when plotted on a graph produces the error rate curve shown below.

Figure 4-5 - Example error rate curve produced by the verification test (false rejection rate against false acceptance rate, in %).
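A runnable Python version of the verification-test pseudocode above, producing the (FRR, FAR) pairs plotted in the error rate curve; note that the FAR is normalised by the number of different-person comparisons and the FRR by the number of same-person comparisons (names are illustrative):

    def verification_test(test_set, labels, compare_faces):
        accept_scores, reject_scores = [], []  # same-person / different-person
        for a in range(len(test_set)):
            for b in range(a + 1, len(test_set)):  # each pair compared once
                score = compare_faces(test_set[a], test_set[b])
                if labels[a] == labels[b]:
                    accept_scores.append(score)
                else:
                    reject_scores.append(score)

        curve = []
        for t in sorted(set(accept_scores + reject_scores)):
            far = sum(s <= t for s in reject_scores) / len(reject_scores)
            frr = sum(s > t for s in accept_scores) / len(accept_scores)
            curve.append((frr, far))  # one point per threshold value
        return curve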
The equal error rate (EER) can be seen as the point at which the FAR is equal to the FRR. This EER value is often used as a single figure representing the general recognition performance of a biometric system and allows for easy visual comparison of multiple methods. However, it is important to note that the EER does not indicate the level of error that would be expected in a real-world application: it is unlikely that any real system would use a threshold value such that the percentage of false acceptances was equal to the percentage of false rejections. Secure site access systems would typically set the threshold such that false acceptances were significantly lower than false rejections, unwilling to tolerate intruders at the cost of inconvenient access denials. Surveillance systems, on the other hand, would require low false rejection rates to successfully identify people in a less controlled environment. Therefore we should bear in mind that a system with a lower EER might not necessarily be the better performer towards the extremes of its operating capability.

There is a strong connection between the above graph and the receiver operating characteristic (ROC) curves also used in such experiments. Both graphs are simply two visualisations of the same results: the ROC format uses the true acceptance rate (TAR), where TAR = 1.0 - FRR, in place of the FRR, effectively flipping the graph vertically. Another visualisation of the verification test results is to display both the FRR and the FAR as functions of the threshold value. This presentation format provides a reference to determine the threshold value necessary to achieve a specific FRR and FAR. The EER can be seen as the point where the two curves intersect.

Figure 4-6 - Example error rate curves as a function of the score threshold.

The fluctuation of these error curves due to noise and other errors is dependent on the number of face image comparisons made to generate the data. A small dataset that only allows for a small number of comparisons will result in a jagged curve, in which large steps correspond to the influence of a single image on a high proportion of the comparisons made. A typical dataset of 720 images (as used in section 4.2.2) provides 258,840 verification operations, hence a drop of 1% EER represents an additional 2,588 correct decisions, whereas the quality of a single image could cause the EER to fluctuate by up to 0.28%.

4.2.2 Results

As a simple experiment to test the direct correlation method, we apply the technique described above to a test set of 720 images of 60 different people, taken from the AR Face Database [39]. Every image is compared with every other image in the test set to produce a likeness score, providing 258,840 verification operations from which to calculate false acceptance rates and false rejection rates. The error curve produced is shown in Figure 4-7.

Figure 4-7 - Error rate curve produced by the direct correlation method using no image preprocessing.

We see that an EER of 25.1% is produced, meaning that at the EER threshold approximately one quarter of all verification operations carried out resulted in an incorrect classification. There are a number of well-known reasons for this poor level of accuracy. Tiny changes in lighting, expression or head orientation cause the location in image space to change dramatically: images are moved far apart in image space by these capture conditions, despite being of the same person's face. The distance between images of different people becomes smaller than the area of face space covered by images of the same person, and hence false acceptances and false rejections occur frequently. Other disadvantages include the large amount of storage necessary for holding many face images and the intensive processing required for each comparison, making this method unsuitable for applications with a large database. In section 4.3 we explore the eigenface method, which attempts to address some of these issues.

Digital image processing: foreign literature translation and references


Digital image processing: foreign literature translation and references (the document contains the English original and its Chinese translation). Original:

Application of Digital Image Processing in the Measurement of Casting Surface Roughness

Abstract - This paper presents a surface image acquisition system based on digital image processing technology. The image acquired by CCD is pre-processed through the procedure of image editing, image equalization, image binary conversion and feature parameter extraction to achieve casting surface roughness measurement. A three-dimensional evaluation method is taken to obtain the evaluation parameters and the casting surface roughness based on the extracted feature parameters. An automatic detection interface for casting surface roughness based on MATLAB is compiled, which provides a solid foundation for the online and fast detection of casting surface roughness based on image processing technology.

Keywords - casting surface; roughness measurement; image processing; feature parameters

I. INTRODUCTION

Nowadays the demand for machining quality and surface roughness is greatly increased, and machine vision inspection based on image processing has become one of the hotspots of measuring technology in the mechanical industry, owing to advantages such as non-contact operation, high speed, suitable precision and strong resistance to interference [1,2]. As casting surfaces are irregular and the range of roughness is wide, detection parameters related only to the height direction cannot meet the current requirements of photoelectric technology: the horizontal spacing of the roughness also requires a quantitative representation. Therefore, taking a three-dimensional evaluation system of casting surface roughness as the goal [3,4], surface roughness measurement based on image processing technology is presented. Image preprocessing is carried out through image enhancement and binary conversion, and three-dimensional roughness evaluation based on the feature parameters is performed. An automatic detection interface for casting surface roughness based on MATLAB is compiled, which provides a solid foundation for the online and fast detection of casting surface roughness.

II. CASTING SURFACE IMAGE ACQUISITION SYSTEM

The acquisition system is composed of the sample carrier, a microscope, a CCD camera, an image acquisition card and the computer. The sample carrier is used to place the castings under test. According to the experimental requirements, we can select a fixed carrier and manually reposition the sample, or fix the specimen and move the sampling stage. Figure 1 shows the whole processing procedure. Firstly, the casting under test should be placed against an illuminated background as far as possible; then, after adjusting the optical lens and setting the CCD camera resolution and exposure time, the pictures collected by the CCD are saved to computer memory through the acquisition card. Image preprocessing and feature value extraction on the casting surface with the corresponding software follow. Finally the detection result is output.

III. CASTING SURFACE IMAGE PROCESSING

Casting surface image processing includes image editing, equalization processing, image enhancement and binary conversion. The original and clipped images of the measured casting are given in Figure 2, in which a) presents the original image and b) shows the clipped image.
A. Image Enhancement

Image enhancement is a processing method which highlights certain image information according to specific needs while weakening or removing unwanted information at the same time [5]. In order to obtain a clearer contour of the casting surface, equalization processing of the image, namely correction of the image histogram, should be performed before image segmentation. Figure 3 shows the original grayscale image, the equalization-processed image and their histograms. As shown in the figure, after gray equalization each gray level of the histogram has substantially the same number of pixels and the histogram becomes flatter. The image appears clearer after the correction and the contrast of the image is enhanced.

Fig. 2 Casting surface image

Fig. 3 Equalization processing image

B. Image Segmentation

Image segmentation is in essence a process of pixel classification, and thresholding is a very important segmentation technique. The optimal threshold is obtained through the instruction thresh = graythresh(I). Figure 4 shows the image after binary conversion. The black areas of the image display the portion of the contour whose gray value is less than the threshold (0.43137), while the white areas show gray values greater than the threshold. The shadows and shading that emerge in the bright region may be caused by noise or by surface depressions.

Fig. 4 Binary conversion

IV. ROUGHNESS PARAMETER EXTRACTION

In order to detect the surface roughness, it is necessary to extract the feature parameters of roughness. The histogram mean and variance are parameters used to characterize the texture size of the surface contour, while the peak area per unit surface is a parameter that reflects the horizontal roughness of the workpiece, and the kurtosis parameter characterizes the roughness in both the vertical and the horizontal directions. Therefore, this paper establishes the histogram mean and variance, the peak area per unit surface and the kurtosis as the roughness evaluation parameters for the three-dimensional assessment of castings. The image preprocessing and feature extraction interface is compiled in MATLAB. Figure 5 shows the detection interface for surface roughness. Image preprocessing of the clipped casting can be successfully achieved by this software, which includes image filtering, image enhancement, image segmentation and histogram equalization, and it can also display the extracted evaluation parameters of surface roughness.

Fig. 5 Automatic roughness measurement interface
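As an illustration of the processing chain in sections III and IV, here is a minimal Python/OpenCV sketch, not the paper's MATLAB implementation: histogram equalization, Otsu thresholding (the method that MATLAB's graythresh computes), and two of the evaluation parameters named above; the file name and parameters are illustrative.

    import cv2

    img = cv2.imread('casting.png', cv2.IMREAD_GRAYSCALE)  # clipped casting image
    eq = cv2.equalizeHist(img)                              # histogram equalization

    # Otsu's method picks the optimal global threshold, as graythresh does;
    # dividing by 255 gives the 0-1 value quoted in the text (0.43137).
    thresh, binary = cv2.threshold(eq, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Two of the evaluation parameters named in section IV: the mean and
    # variance of the grey-level histogram of the contour image.
    mean, var = eq.mean(), eq.var()
    print(thresh / 255.0, mean, var)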
V. CONCLUSIONS

This paper investigates a casting surface roughness measuring method based on digital image processing technology. The method is composed of image acquisition, image enhancement, binary conversion and the extraction of characteristic roughness parameters of the casting surface. The interface for image preprocessing and the extraction of roughness evaluation parameters is compiled in MATLAB, which provides a solid foundation for the online and fast detection of casting surface roughness.

REFERENCES

[1] Xu Deyan, Lin Zunqi. The optical surface roughness research progress and direction [J]. Optical Instruments, 1996, 18(1): 32-37.
[2] Wang Yujing. Turning surface roughness based on image measurement [D]. Harbin: Harbin University of Science and Technology.
[3] Bradley C. Automated surface roughness measurement [J]. The International Journal of Advanced Manufacturing Technology, 2000, 16(9): 668-674.
[4] Li Chenggui, Li Xingshan, Qiang Xifu. 3D surface topography measurement method [J]. Aerospace Measurement Technology, 2000, 20(4): 2-10.
[5] Liu He. Digital image processing and application [M]. China Electric Power Press, 2005.

Computer graphics: Digital Image Processing, 2nd ed.


Digital Image Processing, 2nd ed. Data abstract: DIGITAL IMAGE PROCESSING has been the world-wide leading textbook in its field for more than 30 years. As with the 1977 and 1987 editions by Gonzalez and Wintz, and the 1992 edition by Gonzalez and Woods, the present edition was prepared with students and instructors in mind. The material is timely, highly readable, and illustrated with numerous examples of practical significance. All mainstream areas of image processing are covered, including a totally revised introduction and discussion of image fundamentals, image enhancement in the spatial and frequency domains, restoration, color image processing, wavelets, image compression, morphology, segmentation, and image description. Coverage concludes with a discussion on the fundamentals of object recognition. Although the book is completely self-contained, the companion web site provides additional support in the form of review material, answers to selected problems, laboratory project suggestions, and a score of other features. A supplementary instructor's manual is available to instructors who have adopted the book for classroom use.

Keywords: digital image processing, image fundamentals, image enhancement in the spatial and frequency domains, image compression, image description. Data format: IMAGE. Data use: digital image processing.

Detailed description: Digital Image Processing, 2nd edition. Basic information: ISBN number 020*******. Publisher: Prentice Hall. 12 chapters, 793 pages, © 2002. A partial list of institutions that use the book is available on the companion site.

NEW FEATURES

- New chapters on wavelets, image morphology, and color image processing.
- A revision and update of all chapters, including topics such as segmentation by watersheds.
- More than 500 new images and over 200 new line drawings and tables.
- A reorganization that allows the reader to get to the material on actual image processing much sooner than before.
- A more intuitive development of traditional topics such as image transforms and image restoration.
- Numerous new examples with processed images of higher resolution.
- Updated image compression standards and a new section on compression using wavelets.
- Updated bibliography.

Differences Between the DIP and DIPUM Books

Digital Image Processing (DIP) is a book on fundamentals; Digital Image Processing Using MATLAB (DIPUM) is a book on the software implementation of those fundamentals. The key difference between the books is that DIP deals primarily with the theoretical foundation of digital image processing, while DIPUM's main focus is the use of MATLAB for image processing. The DIPUM book covers essentially the same topics as DIP, but the theoretical treatment is not as detailed. Some instructors prefer to fill in the theoretical details in class in favor of having available a book with a strong emphasis on implementation.

© 2002 by Prentice-Hall, Inc., Upper Saddle River, New Jersey 07458. All rights reserved. No part of this book may be reproduced, in any form or by any means, without permission in writing from the publisher. The author and publisher of this book have used their best efforts in preparing this book, including the development, research, and testing of the theories and programs to determine their effectiveness. The author and publisher make no warranty of any kind, expressed or implied, with regard to these programs or the documentation contained in this book, and shall not be liable in any event for incidental or consequential damages in connection with, or arising out of, the furnishing, performance, or use of these programs.

Digital Image Processing


Theoretical foundations and development directions of digital image processing

I. Origins and development of digital image processing

Digital image processing converts an image signal into a digital signal and processes it with a computer. It originated in the 1920s and is now widely applied in scientific research, industrial and agricultural production, biomedical engineering, aerospace, military affairs, industrial inspection, robot vision, public security and justice, missile guidance, and culture and art. It has become a promising new discipline that attracts wide attention and plays an ever greater role.

Digital image processing took shape as a discipline in the early 1960s. Early image processing aimed to improve image quality for human viewers. The first notable practical success came at the U.S. Jet Propulsion Laboratory (JPL), which in 1964 applied image processing to the thousands of lunar photographs sent back by the Ranger 7 probe, taking into account the position of the sun and the lunar environment; a computer then successfully produced a map of the lunar surface. More complex processing was subsequently applied to nearly one hundred thousand photographs returned by later probes, yielding topographic maps, color maps and panoramic mosaics of the moon. This laid a solid foundation for the manned lunar landing and helped bring the discipline of digital image processing into being.

Another major achievement of digital image processing came in medicine: in 1972 Godfrey Hounsfield, an engineer at the British company EMI, invented the X-ray computed tomography (CT) scanner for head diagnosis.

In 1975 EMI successfully developed a whole-body CT scanner, producing sharp, clear tomographic images of every part of the human body. In 1979 this non-invasive diagnostic technique was awarded the Nobel Prize, recognition of its epoch-making contribution to humanity. From the mid-1970s, with the rapid development of computer technology, artificial intelligence and cognitive science, digital image processing advanced to a higher and deeper level.

Researchers began to study how computer systems could interpret images and understand the external world in a way similar to the human visual system. Many countries, particularly developed ones, invested considerable manpower and material resources in this research and obtained a number of important results.

A representative achievement is the computational theory of vision proposed by Marr at MIT in the late 1970s, which became the guiding framework of the computer vision field for many years thereafter.

Image processing: foreign literature translation


English source text:

Image processing is not a one-step process. We are able to distinguish between several steps which must be performed one after the other until we can extract the data of interest from the observed scene. In this way a hierarchical processing scheme is built up, as sketched in the figure, which gives an overview of the different phases of image processing.

Image processing begins with the capture of an image with a suitable, not necessarily optical, acquisition system. In a technical or scientific application, we may choose to select an appropriate imaging system. Furthermore, we can set up the illumination system, choose the best wavelength range, and select other options to capture the object feature of interest in the best way in an image. Once the image is sensed, it must be brought into a form that can be treated with digital computers. This process is called digitization.

As the problems of traffic become more and more serious, Intelligent Transport Systems (ITS) have emerged. Automatic recognition of license plates is one of the most significant subjects to arise from the connection of computer vision and pattern recognition. The image input to the computer is processed and analysed in order to localize the position of the license plate and recognize the characters on it, expressing these characters in text-string form. The license plate recognition system (LPRS) has important applications in ITS. In an LPRS, the first step is locating the license plate in the captured image, which is essential for character recognition: the recognition rate is governed by the accuracy of the license plate location. In this paper, several image manipulation methods are compared and analysed, leading to a solution for localization of the car plate. Experiments show that good results are obtained with these methods.

Methods based on edge maps and frequency analysis are used in the process of localizing the license plate: the characteristics of the license plate are extracted from the car image after edge detection, and then analysed and processed until the probable license plate area is extracted. Automated license plate location is a part of image processing and an important part of intelligent traffic systems; it is the key step in vehicle license plate recognition (LPR). A method for the recognition of images with different backgrounds and different illuminations is proposed in the paper: the upper and lower borders are determined through the gray-variation pattern of the character distribution, and the left and right borders are determined through the black-white transitions of the pixels in every row.
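A simplified Python sketch of the projection idea just described, assuming a vertical-edge map whose row and column sums bound the plate; the 0.5 thresholds and file name are illustrative, and a practical system would add the frequency analysis and candidate verification the paper discusses.

    import cv2
    import numpy as np

    img = cv2.imread('car.jpg', cv2.IMREAD_GRAYSCALE)
    strength = np.abs(cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3))  # vertical edges

    # Row projection: character strokes give a band of rows with dense edges,
    # bounding the plate above and below.
    rows = strength.sum(axis=1)
    band = rows > 0.5 * rows.max()
    top = np.argmax(band)
    bottom = len(band) - np.argmax(band[::-1]) - 1

    # Column projection inside the band bounds the plate left and right.
    cols = strength[top:bottom + 1].sum(axis=0)
    wide = cols > 0.5 * cols.max()
    left = np.argmax(wide)
    right = len(wide) - np.argmax(wide[::-1]) - 1
    plate = img[top:bottom + 1, left:right + 1]  # candidate plate region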
The first steps of digital processing may include a number of different operations and are known as image processing. If the sensor has nonlinear characteristics, these need to be corrected. Likewise, brightness and contrast of the image may require improvement. Commonly, too, coordinate transformations are needed to restore geometrical distortions introduced during image formation. Radiometric and geometric corrections are elementary pixel processing operations. It may be necessary to correct known disturbances in the image, for instance caused by a defocused optics, motion blur, errors in the sensor, or errors in the transmission of image signals. We also deal with reconstruction techniques which are required with many indirect imaging techniques, such as tomography, that deliver no direct image.

A whole chain of processing steps is necessary to analyze and identify objects. First, adequate filtering procedures must be applied in order to distinguish the objects of interest from other objects and the background. Essentially, from an image (or several images), one or more feature images are extracted. The basic tools for this task are averaging, edge detection, and the analysis of simple neighborhoods and complex patterns known in image processing as texture. An important feature of an object is also its motion, so techniques to detect and determine motion are necessary. Then the object has to be separated from the background, which means that regions of constant features and discontinuities must be identified. This process leads to a label image. Once we know the exact geometrical shape of the object, we can extract further information such as the mean gray value, the area, the perimeter, and other parameters for the form of the object [3]. These parameters can be used to classify objects. This is an important step in many applications of image processing, as the following examples show. In a satellite image of an agricultural area, we would like to distinguish fields with different fruits and obtain parameters to estimate their ripeness or to detect damage by parasites. There are many medical applications where the essential problem is to detect pathological changes; a classic example is the analysis of aberrations in chromosomes. Character recognition in printed and handwritten text is another example which has been studied since image processing began and still poses significant difficulties. You hopefully do more, namely try to understand the meaning of what you are reading. This is also the final step of image processing, where one aims to understand the observed scene. We perform this task more or less unconsciously whenever we use our visual system.
We recognize people, we can easily distinguish between the image of a scientific lab and that of a living room, and we watch the traffic to cross a street safely. We all do this without knowing how the visual system works.

For some time now, image processing and computer graphics have been treated as two different areas. Knowledge in both areas has increased considerably and more complex problems can now be treated. Computer graphics is striving to achieve photorealistic computer-generated images of three-dimensional scenes, while image processing is trying to reconstruct one from an image actually taken with a camera. In this sense, image processing performs the inverse procedure to that of computer graphics: we start with knowledge of the shape and features of an object (at the bottom of the figure) and work upwards until we get a two-dimensional image. To handle image processing or computer graphics, we basically have to work from the same knowledge. We need to know the interaction between illumination and objects, how a three-dimensional scene is projected onto an image plane, and so on. There are still quite a few differences between an image processing and a graphics workstation. But we can envisage that, when the similarities and interrelations between computer graphics and image processing are better understood and the proper hardware is developed, we will see some kind of general-purpose workstation in the future which can handle computer graphics as well as image processing tasks [5]. The advent of multimedia, i.e., the integration of text, images, sound, and movies, will further accelerate the unification of computer graphics and image processing.

In January 1980 Scientific American published a remarkable image called Plume 2, the second of eight volcanic eruptions detected on the Jovian moon Io by the spacecraft Voyager 1 on 5 March 1979. The picture was a landmark image in interplanetary exploration, the first time an erupting volcano had been seen in space. It was also a triumph for image processing. Satellite imagery and images from interplanetary explorers have until fairly recently been the major users of image processing techniques, where a computer image is numerically manipulated to produce some desired effect, such as making a particular aspect or feature in the image more visible. Image processing has its roots in photo reconnaissance in the Second World War, where processing operations were optical and interpretation operations were performed by humans who undertook such tasks as quantifying the effect of bombing raids. With the advent of satellite imagery in the late 1960s, much computer-based work began, and the color composite satellite images, sometimes startlingly beautiful, have become part of our visual culture and the perception of our planet. Like computer graphics, image processing was until recently confined to research laboratories which could afford the expensive computers needed to cope with the substantial overheads of processing large numbers of high-resolution images. With the advent of cheap powerful computers and image collection devices like digital cameras and scanners, we have seen a migration of image processing techniques into the public domain. Classical image processing techniques are routinely employed by graphic designers to manipulate photographic and generated imagery, either to correct defects, change color and so on, or creatively to transform the entire look of an image by subjecting it to some operation such as edge enhancement.
A recent mainstream application of image processing is the compression of images, either for transmission across the Internet or the compression of moving video images in video telephony and video conferencing. Video telephony is one of the current crossover areas that employ both computer graphics and classical image processing techniques to try to achieve very high compression rates. All this is part of an inexorable trend towards the digital representation of images. Indeed, the most powerful image form of the twentieth century, the TV image, is also about to be taken into the digital domain.

Image processing is characterized by a large number of algorithms that are specific solutions to specific problems. Some are mathematical or context-independent operations that are applied to each and every pixel; for example, we can use Fourier transforms to perform image filtering operations. Others are "algorithmic": we may use a complicated recursive strategy to find those pixels that constitute the edges in an image. Image processing operations often form part of a computer vision system: the input image may be filtered to highlight or reveal edges prior to shape detection, in what are usually known as low-level operations. In computer graphics, filtering operations are used extensively to avoid aliasing or sampling artifacts.
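As a small example of the Fourier-based filtering just mentioned, here is a minimal NumPy low-pass filter sketch (the cutoff radius in pixels is illustrative):

    import numpy as np

    def fft_lowpass(image, cutoff):
        # Filter in the frequency domain: transform, zero out frequencies
        # beyond the cutoff radius, and transform back to the spatial domain.
        F = np.fft.fftshift(np.fft.fft2(image))
        h, w = image.shape
        y, x = np.ogrid[:h, :w]
        mask = (y - h // 2) ** 2 + (x - w // 2) ** 2 <= cutoff ** 2
        return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))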

Foreign literature translation: edge feature extraction based on digital image processing technology

Edge feature extraction has been widely applied in many areas. This paper mainly discusses the advantages and disadvantages of several edge detection operators applied to cable insulation parameter measurement. In order to obtain a more legible image outline, the acquired image is first filtered and denoised; wavelet transformation is used in the denoising process. Different operators are then applied to detect edges, including the differential operator, the LoG operator, the Canny operator and the binary morphology operator. Finally the edge pixels of the image are connected using the method of border closing, yielding a clear and complete image outline.
The traditional denoising method is the use of a low-pass or band-pass filter. Its shortcoming is that the signal is blurred when noise is removed: there is an irreconcilable contradiction between noise removal and edge preservation. Wavelet analysis, by contrast, has proved to be a powerful tool for image processing, because wavelet denoising filters the signal in different frequency bands. It removes the coefficients at the scales which mainly reflect the noise frequencies, then integrates the coefficients of every remaining scale in the inverse transform, so that noise is suppressed well. Wavelet analysis can therefore be widely used in many aspects of image processing, such as image compression and image denoising.
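A sketch of the denoise-then-detect pipeline the abstract describes, using PyWavelets for the wavelet thresholding and OpenCV's Canny detector for the edge step; the wavelet, decomposition level and threshold value are illustrative assumptions, not the paper's settings.

    import cv2
    import numpy as np
    import pywt

    img = cv2.imread('cable.png', cv2.IMREAD_GRAYSCALE).astype(float)

    # Wavelet denoising: soft-threshold the detail coefficients at each scale,
    # which suppresses noise while preserving edges better than a simple
    # low-pass filter.
    coeffs = pywt.wavedec2(img, 'db4', level=2)
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(d, value=10.0, mode='soft') for d in detail)
        for detail in coeffs[1:]
    ]
    clean = pywt.waverec2(denoised, 'db4')

    # Edge detection on the denoised image (Canny shown; the paper also
    # compares differential, LoG and binary morphology operators).
    edges = cv2.Canny(np.clip(clean, 0, 255).astype(np.uint8), 50, 150)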

MATLAB image processing: foreign literature translation


Appendix A: English original

Scene recognition for mine rescue robot localization based on vision

CUI Yi-an(崔益安), CAI Zi-xing(蔡自兴), WANG Lu(王璐)

Abstract: A new scene recognition system is presented, based on fuzzy logic and hidden Markov models (HMM), that can be applied to mine rescue robot localization during emergencies. The system uses a monocular camera to acquire omni-directional images of the mine environment where the robot is located. By adopting the center-surround difference method, salient local image regions are extracted from the images as natural landmarks. These landmarks are organized using an HMM to represent the scene where the robot is, and a fuzzy logic strategy is used to match the scene and landmarks. In this way the localization problem, which in this system is a scene recognition problem, can be converted into the evaluation problem of the HMM. These techniques give the system the ability to deal with changes in scale, 2D rotation and viewpoint. Experimental results also prove that the system achieves a high ratio of recognition and localization in both static and dynamic mine environments.

Key words: robot localization; scene recognition; salient image; matching strategy; fuzzy logic; hidden Markov model

1 Introduction

Search and rescue in disaster areas is a burgeoning and challenging subject in robotics [1]. Mine rescue robots were developed to enter mines during emergencies, locate possible escape routes for those trapped inside, and determine whether it is safe for humans to enter. Localization is a fundamental problem in this field. Localization methods based on cameras can be mainly classified into geometric, topological or hybrid ones [2]. With its feasibility and effectiveness, scene recognition has become one of the important technologies of topological localization. Currently most scene recognition methods are based on global image features and have two distinct stages: training offline and matching online.
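The paper's key step converts localization into the evaluation problem of an HMM: scoring an observed landmark sequence under each scene's model. Below is a generic Python sketch of that evaluation via the forward algorithm; the fuzzy-logic matching that produces the observation symbols is outside this snippet, and all names are illustrative.

    import numpy as np

    def forward_likelihood(pi, A, B, obs):
        """Evaluation problem of an HMM: probability of the observation
        sequence `obs` given the initial distribution pi, the state
        transition matrix A, and the emission matrix B (rows: states,
        columns: observation symbols)."""
        alpha = pi * B[:, obs[0]]          # initialize with the first symbol
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]  # propagate and absorb next symbol
        return alpha.sum()

    # The scene whose model assigns the observed landmark sequence the
    # highest likelihood is taken as the robot's location.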

Digital Signal Processing: graduation thesis foreign literature translation and original


Graduation project (thesis) foreign literature translation. Chinese title: 数字信号处理; English title: Digital Signal Processing. Translation date: 2017.02.14.

Digital Signal Processing

I. Introduction

Digital signal processing (DSP) is the processing of signals represented by sequences of numbers or symbols.

Digital signal processing and analog signal processing are both subfields of signal processing.

DSP includes subfields such as audio and speech signal processing, radar and sonar signal processing, sensor array processing, spectral estimation, statistical signal processing, digital image processing, communications signal processing, biomedical signal processing, and seismic data processing.

Since the goal of DSP is usually to measure or filter continuous real-world analog signals, the first step is usually to convert the signal from analog to digital form using an analog-to-digital converter.

Often, however, the required output is an analog signal, so a digital-to-analog converter is also needed.

Even though this process is more complex than analog processing and works with discrete values, digital signal processing, with its error detection and correction and its low susceptibility to noise, is stable in a way that makes it superior to analog signal processing for many applications (though not all).

DSP algorithms have long run on standard computers, on specialized processors called digital signal processors (DSPs), or on dedicated hardware such as application-specific integrated circuits (ASICs).

Additional technologies now used for digital signal processing include more powerful general-purpose microprocessors, field-programmable gate arrays (FPGAs), digital signal controllers (mostly for industrial applications such as motor control), and stream processors, among other related technologies.

In digital signal processing, engineers usually study digital signals in one of the following domains: the time domain (one-dimensional signals), the spatial domain (multidimensional signals), the frequency domain, the autocorrelation domain, and the wavelet domain.

They choose the domain in which to process a signal by making an informed guess (or by trying different possibilities) as to which domain best represents the essential characteristics of the signal.

A sequence of samples from a measuring device produces a time- or space-domain representation, whereas a discrete Fourier transform produces the frequency-domain information, that is, the spectrum.
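A small example of that point: sampling yields a time-domain sequence, and the discrete Fourier transform exposes its frequency content (the signal parameters below are illustrative).

    import numpy as np

    fs = 1000                           # sampling rate, Hz
    t = np.arange(0, 1, 1 / fs)         # one second of samples (time domain)
    x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

    X = np.fft.rfft(x)                  # discrete Fourier transform
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    peaks = freqs[np.argsort(np.abs(X))[-2:]]
    print(sorted(peaks))                # approximately [50.0, 120.0] Hz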

Autocorrelation is defined as the cross-correlation of a signal with itself at varying intervals of time or space.

II. Signal Sampling

With the ever wider use of computers, the need for digital signal processing has also grown.

Foreign literature translation: research on digital image processing methods (Chinese-English) (1)


The research of digital image processing technique

1 Introduction

Interest in digital image processing methods stems from two principal application areas: improvement of pictorial information for human interpretation, and processing of image data for storage, transmission, and representation for autonomous machine perception.

1.1 What Is Digital Image Processing?

An image may be defined as a two-dimensional function, f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image. The field of digital image processing refers to processing digital images by means of a digital computer. Note that a digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pels, and pixels. Pixel is the term most widely used to denote the elements of a digital image. We consider these definitions in more formal terms in Chapter 2.

Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the electromagnetic (EM) spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma to radio waves. They can operate on images generated by sources that humans are not accustomed to associating with images, including ultrasound, electron microscopy, and computer-generated images. Thus, digital image processing encompasses a wide and varied field of applications.

There is no general agreement among authors regarding where image processing stops and other related areas, such as image analysis and computer vision, start. Sometimes a distinction is made by defining image processing as a discipline in which both the input and output of a process are images. We believe this to be a limiting and somewhat artificial boundary. For example, under this definition, even the trivial task of computing the average intensity of an image (which yields a single number) would not be considered an image processing operation. On the other hand, there are fields such as computer vision whose ultimate goal is to use computers to emulate human vision, including learning and being able to make inferences and take actions based on visual inputs. This area itself is a branch of artificial intelligence (AI), whose objective is to emulate human intelligence. The field of AI is in its earliest stages of infancy in terms of development, with progress having been much slower than originally anticipated. The area of image analysis (also called image understanding) is in between image processing and computer vision.

There are no clear-cut boundaries in the continuum from image processing at one end to computer vision at the other. However, one useful paradigm is to consider three types of computerized processes in this continuum: low-, mid-, and high-level processes. Low-level processes involve primitive operations such as image preprocessing to reduce noise, contrast enhancement, and image sharpening. A low-level process is characterized by the fact that both its input and output are images. Mid-level processing of images involves tasks such as segmentation (partitioning an image into regions or objects), description of those objects to reduce them to a form suitable for computer processing, and classification (recognition) of individual objects.
A mid-level process is characterized by the fact that its inputs generally are images, but its outputs are attributes extracted from those images (e.g., edges, contours, and the identity of individual objects). Finally, higher-level processing involves "making sense" of an ensemble of recognized objects, as in image analysis, and, at the far end of the continuum, performing the cognitive functions normally associated with vision.

Based on the preceding comments, we see that a logical place of overlap between image processing and image analysis is the area of recognition of individual regions or objects in an image. Thus, what we call in this book digital image processing encompasses processes whose inputs and outputs are images and, in addition, encompasses processes that extract attributes from images, up to and including the recognition of individual objects. As a simple illustration to clarify these concepts, consider the area of automated analysis of text. The processes of acquiring an image of the area containing the text, preprocessing that image, extracting (segmenting) the individual characters, describing the characters in a form suitable for computer processing, and recognizing those individual characters are in the scope of what we call digital image processing in this book. Making sense of the content of the page may be viewed as being in the domain of image analysis and even computer vision, depending on the level of complexity implied by the statement "making sense." As will become evident shortly, digital image processing, as we have defined it, is used successfully in a broad range of areas of exceptional social and economic value. The concepts developed in the following chapters are the foundation for the methods used in those application areas.
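To make the low-/mid-level distinction concrete, here is a small Python sketch (illustrative, not from the book): the low-level step maps an image to an image, while the mid-level step maps an image to attributes.

    import numpy as np
    from scipy import ndimage

    def count_objects(image):
        # Low-level: image in, image out (noise reduction by smoothing).
        smoothed = ndimage.gaussian_filter(image.astype(float), sigma=2)
        # Mid-level: image in, attributes out (segmentation into regions,
        # then identification of the individual regions).
        binary = smoothed > smoothed.mean()
        labels, n = ndimage.label(binary)
        return labels, n   # label image and the number of objects found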
1.2 The Origins of Digital Image Processing

One of the first applications of digital images was in the newspaper industry, when pictures were first sent by submarine cable between London and New York. Introduction of the Bartlane cable picture transmission system in the early 1920s reduced the time required to transport a picture across the Atlantic from more than a week to less than three hours. Specialized printing equipment coded pictures for cable transmission and then reconstructed them at the receiving end. Figure 1.1 was transmitted in this way and reproduced on a telegraph printer fitted with typefaces simulating a halftone pattern.

Some of the initial problems in improving the visual quality of these early digital pictures were related to the selection of printing procedures and the distribution of intensity levels. The printing method used to obtain Fig. 1.1 was abandoned toward the end of 1921 in favor of a technique based on photographic reproduction made from tapes perforated at the telegraph receiving terminal. Figure 1.2 shows an image obtained using this method. The improvements over Fig. 1.1 are evident, both in tonal quality and in resolution.

Figure 1.1 - A digital picture produced in 1921 from a coded tape by a telegraph printer with special typefaces. (McFarlane)

Figure 1.2 - A digital picture made in 1922 from a tape punched after the signals had crossed the Atlantic twice. Some errors are visible. (McFarlane)

The early Bartlane systems were capable of coding images in five distinct levels of gray. This capability was increased to 15 levels in 1929. Figure 1.3 is typical of the images that could be obtained using the 15-tone equipment. During this period, introduction of a system for developing a film plate via light beams that were modulated by the coded picture tape improved the reproduction process considerably.

Although the examples just cited involve digital images, they are not considered digital image processing results in the context of our definition, because computers were not involved in their creation. Thus, the history of digital image processing is intimately tied to the development of the digital computer. In fact, digital images require so much storage and computational power that progress in the field of digital image processing has been dependent on the development of digital computers and of supporting technologies that include data storage, display, and transmission.

The idea of a computer goes back to the invention of the abacus in Asia Minor, more than 5000 years ago. More recently, there were developments in the past two centuries that are the foundation of what we call a computer today. However, the basis for what we call a modern digital computer dates back to only the 1940s, with the introduction by John von Neumann of two key concepts: (1) a memory to hold a stored program and data, and (2) conditional branching. These two ideas are the foundation of a central processing unit (CPU), which is at the heart of computers today.
Starting with von Neumann, there were a series of advances that led to computers powerful enough to be used for digital image processing. Briefly, these advances may be summarized as follows:

(1) the invention of the transistor by Bell Laboratories in 1948;
(2) the development in the 1950s and 1960s of the high-level programming languages COBOL (Common Business-Oriented Language) and FORTRAN (Formula Translator);
(3) the invention of the integrated circuit (IC) at Texas Instruments in 1958;
(4) the development of operating systems in the early 1960s;
(5) the development of the microprocessor (a single chip consisting of the central processing unit, memory, and input and output controls) by Intel in the early 1970s;
(6) the introduction by IBM of the personal computer in 1981;
(7) progressive miniaturization of components, starting with large-scale integration (LSI) in the late 1970s, then very-large-scale integration (VLSI) in the 1980s, to the present use of ultra-large-scale integration (ULSI).

Figure 1.3 - Cable picture of Generals Pershing and Foch, transmitted in 1929 from London to New York by 15-tone equipment. (McFarlane)

Concurrent with these advances were developments in the areas of mass storage and display systems, both of which are fundamental requirements for digital image processing. The first computers powerful enough to carry out meaningful image processing tasks appeared in the early 1960s. The birth of what we call digital image processing today can be traced to the availability of those machines and the onset of the space program during that period. It took the combination of those two developments to bring into focus the potential of digital image processing concepts. Work on using computer techniques for improving images from a space probe began at the Jet Propulsion Laboratory (Pasadena, California) in 1964, when pictures of the moon transmitted by Ranger 7 were processed by a computer to correct various types of image distortion inherent in the on-board television camera. Figure 1.4 shows the first image of the moon taken by Ranger 7 on July 31, 1964 at 9:09 A.M. Eastern Daylight Time (EDT), about 17 minutes before impacting the lunar surface (the markers, called reseau marks, are used for geometric corrections, as discussed in Chapter 5). This also is the first image of the moon taken by a U.S. spacecraft. The imaging lessons learned with Ranger 7 served as the basis for improved methods used to enhance and restore images from the Surveyor missions to the moon, the Mariner series of flyby missions to Mars, the Apollo manned flights to the moon, and others.

In parallel with space applications, digital image processing techniques began in the late 1960s and early 1970s to be used in medical imaging, remote Earth resources observations, and astronomy. The invention in the early 1970s of computerized axial tomography (CAT), also called computerized tomography (CT) for short, is one of the most important events in the application of image processing in medical diagnosis.
Computerized axial tomography is a process in which a ring of detectors encircles an object (or patient) and an X-ray source, concentric with the detector ring, rotates about the object. The X-rays pass through the object and are collected at the opposite end by the corresponding detectors in the ring. As the source rotates, this procedure is repeated. Tomography consists of algorithms that use the sensed data to construct an image that represents a "slice" through the object. Motion of the object in a direction perpendicular to the ring of detectors produces a set of such slices, which constitute a three-dimensional (3-D) rendition of the inside of the object. Tomography was invented independently by Sir Godfrey N. Hounsfield and Professor Allan M. Cormack, who shared the 1979 Nobel Prize in Medicine for their invention. It is interesting to note that X-rays were discovered in 1895 by Wilhelm Conrad Roentgen, for which he received the 1901 Nobel Prize for Physics. These two inventions, nearly 100 years apart, led to some of the most active application areas of image processing today.

Figure 1.4 - The first picture of the moon by a U.S. spacecraft. Ranger 7 took this image on July 31, 1964 at 9:09 A.M. EDT, about 17 minutes before impacting the lunar surface. (Courtesy of NASA.)

Digital Image Processing Paper: Chinese-English Foreign Literature Translation


Chinese-English Parallel Foreign Literature Translation: Original Text

Research on Image Edge Detection Algorithms

Abstract: Digital image processing is a relatively young discipline which, with the rapid development of computer technology, is finding ever wider application. Edges are one of the basic features of an image and are widely used in pattern recognition, image segmentation, image enhancement, and image compression. Edge detection methods are many and varied. Among them, the brightness-based algorithms have been studied the longest and are theoretically the most mature: they detect edges by computing the gradient of the image brightness with difference operators, chiefly the Roberts, Laplacian, Sobel, Canny, and LoG operators.
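To make the gradient-operator idea concrete, here is a minimal Python sketch (our illustration, not the paper's implementation) that computes the Sobel gradient magnitude with NumPy and SciPy; the threshold value of 100 is an assumed parameter, not one taken from the paper.

import numpy as np
from scipy.ndimage import convolve

def sobel_edges(image, threshold=100.0):
    """Detect edges by thresholding the Sobel gradient magnitude."""
    img = image.astype(float)
    # Sobel kernels approximating the horizontal and vertical derivatives.
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)
    ky = kx.T
    gx = convolve(img, kx)        # gradient along x
    gy = convolve(img, ky)        # gradient along y
    magnitude = np.hypot(gx, gy)  # gradient magnitude
    return magnitude > threshold  # boolean edge map

if __name__ == "__main__":
    # Synthetic test image: a bright square on a dark background.
    test = np.zeros((64, 64))
    test[16:48, 16:48] = 255.0
    edges = sobel_edges(test)
    print("edge pixels found:", int(edges.sum()))

Any of the other operators named above (Roberts, Laplacian, LoG) can be substituted by swapping the kernels; Canny additionally requires non-maximum suppression and hysteresis thresholding.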

A Literature Review of Digital Image Processing


Medical Image Enhancement and Analysis

[Abstract] Medical images have become an important supplementary measure in doctors' diagnosis and treatment. As the foundation on which medical imaging technology develops, medical image processing is driving profound changes in modern medical diagnosis.

Image enhancement plays an important role in the quantitative and qualitative analysis of digital medical images, and it directly affects the subsequent processing and analysis.

This thesis takes medical images (mainly X-ray, CT, B-mode ultrasound, and other medical radiographic images) as the main research object and studies the application of image enhancement technology in the field of medical image processing.

By comparing and verifying the processing results of a variety of image enhancement methods, the thesis concludes with the enhancement methods that are most effective for the particular characteristics of medical images.

Key words: medical image processing; image enhancement; effective methods

1 Introduction

In recent years, with the arrival of the information age and especially the digital age, digital medical images have become an important aid in doctors' diagnosis and treatment.
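Histogram equalization is one of the classic enhancement methods such a comparison would cover. The following is a minimal sketch for 8-bit grayscale images, offered only as an illustration of the technique, not as the thesis's actual method; the input image here is synthetic, and the code assumes the image is not constant.

import numpy as np

def equalize_histogram(image):
    """Histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = hist.cumsum()                       # cumulative distribution
    cdf_min = cdf[cdf > 0].min()              # first nonzero CDF value
    # Map each gray level through the normalized CDF (assumes cdf[-1] > cdf_min).
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[image]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical low-contrast image: values squeezed into [100, 140).
    img = rng.integers(100, 140, size=(128, 128), dtype=np.uint8)
    out = equalize_histogram(img)
    print("input range:", img.min(), img.max(), "output range:", out.min(), out.max())

For medical images, adaptive variants such as CLAHE are often preferred because they limit noise amplification in homogeneous regions.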

Digital Image Processing (Courseware)

Image Transforms

- Fourier transform: converts an image from the spatial domain to the frequency domain, making it convenient to analyze the frequency content of the image.
- Discrete cosine transform: maps an image from the spatial domain into a coefficient space built from cosine functions; used for image compression.
- Wavelet transform: decomposes an image into wavelet components of different frequencies and orientations, convenient for image compression and feature extraction.
- Walsh-Hadamard transform: converts an image into a coefficient space built from Walsh or Hadamard functions; used for image analysis.

Applications

- Medical image reconstruction: with digital image processing, CT, MRI, and other medical imaging data can be reconstructed into three-dimensional or higher-dimensional images, allowing doctors to carry out deeper analysis.
- Quantitative analysis of medical images: digital image processing can analyze medical images quantitatively, extracting the size, shape, density, and other properties of lesion regions and giving doctors a more precise assessment of the condition.
- Security monitoring systems: video surveillance.
- Dynamic special effects: digital image processing can be used to produce dynamic effects such as the fire and flowing water seen in films and advertisements.
- High dynamic range imaging (HDRI): merges images taken at different exposure levels to obtain a wider dynamic range.
- Virtual and augmented reality: digital image processing provides a more realistic visual experience in VR and AR applications.

5 The Future Development of Digital Image Processing

Applications of artificial intelligence and deep learning in digital image processing:

1. Deep learning for image recognition and classification: deep learning algorithms recognize and classify images automatically, raising the level of automation and intelligence of image processing.
2. Applications of generative adversarial networks (GANs): GANs can be used to generate new images, restore old photographs, enhance image quality, and perform image style transfer.
3. Semantic segmentation and object detection: deep learning techniques perform semantic segmentation and object detection on images, identifying and extracting specific regions of an image.
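As an illustration of the Fourier-transform entry above, here is a short NumPy sketch (our addition, not part of the original slides) that moves an image into the frequency domain and locates its strongest frequency component; the striped test image is a made-up example.

import numpy as np

def magnitude_spectrum(image):
    """2-D FFT of an image, with the zero-frequency term shifted to the center."""
    f = np.fft.fft2(image)       # spatial domain -> frequency domain
    f = np.fft.fftshift(f)       # move the DC component to the center
    return np.log1p(np.abs(f))   # log scale, convenient for inspection

if __name__ == "__main__":
    # Vertical stripes with period 8: energy concentrates at one horizontal frequency.
    x = np.arange(128)
    img = np.sin(2 * np.pi * x / 8.0)[None, :] * np.ones((128, 1))
    spec = magnitude_spectrum(img)
    peak = np.unravel_index(np.argmax(spec), spec.shape)
    print("strongest frequency component at", peak)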

Digital Image Processing: English Original Version and Translation


Introduction: Digital image processing is a field of study that focuses on the analysis and manipulation of digital images using computer algorithms. It involves various techniques and methods to enhance, modify, and extract information from images. This document gives an overview of the English original version and the translation of digital image processing materials.

English original version: The English original is a comprehensive textbook written by Rafael C. Gonzalez and Richard E. Woods. It covers the fundamental concepts and principles of image processing, including image formation, image enhancement, image restoration, image segmentation, and image compression. The book also explores advanced topics such as image recognition, image understanding, and computer vision.

The English original consists of 14 chapters, each focusing on a different aspect of digital image processing. It starts with an introduction to the field, explaining the basic concepts and terminology. Subsequent chapters cover topics such as image transforms, image enhancement in the spatial domain, image enhancement in the frequency domain, image restoration, color image processing, and image compression. The book provides a theoretical foundation for digital image processing and is accompanied by numerous examples and illustrations to aid understanding. It also includes MATLAB code and exercises to reinforce the concepts discussed in each chapter. The English original is widely regarded as a comprehensive and authoritative reference in the field of digital image processing.

Translation: Translating the textbook into another language is an essential task to make its knowledge and concepts accessible to a wider audience. The translation process converts the English original into the target language while maintaining the accuracy and clarity of the content.

To ensure a high-quality translation, it is crucial to select a professional translator with expertise in both the source language (English) and the target language. The translator should have a solid understanding of the subject matter and excellent language skills to convey the concepts accurately. During the translation process, the translator carefully reads and comprehends the English original, then identifies any cultural or linguistic nuances that must be considered while translating. The translator may consult subject-matter experts or reference materials to ensure the accuracy of technical terms and concepts.

The translation process involves several stages: translation, editing, and proofreading. After the initial translation, an editor reviews the translated text for coherence, accuracy, and adherence to the target language's grammar and style. A proofreader then performs a final check to eliminate any errors or inconsistencies. It is important to note that certain examples, illustrations, or exercises may need to be adapted to suit the target language and culture; this adaptation ensures that the translated version resonates with the local audience and facilitates better understanding of the concepts.

Conclusion: "Digital Image Processing: English Original Version and Translation" provides a comprehensive overview of the field of digital image processing. The English original, authored by Rafael C. Gonzalez and Richard E. Woods, serves as a valuable reference for understanding the fundamental concepts and techniques of image processing. The translation process plays a crucial role in making this knowledge accessible to non-English speakers; it requires careful selection of a professional translator, thorough understanding of the subject matter, and meticulous translation, editing, and proofreading. By providing both the English original and its translation, readers from different linguistic backgrounds can benefit from the knowledge and advancements in digital image processing, fostering international collaboration and innovation in this field.

Research on the Application of Digital Image Processing in a Specific Field


1. Introduction

Digital image processing is an image processing approach based on digital computation.

By analyzing, processing, and reconstructing digital images, it can quickly acquire and handle complex image information.

Digital image processing is applied in many fields such as medicine, security, and industrial control; this paper focuses on its applications in the medical field.

2. Overview of medical image processing

Medical image processing refers to digitally processing the images acquired by medical examination machines such as CT and MRI scanners, so as to extract, analyze, and apply biomedical information.

Medical image processing can give doctors effective support in diagnosing diseases and formulating treatment plans.

3. Applications of digital image processing in CT image segmentation

CT image segmentation is a basic technique of medical image processing. In CT image segmentation, digital image processing can help doctors delineate lesions such as tumors and vessels more accurately, which is very important for diagnosis and treatment.

The main applications of digital image processing in CT image segmentation include the following (a minimal code sketch of the first two methods follows this list):

1. Threshold segmentation: a threshold is used to separate lesions from healthy tissue in the image.

2. Region-growing segmentation: starting from a seed point, a region of pixels with similar properties is grown.

3. Feature-based segmentation: the lesion is separated out according to local and global features of the image.

These three methods can be used individually or in combination; applying digital image processing techniques can improve the accuracy of CT image segmentation.
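Below is a minimal sketch of thresholding and region growing; the threshold, seed point, and tolerance values are illustrative assumptions on a synthetic image, not clinically validated parameters.

import numpy as np
from collections import deque

def threshold_segment(image, threshold):
    """Label pixels at or above an intensity threshold as lesion candidates."""
    return image >= threshold

def region_grow(image, seed, tolerance):
    """Grow a 4-connected region around a seed pixel of similar intensity."""
    h, w = image.shape
    seed_value = float(image[seed])
    grown = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    grown[seed] = True
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and not grown[nr, nc]
                    and abs(float(image[nr, nc]) - seed_value) <= tolerance):
                grown[nr, nc] = True
                queue.append((nr, nc))
    return grown

if __name__ == "__main__":
    img = np.full((64, 64), 40.0)
    img[20:40, 20:40] = 200.0          # synthetic "lesion"
    print("threshold:", int(threshold_segment(img, 120).sum()), "pixels")
    print("region grow:", int(region_grow(img, (30, 30), 25.0).sum()), "pixels")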

4. Applications of digital image processing in MRI image registration

MRI image registration aligns two or more MRI images with one another so that they can be read together for a better diagnostic result.

The main applications of digital image processing in MRI registration include the following two aspects:

1. Feature-based methods: registration uses image features such as corner points and line segments.

2. Mutual-information-based methods: registration uses a measure of the mutual information between the two images (a sketch of this measure follows this section).

Applying digital image processing to MRI registration can greatly improve the quality and accuracy of MRI images and thus better assist doctors in making diagnoses and treatment plans.
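The mutual-information measure mentioned above can be computed from the joint histogram of the two images. The sketch below shows only the measure itself, under simplified assumptions (a fixed number of histogram bins and no search over geometric transformations, which a full registration algorithm would add on top).

import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two equally sized grayscale images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()              # joint probability
    px = pxy.sum(axis=1, keepdims=True)    # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)    # marginal of image b
    nz = pxy > 0                           # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    a = rng.random((64, 64))
    shifted = np.roll(a, 3, axis=1)        # misaligned copy of the same image
    print("aligned  MI:", round(mutual_information(a, a), 3))
    print("shifted  MI:", round(mutual_information(a, shifted), 3))

A registration loop would maximize this value over candidate translations or rotations; the aligned pair scores higher than the shifted one, which is exactly the property registration exploits.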

5. Applications of digital image processing in 3D reconstruction

Three-dimensional reconstruction stitches multiple medical images together to form a three-dimensional, stereoscopic image.

The main applications of digital image processing in 3D reconstruction include the following two aspects: 1. Voxel reconstruction: the medical images are divided into cubes of a fixed size, and the three-dimensional image is formed by combining these cubes.

Digital Image Processing: Foreign Literature Translation (English Original)


Digital Image Processing

1 Introduction

Many operators have been proposed for representing a connected component in a digital image by a reduced amount of data or a simplified shape. In general, the development, choice, and modification of such algorithms in practical applications are domain- and task-dependent, and there is no "best method". However, it is interesting to note that there are several equivalences between published methods and notions, and characterizing such equivalences or differences should be useful for categorizing the broad diversity of published methods for skeletonization. Discussing equivalences is a main intention of this report.

1.1 Categories of Methods

One class of shape reduction operators is based on distance transforms. A distance skeleton is a subset of points of a given component such that every point of this subset represents the center of a maximal disc (labeled with the radius of this disc) contained in the given component. As an example in this first class of operators, this report discusses one method for calculating a distance skeleton using the d4 distance function, which is appropriate to digitized pictures. A second class of operators produces median or center lines of the digital object in a non-iterative way. Normally such operators locate critical points first and calculate a specified path through the object by connecting these points.

The third class of operators is characterized by iterative thinning. Historically, Listing [10] already used the term linear skeleton in 1862 for the result of a continuous deformation of the frontier of a connected subset of a Euclidean space, without changing the connectivity of the original set, until only a set of lines and points remains. Many algorithms in image analysis are based on this general concept of thinning. The goal is the calculation of characteristic properties of digital objects which are not related to size or quantity. Methods should be independent of the position of a set in the plane or space, the grid resolution (for digitizing this set), and the shape complexity of the given set. In the literature the term "thinning" is not used with a unique interpretation, besides that it always denotes a connectivity-preserving reduction operation applied to digital images, involving iterations of transformations of specified contour points into background points. A subset Q ⊆ I of object points is reduced by a defined set D in one iteration, and the result Q' = Q \ D becomes Q for the next iteration. Topology-preserving skeletonization is a special case of thinning, resulting in a connected set of digital arcs or curves.

A digital curve is a path p = p0, p1, p2, ..., pn = q such that pi is a neighbor of pi-1, 1 ≤ i ≤ n, and p = q. A digital curve is called simple if each point pi has exactly two neighbors in this curve. A digital arc is a subset of a digital curve such that p ≠ q. A point of a digital arc which has exactly one neighbor is called an end point of this arc. Within this third class of operators (thinning algorithms) we may classify with respect to algorithmic strategies: individual pixels are removed either in a sequential order or in parallel. For example, the often-cited algorithm by Hilditch [5] is an iterative process of testing and deleting contour pixels sequentially in standard raster scan order. Another sequential algorithm, by Pavlidis [12], uses the definition of multiple points and proceeds by contour following.
Examples of parallel algorithms in this third class are reduction operators which transform contour points into background points. Differences between these parallel algorithms are typically defined by the tests implemented to ensure connectedness in a local neighborhood. The notion of a simple point is of basic importance for thinning, and it will be shown in this report that different definitions of simple points are actually equivalent. Several publications characterize properties of a set D of points (to be turned from object points to background points) which ensure that the connectivity of object and background remain unchanged. The report discusses some of these properties in order to justify parallel thinning algorithms.

1.2 Basics

The notation used follows [17]. A digital image I is a function defined on a discrete set C, which is called the carrier of the image. The elements of C are grid points or grid cells, and the elements (p, I(p)) of an image are pixels (2D case) or voxels (3D case). The range of a (scalar) image is {0, ..., Gmax} with Gmax ≥ 1. The range of a binary image is {0, 1}. We use only binary images I in this report. Let ⟨I⟩ be the set of all pixel locations with value 1, i.e. ⟨I⟩ = I⁻¹(1). The image carrier is defined on an orthogonal grid in 2D or 3D space. There are two options: in the grid cell model, a 2D pixel location p is a closed square (2-cell) in the Euclidean plane and a 3D pixel location is a closed cube (3-cell) in Euclidean space, where edges are of length 1 and parallel to the coordinate axes, and centers have integer coordinates. As a second option, in the grid point model a 2D or 3D pixel location is a grid point.

Two pixel locations p and q in the grid cell model are called 0-adjacent iff p ≠ q and they share at least one vertex (which is a 0-cell). Note that this specifies 8-adjacency in 2D or 26-adjacency in 3D if the grid point model is used. Two pixel locations p and q in the grid cell model are called 1-adjacent iff p ≠ q and they share at least one edge (which is a 1-cell). Note that this specifies 4-adjacency in 2D or 18-adjacency in 3D if the grid point model is used. Finally, two 3D pixel locations p and q in the grid cell model are called 2-adjacent iff p ≠ q and they share at least one face (which is a 2-cell). Note that this specifies 6-adjacency if the grid point model is used. Any of these adjacency relations Aα, α ∈ {0, 1, 2, 4, 6, 18, 26}, is irreflexive and symmetric on an image carrier C. The α-neighborhood Nα(p) of a pixel location p includes p and its α-adjacent pixel locations. Coordinates of 2D grid points are denoted by (i, j), with 1 ≤ i ≤ n and 1 ≤ j ≤ m; i and j are integers, and n and m are the numbers of rows and columns of C. In 3D we use integer coordinates (i, j, k). Based on neighborhood relations we define connectedness as usual: two points p, q ∈ C are α-connected with respect to M ⊆ C and the neighborhood relation Nα iff there is a sequence of points p = p0, p1, p2, ..., pn = q such that pi is an α-neighbor of pi-1, for 1 ≤ i ≤ n, and all points of this sequence are either in M or all in the complement of M. A subset M ⊆ C of an image carrier is called α-connected iff M is not empty and all points in M are pairwise α-connected with respect to the set M. An α-component of a subset S of C is a maximal α-connected subset of S. The study of connectivity in digital images was introduced in [15]. It follows that any set ⟨I⟩ consists of a number of α-components. In the case of the grid cell model, a component is the union of closed squares (2D case) or closed cubes (3D case). The boundary of a 2-cell is the union of its four edges, and the boundary of a 3-cell is the union of its six faces. For practical purposes it is easy to use neighborhood operations (called local operations) on a digital image I which define a value at p ∈ C in the transformed image based on pixel values in I at p ∈ C and its immediate neighbors in Nα(p).
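The adjacency and connectedness definitions above map directly onto a breadth-first labeling of components. The following sketch is our illustration, not code from the report; it labels the components of ⟨I⟩ under 4- or 8-adjacency in the grid point model.

import numpy as np
from collections import deque

def label_components(binary, neighbors=4):
    """Label connected components of a binary image under 4- or 8-adjacency."""
    if neighbors == 4:
        offsets = ((-1, 0), (1, 0), (0, -1), (0, 1))
    else:  # 8-adjacency
        offsets = tuple((dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                        if (dr, dc) != (0, 0))
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    current = 0
    for r in range(h):
        for c in range(w):
            if binary[r, c] and labels[r, c] == 0:
                current += 1                  # start a new component
                queue = deque([(r, c)])
                labels[r, c] = current
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in offsets:
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w and binary[ny, nx]
                                and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            queue.append((ny, nx))
    return labels, current

if __name__ == "__main__":
    img = np.array([[1, 0, 0, 0],
                    [0, 1, 0, 0],
                    [0, 0, 1, 1],
                    [1, 0, 0, 1]], dtype=bool)
    _, n4 = label_components(img, neighbors=4)
    _, n8 = label_components(img, neighbors=8)
    print("4-connected components:", n4, "| 8-connected components:", n8)

The diagonal chain in the test image splits into separate 4-components but merges into one 8-component, illustrating why the choice of adjacency matters.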
2 Non-iterative Algorithms

Non-iterative algorithms deliver subsets of components in specified scan orders, without testing connectivity preservation in a number of iterations. In this section we use only the grid point model.

2.1 "Distance Skeleton" Algorithms

Blum [3] suggested a skeleton representation by a set of symmetric points. In a closed subset of the Euclidean plane, a point p is called symmetric iff at least 2 points exist on the boundary with equal distances to p. For every symmetric point, the associated maximal disc is the largest disc in this set. The set of symmetric points, each labeled with the radius of the associated maximal disc, constitutes the skeleton of the set. This idea of presenting a component of a digital image as a "distance skeleton" is based on the calculation of a specified distance from each point in a connected subset M ⊆ C to the complement of the subset. The local maxima of the subset represent the "distance skeleton". In [15] the d4-distance is specified as follows.

Definition 1. The distance d4(p, q) from point p to point q, p ≠ q, is the smallest positive integer n such that there exists a sequence of distinct grid points p = p0, p1, p2, ..., pn = q with pi a 4-neighbor of pi-1, 1 ≤ i ≤ n.

If p = q, the distance between them is defined to be zero. The distance d4(p, q) has all the properties of a metric. Given a binary digital image, we transform it into a new one which represents at each point p ∈ ⟨I⟩ the d4-distance to the pixels having value zero. The transformation includes two steps. We apply the function f1 to the image I in standard scan order, producing I*(i, j) = f1(i, j, I(i, j)), and f2 in reverse standard scan order, producing T(i, j) = f2(i, j, I*(i, j)), as follows:

f1(i, j, I(i, j)) =
  0, if I(i, j) = 0;
  min{ I*(i - 1, j) + 1, I*(i, j - 1) + 1 }, if I(i, j) = 1 and (i ≠ 1 or j ≠ 1);
  m + n, otherwise.

f2(i, j, I*(i, j)) = min{ I*(i, j), T(i + 1, j) + 1, T(i, j + 1) + 1 }

The resulting image T is the distance transform image of I. Note that T is a set {[(i, j), T(i, j)] : 1 ≤ i ≤ n and 1 ≤ j ≤ m}. Let T* ⊆ T be such that [(i, j), T(i, j)] ∈ T* iff none of the four points in A4((i, j)) has a value in T equal to T(i, j) + 1. For all remaining points (i, j) let T*(i, j) = 0. This image T* is called the distance skeleton. Now we apply the function g1 to the distance skeleton T* in standard scan order, producing T**(i, j) = g1(i, j, T*(i, j)), and g2 to the result of g1 in reverse standard scan order, producing T***(i, j) = g2(i, j, T**(i, j)), as follows:

g1(i, j, T*(i, j)) = max{ T*(i, j), T**(i - 1, j) - 1, T**(i, j - 1) - 1 }

g2(i, j, T**(i, j)) = max{ T**(i, j), T***(i + 1, j) - 1, T***(i, j + 1) - 1 }

The result T*** is equal to the distance transform image T. Both functions g1 and g2 define an operator G, with G(T*) = g2(g1(T*)) = T***, and we have [15]:

Theorem 1. G(T*) = T, and if T0 is any subset of the image T (extended to an image by having value 0 in all remaining positions) such that G(T0) = T, then T0(i, j) = T*(i, j) at all positions of T* with non-zero values.

Informally, the theorem says that the distance transform image is reconstructible from the distance skeleton, and that the skeleton is the smallest data set needed for such a reconstruction. The d4 distance used here differs from the Euclidean metric; for instance, the d4-distance skeleton is not invariant under rotation. For an approximation of the Euclidean distance, some authors suggested the use of different weights for grid point neighborhoods [4]. Montanari [11] introduced a quasi-Euclidean distance. In general, the d4-distance skeleton is a subset of the pixels (p, T(p)) of the transformed image, and it is not necessarily connected.
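The two-pass computation of f1 and f2 described above can be written out directly. The sketch below is our reconstruction of that procedure (using m + n as the initial "large" value, as in the text), not code from the report.

import numpy as np

def d4_distance_transform(binary):
    """Two-pass d4 (city-block) distance transform of a binary image.

    Each object pixel receives its d4 distance to the nearest zero pixel,
    following the forward pass f1 and the backward pass f2 above.
    """
    n, m = binary.shape
    big = m + n                      # exceeds any possible d4 distance
    t = np.zeros((n, m), dtype=int)
    # Forward pass (f1): standard scan order.
    for i in range(n):
        for j in range(m):
            if binary[i, j]:
                up = t[i - 1, j] + 1 if i > 0 else big
                left = t[i, j - 1] + 1 if j > 0 else big
                t[i, j] = min(up, left, big)
    # Backward pass (f2): reverse standard scan order.
    for i in range(n - 1, -1, -1):
        for j in range(m - 1, -1, -1):
            down = t[i + 1, j] + 1 if i < n - 1 else big
            right = t[i, j + 1] + 1 if j < m - 1 else big
            t[i, j] = min(t[i, j], down, right)
    return t

if __name__ == "__main__":
    img = np.zeros((7, 7), dtype=bool)
    img[1:6, 1:6] = True             # 5x5 square of object pixels
    print(d4_distance_transform(img))

The distance skeleton T* would then be obtained by keeping exactly the local maxima of this transform, as defined above.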
2.2 "Critical Points" Algorithms

The simplest category of these algorithms determines the midpoints of subsets of connected components in standard scan order for each row. Let l be an index for the number of connected components in one row of the original image. We define the following functions for 1 ≤ i ≤ n:

e_i(l) = j, if this is the l-th case of I(i, j) = 1 and I(i, j - 1) = 0 in row i, counting from the left, with I(i, 0) = 0;

o_i(l) = j, if this is the l-th case of I(i, j) = 1 and I(i, j + 1) = 0 in row i, counting from the left, with I(i, m + 1) = 0;

m_i(l) = int((o_i(l) - e_i(l)) / 2) + e_i(l).

The result of scanning row i is a set of coordinates (i, m_i(l)) of the midpoints of the connected components in row i. The set of midpoints of all rows constitutes a critical-point skeleton of an image I. This method is computationally efficient. The results are subsets of pixels of the original objects, and these subsets are not necessarily connected. They can form "noisy branches" when object components are nearly parallel to the image rows. They may be useful for special applications where the scanning direction is approximately perpendicular to the main orientations of the object components.

References

[1] C. Arcelli, L. Cordella, S. Levialdi: Parallel thinning of binary pictures. Electron. Lett. 11:148-149, 1975.
[2] C. Arcelli, G. Sanniti di Baja: Skeletons of planar patterns. In: Topological Algorithms for Digital Image Processing (T. Y. Kong, A. Rosenfeld, eds.), North-Holland, 99-143, 1996.
[3] H. Blum: A transformation for extracting new descriptors of shape. In: Models for the Perception of Speech and Visual Form (W. Wathen-Dunn, ed.), MIT Press, Cambridge, Mass., 362-380, 1967.

Chinese translation: Digital Image Processing, 1 Introduction. Many operators have been proposed for representing a connected component in a digital image by a reduced amount of data or a simplified shape.

An Introduction to Digital Image Processing: Foreign Literature Translation


Appendix 1: Original Text

Source: "21st Century Practical Teaching Plan Series for Applied Undergraduate Electronic Communication", Specialty English for Information and Communication Engineering, ch02_1.pdf, pp. 120-124. Eds.: Han Dingding, Zhao Jumin, et al.

Text A: An Introduction to Digital Image Processing

1. Introduction

Digital image processing remains a challenging domain of programming, for several reasons. First, the issue of digital image processing appeared relatively late in computer history: it had to wait for the arrival of the first graphical operating systems to become a true matter. Secondly, digital image processing requires the most careful optimizations, especially for real-time applications. Comparing image processing and audio processing is a good way to fix ideas. Consider the memory bandwidth necessary for examining the pixels of a 320x240, 32-bit bitmap 30 times a second: 10 Mo/sec. With the same quality standard, real-time processing of a stereo audio stream needs 44100 (samples per second) x 2 (bytes per sample per channel) x 2 (channels) = 176 Ko/sec, which is 50 times less. Obviously we will not be able to use the same techniques for both audio and image signal processing. Finally, digital image processing is by definition a two-dimensional domain; this somewhat complicates things when elaborating digital filters.

We will explore some of the existing methods used to deal with digital images, starting with a very basic approach to color interpretation. At a more advanced level of interpretation come matrix convolution and digital filters. Finally, we will have an overview of some applications of image processing. The aim of this document is to give the reader a short overview of the existing techniques in digital image processing. We will neither go deep into theory nor into the coding itself; we will concentrate on the algorithms themselves, the methods. This document should therefore be used as a source of ideas only, not as a source of code.

2. A simple approach to image processing

(1) The color data: vector representation

① Bitmaps

The original and basic way of representing a digital colored image in a computer's memory is obviously a bitmap. A bitmap is constituted of rows of pixels, a contraction of the words "Picture Element". Each pixel has a particular value which determines its appearing color. This value is qualified by three numbers giving the decomposition of the color into the three primary colors Red, Green and Blue. Any color visible to the human eye can be represented this way. The decomposition of a color into the three primary colors is quantified by a number between 0 and 255. For example, white will be coded as R = 255, G = 255, B = 255; black will be known as (R,G,B) = (0,0,0); and, say, bright pink will be (255,0,255). In other words, an image is an enormous two-dimensional array of color values, pixels, each of them coded on 3 bytes, representing the three primary colors. This allows the image to contain a total of 256x256x256 = 16.8 million different colors. This technique is also known as RGB encoding, and is specifically adapted to human vision. With cameras or other measuring instruments we are capable of "seeing" thousands of other "colors", in which case RGB encoding is inappropriate.

The range 0-255 was agreed on for two good reasons. The first is that the human eye is not sensitive enough to distinguish more than 256 levels of intensity (1/256 = 0.39%) for a color.
That is to say, an image presented to a human observer will not be improved by using more than 256 levels of gray (256 shades of gray between black and white); therefore 256 levels seem enough quality. The second reason for the value 255 is obviously that it is convenient for computer storage: a byte, the computer's memory unit, can code up to 256 values.

As opposed to the audio signal, which is coded in the time domain, the image signal is coded in a two-dimensional spatial domain. The raw image data is much more straightforward and easy to analyze than the temporal-domain data of the audio signal. This is why we will be able to build many effects and filters for images without transforming the source data, while this would be totally impossible for an audio signal. This first part deals with the simple effects and filters you can compute without transforming the source data, just by analyzing the raw image signal as it is.

The standard dimensions, also called resolution, of a bitmap are about 500 rows by 500 columns. This is the resolution encountered in standard analog television and standard computer applications. You can easily calculate the memory space a bitmap of this size requires: we have 500x500 pixels, each coded on three bytes, which makes 750 Ko. It might not seem enormous compared to the size of hard drives, but if you must deal with an image in real time, processing gets tougher. Indeed, rendering images fluidly demands a minimum of 30 images per second, and the required bandwidth of 10 Mo/sec is enormous. We will see later that the limitation of data access and transfer in RAM is of crucial importance in image processing, and it sometimes happens to matter much more than the limitation of CPU computing, which may seem quite different from what one is used to in optimization issues. Notice that, with modern compression techniques such as JPEG 2000, the total size of an image can easily be reduced by a factor of 50 without losing much quality, but this is another topic.

② Vector representation of colors

As we have seen, in a bitmap colors are coded on three bytes representing their decomposition into the three primary colors. It sounds obvious to a mathematician to immediately interpret colors as vectors in a three-dimensional space where each axis stands for one of the primary colors. We can therefore benefit from most geometric mathematical concepts to deal with our colors, such as norms, scalar products, projections, rotations, or distances. This will be really interesting for some of the filters we will see soon. Figure 1 illustrates this new interpretation.

[Figure 1: colors interpreted as vectors in the three-dimensional RGB space.]

(2) Immediate application to filters

① Edge detection

From what we have said before, we can quantify the 'difference' between two colors by computing the geometric distance between the vectors representing those two colors. For two colors C1 = (R1,G1,B1) and C2 = (R2,G2,B2), the distance between them is given by the formula:

D(C1, C2) = sqrt((R1 - R2)^2 + (G1 - G2)^2 + (B1 - B2)^2)

This leads us to our first filter: edge detection. The aim of edge detection is to determine the edges of shapes in a picture and to be able to draw a result bitmap where edges are in white on a black background (for example). The idea is very simple: we go through the image pixel by pixel and compare the color of each pixel to that of its right neighbor and its bottom neighbor. If one of these comparisons yields too big a difference, the pixel studied is part of an edge and should be turned to white; otherwise it is kept in black. The fact that we compare each pixel with its bottom and right neighbors comes from the fact that images are in two dimensions. Indeed, if you imagine an image with only alternating horizontal stripes of red and blue, the algorithm would not see the edges of those stripes if it only compared a pixel to its right neighbor; thus the two comparisons per pixel are necessary.

This algorithm was tested on several source images of different types and gives fairly good results. It is mainly limited in speed by frequent memory accesses. The two square roots can be removed easily by squaring the comparison; however, the color extractions cannot be improved very easily. If we consider that the longest operations are the GetPixel and PutPixel functions, we obtain a polynomial complexity of 4*N*M, where N is the number of rows and M the number of columns. This is not reasonably fast enough to be computed in real time: for a 300x300x32 image I get about 26 transforms per second on an Athlon XP 1600+. Quite slow indeed. A few words about the results of this algorithm: the quality depends on the sharpness of the source image. If the source image is very sharp-edged, the result will reach perfection; however, if you have a very blurry source, you may want to pass it through a sharpness filter first, which we will study later. Another remark: you can also compare each pixel with its second- or third-nearest neighbors on the right and on the bottom instead of the nearest neighbors; the edges will be thicker but also more exact, depending on the source image's sharpness. Finally, we will see later that there is another way to do edge detection, with matrix convolution.
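Here is a compact version of the scan just described, vectorized with NumPy instead of the per-pixel GetPixel/PutPixel loop the author measured; the threshold of 60 is an assumed value, and squaring the comparison removes the square roots as suggested above.

import numpy as np

def color_edge_map(rgb, threshold=60.0):
    """Mark a pixel as an edge if it differs too much from its right or bottom neighbor.

    rgb: (H, W, 3) array; returns a boolean (H, W) edge map.
    """
    img = rgb.astype(float)
    # Squared color distances to the right and bottom neighbors.
    d_right = np.zeros(img.shape[:2])
    d_down = np.zeros(img.shape[:2])
    d_right[:, :-1] = ((img[:, 1:] - img[:, :-1]) ** 2).sum(axis=2)
    d_down[:-1, :] = ((img[1:] - img[:-1]) ** 2).sum(axis=2)
    # Compare against the squared threshold, avoiding the square roots.
    return (d_right > threshold ** 2) | (d_down > threshold ** 2)

if __name__ == "__main__":
    img = np.zeros((32, 32, 3), dtype=np.uint8)
    img[:, 16:] = (255, 0, 255)      # bright pink right half
    edges = color_edge_map(img)
    print("edge pixels:", int(edges.sum()))   # one vertical boundary -> 32 pixels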
② Color extraction

The other immediate application of pixel comparison is color extraction. Instead of comparing each pixel with its neighbors, we compare it with a given color C1. This algorithm tries to detect all the objects in the image that are colored with C1. This is quite useful in robotics, for example: it enables you to search streaming images for a particular color, so you can then make your robot go fetch a red ball, say. We call the reference color, the one we are looking for in the image, C0 = (R0,G0,B0). Once again, even if the square root can easily be removed, removing it does not really improve the speed of the algorithm. What really slows down the whole loop are the NxM GetPixel memory accesses and the PutPixel calls. This determines the complexity of this algorithm: 2xNxM, where N and M are respectively the numbers of rows and columns in the bitmap. The effective speed measured on my computer is about 40 transforms per second on a 300x300x32 source bitmap.

3. JPEG image compression theory

JPEG compression is achieved in four steps:

(1) Color mode conversion and sampling

The RGB color system is the most common way of representing color. JPEG uses the YCbCr color system. To process an ordinary full-color image with the basic JPEG compression method, the RGB image data must first be converted to YCbCr color model data. Y represents luminance; Cb and Cr represent hue and saturation. The data conversion is completed by the following calculation.
Y = 0.2990 R + 0.5870 G + 0.1140 B
Cb = -0.1687 R - 0.3313 G + 0.5000 B + 128
Cr = 0.5000 R - 0.4187 G - 0.0813 B + 128

The human eye is more sensitive to low-frequency data than to high-frequency data; in fact, it is much more sensitive to changes in brightness than to changes in color, i.e. the Y component of the data is the more important one. Since the Cb and Cr components are comparatively unimportant, only part of their data needs to be kept, which increases the compression ratio. JPEG usually offers two sampling schemes, YUV411 and YUV422, which express the sampling ratios of the three components Y, Cb, and Cr.

(2) DCT transformation

DCT stands for discrete cosine transform. It converts a block of light-intensity data into frequency data, so that the intensity variations can be observed. If the high-frequency data are modified and the data are then transformed back to the original form, the result clearly differs somewhat from the original data, but the human eye does not easily notice the difference. For compression, the original image data are divided into 8x8 matrices of data units. In JPEG, the luminance matrix, the hue matrix Cb, and the saturation matrix Cr together form a basic unit called the MCU. Each MCU contains no more than 10 matrices. For example, with a 4:2:2 sampling ratio in rows and columns, each MCU contains four luminance matrices, one hue matrix, and one saturation matrix. When the image data are divided into 8x8 matrices, 128 must also be subtracted from each value before it is substituted into the DCT formula to carry out the transform; the values must be reduced by 128 because the range of numbers accepted by the DCT formula is -128 to +127.

(3) Quantization

After the image data have been converted to frequency coefficients, they must still go through a quantization step before entering the coding phase. Quantization uses two 8x8 matrices of data, one dealing specifically with the luminance frequency coefficients and the other with the chrominance frequency coefficients; each frequency coefficient is divided by the corresponding value of the quantization matrix, and the quotient is rounded to the nearest integer, which completes the quantization. After quantization the frequency coefficients are converted from floating-point to integer values, which facilitates the final encoding. However, after the quantization phase all the data retain only an integer approximation, so once again some of the data content is lost.
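Steps (2) and (3) can be sketched in a few lines. The example below runs SciPy's DCT on a single 8x8 block with an assumed flat quantization table; real JPEG uses the standard luminance and chrominance tables instead, so this is an illustration of the mechanism, not a compliant encoder.

import numpy as np
from scipy.fft import dctn, idctn

def jpeg_block_forward(block, q):
    """DCT and quantization of one 8x8 block, as in JPEG steps (2) and (3)."""
    shifted = block.astype(float) - 128.0        # level shift to [-128, 127]
    coeffs = dctn(shifted, norm="ortho")         # 2-D type-II DCT
    return np.round(coeffs / q).astype(int)      # quantize to integers

def jpeg_block_inverse(quantized, q):
    """Dequantize and inverse-DCT, recovering an approximate block."""
    return idctn(quantized * q, norm="ortho") + 128.0

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    block = rng.integers(0, 256, size=(8, 8))
    q = np.full((8, 8), 16.0)                    # assumed flat quantization table
    qcoef = jpeg_block_forward(block, q)
    recon = jpeg_block_inverse(qcoef, q)
    print("nonzero coefficients:", int((qcoef != 0).sum()), "of 64")
    print("max reconstruction error:", float(np.abs(recon - block).max()))

The zeros produced by quantization are what the final entropy-coding step, described next, compresses so effectively.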
(4) Coding

Huffman coding, being free of patent issues, has become the most commonly used JPEG encoding; it is usually carried out over a complete MCU. In coding, the DC value and the 63 AC values of each matrix use different Huffman code tables, and luminance and chrominance also require different Huffman code tables, so a total of four code tables are needed to complete the JPEG coding successfully. The DC values are coded by differential pulse code modulation, i.e. within the same component image, the difference between each DC value and the previous DC value is encoded. The main reason for using difference coding for the DC values is that in a continuous-tone image the difference is mostly smaller than the original value, so the number of bits needed to encode the difference is smaller than the number of bits needed to encode the original value. For example, for a difference of 5, whose binary representation is 101, three bits are retained; if the difference is -5, it is first changed to the positive integer 5 and then converted to its binary one's complement. (In the one's complement, each bit that is 0 becomes 1, and each bit that is 1 becomes 0.) A table lists, for each difference, the number of bits to be retained and the bit content of the difference. In front of the difference, some additional Huffman code bits are added: for a luminance difference of 5 (101), three bits long, the Huffman prefix is 100, and concatenating the two gives 100101. Two further tables give the luminance and chrominance DC difference codes; according to their content, the Huffman prefix can be added to the DC difference to complete the DC coding.

4. Conclusions

Digital image processing is far from being a simple transposition of audio signal principles to a two-dimensional space. The image signal has its own particular properties, and therefore we have to deal with it in a specific way. The Fast Fourier Transform, for example, which is such a practical tool in audio processing, becomes useless in image processing. Conversely, digital filters are easier to create directly, without any signal transforms, in image processing. Digital image processing has become a vast domain of modern signal technologies. Its applications pass far beyond simple aesthetic considerations, and they include medical imagery, television and multimedia signals, security, portable digital devices, video compression, and even digital movies. We have flown over some elementary notions in image processing, but there is yet a lot more to explore. If you are beginning in this topic, I hope this paper will have given you the taste and the motivation to carry on.

Appendix 2: Translation

Source: "21st Century National Practical Teaching Plan Series for Applied Undergraduate Electronic Communication", Specialty English for Information and Communication Engineering, ch02_1.pdf, pp. 120-124. Chief editors: Han Dingding, Zhao Jumin, et al.

Text: An Introduction to Digital Image Processing. 1. Introduction: several reasons make digital image processing a still challenging domain of programming.

Digital Image Processing Courseware (Gonzalez, Third Edition): English Translation Slides


The image on the left is a standard test image used to evaluate the actual effect of image processing techniques and computer algorithms. The name of this image is Lenna. It is made up of a set of numbers: the original image is 256 pixels in both width and height, with eight bits per pixel; it is stored in BMP format and is about 66K bytes in size.
The objective world is a three-dimensional space, but the general image is two-dimensional. Two-dimensional images inevitably lose part of the information in the process of reflecting the three-dimensional world. Even the recorded information can be distorted, to the point that the objects become difficult to recognize. Therefore, it is necessary to recover and reconstruct information from images, and to analyze and extract mathematical models of images, so that people can gain a correct and profound understanding of what is recorded in an image. This process is the process of image processing.

Foreign Literature Translation: Digital Image Processing and Pattern Recognition Techniques for the Detection of Cancer


English Original

Digital image processing and pattern recognition techniques for the detection of cancer

Cancer is the second leading cause of death for both men and women in the world, and is expected to become the leading cause of death in the next few decades. In recent years, cancer detection has become a significant area of research activity in the image processing and pattern recognition community. Medical imaging technologies have already made a great impact on our capabilities of detecting cancer early and diagnosing the disease more accurately. In order to further improve the efficiency and veracity of diagnosis and treatment, image processing and pattern recognition techniques have been widely applied to the analysis and recognition of cancer, the evaluation of the effectiveness of treatment, and the prediction of the development of cancer. The aim of this special issue is to bring together researchers working on image processing and pattern recognition techniques for the detection and assessment of cancer, and to promote research in image processing and pattern recognition for oncology. A number of papers were submitted to this special issue, and each was peer-reviewed by at least three experts in the field. From these submissions, 17 were finally selected for inclusion. The selected papers cover a broad range of topics that are representative of the state of the art in computer-aided detection or diagnosis (CAD) of cancer. They cover several imaging modalities (such as CT, MRI, and mammography) and different types of cancer (including breast cancer, skin cancer, etc.), which we summarize below.

Skin cancer is the most prevalent among all types of cancers. Three papers in this special issue deal with skin cancer. Yuan et al. propose a skin lesion segmentation method based on region fusion and narrow-band energy graph partitioning. The method can deal with challenging situations involving skin lesions, such as topological changes, weak or false edges, and asymmetry. Tang proposes a snake-based approach using multi-direction gradient vector flow (GVF) for the segmentation of skin cancer images. A new anisotropic diffusion filter is developed as a preprocessing step; after the noise is removed, the image is segmented using a GVF snake. The proposed method is robust to noise and can correctly trace the boundary of the skin cancer even if there are other objects near the skin cancer region. Serrano et al. present a method based on Markov random fields (MRF) to detect different patterns in dermoscopic images. Unlike previous approaches to automatic dermatological image classification with the ABCD rule (Asymmetry, Border irregularity, Color variegation, and Diameter greater than 6 mm or growing), this paper follows a new trend of looking for specific patterns in lesions which could lead physicians to a clinical assessment.

Breast cancer is the most frequently diagnosed cancer other than skin cancer and a leading cause of cancer deaths in women in developed countries. In recent years, CAD schemes have been developed as a potentially efficacious solution for improving radiologists' diagnostic accuracy in breast cancer screening and diagnosis. The predominant approach of CAD in breast cancer, and in medical imaging in general, is to use automated image analysis as a "second reader", with the aim of improving radiologists' diagnostic performance.
Thanks to intense research and development efforts, CAD schemes have now been introduced in screening mammography, and clinical studies have shown that such schemes can result in higher sensitivity at the cost of a small increase in recall rate. In this issue, we have three papers in the area of CAD for breast cancer. Wei et al. propose an image-retrieval-based approach to CAD, in which retrieved images similar to the one being evaluated (called the query image) are used to support a CAD classifier, yielding an improved measure of malignancy. This involves searching a large database for the images that are most similar to the query image, based on features that are automatically extracted from the images. Dominguez et al. investigate the use of image features characterizing the boundary contours of mass lesions in mammograms for the classification of benign vs. malignant masses. They study and evaluate the impact of these features on diagnostic accuracy with several different classifier designs, when the lesion contours are extracted using two different automatic segmentation techniques. Schaefer et al. study the use of thermal imaging for breast cancer detection. In their scheme, statistical features are extracted from thermograms to quantify bilateral differences between left and right breast regions, which are subsequently used as input to a fuzzy-rule-based classification system for diagnosis.

Colon cancer is the third most common cancer in men and women, and also the third most common cause of cancer-related death in the USA. Yao et al. propose a novel technique to detect colonic polyps using CT colonography. They use ideas from geographic information systems to employ topographical height maps, which mimic the procedure used by radiologists for the detection of polyps. The technique can also be used to measure the size of polyps consistently. Hafner et al. present a technique to classify and assess colonic polyps, which are precursors of colorectal cancer. The classification is performed based on the pit pattern in zoom-endoscopy images. They propose a novel color wavelet cross co-occurrence matrix which employs the wavelet transform to extract texture features from color channels.

Lung cancer occurs most commonly between the ages of 45 and 70 years, and has one of the worst survival rates of all types of cancer. Two papers on lung cancer research are included in this special issue. Pattichis et al. evaluate new mathematical models that are based on statistics, logic functions, and several statistical classifiers to analyze reader performance in grading chest radiographs for pneumoconiosis. The technique can potentially be applied to the detection of nodules related to early stages of lung cancer. El-Baz et al. focus on the early diagnosis of pulmonary nodules that may lead to lung cancer. Their methods monitor the development of lung nodules in successive low-dose chest CT scans. They propose a new two-step registration method to align two detected nodules globally and locally. Experiments on a relatively large data set demonstrate that the proposed registration method contributes to the precise identification and diagnosis of nodule development.

It is estimated that almost a quarter of a million people in the USA are living with kidney cancer and that this number increases by 51,000 every year. Linguraru et al.
propose a computer-assisted radiology tool to assess renal tumors in contrast-enhanced CT for the management of tumor diagnosis and response to treatment. The tool accurately segments, measures, and characterizes renal tumors, and has been adopted in clinical practice. Validation against manual tools shows high correlation.

Neuroblastoma is a cancer of the sympathetic nervous system and one of the most malignant diseases affecting children. Two papers in this field are included in this special issue. Sertel et al. present techniques for classifying the degree of Schwannian stromal development as either stroma-rich or stroma-poor, a critical decision factor affecting the prognosis. The classification is based on texture features extracted using co-occurrence statistics and local binary patterns. Their work is useful in helping pathologists in the decision-making process. Kong et al. propose image processing and pattern recognition techniques to classify the grade of neuroblastic differentiation on whole-slide histology images. The presented technique promises to facilitate the grading of whole-slide images of neuroblastoma biopsies with high throughput.

This special issue also includes papers which are not directly focused on the detection or diagnosis of a specific type of cancer but deal with the development of techniques applicable to cancer detection. Ta et al. propose a framework of graph-based tools for the segmentation of microscopic cellular images. Based on this framework, automatic or interactive segmentation schemes are developed for color cytological and histological images. Tosun et al. propose an object-oriented segmentation algorithm for biopsy images for the detection of cancer. The proposed algorithm uses a homogeneity measure based on the distribution of objects to characterize tissue components. Colon biopsy images were used to verify the effectiveness of the method; the segmentation accuracy was improved compared to its pixel-based counterpart. Narasimha et al. present a machine-learning tool for automatic texton-based joint classification and segmentation of mitochondria in MNT-1 cells imaged using an ion-abrasion scanning electron microscope. The proposed approach requires minimal user intervention and can achieve high classification accuracy. El Naqa et al. investigate intensity-volume histogram metrics as well as shape and texture features extracted from PET images to predict a patient's response to treatment. Preliminary results suggest that the proposed approach could provide better tools and discriminant power for functional imaging in clinical prognosis.

We hope that the collection of papers selected for this special issue will serve as a basis for inspiring further rigorous research in CAD of various types of cancer. We invite you to explore this special issue and benefit from these papers. On behalf of the Editorial Committee, we take this opportunity to gratefully acknowledge the authors and the reviewers for their diligence in abiding by the editorial timeline. Our thanks also go to the Editors-in-Chief of Pattern Recognition, Dr. Robert S. Ledley and Dr. C. Y. Suen, for their encouragement and support of this special issue.

Translation

Digital image processing and pattern recognition techniques for the detection of cancer. Cancer is the second leading cause of death for both men and women in the world.

Research on the Application of Digital Image Processing in Nondestructive Testing


With the development and progress of science and technology, nondestructive testing (NDT) has become an indispensable tool in modern industrial production.

Traditional NDT methods detect defects mainly through visual observation or the measurement of physical quantities, but they suffer from problems such as strong subjectivity and insufficient precision.

The application of digital image processing can effectively solve these problems.

Digital Image Processing (DIP) is a technology that uses a digital computer as a tool to process and analyze digital signals.

It extracts the information contained in digital signals and uses the computer for further processing and analysis.

NDT methods based on digital image processing not only improve detection accuracy and precision but also greatly shorten inspection time and reduce labor costs, so they are widely used in visual defect inspection, structural health monitoring, and related fields.

1. Applications of digital image processing in nondestructive testing

1. Optical panoramic imaging

Optical panoramic imaging is an NDT method based on digital image processing.

Several cameras photograph the inspected object simultaneously, and a computer stitches the photographs into one continuous, high-resolution panoramic image.

This method effectively eliminates the blind zones of image capture and greatly improves detection accuracy.

Optical panoramic imaging is applied very widely.

For example, in aerospace and transportation it can be used to inspect the interior of components and discover problems such as defects and wear.

In addition, in civil engineering it can be used for surface inspection and monitoring of buildings.

2. Infrared thermography

Infrared thermography is an NDT method based on digital image processing: infrared radiation is converted into a digital signal, and a computer processes and analyzes this signal to carry out the inspection.

This method can measure the temperature distribution of a target and reveal internal defects and anomalies.

Infrared thermography is applied very widely, for example in power equipment, civil engineering, the metallurgical industry, and other fields.

3. Digital radiographic imaging

Digital radiographic imaging is an NDT method based on digital image processing.

A digital radiographic imaging device scans the inspected object to obtain a digital image of its internal structure, which a computer then processes and analyzes to accomplish the nondestructive test.

Digital Image Processing: English Glossary


Algebraic operation: an image processing operation in which two images are combined pixel by pixel through a sum, difference, product, or quotient.
Aliasing: an artifact produced when the pixel spacing of an image is too large relative to the image detail.
Arc: part of a graph; a set of connected pixels representing a segment of a curve.
Run: in image coding, a sequence of connected pixels with the same gray level.
Run length: the number of pixels in a run.
Run length encoding: an image compression technique in which each image row is represented as a sequence of runs, each run defined by a run length and a gray value (a code sketch follows this glossary).
Sampling: the process of dividing an image into pixels (according to a sampling grid) and measuring a local property (such as brightness or color) at each of them.
Image matching: the process of quantitatively comparing two images in order to determine their degree of similarity.
Image-processing operation: a series of steps that transforms an input image into an output image.
Image reconstruction: the process of constructing or restoring an image from data in non-image form.
Image registration: the process of geometrically aligning one image of a scene with another image of the same scene so that the objects in them coincide.
Quantitative image analysis: the process of extracting quantitative data from a digital image.
Quantization: the process of assigning, at each pixel, an element of a gray-level set to the local image property.
Region: a connected subset of an image.
Region growing: an image segmentation technique that forms regions by repeatedly merging adjacent subregions of similar gray level or texture.
Edge detection: an image segmentation technique that identifies edge pixels by examining their neighborhoods.
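As an illustration of the run, run length, and run length encoding entries above, here is a minimal encoder and decoder for one image row; the example row is made up for demonstration.

def run_length_encode(row):
    """Encode one image row as (run length, gray value) pairs."""
    runs = []
    length = 1
    for prev, cur in zip(row, row[1:]):
        if cur == prev:
            length += 1
        else:
            runs.append((length, prev))
            length = 1
    if row:
        runs.append((length, row[-1]))
    return runs

def run_length_decode(runs):
    """Expand (run length, gray value) pairs back into a row."""
    return [value for length, value in runs for _ in range(length)]

if __name__ == "__main__":
    row = [0, 0, 0, 255, 255, 7, 7, 7, 7]
    encoded = run_length_encode(row)
    print(encoded)                      # [(3, 0), (2, 255), (4, 7)]
    assert run_length_decode(encoded) == row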

The Research of Digital Image Processing Techniques

1 Introduction

Interest in digital image processing methods stems from two principal application areas: improvement of pictorial information for human interpretation; and processing of image data for storage, transmission, and representation for autonomous machine perception. This chapter has several objectives: (1) to define the scope of the field that we call image processing; (2) to give a historical perspective of the origins of this field; (3) to give an idea of the state of the art in image processing by examining some of the principal areas in which it is applied; (4) to discuss briefly the principal approaches used in digital image processing; (5) to give an overview of the components contained in a typical, general-purpose image processing system; and (6) to provide direction to the books and other literature where image processing work is normally reported.

1.1 What Is Digital Image Processing?

An image may be defined as a two-dimensional function, f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image. The field of digital image processing refers to processing digital images by means of a digital computer. Note that a digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pels, and pixels. Pixel is the term most widely used to denote the elements of a digital image. We consider these definitions in more formal terms in Chapter 2.

Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the electromagnetic (EM) spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma rays to radio waves. They can operate on images generated by sources that humans are not accustomed to associating with images, including ultrasound, electron microscopy, and computer-generated images. Thus, digital image processing encompasses a wide and varied field of applications.

There is no general agreement among authors regarding where image processing stops and other related areas, such as image analysis and computer vision, start. Sometimes a distinction is made by defining image processing as a discipline in which both the input and output of a process are images. We believe this to be a limiting and somewhat artificial boundary. For example, under this definition, even the trivial task of computing the average intensity of an image (which yields a single number) would not be considered an image processing operation. On the other hand, there are fields such as computer vision whose ultimate goal is to use computers to emulate human vision, including learning and being able to make inferences and take actions based on visual inputs. This area itself is a branch of artificial intelligence (AI), whose objective is to emulate human intelligence. The field of AI is in its earliest stages of infancy in terms of development, with progress having been much slower than originally anticipated. The area of image analysis (also called image understanding) is in between image processing and computer vision. There are no clear-cut boundaries in the continuum from image processing at one end to computer vision at the other.
However, one useful paradigm is to consider three types of computerized processes in this continuum: low-, mid-, and high-level processes. Low-level processes involve primitive operations such as image preprocessing to reduce noise, contrast enhancement, and image sharpening. A low-level process is characterized by the fact that both its inputs and outputs are images. Mid-level processing of images involves tasks such as segmentation (partitioning an image into regions or objects), description of those objects to reduce them to a form suitable for computer processing, and classification (recognition) of individual objects. A mid-level process is characterized by the fact that its inputs generally are images, but its outputs are attributes extracted from those images (e.g., edges, contours, and the identity of individual objects). Finally, higher-level processing involves "making sense" of an ensemble of recognized objects, as in image analysis, and, at the far end of the continuum, performing the cognitive functions normally associated with vision.

Based on the preceding comments, we see that a logical place of overlap between image processing and image analysis is the area of recognition of individual regions or objects in an image. Thus, what we call in this book digital image processing encompasses processes whose inputs and outputs are images and, in addition, encompasses processes that extract attributes from images, up to and including the recognition of individual objects. As a simple illustration to clarify these concepts, consider the area of automated analysis of text. The processes of acquiring an image of the area containing the text, preprocessing that image, extracting (segmenting) the individual characters, describing the characters in a form suitable for computer processing, and recognizing those individual characters are in the scope of what we call digital image processing in this book. Making sense of the content of the page may be viewed as being in the domain of image analysis and even computer vision, depending on the level of complexity implied by the statement "making sense". As will become evident shortly, digital image processing, as we have defined it, is used successfully in a broad range of areas of exceptional social and economic value. The concepts developed in the following chapters are the foundation for the methods used in those application areas.

1.2 The Origins of Digital Image Processing

One of the first applications of digital images was in the newspaper industry, when pictures were first sent by submarine cable between London and New York. Introduction of the Bartlane cable picture transmission system in the early 1920s reduced the time required to transport a picture across the Atlantic from more than a week to less than three hours. Specialized printing equipment coded pictures for cable transmission and then reconstructed them at the receiving end. Figure 1.1 was transmitted in this way and reproduced on a telegraph printer fitted with typefaces simulating a halftone pattern.

Some of the initial problems in improving the visual quality of these early digital pictures were related to the selection of printing procedures and the distribution of intensity levels. The printing method used to obtain Fig. 1.1 was abandoned toward the end of 1921 in favor of a technique based on photographic reproduction made from tapes perforated at the telegraph receiving terminal. Figure 1.2 shows an image obtained using this method.
1.2 The Origins of Digital Image Processing

One of the first applications of digital images was in the newspaper industry, when pictures were first sent by submarine cable between London and New York. Introduction of the Bartlane cable picture transmission system in the early 1920s reduced the time required to transport a picture across the Atlantic from more than a week to less than three hours. Specialized printing equipment coded pictures for cable transmission and then reconstructed them at the receiving end. Figure 1.1 was transmitted in this way and reproduced on a telegraph printer fitted with typefaces simulating a halftone pattern.

Some of the initial problems in improving the visual quality of these early digital pictures were related to the selection of printing procedures and the distribution of intensity levels. The printing method used to obtain Fig. 1.1 was abandoned toward the end of 1921 in favor of a technique based on photographic reproduction made from tapes perforated at the telegraph receiving terminal. Figure 1.2 shows an image obtained using this method. The improvements over Fig. 1.1 are evident, both in tonal quality and in resolution.

FIGURE 1.1 A digital picture produced in 1921 from a coded tape by a telegraph printer with special typefaces. (McFarlane)

FIGURE 1.2 A digital picture made in 1922 from a tape punched after the signals had crossed the Atlantic twice. Some errors are visible. (McFarlane)

The early Bartlane systems were capable of coding images in five distinct levels of gray. This capability was increased to 15 levels in 1929. Figure 1.3 is typical of the images that could be obtained using the 15-tone equipment. During this period, introduction of a system for developing a film plate via light beams that were modulated by the coded picture tape improved the reproduction process considerably.
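In modern terms, the jump from 5 to 15 coding levels described above is a change in gray-level quantization. As a rough sketch (our own illustration; `requantize` is a hypothetical helper, not from the source), an 8-bit image can be requantized to a given number of levels like this:

```python
import numpy as np

def requantize(image: np.ndarray, levels: int) -> np.ndarray:
    """Map an 8-bit grayscale image onto `levels` evenly spaced gray
    levels (levels >= 2), loosely analogous to the 5- and 15-level
    Bartlane codings."""
    step = 256 / levels
    indices = np.clip((image / step).astype(int), 0, levels - 1)
    # Spread the quantized indices back over the displayable 0-255 range.
    return (indices * (255 // (levels - 1))).astype(np.uint8)

# img_5  = requantize(img, 5)   # early Bartlane capability
# img_15 = requantize(img, 15)  # capability after 1929
```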
Although the examples just cited involve digital images, they are not considered digital image processing results in the context of our definition, because computers were not involved in their creation. Thus, the history of digital image processing is intimately tied to the development of the digital computer. In fact, digital images require so much storage and computational power that progress in the field of digital image processing has been dependent on the development of digital computers and of supporting technologies that include data storage, display, and transmission.

The idea of a computer goes back to the invention of the abacus in Asia Minor, more than 5000 years ago. More recently, there were developments in the past two centuries that are the foundation of what we call a computer today. However, the basis for what we call a modern digital computer dates back to only the 1940s, with the introduction by John von Neumann of two key concepts: (1) a memory to hold a stored program and data, and (2) conditional branching. These two ideas are the foundation of a central processing unit (CPU), which is at the heart of computers today. Starting with von Neumann, there were a series of advances that led to computers powerful enough to be used for digital image processing. Briefly, these advances may be summarized as follows:

(1) the invention of the transistor by Bell Laboratories in 1948;

(2) the development in the 1950s and 1960s of the high-level programming languages COBOL (Common Business-Oriented Language) and FORTRAN (Formula Translator);

(3) the invention of the integrated circuit (IC) at Texas Instruments in 1958;

(4) the development of operating systems in the early 1960s;

(5) the development of the microprocessor (a single chip consisting of the central processing unit, memory, and input and output controls) by Intel in the early 1970s;

(6) the introduction by IBM of the personal computer in 1981;

(7) progressive miniaturization of components, starting with large scale integration (LSI) in the late 1970s, then very large scale integration (VLSI) in the 1980s, to the present use of ultra large scale integration (ULSI).

Figure 1.3 Unretouched cable picture of Generals Pershing and Foch, transmitted in 1929 from London to New York by 15-tone equipment.

Concurrent with these advances were developments in the areas of mass storage and display systems, both of which are fundamental requirements for digital image processing.

The first computers powerful enough to carry out meaningful image processing tasks appeared in the early 1960s. The birth of what we call digital image processing today can be traced to the availability of those machines and the onset of the space program during that period. It took the combination of those two developments to bring into focus the potential of digital image processing concepts. Work on using computer techniques for improving images from a space probe began at the Jet Propulsion Laboratory (Pasadena, California) in 1964, when pictures of the moon transmitted by Ranger 7 were processed by a computer to correct various types of image distortion inherent in the on-board television camera. Figure 1.4 shows the first image of the moon taken by Ranger 7 on July 31, 1964 at 9:09 A.M. Eastern Daylight Time (EDT), about 17 minutes before impacting the lunar surface (the markers, called reseau marks, are used for geometric corrections, as discussed in Chapter 5). This also is the first image of the moon taken by a U.S. spacecraft. The imaging lessons learned with Ranger 7 served as the basis for improved methods used to enhance and restore images from the Surveyor missions to the moon, the Mariner series of flyby missions to Mars, the Apollo manned flights to the moon, and others.

In parallel with space applications, digital image processing techniques began in the late 1960s and early 1970s to be used in medical imaging, remote Earth resources observations, and astronomy. The invention in the early 1970s of computerized axial tomography (CAT), also called computerized tomography (CT) for short, is one of the most important events in the application of image processing in medical diagnosis. Computerized axial tomography is a process in which a ring of detectors encircles an object (or patient) and an X-ray source, concentric with the detector ring, rotates about the object. The X-rays pass through the object and are collected at the opposite end by the corresponding detectors in the ring. As the source rotates, this procedure is repeated. Tomography consists of algorithms that use the sensed data to construct an image that represents a "slice" through the object. Motion of the object in a direction perpendicular to the ring of detectors produces a set of such slices, which constitute a three-dimensional (3-D) rendition of the inside of the object. Tomography was invented independently by Sir Godfrey N. Hounsfield and Professor Allan M. Cormack, who shared the 1979 Nobel Prize in Medicine for their invention. X-rays were discovered in 1895 by Wilhelm Conrad Roentgen, for which he received the 1901 Nobel Prize for Physics. These two inventions, nearly 100 years apart, led to some of the most active application areas of image processing today.

Figure 1.4 The first picture of the moon by a U.S. spacecraft. Ranger 7 took this image on July 31, 1964 at 9:09 A.M. EDT, about 17 minutes before impacting the lunar surface. (Courtesy of NASA.)
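The slice-construction step described above can be illustrated with the simplest reconstruction algorithm, unfiltered backprojection. The sketch below is our own simplification, not the algorithm used in actual scanners (practical CT applies filtered backprojection or iterative methods); it assumes a parallel-beam `sinogram` whose k-th row is the projection measured at the k-th of `num_angles` angles evenly spaced over 180 degrees.

```python
import numpy as np

def backproject(sinogram: np.ndarray) -> np.ndarray:
    """Unfiltered backprojection of a parallel-beam sinogram
    (rows = projections at angles evenly spaced over 180 degrees)."""
    num_angles, size = sinogram.shape
    recon = np.zeros((size, size))
    # Pixel coordinates relative to the centre of the slice.
    ys, xs = np.mgrid[0:size, 0:size] - (size - 1) / 2.0
    for k in range(num_angles):
        theta = k * np.pi / num_angles
        # Detector bin that each pixel projects onto at this angle.
        t = xs * np.cos(theta) + ys * np.sin(theta) + (size - 1) / 2.0
        idx = np.clip(np.round(t).astype(int), 0, size - 1)
        # Smear the 1-D projection values back across the image plane.
        recon += sinogram[k][idx]
    return recon / num_angles
```

Each pass through the loop "smears" one projection back over the plane; where the smears from all angles reinforce one another, the reconstructed attenuation is high. The unfiltered result is blurred, which is why real systems filter each projection first.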
