Median Filtering in Image Processing: Translated Foreign Literature (Chinese-English)
I. English Original

A NEW CONTENT BASED MEDIAN FILTER

ABSTRACT

In this paper the hardware implementation of a content-based median filter suitable for real-time impulse noise suppression is presented. The function of the proposed circuitry is adaptive; it detects the existence of impulse noise in an image neighborhood and applies the median filter operator only when necessary. In this way, the blurring of the image in process is avoided and the integrity of edge and detail information is preserved. The proposed digital hardware structure is capable of processing gray-scale images of 8-bit resolution and is fully pipelined, whereas parallel processing is used to minimize computational time. The architecture presented was implemented in FPGA and it can be used in industrial imaging applications, where fast processing is of the utmost importance. The typical system clock frequency is 55 MHz.

1. INTRODUCTION

Two applications of great importance in the area of image processing are noise filtering and image enhancement [1]. These tasks are an essential part of any image processor, whether the final image is utilized for visual interpretation or for automatic analysis. The aim of noise filtering is to eliminate noise and its effects on the original image, while corrupting the image as little as possible. To this end, nonlinear techniques (like the median and, in general, order statistics filters) have been found to provide more satisfactory results in comparison to linear methods. Impulse noise exists in many practical applications and can be generated by various sources, including a number of man-made phenomena, such as unprotected switches, industrial machines and car ignition systems. Images are often corrupted by impulse noise due to a noisy sensor or channel transmission errors. The most common method used for impulse noise suppression for gray-scale and color images is the median filter (MF) [2]. The basic drawback of the application of the MF is the blurring of the image in process. In the general case, the filter is applied uniformly across an image, modifying pixels that are not contaminated by noise. In this way, the effective elimination of impulse noise is often at the expense of an overall degradation of the image and blurred or distorted features [3].

In this paper an intelligent hardware structure of a content-based median filter (CBMF) suitable for impulse noise suppression is presented. The function of the proposed circuit is to detect the existence of noise in the image window and apply the corresponding MF only when necessary. The noise detection procedure is based on the content of the image and computes the differences between the central pixel and the surrounding pixels of a neighborhood. The main advantage of this adaptive approach is that image blurring is avoided and the integrity of edge and detail information is preserved [4,5]. The proposed digital hardware structure is capable of processing gray-scale images of 8-bit resolution and performs both positive and negative impulse noise removal. The architecture chosen is based on a sequence of four basic functional pipelined stages, and parallel processing is used within each stage. A moving window of a 3×3 or 5×5-pixel image neighborhood can be selected. However, the system can be easily expanded to accommodate windows of larger sizes. The proposed structure was implemented using field programmable gate arrays (FPGA).
The digital circuit was designed, compiled and successfully simulated using the MAX+PLUS II Programmable Logic Development System by Altera Corporation. The EPF10K200SFC484-1 FPGA device of the FLEX10KE device family was utilized for the realization of the system. The typical clock frequency is 55 MHz and the system can be used for real-time imaging applications where fast processing is required [6]. As an example, the time required to perform filtering of a gray-scale image of 260×244 pixels is approximately 10.6 msec.

2. ADAPTIVE FILTERING PROCEDURE

The output of a median filter at a point x of an image f depends on the values of the image points in the neighborhood of x. This neighborhood is determined by a window W that is located at point x of f, including n points x1, x2, …, xn of f, with n = 2k+1. The proposed adaptive content-based median filter can be utilized for impulse noise suppression in gray-scale images. A block diagram of the adaptive filtering procedure is depicted in Fig. 1. The noise detection procedure for both positive and negative noise is as follows:

(i) We consider a neighborhood window W that is located at point x of the image f. The differences between the central pixel at point x and the pixel values of the n−1 surrounding points of the neighborhood (excluding the value of the central pixel) are computed.

(ii) The sum of the absolute values of these differences is computed, denoted as fabs(x). This value provides a measure of closeness between the central pixel and its surrounding pixels.

(iii) The value fabs(x) is compared to fthreshold(x), which is an appropriately selected positive integer threshold value and can be modified. The central pixel is considered to be noise when the value fabs(x) is greater than the threshold value fthreshold(x).

(iv) When the central pixel is considered to be noise it is substituted by the median value of the image neighborhood, denoted as fk+1, which is the normal operation of the median filter. In the opposite case, the value of the central pixel is not altered and the procedure is repeated for the next neighborhood window.

From the noise detection scheme described, it should be mentioned that the noise detection level can be controlled, and a range of pixel values (and not only the fixed values of 0 and 255, i.e. salt and pepper noise) is considered as impulse noise.
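As an illustration of steps (i)-(iv), a minimal software sketch of the detection procedure is given below in Python/NumPy. It is not the paper's hardware design: the function name, the array-based windowing and the single fixed threshold argument are assumptions made for illustration only (here k is the window half-size, not the k of the text, so a 3×3 window has n = 9 points).

    import numpy as np

    def cbmf(image, threshold, k=1):
        # Content-based median filter sketch; window size (2k+1)x(2k+1),
        # i.e. k=1 gives a 3x3 window and k=2 a 5x5 window.
        out = image.copy()
        h, w = image.shape
        for i in range(k, h - k):
            for j in range(k, w - k):
                window = image[i - k:i + k + 1, j - k:j + k + 1].astype(int)
                center = int(image[i, j])
                # steps (i)-(ii): sum of absolute differences to the
                # surrounding pixels (the central pixel contributes zero)
                f_abs = int(np.abs(window - center).sum())
                # steps (iii)-(iv): substitute the median only when the
                # central pixel is classified as impulse noise
                if f_abs > threshold:
                    out[i, j] = np.median(window)
        return out

Because both positive and negative impulses drive the central pixel far from its neighbours, the same absolute-difference test detects both polarities, which mirrors the behaviour claimed for the hardware.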
In Fig. 2 the results of the application of the median filter and the CBMF to the gray-scale image "Peppers" are depicted. More specifically, in Fig. 2(a) the original, uncorrupted image "Peppers" is depicted. In Fig. 2(b) the original image degraded by 5% both positive and negative impulse noise is illustrated. In Figs 2(c) and 2(d) the resultant images of the application of the median filter and the CBMF for a 3×3-pixel window are shown, respectively. Finally, the resultant images of the application of the median filter and the CBMF for a 5×5-pixel window are presented in Figs 2(e) and 2(f). It can be noticed that the application of the CBMF preserves the edges and details of the images much better, in comparison to the median filter. A number of different objective measures can be utilized for the evaluation of these results. The most widely used measures are the Mean Square Error (MSE) and the Normalized Mean Square Error (NMSE) [1]. The results of the estimation of these measures for the two filters are depicted in Table I. For the estimation of these measures, the resultant images of the filters are compared to the original, uncorrupted image. From Table I it can be noticed that the MSE and NMSE estimated for the application of the CBMF are considerably smaller than those estimated for the median filter, in all the cases.

Table I. Similarity measures (impulse noise 5%).

              MSE                  NMSE (×10^-2)
    Filter    3×3       5×5        3×3      5×5
    Median    57.554    130.496    0.317    0.718
    CBMF      35.287    84.788     0.194    0.467

3. HARDWARE ARCHITECTURE

The structure of the adaptive filter comprises four basic functional units: the moving window unit, the median computation unit, the arithmetic operations unit, and the output selection unit. The input data of the system are the gray-scale values of the pixels of the image neighborhood and the noise threshold value. For the computation of the filter output a 3×3 or 5×5-pixel image neighborhood can be selected. Image input data is serially imported into the first stage. In this way, the total number of input pins is 24 (21 inputs for the input data and 3 inputs for the clock and the control signals required). The output data of the system are the resultant gray-scale values computed for the operation selected (8 pins).

The moving window unit is the internal memory of the system, used for storing the input values of the pixels and for realizing the moving window operation. The pixel values of the input image, denoted as "IMAGE_INPUT[7..0]", are imported into this unit in serial. For the representation of the threshold value used for the detection of a noise pixel 13 bits are required. For the moving window operation a 3×3 (5×5)-pixel serpentine type memory is used, consisting of 9 (25) registers. In this way, when the window is moved into the next image neighborhood only 3 or 5 pixel values stored in the memory are altered. The "en5×5" control signal is used for the selection of the size of the image window: when "en5×5" is equal to "0" ("1") a 3×3 (5×5)-pixel neighborhood is selected. It should be mentioned that the modules of the circuit used for the 3×3-pixel window are utilized for the 5×5-pixel window as well. For these modules, 2-to-1 multiplexers are utilized to select the appropriate pixel values, where necessary. The modules that are utilized only in the case of the 5×5-pixel neighborhood are enabled by the "en5×5" control signal. The outputs of this unit are rows of pixel values (3 or 5, respectively), which are the inputs to the median computation unit.

The task of the median computation unit is to compute the median value of the image neighborhood in order to substitute the central pixel value, if necessary. For this purpose a 25-input sorter is utilized. The structure of the sorter has been proposed by Batcher and is based on the use of CS blocks. A CS block is a max/min module; its first output is the maximum of the inputs and its second output the minimum. The implementation of a CS block includes a comparator and two 2-to-1 multiplexers. The output values of the sorter, denoted as "OUT_0[7..0]" … "OUT_24[7..0]", produce a "sorted list" of the 25 initial pixel values. A 2-to-1 multiplexer is used for the selection of the median value for a 3×3 or 5×5-pixel neighborhood.
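The CS block and the notion of a "sorted list" can be modelled in a few lines of Python. The sketch below is not Batcher's network itself: as a stand-in for the full 25-input odd-even merging network it uses an odd-even transposition network, which is also built purely from CS blocks but needs more comparison stages.

    def cs(a, b):
        # CS block: first output is the maximum, second the minimum.
        return (a, b) if a >= b else (b, a)

    def sorting_network(values):
        # Odd-even transposition network of CS blocks (n fixed stages);
        # a simpler stand-in for Batcher's more efficient construction.
        v = list(values)
        n = len(v)
        for stage in range(n):
            for i in range(stage % 2, n - 1, 2):
                v[i + 1], v[i] = cs(v[i], v[i + 1])   # max to the higher index
        return v  # ascending "sorted list" OUT_0 ... OUT_{n-1}

For a 3×3 window the median is the fifth of the nine sorted values; for a 5×5 window it is the thirteenth of twenty-five, and selecting between the two positions is exactly the job of the 2-to-1 multiplexer mentioned above.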
The function of the arithmetic operations unit is to compute the value fabs(x), which is compared to the noise threshold value in the final stage of the adaptive filter. The inputs of this unit are the surrounding pixel values and the central pixel of the neighborhood. For the implementation of the mathematical expression of fabs(x), the circuit of this unit contains a number of adder modules. Note that registers have been used to achieve a pipelined operation. An additional 2-to-1 multiplexer is utilized for the selection of the appropriate output value, depending on the "en5×5" control signal. From the implementation point of view, the use of arithmetic blocks makes this stage hardware demanding.

The output selection unit is used for the selection of the appropriate output value of the performed noise suppression operation. For this selection, the corresponding noise threshold value calculated for the image neighborhood, "NOISE_THRESHOLD[12..0]", is employed. This value is compared to fabs(x) and the result of the comparison classifies the central pixel either as impulse noise or not. If the value fabs(x) is greater than the threshold value fthreshold(x) the central pixel is positive or negative impulse noise and has to be eliminated. For this reason, the output of the comparison is used as the selection signal of a 2-to-1 multiplexer whose inputs are the central pixel and the corresponding median value for the image neighborhood. The output of the multiplexer is the output of this stage and the final output of the circuit of the adaptive filter. The structure of the CBMF, the computation procedure and the design of the four aforementioned units are illustrated in Fig. 3.

Figure 1: Block diagram of the filtering method (blocks: image window P1–P9, subtractor array, adders, comparator, median filter, multiplexer, fabs(x) value).
Figure 2: Results of the application of the CBMF: (a) original image, (b) noise-corrupted image, (c) restored image by a 3×3 MF, (d) restored image by a 3×3 CBMF, (e) restored image by a 5×5 MF and (f) restored image by a 5×5 CBMF.

4. IMPLEMENTATION ISSUES

The proposed structure was implemented in FPGA, which offers an attractive combination of low cost, high performance and apparent flexibility, using the MAX+PLUS II software package of Altera Corporation. The FPGA used is the EPF10K200SFC484-1 device of the FLEX10KE device family, a device family suitable for designs that require high densities and high I/O count. 99% of the logic cells (9965/9984) of the device were utilized to implement the circuit. The typical operating clock frequency of the system is 55 MHz. As a comparison, the time required to perform filtering of a gray-scale image of 260×244 pixels using Matlab® software on a Pentium 4/2.4 GHz computer system is approximately 7.2 sec, whereas the corresponding time using hardware is approximately 10.6 msec.

The modification of the system to accommodate windows of larger sizes can be done in a straightforward way, requiring only a small number of changes. More specifically, in the first unit the size of the serpentine memory and the corresponding number of multiplexers increase following a square law. In the second unit, the sorter module should be modified, and in the third unit the number of the adder devices increases following a square law. In the last unit no changes are required.
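Before concluding, the per-pixel data path described in Section 3 can be mimicked in software as below, reusing the sorting_network sketch above. This models the dataflow of the four stages only, not the pipelining or the "en5×5" multiplexing, and the function name is ours.

    def cbmf_pixel(window, center, noise_threshold):
        # Software mimic of the four-stage data path of Fig. 3;
        # `window` is the flat list of neighborhood pixel values.
        sorted_list = sorting_network(window)           # median computation unit
        median = sorted_list[len(window) // 2]          # OUT_4 or OUT_12 via mux
        f_abs = sum(abs(p - center) for p in window)    # arithmetic operations unit
        is_noise = f_abs > noise_threshold              # comparator
        return median if is_noise else center           # output selection mux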
5. CONCLUSIONS

This paper presents a new hardware structure of a content-based median filter, capable of performing adaptive impulse noise removal for gray-scale images. The noise detection procedure takes into account the differences between the central pixel and the surrounding pixels of a neighborhood. The proposed digital circuit is capable of processing gray-scale images of 8-bit resolution, with 3×3 or 5×5-pixel neighborhoods as options for the computation of the filter output. However, the design of the circuit is directly expandable to accommodate larger size image windows. The adaptive filter was designed and implemented in FPGA. The typical clock frequency is 55 MHz and the system is suitable for real-time imaging applications.

REFERENCES

[1] W. K. Pratt, Digital Image Processing. New York: Wiley, 1991.
[2] G. R. Arce, N. C. Gallagher and T. Nodes, "Median filters: Theory and applications," in Advances in Computer Vision and Image Processing, Greenwich, CT: JAI, 1986.
[3] T. A. Nodes and N. C. Gallagher, Jr., "The output distribution of median type filters," IEEE Transactions on Communications, vol. COM-32, pp. 532-541, May 1984.
[4] T. Sun and Y. Neuvo, "Detail-preserving median based filters in image processing," Pattern Recognition Letters, vol. 15, pp. 341-347, Apr. 1994.
[5] E. Abreau, M. Lightstone, S. K. Mitra, and K. Arakawa, "A new efficient approach for the removal of impulse noise from highly corrupted images," IEEE Transactions on Image Processing, vol. 5, pp. 1012-1025, June 1996.
[6] E. R. Dougherty and P. Laplante, Introduction to Real-Time Imaging, Bellingham: SPIE/IEEE Press, 1995.
Face Recognition: Translated Foreign Literature (Chinese-English)
Translation of face recognition literature (original and translation, with source). The original text is taken from: Thomas David Heseltine BSc. Hons., The University of York, Department of Computer Science, for the qualification of PhD, September 2005, "Face Recognition: Two-Dimensional and Three-Dimensional Techniques".

4 Two-dimensional Face Recognition

4.1 Feature Localization

Before discussing the methods of comparing two facial images we now take a brief look at some of the preliminary processes of facial feature alignment. This process typically consists of two stages: face detection and eye localisation. Depending on the application, if the position of the face within the image is known beforehand (for a cooperative subject in a door access system, for example) then the face detection stage can often be skipped, as the region of interest is already known. Therefore, we discuss eye localisation here, with a brief discussion of face detection in the literature review (section 3.1.1).

The eye localisation method is used to align the 2D face images of the various test sets used throughout this section. However, to ensure that all results presented are representative of the face recognition accuracy and not a product of the performance of the eye localisation routine, all image alignments are manually checked and any errors corrected, prior to testing and evaluation.

We detect the position of the eyes within an image using a simple template based method. A training set of manually pre-aligned images of faces is taken, and each image cropped to an area around both eyes. The average image is calculated and used as a template.

Figure 4-1 - The average eyes. Used as a template for eye detection.

Both eyes are included in a single template, rather than individually searching for each eye in turn, as the characteristic symmetry of the eyes either side of the nose provides a useful feature that helps distinguish between the eyes and other false positives that may be picked up in the background. However, this method is highly susceptible to scale (i.e. subject distance from the camera) and also introduces the assumption that eyes in the image appear near horizontal. Some preliminary experimentation also reveals that it is advantageous to include the area of skin just beneath the eyes. The reason being that in some cases the eyebrows can closely match the template, particularly if there are shadows in the eye-sockets, but the area of skin below the eyes helps to distinguish the eyes from eyebrows (the area just below the eyebrows contains eyes, whereas the area below the eyes contains only plain skin).

A window is passed over the test images and the absolute difference taken to that of the average eye image shown above. The area of the image with the lowest difference is taken as the region of interest containing the eyes. Applying the same procedure using a smaller template of the individual left and right eyes then refines each eye position.
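A minimal sketch of this template search, using the sum of absolute differences over a sliding window, is given below; the function and variable names are ours, and the template is assumed to be the precomputed average-eyes image.

    import numpy as np

    def locate_eyes(image, template):
        # Return the top-left corner of the window most similar to the
        # average-eyes template (minimum sum of absolute differences).
        th, tw = template.shape
        best_sad, best_pos = float('inf'), (0, 0)
        for i in range(image.shape[0] - th + 1):
            for j in range(image.shape[1] - tw + 1):
                window = image[i:i + th, j:j + tw].astype(float)
                sad = np.abs(window - template).sum()
                if sad < best_sad:
                    best_sad, best_pos = sad, (i, j)
        return best_pos

Repeating the search with the smaller single-eye templates inside the returned region refines the individual eye positions, and the weighting scheme introduced below amounts to multiplying the difference image element-wise by a weights array before the summation.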
This basic template-based method of eye localisation, although providing fairly precise localisations, often fails to locate the eyes completely. However, we are able to improve performance by including a weighting scheme.

Eye localisation is performed on the set of training images, which is then separated into two sets: those in which eye detection was successful, and those in which eye detection failed. Taking the set of successful localisations we compute the average distance from the eye template (Figure 4-2, top). Note that the image is quite dark, indicating that the detected eyes correlate closely to the eye template, as we would expect. However, bright points do occur near the whites of the eye, suggesting that this area is often inconsistent, varying greatly from the average eye template.

Figure 4-2 - Distance to the eye template for successful detections (top) indicating variance due to noise, and failed detections (bottom) showing credible variance due to miss-detected features.

In the lower image (Figure 4-2, bottom), we have taken the set of failed localisations (images of the forehead, nose, cheeks, background etc. falsely detected by the localisation routine) and once again computed the average distance from the eye template. The bright pupils surrounded by darker areas indicate that a failed match is often due to the high correlation of the nose and cheekbone regions overwhelming the poorly correlated pupils. Wanting to emphasise the difference of the pupil regions for these failed matches and minimise the variance of the whites of the eyes for successful matches, we divide the lower image values by the upper image to produce a weights vector as shown in Figure 4-3. When applied to the difference image before summing a total error, this weighting scheme provides a much improved detection rate.

Figure 4-3 - Eye template weights used to give higher priority to those pixels that best represent the eyes.

4.2 The Direct Correlation Approach

We begin our investigation into face recognition with perhaps the simplest approach, known as the direct correlation method (also referred to as template matching by Brunelli and Poggio [29]), involving the direct comparison of pixel intensity values taken from facial images. We use the term "direct correlation" to encompass all techniques in which face images are compared directly, without any form of image space analysis, weighting schemes or feature extraction, regardless of the distance metric used. Therefore, we do not infer that Pearson's correlation is applied as the similarity function (although such an approach would obviously come under our definition of direct correlation). We typically use the Euclidean distance as our metric in these investigations (inversely related to Pearson's correlation and can be considered as a scale and translation sensitive form of image correlation), as this persists with the contrast made between image space and subspace approaches in later sections.

Firstly, all facial images must be aligned such that the eye centres are located at two specified pixel coordinates and the image cropped to remove any background information. These images are stored as greyscale bitmaps of 65 by 82 pixels and prior to recognition converted into a vector of 5330 elements (each element containing the corresponding pixel intensity value). Each corresponding vector can be thought of as describing a point within a 5330-dimensional image space. This simple principle can easily be extended to much larger images: a 256 by 256 pixel image occupies a single point in 65,536-dimensional image space and again, similar images occupy close points within that space. Likewise, similar faces are located close together within the image space, while dissimilar faces are spaced far apart. Calculating the Euclidean distance d between two facial image vectors (often referred to as the query image q and the gallery image g), we get an indication of similarity. A threshold is then applied to make the final verification decision:

    d = ||q − g||;  d ≤ threshold ⇒ accept;  d > threshold ⇒ reject.   (Equ. 4-1)
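In code, the whole direct correlation comparison of Equ. 4-1 reduces to a few lines; the sketch below assumes pre-aligned, pre-cropped greyscale images and a threshold chosen from training data.

    import numpy as np

    def compare_faces(face_a, face_b):
        # Direct correlation: Euclidean distance between image vectors.
        q = face_a.astype(float).ravel()   # e.g. 65 x 82 = 5330 elements
        g = face_b.astype(float).ravel()
        return np.linalg.norm(q - g)

    def verify(query, gallery, threshold):
        # d <= threshold => accept; d > threshold => reject
        return compare_faces(query, gallery) <= threshold

The same compare_faces routine can serve as the CompareFaces(FaceA, FaceB) call used by the verification test described in the next section.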
4.2.1 Verification Tests

The primary concern in any face recognition system is its ability to correctly verify a claimed identity or determine a person's most likely identity from a set of potential matches in a database. In order to assess a given system's ability to perform these tasks, a variety of evaluation methodologies have arisen. Some of these analysis methods simulate a specific mode of operation (i.e. secure site access or surveillance), while others provide a more mathematical description of data distribution in some classification space. In addition, the results generated from each analysis method may be presented in a variety of formats. Throughout the experimentations in this thesis, we primarily use the verification test as our method of analysis and comparison, although we also use Fisher's Linear Discriminant to analyse individual subspace components in section 7 and the identification test for the final evaluations described in section 8.

The verification test measures a system's ability to correctly accept or reject the proposed identity of an individual. At a functional level, this reduces to two images being presented for comparison, for which the system must return either an acceptance (the two images are of the same person) or rejection (the two images are of different people). The test is designed to simulate the application area of secure site access. In this scenario, a subject will present some form of identification at a point of entry, perhaps as a swipe card, proximity chip or PIN number. This number is then used to retrieve a stored image from a database of known subjects (often referred to as the target or gallery image) and compared with a live image captured at the point of entry (the query image). Access is then granted depending on the acceptance/rejection decision.

The results of the test are calculated according to how many times the accept/reject decision is made correctly. In order to execute this test we must first define our test set of face images. Although the number of images in the test set does not affect the results produced (as the error rates are specified as percentages of image comparisons), it is important to ensure that the test set is sufficiently large such that statistical anomalies become insignificant (for example, a couple of badly aligned images matching well). Also, the type of images (high variation in lighting, partial occlusions etc.) will significantly alter the results of the test. Therefore, in order to compare multiple face recognition systems, they must be applied to the same test set.

However, it should also be noted that if the results are to be representative of system performance in a real world situation, then the test data should be captured under precisely the same circumstances as in the application environment. On the other hand, if the purpose of the experimentation is to evaluate and improve a method of face recognition, which may be applied to a range of application environments, then the test data should present the range of difficulties that are to be overcome. This may mean including a greater percentage of 'difficult' images than would be expected in the perceived operating conditions, and hence higher error rates in the results produced. Below we provide the algorithm for executing the verification test. The algorithm is applied to a single test set of face images, using a single function call to the face recognition algorithm: CompareFaces(FaceA, FaceB).
This call is used to compare two facial images, returning a distance score indicating how dissimilar the two face images are: the lower the score the more similar the two face images. Ideally, images of the same face should produce low scores, while images of different faces should produce high scores. Every image is compared with every other image, no image is compared with itself and no pair is compared more than once (we assume that the relationship is symmetrical). Once two images have been compared, producing a similarity score, the ground truth is used to determine if the images are of the same person or of different people. In practical tests this information is often encapsulated as part of the image filename (by means of a unique person identifier). Scores are then stored in one of two lists: a list containing scores produced by comparing images of different people, and a list containing scores produced by comparing images of the same person. The final acceptance/rejection decision is made by application of a threshold. Any incorrect decision is recorded as either a false acceptance or false rejection. The false rejection rate (FRR) is calculated as the percentage of scores from the same people that were classified as rejections. The false acceptance rate (FAR) is calculated as the percentage of scores from different people that were classified as acceptances.

    For IndexA = 0 to length(TestSet) - 1
        For IndexB = IndexA + 1 to length(TestSet) - 1
            Score = CompareFaces(TestSet[IndexA], TestSet[IndexB])
            If IndexA and IndexB are the same person
                Append Score to AcceptScoresList
            Else
                Append Score to RejectScoresList

    For Threshold = Minimum Score to Maximum Score:
        Set FalseAcceptCount and FalseRejectCount to 0
        For each Score in RejectScoresList
            If Score <= Threshold
                Increase FalseAcceptCount
        For each Score in AcceptScoresList
            If Score > Threshold
                Increase FalseRejectCount
        FalseAcceptRate = FalseAcceptCount / length(RejectScoresList)
        FalseRejectRate = FalseRejectCount / length(AcceptScoresList)
        Add point to error curve at (FalseRejectRate, FalseAcceptRate)
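A direct, runnable transcription of the algorithm above in Python follows; carrying the ground truth as a separate list of person identifiers (rather than in the image filenames) is our own simplification.

    def verification_test(faces, labels, compare):
        # faces[i]: an image; labels[i]: its person identifier;
        # compare(a, b): dissimilarity score, lower = more similar.
        accept_scores, reject_scores = [], []
        for a in range(len(faces)):
            for b in range(a + 1, len(faces)):    # no self or repeated pairs
                score = compare(faces[a], faces[b])
                if labels[a] == labels[b]:
                    accept_scores.append(score)
                else:
                    reject_scores.append(score)
        curve = []
        for t in sorted(accept_scores + reject_scores):
            far = sum(s <= t for s in reject_scores) / len(reject_scores)
            frr = sum(s > t for s in accept_scores) / len(accept_scores)
            curve.append((frr, far))
        return curve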
These two error rates express the inadequacies of the system when operating at a specific threshold value. Ideally, both these figures should be zero, but in reality reducing either the FAR or FRR (by altering the threshold value) will inevitably result in increasing the other. Therefore, in order to describe the full operating range of a particular system, we vary the threshold value through the entire range of scores produced. The application of each threshold value produces an additional FAR, FRR pair, which when plotted on a graph produces the error rate curve shown below.

Figure 4-5 - Example error rate curve produced by the verification test (false acceptance rate in % against false rejection rate).

The equal error rate (EER) can be seen as the point at which FAR is equal to FRR. This EER value is often used as a single figure representing the general recognition performance of a biometric system and allows for easy visual comparison of multiple methods. However, it is important to note that the EER does not indicate the level of error that would be expected in a real world application. It is unlikely that any real system would use a threshold value such that the percentage of false acceptances were equal to the percentage of false rejections. Secure site access systems would typically set the threshold such that false acceptances were significantly lower than false rejections: unwilling to tolerate intruders at the cost of inconvenient access denials. Surveillance systems, on the other hand, would require low false rejection rates to successfully identify people in a less controlled environment. Therefore we should bear in mind that a system with a lower EER might not necessarily be the better performer towards the extremes of its operating capability.

There is a strong connection between the above graph and the receiver operating characteristic (ROC) curves also used in such experiments. Both graphs are simply two visualisations of the same results, in that the ROC format uses the True Acceptance Rate (TAR), where TAR = 1.0 − FRR, in place of the FRR, effectively flipping the graph vertically. Another visualisation of the verification test results is to display both the FRR and FAR as functions of the threshold value. This presentation format provides a reference to determine the threshold value necessary to achieve a specific FRR and FAR. The EER can be seen as the point where the two curves intersect.

Figure 4-6 - Example error rate curves as a function of the score threshold.

The fluctuation of these error curves due to noise and other errors is dependent on the number of face image comparisons made to generate the data. A small dataset that only allows for a small number of comparisons will result in a jagged curve, in which large steps correspond to the influence of a single image on a high proportion of the comparisons made. A typical dataset of 720 images (as used in section 4.2.2) provides 258,840 verification operations, hence a drop of 1% EER represents an additional 2588 correct decisions, whereas the quality of a single image could cause the EER to fluctuate by up to 0.28%.

4.2.2 Results

As a simple experiment to test the direct correlation method, we apply the technique described above to a test set of 720 images of 60 different people, taken from the AR Face Database [39]. Every image is compared with every other image in the test set to produce a likeness score, providing 258,840 verification operations from which to calculate false acceptance rates and false rejection rates. The error curve produced is shown in Figure 4-7.

Figure 4-7 - Error rate curve produced by the direct correlation method using no image preprocessing.

We see that an EER of 25.1% is produced, meaning that at the EER threshold approximately one quarter of all verification operations carried out resulted in an incorrect classification. There are a number of well-known reasons for this poor level of accuracy. Tiny changes in lighting, expression or head orientation cause the location in image space to change dramatically. Images in face space are moved far apart due to these image capture conditions, despite being of the same person's face. The distance between images of different people becomes smaller than the area of face space covered by images of the same person, and hence false acceptances and false rejections occur frequently. Other disadvantages include the large amount of storage necessary for holding many face images and the intensive processing required for each comparison, making this method unsuitable for applications applied to a large database. In section 4.3 we explore the eigenface method, which attempts to address some of these issues.
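As a footnote to the results above, the EER reported here can be estimated programmatically from the (FRR, FAR) pairs produced by the verification test; choosing the point where the two rates are closest is our own shortcut for reading the value off the intersection of the plotted curves.

    def equal_error_rate(curve):
        # curve: list of (FRR, FAR) pairs, one per threshold value.
        frr, far = min(curve, key=lambda p: abs(p[0] - p[1]))
        return (frr + far) / 2.0

Applied to the 258,840 comparison scores of section 4.2.2, an estimate of this kind corresponds to the reported EER of 25.1%.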
Image Recognition: Translated Foreign Literature (Chinese-English)
Elastic image matching

Abstract

One fundamental problem in image recognition is to establish the resemblance of two images. This can be done by searching for the best pixel-to-pixel mapping taking into account monotonicity and continuity constraints. We show that this problem is NP-complete by reduction from 3-SAT, thus giving evidence that the known exponential time algorithms are justified, but approximation algorithms or simplifications are necessary.

Keywords: Elastic image matching; Two-dimensional warping; NP-completeness

1. Introduction

In image recognition, a common problem is to match two given images, e.g. when comparing an observed image to given references. In that process, elastic image matching, two-dimensional (2D-)warping (Uchida and Sakoe, 1998) or similar types of invariant methods (Keysers et al., 2000) can be used. For this purpose, we can define cost functions depending on the distortion introduced in the matching and search for the best matching with respect to a given cost function. In this paper, we show that it is an algorithmically hard problem to decide whether a matching between two images exists with costs below a given threshold. We show that the problem image matching is NP-complete by means of a reduction from 3-SAT, which is a common method of demonstrating a problem to be intrinsically hard (Garey and Johnson, 1979). This result shows the inherent computational difficulties in this type of image comparison, while interestingly the same problem is solvable for 1D sequences in polynomial time, e.g. the dynamic time warping problem in speech recognition (see e.g. Ney et al., 1992). This has the following implications: researchers who are interested in an exact solution to this problem cannot hope to find a polynomial time algorithm, unless P = NP. Furthermore, one can conclude that exponential time algorithms as presented and extended by Uchida and Sakoe (1998, 1999a,b, 2000a,b) may be justified for some image matching applications. On the other hand this shows that those interested in faster algorithms––e.g. for pattern recognition purposes––are right in searching for sub-optimal solutions. One method to do this is the restriction to local optimizations or linear approximations of global transformations as presented in (Keysers et al., 2000). Another possibility is to use heuristic approaches like simulated annealing or genetic algorithms to find an approximate solution. Furthermore, methods like beam search are promising candidates, as these are used successfully in speech recognition, although linguistic decoding is also an NP-complete problem (Casacuberta and de la Higuera, 1999).

2. Image matching

Among the varieties of matching algorithms, we choose the one presented by Uchida and Sakoe (1998) as a starting point to formalize the problem image matching. Let the images be given as (without loss of generality) square grids of size M×M with gray values (respectively node labels) from a finite alphabet Σ = {1, …, G}. To define the problem, two distance functions are needed: one acting on gray values, d_g: Σ×Σ → ℕ, measuring the match in gray values, and one acting on displacement differences, d_d: ℤ×ℤ → ℕ, measuring the distortion introduced by the matching. For these distance functions we assume that they are monotonous functions (computable in polynomial time) of the commonly used squared Euclidean distance, i.e. d_g(g1, g2) = f1(||g1 − g2||²) and d_d(z) = f2(||z||²), with f1, f2 monotonously increasing.
Now we call the following optimization problem the image matching problem (let μ = {1, …, M}).

Instance: The pair (A, B) of two images A and B of size M×M.
Solution: A mapping function f: μ×μ → μ×μ.
Measure:

    c(A, B, f) = Σ_{(i,j) ∈ μ×μ} d_g(A_{i,j}, B_{f(i,j)})
               + Σ_{(i,j) ∈ {1,…,M−1}×μ} d_d( f((i,j)+(1,0)) − (f(i,j)+(1,0)) )
               + Σ_{(i,j) ∈ μ×{1,…,M−1}} d_d( f((i,j)+(0,1)) − (f(i,j)+(0,1)) )

Goal: min_f c(A, B, f).

In other words, the problem is to find the mapping from A onto B that minimizes the distance between the mapped gray values together with a measure for the distortion introduced by the mapping. Here, the distortion is measured by the deviation from the identity mapping in the two dimensions. The identity mapping fulfills f(i,j) = (i,j), and therefore f((i,j)+(x,y)) = f(i,j)+(x,y).

The corresponding decision problem is fixed by the following:

Question: Given an instance of image matching and a cost c′, does there exist a mapping f such that c(A, B, f) ≤ c′?

In the definition of the problem some care must be taken concerning the distance functions. For example, if either one of the distance functions is a constant function, the problem is clearly in P (for d_g constant, the minimum is given by the identity mapping, and for d_d constant, the minimum can be determined by sorting all possible matchings for each pixel by gray value cost and mapping to one of the pixels with minimum cost). But these special cases are not those we are concerned with in image matching in general. We choose the matching problem of Uchida and Sakoe (1998) to complete the definition of the problem. Here, the mapping functions are restricted by continuity and monotonicity constraints: the deviations from the identity mapping may locally be at most one pixel (i.e. limited to the eight-neighborhood with squared Euclidean distance less than or equal to 2). This can be formalized in this approach by choosing the functions f1, f2 as e.g. f1 = id and

    f2(x) = step(x) := 0 if x ≤ 2,  G·M² if x > 2.
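A minimal sketch of evaluating c(A, B, f) for a candidate mapping follows, with f1 = id and f2 the step function just defined; representing f as an M×M array of coordinate pairs and taking G = 256 are our own choices for illustration.

    def step(sq_norm, G, M):
        # f2 of the text: zero inside the eight-neighborhood,
        # a large penalty outside it.
        return 0 if sq_norm <= 2 else G * M * M

    def matching_cost(A, B, f, G=256):
        # c(A, B, f): gray value term plus horizontal and vertical
        # distortion terms; f[i][j] = (i', j') maps pixel (i,j) of A to B.
        M = len(A)
        total = 0
        for i in range(M):
            for j in range(M):
                fi, fj = f[i][j]
                total += (A[i][j] - B[fi][fj]) ** 2          # d_g term, f1 = id
                if i + 1 < M:   # deviation from identity in direction (1,0)
                    dx = f[i + 1][j][0] - (fi + 1)
                    dy = f[i + 1][j][1] - fj
                    total += step(dx * dx + dy * dy, G, M)
                if j + 1 < M:   # deviation from identity in direction (0,1)
                    dx = f[i][j + 1][0] - fi
                    dy = f[i][j + 1][1] - (fj + 1)
                    total += step(dx * dx + dy * dy, G, M)
        return total

With this choice a mapping has low distortion cost only if every neighbouring pair of pixels keeps its relative displacement within the eight-neighborhood, which is exactly the continuity and monotonicity restriction described above.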
3. Reduction from 3-SAT

3-SAT is a very well-known NP-complete problem (Garey and Johnson, 1979), where 3-SAT is defined as follows:

Instance: Collection of clauses C = {C1, …, CK} on a set of variables X = {x1, …, xL} such that each Ck consists of 3 literals for k = 1, …, K. Each literal is a variable or the negation of a variable.
Question: Is there a truth assignment for X which satisfies each clause Ck, k = 1, …, K?

The dependency graph D(Φ) corresponding to an instance Φ of 3-SAT is defined to be the bipartite graph whose independent sets are formed by the set of clauses C and the set of variables X. Two vertices Ck and xl are adjacent iff Ck involves xl or its negation. Given any 3-SAT formula Φ, we show how to construct in polynomial time an equivalent image matching problem I(Φ) = (A(Φ), B(Φ)). The two images of I(Φ) are similar according to the cost function (i.e. there exists f with c(A(Φ), B(Φ), f) ≤ 0) iff the formula Φ is satisfiable. We perform the reduction from 3-SAT using the following steps:

• From the formula Φ we construct the dependency graph D(Φ).
• The dependency graph D(Φ) is drawn in the plane.
• The drawing of D(Φ) is refined to depict the logical behaviour of Φ, yielding two images (A(Φ), B(Φ)).

For this, we use three types of components: one component to represent variables of Φ, one component to represent clauses of Φ, and components which act as interfaces between the former two types. Before we give the formal reduction, we introduce these components.

3.1. Basic components

For the reduction from 3-SAT we need five components from which we will construct the instances for image matching, given a Boolean formula in 3-DNF, respectively its graph. The five components are the building blocks needed for the graph drawing and will be introduced in the following, namely the representations of connectors, crossings, variables, and clauses. The connectors represent the edges and have two varieties, straight connectors and corner connectors. Each of the components consists of two parts, one for image A and one for image B, where blank pixels are considered to be of the 'background' color. We will depict possible mappings in the following using arrows indicating the direction of displacement (where displacements within the eight-neighborhood of a pixel are the only cases considered). Blank squares represent mapping to the respective counterpart in the second image. For example, certain displacements of neighboring pixels can be used at zero cost, while others result in costs greater than zero.

Fig. 1 shows the first component, the straight connector component, which consists of a line of two different interchanging colors, here denoted by the two symbols ◇ and □. Given that the outside pixels are mapped to their respective counterparts and the connector is continued infinitely, there are two possible ways in which the colored pixels can be mapped, namely to the left (i.e. f(2,j) = (2,j−1)) or to the right (i.e. f(2,j) = (2,j+1)), where the background pixels have different possibilities for the mapping, not influencing the main property of the connector. This property, which justifies the name 'connector', is the following: it is not possible to find a mapping which yields zero cost where the relative displacements of the connector pixels are not equal, i.e. one always has f(2,j) − (2,j) = f(2,j′) − (2,j′), which can easily be observed by induction over j′. That is, given an initial displacement of one pixel (which will be ±1 in this context), the remaining end of the connector has the same displacement if overall costs of the mapping are zero. Given this property and the direction of a connector, which we define to be directed from variable to clause, we can define the state of the connector as carrying the 'true' truth value if the displacement is 1 pixel in the direction of the connector, and as carrying the 'false' truth value if the displacement is −1 pixel in the direction of the connector. This property then ensures that the truth value transmitted by the connector cannot change at mappings of zero cost.

Fig. 1. The straight connector component with two possible zero cost mappings.

For drawing of arbitrary graphs, clearly one also needs corners, which are represented in Fig. 2. By considering all possible displacements which guarantee overall cost zero, one can observe that the corner component also ensures the basic connector property. For example, consider the first depicted mapping, which has zero cost. On the other hand, the second mapping shows that it is not possible to construct a zero cost mapping with both connectors 'leaving' the component. In that case, the pixel at the position marked '?' either has a conflict (that is, introduces a cost greater than zero in the criterion function because of mapping mismatch) with the pixel above or to the right of it, if the same color is to be met, and otherwise a cost in the gray value mismatch term is introduced.

Fig.
2. The corner connector component and two example mappings.

Fig. 3 shows the variable component, in this case with two positive (to the left) and one negated output (to the right) leaving the component as connectors. Here, a fourth color is used, denoted by ·. This component has two possible mappings for the colored pixels with zero cost, which map the vertical component of the source image to the left or the right vertical component in the target image, respectively. (In both cases the second vertical element in the target image is not a target of the mapping.) This ensures ±1 pixel relative displacements at the entry to the connectors. This property again can be deduced by regarding all possible mappings of the two images. The property that follows (which is necessary for the use as variable) is that all zero cost mappings ensure that all positive connectors carry the same truth value, which is the opposite of the truth value for all the negated connectors. It is easy to see from this example how variable components for arbitrary numbers of positive and negated outputs can be constructed.

Fig. 3. The variable component with two positive and one negated output and two possible mappings (for true and false truth values).

Fig. 4 shows the most complex of the components, the clause component. This component consists of two parts. The first part is the horizontal connector with a 'bend' in it to the right. This part has the property that cost zero mappings are possible for all truth values of x and y with the exception of two 'false' values. This two-input disjunction can be extended to a three-input disjunction using the part in the lower left. If the z connector carries a 'false' truth value, this part can only be mapped one pixel downwards at zero cost. In that case the junction pixel (the fourth pixel in the third row) cannot be mapped upwards at zero cost and the 'two-input clause' behaves as described above. On the other hand, if the z connector carries a 'true' truth value, this part can only be mapped one pixel upwards at zero cost, and the junction pixel can be mapped upwards, thus allowing both x and y to carry a 'false' truth value in a zero cost mapping. Thus there exists a zero cost mapping of the clause component iff at least one of the input connectors carries a 'true' truth value.

Fig. 4. The clause component with three incoming connectors x, y, z and zero cost mappings for the two cases (true, true, false) and (false, false, true).

The described components are already sufficient to prove NP-completeness by reduction from planar 3-SAT (which is an NP-complete sub-problem of 3-SAT where the additional constraint on the instances is that the dependency graph is planar), but in order to derive a reduction from 3-SAT, we also include the possibility of crossing connectors. Fig. 5 shows the connector crossing, whose basic property is to allow zero cost mappings if the truth values are consistently propagated. This is assured by a color change of the vertical connector and a 'flexible' middle part, which can be mapped to four different positions depending on the truth value distribution.

Fig. 5. The connector crossing component and one zero cost mapping.
3.2. Reduction

Using the previously introduced components, we can now perform the reduction from 3-SAT to image matching.

Proof of the claim that the image matching problem is NP-complete: Clearly, the image matching problem is in NP since, given a mapping f and two images A and B, the computation of c(A, B, f) can be done in polynomial time. To prove NP-hardness, we construct a reduction from the 3-SAT problem. Given an instance of 3-SAT we construct two images A and B, for which a mapping of cost zero exists iff all the clauses can be satisfied. Given the dependency graph D, we construct an embedding of the graph into a 2D pixel grid, placing the vertices at a large enough distance from each other (say 100(K+L)²). This can be done using well-known methods from graph drawing (see e.g. di Battista et al., 1999). From this image of the graph D we construct the two images A and B, using the components described above. Each vertex belonging to a variable is replaced with the respective parts of the variable component, having a number of leaving connectors equal to the number of incident edges, under consideration of the positive or negative use in the respective clause. Each vertex belonging to a clause is replaced by the respective clause component, and each crossing of edges is replaced by the respective crossing component. Finally, all the edges are replaced with connectors and corner connectors, and the remaining pixels inside the rectangular hull of the construction are set to the background gray value. Clearly, the placement of the components can be done in such a way that all the components are at a large enough distance from each other, where the background pixels act as an 'insulation' against mapping of pixels which do not belong to the same component. It can be easily seen that the size of the constructed images is polynomial with respect to the number of vertices and edges of D and thus polynomial in the size of the instance of 3-SAT, at most in the order (K+L)². Furthermore, it can obviously be constructed in polynomial time, as the corresponding graph drawing algorithms are polynomial.

Let there exist a truth assignment to the variables x1, …, xL which satisfies all the clauses c1, …, cK. We construct a mapping f that satisfies c(f, A, B) = 0 as follows. For all pixels (i, j) belonging to variable component l with A(i,j) not of the background color, set f(i,j) = (i, j−1) if xl is assigned the truth value 'true', and set f(i,j) = (i, j+1) otherwise. For the remaining pixels of the variable component set A(i,j) = B(i,j) if f(i,j) = (i,j); otherwise choose f(i,j) from {(i,j+1), (i+1,j+1), (i−1,j+1)} for xl 'false', respectively from {(i,j−1), (i+1,j−1), (i−1,j−1)} for xl 'true', such that A(i,j) = B(f(i,j)). This assignment is always possible and has zero cost, as can be easily verified. For the pixels (i,j) belonging to (corner) connector components, the mapping function can only be extended in one way without the introduction of nonzero cost, starting from the connection with the variable component. This is ensured by the basic connector property. By choosing f(i,j) = (i,j) for all pixels of background color, we obtain a valid extension for the connectors.
For the connector crossing components the extension is straightforward, although here––as in the variable mapping––some care must be taken with the assignment of the background value pixels, but a zero cost assignment is always possible using the same scheme as presented for the variable mapping. It remains to be shown that the clause components can be mapped at zero cost if at least one of the input connectors x, y, z carries a 'true' truth value. For a proof we regard all seven possibilities and construct a mapping for each case. In the description of the clause component it was already argued that this is possible, and due to space limitations we omit the formalization of the argument here. Finally, for all the pixels (i,j) not belonging to any of the components, we set f(i,j) = (i,j), thus arriving at a mapping function which has c(f, A, B) = 0.
Graduation Project Format, Defense and Binding Rules
Format for the Translated Foreign-Language Material of the Graduation Project

Source of the translated material:
Article title: The Elements of Digital Image Processing
Book title: Digital Image Processing
Author: Kenneth R. Castleman
Publisher: Tsinghua University Press, 2002
Section: 1.2 The Elements of Digital Image Processing
Pages: pp. 2-7
Translated title: An Overview of Digital Image Processing
Name: ___  Student number: ___  Supervisor (title): ___  Major: ___  Class: ___  School: ___

Original text: attach a photocopy or scanned printout of the original. The foreign-language text must contain at least 3,000 words. The source must be a formally published book or journal article (including international conference papers); material downloaded from web pages may not be used. The content of the foreign-language article must be related to the student's own graduation project (thesis).

An Overview of Digital Image Processing (boldface, size-2 type)

Translation: while remaining faithful to the original, the translation should be accurate, fluent and easy to read. Author names, references and proper nouns in the original need not be translated; figures and tables that cannot be redrawn may be copied directly from the original.

Page setup: no document grid. Margins: top 2.8 cm, bottom 2.2 cm, left 2.5 cm, right 2.5 cm. Header: 1.5 cm; footer: 1.5 cm. Format (paragraph): 1.2× line spacing. Body text in small-four Song (SimSun) type.

Marking: the supervisor should correct and sign off on the student's translation.
Graduation Project (Thesis) Format, Defense and Binding Rules

Thesis body format:

1. Front matter
(1) Table of contents (the table of contents and the body do not share the same page numbering; the body starts from page 1).
(2) Chinese abstract and keywords.
(3) English abstract and keywords (the English abstract should use the passive voice and the simple past tense, in keeping with the conventions of professional English writing).

2. Main body
Introduction and body text. Chapter and section headings are marked as follows:
Chapter 1 (size-2 type, boldface)
1.1 … (size-3 type, boldface)
1.1.1 … (small-four type, Song, 1.5× line spacing)
1.1.2
1.2 …
1.2.1 …
Chapter 2 (size-2 type, boldface)
2.1 … (size-3 type, boldface)
2.1.1 … (small-four type, Song, 1.5× line spacing)
…
Conclusion (size-2 type, boldface)
Acknowledgements (size-2 type, boldface)
References (size-2 type, boldface)
Appendix (size-2 type, boldface)

Notes on the form of the graduation project body:
1. Front matter: (1) table of contents; (2) Chinese abstract and keywords; (3) English abstract and keywords (the Chinese abstract and the English abstract each occupy one page).
Computer Graphics: Digital Image Processing, 2nd ed.
Digital Image Processing, 2nd ed.

Abstract: DIGITAL IMAGE PROCESSING has been the world-wide leading textbook in its field for more than 30 years. As with the 1977 and 1987 editions by Gonzalez and Wintz, and the 1992 edition by Gonzalez and Woods, the present edition was prepared with students and instructors in mind. The material is timely, highly readable, and illustrated with numerous examples of practical significance. All mainstream areas of image processing are covered, including a totally revised introduction and discussion of image fundamentals, image enhancement in the spatial and frequency domains, restoration, color image processing, wavelets, image compression, morphology, segmentation, and image description. Coverage concludes with a discussion on the fundamentals of object recognition. Although the book is completely self-contained, the companion web site provides additional support in the form of review material, answers to selected problems, laboratory project suggestions, and a score of other features. A supplementary instructor's manual is available to instructors who have adopted the book for classroom use.

Keywords: digital image processing, image fundamentals, image enhancement in the spatial and frequency domains, image compression, image description

Data format: IMAGE
Data use: digital image processing

About the book — basic information: ISBN 020*******. Publisher: Prentice Hall. 12 chapters, 793 pages, © 2002. A partial list of institutions that use the book is available on the companion site.

NEW FEATURES
• New chapters on wavelets, image morphology, and color image processing.
• A revision and update of all chapters, including topics such as segmentation by watersheds.
• More than 500 new images and over 200 new line drawings and tables.
• A reorganization that allows the reader to get to the material on actual image processing much sooner than before.
• A more intuitive development of traditional topics such as image transforms and image restoration.
• Numerous new examples with processed images of higher resolution.
• Updated image compression standards and a new section on compression using wavelets.
• Updated bibliography.

Differences between the DIP and DIPUM books: Digital Image Processing is a book on fundamentals; Digital Image Processing Using MATLAB is a book on the software implementation of those fundamentals. The key difference between the books is that Digital Image Processing (DIP) deals primarily with the theoretical foundation of digital image processing, while Digital Image Processing Using MATLAB (DIPUM) is a book whose main focus is the use of MATLAB for image processing. The DIPUM book covers essentially the same topics as DIP, but the theoretical treatment is not as detailed. Some instructors prefer to fill in the theoretical details in class in favor of having available a book with a strong emphasis on implementation.

© 2002 by Prentice-Hall, Inc., Upper Saddle River, New Jersey 07458. All rights reserved. No part of this book may be reproduced, in any form or by any means, without permission in writing from the publisher. The author and publisher of this book have used their best efforts in preparing this book. These efforts include the development, research, and testing of the theories and programs to determine their effectiveness. The author and publisher make no warranty of any kind, expressed or implied, with regard to these programs or the documentation contained in this book. The author and publisher shall not be liable in any event for incidental or consequential damages in connection with, or arising out of, the furnishing, performance, or use of these programs.
Translated Foreign Literature: Digital Image Processing and Pattern Recognition Techniques for the Detection of Cancer
English Original

Digital image processing and pattern recognition techniques for the detection of cancer

Cancer is the second leading cause of death for both men and women in the world, and is expected to become the leading cause of death in the next few decades. In recent years, cancer detection has become a significant area of research activity in the image processing and pattern recognition community. Medical imaging technologies have already made a great impact on our capabilities of detecting cancer early and diagnosing the disease more accurately. In order to further improve the efficiency and veracity of diagnosis and treatment, image processing and pattern recognition techniques have been widely applied to the analysis and recognition of cancer, evaluation of the effectiveness of treatment, and prediction of the development of cancer. The aim of this special issue is to bring together researchers working on image processing and pattern recognition techniques for the detection and assessment of cancer, and to promote research in image processing and pattern recognition for oncology. A number of papers were submitted to this special issue and each was peer-reviewed by at least three experts in the field. From these submitted papers, 17 were finally selected for inclusion in this special issue. These selected papers cover a broad range of topics that are representative of the state-of-the-art in computer-aided detection or diagnosis (CAD) of cancer. They cover several imaging modalities (such as CT, MRI, and mammography) and different types of cancer (including breast cancer, skin cancer, etc.), which we summarize below.

Skin cancer is the most prevalent among all types of cancers. Three papers in this special issue deal with skin cancer. Yuan et al. propose a skin lesion segmentation method. The method is based on region fusion and narrow-band energy graph partitioning. The method can deal with challenging situations with skin lesions, such as topological changes, weak or false edges, and asymmetry. Tang proposes a snake-based approach using multi-direction gradient vector flow (GVF) for the segmentation of skin cancer images. A new anisotropic diffusion filter is developed as a preprocessing step. After the noise is removed, the image is segmented using a GVF snake. The proposed method is robust to noise and can correctly trace the boundary of the skin cancer even if there are other objects near the skin cancer region. Serrano et al. present a method based on Markov random fields (MRF) to detect different patterns in dermoscopic images. Different from previous approaches to automatic dermatological image classification with the ABCD rule (Asymmetry, Border irregularity, Color variegation, and Diameter greater than 6 mm or growing), this paper follows a new trend of looking for specific patterns in lesions which could lead physicians to a clinical assessment.

Breast cancer is the most frequently diagnosed cancer other than skin cancer and a leading cause of cancer deaths in women in developed countries. In recent years, CAD schemes have been developed as a potentially efficacious solution to improving radiologists' diagnostic accuracy in breast cancer screening and diagnosis. The predominant approach of CAD in breast cancer, and medical imaging in general, is to use automated image analysis to serve as a "second reader", with the aim of improving radiologists' diagnostic performance.
Thanks to intense research and development efforts, CAD schemes have now been introduced in screening mammography, and clinical studies have shown that such schemes can result in higher sensitivity at the cost of a small increase in recall rate. In this issue, we have three papers in the area of CAD for breast cancer. Wei et al. propose an image-retrieval based approach to CAD, in which retrieved images similar to that being evaluated (called the query image) are used to support a CAD classifier, yielding an improved measure of malignancy. This involves searching a large database for the images that are most similar to the query image, based on features that are automatically extracted from the images. Dominguez et al. investigate the use of image features characterizing the boundary contours of mass lesions in mammograms for classification of benign vs. malignant masses. They study and evaluate the impact of these features on diagnostic accuracy with several different classifier designs when the lesion contours are extracted using two different automatic segmentation techniques. Schaefer et al. study the use of thermal imaging for breast cancer detection. In their scheme, statistical features are extracted from thermograms to quantify bilateral differences between left and right breast regions, which are used subsequently as input to a fuzzy-rule-based classification system for diagnosis. Colon cancer is the third most common cancer in men and women, and also the third most common cause of cancer-related death in the USA. Yao et al. propose a novel technique to detect colonic polyps using CT colonography. They use ideas from geographic information systems to employ topographical height maps, which mimic the procedure used by radiologists for the detection of polyps. The technique can also be used to measure consistently the size of polyps. Hafner et al. present a technique to classify and assess colonic polyps, which are precursors of colorectal cancer. The classification is performed based on the pit pattern in zoom-endoscopy images. They propose a novel color wavelet cross co-occurrence matrix which employs the wavelet transform to extract texture features from color channels. Lung cancer occurs most commonly between the ages of 45 and 70 years, and has one of the worst survival rates of all the types of cancer. Two papers are included in this special issue on lung cancer research. Pattichis et al. evaluate new mathematical models that are based on statistics, logic functions, and several statistical classifiers to analyze reader performance in grading chest radiographs for pneumoconiosis. The technique can be potentially applied to the detection of nodules related to early stages of lung cancer. El-Baz et al. focus on the early diagnosis of pulmonary nodules that may lead to lung cancer. Their methods monitor the development of lung nodules in successive low-dose chest CT scans. They propose a new two-step registration method to align globally and locally two detected nodules. Experiments on a relatively large data set demonstrate that the proposed registration method contributes to precise identification and diagnosis of nodule development. It is estimated that almost a quarter of a million people in the USA are living with kidney cancer and that the number increases by 51,000 every year. Linguraru et al.
propose a computer-assisted radiology tool to assess renal tumors in contrast-enhanced CT for the management of tumor diagnosis and response to treatment. The tool accurately segments, measures, and characterizes renal tumors, and has been adopted in clinical practice. Validation against manual tools shows high correlation. Neuroblastoma is a cancer of the sympathetic nervous system and one of the most malignant diseases affecting children. Two papers in this field are included in this special issue. Sertel et al. present techniques for classification of the degree of Schwannian stromal development as either stroma-rich or stroma-poor, which is a critical decision factor affecting the prognosis. The classification is based on texture features extracted using co-occurrence statistics and local binary patterns. Their work is useful in helping pathologists in the decision-making process. Kong et al. propose image processing and pattern recognition techniques to classify the grade of neuroblastic differentiation on whole-slide histology images. The presented technique is promising to facilitate grading of whole-slide images of neuroblastoma biopsies with high throughput. This special issue also includes papers which are not directly focused on the detection or diagnosis of a specific type of cancer but deal with the development of techniques applicable to cancer detection. Ta et al. propose a framework of graph-based tools for the segmentation of microscopic cellular images. Based on the framework, automatic or interactive segmentation schemes are developed for color cytological and histological images. Tosun et al. propose an object-oriented segmentation algorithm for biopsy images for the detection of cancer. The proposed algorithm uses a homogeneity measure based on the distribution of the objects to characterize tissue components. Colon biopsy images were used to verify the effectiveness of the method; the segmentation accuracy was improved as compared to its pixel-based counterpart. Narasimha et al. present a machine-learning tool for automatic texton-based joint classification and segmentation of mitochondria in MNT-1 cells imaged using an ion-abrasion scanning electron microscope. The proposed approach has minimal user intervention and can achieve high classification accuracy. El Naqa et al. investigate intensity-volume histogram metrics as well as shape and texture features extracted from PET images to predict a patient's response to treatment. Preliminary results suggest that the proposed approach could potentially provide better tools and discriminant power for functional imaging in clinical prognosis. We hope that the collection of the selected papers in this special issue will serve as a basis for inspiring further rigorous research in CAD of various types of cancer. We invite you to explore this special issue and benefit from these papers. On behalf of the Editorial Committee, we take this opportunity to gratefully acknowledge the authors and the reviewers for their diligence in abiding by the editorial timeline. Our thanks also go to the Editors-in-Chief of Pattern Recognition, Dr. Robert S. Ledley and Dr. C. Y. Suen, for their encouragement and support for this special issue. Translation of the English literature: Digital image processing and pattern recognition techniques for the detection of cancer. In the world, cancer is the second leading killer of human beings, both men and women.
Image Denoising: English Literature and Translation
New Method for Image Denoising while Keeping Edge Information. Edge information is the most important high-frequency information of an image, so we should try to maintain more edge information while denoising.
In order to preserve image details as well as canceling image noise, we present a new image denoising method: image denoising based on edge detection.
Before denoising, the image's edges are first detected, and then the noised image is divided into two parts: an edge part and a smooth part.
We can therefore set a high denoising threshold for the smooth part of the image and a low denoising threshold for the edge part. The theoretical analyses and experimental results presented in this paper show that, compared to commonly-used wavelet threshold denoising methods, the proposed algorithm can not only keep the edge information of an image, but can also improve the signal-to-noise ratio of the denoised image.
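The excerpt does not include the authors' implementation, but the scheme reads directly as code. Below is a minimal sketch, assuming OpenCV for the edge detection and PyWavelets for the wavelet thresholding; the wavelet name, decomposition level and the two threshold values are illustrative placeholders, not values taken from the paper.

import cv2
import numpy as np
import pywt

def wavelet_denoise(img, thr, wavelet="db4", level=2):
    # Soft-threshold every detail subband with a single threshold.
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(d, thr, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    rec = pywt.waverec2(coeffs, wavelet)
    return rec[: img.shape[0], : img.shape[1]]

def edge_based_denoise(noisy, thr_edge=10, thr_smooth=40):
    # Detect edges first, then treat the image as two parts.
    edges = cv2.Canny(noisy, 50, 150)
    edge_mask = cv2.dilate(edges, np.ones((5, 5), np.uint8)) > 0
    # Low threshold near edges (preserve detail), high threshold elsewhere.
    return np.where(edge_mask,
                    wavelet_denoise(noisy, thr_edge),
                    wavelet_denoise(noisy, thr_smooth)).clip(0, 255).astype(np.uint8)

Denoising the whole image twice and blending by the edge mask keeps the sketch short; a faithful implementation would instead apply the two thresholds region by region within a single wavelet decomposition.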
Digital Image Processing: Foreign Literature Translation References
Digital image processing foreign literature translation references (the document contains the Chinese-English parallel text, i.e., the English original and the Chinese translation). Original: Application of Digital Image Processing in the Measurement of Casting Surface Roughness. Abstract—This paper presents a surface image acquisition system based on digital image processing technology. The image acquired by CCD is pre-processed through the procedure of image editing, image equalization, image binary conversion and feature parameter extraction to achieve casting surface roughness measurement. The three-dimensional evaluation method is taken to obtain the evaluation parameters and the casting surface roughness based on feature parameter extraction. An automatic detection interface for casting surface roughness based on MATLAB is compiled, which can provide a solid foundation for the online and fast detection of casting surface roughness based on image processing technology. Keywords—casting surface; roughness measurement; image processing; feature parameters. I. INTRODUCTION. Nowadays the demand for the quality and surface roughness of machining is highly increased, and machine vision inspection based on image processing has become one of the hotspots of measuring technology in the mechanical industry due to advantages such as non-contact operation, fast speed, suitable precision, strong ability of anti-interference, etc. [1,2]. As there are no regular laws for the casting surface and the range of roughness is wide, detection parameters related only to the height direction cannot meet the current requirements of the development of photoelectric technology; the horizontal spacing of roughness also requires a quantitative representation. Therefore, taking the establishment of a three-dimensional evaluation system for casting surface roughness as the goal [3,4], a surface roughness measurement based on image processing technology is presented. Image preprocessing is performed through image enhancement processing and binary conversion. The three-dimensional roughness evaluation based on the feature parameters is then carried out. An automatic detection interface for casting surface roughness based on MATLAB is compiled, which provides a solid foundation for the online and fast detection of casting surface roughness. II. CASTING SURFACE IMAGE ACQUISITION SYSTEM. The acquisition system is composed of the sample carrier, microscope, CCD camera, image acquisition card and the computer. The sample carrier is used to place the tested castings. According to the experimental requirements, we can select a fixed carrier where the sample location is manually transformed, or select cured specimens where the position of the sampling stage can be changed. Figure 1 shows the whole processing procedure. Firstly, the detected castings should be placed against an illuminated background as far as possible; then, after regulating the optical lens and setting the CCD camera resolution and exposure time, the pictures collected by the CCD are saved to computer memory through the acquisition card. Image preprocessing and feature value extraction on the casting surface based on the corresponding software follow. Finally the detection result is output. III. CASTING SURFACE IMAGE PROCESSING. Casting surface image processing includes image editing, equalization processing, image enhancement and binary conversion, etc. The original and clipped images of the measured casting are given in Figure 2.
In the figure, (a) presents the original image and (b) shows the clipped image. A. Image Enhancement. Image enhancement is a kind of processing method which can highlight certain image information according to some specific needs and weaken or remove some unwanted information at the same time [5]. In order to obtain a clearer contour of the casting surface, equalization processing of the image, namely correction of the image histogram, should be performed before image segmentation. Figure 3 shows the original grayscale image, the equalization-processed image, and their histograms. As shown in the figure, each gray level of the histogram has substantially the same number of pixel points and becomes flatter after gray equalization processing. The image appears clearer after the correction and the contrast of the image is enhanced. Fig. 2 Casting surface image. Fig. 3 Equalization-processed image. B. Image Segmentation. Image segmentation is the process of pixel classification in essence. It is a very important technology realized by threshold classification. The optimal threshold is attained through the instruction thresh = graythresh(I). Figure 4 shows the image after binary conversion. The gray value of the black areas of the image displays the portion of the contour less than the threshold (0.43137), while the white area shows gray values greater than the threshold. The shadows and shading that emerge in the bright region may be caused by noise or surface depressions. Fig. 4 Binary conversion. IV. ROUGHNESS PARAMETER EXTRACTION. In order to detect the surface roughness, it is necessary to extract the feature parameters of roughness. The histogram mean and variance are parameters used to characterize the texture size of the surface contour, while the unit surface's peak area is a parameter that can reflect the horizontal roughness of the workpiece, and the kurtosis parameter can characterize the roughness of both the vertical and the horizontal direction. Therefore, this paper establishes the histogram mean and variance, the unit surface's peak area and the steepness as the roughness evaluating parameters of the castings' 3D assessment. An image preprocessing and feature extraction interface is compiled based on MATLAB. Figure 5 shows the detection interface of surface roughness. Image preprocessing of the clipped casting can be successfully achieved by this software, which includes image filtering, image enhancement, image segmentation and histogram equalization, and it can also display the extracted evaluation parameters of surface roughness. Fig. 5 Automatic roughness measurement interface. V. CONCLUSIONS. This paper investigates a casting surface roughness measuring method based on digital image processing technology. The method is composed of image acquisition, image enhancement, binary conversion and the extraction of characteristic roughness parameters of the casting surface. The interface for image preprocessing and the extraction of roughness evaluation parameters is compiled in MATLAB, which can provide a solid foundation for the online and fast detection of casting surface roughness. REFERENCES. [1] Xu Deyan, Lin Zunqi. The optical surface roughness research progress and direction [J]. Optical Instruments, 1996, 18(1): 32-37. [2] Wang Yujing. Turning surface roughness based on image measurement [D]. Harbin: Harbin University of Science and Technology. [3] BRADLEY C. Automated surface roughness measurement [J].
The International Journal of Advanced Manufacturing Technology, 2000, 16(9): 668-674. [4] Li Chenggui, Li Xingshan, Qiang Xifu. 3D surface topography measurement method [J]. Aerospace Measurement Technology, 2000, 20(4): 2-10. [5] Liu He. Digital image processing and application [M]. China Electric Power Press, 2005. Translation: Application of Digital Image Processing in the Measurement of Casting Surface Roughness. Abstract—This paper presents a surface image acquisition system based on digital image processing technology.
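As a rough illustration of the measurement pipeline described above (histogram equalization, a global Otsu threshold standing in for MATLAB's graythresh, then extraction of the named evaluation parameters), here is a sketch in Python with OpenCV. The excerpt does not define the exact formulas for the "unit surface's peak area" or the "steepness", so the expressions below are plausible stand-ins rather than the authors' own.

import cv2
import numpy as np

def roughness_features(path):
    """Pre-process a casting surface image and extract the evaluation
    parameters named in the paper: histogram mean, variance, a
    peak-area fraction, and kurtosis (steepness)."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    eq = cv2.equalizeHist(gray)                      # histogram equalization
    # Otsu's method plays the role of MATLAB's graythresh here.
    thr, binary = cv2.threshold(eq, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    vals = eq.astype(float)
    mean, var = vals.mean(), vals.var()
    peak_area = (binary == 255).mean()               # bright "peak" fraction per unit area (assumed definition)
    z = (vals - mean) / (vals.std() + 1e-12)
    kurtosis = (z ** 4).mean()                       # steepness of the gray-level distribution (assumed definition)
    return {"threshold": thr, "mean": mean, "variance": var,
            "peak_area": peak_area, "kurtosis": kurtosis}

print(roughness_features("casting.png"))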
Foreign Literature Translation----Edge Feature Extraction Based on Digital Image Processing Technology
The traditional denoising method is the use of a low-pass or band-pass filter to denoise. Its shortcoming is that the signal is blurred when noises are removed: there is an irreconcilable contradiction between noise removal and edge maintenance. Yet wavelet analysis has been proved to be a powerful tool for image processing, because wavelet denoising applies filters of different frequency bands to the signal. It removes the coefficients of those scales which mainly reflect the noise frequency; the coefficients of every remaining scale are then integrated by the inverse transform, so that noise can be suppressed well. Thus wavelet analysis can be widely used in many aspects such as image compression, image denoising, etc.
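A minimal sketch of that scale-selection idea, assuming PyWavelets (the wavelet name, decomposition depth and the number of discarded scales are arbitrary choices for illustration): the finest detail scales, where high-frequency noise concentrates, are zeroed, and the image is reconstructed from the remaining scales.

import numpy as np
import pywt

def scale_denoise(img, wavelet="db4", level=3, kill=1):
    """Drop the `kill` finest detail scales (where noise concentrates)
    and reconstruct the image from the remaining scales."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    for i in range(1, kill + 1):
        # coeffs[-1] is the finest scale: (horizontal, vertical, diagonal) details
        coeffs[-i] = tuple(np.zeros_like(d) for d in coeffs[-i])
    rec = pywt.waverec2(coeffs, wavelet)
    return rec[: img.shape[0], : img.shape[1]]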
MATLAB Image Processing: Foreign Literature Translation
MATLAB image processing foreign literature translation. Appendix A: English original. Scene recognition for mine rescue robot localization based on vision. CUI Yi-an(崔益安), CAI Zi-xing(蔡自兴), WANG Lu(王璐). Abstract: A new scene recognition system was presented based on fuzzy logic and hidden Markov models (HMM) that can be applied to mine rescue robot localization during emergencies. The system uses a monocular camera to acquire omni-directional images of the mine environment where the robot is located. By adopting the center-surround difference method, salient local image regions are extracted from the images as natural landmarks. These landmarks are organized using an HMM to represent the scene where the robot is, and a fuzzy logic strategy is used to match the scene and landmarks. In this way, the localization problem, which is the scene recognition problem in the system, can be converted into the evaluation problem of the HMM. The contributions of these techniques give the system the ability to deal with changes in scale, 2D rotation and viewpoint. The results of experiments also prove that the system achieves a higher ratio of recognition and localization in both static and dynamic mine environments. Key words: robot localization; scene recognition; salient image; matching strategy; fuzzy logic; hidden Markov model. 1 Introduction. Search and rescue in disaster areas is a burgeoning and challenging subject in the domain of robotics [1]. Mine rescue robots were developed to enter mines during emergencies to locate possible escape routes for those trapped inside and determine whether it is safe for humans to enter or not. Localization is a fundamental problem in this field. Localization methods based on cameras can be mainly classified into geometric, topological or hybrid ones [2]. With its feasibility and effectiveness, scene recognition has become one of the important technologies of topological localization. Currently most scene recognition methods are based on global image features and have two distinct stages: training offline and matching online.
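The excerpt does not give the paper's exact center-surround formulation. As a generic stand-in, a difference-of-Gaussians response is one common way to realize a center-surround difference; the following sketch (Python/OpenCV, with arbitrary scales and landmark count) extracts the strongest responses as candidate landmarks.

import cv2
import numpy as np

def salient_regions(img, n=10):
    """Center-surround difference via difference-of-Gaussians: a fine
    'center' blur minus a coarse 'surround' blur highlights locally
    conspicuous regions; keep the n strongest maxima as landmarks."""
    g = img.astype(float)
    center = cv2.GaussianBlur(g, (0, 0), 2)     # fine scale
    surround = cv2.GaussianBlur(g, (0, 0), 8)   # coarse scale
    saliency = np.abs(center - surround)
    pts = []
    s = saliency.copy()
    for _ in range(n):
        y, x = np.unravel_index(np.argmax(s), s.shape)
        pts.append((int(x), int(y), float(saliency[y, x])))
        cv2.circle(s, (int(x), int(y)), 15, 0, -1)   # suppress a neighborhood
    return pts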
Digital Signal Processing: University Graduation Thesis English Literature Translation and Original
Graduation project (thesis) foreign literature translation. Chinese title of the document: Digital Signal Processing. English title of the document: Digital Signal Processing. Source: ; date of publication: ; school (department): ; major: ; class: ; name: ; student ID: ; supervisor: ; date of translation: 2017.02.14. Digital Signal Processing. I. Introduction. Digital signal processing (DSP) is the processing of signals represented by sequences of numbers or symbols.
Digital signal processing and analog signal processing are both subfields of signal processing.
DSP includes subfields such as audio and speech signal processing, radar and sonar signal processing, sensor array processing, spectral estimation, statistical signal processing, digital image processing, communications signal processing, biomedical signal processing, and seismic data processing.
Since the goal of DSP is usually to measure or filter continuous real-world analog signals, the first step is usually to convert the signal from analog to digital form by using an analog-to-digital converter.
Often, the required output signal is an analog output signal, which in turn requires a digital-to-analog converter.
Even though this process is more complex than analog processing and works with discrete values, the stability of digital signal processing, whose error detection and correction make it less susceptible to noise, gives it an advantage over analog signal processing in many (though not all) applications.
DSP algorithms have long been run on standard computers, on specialized processors called digital signal processors (DSPs), or on dedicated hardware such as application-specific integrated circuits (ASICs).
Additional technologies currently used for digital signal processing include more powerful general-purpose microprocessors, field-programmable gate arrays (FPGAs), digital signal controllers (mostly for industrial applications such as motor control), stream processors and other related technologies.
In digital signal processing, engineers usually study digital signals in the following domains: the time domain (one-dimensional signals), the spatial domain (multidimensional signals), the frequency domain, the autocorrelation domain, and the wavelet domain.
They choose the domain in which to process a signal by making an informed guess (or by trying different possibilities) as to which domain best represents the essential characteristics of the signal.
A sequence of samples from a measuring device produces a time- or spatial-domain representation, whereas the discrete Fourier transform produces the frequency-domain information, i.e., the spectrum.
Autocorrelation is defined as the cross-correlation of the signal with itself over varying intervals of time or space.
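As a small worked example of the time-domain versus frequency-domain views just described (a NumPy sketch; the sample rate and the test signal are invented for illustration):

import numpy as np

fs = 1000                       # sample rate in Hz (assumed for the example)
t = np.arange(0, 1, 1 / fs)     # one second of samples: the time-domain view
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# The discrete Fourier transform exposes the frequency-domain view.
spectrum = np.abs(np.fft.rfft(x)) / len(x)
freqs = np.fft.rfftfreq(len(x), 1 / fs)
for f in freqs[spectrum > 0.1]:
    print(f"dominant component near {f:.0f} Hz")   # prints 50 Hz and 120 Hz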
II. Signal Sampling. With the increasing use of computers, the need for digital signal processing has also increased.
Foreign Literature Translation----Research on Digital Image Processing Methods (Chinese-English) (1)
The research of digital image processing technique. 1 Introduction. Interest in digital image processing methods stems from two principal application areas: improvement of pictorial information for human interpretation; and processing of image data for storage, transmission, and representation for autonomous machine perception. 1.1 What Is Digital Image Processing? An image may be defined as a two-dimensional function, f(x,y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x,y) is called the intensity or gray level of the image at that point. When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image. The field of digital image processing refers to processing digital images by means of a digital computer. Note that a digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pels, and pixels. Pixel is the term most widely used to denote the elements of a digital image. We consider these definitions in more formal terms in Chapter 2. Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the electromagnetic (EM) spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma to radio waves. They can operate on images generated by sources that humans are not accustomed to associating with images. These include ultrasound, electron microscopy, and computer-generated images. Thus, digital image processing encompasses a wide and varied field of applications. There is no general agreement among authors regarding where image processing stops and other related areas, such as image analysis and computer vision, start. Sometimes a distinction is made by defining image processing as a discipline in which both the input and output of a process are images. We believe this to be a limiting and somewhat artificial boundary. For example, under this definition, even the trivial task of computing the average intensity of an image (which yields a single number) would not be considered an image processing operation. On the other hand, there are fields such as computer vision whose ultimate goal is to use computers to emulate human vision, including learning and being able to make inferences and take actions based on visual inputs. This area itself is a branch of artificial intelligence (AI) whose objective is to emulate human intelligence. The field of AI is in its earliest stages of infancy in terms of development, with progress having been much slower than originally anticipated. The area of image analysis (also called image understanding) is in between image processing and computer vision. There are no clear-cut boundaries in the continuum from image processing at one end to computer vision at the other. However, one useful paradigm is to consider three types of computerized processes in this continuum: low-, mid-, and high-level processes. Low-level processes involve primitive operations such as image preprocessing to reduce noise, contrast enhancement, and image sharpening. A low-level process is characterized by the fact that both its input and output are images. Mid-level processing of images involves tasks such as segmentation (partitioning an image into regions or objects), description of those objects to reduce them to a form suitable for computer processing, and classification (recognition) of individual objects.
A mid-level process is characterized by the fact that its inputs generally are images, but its outputs are attributes extracted from those images (e.g., edges, contours, and the identity of individual objects). Finally, higher-level processing involves "making sense" of an ensemble of recognized objects, as in image analysis, and, at the far end of the continuum, performing the cognitive functions normally associated with vision. Based on the preceding comments, we see that a logical place of overlap between image processing and image analysis is the area of recognition of individual regions or objects in an image. Thus, what we call in this book digital image processing encompasses processes whose inputs and outputs are images and, in addition, encompasses processes that extract attributes from images, up to and including the recognition of individual objects. As a simple illustration to clarify these concepts, consider the area of automated analysis of text. The processes of acquiring an image of the area containing the text, preprocessing that image, extracting (segmenting) the individual characters, describing the characters in a form suitable for computer processing, and recognizing those individual characters are in the scope of what we call digital image processing in this book. Making sense of the content of the page may be viewed as being in the domain of image analysis and even computer vision, depending on the level of complexity implied by the statement "making sense." As will become evident shortly, digital image processing, as we have defined it, is used successfully in a broad range of areas of exceptional social and economic value. The concepts developed in the following chapters are the foundation for the methods used in those application areas. 1.2 The Origins of Digital Image Processing. One of the first applications of digital images was in the newspaper industry, when pictures were first sent by submarine cable between London and New York.
Introduction of the Bartlane cable picture transmission system in the early 1920s reduced the time required to transport a picture across the Atlantic from more than a week to less than three hours. Specialized printing equipment coded pictures for cable transmission and then reconstructed them at the receiving end. Figure 1.1 was transmitted in this way and reproduced on a telegraph printer fitted with typefaces simulating a halftone pattern. Some of the initial problems in improving the visual quality of these early digital pictures were related to the selection of printing procedures and the distribution of intensity levels. The printing method used to obtain Fig. 1.1 was abandoned toward the end of 1921 in favor of a technique based on photographic reproduction made from tapes perforated at the telegraph receiving terminal. Figure 1.2 shows an image obtained using this method. The improvements over Fig. 1.1 are evident, both in tonal quality and in resolution. [Figure 1.1: A digital picture produced in 1921 from a coded tape by a telegraph printer with special typefaces. (McFarlane) Figure 1.2: A digital picture made in 1922 from a tape punched after the signals had crossed the Atlantic twice. Some errors are visible. (McFarlane)] The early Bartlane systems were capable of coding images in five distinct levels of gray. This capability was increased to 15 levels in 1929. Figure 1.3 is typical of the images that could be obtained using the 15-tone equipment. During this period, introduction of a system for developing a film plate via light beams that were modulated by the coded picture tape improved the reproduction process considerably. Although the examples just cited involve digital images, they are not considered digital image processing results in the context of our definition, because computers were not involved in their creation. Thus, the history of digital image processing is intimately tied to the development of the digital computer. In fact, digital images require so much storage and computational power that progress in the field of digital image processing has been dependent on the development of digital computers and of supporting technologies that include data storage, display, and transmission. The idea of a computer goes back to the invention of the abacus in Asia Minor, more than 5000 years ago. More recently, there were developments in the past two centuries that are the foundation of what we call a computer today. However, the basis for what we call a modern digital computer dates back to only the 1940s, with the introduction by John von Neumann of two key concepts: (1) a memory to hold a stored program and data, and (2) conditional branching. These two ideas are the foundation of a central processing unit (CPU), which is at the heart of computers today.
Starting with von Neumann, there were a series of advances that led to computers powerful enough to be used for digital image processing. Briefly, these advances may be summarized as follows: (1) the invention of the transistor by Bell Laboratories in 1948; (2) the development in the 1950s and 1960s of the high-level programming languages COBOL (Common Business-Oriented Language) and FORTRAN (Formula Translator); (3) the invention of the integrated circuit (IC) at Texas Instruments in 1958; (4) the development of operating systems in the early 1960s; (5) the development of the microprocessor (a single chip consisting of the central processing unit, memory, and input and output controls) by Intel in the early 1970s; (6) the introduction by IBM of the personal computer in 1981; and (7) progressive miniaturization of components, starting with large-scale integration (LSI) in the late 1970s, then very-large-scale integration (VLSI) in the 1980s, to the present use of ultra-large-scale integration (ULSI). [Figure 1.3: A cable picture of Generals Pershing and Foch, transmitted in 1929 from London to New York with 15-level tone equipment.] Concurrent with these advances were developments in the areas of mass storage and display systems, both of which are fundamental requirements for digital image processing. The first computers powerful enough to carry out meaningful image processing tasks appeared in the early 1960s. The birth of what we call digital image processing today can be traced to the availability of those machines and to the onset of the space program during that period. It took the combination of those two developments to bring into focus the potential of digital image processing concepts. Work on using computer techniques for improving images from a space probe began at the Jet Propulsion Laboratory (Pasadena, California) in 1964, when pictures of the moon transmitted by Ranger 7 were processed by a computer to correct various types of image distortion inherent in the on-board television camera. Figure 1.4 shows the first image of the moon taken by Ranger 7 on July 31, 1964 at 9:09 A.M. Eastern Daylight Time (EDT), about 17 minutes before impacting the lunar surface (the markers, called reseau marks, are used for geometric corrections, as discussed in Chapter 5). This also is the first image of the moon taken by a U.S. spacecraft. The imaging lessons learned with Ranger 7 served as the basis for improved methods used to enhance and restore images from the Surveyor missions to the moon, the Mariner series of flyby missions to Mars, the Apollo manned flights to the moon, and others. In parallel with space applications, digital image processing techniques began in the late 1960s and early 1970s to be used in medical imaging, remote Earth resources observations, and astronomy. The invention in the early 1970s of computerized axial tomography (CAT), also called computerized tomography (CT) for short, is one of the most important events in the application of image processing in medical diagnosis.
Computerized axial tomography is a process in which a ring of detectors encircles an object (or patient) and an X-ray source, concentric with the detector ring, rotates about the object. The X-rays pass through the object and are collected at the opposite end by the corresponding detectors in the ring. As the source rotates, this procedure is repeated. Tomography consists of algorithms that use the sensed data to construct an image that represents a "slice" through the object. Motion of the object in a direction perpendicular to the ring of detectors produces a set of such slices, which constitute a three-dimensional (3-D) rendition of the inside of the object. Tomography was invented independently by Sir Godfrey N. Hounsfield and Professor Allan M. Cormack, who shared the 1979 Nobel Prize in Medicine for their invention. X-rays were discovered in 1895 by Wilhelm Conrad Roentgen, for which he received the 1901 Nobel Prize for Physics. These two inventions, nearly 100 years apart, led to some of the most active application areas of image processing today. [Figure 1.4: The first picture of the moon by a U.S. spacecraft. Ranger 7 took this image on July 31, 1964 at 9:09 A.M. EDT, about 17 minutes before impacting the lunar surface. (Courtesy of NASA.)] Chinese translation: Research on digital image processing methods. 1 Introduction. Interest in digital image processing methods stems from two principal application areas: the first is the improvement of pictorial information for human analysis; the second is the storage, transmission and display of image data for autonomous machine perception.
Digital Image Processing Thesis: Chinese-English Parallel Foreign Literature Translation
Chinese-English parallel foreign literature translation. Original text: Research on image edge detection algorithms. Abstract: Digital image processing is a relatively young discipline which, following the rapid development of computer technology, is finding ever wider application day by day. The edge, as one of the basic characteristics of an image, is widely used in domains such as pattern recognition, image segmentation, image enhancement and image compression. Image edge detection methods are many and varied; among them, the brightness-based algorithms have been studied the longest and rest on the most mature theory. They mainly detect edges by computing the gradient of the image brightness changes through certain difference operators; the principal operators are the Roberts, Laplacian, Sobel, Canny and LOG operators.
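For concreteness, a minimal sketch of one such difference operator, the Sobel gradient, in plain NumPy; the magnitude threshold is arbitrary, and a practical implementation would use a library's vectorized convolution instead of explicit loops:

import numpy as np

# Sobel kernels estimate the horizontal and vertical brightness gradient.
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
KY = KX.T

def sobel_edges(img, thresh=100.0):
    """Gradient-magnitude edge map via plain convolution."""
    h, w = img.shape
    g = img.astype(float)
    mag = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = g[y - 1:y + 2, x - 1:x + 2]
            gx, gy = (win * KX).sum(), (win * KY).sum()
            mag[y, x] = np.hypot(gx, gy)
    return mag > thresh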
Digital Image Processing Literature Review
Medical Image Enhancement Processing and Analysis. [Abstract] Medical image processing technology, as the developmental foundation of medical imaging technology, is driving profound changes in modern medical diagnosis.
Image enhancement technology plays an important role in the quantitative and qualitative analysis of digital medical images, and it directly affects the subsequent processing and analysis.
This thesis takes medical images (mainly medical fluoroscopic images such as X-ray, CT and B-mode ultrasound) as the main research object, and studies the application of image enhancement technology in the field of medical image processing.
By comparing and verifying the processing results of a variety of image enhancement methods, the thesis finally summarizes the most effective image enhancement methods for the various characteristics of medical images.
Key words: medical image processing; image enhancement; effective methods. Medical imaging has been an important supplementary measure in doctors' diagnosis and treatment. As the developmental foundation of these imaging technologies, medical image processing leads to profound changes in modern medical diagnosis. Image enhancement technology plays an important role in the quantitative and qualitative analysis of medical imaging, and it directly affects the subsequent processing and analysis. The thesis chooses medical images (including X-ray, CT, and B-mode ultrasound images) as the main research object, studies the application of image enhancement technology in the field of medical image processing, and then sums up the most effective processing methods for image enhancement according to the characteristics of the images. Key words: medical image; medical image enhancement; effective method. 1 Introduction. In recent years, with the arrival of the information age and especially the digital age, digital medical imaging has become an important auxiliary means in doctors' diagnosis and treatment.
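Among the methods such a comparison typically covers, histogram-based contrast enhancement is the usual baseline. As one representative example (not necessarily the method the thesis finally recommends), here is contrast-limited adaptive histogram equalization (CLAHE) with OpenCV; the clip limit and tile size below are common defaults, not values from the thesis:

import cv2

# CLAHE: equalize the histogram in local tiles with a clipping limit,
# which boosts local contrast without over-amplifying noise.
img = cv2.imread("xray.png", cv2.IMREAD_GRAYSCALE)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)
cv2.imwrite("xray_enhanced.png", enhanced)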
Digital Image Processing: English Original Version and Translation
Digital Image Processing: English Original Version and Translation. Introduction: Digital image processing is a field of study that focuses on the analysis and manipulation of digital images using computer algorithms. It involves various techniques and methods to enhance, modify, and extract information from images. In this document, we will provide an overview of the English original version and translation of digital image processing materials. English original version: The English original version of digital image processing is a comprehensive textbook written by Rafael C. Gonzalez and Richard E. Woods. It covers the fundamental concepts and principles of image processing, including image formation, image enhancement, image restoration, image segmentation, and image compression. The book also explores advanced topics such as image recognition, image understanding, and computer vision. The English original version consists of 14 chapters, each focusing on different aspects of digital image processing. It starts with an introduction to the field, explaining the basic concepts and terminology. The subsequent chapters delve into topics such as image transforms, image enhancement in the spatial domain, image enhancement in the frequency domain, image restoration, color image processing, and image compression. The book provides a theoretical foundation for digital image processing and is accompanied by numerous examples and illustrations to aid understanding. It also includes MATLAB code and exercises to reinforce the concepts discussed in each chapter. The English original version is widely regarded as a comprehensive and authoritative reference in the field of digital image processing. Translation: The translation of the digital image processing textbook into another language is an essential task to make the knowledge and concepts accessible to a wider audience. The translation process involves converting the English original version into the target language while maintaining the accuracy and clarity of the content. To ensure a high-quality translation, it is crucial to select a professional translator with expertise in both the source language (English) and the target language. The translator should have a solid understanding of the subject matter and possess excellent language skills to convey the concepts accurately. During the translation process, the translator carefully reads and comprehends the English original version. They then analyze the text and identify any cultural or linguistic nuances that need to be considered while translating. The translator may consult subject matter experts or reference materials to ensure the accuracy of technical terms and concepts. The translation process involves several stages, including translation, editing, and proofreading. After the initial translation, the editor reviews the translated text to ensure its coherence, accuracy, and adherence to the target language's grammar and style. The proofreader then performs a final check to eliminate any errors or inconsistencies. It is important to note that the translation may require adapting certain examples, illustrations, or exercises to suit the target language and culture. This adaptation ensures that the translated version resonates with the local audience and facilitates better understanding of the concepts. Conclusion: Digital Image Processing: English Original Version and Translation provides a comprehensive overview of the field of digital image processing.
The English original version, authored by Rafael C. Gonzalez and Richard E. Woods, serves as a valuable reference for understanding the fundamental concepts and techniques in image processing. The translation process plays a crucial role in making this knowledge accessible to non-English speakers. It involves careful selection of a professional translator, thorough understanding of the subject matter, and meticulous translation, editing, and proofreading stages. The translated version aims to accurately convey the concepts while adapting to the target language and culture. By providing both the English original version and its translation, individuals from different linguistic backgrounds can benefit from the knowledge and advancements in digital image processing, fostering international collaboration and innovation in this field.
Digital Image Processing: Foreign Literature Translation (English Literature)
Digital Image Processing. 1 Introduction. Many operators have been proposed for representing a connected component in a digital image by a reduced amount of data or a simplified shape. In general we have to state that the development, choice and modification of such algorithms in practical applications are domain and task dependent, and there is no "best method". However, it is interesting to note that there are several equivalences between published methods and notions, and characterizing such equivalences or differences should be useful to categorize the broad diversity of published methods for skeletonization. Discussing equivalences is a main intention of this report. 1.1 Categories of Methods. One class of shape reduction operators is based on distance transforms. A distance skeleton is a subset of points of a given component such that every point of this subset represents the center of a maximal disc (labeled with the radius of this disc) contained in the given component. As an example in this first class of operators, this report discusses one method for calculating a distance skeleton using the d4 distance function which is appropriate to digitized pictures. A second class of operators produces median or center lines of the digital object in a non-iterative way. Normally such operators locate critical points first, and calculate a specified path through the object by connecting these points. The third class of operators is characterized by iterative thinning. Historically, Listing [10] used already in 1862 the term linear skeleton for the result of a continuous deformation of the frontier of a connected subset of a Euclidean space without changing the connectivity of the original set, until only a set of lines and points remains. Many algorithms in image analysis are based on this general concept of thinning. The goal is a calculation of characteristic properties of digital objects which are not related to size or quantity. Methods should be independent from the position of a set in the plane or space, grid resolution (for digitizing this set) or the shape complexity of the given set. In the literature the term "thinning" is not used in a unique interpretation besides that it always denotes a connectivity-preserving reduction operation applied to digital images, involving iterations of transformations of specified contour points into background points. A subset Q ⊆ I of object points is reduced by a defined set D in one iteration, and the result Q' = Q \ D becomes Q for the next iteration. Topology-preserving skeletonization is a special case of thinning resulting in a connected set of digital arcs or curves. A digital curve is a path p = p0, p1, p2, ..., pn = q such that pi is a neighbor of pi-1, 1 ≤ i ≤ n, and p = q. A digital curve is called simple if each point pi has exactly two neighbors in this curve. A digital arc is a subset of a digital curve such that p ≠ q. A point of a digital arc which has exactly one neighbor is called an end point of this arc. Within this third class of operators (thinning algorithms) we may classify with respect to algorithmic strategies: individual pixels are either removed in a sequential order or in parallel. For example, the often cited algorithm by Hilditch [5] is an iterative process of testing and deleting contour pixels sequentially in standard raster scan order. Another sequential algorithm by Pavlidis [12] uses the definition of multiple points and proceeds by contour following.
Examples of parallel algorithms in this third class are reduction operators which transform contour points into background points. Differences between these parallel algorithms are typically defined by tests implemented to ensure connectedness in a local neighborhood. The notion of a simple point is of basic importance for thinning, and it will be shown in this report that different definitions of simple points are actually equivalent. Several publications characterize properties of a set D of points (to be turned from object points to background points) to ensure that connectivity of object and background remain unchanged. The report discusses some of these properties in order to justify parallel thinning algorithms. 1.2 Basics. The used notation follows [17]. A digital image I is a function defined on a discrete set C, which is called the carrier of the image. The elements of C are grid points or grid cells, and the elements (p, I(p)) of an image are pixels (2D case) or voxels (3D case). The range of a (scalar) image is {0, ..., Gmax} with Gmax ≥ 1. The range of a binary image is {0, 1}. We only use binary images I in this report. Let ⟨I⟩ be the set of all pixel locations with value 1, i.e. ⟨I⟩ = I⁻¹(1). The image carrier is defined on an orthogonal grid in 2D or 3D space. There are two options: using the grid cell model, a 2D pixel location p is a closed square (2-cell) in the Euclidean plane and a 3D pixel location is a closed cube (3-cell) in the Euclidean space, where edges are of length 1 and parallel to the coordinate axes, and centers have integer coordinates. As a second option, using the grid point model, a 2D or 3D pixel location is a grid point. Two pixel locations p and q in the grid cell model are called 0-adjacent iff p ≠ q and they share at least one vertex (which is a 0-cell). Note that this specifies 8-adjacency in 2D or 26-adjacency in 3D if the grid point model is used. Two pixel locations p and q in the grid cell model are called 1-adjacent iff p ≠ q and they share at least one edge (which is a 1-cell). Note that this specifies 4-adjacency in 2D or 18-adjacency in 3D if the grid point model is used. Finally, two 3D pixel locations p and q in the grid cell model are called 2-adjacent iff p ≠ q and they share at least one face (which is a 2-cell). Note that this specifies 6-adjacency if the grid point model is used. Any of these adjacency relations Aα, α ∈ {0, 1, 2, 4, 6, 18, 26}, is irreflexive and symmetric on an image carrier C. The α-neighborhood Nα(p) of a pixel location p includes p and its α-adjacent pixel locations. Coordinates of 2D grid points are denoted by (i, j), with 1 ≤ i ≤ n and 1 ≤ j ≤ m; i, j are integers and n, m are the numbers of rows and columns of C. In 3D we use integer coordinates (i, j, k). Based on neighborhood relations we define connectedness as usual: two points p, q ∈ C are α-connected with respect to M ⊆ C and neighborhood relation Nα iff there is a sequence of points p = p0, p1, p2, ..., pn = q such that pi is an α-neighbor of pi-1, for 1 ≤ i ≤ n, and all points on this sequence are either in M or all in the complement of M. A subset M ⊆ C of an image carrier is called α-connected iff M is not empty and all points in M are pairwise α-connected with respect to set M. An α-component of a subset S of C is a maximal α-connected subset of S. The study of connectivity in digital images has been introduced in [15]. It follows that any set ⟨I⟩ consists of a number of α-components.
In case of the grid cell model, a component is the union of closed squares (2D case) or closed cubes (3D case). The boundary of a 2-cell is the union of its four edges and the boundary of a 3-cell is the union of its six faces. For practical purposes it is easy to use neighborhood operations (called local operations) on a digital image I which define a value at p ∈ C in the transformed image based on pixel values in I at p ∈ C and its immediate neighbors in Nα(p). 2 Non-iterative Algorithms. Non-iterative algorithms deliver subsets of components in specified scan orders without testing connectivity preservation in a number of iterations. In this section we only use the grid point model. 2.1 "Distance Skeleton" Algorithms. Blum [3] suggested a skeleton representation by a set of symmetric points. In a closed subset of the Euclidean plane a point p is called symmetric iff at least 2 points exist on the boundary with equal distances to p. For every symmetric point, the associated maximal disc is the largest disc in this set. The set of symmetric points, each labeled with the radius of the associated maximal disc, constitutes the skeleton of the set. This idea of presenting a component of a digital image as a "distance skeleton" is based on the calculation of a specified distance from each point in a connected subset M ⊆ C to the complement of the subset. The local maxima of the subset represent a "distance skeleton". In [15] the d4-distance is specified as follows. Definition 1. The distance d4(p, q) from point p to point q, p ≠ q, is the smallest positive integer n such that there exists a sequence of distinct grid points p = p0, p1, p2, ..., pn = q with pi a 4-neighbor of pi-1, 1 ≤ i ≤ n. If p = q the distance between them is defined to be zero. The distance d4(p, q) has all properties of a metric. Given a binary digital image, we transform this image into a new one which represents at each point p ∈ ⟨I⟩ the d4-distance to pixels having value zero. The transformation includes two steps. We apply function f1 to the image I in standard scan order, producing I*(i, j) = f1(i, j, I(i, j)), and f2 in reverse standard scan order, producing T(i, j) = f2(i, j, I*(i, j)), as follows:

f1(i, j, I(i, j)) = 0, if I(i, j) = 0; min{I*(i-1, j) + 1, I*(i, j-1) + 1}, if I(i, j) = 1 and i ≠ 1 or j ≠ 1; m + n, otherwise.

f2(i, j, I*(i, j)) = min{I*(i, j), T(i+1, j) + 1, T(i, j+1) + 1}.

The resulting image T is the distance transform image of I. Note that T is a set {[(i, j), T(i, j)] : 1 ≤ i ≤ n ∧ 1 ≤ j ≤ m}, and let T* ⊆ T such that [(i, j), T(i, j)] ∈ T* iff none of the four points in A4((i, j)) has a value in T equal to T(i, j) + 1. For all remaining points (i, j) let T*(i, j) = 0. This image T* is called the distance skeleton. Now we apply function g1 to the distance skeleton T* in standard scan order, producing T**(i, j) = g1(i, j, T*(i, j)), and g2 to the result of g1 in reverse standard scan order, producing T***(i, j) = g2(i, j, T**(i, j)), as follows:

g1(i, j, T*(i, j)) = max{T*(i, j), T**(i-1, j) - 1, T**(i, j-1) - 1}

g2(i, j, T**(i, j)) = max{T**(i, j), T***(i+1, j) - 1, T***(i, j+1) - 1}

The result T*** is equal to the distance transform image T. Both functions g1 and g2 define an operator G, with G(T*) = g2(g1(T*)) = T***, and we have [15]: Theorem 1. G(T*) = T, and if T0 is any subset of image T (extended to an image by having value 0 in all remaining positions) such that G(T0) = T, then T0(i, j) = T*(i, j) at all positions of T* with non-zero values.
Informally, the theorem says that the distance transform image is reconstructible from the distance skeleton, and that the skeleton is the smallest data set needed for such a reconstruction. The used distance d4 differs from the Euclidean metric. For instance, this d4-distance skeleton is not invariant under rotation. For an approximation of the Euclidean distance, some authors suggested the use of different weights for grid point neighborhoods [4]. Montanari [11] introduced a quasi-Euclidean distance. In general, the d4-distance skeleton is a subset of pixels (p, T(p)) of the transformed image, and it is not necessarily connected. 2.2 "Critical Points" Algorithms. The simplest category of these algorithms determines the midpoints of subsets of connected components in standard scan order for each row. Let l be an index for the number of connected components in one row of the original image. We define the following functions for 1 ≤ i ≤ n: ei(l) = j if this is the l-th case with I(i, j) = 1 ∧ I(i, j-1) = 0 in row i, counting from the left, with I(i, -1) = 0; oi(l) = j if this is the l-th case with I(i, j) = 1 ∧ I(i, j+1) = 0 in row i, counting from the left, with I(i, m+1) = 0; mi(l) = ei(l) + int((oi(l) - ei(l))/2). The result of scanning row i is a set of coordinates (i, mi(l)) of midpoints of the connected components in row i. The set of midpoints of all rows constitutes a critical point skeleton of an image I. This method is computationally efficient. The results are subsets of pixels of the original objects, and these subsets are not necessarily connected. They can form "noisy branches" when object components are nearly parallel to image rows. They may be useful for special applications where the scanning direction is approximately perpendicular to the main orientations of object components. References: [1] C. Arcelli, L. Cordella, S. Levialdi: Parallel thinning of binary pictures. Electron. Lett. 11: 148-149, 1975. [2] C. Arcelli, G. Sanniti di Baja: Skeletons of planar patterns. In: Topological Algorithms for Digital Image Processing (T. Y. Kong, A. Rosenfeld, eds.), North-Holland, 99-143, 1996. [3] H. Blum: A transformation for extracting new descriptors of shape. In: Models for the Perception of Speech and Visual Form (W. Wathen-Dunn, ed.), MIT Press, Cambridge, Mass., 362-380, 1967. Chinese translation: Digital image processing. 1 Introduction. Many operators have been proposed for representing a connected component in a digital image by a reduced amount of data or a simplified shape.
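The two-scan construction of the distance transform T and the local-maximum test defining T* translate directly into code. A sketch in Python/NumPy follows (0-based indexing instead of the text's 1-based convention; background pixels are assumed to carry value 0):

import numpy as np

def d4_distance_transform(img):
    """Two-pass d4 (city-block) distance transform, following the
    f1 (forward scan) / f2 (reverse scan) functions in the text."""
    n, m = img.shape
    INF = n + m
    T = np.where(img == 0, 0, INF).astype(int)
    # Forward pass (f1): propagate distances from top-left.
    for i in range(n):
        for j in range(m):
            if T[i, j]:
                if i > 0: T[i, j] = min(T[i, j], T[i - 1, j] + 1)
                if j > 0: T[i, j] = min(T[i, j], T[i, j - 1] + 1)
    # Reverse pass (f2): propagate from bottom-right.
    for i in range(n - 1, -1, -1):
        for j in range(m - 1, -1, -1):
            if i < n - 1: T[i, j] = min(T[i, j], T[i + 1, j] + 1)
            if j < m - 1: T[i, j] = min(T[i, j], T[i, j + 1] + 1)
    return T

def distance_skeleton(T):
    """Keep points none of whose 4-neighbors carries value T(p)+1
    (the local maxima that form the distance skeleton T*)."""
    n, m = T.shape
    S = np.zeros_like(T)
    for i in range(n):
        for j in range(m):
            if T[i, j] and not any(
                    0 <= i + di < n and 0 <= j + dj < m
                    and T[i + di, j + dj] == T[i, j] + 1
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))):
                S[i, j] = T[i, j]
    return S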
Region-Based Segmentation and Segmentation by Morphological Watersheds (Part)
Undergraduate Graduation Project (Thesis) Foreign Literature Translation
Title of translated text: Region-Based Segmentation and Segmentation by Morphological Watersheds (Part)
School: School of Information Science and Engineering  Major: Automation  Student ID: 200604134167  Student: Cheng Juntao  Supervisor: Wang Bin  Date: June 6, 2010

Digital Image Processing (Second Edition), 10.4 Region-Based Segmentation and 10.5 Segmentation by Morphological Watersheds (Part), from page 612 to page 622. By Rafael C. Gonzalez and Richard E. Woods. Publishing House of Electronics Industry, Beijing, March 2008.

10.4 Region-Based Segmentation

The objective of segmentation is to partition an image into regions.
In Sections 10.1 and 10.2 we approached this problem by attempting to find boundaries between regions based on discontinuities in gray levels, whereas in Section 10.3 segmentation was accomplished via thresholds based on the distribution of pixel properties, such as gray-level values or color.
In this section we discuss segmentation techniques that are based on finding the regions directly.
10.4.1 Basic Formulation

Let R represent the entire image region.
We may view segmentation as a process that partitions R into n subregions R1, R2, ..., Rn such that:
(a) R1 ∪ R2 ∪ ... ∪ Rn = R;
(b) Ri is a connected region, i = 1, 2, ..., n;
(c) Ri ∩ Rj = ∅ for all i and j, i ≠ j;
(d) P(Ri) = TRUE for i = 1, 2, ..., n;
(e) P(Ri ∪ Rj) = FALSE for any adjacent regions Ri and Rj.
Here P(Ri) is a logical predicate defined over the points in set Ri.
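These conditions suggest a constructive approach: grow each region from a seed pixel, appending neighboring pixels as long as the predicate P stays TRUE. The sketch below is a minimal illustration of that idea, not the textbook's code; the particular predicate (closeness of gray level to the seed value) and the threshold are assumptions made for the example.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, thresh=10):
    """Grow one connected region from a seed pixel of a gray-level image.

    The predicate P is TRUE for a pixel while its gray level differs from
    the seed's gray level by at most `thresh` (an assumed choice).
    Returns a boolean mask of the grown region.
    """
    rows, cols = img.shape
    region = np.zeros((rows, cols), dtype=bool)
    seed_val = int(img[seed])
    region[seed] = True
    queue = deque([seed])
    while queue:
        i, j = queue.popleft()
        # Appending only 4-neighbors keeps the region connected (condition (b))
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < rows and 0 <= nj < cols and not region[ni, nj]:
                if abs(int(img[ni, nj]) - seed_val) <= thresh:  # predicate P
                    region[ni, nj] = True
                    queue.append((ni, nj))
    return region
```

Growing from several disjoint seeds yields connected, disjoint regions (conditions (b) and (c)) by construction; whether the full partition satisfies conditions (a), (d) and (e) depends on how the seeds and the predicate are chosen.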
An Introduction to Digital Image Processing (Foreign Literature Translation)
Appendix 1: Original Text
Source: The 21st Century National Applied Undergraduate Electronic Communication Series of Practical Teaching Plans, English for Information and Communication Engineering, ch02_1.pdf, pp. 120-124. Editors: Han Dingding, Zhao Jumin, et al.

Text A: An Introduction to Digital Image Processing

1. Introduction
Digital image processing remains a challenging domain of programming for several reasons. First, the issue of digital image processing appeared relatively late in computer history; it had to wait for the arrival of the first graphical operating systems to become a true matter. Secondly, digital image processing requires the most careful optimizations, especially for real-time applications. Comparing image processing and audio processing is a good way to fix ideas. Consider the memory bandwidth needed to examine the pixels of a 320x240, 32-bit bitmap 30 times a second: about 10 MB/sec. With the same quality standard, real-time processing of a stereo audio stream needs 44100 (samples per second) x 2 (bytes per sample per channel) x 2 (channels) = 176 KB/sec, which is about 50 times less. Obviously we will not be able to use the same techniques for both audio and image signal processing. Finally, digital image processing is by definition a two-dimensional domain; this somehow complicates things when elaborating digital filters.

We will explore some of the existing methods used to deal with digital images, starting with a very basic approach to color interpretation. At a more advanced level of interpretation come matrix convolution and digital filters. Finally, we will have an overview of some applications of image processing. The aim of this document is to give the reader a little overview of the existing techniques in digital image processing. We will neither penetrate deep into theory nor into the coding itself; we will concentrate on the algorithms themselves, the methods. This document should be used as a source of ideas only, not as a source of code.

2. A simple approach to image processing
(1) The color data: vector representation
① Bitmaps
The original and basic way of representing a digital colored image in a computer's memory is obviously a bitmap. A bitmap is constituted of rows of pixels, "pixel" being a contraction of the words "Picture Element". Each pixel has a particular value which determines its appearing color. This value is qualified by three numbers giving the decomposition of the color into the three primary colors Red, Green and Blue. Any color visible to the human eye can be represented this way. The decomposition of a color into the three primary colors is quantified by a number between 0 and 255. For example, white is coded as R = 255, G = 255, B = 255; black is (R, G, B) = (0, 0, 0); and, say, bright pink is (255, 0, 255). In other words, an image is an enormous two-dimensional array of color values, pixels, each of them coded on 3 bytes representing the three primary colors. This allows the image to contain a total of 256x256x256 = 16.8 million different colors. This technique is also known as RGB encoding, and is specifically adapted to human vision. With cameras or other measuring instruments we are capable of "seeing" thousands of other "colors", in which cases RGB encoding is inappropriate.
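As a quick sanity check of the figures quoted above (the 16.8 million colors and the video/audio bandwidth comparison from the introduction), the arithmetic can be spelled out directly; the numbers below are exactly those assumed in the text.

```python
# Total number of representable colors with 3 bytes per pixel
colors = 256 ** 3                      # 16_777_216, i.e. about 16.8 million

# Video: 320x240 pixels, 32 bits (4 bytes) per pixel, 30 frames per second
video_rate = 320 * 240 * 4 * 30        # 9_216_000 bytes/s, roughly 10 MB/sec

# Audio: 44100 samples/s x 2 bytes per sample per channel x 2 channels
audio_rate = 44100 * 2 * 2             # 176_400 bytes/s, roughly 176 KB/sec

print(video_rate / audio_rate)         # about 52: the "50 times less" of the text
```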
The range of 0-255 was agreed on for two good reasons. The first is that the human eye is not sensitive enough to tell apart more than 256 levels of intensity (1/256 = 0.39%) for a color. That is to say, an image presented to a human observer will not be improved by using more than 256 levels of gray (256 shades of gray between black and white), so 256 levels seem to be sufficient quality. The second reason for the value 255 is obviously that it is convenient for computer storage: a byte, which is the computer's memory unit, can code up to 256 values.

As opposed to the audio signal, which is coded in the time domain, the image signal is coded in a two-dimensional spatial domain. The raw image data are much more straightforward and easy to analyze than the temporal-domain data of the audio signal. This is why we will be able to build lots of effects and filters for images without transforming the source data, while this would be totally impossible for an audio signal. This first part deals with the simple effects and filters you can compute without transforming the source data, just by analyzing the raw image signal as it is.

The standard dimensions, also called resolution, of a bitmap are about 500 rows by 500 columns. This is the resolution encountered in standard analog television and standard computer applications. You can easily calculate the memory space such a bitmap requires: 500x500 pixels, each coded on three bytes, makes 750 KB. It might not seem enormous compared to the size of hard drives, but if you must process an image in real time then things get tougher. Rendering images fluidly demands a minimum of 30 images per second, so the required bandwidth of 10 MB/sec is enormous. We will see later that the limitation of data access and transfer in RAM has a crucial importance in image processing, and sometimes it happens to be much more important than the limitation of CPU computing, which may seem quite different from what one is used to in optimization issues. Notice that, with modern compression techniques such as JPEG 2000, the total size of an image can easily be reduced by a factor of 50 without losing much quality, but that is another topic.

② Vector representation of colors
As we have seen, in a bitmap colors are coded on three bytes representing their decomposition into the three primary colors. It sounds obvious to a mathematician to immediately interpret colors as vectors in a three-dimensional space where each axis stands for one of the primary colors. We can therefore use most geometric mathematical concepts to deal with our colors, such as norms, scalar products, projections, rotations or distances. This will be really interesting for some of the filters we will see soon. Figure 1 illustrates this new interpretation.

(2) Immediate application to filters
① Edge detection
From what we have said before, we can quantify the "difference" between two colors by computing the geometric distance between the vectors representing those two colors. Let us consider two colors C1 = (R1, G1, B1) and C2 = (R2, G2, B2); the distance between the two colors is given by the formula:

D(C1, C2) = √((R1 − R2)² + (G1 − G2)² + (B1 − B2)²)

This leads us to our first filter: edge detection. The aim of edge detection is to determine the edges of shapes in a picture and to be able to draw a result bitmap where edges are in white on a black background (for example). The idea is very simple: we go through the image pixel by pixel and compare the color of each pixel to its right neighbor and to its bottom neighbor. If one of these comparisons results in a too big difference, the pixel studied is part of an edge and should be turned to white; otherwise it is kept in black.
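Translated into code, the loop just described might look as follows. This is a sketch of the stated method, not the author's code, and the threshold deciding what counts as a "too big" difference is an assumed parameter, since the text leaves it open.

```python
import numpy as np

def edge_detect(img, thresh=30.0):
    """img: H x W x 3 array of RGB values. Returns a binary edge map
    (255 = edge, on a black background), following the method above."""
    h, w, _ = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    f = img.astype(np.float64)
    for i in range(h - 1):          # the last row and column stay black
        for j in range(w - 1):
            # Euclidean color distance D to the right and bottom neighbors
            d_right = np.sqrt(np.sum((f[i, j] - f[i, j + 1]) ** 2))
            d_down = np.sqrt(np.sum((f[i, j] - f[i + 1, j]) ** 2))
            if d_right > thresh or d_down > thresh:
                out[i, j] = 255
    return out
```

As the text notes, the two square roots can be removed by comparing squared distances against thresh**2; in practice the whole computation also vectorizes to a few NumPy array operations, which removes the per-pixel access cost that dominates the explicit loop.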
The fact that we compare each pixel with its bottom and right neighbors comes from the fact that images are in two dimensions. Indeed, if you imagine an image with only alternating horizontal stripes of red and blue, the algorithm wouldn't see the edges of those stripes if it only compared a pixel to its right neighbor. Thus the two comparisons for each pixel are necessary.

This algorithm was tested on several source images of different types and it gives fairly good results. It is mainly limited in speed by frequent memory accesses. The two square roots can be removed easily by squaring the comparison; however, the color extractions cannot be improved very easily. If we consider that the longest operations are the get-pixel and put-pixel functions, we obtain a complexity of 4xNxM, where N is the number of rows and M the number of columns. This is not fast enough to be computed in real time: for a 300x300x32 image it gives about 26 transforms per second on an Athlon XP 1600+, which is quite slow indeed.

A few words about the results of this algorithm: notice that the quality of the results depends on the sharpness of the source image. If the source image is very sharp-edged, the result will approach perfection; however, if you have a very blurry source you might want to pass it through a sharpness filter first, which we will study later. Another remark: you can also compare each pixel with its second or third nearest neighbors on the right and on the bottom instead of the nearest neighbors. The edges will be thicker but also more exact, depending on the source image's sharpness. Finally, we will see later that there is another way to do edge detection with matrix convolution.

② Color extraction
The other immediate application of pixel comparison is color extraction. Instead of comparing each pixel with its neighbors, we compare it with a given color C1. This algorithm tries to detect all the objects in the image that are colored with C1. This is quite useful in robotics, for example: it enables you to search streaming images for a particular color, so you can make your robot go fetch a red ball, for instance. We will call the reference color, the one we are looking for in the image, C0 = (R0, G0, B0).

Once again, even though the square root can easily be removed, that doesn't really improve the speed of the algorithm. What really slows down the whole loop are the NxM get-pixel memory accesses and put-pixel calls. This determines the complexity of the algorithm: 2xNxM, where N and M are respectively the numbers of rows and columns in the bitmap. The effective speed measured on my computer is about 40 transforms per second on a 300x300x32 source bitmap.
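The color-extraction loop admits the same treatment. A minimal vectorized sketch, with the tolerance again an assumed parameter; squaring the comparison removes the square root, as the text suggests:

```python
import numpy as np

def extract_color(img, c0, tol=40.0):
    """Mark every pixel of img (H x W x 3 RGB) whose color distance to the
    reference color c0 = (R0, G0, B0) is within tol. Vectorized, so the
    NxM pixel accesses happen inside NumPy rather than in a Python loop."""
    diff = img.astype(np.float64) - np.asarray(c0, dtype=np.float64)
    dist2 = np.sum(diff ** 2, axis=-1)      # squared distance, no sqrt needed
    return (dist2 <= tol * tol).astype(np.uint8) * 255
```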
3. JPEG image compression theory
JPEG compression is carried out in four steps:

(1) Color model conversion and sampling
The RGB color system is the most common way of representing color. JPEG, however, uses the YCbCr color system. To process an ordinary full-color image with JPEG, the RGB image data must first be converted to YCbCr data. Y represents luminance; Cb and Cr represent hue and saturation. The conversion is given by:

Y = 0.2990R + 0.5870G + 0.1140B
Cb = −0.1687R − 0.3313G + 0.5000B + 128
Cr = 0.5000R − 0.4187G − 0.0813B + 128

The human eye is more sensitive to low-frequency data than to high-frequency data; in fact, it is much more sensitive to changes in brightness than to changes in color, i.e., the Y component of the data is the more important one. Since the Cb and Cr components are comparatively unimportant, only part of their data needs to be kept, which increases the compression ratio. JPEG usually offers two sampling schemes, YUV411 and YUV422, which denote the sampling ratios of the Y, Cb and Cr components.

(2) DCT transformation
DCT stands for discrete cosine transform. It converts a block of light-intensity data into frequency data, so as to reveal how the intensity varies. If the high-frequency data are modified and the data are then transformed back to their original form, the result clearly differs somewhat from the original data, but the human eye does not easily notice the difference. For compression, the original image data are divided into 8x8 matrices of data. JPEG treats the luminance (Y) matrices together with the chrominance (Cb) and saturation (Cr) matrices covering the same area as a basic unit called the MCU; each MCU contains no more than 10 matrices. For example, when rows and columns are both sampled at a 4:2:2 ratio, each MCU contains four luminance matrices, one chrominance matrix and one saturation matrix. When the image data are divided into 8x8 matrices, 128 must also be subtracted from each value before it is substituted into the DCT formula, because the range of values accepted by the DCT formula is −128 to +127.

(3) Quantization
After the image data have been converted to frequency coefficients, a quantization step is still required before the coding phase. Quantization uses two 8x8 matrices of data, one specifically for the luminance frequency coefficients and one for the chrominance frequency coefficients. Each frequency coefficient is divided by the corresponding value of the quantization matrix, and the quotient is rounded to the nearest integer; this completes the quantization. After quantization, the frequency coefficients are converted from floating-point to integer values, which facilitates the final encoding. However, since after the quantization phase all data retain only integer approximations, some data content is lost once again.

(4) Coding
Huffman coding, being free of patent issues, has become the most commonly used JPEG encoding; it is usually carried out over a complete MCU. In coding, the DC value and the 63 AC values of each matrix use different Huffman code tables, and luminance and chrominance also require different Huffman code tables, so a total of four code tables is needed to complete the JPEG coding. The DC values are coded with differential pulse code modulation, i.e., within the same image component, the difference between each DC value and the previous DC value is encoded. The main reason for using differential coding of the DC values is that in a continuous-tone image the differences are mostly smaller than the original values, so encoding the difference requires fewer bits than encoding the original value. For example, a difference of 5 has the binary representation 101; if the difference is −5, it is first changed to the positive integer 5 and then converted to its one's complement. The so-called one's complement simply inverts every bit: each bit that is 0 becomes 1, and each bit that is 1 becomes 0. A difference of 5 retains 3 significant bits; a code table lists, for each range of difference values, the number of bits to be retained. In front of the difference bits, an additional Huffman code value is prepended: for example, for a luminance difference of 5 (101) with 3 retained bits, the Huffman code value is 100, and connecting the two together gives 100101. Two further code tables give the luminance and chrominance DC difference encodings; according to their contents, the appropriate Huffman code is prepended to the DC difference value to complete the DC coding.

4. Conclusions
Digital image processing is far from being a simple transposition of audio signal principles to a two-dimensional space. The image signal has its own particular properties, and therefore we have to deal with it in a specific way. The Fast Fourier Transform, for example, which was such a practical tool in audio processing, becomes useless in image processing. Conversely, digital filters are easier to create directly in image processing, without any signal transforms. Digital image processing has become a vast domain of modern signal technologies. Its applications pass far beyond simple aesthetic considerations; they include medical imagery, television and multimedia signals, security, portable digital devices, video compression, and even digital movies. We have been flying over some elementary notions in image processing, but there is yet a lot more to explore. If you are beginning in this topic, I hope this paper has given you the taste and the motivation to carry on.

Appendix 2: Translation
Source: The 21st Century National Applied Undergraduate Electronic Communication Series of Practical Teaching Plans, English for Information and Communication Engineering, ch02_1.pdf, pp. 120-124. Editors: Han Dingding, Zhao Jumin, et al.
Text: An Introduction to Digital Image Processing
1. Introduction
Several reasons make digital image processing a challenging field.
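The color conversion in step (1) is the easiest piece to make concrete. A minimal sketch using the three formulas above, written in matrix form with NumPy; the function name and array layout are our own choices, while the coefficients come straight from the text.

```python
import numpy as np

# RGB -> YCbCr conversion matrix, coefficients as given in the text
M = np.array([[ 0.2990,  0.5870,  0.1140],
              [-0.1687, -0.3313,  0.5000],
              [ 0.5000, -0.4187, -0.0813]])

def rgb_to_ycbcr(img):
    """img: H x W x 3 RGB array. Returns an H x W x 3 YCbCr array;
    the Cb and Cr planes are offset by 128 to fit the 0-255 range."""
    ycc = img.astype(np.float64) @ M.T
    ycc[..., 1:] += 128.0
    return ycc
```

The subsequent steps then operate per 8x8 block of each plane: shift by −128, apply the DCT, divide elementwise by the quantization matrix with rounding, and finally entropy-code the result, as described above.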
Digital Image Processing Courseware (Gonzalez, Third Edition), English Translation
The image on the left is a standard image used in image processing to test the actual effects of computer algorithms. The name of this image is Lenna. It is made up of a set of numbers. The original image is 256 pixels in both width and height, with eight bits per pixel. It is in BMP format and about 66K bytes in size.
The objective world is a three-dimensional space, but most images are two-dimensional. In reflecting the three-dimensional world, two-dimensional images inevitably lose part of the information. Even the information that is recorded can be distorted, to the point where objects become difficult to recognize. Therefore, it is necessary to recover and reconstruct information from images, and to analyze and extract their mathematical models, so that people can gain a correct and profound understanding of what the image records. This is the process of image processing.
Digital Image Processing

1 Introduction

Many operators have been proposed for presenting a connected component in a digital image by a reduced amount of data or a simplified shape. In general we have to state that the development, choice and modification of such algorithms in practical applications are domain and task dependent, and there is no "best method". However, it is interesting to note that there are several equivalences between published methods and notions, and characterizing such equivalences or differences should be useful to categorize the broad diversity of published methods for skeletonization. Discussing equivalences is a main intention of this report.

1.1 Categories of Methods

One class of shape reduction operators is based on distance transforms. A distance skeleton is a subset of points of a given component such that every point of this subset represents the center of a maximal disc (labeled with the radius of this disc) contained in the given component. As an example in this first class of operators, this report discusses one method for calculating a distance skeleton using the d4 distance function, which is appropriate for digitized pictures. A second class of operators produces median or center lines of the digital object in a non-iterative way. Normally such operators locate critical points first, and calculate a specified path through the object by connecting these points.

The third class of operators is characterized by iterative thinning. Historically, Listing [10] already used in 1862 the term linear skeleton for the result of a continuous deformation of the frontier of a connected subset of a Euclidean space without changing the connectivity of the original set, until only a set of lines and points remains. Many algorithms in image analysis are based on this general concept of thinning. The goal is a calculation of characteristic properties of digital objects which are not related to size or quantity. Methods should be independent of the position of a set in the plane or space, of the grid resolution (for digitizing this set), and of the shape complexity of the given set. In the literature the term "thinning" is not used in a unique interpretation, besides that it always denotes a connectivity-preserving reduction operation applied to digital images, involving iterations of transformations of specified contour points into background points. A subset Q ⊆ I of object points is reduced by a defined set D in one iteration, and the result Q′ = Q \ D becomes Q for the next iteration.

Topology-preserving skeletonization is a special case of thinning, resulting in a connected set of digital arcs or curves. A digital curve is a path p = p0, p1, p2, ..., pn = q such that pi is a neighbor of pi−1, 1 ≤ i ≤ n, and p = q. A digital curve is called simple if each point pi has exactly two neighbors in this curve. A digital arc is a subset of a digital curve such that p ≠ q. A point of a digital arc which has exactly one neighbor is called an end point of this arc. Within this third class of operators (thinning algorithms) we may classify with respect to algorithmic strategies: individual pixels are either removed in a sequential order or in parallel. For example, the often cited algorithm by Hilditch [5] is an iterative process of testing and deleting contour pixels sequentially in standard raster scan order. Another sequential algorithm, by Pavlidis [12], uses the definition of multiple points and proceeds by contour following.
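The iteration Q′ = Q \ D described above can be phrased as a generic scaffold that is independent of the particular deletability test. The sketch below is not any specific published algorithm: `is_deletable` stands for whatever contour-point or simple-point test (e.g. Hilditch's conditions) a concrete thinning method supplies, and the raster-scan ordering is only one possible choice.

```python
def thin(object_points, is_deletable):
    """Generic thinning loop: repeatedly remove the set D of deletable
    points until a fixed point is reached (Q' = Q \\ D becomes Q).

    object_points : set of (i, j) grid points with value 1.
    is_deletable(p, Q) : caller-supplied test; True iff turning p into a
    background point preserves connectivity of object and background.
    """
    Q = set(object_points)
    while True:
        # D is computed on the current Q and removed all at once, matching
        # the parallel formulation; a sequential method would instead delete
        # each point immediately as the scan visits it.
        D = {p for p in sorted(Q) if is_deletable(p, Q)}
        if not D:
            return Q
        Q -= D
```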
Examples of parallel algorithms in this third class are reduction operators which transform contour points into background points. Differences between these parallel algorithms are typically defined by tests implemented to ensure connectedness in a local neighborhood. The notion of a simple point is of basic importance for thinning, and it will be shown in this report that different definitions of simple points are actually equivalent. Several publications characterize properties of a set D of points (to be turned from object points to background points) which ensure that the connectivity of object and background remains unchanged. The report discusses some of these properties in order to justify parallel thinning algorithms.

1.2 Basics

The notation used follows [17]. A digital image I is a function defined on a discrete set C, which is called the carrier of the image. The elements of C are grid points or grid cells, and the elements (p, I(p)) of an image are pixels (2D case) or voxels (3D case). The range of a (scalar) image is {0, ..., Gmax} with Gmax ≥ 1. The range of a binary image is {0, 1}. We only use binary images I in this report. Let ⟨I⟩ be the set of all pixel locations with value 1, i.e. ⟨I⟩ = I⁻¹(1). The image carrier is defined on an orthogonal grid in 2D or 3D space. There are two options: using the grid cell model, a 2D pixel location p is a closed square (2-cell) in the Euclidean plane and a 3D pixel location is a closed cube (3-cell) in the Euclidean space, where edges are of length 1 and parallel to the coordinate axes, and centers have integer coordinates. As a second option, using the grid point model, a 2D or 3D pixel location is a grid point.

Two pixel locations p and q in the grid cell model are called 0-adjacent iff p ≠ q and they share at least one vertex (which is a 0-cell). Note that this specifies 8-adjacency in 2D or 26-adjacency in 3D if the grid point model is used. Two pixel locations p and q in the grid cell model are called 1-adjacent iff p ≠ q and they share at least one edge (which is a 1-cell). Note that this specifies 4-adjacency in 2D or 18-adjacency in 3D if the grid point model is used. Finally, two 3D pixel locations p and q in the grid cell model are called 2-adjacent iff p ≠ q and they share at least one face (which is a 2-cell). Note that this specifies 6-adjacency if the grid point model is used. Any of these adjacency relations Aα, α ∈ {0, 1, 2, 4, 6, 18, 26}, is irreflexive and symmetric on an image carrier C. The α-neighborhood Nα(p) of a pixel location p includes p and its α-adjacent pixel locations. Coordinates of 2D grid points are denoted by (i, j), with 1 ≤ i ≤ n and 1 ≤ j ≤ m; i, j are integers and n, m are the numbers of rows and columns of C. In 3D we use integer coordinates (i, j, k).

Based on neighborhood relations we define connectedness as usual: two points p, q ∈ C are α-connected with respect to M ⊆ C and neighborhood relation Nα iff there is a sequence of points p = p0, p1, p2, ..., pn = q such that pi is an α-neighbor of pi−1, for 1 ≤ i ≤ n, and all points of this sequence are either in M or all in the complement of M. A subset M ⊆ C of an image carrier is called α-connected iff M is not empty and all points in M are pairwise α-connected with respect to the set M. An α-component of a subset S of C is a maximal α-connected subset of S. The study of connectivity in digital images was introduced in [15]. It follows that any set ⟨I⟩ consists of a number of α-components.
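These definitions translate directly into code. As a minimal sketch (grid point model, 2D), the functions below enumerate the α-adjacent neighbors of a point for α ∈ {4, 8} and use them to collect the α-component of a point, following the definition of connectedness above; the function names are our own.

```python
from collections import deque

def alpha_neighbors(p, alpha=4):
    """The alpha-adjacent 2D neighbors of grid point p = (i, j), alpha in {4, 8}.
    (N_alpha(p) as defined above additionally includes p itself.)"""
    i, j = p
    n4 = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    if alpha == 4:
        return n4
    return n4 + [(i - 1, j - 1), (i - 1, j + 1),
                 (i + 1, j - 1), (i + 1, j + 1)]

def alpha_component(p, M, alpha=4):
    """Maximal alpha-connected subset of M containing p (an alpha-component).
    M is a set of grid points, e.g. the object points of a binary image."""
    comp, queue = {p}, deque([p])
    while queue:
        q = queue.popleft()
        for r in alpha_neighbors(q, alpha):
            if r in M and r not in comp:
                comp.add(r)
                queue.append(r)
    return comp
```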
In case of the grid cell model, a component is the union of closed squares (2D case) or closed cubes (3D case). The boundary of a 2-cell is the union of its four edges, and the boundary of a 3-cell is the union of its six faces. For practical purposes it is easy to use neighborhood operations (called local operations) on a digital image I which define a value at p ∈ C in the transformed image based on pixel values in I at p ∈ C and its immediate neighbors in Nα(p).

2 Non-iterative Algorithms

Non-iterative algorithms deliver subsets of components in specified scan orders without testing connectivity preservation in a number of iterations. In this section we only use the grid point model.

2.1 "Distance Skeleton" Algorithms

Blum [3] suggested a skeleton representation by a set of symmetric points. In a closed subset of the Euclidean plane a point p is called symmetric iff at least 2 points exist on the boundary with equal distances to p. For every symmetric point, the associated maximal disc is the largest disc in this set. The set of symmetric points, each labeled with the radius of the associated maximal disc, constitutes the skeleton of the set. This idea of presenting a component of a digital image as a "distance skeleton" is based on the calculation of a specified distance from each point in a connected subset M ⊆ C to the complement of the subset. The local maxima of the subset represent a "distance skeleton". In [15] the d4-distance is specified as follows.

Definition 1 The distance d4(p, q) from point p to point q, p ≠ q, is the smallest positive integer n such that there exists a sequence of distinct grid points p = p0, p1, p2, ..., pn = q with pi a 4-neighbor of pi−1, 1 ≤ i ≤ n.

If p = q the distance between them is defined to be zero. The distance d4(p, q) has all properties of a metric. Given a binary digital image, we transform this image into a new one which represents at each point p ∈ ⟨I⟩ the d4-distance to pixels having value zero. The transformation includes two steps. We apply function f1 to the image I in standard scan order, producing I′(i, j) = f1(i, j, I(i, j)), and f2 in reverse standard scan order, producing T(i, j) = f2(i, j, I′(i, j)), as follows:

f1(i, j, I(i, j)) =
  0                                        if I(i, j) = 0,
  min{I′(i − 1, j) + 1, I′(i, j − 1) + 1}  if I(i, j) = 1 and (i ≠ 1 or j ≠ 1),
  m + n                                    otherwise;

f2(i, j, I′(i, j)) = min{I′(i, j), T(i + 1, j) + 1, T(i, j + 1) + 1}.

The resulting image T is the distance transform image of I. Note that T is a set {[(i, j), T(i, j)] : 1 ≤ i ≤ n ∧ 1 ≤ j ≤ m}. Let T′ ⊆ T such that [(i, j), T(i, j)] ∈ T′ iff none of the four points in A4((i, j)) has a value in T equal to T(i, j) + 1; for all remaining points (i, j) let T′(i, j) = 0. This image T′ is called the distance skeleton. Now we apply function g1 to the distance skeleton T′ in standard scan order, producing T′′(i, j) = g1(i, j, T′(i, j)), and g2 to the result of g1 in reverse standard scan order, producing T′′′(i, j) = g2(i, j, T′′(i, j)), as follows:

g1(i, j, T′(i, j)) = max{T′(i, j), T′′(i − 1, j) − 1, T′′(i, j − 1) − 1},
g2(i, j, T′′(i, j)) = max{T′′(i, j), T′′′(i + 1, j) − 1, T′′′(i, j + 1) − 1}.

The result T′′′ is equal to the distance transform image T. Both functions g1 and g2 define an operator G, with G(T′) = g2(g1(T′)) = T′′′, and we have [15]:

Theorem 1 G(T′) = T, and if T0 is any subset of image T (extended to an image by having value 0 in all remaining positions) such that G(T0) = T, then T0(i, j) = T′(i, j) at all positions of T′ with non-zero values.

Informally, the theorem says that the distance transform image is reconstructible from the distance skeleton, and that the skeleton is the smallest data set needed for such a reconstruction.
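The two-pass transformation defined by f1 and f2 is straightforward to implement. The sketch below follows the formulas above for a 0/1 NumPy array, using m + n as the large initialization value; the handling of border pixels (where a scan has no predecessor) is our own reading of the formulas.

```python
import numpy as np

def d4_distance_transform(I):
    """Two-pass d4 distance transform of a binary image I (values 0 and 1):
    f1 in standard scan order, then f2 in reverse standard scan order."""
    n, m = I.shape
    big = m + n
    T = np.zeros((n, m), dtype=int)
    # Forward pass (f1): distances propagate from the top and left neighbors.
    for i in range(n):
        for j in range(m):
            if I[i, j] == 0:
                T[i, j] = 0
            else:
                up = T[i - 1, j] + 1 if i > 0 else big
                left = T[i, j - 1] + 1 if j > 0 else big
                T[i, j] = min(up, left, big)
    # Backward pass (f2): distances propagate from the bottom and right neighbors.
    for i in reversed(range(n)):
        for j in reversed(range(m)):
            down = T[i + 1, j] + 1 if i < n - 1 else big
            right = T[i, j + 1] + 1 if j < m - 1 else big
            T[i, j] = min(T[i, j], down, right)
    return T
```

Extracting the distance skeleton T′ is then a matter of keeping exactly those pixels none of whose 4-neighbors carries the value T(i, j) + 1, as stated above.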
The distance d4 used differs from the Euclidean metric. For instance, this d4-distance skeleton is not invariant under rotation. For an approximation of the Euclidean distance, some authors suggested the use of different weights for grid point neighborhoods [4]. Montanari [11] introduced a quasi-Euclidean distance. In general, the d4-distance skeleton is a subset of pixels (p, T(p)) of the transformed image, and it is not necessarily connected.

2.2 "Critical Points" Algorithms

The simplest category of these algorithms determines the midpoints of subsets of connected components in standard scan order for each row. Let l be an index for the number of connected components in one row of the original image. We define the following functions for 1 ≤ i ≤ n:

ei(l) = j, if this is the l-th case of I(i, j) = 1 ∧ I(i, j − 1) = 0 in row i, counting from the left, with I(i, 0) = 0;
oi(l) = j, if this is the l-th case of I(i, j) = 1 ∧ I(i, j + 1) = 0 in row i, counting from the left, with I(i, m + 1) = 0;
mi(l) = ei(l) + int((oi(l) − ei(l))/2).

The result of scanning row i is a set of coordinates (i, mi(l)) of midpoints of the connected components in row i. The set of midpoints of all rows constitutes a critical point skeleton of an image I. This method is computationally efficient. The results are subsets of pixels of the original objects, and these subsets are not necessarily connected. They can form "noisy branches" when object components are nearly parallel to image rows. They may be useful for special applications where the scanning direction is approximately perpendicular to the main orientations of object components.

References
[1] C. Arcelli, L. Cordella, S. Levialdi: Parallel thinning of binary pictures. Electron. Lett. 11:148-149, 1975.
[2] C. Arcelli, G. Sanniti di Baja: Skeletons of planar patterns. In: Topological Algorithms for Digital Image Processing (T. Y. Kong, A. Rosenfeld, eds.), North-Holland, 99-143, 1996.
[3] H. Blum: A transformation for extracting new descriptors of shape. In: Models for the Perception of Speech and Visual Form (W. Wathen-Dunn, ed.), MIT Press, Cambridge, Mass., 362-380, 1967.